So, on a small, unrelated matter, and pretty much ‘out of the blue’, I emailed the Finds Liaison Officer – Wales (Mark) from the Portable Antiquities Scheme (PAS). As you do. In his reply he mentioned the dagger I’ve pictured above and asked about some new ‘fancy-pantsy’ recording technique he’d heard I’d been experimenting with. In the same breath (well, it would have been if it wasn’t text) he reminded me that budgets in the heritage sector are under stress at the moment.
Now, drawing this flint (and doing it justice) would take many hours using traditional techniques. As usual with objects brought in by conscientious and archaeologically-minded members of the public (as this was) there was a time limit on recording it before its return to the owner. So making a visual record had to be reasonably speedy, inexpensive and capture as much information as possible.
Enter Reflectance Transformation Imaging (RTI). I’m not going to describe it in depth here; all the software and documentation I use are free from Cultural Heritage Imaging, a nonprofit corporation whose website describes how it all works with great clarity here. This is how I described the technique to Mark though: “What you end up with is a custom file you can view (and zoom in on) with the (free!) and easy-to-use software. You can change the lighting (move it about) however you like using a little track-ball.” I also sent him some screenshots like the ones in this short gallery:
The flint shown here incidentally, is a modern piece made by the inimitable flint-knapper John Lord.
So what does it involve?
Photography. And shiny black spheres which the software uses to calculate where the light is coming from. Then there’s the software to build the RTI and view the RTI.
The object and the camera must stay perfectly still, so a good tripod and solid surface are important. A camera stand would be best for small objects but isn’t very portable. You can then use a handheld, continuous light or a flash synchronised with your camera shutter to provide the light source. The light is the only thing which moves during the sequence of photographs you need to take.
Where does the flash go?
Imagine a hemisphere over your target object, now imagine that it has ribs like those on an umbrella (I imagine 12 ribs corresponding with the numbers on a clock-face). Now, mentally mark out on those ribs 4 or 5 points (depending upon how many photos you think you may need to capture enough detail) spaced evenly between 15° and 65° from the horizontal. Those points are where you move the flash to for each photo. This means 48 or 60 photos depending upon whether you choose 4 or 5 points for each ‘rib’. It’s best to do all this in very low light… it doesn’t have to be darkroom conditions (if anyone remembers what a darkroom is these days) but as dark as you can get it.
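If imaginary umbrellas aren’t your thing, the same geometry can be written down in a few lines. This is just a sketch of my 12-rib, 4-stop arrangement (the numbers are mine, not an official CHI prescription), with the object at the origin and the string length as the radius:

```python
import math

# 12 "ribs" like a clock face, with 4 evenly spaced flash stops on each rib
# between 15 and 65 degrees above the horizontal.
RIBS = 12
STOPS = 4
RADIUS = 1.0  # the length of the string, in whatever unit you like

positions = []
for rib in range(RIBS):
    azimuth = math.radians(rib * 360 / RIBS)  # around the clock face
    for stop in range(STOPS):
        elevation = math.radians(15 + stop * (65 - 15) / (STOPS - 1))
        # Convert to x, y, z with the object at the origin
        x = RADIUS * math.cos(elevation) * math.cos(azimuth)
        y = RADIUS * math.cos(elevation) * math.sin(azimuth)
        z = RADIUS * math.sin(elevation)
        positions.append((x, y, z))

print(len(positions))  # 48 photos for 4 stops per rib
```

Every position sits on the same hemisphere, which is exactly what the piece of string enforces in practice.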
It’s best to just see what this looks like. Here’s my camera set-up in the basement of Amgueddfa Cymru – National Museum Wales:
See the string? In order to maintain the correct distance from the target object (remember our umbrella/hemisphere?) the best technology is a piece of string of the right length. The bits of sticky stuff on the corners of my little card guide and under the foot of my tripod are there to help me not ruin the sequence if I accidentally nudge either of them. All the photos must be perfectly aligned. There are two spheres so that if I move one the other will still be usable. They’re sitting on little metal washers.
The software processes the highlight positions for both spheres – you can pick the better of the two. The spheres should be at least 250 pixels across in the frame and generally ‘suit’ the size of the object. I have marbles, cupboard knobs, ball-bearings sprayed with gloss black enamel, glass-headed pins (for really small things), garden ornaments, and an eye out for anything similar whilst wandering around the shops.
This is what the spheres capture:
The image below shows every flash position (reflected in one of the spheres) along the ribs of our imaginary umbrella for the sequence of photographs I took. It’s wonky because I have to work around the tripod (very carefully). The most important things are to maintain the same light intensity and the same distance from the object. In truth, it would be better if the flashes were slightly staggered in order to fill in those blank places between the ‘ribs’. However it’s too easy to lose track of what you’re doing if you’re imagining umbrellas with zig-zag ribs…
The photographic sequence:
If you’re careful and pay attention to the instructions from CHI then you’ll end up with a usable sequence of photographs. As you can imagine, there are many parameters to bear in mind and much more than I’ve mentioned so far. Here’s the full sequence of images I took for one side of the dagger:
Hang on a minute though – that’s enough to make a little video of a light moving over an object, and you could then use the timeline to pretend you’re moving the light around, couldn’t you? Well, you probably could, but this sequence is just a set of photos showing the light from the positions we chose. There’s little information for all the points in between, and any video would probably end up looking fairly chunky and unlovely – pretty useless for study purposes. There’s really no comparison with the RTI itself.
So these images go into the ‘RTI Builder’ software which can be downloaded from here. Remember, it’s a nonprofit organisation, but they’re happy to accept donations in support of the amazing work they do.
In the software, you load up your images and weed out any mistakes. You then identify where to find the spheres on one image and the programme detects them in all the photos. It then identifies every highlight from your flashgun, as reflected in the spheres, for each frame. The last part is the important bit: it calculates the normal (i.e. a line perpendicular to the surface at a given point) for every pixel of the RTI image. That means, wherever you point the ‘light’ on the image of your object in the viewing software, it ‘knows’ how the light will behave anywhere on that object.
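The sphere trick itself is lovely and simple. A shiny sphere acts as a mirror, so the position of the flash’s highlight on it tells you where the flash was. Here’s a toy version of that calculation – my own illustration of the general idea, not CHI’s actual code – assuming a camera looking straight down at the sphere:

```python
import math

def light_direction(dx, dy, radius):
    """Recover the light direction from a highlight on a shiny sphere.

    dx, dy: the highlight's pixel offset from the sphere's centre.
    radius: the sphere's radius in pixels.
    Assumes an orthographic camera looking straight down the z axis.
    """
    # Surface normal of the sphere at the highlight point
    nz = math.sqrt(radius**2 - dx**2 - dy**2) / radius
    nx, ny = dx / radius, dy / radius
    # Mirror reflection: L = 2(N.V)N - V, with view direction V = (0, 0, 1)
    ndotv = nz
    lx = 2 * ndotv * nx
    ly = 2 * ndotv * ny
    lz = 2 * ndotv * nz - 1
    return (lx, ly, lz)

# A highlight dead-centre on the sphere means the light was directly overhead:
print(light_direction(0, 0, 100))  # (0.0, 0.0, 1.0)
```

Do that for every frame and you know the light direction for every photo in the sequence, which is what lets the builder work out how each pixel responds to light.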
What do you get at the end of all this?
You have the option of using two different algorithms to make the RTI: Polynomial Texture Mapping (PTM) or Hemispherical Harmonics (HSH). There are several differences between them but the most obvious one is that in the ‘RTI Viewer’ you can render the object in 3 modes for HSH and 10 modes for PTM.
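To give a flavour of what PTM actually stores: each pixel gets six polynomial coefficients, and relighting is just evaluating a biquadratic in the horizontal components of the light direction. This is a single-pixel sketch with made-up coefficients, purely for illustration:

```python
def ptm_luminance(coeffs, lu, lv):
    """Evaluate the PTM biquadratic for one pixel.

    coeffs: the six per-pixel coefficients (a0..a5) stored in the PTM file.
    lu, lv: the x and y components of the light direction.
    """
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

coeffs = (-0.2, -0.2, 0.0, 0.3, 0.1, 0.8)  # hypothetical pixel
print(ptm_luminance(coeffs, 0.0, 0.0))     # light directly overhead: 0.8
print(ptm_luminance(coeffs, 0.5, 0.0))     # raking light from one side
```

HSH does the same job with hemispherical harmonic basis functions instead of a polynomial, which is why the two formats offer different sets of rendering modes in the viewer.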
None the wiser? Here’s a gallery of the options for viewing a PTM. These all have the light set directly above (x and y are at zero). They are stills taken via the RTI viewer software, saved as .png files, tweaked in Photoshop and finally optimised for web.
Here are the PTM options. The caption for each picture refers to the rendering option depicted:
These (below) are the three options for HSH (referred to as RTI in the CHI documentation):
‘Specular Enhancement’ is particularly good when using the HSH algorithm.
The RTI files are quite big (which is why there aren’t any here for you to download). Mine are usually somewhere between 100 and 200 MB. And they can only be viewed with the ‘RTI Viewer’ software, which is pretty intuitive and easy to use. This is just as well, because the ‘RTI Builder’ software has some quirks and is not so intuitive.
So what can you print/publish?
You can take a .jpg or .png snapshot of any view you need and then tweak it in Photoshop (as I’ve done here). I have ‘RTI Viewer’ running on a Windows tablet device quite happily. It’s like having the object and a flashlight on your desk. I’ve seen RTIs projected onto a screen as part of a lecture too. I’d personally like to see some way of viewing one on the web, but it would probably have to be compressed (reducing the quality) which rather defeats the object of the exercise. As an illustrator, I’m pretty sure that I can draw from an RTI image (whereas digital photos aren’t usually good enough on their own). That’s probably the next little project.
You can also make a video of it in action and upload it to YouTube as I have here. But the quality of the video is a poor reflection of the original and the viewer has little control over it. Incidentally, I’m not that interested in making videos of things at the moment, so I used a nifty piece of free software called Fraps to do the video capture.
So back to the original question: How useful is RTI?
Setting up the camera (getting the exposure right, tripod at the correct height, arranging and supporting the target object, etc.) usually takes half an hour at most. Sequence capture takes about half an hour for each set of images (usually less, assuming I don’t bump into the tripod). So that’s roughly one and a half hours for both sides. Some image processing is required (downloading the images and converting RAW files into JPEGs) plus the use of the RTI software itself, but I can be doing something else during much of that time (hooray for batch processing!).
RTI is mostly used to capture fine relief detail on inscriptions, manuscripts and the surfaces of paintings but I have experimented with things which are more sculptural and not particularly flat with some interesting and potentially useful results. I’ll post some more experiments in the future.
I think RTI is the quick option for capturing some kinds of artefact in considerable detail without breaking the bank or using up too much illustration time. It’s another tool in the armoury, with the advantage that (if my hunch is correct) more time-consuming methods like traditional illustration can be considered at a later date if necessary.