Tuesday 25 October 2011

Photography: Photons & imagination

'1934 Kodak Brownie Hawkeye 2A vintage camera' by Kevin Dooley, under a CC license
Capturing moments in time or instances of human imagination on something solid is nothing new. Paintings, sketches, sculptures, photographs - to name a few - have all been part of more or less the same game. Of those, photography is probably the most recent addition.

Manipulating light through pinholes or lenses has been known since antiquity. Finding a way to 'freeze' light on a piece of film proved to be rather more challenging. The first usable form of photography - as an innovative technology - emerged around the 1820s. The next few decades were certainly exciting, with huge steps towards better equipment and superior consumables. Technological progress and consumer demand went hand in hand for several decades. Even a few years ago, just before the dawn of the digital era of photography, cameras and films were available to practically everyone, in all sorts of flavours and at all sorts of prices.

Regardless of the technological advances, the main idea has remained mostly the same since the early days of photography: collect light from an object/person/scene, direct it onto a photo-sensitive surface and capture the moment! Even with the arrival of CCDs, which eventually made digital cameras possible, the idea has remained unchanged; it is just the film that has been put aside. (Edit: When it comes to photography and the corresponding equipment, people often like retro-looking technology.)

Around that main theme, a number of variations have developed: different kinds of lenses, numerous filters allowing for all sorts of visual effects, software that enables post-processing with practically no limits, etc. People have even looked at how things appear outside the narrow limits of (our) visible light spectrum; infrared and ultraviolet photography are niches that still maintain their audience and are always associated with a certain 'cool factor' (e.g., common things in the IR and a more structured approach in UV/IR photography).

A much less well-known area of photography is 'light field photography'. Putting the science aside for the time being, the idea is somewhat different from classic photography: instead of getting a single projection of rays of light onto a plane (be it a film or a sensor), let's capture more information about the light received by the camera, i.e., not only its intensity and frequency (colour) but also its direction. Having captured an instance in which the received light rays are 'better documented' makes it easier to manipulate that instance after its capture, changing, for instance, the focus point or altering (slightly) the viewpoint.
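To make that a bit more concrete, here is a minimal sketch in Python/numpy - entirely my own toy illustration, with made-up array sizes and names, not the data layout of any real camera. A light field can be stored as a 4D array L[u, v, s, t], where (u, v) indexes the direction (position on the aperture) and (s, t) the spatial position on the sensor; a classic camera simply sums over all directions at each pixel, which is exactly the information that gets thrown away:

import numpy as np

U, V = 9, 9        # angular samples: position on the aperture (direction)
S, T = 256, 256    # spatial samples: pixel position on the sensor

light_field = np.random.rand(U, V, S, T)   # stand-in for captured data

# A classic camera integrates over all directions at each pixel, so the
# directional information is gone the moment the photo is taken:
conventional_photo = light_field.sum(axis=(0, 1)) / (U * V)
print(conventional_photo.shape)   # (256, 256): just one intensity per pixel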

Stanford University has hosted quite a lot of work on light field photography. It is worth visiting their webpages, e.g. http://graphics.stanford.edu/papers/lfcamera/ (there is a nice, illustrative video at the bottom of that page). Ren Ng, one of Stanford's researchers, has started his own company around that technology, Lytro. Lytro has made quite an impact on the photography press lately by launching a camera with the capability to focus after the fact.
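The 'focus after the fact' trick itself can be sketched with the textbook shift-and-sum idea - again a hedged, toy illustration; I am not claiming this is Lytro's actual processing, and the function name and the alpha parameter are my own. Each sub-aperture view is shifted in proportion to its position on the aperture and the shifted views are averaged, which synthetically places the focal plane wherever alpha dictates:

import numpy as np

def refocus(light_field, alpha):
    """Shift each sub-aperture view in proportion to its position on the
    aperture, then average; `alpha` selects the virtual focal plane."""
    U, V, S, T = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - u0)))
            dv = int(round(alpha * (v - v0)))
            # integer-pixel shift for simplicity; a real implementation
            # would interpolate sub-pixel shifts
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy data in the same (U, V, S, T) layout as the previous snippet
lf = np.random.rand(9, 9, 128, 128)
near = refocus(lf, alpha=+1.5)    # virtual focal plane pulled closer
far = refocus(lf, alpha=-1.5)     # virtual focal plane pushed further away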


Promotional video of the Lytro camera

Sample photo from the Lytro website. Click on an area of the photo to refocus.


Now, personally, I find both the science behind it and the application quite exciting, despite the fact that some experts have been rather critical of this particular implementation (e.g., Thom Hogan's blog). And no, I believe Lytro is not the first plenoptic camera to reach the market (e.g., Raytrix GmbH), although it does come in a much more consumer-oriented form.

Myself, I find whatever technological or practical constraints there are bearable. For instance, the resolution on offer is likely to fall quite far short of what current dSLR or prosumer options deliver. Also, merely viewing light field photos requires proprietary software, and so does sharing them. But still, it's the new thing around. It may feel clumsy and strange, but if it stays around long enough it is bound to improve!

However, I admit, it sort of defeats the purpose of taking photos in the first place. Yes, it still allows you to 'capture the moment'. But it takes away the magic of finding the right angle and focusing on the spot that highlights your point of view behind the photo. It is basically the same debate as video vs. photography. (Edit: for those of you who wonder, light field video does exist - e.g., http://pages.cs.wisc.edu/~lizhang/projects/lfstable/ - yet not in a commercial product AFAIK; having a light field video camera allowing for ex-post manipulation of the output with respect to POV or focus would be cooooool, too.)

I think that we are about to see plenty more developments in the world of image capture and processing.

BTW, just before closing this post, I can't resist mentioning that, yesterday, I saw on Slashdot a link to Kevin Karsch's site on 'Rendering synthetic objects into legacy photographs'. I find what they have managed to accomplish pretty amazing. Also a bit scary. Here is a video they have made available:


Rendering Synthetic Objects into Legacy Photographs from Kevin Karsch on Vimeo.


If development keeps up at this pace, I sometimes wonder how much more innovation we can possibly accommodate :-)

(Note: I'm not affiliated with any of the companies mentioned above. This is not a product review - I neither own nor have access to any of the light field cameras mentioned.)