We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.

Dec 06 2010

Near-Infrared Structure from Motion?

Some time ago we purchased a calibrated digital camera to capture near-infrared (NIR) reflectance from vegetation for our computer vision remote sensing research.  The goal was to build 3D structure from motion point clouds from images recording light in a part of the spectrum known to carry very useful information about vegetation.

We purchased a Tetracam ADC Lite for use with our small aerial photography equipment.  This camera has a small image sensor similar to those found in the off-the-shelf digital cameras we use for our regular applications, but a modified light filter allows it to record light reflected in the near-infrared portion of the electromagnetic spectrum.  Plants absorb red and blue light for photosynthesis and reflect green light, which is why we see most plants as green.  Plants are also highly reflective of near-infrared light, the portion of the spectrum just beyond visible red.  This light is reflected by the structure of plant cell walls, a characteristic that can be captured with a camera or sensor sensitive to that part of the spectrum.  For example, in the image above the green shrubbery appears bright red because the Tetracam displays near-infrared reflectance as red.  Below is a normal-looking (red-green-blue) photo of the same scene.
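The false-color rendering described above follows the usual color-infrared (CIR) display convention, where each captured band is shifted "down" one display channel. This is our reading of the output, not Tetracam's documented internals, but a minimal sketch of the mapping looks like:

```python
# Sketch (an assumption, not Tetracam's documented pipeline) of the standard
# color-infrared (CIR) display convention: NIR renders as red, red as green,
# and green as blue.
def cir_display(nir, red, green):
    """Return the (r, g, b) display triple for one false-color pixel."""
    return (nir, red, green)

# Healthy foliage: strong NIR, weak red -> renders bright red on screen.
shrub = cir_display(nir=230, red=35, green=80)
print(shrub)  # (230, 35, 80): the red display channel dominates
```

This is why vegetation that looks green to the eye shows up bright red in the Tetracam image: its strong NIR reflectance is driving the red display channel.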

Capturing NIR reflectance can be useful for discriminating between types of vegetation cover, or for interpreting vegetation health when combined with reflectance values in other ‘channels’ (e.g., red, green, or blue).  One goal is to bring NIR imagery into the computer vision workflow so this additional information can be used for scene analysis. 
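The classic example of combining NIR with another channel is the Normalized Difference Vegetation Index (NDVI), which contrasts NIR (strongly reflected by healthy leaves) with red (strongly absorbed for photosynthesis). A minimal sketch, with made-up reflectance counts:

```python
# NDVI: the classic NIR + red combination for assessing vegetation cover.
# Values approach +1 for healthy vegetation and hover near 0 for bare soil.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)

# Hypothetical pixel values for illustration:
print(ndvi(220, 40))   # leafy pixel: strong NIR, weak red -> close to +1
print(ndvi(60, 55))    # bare soil: NIR and red similar -> near 0
```

Computing an index like this per pixel is one straightforward way the extra NIR channel could feed into later scene analysis.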

We have just started to play around with this camera, but unfortunately the leaves are already gone from the main trees in our study areas.  The newest researcher on our team, Chris Leeney, took these photos recently while experimenting with how best to use the camera for our applications.

It was necessary to import the images in DCM format into the included proprietary software to see the ‘false-color’ image above.  I also ran a small set of images through Photosynth, with terrible results and few identified features, link here.  I wonder whether the poor reconstruction quality stems from the grayscale transformation applied prior to SIFT?  It is likely impossible to say exactly what Photosynth does internally, but I ran some initial two-image tests on my laptop with more promising results. 
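One way a default grayscale conversion could hurt: the common luma-style RGB-to-gray weights put only about 0.3 on the red channel, which is where NIR sits in the false-color image. We cannot know what Photosynth actually uses, so this is a hypothetical illustration with the standard ITU-R BT.601 weights and made-up pixel values:

```python
# Hypothetical: a standard luma RGB->gray conversion (BT.601 weights;
# Photosynth's actual conversion is unknown) applied to false-color pixels
# where the NIR band is stored in the red slot.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two adjacent canopy pixels: 100 counts apart in NIR, similar red/green.
leaf_a = (220, 40, 90)
leaf_b = (120, 45, 95)

print(luma(*leaf_a) - luma(*leaf_b))  # NIR contrast shrinks to ~26 gray counts
print(leaf_a[0] - leaf_b[0])          # the NIR band itself keeps all 100
```

If that kind of flattening is happening, contrast that SIFT's difference-of-Gaussians detector depends on would be muted, which might explain the sparse features.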

I am running OpenCV on my Mac and am working with an open source OpenCV implementation of the SIFT algorithm written in C by Rob Hess, which I blogged about previously (27 October 2010, “Identifying SIFT features in the forest”).  Interestingly, Mr. Hess recently won 2nd place in an open source software competition for this implementation.  Congratulations!  

Initial tests showed about 50 correspondences between two adjacent images.  When I ran the default RGB to grayscale conversion it was not readily apparent that much detail was lost, and a round of the SIFT feature detector turned up thousands of potential features.  The next step will be to get things running in Bundler and perhaps take more photos with the camera.
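Correspondences like these are typically filtered with Lowe's ratio test: a feature in one image is matched to its nearest descriptor in the other image only if that neighbor is much closer than the second-nearest. A minimal sketch in pure Python, using toy 4-D descriptors in place of real 128-D SIFT vectors (the matcher in Rob Hess's library and in Bundler is more sophisticated, but the filtering idea is the same):

```python
# Lowe's ratio test on nearest-neighbor descriptor matches.
# Toy 4-D descriptors stand in for real 128-D SIFT vectors.
def dist2(a, b):
    """Squared Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ratio_match(desc1, desc2, ratio=0.8):
    """Return (i, j) index pairs passing the ratio test.

    Squared distances are compared against ratio**2, which is
    equivalent to comparing plain distances against ratio.
    """
    matches = []
    for i, d in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist2(d, desc2[j]))
        best, second = ranked[0], ranked[1]
        if dist2(d, desc2[best]) < ratio ** 2 * dist2(d, desc2[second]):
            matches.append((i, best))
    return matches

img1 = [(1, 0, 0, 0), (0, 5, 5, 0)]
img2 = [(1, 0, 0, 1), (0, 5, 4, 0), (9, 9, 9, 9)]
print(ratio_match(img1, img2))  # -> [(0, 0), (1, 1)]
```

Pairs that survive this filter are what get handed to Bundler as putative correspondences for the structure from motion reconstruction.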

Sorry to scoop the story, Chris: I was playing with the camera software, got the false-color images out, and just had to test it.  I owe you one!