We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.


Jan 31 2011

Indoor / Outdoor "GPS" Tracking Camera

"A camera that remembers where you've been...even when you don't." That is the catchphrase for the newest geo-aware digital camera on the market, and the technology (and marketing ploy) behind the advertisement has me very intrigued about the possibilities it may enable for computer-vision-enhanced ecology and remote sensing.

While working on the ooo-gobs of other aspects of my current work (in other words, all that dissertation proposal and exam stuff), I wondered about the latest progress in GPS-enabled digital cameras.  Generally, GPS positions tagged to images could be used to improve the computer-vision structure-from-motion process.  Bundler does not currently use this information, but Noah Snavely suggests in his articles about Bundler that it would be possible.  I was thinking about how useful camera GPS positions would be as I was trying to subset a set of photos for a pilot study of my color-based phenology analysis.
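As an aside on how those tagged positions are actually stored: geo-aware cameras typically write latitude and longitude into the EXIF header as degree/minute/second values plus an N/S or E/W reference letter. A minimal sketch of the conversion to the signed decimal degrees a structure-from-motion pipeline would want (the function name `dms_to_decimal` is my own, not from any camera SDK):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus an N/S/E/W
    reference letter into signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative
    return -value if ref in ("S", "W") else value

# 39 deg 15' 18.0" N -> 39.255 decimal degrees
print(dms_to_decimal(39, 15, 18.0, "N"))
```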

After a brief web search, I came up with this puppy, the new Casio EXILIM EX-H20G, shown here (image and source docs at http://exilim.casio.com/products_exh20g.shtml).  At first blush, it looks like a newer version of the EX-FS150 that we bought over the summer but never used for much.

The kicker about the EX-H20G is its Hybrid GPS system, which uses built-in accelerometers to track position when the GPS signal is lost, for example when you go inside a building ... or under a forest canopy...?  This is a pretty new device and it is still relatively expensive (about $350), but the ability to track and geo-tag position when the GPS signal is lost could prove very valuable for linking aerial and ground-based 3D point clouds.  Unfortunately, a review of the manual indicates that continuous shooting mode is not available on this model, but it may be worth picking one up to see how it works.
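The dead-reckoning idea behind a hybrid GPS is simple in principle: when the satellite fix drops out, integrate the accelerometer readings twice to keep a running position estimate. A toy one-dimensional sketch (my own illustration of the concept, not Casio's actual algorithm, which is proprietary):

```python
def dead_reckon(position, velocity, accel_samples, dt):
    """Naively integrate accelerometer samples twice to propagate a
    position estimate while the GPS fix is unavailable. One-dimensional
    for clarity; a real device works in 3D and must also fight sensor
    drift, which is why hybrid systems re-anchor on every GPS fix."""
    x, v = position, velocity
    for a in accel_samples:
        v += a * dt  # velocity from acceleration
        x += v * dt  # position from velocity
    return x, v

# From rest, 1 m/s^2 for one second sampled at 10 Hz
print(dead_reckon(0.0, 0.0, [1.0] * 10, 0.1))
```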

The next step, then, will be to soup up Bundler to use camera GPS positions to initialize that computationally dreadful bundle adjustment stage!
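For concreteness, one way to seed a bundle adjustment with camera GPS fixes is to convert each latitude/longitude/altitude into meters in a local East-North-Up frame around a reference photo, and hand those in as initial camera centers. A flat-earth sketch (my own helper, not part of Bundler; the approximation is fine over the extent of a single survey):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def geodetic_to_local_enu(lat, lon, alt, lat0, lon0, alt0):
    """East-North-Up offsets in meters of a camera fix relative to a
    reference point (e.g., the first photo). Uses a flat-earth
    approximation adequate over a single photo survey's extent."""
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    up = alt - alt0
    return east, north, up

# Hypothetical pair of fixes a few hundred meters apart
print(geodetic_to_local_enu(39.2556, -76.7110, 45.0, 39.2550, -76.7115, 40.0))
```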

 

Oct 26 2010

Identifying SIFT features in the forest

So that's what SIFT features look like in a forest!  As part of my final project for my Computational Photography class, I am exploring the characteristics of SIFT features in the context of vegetated areas.  SIFT (Scale Invariant Feature Transform) is an image processing algorithm that uses a series of resamplings and image convolutions (i.e., filters) to identify distinctive 'features' within an image that can be used for a number of image processing procedures, including structure from motion and 3D reconstruction with computer vision, as in Bundler.
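To make that "resampling and convolutions" step concrete, here is a stripped-down sketch of the difference-of-Gaussians detection at the heart of SIFT: blur the image at several scales, subtract adjacent blurs, and keep pixels that are strict extrema over their 3x3x3 space-and-scale neighborhood. This is a toy illustration only, with none of the subpixel refinement, edge rejection, or image pyramid of Lowe's full algorithm:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur -- the 'image convolutions' step."""
    k = gaussian_kernel(sigma)
    radius = (len(k) - 1) // 2
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56, 4.1)):
    """Return (x, y, scale_index) of strict extrema of the
    difference-of-Gaussians over a 3x3x3 space-and-scale cube."""
    blurred = [blur(img, s) for s in sigmas]
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    keypoints = []
    for i in range(1, len(dogs) - 1):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                cube = np.stack([d[y - 1:y + 2, x - 1:x + 2]
                                 for d in dogs[i - 1:i + 2]])
                v = dogs[i][y, x]
                # Strict max/min: greater (or less) than all 26 neighbors
                if (v > cube).sum() == 26 or (v < cube).sum() == 26:
                    keypoints.append((x, y, i))
    return keypoints

# Demo on a synthetic bright blob: the detector should fire at its center
yy, xx = np.mgrid[0:21, 0:21]
blob = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / 8.0)
print(dog_extrema(blob))
```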

While we have seen maps of SIFT features on images of urban scenes and landmarks, we had never viewed or investigated the SIFT features of our own data.  These features form the basis of the 3D point clouds that we use to measure ecosystem characteristics, and it is my goal with this class project, and further with my dissertation research, to investigate the nature of these features in greater detail.  Are SIFT features single leaves, groups of leaves, branches, or something else?  These are questions left to be explored; it is a good thing I am able to exercise my recently discovered interests in programming and computer science for this research!

What is in the picture?  The pink arrows represent the "locations, scales and orientations of the key features" (SIFT README; Lowe 2004).  The location of a feature (the non-arrow end of the arrow) is defined as a local maximum or minimum of grayscale image intensity found through a series of image convolutions.  The scale of a feature (the size of the arrow here) represents the relative size of that feature in the image.  Clicking on the image for a full-resolution version, also here, gives an idea of what a feature might be; the original image without features can be seen here.  Notice that the large black shadow areas in the image, representing gaps in the canopy, typically have one large arrow extending out from somewhere within the dark area.  In this case the entire black area is the feature, with the point indicated by the arrow as the local maximum or minimum grayscale intensity.  I am still working through the mathematical explanation of how that location is determined, but it does not have to be the geometric center.  There are other approaches that might allow me to plot boxes or shapes around the features, which I will explore next.  The orientation of a key feature represents the local intensity gradient; it is computed as part of the feature descriptor to make the feature invariant to image rotation when being matched to other features.
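That orientation assignment can be sketched in a few lines: accumulate a 36-bin histogram of gradient directions in a patch around the keypoint, weighted by gradient magnitude, and take the peak bin. This is a simplification of Lowe's scheme, which also Gaussian-weights the samples and interpolates the peak:

```python
import math

def dominant_orientation(patch):
    """Return a keypoint orientation in degrees: the peak of a 36-bin,
    magnitude-weighted histogram of gradient directions.  `patch` is a
    small 2D list of grayscale intensities centered on the keypoint
    (no Gaussian weighting here, for brevity)."""
    bins = [0.0] * 36
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            dy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            magnitude = math.hypot(dx, dy)
            angle = math.degrees(math.atan2(dy, dx)) % 360.0
            bins[int(angle // 10) % 36] += magnitude
    peak = max(range(36), key=lambda b: bins[b])
    return peak * 10 + 5  # center of the winning 10-degree bin

# A left-to-right intensity ramp: gradient points along +x (0 degrees)
ramp = [[float(x) for x in range(5)] for _ in range(5)]
print(dominant_orientation(ramp))
```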

From here I will generate a sample set of photos from our study sites covering different types of ground cover (forest, grass, pavement, water) and analyze the characteristics of SIFT features based on scene content, feature texture, and perhaps illumination.  Lots of programming ahead!
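Once the features are extracted per photo, the comparison itself is mostly bookkeeping; something like this sketch, which groups hypothetical (cover type, feature scale) records and summarizes each ground-cover class (the record format and names are mine, purely for illustration):

```python
from collections import defaultdict
from statistics import mean

def summarize_scales(features):
    """Group (cover_type, scale) records and report the count and mean
    feature scale per ground-cover class -- the kind of comparison
    planned across forest, grass, pavement, and water scenes."""
    by_cover = defaultdict(list)
    for cover, scale in features:
        by_cover[cover].append(scale)
    return {cover: (len(s), mean(s)) for cover, s in by_cover.items()}

# Hypothetical records: (cover type, SIFT feature scale in pixels)
sample = [("forest", 2.1), ("forest", 8.7), ("grass", 1.4), ("water", 12.0)]
print(summarize_scales(sample))
```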

I processed this image in Terminal on my MacBook using a great open-source SIFT library, implemented with OpenCV by Rob Hess, a PhD student at Oregon State.

Refs:

SIFT README, available online with the SIFT download, http://www.cs.ubc.ca/~lowe/keypoints/

Lowe, D. G. (2004). "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision 60(2): 91-110.