

Nov 01 2011

Personal remote sensing goes live: Mapping with Ardupilot

Folks all over are waking up to the fact that remote sensing is now something you really should try at home!  Today DIYDrones published a fine example of homebrew 3D mapping using an RC plane, a regular camera, and computer vision software: hypr3d (one I’d never heard of).  Hello Jonathan!

 

PS: I’d be glad to pay for a 3D print of our best Ecosynth; hypr3D can do it, and so can landprint.com.

Jul 09 2011

Leafsnap: An Electronic Field Guide

Yet another dimension of computer vision...

Leafsnap:

“…is the first in a series of electronic field guides being developed by researchers from Columbia University, the University of Maryland, and the Smithsonian Institution. This free mobile app uses visual recognition software to help identify tree species from photographs of their leaves. Leafsnap contains beautiful high-resolution images of leaves, flowers, fruit, petiole, seeds, and bark. Leafsnap currently includes the trees of New York City and Washington, D.C., and will soon grow to include the trees of the entire continental United States. This website shows the tree species included in Leafsnap, the collections of its users, and the team of research volunteers working to produce it.”

Apr 14 2011

Computer vision beats Kinect?

“Just when you thought Kinect had the body tracking problem all sewn up, another approach promises to be cheaper and implementable using nothing but software and standard video cameras. The good news is that the software is open source, download-able and ready to go.”

http://www.i-programmer.info/news/105-artificial-intelligence/2310-predator-better-than-kinect.html

Apr 10 2011

Visualizing point clouds in your browser

Check out 3DTubeMe.com to see some of the latest in web-based 3D visualization.  I was directed to a post on Slashdot about the website by a professor and am totally thrilled about what this could mean for visualizing our own 3D point cloud data.  Currently you need to log in and add this as an app through Facebook to upload and view, but the website authors say they are going to get rid of this requirement soon.  I uploaded a small set of photos for processing, but was notified that my camera was not in their database and to wait to hear back about the processing of my cloud.  Maybe we could get this WebGL working to visualize our own point clouds?

That’s all for now, back to the grind!

Oct 26 2010

Identifying SIFT features in the forest

So that's what SIFT features look like in a forest!  As part of my final project for my Computational Photography class I am working on exploring the characteristics of SIFT features in the context of vegetation areas.  SIFT (Scale Invariant Feature Transform) is an image processing algorithm that uses a series of resampling and image convolutions (e.g., filters) to identify distinctive 'features' within an image that can be used for a number of image processing procedures, including Structure from Motion and 3D reconstruction with computer vision, as in Bundler.
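
As a concrete illustration of what the detector produces, here is a minimal sketch using OpenCV’s Python bindings (an assumption about tooling: it needs OpenCV 4.4 or later, where SIFT lives in the main module, and it is not the Rob Hess C library mentioned at the end of this post; the filename is a placeholder):

```python
# Detect SIFT keypoints in a canopy photo and draw them with their
# scale and orientation, similar to the arrows in the figure below.
import cv2

img = cv2.imread("forest.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
print(f"Found {len(keypoints)} SIFT features")

# DRAW_RICH_KEYPOINTS draws a circle sized by feature scale plus a line
# showing the feature's orientation.
vis = cv2.drawKeypoints(
    img, keypoints, None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("forest_sift.jpg", vis)
```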

While we have seen maps of SIFT features on images in the setting of urban scenes or landmarks, we had never viewed or investigated the SIFT features of our own data.  These features form the basis of the 3D point clouds that we use to measure ecosystem characteristics, and it is my goal with this class project, and further with my dissertation research, to investigate the nature of these features in greater detail.  Are SIFT features single leaves, groups of leaves, branches, or something else?  These are questions left to be explored; it is a good thing I am able to exercise my recently discovered interests in programming and computer science for this research!

What is in the picture?  The pink arrows represent the "locations, scales and orientations of the key features" (SIFT README, Lowe 2005).  The location of a feature (the tail end of the arrow) is defined as a local maximum or minimum of grayscale image intensity found across a series of image convolutions.  The scale of a feature (the size of the arrow here) represents the relative size of that feature in the image.  Clicking on the image to get a full-res version, also here, allows us to get an idea of what a feature might be.  The original image without features can be seen here.  We can see that the large black shadow areas in the image, representing gaps in the canopy, typically have one large arrow extending out from somewhere within the dark area.  In this case that entire black area is the feature, with the point indicated by the arrow as the local maximum or minimum of grayscale intensity.  I am still working through the mathematical explanation of how that location is determined, but it does not have to be the geometric center.  There are other approaches that might allow me to plot boxes or shapes around the features, which I will explore next.  The orientation of a key feature (the direction the arrow points) represents the dominant direction of the local intensity gradient.  This is computed as part of the feature descriptor to make the feature invariant to image rotation when being matched to other features.
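
In OpenCV these three quantities are exposed directly on each keypoint object; a small sketch, continuing from the snippet above (again an assumption about tooling, not the workflow used for the figure):

```python
# Each OpenCV KeyPoint carries the same location / scale / orientation
# that the pink arrows in the figure encode.
for kp in keypoints[:5]:
    x, y = kp.pt  # location: the extremum found across the convolved scales
    print(f"location=({x:.1f}, {y:.1f})  "
          f"scale={kp.size:.1f} px  "          # relative size of the feature
          f"orientation={kp.angle:.0f} deg")   # dominant gradient direction
```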

From here I will generate a sample set of photos from our study sites that spans different types of ground cover (forest, grass, pavement, water) and will analyze the characteristics of SIFT features based on scene content, feature texture, and perhaps illumination.  Lots of programming ahead!  A rough sketch of that comparison is below.
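
Something along these lines, where the samples/<cover>/ folder layout and the summary statistic are hypothetical placeholders for however the sample set ends up being organized:

```python
# Sketch of the planned comparison: count SIFT features per cover type
# and summarize their scales. The samples/<cover>/ layout is hypothetical.
import glob
import statistics

import cv2

sift = cv2.SIFT_create()
for cover in ("forest", "grass", "pavement", "water"):
    sizes = []
    for path in glob.glob(f"samples/{cover}/*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints = sift.detect(gray, None)
        sizes.extend(kp.size for kp in keypoints)
    if sizes:
        print(f"{cover}: {len(sizes)} features, "
              f"median scale {statistics.median(sizes):.1f} px")
```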

I processed this image in Terminal on my MacBook using a great open-source SIFT library implemented with OpenCV, written by Rob Hess, a PhD student at Oregon State.

Refs:

SIFT README, available online with the SIFT download at http://www.cs.ubc.ca/~lowe/keypoints/

Lowe, D. G. (2004). "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision 60(2): 91-110.

 

Sep 25 2010

Camera Exposure Calibration

Even though we have had great success getting high image overlap with the Canon and Casio high-speed cameras in continuous shooting mode**, we are still having issues with image exposure.  I purchased a Lastolite EzyBalance camera calibration card from Service Photo in Baltimore as a way to systematically deal with these issues.

When in continuous shooting mode, the cameras calculate focus and exposure from the first photo taken when the button is pressed (Canon SD4000 Camera Manual).  This means that all photos in the sequence will have an exposure (under-, over-, or “correct”) based on the lighting conditions of the first photo.  We discovered this when first using these cameras on the Slow Sticks.  When the camera is attached to the underside of the plane frame, it is pointed up at the sky and sun.  When continuous shooting mode is activated, the camera records the light conditions as if it were receiving direct light from the sky/sun.  When the plane is turned right side up, less light enters the lens, so the photos are underexposed: too dark.  If I mount the camera and activate continuous shooting mode with the camera pointed at the ground, it records lower light levels than will be observed at altitude above the canopy, so the photos are overexposed: too light.

I am still learning about how SIFT and computer vision work, and we are just now at the point where we can start to test changes in camera settings, but based on some preliminary research I think it will be important to strive for consistent illumination among images.  SIFT is largely invariant to changes in illumination between images, so it should still be possible to match photos of the same place under slightly varying illumination conditions (Lowe 1999).***  Since the camera settings are consistent between photos, there should not be changes in feature illumination within the same image collection, unless clouds move into the scene during the flight.  However, under- or over-exposed images may yield fewer detected image features.  Many things to be tested for sure, but I want to start with trying to achieve consistent image illumination.
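
That invariance claim seems straightforward to test; here is a minimal sketch using OpenCV’s Python SIFT and Lowe’s ratio test, with placeholder filenames (an assumption about tooling, not something we have run yet):

```python
# Match SIFT features between two photos of the same scene taken at
# different exposures; a high count of good matches suggests the features
# survive the illumination change.
import cv2

sift = cv2.SIFT_create()
img1 = cv2.imread("scene_normal.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_dark.jpg", cv2.IMREAD_GRAYSCALE)

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)

# Ratio test: keep matches whose best distance clearly beats the second best.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches from {len(kp1)} and {len(kp2)} features")
```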

Here are some simple examples of my backyard for illustration.  I used the open-source GIMP image editor to generate the image intensity histograms for each image.  The interpretation of image intensity histograms is somewhat subjective and scene dependent, but the examples below merely serve to illustrate the value of the calibration panel.
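
The same kind of grayscale intensity histogram can also be generated programmatically; a minimal sketch with OpenCV and matplotlib (placeholder filename, and only an approximation of what GIMP displays):

```python
# Compute and plot a 256-bin grayscale intensity histogram for one photo.
import cv2
from matplotlib import pyplot as plt

gray = cv2.imread("backyard.jpg", cv2.IMREAD_GRAYSCALE)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])

plt.plot(hist)
plt.xlabel("Pixel intensity (0 = black, 255 = white)")
plt.ylabel("Pixel count")
plt.show()
```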

The image on the left is over-exposed and the image on the right is under-exposed.  Prior to setting the camera into continuous shooting mode I pointed the lens down at the shadow of my body at my feet and then out across my lawn, resulting in the over-exposed image at left.  For the right image I started with the camera pointed up at the sun and then down to my lawn, resulting in under-exposure.  The left histogram comes from an image so washed out that almost all values are piled at the far right end of the chart and are hard to see.  The right histogram has values clumped at the left side, representing darker values throughout the whole scene.  In either image it is difficult to make out features, for example the grass in shadow at the right side of the right image, and of course nothing is visible in the left image.
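
Those clumped histograms suggest a crude automated check; here is a sketch that flags images with a large fraction of clipped pixels (the thresholds and filenames are arbitrary illustration values, not calibrated ones):

```python
# Flag images whose intensity histograms clump at either end of the range.
import cv2
import numpy as np

def exposure_flag(path, clip_fraction=0.05):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    n = gray.size
    dark = np.count_nonzero(gray < 10) / n     # fraction of near-black pixels
    bright = np.count_nonzero(gray > 245) / n  # fraction of near-white pixels
    if bright > clip_fraction:
        return "over-exposed?"
    if dark > clip_fraction:
        return "under-exposed?"
    return "looks ok"

print(exposure_flag("backyard_over.jpg"))   # placeholder filenames
print(exposure_flag("backyard_under.jpg"))
```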

By photographing a fully illuminated grey calibration panel first, I get a resulting image with much more natural-looking and distributed color intensity, as can be seen in the image at right.  This more spread-out histogram is interpreted as having more tonal variation.  While we still have lots to test about camera settings, the goal is that by using this cal panel prior to flight we will be able to achieve consistent photo illumination and exposure.  There are other panels with black, grey, and white patches that can be used to deliberately cause images to be under- or over-exposed, e.g., the Lastolite XpoBalance, used by some to calibrate digital photos for portraits and also for calibrating the intensity of LiDAR beams (Vain et al. 2009).

OK, that’s enough.  It is too beautiful out to drag this post along any more!

 

** I just discovered a ‘low-light’ 2.5 MP resolution setting that makes it possible to achieve 5 photos per second with the Canon SD4000, wow!  This has the effect of increasing the camera ISO, which may result in grainy photos under high illumination, and it is not possible to change that resolution setting.

*** Thanks to my Computational Photography course that I am taking this semester, my review of the Lowe SIFT paper for this post finally made sense! 

Refs:

Lowe, D. G. 1999. Object recognition from local scale-invariant features. In International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157.

Vain A., Kaasalainen S., Pyysalo U., Krooks A., Litkey P. Use of Naturally Available Reference Targets to Calibrate Airborne Laser Scanning Intensity Data. Sensors. 2009; 9(4):2780-2796.