

Mar 02 2011

Xbee Solutions?

Is the solution to our Xbee problems to not use them at all?

You may recall several posts and rants about our problems with Xbees, http://ecotope.org/ecosynth/blog/?tag=/XBee, and we have been actively looking for a solution.  The picture at left shows a potential new system for transferring data between the field computer and the Hexakopter, both on the ground and in the air.  In this picture are the Mikrokopter MKUSB module, which is used for hard-wired USB data communication between the Hexakopter and the Mikrokopter tool software; a new Sparkfun Xbee Explorer Regulated board; 2 Digikey Xbee Pro 900 modules; and a Sparkfun Xbee Explorer USB module.

The plan is to use the MKUSB module to upload waypoints and control settings to the Hexa prior to flight, then pop on the Xbee wireless configuration for spotty telemetry during flight.

Our problem for months now has been very unpredictable performance from the Xbees for wireless telemetry communication.  The setup was: 2 Xbee Pro 900s, both mounted on Sparkfun Xbee Explorer USB modules.  One Explorer USB was set up for USB communication with the laptop (the module shown at right in the image) and one Explorer USB had a ribbon cable soldered to it for plugging into the Hexa (not shown in the image).  At first this seemed to work OK, but for some very odd reason the setup started failing, to the point where it was impossible to wirelessly upload waypoints to the Hexa from the laptop within just a few feet of the unit.  I do not want to think about the countless hours wasted on this.

This new setup aims to 1) circumvent the need to use Xbees for mission-critical steps (waypoint and configuration upload) and 2) use a different hardware configuration to attempt to re-establish reliable wireless telemetry communication using Xbees during flight.

We will use the MKUSB in the field to transmit waypoints and configuration settings to the Hexa over a wired connection, and then use the Sparkfun / Digikey configuration shown above to get what wireless telemetry data we can during flight (with the assumption that the Xbees will still fail or be spotty).  Sparkfun recommends this configuration over the use of 2 Explorer USB modules, and we are not entirely sure why we ended up with the configuration we had been using.
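
For anyone who wants to sanity-check the radio link itself before blaming the copter, a quick serial loop on the laptop side can at least confirm that telemetry bytes are arriving from the Xbee. This is a rough sketch, not part of our actual toolchain: it assumes the Python pyserial package, that the Explorer USB shows up as /dev/ttyUSB0 (it would be a COM port on Windows), and that the Xbees are set to the same baud rate as the Flight-Ctrl serial port (57600 here); the detail that MikroKopter frames start with '#' and end with a carriage return is my reading of the protocol, so treat it as an assumption.

```python
# Rough link-quality check for the Xbee telemetry radio (assumes pyserial).
# Port name, baud rate, and the '#'-framed, CR-terminated MikroKopter packet
# format are assumptions -- adjust for your own setup.
import time
import serial

PORT = "/dev/ttyUSB0"   # Explorer USB on the laptop; e.g. "COM5" on Windows
BAUD = 57600            # must match the Flight-Ctrl / Xbee configuration

def count_frames(seconds=30):
    """Count carriage-return-terminated frames that begin with '#'."""
    frames, junk = 0, 0
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        end = time.time() + seconds
        while time.time() < end:
            chunk = ser.read_until(b"\r")   # MK frames end in '\r' (assumed)
            if not chunk:
                continue                    # timeout, nothing received
            if chunk.startswith(b"#"):
                frames += 1
            else:
                junk += 1
    return frames, junk

if __name__ == "__main__":
    ok, bad = count_frames()
    print("frames received: %d, non-frame chunks: %d" % (ok, bad))
```

If the frame count drops to zero as soon as the copter moves a few meters away, the problem is in the radio link itself rather than in the waypoint upload step.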

With a leaf-off flight planned for the UMBC Herbert Run site on Saturday, I hope to have some positive results to report next week!

Feb 25 2011

Photoscan is awesome!

Agisoft’s Photoscan software is simply amazing!

The picture at left is an orthorectified photo mosaic over our Knoll research site on the UMBC campus generated by Photoscan automatically using only input photos that I took with the Hexakopter.  For reference, each Hexakopter photo covered less than a 10th of the area observed in this scene. 

An orthophoto is a photo that has been geometrically corrected based on the differences in elevation across the scene so that everything appears ‘flat’, as if the camera had been directly above each point in the photo.
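
For the curious, the correction being done implicitly here can be summarized with the standard relief-displacement relationship from photogrammetry (a textbook approximation, not anything specific to Photoscan): a point at height h above the datum, at radial distance r from the photo's nadir point, is shifted radially by roughly

```latex
% Standard relief-displacement approximation; H is the flying height above the datum.
d \approx \frac{r \, h}{H}
```

So, purely as an illustrative number (not from our flights), a 20 m tall tree near the edge of a frame taken from 80 m up is displaced by about a quarter of its radial distance from the image center; the orthophoto removes exactly this kind of shift.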

Photoscan uses computer vision technology similar to that used by Bundler and Photosynth to automatically recreate the 3D structure of a scene from photos alone.

The professional version of the software also makes it very easy to georeference the scene to a geographic coordinate system, making it possible to easily view in a GIS software … or in Google Earth.
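
Photoscan does the georeferencing and the Google Earth export internally, but if you only had a georeferenced GeoTIFF of an orthophoto, something like the sketch below would produce a comparable KMZ. This is a hedged illustration using GDAL's Python bindings and its KMLSUPEROVERLAY driver, not what Photoscan itself does; the file names are made up.

```python
# Hypothetical alternative to Photoscan's built-in export: package a
# georeferenced orthophoto (GeoTIFF) as a KMZ super-overlay for Google Earth
# using GDAL.  File names are placeholders.
from osgeo import gdal

gdal.UseExceptions()
gdal.Translate(
    "knoll_ortho.kmz",          # output KMZ (hypothetical name)
    "knoll_ortho_wgs84.tif",    # input orthophoto, already in EPSG:4326
    format="KMLSUPEROVERLAY",
    creationOptions=["FORMAT=JPEG"],
)
```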

Here is a link to a Google Earth image file that Photoscan generated from our photo set, enjoy (35MB kmz file)! 

I am working on getting some 3D output to Google Earth next.

Feb 23 2011

Hexakopter Suspended Payload Tests–Results

The results of our Hexakopter payload tests were better than expected!

You can see a few clips from the test flights here: http://www.youtube.com/watch?v=OZlamfvl3VU

Basically, we suspended a 1.5lb weight from a metal cable, dangling about 12 feet below the Hexakopter.  We tested take-off, landing, auto-controlled hover with GPS and altitude lock and waypoint flying.

We found that the onboard computer can fly the unit with the payload better than I can, and that it stays balanced very well despite the pendulum effect from the payload.  The unit appears to damp this effect out after a few seconds, and in periods of calm wind the whole rig appeared virtually motionless.  In an auto-hover test, the unit lasted about 12 minutes before reaching the battery limit; this time would likely be a bit less in a real flight.
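
The few-second damping we saw is roughly consistent with what a simple pendulum model predicts for the swing period. Treating the suspended weight as an ideal pendulum on the roughly 12-foot (about 3.7 m) cable, as a back-of-the-envelope approximation rather than a measurement from the flights:

```latex
% Ideal pendulum period for the ~3.7 m (12 ft) suspension cable; back-of-the-envelope only.
T = 2\pi\sqrt{\frac{L}{g}} \approx 2\pi\sqrt{\frac{3.7\,\mathrm{m}}{9.81\,\mathrm{m/s^2}}} \approx 3.9\,\mathrm{s}
```

So an oscillation that visibly dies out "after a few seconds" means the flight controller is suppressing it within one or two swing periods.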

Overall, a successful experiment and a great day to be out in the field!

Feb 17 2011

Hexakopter Suspended Payload Tests

How well will a Hexakopter work at carrying an instrument payload suspended several meters below?

That is the question we will be trying to answer in the next few days as we get ready for some work for the Forest Service. The goal is to suspend an instrument payload several meters below the Hexakopter on a light-weight metal cable. The payload will weigh about 1.25 lb (0.56 kg) and needs to be far enough away from the Hexa to avoid the effects of downward prop wash. The payload and Hexa will be flying through smoke and we want the instruments to be unaffected by the Hexa itself.

I purchased some 1/16" (~1.6 mm) braided metal cable, some ferrules and some clips from the local hardware store to build the suspension system. I am going to use a 'calibrated' water bottle in place of the instrument payload for weight.

I am going to test:

1) At what distance below the Hexakopter will the effects of prop wash be non-existent / negligible? This will be done in the field by flying a Hexa above a pole with flagging tape on it. This distance will be referred to as X meters.

2) Can the Hexakopter fly in manual and auto mode with a 1.25 lb payload suspended at X meters from a 1/16” metal cable? This will be tested by performing take-off, manual flying, auto-hold, auto-waypoint flying, and landing with the payload attached. Results will suggest total success or a range of flight performance. It is expected that wind will play a significant factor.

3) How long can the Hexakopter fly with the payload attached? This will be tested by first getting the Hexa to altitude with the payload and letting it hover until the battery is at the minimum safe capacity. Then, with a fresh battery installed, it will be tested by flying a simple ‘back and forth’ route over the flight area to simulate increased battery demand.  It is expected that there is great potential for pendulum effects to occur during flight.

Stay tuned for some results!

UPDATE: I forgot, one of the main Hexakopter videos shows Holger doing his insane Hexa flying with a 1 kg soda bottle suspended below.

http://www.youtube.com/watch?v=gvH2f-AewX8&t=8m0s

Feb 11 2011

Rising Popularity of the R Programming Language

According to a recent analysis of the search-hit popularity of the top 100 programming languages, the R statistical computing language has surpassed both MATLAB and SAS.

I first read about this on the Revolutions blog, a blog dedicated to news and content about R, and was happy to see from the survey report charts that the free R software is relatively popular compared to similar languages.  It is worth noting that the differences in popularity are slight, because the survey counts many languages that are more popular than any of R, MATLAB, or SAS: R (#25) had a popularity of 0.561%, MATLAB (#29) 0.483%, and SAS (#30) 0.474%, while Python (#4) has a popularity of about 7%, C (#2) about 15%, and Java (#1) about 18.5%.  The Revolutions blog also makes the important point that the methods used to compute these stats may be a bit controversial, but the stats still serve a purpose.

I first learned R from taking a graduate level statistics course at UMBC, Environmental Statistics 614, and have developed my skills with the programming language to help with data analysis and preparing graphs and figures for papers.  I used R to perform the data analysis and generate the non-map figures for our first paper on Ecosynth and will continue to do so for future publications.

I have only used MATLAB to execute a camera calibration program for my Computational Photography class last semester and I learned a bit of SAS programming for my Multivariate Statistics course last year.  I think both have their uses, but I am really fond of the relatively light-weight size and 'cost' of R.  I am also interested in adding in the scientific and numerical programming functions of Python, SciPy and NumPy.  The SAGE project utilizes SciPy and NumPy to establish a robust free and open-source alternative to for-pay analytical tools like MATLAB, and is also increasing in popularity.  

Free open-source revolution!  This makes me want to put up a post about open-source GIS software...

Feb 09 2011

The sky's ... limited?

The future of model aviation in the US is ... vague.  The AMA (Academy of Model Aeronautics) released a brief statement in its recent member newsletter with a link to their most up-to-date news about the FAA's (Federal Aviation Administration) progress in developing model aircraft regulations.  You can view that statement here.

The word is that an update is expected sometime this summer, with rumors that exemptions from regulation for small, recreational-grade model units are a possibility.  A previous update from the FAA indicated that recreational use is still governed by the "below 400' and away from airports and air traffic" policy outlined in a circular from 1981.  We keep to this specification in our remote sensing missions out of respect for the regulations and, more importantly, because of the need to have very high-resolution, detailed images of the canopy.  Our scientific needs require us to fly at very low heights above ground level, typically between 50 m and 100 m (164' - 328').

It looks like the AMA is trying to make sure that the large body of model fliers and aircraft, like us and the equipment we use, does not get lumped into the same category as users flying large aircraft at high altitudes.

So we will stay tuned for the latest from the AMA and the FAA this summer.

Jan 31 2011

Indoor / Outdoor "GPS" Tracking Camera

"A camera that remembers where you've been...even when you don't." That is the catch phrase for the newest in geo-aware digital cameras on the market, and the ploy / technology behind the advertisement has me very intrigued for the possibilities of computer vision enhanced ecology and remote sensing that it may enable.

While working on the ooo-gobs of other aspects of my current work (in other words, all that dissertation proposal and exam stuff), I wondered about the latest progress in GPS-enabled digital cameras.  Generally, GPS positions tagged to images could be used to improve the computer vision structure from motion process; Bundler does not currently use them, but Noah Snavely suggests in his articles about Bundler that this would be possible.  I was thinking about how useful camera GPS positions would be as I was trying to subset out a set of photos to perform a pilot study of my color-based phenology analysis.

After a brief web search, I came up with this puppy, the new Casio EXILIM EX-H20G shown here (image and source docs at http://exilim.casio.com/products_exh20g.shtml).  At first blush, it looks like a newer version of the EX-FS150 that we bought over the summer, but never used for much.

The kicker about the EX-H20G is the Hybrid GPS system that uses built-in accelerometers to track position when GPS signal is lost, for example when you go into a building ... or under a forest canopy...?  This is a pretty new device and it is still relatively expensive (about $350), but the ability to track and geo-tag position when GPS signal is lost could prove to be very valuable for linking aerial and ground-based 3D point clouds.  Unfortunately, a review of the manual indicates that continuous shooting mode is not available on this model, but it may be worth picking one up to see how it works.

Next then will be to soup up Bundler to use camera GPS positions to initialize that computationally dreadful bundle adjustment stage!
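
As a first step toward that, simply pulling the geotags out of the JPEGs would give the list of approximate camera positions that a GPS-aware bundle adjustment could start from. A minimal sketch, assuming the Python exifread package and a camera that writes standard EXIF GPS tags; Bundler itself does not read any of this, so the sketch only produces the raw position list.

```python
# Pull approximate camera positions (decimal degrees) out of EXIF GPS tags.
# Assumes the 'exifread' package and standard GPSLatitude/GPSLongitude tags;
# Bundler does not consume this directly -- it is just the raw position list.
import glob
import exifread

def _to_degrees(ratios):
    """Convert an EXIF [deg, min, sec] list of rationals to decimal degrees."""
    d, m, s = [r.num / float(r.den) for r in ratios]
    return d + m / 60.0 + s / 3600.0

def camera_positions(folder):
    positions = {}
    for path in sorted(glob.glob(folder + "/*.JPG")):
        with open(path, "rb") as f:
            tags = exifread.process_file(f, details=False)
        if "GPS GPSLatitude" not in tags:
            continue  # no geotag on this frame
        lat = _to_degrees(tags["GPS GPSLatitude"].values)
        lon = _to_degrees(tags["GPS GPSLongitude"].values)
        if str(tags.get("GPS GPSLatitudeRef")) == "S":
            lat = -lat
        if str(tags.get("GPS GPSLongitudeRef")) == "W":
            lon = -lon
        positions[path] = (lat, lon)
    return positions

if __name__ == "__main__":
    for name, (lat, lon) in camera_positions("photos").items():
        print(name, lat, lon)
```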

 

Dec 06 2010

Near-Infrared Structure from Motion?

Some time ago we purchased a calibrated digital camera for the purpose of capturing reflectance of near-infrared (NIR) light from vegetation for our computer vision remote sensing research.  The goal was to make 3D structure from motion point clouds with images recording light in a part of the spectrum that is known to provide very useful information about vegetation.

We purchased a Tetracam ADC Lite for use with our small aerial photography equipment.  This camera has a small image sensor similar to what might be found in the off-the-shelf digital cameras we use for our regular applications, but has a modified light filter that allows it to record light reflected in the near-infrared portion of the electromagnetic spectrum.  Plants absorb red and blue light for photosynthesis and reflect green light, hence why we see most plants as green.  Plants are also highly reflective of near-infrared light: light in that portion of the spectrum just beyond visible red.  This portion of light is reflected by the structure of plant cell walls and this characteristic can be captured using a camera or sensor sensitive to that part of the spectrum.  For example, in the image above the green shrubbery is seen as bright red because the Tetracam is displaying near-infrared reflectance as red color.  Below is a normal looking (red-green-blue) photo of the same scene.

Capturing NIR reflectance can be useful for discriminating between types of vegetation cover or for interpreting vegetation health when combined with values of reflected light in other ‘channels’ (e.g., Red, Green, or Blue).  A goal would be to use NIR imagery in the computer vision workflow to be able to make use of the additional information for scene analysis. 
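
To make the "combined channels" point concrete: the usual combination is NDVI, the normalized difference of NIR and red reflectance. Below is a rough sketch of computing it from one of the false-color exports; it assumes (and this is worth double-checking against the Tetracam documentation) that the exported false-color image maps NIR to the red channel and visible red to the green channel, and it ignores radiometric calibration entirely.

```python
# Back-of-the-envelope NDVI from a Tetracam-style false-color export.
# Assumes NIR ended up in the image's red channel and visible red in the
# green channel (check against the camera docs), and skips any calibration.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("tetracam_falsecolor.png"), dtype=np.float64)
nir = img[:, :, 0]   # assumed: false-color red channel = NIR
red = img[:, :, 1]   # assumed: false-color green channel = visible red

ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids divide-by-zero
print("NDVI range: %.2f to %.2f" % (ndvi.min(), ndvi.max()))
```

Healthy vegetation should push NDVI well above zero, while pavement and water sit near or below it, which is exactly the kind of discrimination described above.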

We have just started to play around with this camera, but unfortunately all the leaves are gone off of the main trees in our study areas.  The new researcher to our team, Chris Leeney, took these photos recently as he was experimenting on how best to use the camera for our applications.

It was necessary to import the images in DCM format into the included proprietary software to be able to see the ‘false-color’ image seen above.  I also ran a small set of images through Photosynth, with terrible results and few identified features, link here.  I wonder if the poor reconstruction quality is caused by the grayscale transformation applied prior to SIFT?  It is likely impossible to say what is being done within Photosynth, but I ran some initial two-image tests on my laptop with more promising results.

I am running OpenCV on my Mac and am working with an open-source OpenCV implementation of the SIFT algorithm written in C by Rob Hess, which I blogged about previously (27 October 2010, “Identifying SIFT features in the forest”).  Interestingly, Mr. Hess recently won 2nd place for this implementation in an open source software competition, congratulations!

Initial tests showed about 50 correspondences between two adjacent images.  When I ran the default RGB to grayscale conversion it was not readily apparent that a large amount of detail was lost, and a round of the SIFT feature detector turned up thousands of potential features.  The next thing to do will be to get things running in Bundler and perhaps take more photos with the camera.
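
For reference, here is roughly what such a two-image test looks like if you use OpenCV's current Python bindings rather than the C library from the post (cv2.SIFT_create did not exist back then, so this is an approximation of the workflow, not the code I actually ran): convert to grayscale, detect and describe SIFT features in each image, then keep matches that pass Lowe's ratio test. File names are placeholders.

```python
# Approximate two-image SIFT matching test using OpenCV's Python bindings
# (a stand-in for the Rob Hess C library used in the post).
import cv2

img1 = cv2.imread("nir_frame_a.jpg")   # placeholder file names
img2 = cv2.imread("nir_frame_b.jpg")

gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)
print("features detected: %d and %d" % (len(kp1), len(kp2)))

# Lowe's ratio test: keep a match only if it is clearly better than the
# second-best candidate match.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print("correspondences passing the ratio test: %d" % len(good))
```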

Sorry to scoop the story Chris, I was playing with the camera software and got the false-color images out and just had to test it out.  I owe you one!

Nov 08 2010

Don't Forget to Backup!

Quick! Find the corrupt data!  

I made the mistake of not being careful and meticulous with data backups a few months ago, when I came in to the lab to find my primary data drive was toasted and some of my data was gone.  I do not think it was a major loss, but I made sure to get my redundant backups up and running.  I have also been encouraging the practice with friends and loved ones ...  I don't think my parents have a clue what a terabyte is, though.

Out in the field I also back up data, mostly for fear of physically losing the media.  I have been in the habit of making a local copy of images collected during the day on the laptop and typically have enough SD card space to keep the originals on the cards.  This morning when I came in to dump the data from the weekend I kept getting an error that I couldn't transfer data from the card to my hard drive.  After some investigation I discovered what is seen in the image in one of the main image folders on a Sandisk Extreme 30MB/s 16GB SD card I had used during a flight.  In addition, Windows thought the card had 78GB of data on it!  I get the same results mounting the card in Windows and on my MacBook, and no problems reading other cards on the same Dynex card reader.  I also discovered that about 1000 photos were gone from the set and quickly panicked to find the laptop and confirm that my backup copy was intact.  It was, and I proceeded to transfer the good data over to the main system where we have nightly backups.
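
One habit this scare reinforced: after copying a card, verify the copy rather than just eyeballing the file count. A small hedged sketch of the kind of check I mean (paths and folder layout are placeholders, and this is not part of any scripted workflow we actually run):

```python
# Compare an SD-card dump against its backup copy: file-by-file MD5 hashes
# catch both missing files and silently corrupted ones.  Paths are
# placeholders for illustration.
import hashlib
import os

def md5sums(root):
    """Map each file's path (relative to root) to its MD5 hash."""
    sums = {}
    for folder, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(folder, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            sums[os.path.relpath(path, root)] = h.hexdigest()
    return sums

card = md5sums("/media/SD_CARD")
backup = md5sums("/data/backup/weekend_flight")
missing = sorted(set(card) - set(backup))
corrupt = sorted(k for k in set(card) & set(backup) if card[k] != backup[k])
print("missing from backup:", missing)
print("hash mismatches:", corrupt)
```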

So do we wipe and reuse the card, perhaps also defragmenting, or is it time to find a replacement?  Considering the value of the data, it is probably time for another card.

Of course, the other explanation is that the card did some time traveling over night...

Oct 26 2010

Identifying SIFT features in the forest

So that's what SIFT features look like in a forest!  As part of my final project for my Computational Photography class I am working on exploring the characteristics of SIFT features in the context of vegetation areas.  SIFT (Scale Invariant Feature Transform) is an image processing algorithm that uses a series of resampling and image convolutions (e.g., filters) to identify distinctive 'features' within an image that can be used for a number of image processing procedures, including Structure from Motion and 3D reconstruction with computer vision, as in Bundler.

While we have seen maps of SIFT features on images in the setting of urban scenes or landmarks, we had never viewed or investigated the SIFT features of our own data.  These features form the basis of the 3D point clouds that we use to measure ecosystem characteristics and it is my goal with this class project, and further with my dissertation research to investigate the nature of these features in greater detail.  Are SIFT features single leaves, groups of leaves, branches, or something else?  Questions left to be explored; it is a good thing I am able to exercise my recently discovered interests in programming and computer science for this research!  

What is in the picture?  The pink arrows represent the "locations, scales and orientations of the key features" (SIFT README, Lowe 2005).  The location of a feature (the non-arrow end of the arrow) is defined as a local maximum or minimum of grayscale image intensity found through a series of image convolutions.  The scale of a feature (the size of the arrow here) represents the relative size of that feature in the image.  Clicking on the image to get a full-res version, also here, allows us to get an idea of what a feature might be. The original image without features can be seen here.  We can see that the large black shadow areas in the image, representing gaps in the canopy, typically have one large arrow extending out from somewhere within the dark area.  In this case that entire black area is the feature, with the point indicated by the arrow as the local maximum or minimum grayscale intensity.  I am still working through the mathematical explanation of how that location is determined, but it does not have to be a geometric center.  There are other approaches that might allow me to plot boxes or shapes around the features, which I will explore next.  The orientation of a key feature represents the local intensity gradient.  This is computed as part of the feature descriptor to make the feature invariant to image rotation when being matched to other features.
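
For anyone who wants the math behind that description, the standard formulation is below; it comes straight from Lowe (2004), not from anything specific to our images. Candidate feature locations are extrema of the difference-of-Gaussians function across scales, and each keypoint's orientation comes from the local gradient of the smoothed image:

```latex
% Difference-of-Gaussians scale space and gradient magnitude/orientation, per Lowe (2004).
D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y)

m(x, y) = \sqrt{\big(L(x{+}1, y) - L(x{-}1, y)\big)^2 + \big(L(x, y{+}1) - L(x, y{-}1)\big)^2}

\theta(x, y) = \tan^{-1}\!\frac{L(x, y{+}1) - L(x, y{-}1)}{L(x{+}1, y) - L(x{-}1, y)}
```

Here G is a Gaussian of width sigma, I is the input image, L = G * I is the smoothed image at the keypoint's scale, m is the gradient magnitude, and theta is the orientation assigned to the keypoint.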

From here I will generate a sample set of photos from our study sites that cover different types of ground cover (forest, grass, pavement, water) and will analyze the characteristics of SIFT features based on scene content, feature texture and perhaps illumination.  Lots of programming ahead!

I processed this image using a great open-source SIFT library implemented with OpenCV by a PhD student at Oregon State, Rob Hess, running in Terminal on my MacBook.

Refs:

SIFT README, available online in SIFT download, http://www.cs.ubc.ca/~lowe/keypoints/

Lowe, D. G. (2004). "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision 60(2): 91-110.