We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.

Apr 04 2012

Hexakopter Flying and Testing the GoPro

Stephen and I practiced flying the hexakopters.  We were able to fly Roflkopter (one of the hexakopters) from the lab to the library, over the library and adjacent garage, and land on a 2ft by 2ft board.  In addition to the library expedition, we also practiced maneuvering the hexakopters, landing on a target, and getting them flying at the correct altitude.  Furthermore, we used the GoPro camera, mounted on the hexakopter, to capture video and pictures of the flights.  Unfortunately, the pictures had a lot of compression artifacts (as can be seen in the picture to the left, taken in the lab).  Next week we will test whether adjusting the settings yields better images.

Below is a link to a video from the GoPro as we flew through Academic Row.  The first half of the video shows the distortion and the second half is the cleaned-up version.




Nov 11 2011

GoPro HERO 2 in hand, now I just need time!

OK, so now I have a new GoPro HERO2 camera shooting 11MP stills at 2fps; I just need the time to go out and test it at our study sites.

First things first: this camera shoots stills with a relatively wide field of view (FOV), and we don't know what that is going to do to the structure from motion computation.  The camera shoots at its full 170º FOV in 11MP, and at full or medium (127º) FOV at 8MP and 5MP.  Narrow (90º) FOV options, the most similar to (although still wider than) the other cameras used in our research, are only available in video mode.
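To get a feel for what those FOV options mean on the ground, here is a quick pinhole-model sketch.  The 40 m altitude is just an illustrative figure, and the rectilinear formula does not really hold for the 170º fisheye projection, so that mode is left out:

```python
import math

def ground_swath(fov_deg, altitude_m):
    """Ground width seen by a nadir-pointing rectilinear camera."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

# Swath widths at an illustrative 40 m flying height; the 170-degree
# fisheye mode is omitted because the pinhole model breaks down there.
for fov in (127, 90):
    print(f"{fov} deg FOV -> {ground_swath(fov, 40):.0f} m swath")
```

The wider FOVs cover far more ground per photo, which is exactly why their effect on the structure from motion step is worth testing.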

Some initial tests with ground subjects on campus have produced somewhat positive results, but I think it is too early to tell for sure.

More to follow, when I can get to it.

Oct 25 2011

A new contender: GoPro launches HD Hero2

This camera looks like it might be a great new camera to try in aerial and ground ecosynth work (note: two 11MP photos per second):

GoPro launches HD Hero2 helmet cam, announces video streaming Wi-Fi pack for winter

List of HD HERO2 Feature Enhancements:
• Professional 11MP Sensor
• 2x Faster Image Processor
• 2X Sharper Glass Lens
• Professional Low Light Performance
• Full 170º, Medium 127º, Narrow 90º FOV in 1080p and 720p Video
• 120 fps WVGA, 60 fps 720p, 48 fps 960p, 30 fps 1080p Video
• Full 170º and Medium 127º FOV Photos
• 10 11MP Photos Per Second Burst
• 1 11MP Photo Every 0.5 Sec Timelapse Mode
• 3.5mm External Stereo Microphone Input
• Simple Language-based User Interface
• Compatible with Wi-Fi BacPac™ and Wi-Fi Remote™
- Long Range Remote Control of up to 50 GoPro Cameras per Wifi Remote
- Wi-Fi Video/Photo Preview, Playback and Control via GoPro App
- Live Streaming Video and Photos to the Web

Aug 03 2011

Pentax WG-1 GPS camera–too slow for scanning

I loved the Pentax WG-1 GPS camera when it first arrived.  It looked cool, had a non-extending lens, and offered the potential for GPS tagging our photos during flight – a feature that could be very time-saving for reconstructions.

But out of the box I quickly noted some major drawbacks.  The first was that the GPS only updates every 15 seconds.  At the Hexakopter's average speed of 5 m/s, that means GPS fixes would be something like 75m apart!  The unit also has a slower continuous shooting mode than the SD4000, about 1 fps.  The biggest drawback by far, though, was the lag, which I can only assume is a memory write lag.

I set up the camera with the maximum image quality settings, in continuous shooting mode, and with the 15 second GPS refresh.  I was using a brand new SanDisk Extreme 16GB memory card, which should provide professional-grade write speeds.  I strapped down the shutter button by lightly taping a plastic nut over the button and wrapping the unit with a velcro strap, just as we do with the SD4000s.  The Pentax WG-1 would take a continuous stream of about 30 photos and then stop: the 'number of images remaining' counter would simply freeze, and after 10-15 seconds the camera would resume shooting continuously, only to pause again after another 30 photos.  In other words, the camera was taking no photos for 10-15 seconds at a time while in continuous shooting mode.  At a flying speed of 5 m/s, that means no pictures would be taken for 50-75 meters of flight!
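The back-of-the-envelope arithmetic behind those gaps is simple enough to sketch, using the speed and pause figures above:

```python
def ground_gap_m(speed_m_s, pause_s):
    """Ground distance covered while the camera (or GPS) records nothing."""
    return speed_m_s * pause_s

# At the Hexakopter's ~5 m/s cruise speed:
print(ground_gap_m(5, 15))  # 75 m between 15 s GPS fixes
print(ground_gap_m(5, 10))  # 50 m unphotographed during a 10 s write pause
```

For a flight line only a few hundred meters long, a 50-75 m hole in coverage is fatal for reconstruction overlap.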

I repeated this test at progressively lower camera settings, all the way down to the lowest possible: maximum compression and 640x480 resolution.  This time the camera took many more photos (~100 or so), but still hit a long lag with no photos.

It was this that finally made us decide to send the Pentax WG-1 back.

Based on my research, this camera has the fastest GPS refresh time of any point-and-shoot style camera, but the continuous shooting 'lag' was a deal breaker.

Jun 28 2011

Automated terrestrial multispectral scanning

3D scanning just keeps getting better (but not cheaper!).

A post from Engadget: Topcon's IP-S2 Lite (~$300K) creates panoramic maps in 3D, spots every bump in the road (video) http://www.engadget.com/2011/06/28/topcons-ip-s2-lite-creates-panoramic-maps-in-3d-spots-every-bu/.

More from Topcon:




In China recently, we had the good fortune to collaborate in using a wonderful new ground-based (terrestrial) LiDAR scanner (TLS) from Riegl: the VZ-400, which fuses LiDAR scans with images acquired from a digital camera (~$140K).  Pictured at left: graduate students of the Chinese Academy of Forestry with us in the field, literally!

Jan 31 2011

Indoor / Outdoor "GPS" Tracking Camera

"A camera that remembers where you've been...even when you don't." That is the catch phrase for the newest in geo-aware digital cameras on the market, and the ploy / technology behind the advertisement has me very intrigued for the possibilities of computer vision enhanced ecology and remote sensing that it may enable.

While working on the ooo-gobs of other aspects of my current work (in other words, all that dissertation proposal and exam stuff), I wondered about the latest progress in GPS-enabled digital cameras.  Generally, GPS positions tagged to images could be used to improve the computer vision structure from motion process.  Bundler is not enabled for this, but Noah Snavely suggests in his articles about Bundler that it would be possible.  I was thinking about how useful camera GPS positions would be as I was trying to subset a set of photos for a pilot study of my phenology color-based analysis.

After a brief web search, I came up with this puppy, the new Casio EXILIM EX-H20G shown here (image and source docs at http://exilim.casio.com/products_exh20g.shtml).  At first blush, it looks like a newer version of the EX-FS150 that we bought over the summer, but never used for much.

The kicker about the EX-H20G is its Hybrid GPS system, which uses built-in accelerometers to track position when the GPS signal is lost, for example when you go in a building ... or under a forest canopy...?  This is a pretty new device and it is still relatively expensive (about $350), but the ability to track and geo-tag position when the GPS signal is lost could prove very valuable for linking aerial and ground-based 3D point clouds.  Unfortunately, a review of the manual indicates that continuous shooting mode is not available on this model, but it may be worth picking one up to see how it works.

The next step, then, will be to soup up Bundler to use camera GPS positions to initialize that computationally dreadful bundle adjustment stage!
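If I get there, the first step would just be turning each photo's GPS fix into local metric coordinates that could seed the camera positions.  A minimal sketch, where the equirectangular approximation and the function name are my own, not anything Bundler provides:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def geo_to_local(lat, lon, alt, origin):
    """Convert a (lat, lon, alt) GPS fix to meters east/north/up of an
    origin fix; an equirectangular approximation, plenty accurate over
    the few hundred meters of a Hexakopter flight."""
    lat0, lon0, alt0 = origin
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    return east, north, alt - alt0

# A fix roughly 111 m north of and 10 m above the first photo's fix:
print(geo_to_local(39.251, -76.711, 10.0, (39.250, -76.711, 0.0)))
```

Seeding bundle adjustment with positions like these should shrink the search space considerably compared with starting from scratch.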


Dec 06 2010

Near-Infrared Structure from Motion?

Some time ago we purchased a calibrated digital camera for the purpose of capturing reflectance of near-infrared (NIR) light from vegetation for our computer vision remote sensing research.  The goal was to make 3D structure from motion point clouds with images recording light in a part of the spectrum that is known to provide very useful information about vegetation.

We purchased a Tetracam ADC Lite for use with our small aerial photography equipment.  This camera has a small image sensor similar to what might be found in the off-the-shelf digital cameras we use for our regular applications, but it has a modified light filter that allows it to record light reflected in the near-infrared portion of the electromagnetic spectrum.  Plants absorb red and blue light for photosynthesis and reflect green light, which is why we see most plants as green.  Plants are also highly reflective of near-infrared light, the part of the spectrum just beyond visible red.  This light is reflected by the structure of plant cell walls, and that characteristic can be captured using a camera or sensor sensitive to this part of the spectrum.  For example, in the image above the green shrubbery appears bright red because the Tetracam displays near-infrared reflectance as red color.  Below is a normal-looking (red-green-blue) photo of the same scene.

Capturing NIR reflectance can be useful for discriminating between types of vegetation cover or for interpreting vegetation health when combined with values of reflected light in other ‘channels’ (e.g., Red, Green, or Blue).  A goal would be to use NIR imagery in the computer vision workflow to be able to make use of the additional information for scene analysis. 
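As a concrete example of combining channels, the classic index computed from NIR and red reflectance is NDVI.  A minimal per-pixel sketch, with sample band values made up purely for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel; healthy
    vegetation pushes values toward +1, bare surfaces sit near 0."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# A leafy pixel (high NIR, low red) vs. bare pavement (similar bands):
print(ndvi(200.0, 40.0))  # canopy: ~0.67
print(ndvi(90.0, 85.0))   # pavement: ~0.03
```

The same ratio applied across a whole Tetracam image would give a quick vegetation map alongside the 3D point cloud.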

We have just started to play around with this camera, but unfortunately all the leaves are gone off of the main trees in our study areas.  The new researcher to our team, Chris Leeney, took these photos recently as he was experimenting on how best to use the camera for our applications.

It was necessary to import the images in DCM format into the included proprietary software to be able to see the ‘false-color’ image seen above.  I also ran a small set of images through Photosynth, with terrible results and few identified features, link here.  I wonder if the poor reconstruction quality is because of the grey scale transformation applied prior to SIFT?  It is likely impossible to say what is being done within Photosynth, but I ran some initial two-image tests on my laptop with more promising results.

I am running OpenCV on my Mac and am working with an open source implementation of the SIFT algorithm written in C by Rob Hess, blogged about previously (27 October 2010, “Identifying SIFT features in the forest”).  Interestingly, Mr. Hess recently won 2nd place for this implementation in an open source software competition. Congratulations!

Initial tests showed about 50 correspondences between two adjacent images.  When I ran the default RGB to gray scale conversion, it was not readily apparent that a large amount of detail was lost, and a round of the SIFT feature detector turned up thousands of potential features.  The next thing to do will be to get things running in Bundler and perhaps take more photos with the camera.
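My hunch about the gray scale step can at least be illustrated: the standard luminance conversion weights the red channel, where the Tetracam's false-color images carry the NIR signal, at only about 30%.  A small sketch with hypothetical pixel values:

```python
def rgb_to_gray(r, g, b):
    """Standard ITU-R BT.601 luminance conversion, the kind of RGB-to-gray
    step typically applied before SIFT feature detection."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# In the false-color images, NIR is mapped to red, so a bright-NIR canopy
# pixel ends up dimmer in gray than an equally bright green pixel would:
print(rgb_to_gray(220, 60, 50))  # NIR-heavy pixel
print(rgb_to_gray(60, 220, 50))  # green-heavy pixel, same total brightness
```

If a pipeline like Photosynth uses a weighting along these lines, a lot of the NIR contrast could be getting squashed before the detector ever sees it.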

Sorry to scoop the story Chris, I was playing with the camera software and got the false-color images out and just had to test it out.  I owe you one!

Nov 08 2010

Don't Forget to Backup!

Quick! Find the corrupt data!  

I made the mistake of not being careful and meticulous with data backups a few months ago, when I came in to the lab to find my primary data drive was toast and some of my data was gone.  I do not think that was a major loss, but I made sure to get my redundant backups up and running.  I have also been encouraging the practice with friends and loved ones ...  I don't think my parents have a clue what a terabyte is, though.

Out in the field I also back up data, mostly for fear of physically losing the media.  I have been in the habit of making a local copy of images collected during the day on the laptop, and I typically have enough SD card space to keep the originals on the cards.  This morning, when I came in to dump the data from the weekend, I kept getting an error that I couldn't transfer data from the card to my hard drive.  After some investigation I discovered what is seen in the image in one of the main image folders on a SanDisk Extreme 30MB/s 16GB SD card I had used during a flight.  In addition, Windows thought the card had 78GB of data on it!  I got the same results mounting the card in Windows and on my Macbook, and had no problems reading other cards on the same Dynex card reader.  I also discovered that about 1000 photos were gone from the set, and quickly panicked to find the laptop and confirm my backup copy was intact.  It was, and I proceeded to transfer the good data over to the main system where we have nightly backups.
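For what it's worth, the kind of check I did by hand, comparing the card against the laptop copy, is easy to script.  A sketch using checksums, where the directory layout and the .JPG pattern are my assumptions:

```python
import hashlib
from pathlib import Path

def checksum(path, chunk_bytes=1 << 20):
    """MD5 a file in 1 MB chunks so multi-GB photo sets don't fill memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_bytes):
            digest.update(block)
    return digest.hexdigest()

def verify_backup(card_dir, backup_dir, pattern="*.JPG"):
    """Yield card files whose backup copy is missing or differs."""
    card_dir, backup_dir = Path(card_dir), Path(backup_dir)
    for src in card_dir.rglob(pattern):
        dst = backup_dir / src.relative_to(card_dir)
        if not dst.exists() or checksum(src) != checksum(dst):
            yield src
```

Run right after the field copy is made, this would catch a silently corrupting card before a thousand photos go missing.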

So do we wipe and reuse the card, perhaps also defragmenting, or is it time to find a replacement?  Considering the value of the data, it is probably time for another card.

Of course, the other explanation is that the card did some time traveling over night...

Oct 29 2010

Testing the Scanning Camera

Flashback to a couple of weeks ago: this shot was taken with the scanning camera while the Hexakopter believed it had flown its intended route and was waiting for someone to tell it to land.  While the flight itself was a disaster, we did gain some valuable information from the photos it took.

As you can see from the picture, we did get some good quality photos, even though it’s mostly parking lots… What was most concerning, however, was that the pictures taken were not consistent in quality.  You can clearly see the difference between the two pictures taken less than three minutes apart.


What I wanted to know was why the second shot was blurred: at the time of this shot, the Hexakopter had been hovering in the same spot for about 5 minutes with little movement.  At first I thought that vibration caused by the motors, or the gyro reaction to the swinging mass, had affected the shots.

So in the lab, with Jonathan’s help, we came up with a way to test my theories.  We suspended a Hexakopter from a pipe and mounted the scanning camera on it, in the same fashion as the previous flight.  We started by turning on the scanning camera with the Hexakopter powered off; in other words, this was our control test.  We quickly found that the camera’s sweeping motion did not cause the Hexakopter to move at all, putting some doubt on one of my theories.  After close examination of the setup in action, I noticed a small vibration in the camera mount whenever the camera swept to the front or back.  Noting this, we continued the test with the Hexakopter powered and the motors on.  With the motors on, the Hexakopter reacted no differently than with the motors off.  We did observe a cool damping effect whenever you pushed the Hexakopter.

After the testing was completed, I took a look at the pictures taken by the scanning camera throughout the entire test.  I found that the small vibrations from the camera moving forward did in fact cause the photos to come out blurred.  In addition, I found that mounting the scanning camera on a more stable base reduced the vibrations, resulting in sharper images.

Much later I found out that the two images above were taken at different shutter speeds.  The above left photo was taken at a shutter speed of 1/320 sec, while the one on the right was taken at 1/100 sec.  Checking the rest of the photos taken by the camera that day, I saw that a lot of the images had varying shutter speeds.  I tried to compare these photos with the ones taken in the lab and found that the two sets were incomparable due to different lighting conditions and consequently much different shutter speeds (the lab photos were taken around 1/60 sec).
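The effect of those shutter speeds on blur is easy to put numbers on: for a given motion rate, blur scales linearly with exposure time.  A sketch, where the ~1000 px/s image motion figure is an assumed, illustrative value rather than something we measured:

```python
def blur_px(exposure_s, motion_px_per_s):
    """Image-space blur accumulated over one exposure."""
    return exposure_s * motion_px_per_s

# Hypothetical vibration sweeping the image ~1000 px/s across the sensor:
for denom in (320, 100, 60):
    print(f"1/{denom} s -> {blur_px(1 / denom, 1000):.1f} px of blur")
```

By this rough model the 1/100 sec exposure accumulates over three times the blur of the 1/320 sec one, consistent with what we saw in the two flight photos.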

In the end, Jonathan found that the scanning camera setup would still sweep back and forth without the camera attached.  Given the scanning camera’s inconsistent shutter speeds, he attached the Canon SD4000 to the setup instead.

Oct 20 2010

SD4000 with v5 Beta of CHDK

The Canon Point-and-shoot cameras for 2010 are amazingly powerful and priced to sell, but with the “simple” addition of CHDK you can get all of the features that are usually only found on high end cameras. The main feature that interests us is CHDK’s ability to incorporate an intervalometer, which allows for continuous shooting at controlled intervals…in theory.

The problem with running any beta version is that there are often many unforeseen errors of the kind that plague open-source firmware in its early betas.  After plenty of research, we were finally able to get CHDK onto a 4GB SD card using CardTricks v1.44 and running with little or no problem.  Once the script for the Ultra Intervalometer was added under the /SCRIPTS/ folder on the SD card, the problems became apparent.  Although the on-screen prompts for the intervalometer are all present, the actual functionality is not there: only one photo at a time can be taken, despite many different settings changes and modifications to the code.

As the beta progresses I hope to see more people working with an intervalometer; perhaps then we can test its functionality against that of the SD4000’s stock “Continuous” setting.  Check back for updates on CHDK with this project!!



CardTricks v 1.44 available at: http://drop.io/chdksoft

CHDK for SD4000 beta v5 at: http://drop.io/chdk_ixus300_sd4000

CHDK Ultra Intervalometer at : http://chdk.wikia.com/wiki/UBASIC/Scripts:_Ultra_Intervalometer