We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.


May 25 2011

Transect Grid Image Capture Pattern

We tried several methods of gathering ground data, but none of them produced enough matching detail for the software to project camera positions correctly.  Borrowing a page from the flight capture pattern, I decided that what we needed was a large number of intersections between tracks, so that error-reducing loop closures would be maximized, rather than rendering an area as one big track in which error accumulates.  A grid system accomplishes this simply and effectively.

The acquisition has several steps:

1) Secure the corners of the area by marking out four points representing a 25 meter square, or quadrat, and collecting geospatial reference information on these corners.  Mark each corner twice: first with a highly visible marker, and second with a barely visible marker durable enough to survive people attempting to destroy the site.

2) Interpolate points between these corners via the method of your choice to construct 25 individual 5x5m grid cells.  Mark points on the exterior with something highly visible, which will show up in a synth.  For interior points, use an unobtrusive flag that is at least visible to the user.

3) Optionally, collect forest inventory information based on these 5x5m grid cells.

4) Pick a 'Home Point' and orient yourself pointing towards the center of the quadrat.  If the quadrat is part of a bigger area, pick a home point based on some consistent factor, like "the northernmost corner point".

5) Photography Stage 1. Based on that orientation, walk to your right between two columns of markers until you reach the end of the quadrat.  Come back between the next pair of columns, then walk back out in a switchback pattern until you reach the opposite end of the quadrat from the home point.

6) Photography Stage 2. Turn around and walk the exact same path, pointed in the opposite direction, until you reach the home point.

7) Photography Stage 3. This time, walk to your left between two rows of markers until you reach the end of the quadrat.  Come back between the next pair of rows, then walk back out in a switchback pattern until you reach the opposite end of the quadrat from the home point.

8) Photography Stage 4. Turn around and walk the exact same path, pointed in the opposite direction, until you reach the home point.

9) Run the images through Photoscan or other software with orthophoto-oriented optimizations turned off.

10) Verify that the software has put cameras in the right place in the point cloud, and georeference the point clouds.
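The four photography stages above form a boustrophedon (switchback) lane pattern, walked twice per axis in opposite facings. A minimal sketch of the resulting waypoint sequence, assuming a 25 m quadrat with 5 m cells and lanes down the middle of each pair of marker columns (the function and variable names are ours, not part of any field protocol):

```python
# Sketch of the four-stage switchback capture path over a 25 m quadrat
# with 5 m cells.  Waypoints are lane endpoints in local metres,
# with the home point at the origin.

def switchback(lane_coords, length, along_rows=False):
    """Return lane endpoints in boustrophedon (switchback) order."""
    path = []
    for i, c in enumerate(lane_coords):
        ends = [(0.0, c), (length, c)] if along_rows else [(c, 0.0), (c, length)]
        if i % 2 == 1:            # reverse every other lane to switch back
            ends.reverse()
        path.extend(ends)
    return path

SIDE, CELL = 25.0, 5.0
lanes = [CELL / 2 + i * CELL for i in range(int(SIDE / CELL))]   # 2.5 .. 22.5

stage1 = switchback(lanes, SIDE)                   # columns, facing one way
stage2 = list(reversed(stage1))                    # same path, opposite facing
stage3 = switchback(lanes, SIDE, along_rows=True)  # rows
stage4 = list(reversed(stage3))                    # rows, opposite facing

full_path = stage1 + stage2 + stage3 + stage4
print(len(full_path))   # 40 waypoints: 10 per stage, 4 stages
```

Every column lane crosses every row lane, which is exactly where the loop closures come from: 5 x 5 lanes yields 25 track intersections per facing direction.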


This pattern was tested in the GES485 field methods class (post coming).

May 25 2011

Backpack Camera Mount

We have very successfully incorporated the Clik Elite Bodylink backpack camera bag/mount, designed for standing telephoto pictures, into our terrestrial Ecosynth workflow.  The pack mounts to the front, and has an adjustable-angle clamp attached to an extendable bar with the correct gauge bolt for holding a camera.  It removes much of the physical stress, and the risk of dropping the camera, during protracted ground photo capture sessions, and potentially reduces camera blur.

May 25 2011

Flash card testing

After ordering yet another grade of SDHC card, I decided to compare the three models we have on hand in quantity, using the SD4000 cameras.

Official SD grading specifies a minimum worst-case write speed, from Class 2 to Class 10: Class 6 guarantees 6MB/s, Class 10 guarantees 10MB/s, et cetera. Faster speeds are not graded under that system. Sandisk advertises its own grades of 15MB/s, 20MB/s, 30MB/s, etc. for the Sandisk Extreme line under non-industry-standard testing conditions. Recently, another official SD consortium grade, UHS-1, has been introduced for 45MB/s under controlled testing conditions.

Tests were performed by leaving the cameras with the shutter depressed until they ran out of battery. Lighting (and scene complexity) appeared to affect the results - the highly compressible files that were written when the lights were turned off 20 minutes into the tests came out at a steady 158 shots per minute. 10 minutes of lights-on shots were averaged to get these figures:

  • Transcend Class 6 16GB $26.75  - 110 shots per minute
  • Transcend Class 10 16GB $27.11 - 116 shots per minute
  • Sandisk Extreme 30MB/s 16GB $53.45  - 156 shots per minute
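
The shot rates can be turned into a rough sustained-throughput figure. A quick sketch, assuming roughly 4 MB per SD4000 JPEG (our assumption; file size was not measured in the test):

```python
# Implied sustained write throughput from the measured shot rates,
# assuming ~4 MB per SD4000 JPEG (an assumption, not a measurement).
FILE_MB = 4.0
shots_per_min = {
    "Transcend Class 6": 110,
    "Transcend Class 10": 116,
    "Sandisk Extreme 30MB/s": 156,
}
mb_per_s = {card: spm * FILE_MB / 60.0 for card, spm in shots_per_min.items()}
for card, rate in mb_per_s.items():
    print(f"{card}: ~{rate:.1f} MB/s sustained")
```

Under that assumption even the fastest card only sustains about 10 MB/s in-camera, which suggests the camera's write pipeline, not the card's rated ceiling, may be the bottleneck at the top end.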

May 23 2011

Geometry Matching for Coordinate Transform Computation Looks Promising

Our coordinate transform algorithm has given us encouraging output. By assuming that the camera and GPS data points follow the same general geometry, we interpolated and picked 100 points from both the camera track and the GPS track that should theoretically lie in the same geometric locations. Using these 100 points, we applied least squares with the Helmert coordinate transform to find the 7 unknown rotation, translation, and scaling parameters. We then used those parameters and the Helmert equations to transform our 100 camera points to match those of the GPS. Our data definitely appears similar in geometry, though the camera points are a bit off from the GPS: the average distance error is about 5.3 meters. This could potentially be corrected by picking a larger number of points from the splined data.
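For reference, the 7-parameter Helmert solve can be set up as an ordinary linear least squares problem when the rotation angles are small, which they are here since both tracks are already in roughly the same orientation. A minimal sketch (small-angle linearization; this is the standard textbook formulation, not our production code):

```python
import numpy as np

def helmert_lstsq(src, dst):
    """Estimate 7 Helmert parameters (tx, ty, tz, scale, rx, ry, rz)
    by linear least squares, assuming small rotation angles.
    src, dst: (n, 3) arrays of matched points."""
    n = src.shape[0]
    A = np.zeros((3 * n, 7))
    b = (dst - src).ravel()
    for i, (x, y, z) in enumerate(src):
        # unknowns ordered [tx, ty, tz, s, rx, ry, rz]
        A[3 * i]     = [1, 0, 0, x,  0, -z,  y]
        A[3 * i + 1] = [0, 1, 0, y,  z,  0, -x]
        A[3 * i + 2] = [0, 0, 1, z, -y,  x,  0]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

def helmert_apply(params, pts):
    """Apply the small-angle Helmert transform to (n, 3) points."""
    tx, ty, tz, s, rx, ry, rz = params
    S = np.array([[  0,  rz, -ry],
                  [-rz,   0,  rx],
                  [ ry, -rx,   0]])          # skew-symmetric rotation part
    return np.array([tx, ty, tz]) + pts + s * pts + pts @ S.T
```

With 100 matched points this gives 300 equations for 7 unknowns, so the least squares fit also averages down per-point noise in the splined tracks.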

We also used these parameters to transform the point cloud, though oddly enough it came out upside-down! We are also getting errors when the order of the camera list is not synchronized with the GPS. If we can synchronize the first camera time with the first GPS data time, we could reorder the camera list so that it matches that of the GPS.
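The reordering step could be as simple as a nearest-timestamp lookup once the two clocks share an origin. A sketch, assuming both streams carry sorted timestamps in seconds (the helper name is hypothetical):

```python
import bisect

def match_to_gps(cam_times, gps_times):
    """For each camera timestamp, return the index of the nearest GPS fix.
    Both lists must be sorted; clocks are assumed already aligned."""
    indices = []
    for t in cam_times:
        j = bisect.bisect_left(gps_times, t)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(gps_times)]
        indices.append(min(candidates, key=lambda k: abs(gps_times[k] - t)))
    return indices
```

Sorting the camera list by these indices would keep it synchronized with the GPS log even if some frames or fixes are missing.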

Though a bit more tweaking needs to be done to our code, this method of matching the data and GPS points looks very promising.

May 19 2011

End of Semester Summary

Hey everybody!
  As finals week draws to a close, it looks like the forestry interns will have the time to get some much-needed field work done. Although it's coming close to the wire, our departure date, we're confident we can get our classifications and digitizing done within the next week. Here's a little summary of our contribution to the EcoSynth team over the past semester:

  • Digitized the 25x25 field mapping done in previous semesters (February/March)
  • Assisted with several Hexacopter flights (throughout)
  • Surveyed the remaining portion of the Knoll (April-present)

The work that still needs to be accomplished includes completing the Knoll survey by getting species and crown height measurements, and potentially creating a campus tree-identification guide. Additional related field work, completed outside the internship within the environmental mapping class (GES 485), is a complete, digitized 25x75 meter survey subset into 5 meter grid cells; this survey included the positioning of every tree, large detritus items, and a stream profile. All of this collected data can hopefully be used to further calibrate the computer vision system, and can serve as a base for further research.

Extra links: Using backlighting only from an iPhone, Grant Schindler developed this low-budget 3D scanning software (it appears to produce point clouds). While it is nowhere near research-grade equipment, he chose a non-natural lighting source to assist in 3D visualization, and for the iPhone it makes perfect sense to use the backlight adjacent to the front-facing camera.


http://www.newscientist.com/blogs/onepercent/2011/04/scan-your-face-in-3d-with-your.html
Grant's page: http://www.cc.gatech.edu/~phlosoft/

May 13 2011

Camera Stabilization

Denny Rowland, blogging at DIYDrones, is claiming dramatic performance from his image-stabilized arducopter. Keeping an 80x zoom steady is hard enough to do with fast exposures on a specialized motion picture mount; it's so difficult that tripod-mounted binoculars mostly don't even bother with that high a magnification. The telescope equivalent of his measurements, 18 arc seconds of periodic error at the mount (before camera IS), would be something to be proud of in designing your own astrophotography mount.  If you could keep the platform as steady in translation as it apparently is in rotation (unlikely), you could take high-sharpness long-exposure images of anything in the world - Ecosynth aerial scans by moonlight, if necessary.  The translation problem, though, is minimized by flying at higher altitude, where rotation becomes the whole game.
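To put 18 arc seconds in Ecosynth terms, a quick back-of-the-envelope conversion to ground-projected blur at nadir (the 40 m flying height is our assumption, not Rowland's figure):

```python
import math

# Ground-projected blur from angular jitter: blur ~= altitude * tan(theta).
theta = math.radians(18 / 3600)    # 18 arc seconds in radians
altitude = 40.0                    # metres flying height (assumption)
blur_m = altitude * math.tan(theta)
print(f"{blur_m * 1000:.1f} mm")   # about 3.5 mm of blur at nadir
```

A few millimetres of angular blur from 40 m up is far below our per-pixel ground resolution, which is why translation, not rotation, would be the limiting factor at typical scan altitudes.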

His flying camera quad, built for 12kg maximum thrust, proves that any pointing and stabilization needs within the realm of reason are well within our reach if we ever need to address them.

May 13 2011

Structure from Motion for Augmented Reality

I've posted about some of Henri Astre's toolkits, which are a hobby and learning tool for him.  When I mentioned that his professional work involves augmented reality (presumably from a smartphone) using structure from motion, as well as commercial animation, it was noted that this meshes quite closely with one of the distant goals of our CS work: a Tegra SfM solution.  Henri has posted a new blog with his progress on realtime video SfM for pose estimation.  I believe that right now he's planning on several-fps video upload with offsite processing, and on creating a detailed site synth beforehand for later AR use.

May 07 2011

Links of the Moment

  • Krzysztof Bosak, who created the Pteryx UAV, is suggesting a photogrammetric aerial robotics contest.  Pteryx, Smartplanes, Cropcam, and several others are now hovering outside the market space associated with our fixed-wing work, waiting for the US and others to legalize their industry.
  • ArduCopter Mega, the unification of the Ardupilot navigation toolset with the quad/hexa/octocopter flight control code, is being released very soon.
  • At the UW GRAIL lab: The maker of SiftGPU, Changchang Wu, has put together a multi-core bundle adjustment algorithm with available open software
  • At EPFL CV lab:

May 05 2011

This past weekend

This past Saturday, our Environmental Mapping class (GES485) completed an extremely high-detail forestry map. The plot was three 25x25 meter cells, ranging from the top of the hill within the forested area in Herbert Run, down to the stream, and back up the opposite slope. The purpose of this survey was to provide a framework for the hypotheses of students within 485, and a longer-term source of reference for collected EcoSynth data. We classified as much as was feasible within a day: fallen trees, stream borders, and any standing tree at least 2m tall and 1cm wide. As noted in prior blogging, the last survey of this part of campus covered more area, but with less detail. Because we are not exactly sure what the recognizable DBH (diameter at breast height) threshold of the current camera setup is, this data should become rather useful for calibration and error checking in the future. One final note regarding the picture at left: each dot is proportional to the relative area the tree takes up on the map, so many very thin trees cannot be seen at this low a resolution.

Side question / attempt at humor of the day:

 If we can get Ecosynth software to recognize objects by color based on previously taken photos, are we developing hue-ristics?

May 02 2011

Redwood Forests- Natural Cathedrals in 3D

Redwood forests - home to Earth's tallest trees - are among the most impressive natural ecosystems on Earth.  For Arbor Day, Save the Redwoods League worked with Google Earth Outreach to model old-growth redwoods in Google Earth.  Great work - check out the KML in Google Earth!  A great example of what citizen science can do to raise awareness.

Makes me think of our Earthwatch project in Huang Cun, China this summer - can we create visualizations of the ancient village landscapes of China?  These are every bit as impressive and nearly as ancient as redwood forests.  Can citizen science raise global awareness of these?