May 25 2011

Transect Grid Image Capture Pattern

We tried several methods of gathering ground data, all of which failed to produce enough matched detail for the software to project camera positions correctly.  Borrowing a page from the flight capture pattern, I decided that what we needed was a large number of intersections between tracks, so that error-reducing loop closures would be maximized rather than rendering an area as one big track in which error accumulates.  A grid system accomplishes this simply and effectively.

The acquisition has several steps:

1) Secure the corners of the area by marking out four points representing a 25 meter square, or quadrat, and collecting geospatial reference information on these corners.  Mark each corner twice: first with a highly visible marker, and second with a barely visible marker durable enough to survive attempts to destroy the site.

2) Interpolate points between these corners via the method of your choice, in order to construct 25 individual 5x5m grid cells.  Mark points on the exterior with something highly visible, which will show up in a synth.  For interior points, use an unobtrusive flag that is at least visible to the person walking the transect.

3) Optionally, collect forest inventory information based on these 5x5m grid cells.

4) Pick a 'Home Point' and orient yourself pointing towards the center of the quadrat.  If the quadrat is part of a bigger area, pick a home point based on some consistent factor, like "the northernmost corner point".

5) Photography Stage 1. Based on that orientation, walk to your right between two columns of markers until you reach the end of the quadrat.  Come back between the next pair of columns, then continue in a switchback pattern until you reach the opposite end of the quadrat from the home point.  (A scripted sketch of the grid and walking pattern follows step 10.)

6) Photography Stage 2. Turn around and walk the exact same path, pointed in the opposite direction, until you reach the home point.

7) Photography Stage 3. This time, walk to your left between two rows of markers until you reach the end of the quadrat.  Come back between the next pair of rows, then walk back out in a switchback pattern until you reach the opposite end of the quadrat from the home point.

8) Photography Stage 4. Turn around and walk the exact same path, pointed in the opposite direction, until you reach the home point.

9) Run the images through Photoscan or other software with orthophoto-oriented optimizations turned off.

10) Verify that the software has put cameras in the right place in the point cloud, and georeference the point clouds.
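
For reference, here is a minimal sketch of the marker grid (step 2) and the switchback walking paths (steps 5 and 7).  The coordinates and helper names are illustrative, not our field tooling:

    # Bilinearly interpolate the 6x6 marker grid from the four surveyed corners,
    # then build the switchback path walked midway between marker columns.
    # Stages 2 and 4 are simply these paths reversed.
    import numpy as np

    def marker_grid(c00, c10, c01, c11, n=6):
        c00, c10, c01, c11 = map(np.asarray, (c00, c10, c01, c11))
        u = np.linspace(0, 1, n)
        grid = np.empty((n, n, 2))
        for i, a in enumerate(u):
            for j, b in enumerate(u):
                grid[i, j] = ((1 - a) * (1 - b) * c00 + a * (1 - b) * c10
                              + (1 - a) * b * c01 + a * b * c11)
        return grid

    def switchback(grid):
        lanes = (grid[:-1] + grid[1:]) / 2.0   # midlines between marker columns
        path = []
        for k, lane in enumerate(lanes):
            path.extend(lane if k % 2 == 0 else lane[::-1])  # out, back, out...
        return np.array(path)

    corners = [(0, 0), (25, 0), (0, 25), (25, 25)]  # step 1, in local meters
    g = marker_grid(*corners)
    stage1 = switchback(g)                          # stage 2: stage1[::-1]
    stage3 = switchback(g.transpose(1, 0, 2))       # rows instead of columns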


This pattern was tested in the GES485 field methods class (post coming).

May 25 2011

Backpack Camera Mount

We have very successfully incorporated the Clik Elite Bodylink backpack camera bag/mount, designed for standing telephoto shots, into our terrestrial Ecosynth workflow.  The pack mounts to the front of the body, and has an adjustable-angle clamp attached to an extendable bar with the correct gauge bolt for holding a camera.  It removes much of the physical strain of protracted ground photo capture sessions, reduces the risk of dropping the camera, and potentially reduces camera blur.

May 25 2011

Flash card testing

After ordering yet another grade of SDHC card, I decided to compare the three models we have on hand in quantity, using the SD4000 cameras.

Official SD grading is supposed to specify a minimum worst-case write speed from Class 2 to Class 10, so Class 6 gets 6MB/s, Class 10 gets 10MB/s, et cetera. Faster speeds are not graded under that system. Sandisk presents its own grades of 15MB/s, 20MB/s, 30MB/s, etc. for the Sandisk Extreme line under non-industry-standard testing conditions. Recently, another official SD consortium grade, UHS-1, has come into existence for 45MB/s under controlled testing conditions.

Tests were performed by leaving the cameras with the shutter depressed until they ran out of battery. Lighting (and scene complexity) appeared to affect the results: once the lights were turned off 20 minutes into the tests, the highly compressible dark frames wrote at a steady 158 shots per minute. Ten minutes of lights-on shots were averaged to get these figures:

  • Transcend Class 6 16GB $26.75  - 110 shots per minute
  • Transcend Class 10 16GB $27.11 - 116 shots per minute
  • Sandisk Extreme 30MB/s 16GB $53.45  - 156 shots per minute
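
As a back-of-envelope check, those rates can be converted to sustained write throughput.  The ~3 MB average JPEG size below is an assumption about typical SD4000 output, not a measured figure:

    # Back-of-envelope conversion from shots per minute to sustained write speed.
    # The ~3 MB average JPEG size is an assumption, not a measured figure.
    avg_file_mb = 3.0
    cards = [("Transcend Class 6", 110),
             ("Transcend Class 10", 116),
             ("Sandisk Extreme 30MB/s", 156)]
    for name, shots_per_min in cards:
        print(f"{name}: ~{shots_per_min * avg_file_mb / 60.0:.1f} MB/s sustained")

Under that assumption even the Extreme card sustains only ~8 MB/s, well below its 30MB/s rating, which suggests the camera pipeline rather than the card becomes the bottleneck at the top end - consistent with the steady 158 shots per minute once the lights were off.
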
May 13 2011

Camera Stabilization

Denny Rowland, blogging at DIYDrones, is claiming dramatic performance from his image-stabilized arducopter. Keeping an 80x zoom steady is hard enough to do with fast exposures on a specialized motion picture mount. It's so difficult that tripod-mounted binoculars mostly don't bother with magnification that high. The telescope equivalent of his measurements, 18 arc seconds of periodic error at the mount (before camera IS), would be something to be proud of in designing your own astrophotography mount.  If you could keep the platform as steady in translation as it apparently is in rotation (unlikely), you could take high-sharpness long-exposure images of anything in the world - Ecosynth aerial scans by moonlight, if necessary.  The translation problem, though, is minimized by flying at higher altitude, where rotation becomes the whole game.
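
For a sense of scale, here is the conversion from his rotation figure to ground displacement; the 100m range is illustrative:

    # Ground displacement produced by a small pointing error at a given range.
    # The 100 m range below is illustrative.
    import math

    def ground_shift_mm(arcsec, range_m):
        theta = math.radians(arcsec / 3600.0)  # arcseconds -> radians
        return math.tan(theta) * range_m * 1000.0

    print(ground_shift_mm(18, 100))  # ~8.7 mm of drift at 100 m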

His flying camera quad, built for 12kg maximum thrust, proves that any pointing and stabilization needs within the realm of reason are well within our reach if we ever need to address them.

May 13 2011

Structure from Motion for Augmented Reality

I've posted before about some of Henri Astre's toolkits, which are a hobby and learning tool for him.  When I mentioned that his professional work involves augmented reality (presumably smartphone-based) using structure from motion, as well as commercial animation, it was noted that this meshes closely with some of the distant goals of our CS work: a Tegra SfM solution.  Henri has posted a new blog with his progress on realtime video SfM for pose estimation.  I believe that right now he's planning on a several-fps video upload with offsite processing, plus a detailed site synth created beforehand, for later AR use.

May 07 2011

Links of the Moment

  • Krzysztof Bosak, who created the Pteryx UAV, is suggesting a photogrammetric aerial robotics contest.  Pteryx, Smartplanes, Cropcam, and several others are now hovering outside the market space associated with our fixed-wing work, waiting for the US and others to legalize their industry.
  • ArduCopter Mega, the unification of the Ardupilot navigation toolset with the quad/hexa/octocopter flight control code, is being released very soon
  • At the UW GRAIL lab: The maker of SiftGPU, Changchang Wu, has put together a multi-core bundle adjustment algorithm with available open software
  • At EPFL CV lab:

Apr 06 2011

Long Range Flying

Reading a post about a long range radio transmitter, I came across Cristi Rigotti's record FPV attempt.  It inspired me to reconsider the feasibility of a 1km flight, so I re-worked my flight path models for a variety of transect spacings.  50m transect spacing at 120m altitude (above target, which we can do if we launch from a location at the same elevation as the target altitude) gives a highly redundant image, but a very long flight path.  In planning the 'weave' pattern, I'd been assuming that we would tackle it in two flights, using an overlapping grid.  There is so much effort involved in launch and landing, though, that one flight is really preferable.  So I scaled it up to 100m spacing in order to examine the redundancy.  It turns out there are areas where 100m spacing covers the target plane from only two directions, not the other two - not very good for geometric accuracy, or for complex geometry.  I decided to try an intermediate spacing, 70m, which offers roughly twice the aerial image density of 100m spacing but half that of 50m.  It turns out that 70m spacing is just enough to allow a 36km flight to synth a 1 square kilometer AOI, with all areas shot from a variety of angles.  36km is well within the envelope of possibility for a larger airframe with a lot of batteries - far closer than I had realized. Assuming we forswear "crunchy" fiberglass and balsa, as well as expensive carbon fiber, and stick with foam, there are still a lot of options big enough to hold an indefinite number of batteries.  In particular, I would note the Skywalker 1.8m and the Diamond 2500 2.5m.
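
For reference, a simplified version of the path-length model; it idealizes the turns, so real flights run somewhat longer, consistent with the ~36km figure at 70m spacing:

    # Simplified two-direction 'weave' path length over a square AOI,
    # flown as a lawnmower pattern in each of two perpendicular directions.
    # Turn geometry is idealized, so real paths run longer.
    import math

    def weave_length_km(side_m=1000.0, spacing_m=70.0):
        passes = math.floor(side_m / spacing_m) + 1           # transects per direction
        one_dir = passes * side_m + (passes - 1) * spacing_m  # legs + connectors
        return 2 * one_dir / 1000.0                           # both directions

    for s in (50, 70, 100):
        print(f"{s} m spacing: ~{weave_length_km(spacing_m=s):.0f} km")  # 44, 32, 24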


Mar 15 2011

Noise Removal

Photoscan is demonstrably superior to Bundler in many respects, but it seems to have problems whenever the horizon is in the picture: incorrectly matched silhouettes cause a lot of noise in the data.  What I ended up doing to deal with this was primarily based on computing nearest-neighbor distance with MeshLab's Point Set -> Estimate Radius from Density filter, and then selecting and deleting points using the Selection -> Conditional Vertex Selection filter based on rad.  If the noise is significantly different in color, rad can be combined with spectral metrics from the r, g, and b variables to make equations like "(rad > 0.6) AND (b > 100)".  A scripted version of the same idea is sketched below.

Here is the before and after comparison:

I've also prepared a video on the process:
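
For anyone scripting the same idea outside MeshLab, here is a rough equivalent of the rad-plus-color test using SciPy; the neighbor count and thresholds are illustrative:

    # Radius-from-density style denoising: flag points whose mean distance to
    # their k nearest neighbors is large (isolated silhouette noise), and
    # optionally require them to be blue as well. Thresholds are illustrative.
    import numpy as np
    from scipy.spatial import cKDTree

    def denoise(xyz, rgb, k=10, rad_thresh=0.6, blue_thresh=None):
        tree = cKDTree(xyz)
        dists, _ = tree.query(xyz, k=k + 1)   # nearest 'neighbor' is the point itself
        rad = dists[:, 1:].mean(axis=1)       # mean distance to the k real neighbors
        noisy = rad > rad_thresh              # sparse points are likely noise
        if blue_thresh is not None:
            noisy &= rgb[:, 2] > blue_thresh  # e.g. "(rad > 0.6) AND (b > 100)"
        return xyz[~noisy], rgb[~noisy]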

Mar 15 2011

"Simple" Pointcloud Georeferencing

I'm aware that we have multiple transform optimization algorithms of varying completeness in the pipeline, but last night I decided to figure out the simplest means of georeferencing a pointcloud.  This is a crude, error-prone method that is only usable when flat ground can be identified.  It performs better when the flat area is large in both the X and Y dimensions.  A warning: ArcGIS took tens of seconds to display and classify my pointcloud, and minutes to spatially adjust it.  If it hangs and you have to shut it down, you may lose previous work, not just what you're currently doing.  Plan on this taking a while, and multitask.

1. Place readily identifiable markers in your sample area.

2. Take GPS points of those markers using an accurate, WAAS-corrected signal.

3. Take photos.

4. Synth photos.

5. Denoise synth.

6. Convert those GPS points to shapefile format.

7. In Meshlab's Render menu, enable the bounding box and labelled axes.  Use the Normals, Curvatures & Orientation -> Transform: Rotate tool in Meshlab with the 'barycenter' option selected to rotate the synth until the flat ground is coplanar with the X-Y plane.

8. Export the pointcloud as a .ply with non-binary (plaintext) encoding.

9. Rename the .ply to .txt extension.

10. Open the .txt file in Notepad.

11. Replace the header information with a single space-delimited header line 'x y z r g b alpha' and save.

12. Open the .txt file in Excel as a space-delimited spreadsheet.

13. Save as a .csv file.

14. Open the .csv in your planning map document in ArcMap, where you already have the GPS points loaded in a UTM coordinate system.

15. Use 'Add XY Data' with the X and Y columns.

16. Right click on the new 'Events' layer and export it as a new shapefile.  Add it to your map.

17. Begin editing that new shapefile.

18. Symbolize the points by color or color ratios using the R, G, B columns, and cross-reference manually with Meshlab in order to locate your markers.

19. In the alpha column (which should have the default value 255), set the marker points to 1.  Symbolize by alpha, unique categories, to make the markers stand out.  Save your edits.

20. Write down the X and Y coordinates of each marker after finding them using 'Select By Attributes'.

21. Open the 'Spatial Adjustment' toolbar, choose the similarity transformation as the adjustment method, and set adjust data to all features in your pointcloud layer.

22. Place a new displacement link for each marker, with one end at your marker in the pointcloud and the other end at the corresponding GPS marker in ArcGIS.  (A scripted least-squares equivalent of this adjustment is sketched after these steps.)

23. Hit the 'Adjust' button.

24. Save your edits and stop editing.

25. Optional: Use SQRT((X1-X2)^2+(Y1-Y2)^2) to determine the distance between two markers in your original coordinate system.  Use the ruler to determine the distance between them in UTM.  Using the field calculator, multiply the Z column by the ratio of UTM distance to pointcloud distance.
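
For reference, the displacement-link adjustment in steps 21-23 amounts to fitting a 2D similarity transform by least squares.  This sketch computes the same class of transform directly (it is not ArcGIS's internal implementation):

    # Fit scale s, rotation R, translation t so that s*R@src + t ~ dst,
    # from matched marker coordinates. Standard least-squares (Umeyama) solution.
    import numpy as np

    def fit_similarity_2d(src, dst):
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        A, B = src - mu_s, dst - mu_d           # centered coordinates
        U, S, Vt = np.linalg.svd(A.T @ B)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
        D = np.diag([1.0, d])
        R = Vt.T @ D @ U.T
        s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    # apply to the cloud:  georef_xy = s * xy @ R.T + t
    # and, per step 25, multiply Z by the same scale factor s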

Mar 15 2011

Learning Photoscan

I've really gotten my hands dirty in Photoscan this past week.  I've learned a number of things:

  • A periodic sampling regime ("Every third photo", etc) can produce a *SUBSTANTIALLY* worse pointcloud than every-photo for complex surfaces.  Simple surfaces aren't affected as much.  This could be applied selectively to cut down on runtime.
  • The "Estimating Scene Structure" time-remaining display is only useful as a minimum bound; the true time may be 10-100x what is displayed.  The other estimators seem to be accurate.
  • Due to speed penalties at high image counts, choosing image subsets is going to play a very important role in synthing areas of interest, and we need to develop better methods for this.
  • Paused photoscan has a 'sleep mode' where it shifts down to a fraction of the memory (10GB -> 1.3GB) and no CPU, but it needs 10 or 15 minutes to enter it after pause is initiated, and uses full memory and 95% CPU during that time.
  • Tree trunks are readily identifiable in Photoscan given full-resolution pictures and a rapid frame rate, but care must be taken during turns to unify the synth using extra pictures
  • For small image sets, periodic subsetting (every other picture) may be attempted, and then supplemented with extra information in corners.
  • Photoscan is relatively noise-free for aerial photos, but anything that includes the sky will cause serious noise problems on silhouettes - points of sky and tree will appear in a distinctive projected pattern in what should be air.  Photosynth does not suffer from these problems.
  • Markers need to be sizable and textured to be seen in a synth.  Strings don't come close to working, though their directionality is very useful for walking consistent transects without good visibility.  The pin flags work very occasionally; it would probably be better to add flat, 8.5x11 textured bullseyes as well.  GPSing those markers gets you a crude georeferencing, but in small forested transects this is not very useful due to the error bounds on the GPS.  Increasing the lateral dimension of the synth using a cross transect would make georeferencing much easier.
  • Symmetrically cropping the edges out of pictures (keeping the same centerpoint) can be an effective way to cut down noise and processing time for a camera held in a constant orientation.  Removing the top and bottom significantly decreased sky noise and processing time on a test dataset.  (A subset-and-crop sketch follows this list.)
  • Our tree database is not very accurate.  It's missing stems < 12cm, but even the big ones are very hard to locate in the synth using relative positions.  A set of 'reference trees' would be very helpful here as a complement to GPS markers.
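
Two of the bullets above (periodic subsetting and symmetric cropping) are easy to script together.  A minimal sketch using Pillow; the paths, step size, crop fraction, and turn-frame indices are all illustrative:

    # Keep every Nth photo plus all frames shot during turns, and trim equal
    # bands off the top and bottom (centerpoint preserved) to cut sky noise.
    # Paths, step size, crop fraction, and turn indices are illustrative.
    from pathlib import Path
    from PIL import Image

    def subset_and_crop(photo_dir, out_dir, step=3, keep=(), crop_frac=0.15):
        photos = sorted(Path(photo_dir).glob("*.JPG"))
        Path(out_dir).mkdir(exist_ok=True)
        for i, p in enumerate(photos):
            if i % step and i not in keep:    # 'keep' holds turn-frame indices
                continue
            im = Image.open(p)
            w, h = im.size
            dy = int(h * crop_frac)           # equal trim top and bottom
            im.crop((0, dy, w, h - dy)).save(Path(out_dir) / p.name)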