Dec 26 2011

Ecogeo versus spline codes

There was one last thing that I did for the error analysis. 

Going through the raw PLY data set from Herbert Run Spring 2010, which is in an arbitrary coordinate system, I picked out the locations of 5 buckets that were laid out in the shape of an X on campus:
100, 102, 108, 111, 114.

Using ScanView as before, I was able to pick out each bucket's location by individually choosing points within the area where a bucket should be that appeared to be part of a clump of orange. I took the average of the x, y, z coordinates of each set of points to obtain an approximate center of where each bucket should be located in the arbitrary coordinate system generated when the point cloud was made. I then paired these coordinates with the reference GPS coordinates of each specific bucket.
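
As a minimal sketch of that averaging step, assuming the manually picked points for one bucket are saved as one "x y z" row per point (the file name is hypothetical):

```python
import numpy as np

# Hypothetical file of manually picked points for one bucket, one "x y z" row per point.
points = np.loadtxt("bucket_100_picked_points.txt")

# Approximate bucket center = mean of the picked points, still in the arbitrary coordinate system.
center = points.mean(axis=0)
print("approximate bucket center (arbitrary coordinates):", center)
```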

This data was used by a different Python code, ecogeo4.py, which is another way of getting the 7 Helmert parameters needed to transform the arbitrary point cloud into the correct GPS coordinate system. This code takes one parameter text file, which should be in the following format:

arbitraryX arbitraryY arbitraryZ realX realY realZ,

one point per row, separated by spaces, not tabs.
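
As a minimal sketch of loading that parameter file (the file name is hypothetical, and ecogeo4.py's actual parsing may differ):

```python
import numpy as np

# Hypothetical control-point file: one row per bucket,
# "arbitraryX arbitraryY arbitraryZ realX realY realZ", separated by spaces.
data = np.loadtxt("bucket_control_points.txt")

arbitrary_xyz = data[:, 0:3]  # coordinates in the point cloud's arbitrary system
real_xyz = data[:, 3:6]       # matching GPS (real-world) coordinates
```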

Using the 5 buckets mentioned before, I ran the ecogeo code to obtain a new set of Helmert parameters. I then used the applyHelmert Python code to transform a list of the bucket locations in the raw point cloud, consisting of just 14 points.
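
I am not reproducing the applyHelmert code itself here, but as a rough sketch of the standard 7-parameter Helmert (similarity) transform it applies, assuming three translations, three rotation angles, and one scale factor:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation about the x, y and z axes (radians), composed as Rz * Ry * Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return np.dot(Rz, np.dot(Ry, Rx))

def apply_helmert(points, tx, ty, tz, scale, rx, ry, rz):
    """7-parameter Helmert transform, X = T + scale * R * x, for an N x 3 array of points."""
    R = rotation_matrix(rx, ry, rz)
    T = np.array([tx, ty, tz])
    return T + scale * np.dot(points, R.T)
```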

This yielded results similar to those from the spline.py code. The z direction is still inverted, and it is the coordinate that most of the error is coming from. The x and y directions are very good.
For the transformed x values versus the expected x values, the trend line is y = 1.0008x - 265.39, with an R² of 0.9998.

For the y values, y = 0.9998x + 3372.5, also with an R² of 0.9998.

The z coordinates are odd, with a trend line of y = -0.2557x + 68.562 and an R² of 0.1563, which is really bad: not only is the data inverted, it also seems to be quite uncorrelated.
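
The trend lines above are just linear fits of the transformed coordinates against the expected GPS coordinates; a minimal sketch of that fit (the input arrays below are placeholders, not the real bucket values):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder inputs: one value per bucket, expected GPS coordinate and Helmert-transformed coordinate.
expected_x = np.array([1000.0, 1050.0, 1100.0, 1150.0, 1200.0])
transformed_x = np.array([1000.4, 1049.7, 1100.6, 1149.8, 1200.3])

slope, intercept, r_value, p_value, std_err = linregress(expected_x, transformed_x)
print("trend line: y = %.4fx %+.4f, R^2 = %.4f" % (slope, intercept, r_value ** 2))
```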

These data resulted in root mean square errors of the distances between actual and predicted bucket locations of 2.354 in the XY plane, 9.045 in the Z direction, and 9.346 overall.
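
A minimal sketch of how those RMSE values can be computed, assuming the actual and predicted bucket locations are N x 3 arrays (the values below are placeholders):

```python
import numpy as np

# Placeholder N x 3 arrays of bucket locations; columns are X, Y, Z.
actual = np.array([[0.0, 0.0, 10.0], [15.0, 5.0, 12.0], [30.0, 9.0, 11.0]])
predicted = np.array([[0.4, -0.3, 19.0], [15.6, 5.5, 21.5], [29.5, 9.4, 20.0]])

diff = predicted - actual
rmse_xy = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # horizontal (XY) error
rmse_z = np.sqrt(np.mean(diff[:, 2] ** 2))                    # vertical (Z) error
rmse_3d = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))         # overall 3D error
print(rmse_xy, rmse_z, rmse_3d)
```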

The result I obtained with the spline code had RMSEs of 4.198 for XY, 95.167 for Z, and 95.299 overall. Obviously the spline code does a much worse job converting the data in the z direction than this ecogeo code does, but in the xy plane the errors aren't too far off.

Overall, the spline code seems to work almost as well on this small data set as the ecogeo code did in the x and y directions, but there is still the confusion with the z direction due to the inversion.

Nov 30 2011

Georeferencing Code Updates

Continuing from my last post, I ran the same analysis on the Herbert Run point cloud generated from the spring 2011 data. It turned out that, at first, the GPS data set was not ordered properly, so the spline function didn't work correctly. This yielded the following results:

The x-y-z axes show how the orientation of the data is set up. Ideally, this picture would show an untilted image, as if one were looking straight down on the campus. This point cloud was given an incorrect set of Helmert parameters because the spline of the GPS and camera data was poorly constructed. Once this problem was fixed and I analyzed the data again, I got much better results.

This point cloud transformation was much better now that the GPS points were in the correct order. The x and y axes appear to be close to where they should be, and it seems that we are looking straight down onto campus, but there is one glitch that this picture does not show: all of the z coordinates appear to have been inverted. The high points in the point cloud are actually the low points, and the low points in the cloud are the real high points. This is indicated by the analysis of the orange field bucket positions in the point cloud versus their actual positions in space when the pictures were taken.

These scatter plots are for the second attempt at transforming the point cloud. The first graph plots the x values of the manually detected buckets in the point cloud versus the actual GPS coordinates of those buckets in the field. The equation of the trend line for the x coordinates is y = 0.996x + 1398.7 with an R-squared of 0.9995. The graph of the y values is not shown, but it is very similar to the first graph; the trend line for the bucket y values is y = 1.0073x - 31820 with an R-squared of 0.9994. The x and y graphs show a strong correlation between the two data sets, and both slopes are very close to 1.

The second graph shown is for the estimated z coordinates of the buckets versus the GPS z coordinates. You can see a correlation between the two from the trend line, but the slope is negative. The trend line is y = -1.0884x + 187.29 with an R-squared of 0.9872. This negative slope seems to be tied to the fact that all of the point cloud data had inverted z coordinate values.
Overall, this data is much, much better than the original result. We are currently trying to find a solution to the inverted z axis; the following is our first attempt at fixing the problem.

When the Helmert parameters were compared to those from the original Herbert Run data set from fall 2010, the fourth parameter, the scale factor, turned out to be negative for the spring data. We wanted to see how the transformed point cloud would react if we forced the scale constant to be greater than zero. This change resulted in the following point cloud orientation:

This did exactly what we wanted for the z axis: all the real-world high points became point cloud high points and lows became lows. The obvious problem is that it inverted the x and y axes, so this "solution" did not really solve much, since it caused the very problem it was trying to fix, just in different axes. The correlation between the three sets of variables only changed in that the slopes of the trend lines flipped sign; the R-squared values did not change when the scale parameter was altered. Besides this, despite having the z axis in the correct orientation, the data seems a little weird. The z coordinates fell in a range of about (-3, 7). I took the differences between the real GPS heights of the buckets and the calculated heights, and there is a consistent offset between the two: the calculated data is about 50.7 units below the expected GPS heights, for each bucket.
I want to see what happens if I alter the applyHelmert code so that anything involving the z axis is multiplied by the absolute value of the scale parameter, while leaving the x and y multiplications alone. If we can keep the x and y axes from the first attempt with ordered data, and use the z-axis orientation that results from multiplying only the z components by the absolute value of the scale parameter, the point cloud should be oriented the correct way, just translated down too low by a constant amount (which is something that has not been explained yet).
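
I don't have that change written yet, but as a rough sketch of the idea, assuming the same 7-parameter form as above (X = T + scale * R * x), the z-only tweak would look something like this:

```python
import numpy as np

def apply_helmert_abs_z(points, tx, ty, tz, scale, R):
    """Sketch of the proposed tweak: multiply only the z components by abs(scale).

    points is an N x 3 array in the arbitrary coordinate system and R is the 3 x 3
    rotation matrix built from the three Helmert rotation angles.
    """
    rotated = np.dot(points, R.T)
    per_axis_scale = np.array([scale, scale, abs(scale)])  # abs() applied to z only
    return np.array([tx, ty, tz]) + rotated * per_axis_scale
```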

Nov 17 2011

The Algorithmic Beauty of Plants


In searching for research related to the structure and architecture of trees and canopies, I came upon the book The Algorithmic Beauty of Plants and the research of Dr. Przemyslaw Prusinkiewicz and his Algorithmic Botany lab in the Department of Computer Science at the University of Calgary.  All I can say is, 'Wow!'

The image at left is from a 2009 paper on procedural, self-organizing reconstructions of tree and forest landscapes.

Dr. Prusinkiewicz's research spans over two decades and his website includes published algorithms for procedurally generating 3D, colored, and textured plants.  Some of the figures in these papers look amazing.

I look forward to looking more into Dr. Prusinkiewicz's research for inspiration and insights in support of my own research with computer vision remote sensing based reconstruction of canopies.  Some of Prusinkiewicz's work covers the use of point clouds to represent tree structure, so I am definitely interested in learning more about that data model.

References & image credit:

Wojciech Palubicki, Kipp Horel, Steven Longay, Adam Runions, Brendan Lane, Radomir Mech, and Przemyslaw Prusinkiewicz. Self-organizing tree models for image synthesis. ACM Transactions on Graphics 28(3), 58:1-10, 2009.

Nov 08 2011

Color and Statistics


This was just searching for a specific range of hue, saturation and value. It obviously picked up a lot of stuff other than what I wanted, like almost-green colors. Also, there are way too many points.
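
A minimal sketch of that kind of HSV range filter over point colors (the range limits below are illustrative, not the ones I actually used, and the input array is a placeholder):

```python
import colorsys
import numpy as np

# Placeholder N x 3 array of point colors with RGB in 0-255.
rgb = np.random.randint(0, 256, size=(1000, 3))

def rgb_to_hsv_rows(rgb_255):
    """Convert RGB (0-255) rows to HSV with hue in degrees and S, V in 0-100."""
    hsv = np.array([colorsys.rgb_to_hsv(*(row / 255.0)) for row in rgb_255])
    hsv[:, 0] *= 360.0   # hue in degrees
    hsv[:, 1:] *= 100.0  # saturation and value in percent
    return hsv

hsv = rgb_to_hsv_rows(rgb)

# Keep points whose HSV falls inside an "orange" box (illustrative limits only).
mask = ((hsv[:, 0] > 15) & (hsv[:, 0] < 45) &
        (hsv[:, 1] > 60) & (hsv[:, 2] > 60))
orange_points = rgb[mask]
```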

This is using the correlation between each point and an orange of (30, 100, 100). Fewer points have been picked up, but most of them are still not from the buckets. I need to pick the perfect value for orange and then a really high correlation threshold to limit the points to just the buckets. It's going to be a little hard to find the exact right color to use as the base color.
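
I'm not reproducing the exact correlation I used here, but as a rough sketch of the idea, each point can be scored by its similarity to the reference orange in HSV space and only points above a threshold kept (the scoring function, threshold, and input array below are illustrative):

```python
import numpy as np

# Reference orange in HSV: hue 30 degrees, saturation 100, value 100.
reference = np.array([30.0, 100.0, 100.0])

def orange_similarity(hsv_points, ref):
    """Illustrative similarity score: 1 minus the normalized HSV distance to the reference color."""
    # Hue wraps around 360 degrees, so take the shorter angular difference.
    dh = np.abs(hsv_points[:, 0] - ref[0])
    dh = np.minimum(dh, 360.0 - dh) / 180.0
    ds = np.abs(hsv_points[:, 1] - ref[1]) / 100.0
    dv = np.abs(hsv_points[:, 2] - ref[2]) / 100.0
    return 1.0 - np.sqrt(dh ** 2 + ds ** 2 + dv ** 2) / np.sqrt(3.0)

# Placeholder N x 3 array of point colors in HSV (hue in degrees, S and V in 0-100).
hsv = np.random.rand(1000, 3) * np.array([360.0, 100.0, 100.0])
scores = orange_similarity(hsv, reference)
bucket_candidates = hsv[scores > 0.9]  # illustrative threshold
```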

I'm going to manually search through some of the point clouds and find out what color the buckets actually are. I already have an idea from one small subset of points that included one bucket, but I need a larger sample of points to get better mean values for H, S, and V. Once I've done that, maybe it will be easier to pick out the buckets from the above messes of points.

Apr 10 2011

Visualizing point clouds in your browser

3DTubeMe_logo

Check out 3DTubeMe.com to see some of the latest in web based 3D visualizations.  I was directed to a post on Slashdot about the website by a professor and am totally thrilled about what this could mean for visualizing our own 3D point cloud data.  Currently you need to log in and add this as an app through Facebook to upload and view, but the website authors say they are going to get rid of this requirement soon.  I uploaded a small set of photos for processing, but was notified that my camera was not in their database and to wait to hear back about the processing of my cloud.  Maybe we could get this WebGL approach working to visualize our own point clouds? 

That’s all for now, back to the grind!

Mar 15 2011

Learning Photoscan

I've really gotten my hands dirty in Photoscan this past week.  I've learned a number of things:

  • A periodic sampling regime ("every third photo", etc.) can produce a *SUBSTANTIALLY* worse point cloud than using every photo for complex surfaces.  Simple surfaces aren't affected as much.  This could be applied selectively to cut down on runtime.
  • The "Estimating Scene Structure" time remaining display is only useful as a minimum bound, and may be 10-100x what is currently displayed.  The other estimators seem to be accurate.
  • Due to speed penalties at high image counts, choosing image subsets is going to play a very important role in synthing areas of interest, and we need to develop better methods for this.
  • Paused Photoscan has a 'sleep mode' where it shifts down to a fraction of the memory (10 GB -> 1.3 GB) and no CPU, but it needs 10 or 15 minutes to enter it after pause is initiated, and it uses full memory and 95% CPU during that time.
  • Tree trunks are readily identifiable in Photoscan given full-resolution pictures and a rapid frame rate, but care must be taken during turns to unify the synth using extra pictures.
  • For small image sets, periodic subsetting (every other picture) may be attempted, and then supplemented with extra information in corners.
  • Photoscan is relatively noise-free for aerial photos, but anything that includes the sky will cause serious noise problems on silhouettes - points of sky and tree will appear in a distinctive projected pattern in what should be air.  Photosynth does not suffer from these problems.
  • Markers need to be sizable and textured to be seen in a synth.  Strings don't come close to working, though their directionality is very useful for walking consistent transects without good visibility.  The pin flags work very occasionally, but it would probably be better to add flat, 8.5x11 textured bullseyes as well.  GPSing those markers gets you a crude georeferencing, but in small forested transects this is not very useful due to the error bounds on the GPS.  Increasing the lateral dimension of the synth using a cross transect would make georeferencing much easier.
  • Symmetrically cropping the edges out of pictures (keeping the same centerpoint) can be an effective way to cut down noise and processing time for a camera held in a constant orientation.  Removing the top and bottom significantly decreased sky noise and processing time on a test dataset; see the sketch after this list.
  • Our tree database is not very accurate.  It's missing stems < 12cm, but even the big ones are very hard to locate in the synth using relative positions.  A set of 'reference trees' would be very helpful here as a complement to GPS markers.
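
A minimal sketch of that symmetric crop using Pillow (the crop fraction and folder names are illustrative; this is not the exact script we used):

```python
import glob
import os

from PIL import Image

CROP_FRACTION = 0.15  # illustrative: trim 15% off the top and 15% off the bottom

os.makedirs("photos_cropped", exist_ok=True)
for path in glob.glob("photos/*.jpg"):
    img = Image.open(path)
    w, h = img.size
    margin = int(h * CROP_FRACTION)
    # Keep the full width and the same vertical centerpoint; trim top and bottom symmetrically.
    cropped = img.crop((0, margin, w, h - margin))
    cropped.save(os.path.join("photos_cropped", os.path.basename(path)))
```
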
Aug 18 2010

3D Scanning of UMBC?

 UMBC_3D_scan_crop

Ready for an Ecosynth scan over the entire UMBC campus?  I think it’s time!

I just gave a brief talk about our 3D mapping work to the University administration at the annual retreat.  Along with Stu Schwartz (Senior Scientist, CUERE), Suzanne Braunschweig (Lecturer, UMBC GES), and Patricia La Noue (Director, UMBC Dept. of Interdisciplinary Studies) I was on a panel discussing the value of UMBC’s natural spaces as classroom and laboratory.  I spent my 5 minutes talking about how I use the forests on campus as my lab for developing our new approach to ecological remote sensing.  Suzanne talked about her experiences teaching science classes using the natural environment of UMBC, Stu talked about the campus as a lab for studying the hydrology and planning side of stormwater management strategies, and Patricia talked about her work engaging students of interdisciplinary studies with UMBC’s natural spaces through the Greenway project.

I ended my talk with this point cloud image from the Herbert Run site that really captured the 3D structure of buildings and trees around the dorms on campus; I think it was a big hit!  Link to the Photosynth, here.  I mentioned in this slide how we are thinking about an Ecosynth scan of the whole campus.  Afterwards several people came up to ask about Ecosynth and about a campus ecological inventory.

The area inside the loop is about 63 hectares, easily 10 times bigger than anything we have done before.   But, I think it is possible.  We met with another RC flier on Monday who is a member of the Baltimore Area Soaring Society and is very excited by the value that our work places on his hobby.  He thinks that the Slow Stick might be a great aerial platform for a campus acquisition simply because it requires minimal space for take off and landing (recall that flight from 7/30 where we staged from atop a parking garage).  So, I will have to see how a 3D scan of campus fits in with my schedule of dissertation work.  I think I will need to get some help!  My slides are attached in PDF form below.

 

DANDOIS_ELLIS_UMBC_Ecosynth_short.pdf (3.94 mb)

Aug 01 2010

Adventures in Personal Remote Sensing

First Post! 

Welcome to the Ecosynth Blog.  I am Jonathan Dandois, a Ph.D. student in the Anthropogenic Landscape Ecology Lab here at UMBC.  I am working on Ecosynth as a system for personal remote sensing for my dissertation research in the Department of Geography and Environmental Systems.

I am building this page into a resource for those interested in using the Ecosynth system at their own research sites, or in their own backyards, and as a place where I and other ‘Ecosynthers’ can post about our progress and experiences with personal remote sensing.  You can find out a bit of the history of Ecosynth on the About Ecosynth page. I am building a page that details our techniques for personal remote sensing using the computer vision software Bundler and Photosynth, but that one is not ready for the world just yet.  I am also setting up a page about the history of our “adventures” doing remote sensing using RC planes, helicopters and kites. 

But, back to the fun stuff.

With the purchase of two high-speed cameras (thanks to Erle’s research), a Canon SD4000 and a Casio EX FS10, our aerial photo acquisitions have taken a giant step forward.  We attach the cameras to the underside of the GWS Slow Stick frame in a mount that holds it in place and keeps the shutter pressed so that the camera takes photos continuously. 

Here is an oblique aerial panorama I made with some photos I took of campus with the SD4000 mounted on a Slow Stick.  This panorama was made with the free software Hugin, which uses the same SIFT feature identification algorithm that Bundler and Photosynth use.

Aerial Panorama of campus

For someone who has worked with images of the land taken from airplanes and satellites, it is very exciting to be collecting my own remote sensing imagery.  We are also generating great 3D 'synths' from the high-overlap photos collected with the SD4000.   This screen capture of a 3D point cloud was generated from a collection of 1000 photos we took over the Knoll yesterday afternoon. The Photosynth can be viewed here, link. This screen capture is from the free software Meshlab, and I used the free Photosynth Point Cloud Exporter tool to grab the points from the Photosynth website for local use.

This is really promising.  We are still refining our choice of aerial platform, but we are now at the point where we can begin our research into understanding how computer vision can be used for remote sensing, and into the intricate details that will make it work reliably.

PPCE_07302010_Knoll_0Snap00

We also just purchased a Garmin Edge 500 for making a GPS track of the flight. While it is designed for biking and tracking ‘calories burned’ or ‘power’, we wanted to see how it would work for us.  It is very lightweight (57 g) and easy to use.   We are still trying to work with component / data-logger based GPS equipment commonly marketed for use with remote controlled planes, e.g., the Eagle Tree telemetry systems, but the Garmin Edge has so far proven very easy to use and likely offers the same GPS position accuracy.

Below is the track uploaded in Google Earth.  We use the Garmin Training Center software to interface with the GPS.  The software is quite user friendly and has a few nice features.  It effortlessly uploads data to Google Earth, which can then be exported to KML and then off to ArcGIS.  It plots a simple map of the track onto a background map if it has one available.  It also plots graphs of the speed and other characteristics of the flight, mostly things we don’t need though!

google-earth_track_capture

Another synth I was running from a set of photos I collected over our Herbert Run site just finished, link here.

That is all for now.  A lot more progress to follow.  The cutting edge of remote sensing is quite exciting!