We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.

Nov 30 2011

Georeferencing Code Updates

Continuing from my last post, I ran the same analysis on the Herbert Run point cloud generated from spring 2011. It turned out that, at first, the set of GPS data was not ordered properly, so the spline function did not work correctly. This yielded the following results:
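For reference, the ordering fix amounts to something like this sketch (names are illustrative; the actual logic lives in our spline/georeferencing scripts): sort the GPS fixes by timestamp before building the spline, since an out-of-order track makes the interpolation meaningless.

```python
# Hypothetical sketch: order GPS fixes by timestamp before spline fitting.
def sort_gps_fixes(fixes):
    """fixes: list of (timestamp, x, y, z) tuples from the GPS log."""
    return sorted(fixes, key=lambda fix: fix[0])

fixes = [(3.0, 10.0, 5.0, 1.0), (1.0, 0.0, 0.0, 0.0), (2.0, 5.0, 2.0, 0.5)]
ordered = sort_gps_fixes(fixes)
# ordered now runs from timestamp 1.0 to 3.0
```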

The x-y-z axes show how the orientation of the data is set up. Ideally, this picture would show an untilted image, as if one were looking straight down on the campus. This point cloud was given an incorrect set of Helmert parameters because the spline of the GPS and camera data was poorly constructed. Once this problem was fixed and I analyzed the data again, I got much better results.


This point cloud transformation was much better now that the GPS points were in the correct order. The x and y axes appear to be close to where they should be, and it seems we are looking straight down onto campus, but there is one glitch this picture does not show: all of the z coordinates appear to have been inverted. The high points in the point cloud are actually the low points, and the low points in the cloud are the real high points. This is indicated by comparing the positions of the orange field buckets in the point cloud with their actual positions in space when the pictures were taken.

These scatter plots are for this second attempt at transforming the point cloud. The first graph plots the X values of the manually detected buckets in the point cloud against the actual GPS coordinates of those buckets in the field. The trend line for the x coordinates is y = 0.996x + 1398.7 with R-squared = 0.9995. The graph of the y values is not shown but is very similar to the first, with a trend line of y = 1.0073x - 31820 and R-squared = 0.9994. Both graphs show a strong correlation between the two data sets, and both slopes are very close to 1.

The second graph shown is for the estimated z coordinates of the buckets versus the GPS z coordinates. The trend line shows a correlation between the two, but the slope is negative: y = -1.0884x + 187.29 with R-squared = 0.9872. This negative slope reflects the fact that all of the point cloud data had inverted z coordinate values.
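The slopes and R-squared values above came from spreadsheet trend lines, but they can be double-checked with a small ordinary least-squares helper (a sketch, not the code actually used):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + b, plus R^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return m, b, r2

# Toy example of the inverted-z symptom: perfectly flipped heights
# give a slope of -1 with a high R^2.
xs = [0.0, 1.0, 2.0, 3.0]
zs = [5.0, 4.0, 3.0, 2.0]
```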
Overall, this data is much, much better than the original result. We are currently trying to find a solution to the inverted z-axis; the following is our first attempt to fix the problem.

When the Helmert parameters were compared to those for the original data set from Herbert Run in fall 2010, the fourth parameter, which controls scaling, turned out to be negative for the spring. We wanted to see how the transformed point cloud would react if we forced the scaling constant to be greater than zero. This change results in the following point cloud orientation:

This did exactly what we wanted for the z-axis: all the real-world high points became point cloud high points and lows became lows. The obvious problem is that it inverted the x and y axes, so this "solution" did not really solve much; it caused the very problem it was attempting to fix, just on different axes. The correlations among the three sets of variables only changed in that the slopes of the trend lines flipped sign; the R-squared values did not change when the scale parameter was altered. Besides this, despite having the z-axis in the correct orientation, the data seems a little weird. The z coordinates fall in a range of about (-3, 7). I took the differences between the real GPS heights of the buckets and the calculated heights, and there appears to be a consistent offset between the two: the calculated data sits about 50.7 units below the expected GPS heights, for each bucket.
I want to see what happens if I alter the applyHelmert code to multiply anything involving the z-axis by the absolute value of the scale parameter, while leaving the x and y multiplications alone. If we can keep the x and y axes from the first attempt with ordered data, and get the z-axis orientation by multiplying only the z components by the absolute value of the scale parameter, the point cloud should be oriented the correct way, just translated down too low by a constant amount (which is something that has not been explained yet).
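A hypothetical sketch of that change (the real applyHelmert code may be structured differently; the names here are illustrative): apply the signed scale to x and y but its absolute value to z, so a negative scale parameter no longer flips the vertical axis.

```python
# Sketch of the proposed applyHelmert tweak (illustrative names):
# signed scale on x and y, absolute value of the scale on z.
def apply_helmert(point, translation, scale, rotation):
    """point: (x, y, z); translation: (tx, ty, tz);
    rotation: 3x3 matrix given as a list of rows."""
    rotated = [sum(rotation[i][j] * point[j] for j in range(3))
               for i in range(3)]
    x = translation[0] + scale * rotated[0]
    y = translation[1] + scale * rotated[1]
    z = translation[2] + abs(scale) * rotated[2]  # keep z upright
    return (x, y, z)

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# With a negative scale, x and y flip but z keeps its sign:
# apply_helmert((1, 1, 1), (0, 0, 0), -2.0, identity) -> (-2.0, -2.0, 2.0)
```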

Nov 22 2011

Analyzing the Point Cloud Transformations

This graph represents the data for the Herbert Run site from October 11, 2010. I used ScanView to locate the exact coordinates of the orange buckets in the transformed point cloud that was created with the previously written Helmert code. The values on the X-axis are the actual GPS values from the georeferencing in the x direction, where higher values are more western, I believe. The values on the Y-axis are the calculated means of the orange points I extracted with ScanView. The black line is the line of best fit and has a slope of 0.9941, which is quite close to 1; a slope of exactly 1 would indicate an exact correspondence between the two data sets. This is good in two ways: the slope is positive, so there is a positive correlation between the two data sets, and it is very close to 1, which means the correlation is strong. The graph for the Y values is very similar, with a positive slope of 1.0079. What makes these results stand out is the contrast with what I got before this analysis, on the point cloud of a different data set.

This is for the knoll site from fall 2010. There is a negative correlation, and the slope is nowhere close to 1, so the transformation of this particular point cloud did not turn out well at all. It's possible that I made a mistake running the spline.py code to get the 7 Helmert parameters. The fourth parameter, which is for scaling, was negative, which doesn't seem right, but it also looked like the data wasn't rotated enough. I still have another data set to test, and once that is done I'm going to retry this data set to see if it was just a mistake I made.

A small note about the color-based bucket search: some of the buckets were sitting on top of blue boxes, which seemed to alter the color of the orange points; they looked fairly pink, which was not a color I was searching for. This could be one reason why some of the buckets were not registering in my search. Jonathan also pointed out that some of the trees were starting to change colors at this point, so that could be a small source of extraneous points.

Nov 17 2011

The Algorithmic Beauty of Plants

In searching for research related to the structure and architecture of trees and canopies, I came upon the book The Algorithmic Beauty of Plants and the research of Dr. Przemyslaw Prusinkiewicz and his Algorithmic Botany lab in the Department of Computer Science at the University of Calgary.  All I can say is, 'Wow!'

The image at left is from a 2009 paper on procedural, self-organizing reconstructions of tree and forest landscapes.

Dr. Prusinkiewicz's research spans over two decades and his website includes published algorithms for procedurally generating 3D, colored, and textured plants.  Some of the figures in these papers look amazing.

I look forward to looking more into Dr. Prusinkiewicz's research for inspiration and insights in support of my own research on computer vision remote sensing based reconstruction of canopies.  Some of Prusinkiewicz's work covers the use of point clouds to represent tree structure, so I am definitely interested in learning more about that data model.

References & image credit:

Wojciech Palubicki, Kipp Horel, Steven Longay, Adam Runions, Brendan Lane, Radomir Mech, and Przemyslaw Prusinkiewicz. Self-organizing tree models for image synthesis. ACM Transactions on Graphics 28(3), 58:1-10, 2009.

Nov 11 2011

GoPro HERO 2 in hand, now I just need time!

OK, so now I have a new GoPro HERO2 camera shooting 11MP stills at 2fps, I just need the time to go out and test it at our study sites.

First things first: this camera shoots stills with a relatively wide field of view (FOV), and we don't know what that will do to the structure from motion computation.  The camera shoots at its full 170° FOV at 11MP, and at full or medium 127° FOV at 8MP and 5MP.  The narrow 90° FOV option, most similar to (although still wider than) the other cameras used in our research, is only available in video mode.

Some initial tests with ground subjects on campus have produced somewhat positive results, though I think it is too early to tell for sure.

More to follow, when I can get to it.

Nov 08 2011

Color and Statistics

This was just searching for a specific range of hue, saturation, and value. It obviously picked up a lot of stuff other than what I wanted, including nearly green colors. There are also far too many points.
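The range search amounts to something like the following sketch (the thresholds here are illustrative, not the values I actually used): keep any point whose HSV falls inside a box around orange.

```python
# Sketch of a simple HSV box search around orange.
# Thresholds are illustrative assumptions.
def in_orange_range(h, s, v, h_range=(15, 45), s_min=50, v_min=50):
    """h in degrees 0-360, s and v as percentages 0-100."""
    return h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min

points = [(30, 80, 90),    # orange-ish: kept
          (120, 80, 90),   # green hue: rejected
          (30, 10, 90)]    # washed out: rejected
hits = [p for p in points if in_orange_range(*p)]
```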

This is using the correlation between each point and an orange of (30, 100, 100). Fewer points have been picked up, but most of them are still not from the buckets. I need to pick the right value for orange and then use a very high correlation cutoff to limit the points to just the buckets. It's going to be a little hard to find exactly the right color to use as the base color.
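The correlation-style scoring can be sketched as an inverse-distance similarity against the reference orange (the distance measure and cutoff here are illustrative assumptions, not my actual code):

```python
import math

# Reference orange in (H, S, V), as described above.
TARGET = (30.0, 100.0, 100.0)

def similarity(hsv, target=TARGET):
    """Inverse-distance similarity in HSV space, in (0, 1];
    1.0 means an exact match with the reference color."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(hsv, target)))
    return 1.0 / (1.0 + d)

def keep_bucket_points(points, cutoff=0.05):
    """Keep only points scoring above the cutoff (illustrative value)."""
    return [p for p in points if similarity(p) >= cutoff]
```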

I'm going to manually search through some of the point clouds and find out what color the buckets actually are. I already have an idea from one small subset of points that included one bucket, but I need a larger sample of points to get better mean values for H, S, and V. Once I've done that, maybe it will be easier to pick out the buckets from the above messes of points.

Nov 02 2011

Baby-Steps: Taking 'Personal' Multicopter to a Whole New Level

My friend just sent me a link to a Gizmodo article about a truly personal multirotor aircraft: a 16 motor electric (li-po?) behemoth equipped with its own passenger/driver seat and designed by the e-volo team in Germany.  Check out the video on their website, I want one!

Not only could this provide an interesting platform for the personal remote sensing we are interested in with Ecosynth - but I can only imagine the thrill of skimming above the tree tops, getting a truly birds-eye view of the canopy.

The future is so cool.

So Garrett and Nisarg...next lab project?

Image credit: http://www.e-volo.com/Prototype_files/e-volo_IMGP2420.jpg

Nov 01 2011


I'm still working on trying to pick out the orange points in the point cloud, and on removing the problem of extraneous "orange" points that don't actually correspond to the buckets placed in the field. The buckets are spread out far enough that the volume I search for points in can be pretty large, almost the size of the problematic building. If there are only a few points within a certain radius, this could suggest that there is a bucket there, so I'm going to restrict the number of points per unit volume to be less than 5 or 10.
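A sketch of that density filter (the cell size and threshold are illustrative; a real implementation might use a k-d tree radius query instead of voxel bins): bin candidate points into coarse cells and keep only the cells holding fewer than the maximum, on the idea that a bucket shows up as a small, isolated cluster while a building produces a dense blob.

```python
from collections import defaultdict

# Sketch: voxel-based density filter for candidate orange points.
# cell size and max_points are illustrative assumptions.
def sparse_clusters(points, cell=5.0, max_points=10):
    """Return the clusters (lists of points) from voxels that hold
    fewer than max_points candidates."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        voxels[key].append((x, y, z))
    return [pts for pts in voxels.values() if len(pts) < max_points]
```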

There seem to be a lot of things that could go wrong with this approach, and I was told a different color space might be less messy. Besides RGB, there are a number of other color spaces, and HSV might be a good one to look into. HSV is better because it defines color relationships the way the human eye does. RGB does not, which could be why it's hard to pick out one particular color.
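Python's standard library can already do the conversion; colorsys works on floats in [0, 1], so 8-bit channels need scaling first (a small sketch):

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """r, g, b in 0..255 -> (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h * 360.0, s * 100.0, v * 100.0)

# A saturated orange like (255, 128, 0) comes out near hue 30 degrees.
```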


Nov 01 2011

Personal remote sensing goes live: Mapping with Ardupilot

Folks all over are waking up to the fact that remote sensing is now something you really should try at home!  Today DIYDrones published a fine example of homebrew 3D mapping using an RC plane, a regular camera, and computer vision software: hypr3d (one I've never heard of).  Hello Jonathan!


PS: I'd be glad to pay for a 3D print of our best Ecosynth. hypr3D can do it, and so can landprint.com.