We have moved! Please visit us at ANTHROECOLOGY.ORG. This website is for archival purposes only.


Mar 21 2012

Tree Mapping Technique

There have been many proposed methods for mapping the trees we have identified within our 25x25 meter grid. The one certainty we have decided on is that the grid must be sectioned into 5x5 meter cells before we can begin mapping. The picture on the left shows a method found in the field guide Methods for Establishment and Inventory of Permanent Plots. This method involves using geometry to determine the exact position of a tree, and we thought it could be more accurate and faster than other ideas. However, when we went to our forest to test it, we discovered that it was not only more tedious but may not improve accuracy by a reasonable amount, if at all. The problems arose when we needed to take measurements on uneven ground. It would involve three or more people, much instruction, and handfuls of equipment, so it was ineffective for our purposes. We plan on going on another test run before the week ends to try another method that will hopefully work for what we need.
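
For reference, the corner-distance geometry described above can be sketched as 2D circle-circle intersection: tape the distance from a tree to two known corner stakes of a cell, then solve for the tree's position. This is only a rough sketch of the idea, not the field guide's exact procedure:

```python
import math

def locate_tree(c1, c2, d1, d2):
    """Estimate a tree's (x, y) from taped distances d1, d2 to two known
    corner stakes c1, c2, via 2D circle-circle intersection."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)                  # baseline length between corners
    if d == 0 or d > d1 + d2:
        raise ValueError("circles do not intersect")
    a = (d1**2 - d2**2 + d**2) / (2 * d)    # distance along the baseline
    h = math.sqrt(max(d1**2 - a**2, 0.0))   # perpendicular offset
    mx, my = x1 + a * dx / d, y1 + a * dy / d
    # two mirror solutions, one on each side of the baseline
    return (mx + h * dy / d, my - h * dx / d), (mx - h * dy / d, my + h * dx / d)

# corners 5 m apart along x; tree taped at 3 m from one, 4 m from the other
sols = locate_tree((0, 0), (5, 0), 3, 4)
```

With a 5 m baseline and taped distances of 3 m and 4 m, the two mirror solutions are (1.8, -2.4) and (1.8, 2.4); the surveyor picks the one on the plot side of the baseline.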

References:

Dallmeier, F. (1992). Long-term monitoring of biological diversity in tropical forest areas: Methods for establishment and inventory of permanent plots. MAB Digest Series, 11. Paris: UNESCO.

Nov 30 2011

Georeferencing Code Updates

Continuing from my last post, I did the same analysis on the Herbert Run point cloud that was generated from spring 2011. It turned out that, at first, the set of GPS data was not ordered properly, so the spline function didn't work correctly. This yielded the following results:
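
For context, the ordering requirement comes from the spline fit itself: spline routines such as SciPy's expect strictly increasing x values, so unsorted GPS fixes break the interpolation. A minimal sketch (the sample values here are made up, not our actual pipeline data):

```python
import numpy as np
from scipy.interpolate import splrep, splev

# (timestamp_s, easting_m) GPS fixes -- note the out-of-order rows
gps = [
    (0.0, 100.0), (2.0, 104.0), (1.0, 102.0), (3.0, 106.0), (4.0, 108.0),
]
gps.sort(key=lambda fix: fix[0])   # order by timestamp before fitting

t = np.array([fix[0] for fix in gps])
x = np.array([fix[1] for fix in gps])

tck = splrep(t, x, k=3)            # cubic spline through the ordered fixes
x_interp = splev(2.5, tck)         # interpolated easting at t = 2.5 s
```

Without the sort, `splrep` receives non-monotonic timestamps and the resulting spline (and everything downstream, like the Helmert parameters) is garbage.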

The x-y-z axes show the orientation of the data. Ideally, this picture would show an untilted image, as if one were looking down on the campus perpendicularly. This point cloud was given an incorrect set of Helmert parameters, due to a poorly constructed spline of the GPS and camera data. This problem was fixed, and once I analyzed the data again, I got much better results.

This point cloud transformation was much better, now that the GPS points were in the correct order. The x and y axes appear to be close enough to where they should be, and it seems that we are looking down perpendicularly onto campus, but there is one glitch that this picture does not show. All of the z coordinates appear to have been inverted: the high points in the point cloud are actually the low points, and the low points in the cloud are the real high points. This is indicated by comparing the orange field buckets' positions in the point cloud against their actual positions in space when the pictures were taken.

These scatter plots are for the second attempt at transforming the point cloud. The first graph plots the X-values of the manually detected buckets in the point cloud against the actual GPS coordinates of those buckets in the field. The equation of the trend line for the x coordinates is y = 0.996x + 1398.7 with R-squared = 0.9995. The graph of the y-values is not shown, but is very similar to the first graph; the trend line for the buckets' y values is y = 1.0073x - 31820 with R-squared = 0.9994. The x and y graphs each show a strong correlation between the two data sets, and both slopes are very close to 1.

The second graph shown is for the estimated z coordinates of the buckets versus the GPS z coordinates. You can see a correlation between the two from the trend line, but the slope is negative: y = -1.0884x + 187.29 with R-squared = 0.9872. This negative slope seems to be tied to the fact that all of the point cloud data had inverted z coordinate values.
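
The trend lines and R-squared values above come from an ordinary least-squares fit, which is easy to reproduce. A minimal sketch using made-up bucket heights, not the actual field data:

```python
import numpy as np

def fit_line(est, gps):
    """Least-squares slope/intercept of gps vs. est, plus R-squared."""
    est, gps = np.asarray(est, float), np.asarray(gps, float)
    slope, intercept = np.polyfit(est, gps, 1)
    pred = slope * est + intercept
    ss_res = np.sum((gps - pred) ** 2)
    ss_tot = np.sum((gps - gps.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical bucket z values: the estimates run opposite to the GPS
# heights, mirroring the inverted-z behavior described above.
est_z = [1.0, 2.0, 3.0, 4.0]
gps_z = [52.0, 51.0, 50.1, 48.9]
slope, intercept, r2 = fit_line(est_z, gps_z)
```

On these sample values the slope comes out to -1.02 with R-squared above 0.99: a strong correlation, but with the telltale negative sign.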
Overall, this data is much, much better than the original result. We are currently trying to find a solution to the inverted z-axis, but the following is the first attempt to fix this problem.

When the Helmert parameters were compared to those from the original Herbert Run data set from Fall 2010, the fourth parameter, which is for scaling, turned out to be negative for the spring. We wanted to see how the transformed point cloud would react if we forced the scaling constant to be greater than zero. This change results in the following point cloud orientation:

This did exactly what we wanted for the z-axis: all the real-world high points became point cloud high points, and lows became lows. The obvious problem is that it inverted the x and y axes. This "solution" really did not solve much, since it caused the very problem it was attempting to fix, just in different axes. The correlations between the 3 sets of variables only changed in that the slopes of the trend lines flipped sign; the R-squared values did not change when the scale parameter was altered. Besides this, despite having the z-axis in the correct orientation, the data seems a little strange. The z coordinates fell in a range of about (-3, 7). I took the differences between the real GPS heights of the buckets and the calculated heights, and there appears to be a consistent offset between the two: the calculated data is about 50.7 units below the expected GPS heights, for each bucket.
I want to see what happens if I alter the applyHelmert code to multiply anything involving the z-axis by the absolute value of the scale parameter, while leaving the x and y multiplications alone. If we can keep the x and y axes as they were in the first attempt with ordered data, and take the z-axis orientation that comes from multiplying only the z components by the absolute value of the scale parameter, the point cloud should be oriented the correct way, just translated down too low by a constant amount (which is something that has not been explained yet).
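
The tweak could look something like the following. This is a hypothetical stand-in for the actual applyHelmert code, using a simplified small-angle 7-parameter transform (3 translations, 3 rotations, 1 scale), with only the z output multiplied by the absolute value of the scale:

```python
import numpy as np

def apply_helmert_abs_z(points, tx, ty, tz, scale, rx, ry, rz):
    """Simplified 7-parameter Helmert transform (small-angle rotation),
    using |scale| for the z output so a negative scale parameter cannot
    invert the vertical axis. Hypothetical stand-in for applyHelmert."""
    # small-angle linearized rotation matrix, a common Helmert form
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    rotated = points @ R.T
    out = np.empty_like(rotated)
    out[:, 0] = tx + scale * rotated[:, 0]        # x, y left alone
    out[:, 1] = ty + scale * rotated[:, 1]
    out[:, 2] = tz + abs(scale) * rotated[:, 2]   # keep z upright
    return out

pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
# with a negative scale, a plain Helmert transform would flip z;
# this variant keeps high points high and low points low
out = apply_helmert_abs_z(pts, 0, 0, 0, scale=-2.0, rx=0, ry=0, rz=0)
```

If this works, it would preserve the good x-y orientation from the negative fitted scale while un-inverting z, leaving only the constant vertical offset to explain.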

Nov 22 2011

Analyzing the Point Cloud Transformations

This graph represents the data for the Herbert Run site from October 11, 2010. I used ScanView to locate the exact coordinates of the orange buckets in the transformed point cloud that was created with the previously written Helmert code. The values on the X-axis represent the actual GPS values from the georeferencing in the x direction, where higher values are more western, I believe. The values on the Y-axis correspond to the calculated mean of the orange points I extracted with ScanView. The black line is the line of best fit of the data and has a slope of 0.9941, which is quite close to 1; a slope of 1 would indicate an exact correspondence between the two data sets. This is good in two ways: the slope is positive, so there's a positive correlation between the two data sets, and the slope is very close to 1, which means the correlation is strong. The graph for the Y values is very similar, with a positive slope of 1.0079. What makes this especially encouraging is the contrast with the results I got before this analysis, from the point cloud of a different data set.

This is for the knoll site from fall 2010. There is a negative correlation, and the slope is nowhere close to 1, so the transformation of this particular point cloud did not turn out well at all. It's possible that I made a mistake running the spline.py code to get the 7 Helmert parameters. The 4th parameter, which is for scaling, was negative, which doesn't seem right, but it also looked like the data wasn't rotated enough. I still have another data set to test, and once that is done I'm going to retry this data set to see if it was just a mistake I made.

A small note about the bucket search based on colors: some of the buckets were on top of blue boxes, which seemed to alter the color of the orange points; they looked fairly pink, which was not a color I was searching for. This could be a reason why some of the buckets did not register in my search. Also, Jonathan pointed out that some of the trees were starting to change color at this point, so that could be a small source of some of the extraneous points.
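
For illustration, a color search of this kind amounts to a per-point RGB threshold, and a pink-tinted bucket falls outside the orange window. The thresholds below are illustrative guesses, not the values actually used in the search:

```python
import numpy as np

def find_orange_points(rgb):
    """Boolean mask of 'orange-ish' points in an (N, 3) array of 0-255
    RGB values. Thresholds are illustrative, not the actual search values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return (r > 180) & (g > 60) & (g < 160) & (b < 100)

cloud = np.array([
    [230, 110,  40],   # orange bucket point: matches
    [240, 150, 170],   # pinkish point (bucket tinted by a blue box): missed
    [ 60, 120, 200],   # the blue box itself: rejected
])
mask = find_orange_points(cloud)
```

The pinkish row fails the blue-channel test even though its red and green values look bucket-like, which is consistent with the tinted buckets dropping out of the search.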

Jul 14 2011

Sub-centimeter positioning on mobile phones?

Just came across this today on Slashdot: "Sub-centimeter positioning coming to mobile phones": http://bit.ly/pIvQ0e.

Apparently this is based on a technique called “SLAM”. From Wikipedia: “Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map within an unknown environment (without a priori knowledge), or to update a map within a known environment (with a priori knowledge from a given map), while at the same time keeping track of their current location.”

I could imagine this becoming VERY interesting for high spatial resolution 3D scanning in Ecosynth, but maybe I am missing some potential limitation to this?

Your thoughts?

Mar 15 2011

"Simple" Pointcloud Georeferencing

I'm aware that we have multiple transform optimization algorithms of varying completeness in the pipeline, but I decided to try to figure out the simplest means of georeferencing a pointcloud last night. This is a crude, error-prone method that is only usable when flat ground can be identified. It performs better when the flat area is large in both the X and Y dimensions. A warning: ArcGIS took tens of seconds to display and classify my pointcloud, and minutes to spatially adjust it. If you shut it down, you may lose previous work, not just what you're currently doing. Plan on this taking a while, and multitask.

1. Place readily identifiable markers in your sample area.

2. Take GPS points of those markers using an accurate, WAAS-corrected signal.

3. Take photos.

4. Synth photos.

5. Denoise synth.

6. Convert those GPS points to shapefile format.

7. In Meshlab's Render menu, select the bounding box and labelled axes. Use the Normals, Curvatures & Orientation -> Transform: Rotate tool in Meshlab with the 'barycenter' option selected to rotate the synth until the flat ground is coplanar with the X-Y plane.

8. Export the pointcloud as a .ply with non-binary(plaintext) encoding.

9. Rename the .ply to .txt extension.

10. Open the .txt file in Notepad.

11. Replace the header information with space-delimited 'x y z r g b alpha' and save.

12. Open the .txt file in Excel as a space-delimited spreadsheet.

13. Save as a .csv file.

14. Open the .csv in your planning document in ArcMap, where you already have the GPS points open with a UTM coordinate system.

15. Use 'Add XY data' and use the X and Y columns.

16. Right click on the new 'Events' layer and export it as a new shapefile. Add it to your map.

17. Begin editing that new shapefile.

18. Symbolize the points by color or color ratios using the R, G, B columns and cross-reference manually with Meshlab in order to locate your markers.

19. In column alpha (which should have the default value 255), set the marker points to 1. Symbolize by alpha, unique categories, to make the markers stand out. Save your edits.

20. Write down the X and Y coordinates of each marker after finding them using 'select by attributes'.

21. Enable the 'Spatial Adjustment' extension, put a check next to the similarity feature, and set adjust data to all features in your pointcloud layer.

22. Place a new displacement link for each marker with the first end at your marker in the pointcloud, and the other end at the corresponding GPS marker in ArcGIS.

23. Hit the 'Adjust' button.

24. Save your edits and stop editing.

25. Optional: Use SQRT(X^2+Y^2) to determine the distance between two markers in your original coordinate system. Use the ruler to determine the distance between them in UTM. Using the field calculator, multiply the Z factor by the ratio of UTM distance to pointcloud distance.
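
As an aside, the .ply -> .txt -> .csv conversion steps above can be scripted instead of done by hand in Notepad and Excel. A minimal sketch, assuming an ASCII-encoded .ply whose vertex lines are 'x y z r g b alpha':

```python
import csv

def ply_to_csv(ply_path, csv_path):
    """Convert a plaintext (ASCII) .ply point cloud to a CSV with
    x,y,z,r,g,b,alpha columns, assuming that vertex line layout."""
    with open(ply_path) as src:
        lines = src.read().splitlines()
    # everything after 'end_header' is vertex data in an ASCII .ply
    body = lines[lines.index("end_header") + 1:]
    with open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["x", "y", "z", "r", "g", "b", "alpha"])
        for line in body:
            if line.strip():
                writer.writerow(line.split()[:7])
```

The header row matches the 'x y z r g b alpha' columns from step 11, so the 'Add XY data' step can pick up the X and Y columns directly.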