Nov 01 2011

Personal remote sensing goes live: Mapping with Ardupilot

Folks all over are waking up to the fact that remote sensing is now something you really should try at home!  Today DIYDrones published a fine example of homebrew 3D mapping using an RC plane, a regular camera, and computer vision software: hypr3d (one I’ve never heard of).  Hello Jonathan!

 

PS: I’d be glad to pay for a 3D print of our best Ecosynth- hypr3d can do it, and so can landprint.com

Oct 25 2011

CAO Dreaming

“Breakthrough technology enables 3D mapping of rainforests, tree by tree” - the latest news from the Carnegie Airborne Observatory (CAO)- but also old news: since about 2006, the CAO has been the most powerful 3D forest scanning system ever devised, and Greg Asner has continually improved it.

The CAO was the original inspiration behind Ecosynth.  In 2006/2007, I was on sabbatical at the Department of Global Ecology at the Carnegie Institution of Washington at Stanford, and my office was right next to Greg’s.  Though he was mostly in Hawaii getting the CAO up and running, he and his team at Stanford completely sold me on the idea that the future of ecologically relevant remote sensing was multispectral 3D scanning (or better- hyperspectral- but one must start somewhere!).

I coveted the CAO.   I wanted so much to use it to scan my research sites in China.  Our high-resolution ecological mapping efforts there had been so difficult and the 3D approach seemed to offer the chance to overcome so many of the challenges we faced. 

Yet it still seemed impossible to make it happen- gaining permission to fly a surveillance-grade remote sensing system over China?  It would have taken years to overcome the tremendous logistical and political obstacles.  So I changed my thinking…

What if we could fly over landscapes with a small hobbyist-grade remote-controlled aircraft carrying a tiny LiDAR and a camera?  Alas, no- LiDAR systems (and the high-grade GPS + IMU they require) are way too heavy, and will be for a long time.

Then I saw Photosynth, and I thought- maybe that approach to generating 3D scans from multiple photographs might allow us to scan landscapes on demand without major logistical hassles?  The answer is yes, and the result, translated into reality by Jonathan Dandois, is Ecosynth.

Can Ecosynth achieve capabilities similar to the CAO’s?  Our ultimate goal is to find out- and to make it cheap and accessible to all, as the first “personal” remote sensing system of the Anthropocene.

Aug 18 2011

KinectFusion builds 3D models in real time

From KinectFusion HQ:

“KinectFusion, a system that takes live depth data from a moving depth camera and in real-time creates high-quality 3D models. The system allows the user to scan a whole room and its contents within seconds. As the space is explored, new views of the scene and objects are revealed and these are fused into a single 3D model. The system continually tracks the 6DOF pose of the camera and rapidly builds a volumetric representation of arbitrary scenes.”
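The core trick, as I understand it, is a running weighted average of truncated signed distances stored in a voxel grid- each new depth frame nudges every nearby voxel's estimate of where the surface is. A rough sketch of that fusion step (heavily simplified from the real pipeline; pose tracking is assumed already done, and depth_lookup is a hypothetical helper that reads the measured depth for a camera-frame point):

    import numpy as np

    def fuse_depth_frame(tsdf, weights, voxel_centers, cam_pose,
                         depth_lookup, trunc=0.03):
        """One fusion step: blend a new depth frame into the voxel volume.

        tsdf, weights : per-voxel signed-distance and confidence arrays (N,)
        voxel_centers : (N, 3) world coordinates of voxel centers
        cam_pose      : 4x4 camera-to-world transform (from pose tracking)
        depth_lookup  : hypothetical helper returning the measured depth
                        along the ray through a camera-frame point
        """
        world_to_cam = np.linalg.inv(cam_pose)
        pts = voxel_centers @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]

        for i, p in enumerate(pts):
            if p[2] <= 0:                # voxel is behind the camera
                continue
            measured = depth_lookup(p)   # depth the sensor saw on this ray
            if measured is None:
                continue
            sdf = measured - p[2]        # + in front of surface, - behind
            if sdf < -trunc:
                continue                 # occluded: no information here
            d = min(1.0, sdf / trunc)    # truncate the signed distance
            # Running weighted average fuses this frame with earlier ones.
            tsdf[i] = (tsdf[i] * weights[i] + d) / (weights[i] + 1)
            weights[i] += 1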

It would probably only work at night or under low lighting- but I wonder what this would do in a forest understory?  It would attempt to model a solid surface- but would that be useful?

Hmmm… Definitely deserves to be experimented with!

Aug 17 2011

Kinect for ArcGlobe

Yet another reason to get a Kinect - we can get more exercise while using ArcGIS!

According to this blog post- we can use a Kinect to navigate ArcGlobe:

The Applications Prototype Lab at Esri has just completed a prototype using a Kinect to navigate in ArcGlobe.

To fly forward, the user can raise their right hand. The display will navigate in the direction the right hand is pointing. We call this “superman navigation”. If the left hand is elevated, the display will pivot around a central location on the globe surface. And lastly, if both hands are raised, the screen will zoom in or out as the hands are moved together or apart.
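Translated into pseudocode, the gesture logic is appealingly simple. A rough sketch of how such a mapping might look (joint names and commands are invented for illustration- this is not Esri's actual API):

    import numpy as np

    def navigation_command(skeleton, prev_spread):
        """Turn tracked joint positions into a globe navigation command.

        skeleton    : dict of joint name -> np.array([x, y, z])
        prev_spread : hand separation from the previous frame (for zoom)
        """
        head = skeleton["head"]
        left, right = skeleton["left_hand"], skeleton["right_hand"]
        left_up, right_up = left[1] > head[1], right[1] > head[1]
        spread = np.linalg.norm(right - left)

        if left_up and right_up:
            # Both hands raised: apart zooms in, together zooms out.
            cmd = "zoom_in" if spread > prev_spread else "zoom_out"
        elif right_up:
            # "Superman navigation": fly where the right hand points.
            aim = (right - head) / np.linalg.norm(right - head)
            cmd = ("fly", aim)
        elif left_up:
            # Left hand raised: pivot around a point on the globe surface.
            cmd = "pivot"
        else:
            cmd = "idle"
        return cmd, spread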

http://blogs.esri.com/Dev/blogs/apl/archive/2011/08/10/Kinect-for-ArcGlobe.aspx

Shall we get one?

Aug 03 2011

Kinect 3D Scanning for Archeologists

As we’ve seen before, Kinect 3D scanning keeps getting more popular, including for outdoor work in the sciences: “Archaeologists Now Use Kinect to Build 3-D Models During Digs”.

 

There are still some clear and major issues with using the Kinect outdoors and for scanning forests- but maybe it is time to give this a try in the lab?

Jul 29 2011

Introducing "Vanga"

I work for REBIOMA - a joint project of UC Berkeley's Kremen Lab and the Wildlife Conservation Society, Madagascar. We develop and apply spatial tools for biodiversity conservation in Madagascar. For example, we work with a wide array of individuals and institutions to publish high-quality biodiversity occurrence data and species distribution models on our data portal - work that has helped to identify 4 million hectares of new protected areas.

Last week, I visited the Ecosynth team to build and practice flying what we're calling "Vanga" - a hexacopter that we will take to Madagascar in late 2011 to map forest cover and forest disturbance in the Makira and Masoala protected areas.

We're excited about the potential for low-cost, high-frequency forest monitoring in two and three dimensions. We will start by testing the capacity of the system for producing high-resolution 2D ortho-mosaics of selected field sites. We also hope to explore the 3D modeling capabilities - this has real potential for contributing to ongoing biomass measurements, and contributing to forest carbon inventories. Finally, we plan to evaluate the potential of this system as a tool to help communities adjacent to protected areas measure and monitor their forest resources.

Jul 14 2011

Sub-centimeter positioning on mobile phones?

Just came across this today at Slashdot: "Sub-centimeter positioning coming to mobile phones": http://bit.ly/pIvQ0e.

Apparently this is based on a technique called “SLAM”.  From Wikipedia: “Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map within an unknown environment (without a priori knowledge), or to update a map within a known environment (with a priori knowledge from a given map), while at the same time keeping track of their current location.”
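The key idea is that the pose and the map are estimated jointly, so each measurement corrects both. A toy one-dimensional sketch of that idea- nothing like a production phone implementation, just a Kalman filter over a state that holds the robot and a single landmark:

    import numpy as np

    # State s = [robot x, landmark m]. The robot moves by u each step and
    # measures the range to the landmark, z = m - x + noise. One Kalman
    # filter tracks both unknowns (and their correlation) at once.
    F = np.eye(2)                   # state transition (landmark is static)
    B = np.array([1.0, 0.0])        # control input moves only the robot
    H = np.array([[-1.0, 1.0]])     # measurement model: z = m - x
    Q = np.diag([0.05, 0.0])        # motion noise (robot only)
    R = np.array([[0.1]])           # range-sensor noise

    def slam_step(s, P, u, z):
        # Predict: apply odometry, inflate uncertainty.
        s = F @ s + B * u
        P = F @ P @ F.T + Q
        # Update: one range reading corrects robot AND landmark together.
        y = z - H @ s                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        s = s + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return s, P

    # Usage: s = np.array([0.0, 5.0]); P = np.diag([0.0, 100.0])
    # then, for each (u, z) reading: s, P = slam_step(s, P, u, z)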

I could imagine this becoming VERY interesting for high spatial resolution 3D scanning in Ecosynth- but maybe I am missing some potential limitation to this? 

Your thoughts?

Jul 09 2011

Image-based Tree Modeling

Can the geometry of trees be captured using computer vision and then used to create models of tree structure?  YES!  Super cool work described here at Ping Tan’s website at the National University of Singapore:

http://www.ece.nus.edu.sg/stfpage/eletp/Projects/ImageBasedModeling/

 

Still a long way to go before this will be useful for ecologists- but a huge step in the right direction!

YouTube version here…

Jun 28 2011

Automated terrestrial multispectral scanning

3D scanning just keeps getting better (but not cheaper!).

A post from Engadget: Topcon's IP-S2 Lite (~$300K) creates panoramic maps in 3D, spots every bump in the road (video) http://www.engadget.com/2011/06/28/topcons-ip-s2-lite-creates-panoramic-maps-in-3d-spots-every-bu/.

More from Topcon:

http://www.topconpositioning.com/products/mobile-mapping/ip-s2

http://global.topcon.com/news/20091204-4285.html

 

In China recently, we had the good fortune to collaborate in using a wonderful new ground-based (terrestrial) LiDAR scanner (TLS) from Riegl: the VZ-400, which fuses LiDAR scans with images acquired from a digital camera (~$140K). Pictured at left- graduate students of the Chinese Academy of Forestry with us in the field- literally!

May 23 2011

Geometry Matching for Coordinate Transform Computation Looks Promising

Our coordinate transform algorithm has given us encouraging output. By assuming that the camera and GPS data points all follow the same general geometry, we were able to interpolate and pick 100 points from both the camera and the GPS tracks that should theoretically be in the same geometric locations. Using these 100 points, we applied least squares to the Helmert coordinate transform equations to find the 7 unknown rotation, translation, and scaling parameters. We then used those parameters and the Helmert equations to transform our 100 camera points to match the GPS. Our data definitely appears similar in geometry, though the camera points are a bit off from the GPS: the average distance error is about 5.3 meters. This could potentially be corrected by picking a larger number of points from the splined data.
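For reference, one standard closed-form route to the same seven parameters (scale, three rotations, three translations) is the Kabsch/Umeyama similarity fit; a minimal sketch below, assuming matched (N, 3) camera and GPS arrays (our code solves the Helmert equations by least squares instead, but the results should be comparable):

    import numpy as np

    def similarity_transform(src, dst):
        """Fit dst ~ s * R @ src + t (7-parameter Helmert / similarity).

        src, dst : (N, 3) arrays of matched points (camera, GPS).
        Returns scale s, rotation R (3x3), translation t (3,).
        """
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d

        # SVD of the cross-covariance gives the best rotation (Kabsch).
        U, sig, Vt = np.linalg.svd(dst_c.T @ src_c)
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # guard vs. reflection
        R = U @ D @ Vt

        # Optimal scale and translation follow from the rotation (Umeyama).
        s = np.trace(np.diag(sig) @ D) / (src_c ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    # s, R, t = similarity_transform(cam_pts, gps_pts)
    # err = np.linalg.norm(gps_pts - (s * cam_pts @ R.T + t), axis=1).mean()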

We also used these parameters to transform the point cloud, though oddly enough it came out upside-down! In addition, we are getting errors when the order of the camera list is not synchronized with the GPS. If we can synchronize the first camera time with the first GPS data time, we could potentially reorganize the camera list so that it matches the GPS.
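That reordering might look something like this- a hypothetical sketch (invented names, not our actual code) that zeroes both clocks on their first records and pairs each camera frame with the nearest GPS fix:

    import bisect

    def match_cameras_to_gps(cam_times, gps_times):
        """Pair each camera timestamp with the nearest GPS fix in time.

        cam_times, gps_times : sorted timestamps from each log; both are
        zeroed on their first record so the two clocks share an origin.
        Returns one GPS index per camera frame.
        """
        cam = [t - cam_times[0] for t in cam_times]
        gps = [t - gps_times[0] for t in gps_times]

        matches = []
        for t in cam:
            i = bisect.bisect_left(gps, t)
            # Step back if the previous GPS fix is at least as close.
            if i > 0 and (i == len(gps) or t - gps[i - 1] <= gps[i] - t):
                i -= 1
            matches.append(i)
        return matches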

Though a bit more tweaking needs to be done to our code, this method of matching the camera and GPS points looks very promising.