Recently I ran into a memory problem running large point cloud arrays through Python and Numpy. I quickly determined that I was asking Numpy to work on massive arrays that exceeded the limits of a 32-bit Python process on Windows. My workaround was to subtract a constant offset from the UTM X-Y coordinates so the remaining digits could be stored as 32-bit floating point values without losing precision, then add the offset back at the end. Basically, I translated the X-Y coordinates (352845.49 4346713.91 --> 2845.49 6713.91), ran the computation, and translated them back to the original UTM values. This worked, but I wanted to overcome the 32-bit limit in Python itself.
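The offset trick can be sketched roughly like this (the point values and the round-to-10000 offset choice are illustrative assumptions, not the original code):

```python
import numpy as np

# Hypothetical UTM point cloud; float64 coordinates are 8 bytes each,
# so huge clouds can exhaust a 32-bit process's address space.
points = np.array([[352845.49, 4346713.91],
                   [352846.12, 4346714.05]], dtype=np.float64)

# Subtract a per-column offset (rounded down to a multiple of 10000 here)
# so the remaining digits fit in float32 without losing cm-level precision.
offset = np.floor(points.min(axis=0) / 10000.0) * 10000.0  # e.g. [350000, 4340000]
local = (points - offset).astype(np.float32)  # half the memory per coordinate

# ... run the heavy computation on the small local coordinates ...

# Restore full UTM coordinates at the end.
restored = local.astype(np.float64) + offset
```

Float32 carries roughly seven significant decimal digits, so a four-digit local coordinate with two decimal places survives the round trip intact.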
There are unofficial 64-bit builds available here, but I wanted to try something more established. I set up one of our machines to dual-boot Windows 7 and 64-bit Ubuntu 11.10 and compiled the Python, Scipy, and Numpy libraries in Ubuntu. Because they are compiled against the 64-bit OS, these builds are inherently 64-bit, so there are no problems addressing large arrays of data. Back to work!
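A quick sanity check that a freshly compiled interpreter really is 64-bit (my addition, not from the original setup notes):

```python
import struct
import sys

# On a 64-bit build, pointers are 8 bytes and sys.maxsize is 2**63 - 1;
# a 32-bit build reports 4 bytes and 2**31 - 1.
bits = struct.calcsize("P") * 8
print("%d-bit Python, sys.maxsize = %d" % (bits, sys.maxsize))
```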