We attempted to collect a batch of data last weekend but ran into a new challenge with acquiring aerial images. We control the plane over a 2.4 GHz wireless link, which is the same band as our video transmission. The video transmitter interfered with the control link a few times, causing us to lose control of the aircraft, so we decided it was safest not to fly.
To address this, we are moving the airplane's control link to 72 MHz, which should eliminate the interference. On the upside, the video has come through very clearly in all of our tests so far.
We have written a program that captures the video data and saves it to disk using OpenCV, and we are working on a more advanced interface that renders the video in OpenGL.
The weather should be fine for a flight this weekend. If the flight is delayed for any reason, we will lay the targets out and photograph them from a tall building on campus to simulate the aerial view, so we can start working with the images.
The first thing we plan to do, after labeling the images, is compute L*a*b* histograms of the targets and see what observations we can make from them. For actually segmenting the targets in a real image, we will compare a simple sliding-window approach using color histograms against a more complex saliency-based approach that we have already implemented to run on the GPU.