We are starting to implement the methods found in a paper we posted a couple weeks ago, about invariant pattern recognition.
The paper outlines its methods as follows.
"1. Normalize the pattern of size n×n so that it is translation- and scale-invariant.
2. Discard all those pixels that are outside the surrounding circle with center (n/2,n/2) and radius n/2.
3. Project the pattern in 2n different orientations (θ) to get the radon transform coefficients.
4. Perform a 1D dual-tree complex wavelet transform on the radon coefficients along the radial direction.
5. Conduct a 1D Fourier transform on the resultant coefficients along the angle direction and get the Fourier spectrum magnitude.
6. Save these features into the feature database."
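The paper does not spell out how the normalization in step 1 is done, so as a starting point here is a minimal sketch of steps 1 and 2 in numpy, using one common choice for translation invariance (rolling the pattern's centroid onto the image center); scale normalization is omitted, and the function name `normalize_and_mask` is ours, not the paper's:

```python
import numpy as np

def normalize_and_mask(img):
    """Sketch of steps 1-2: center the pattern, then zero out pixels
    outside the circle with center (n/2, n/2) and radius n/2."""
    n = img.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    total = img.sum()
    # translation normalization (assumption): shift the centroid of the
    # pattern onto the center pixel with a cyclic roll
    cy = (ys * img).sum() / total
    cx = (xs * img).sum() / total
    shifted = np.roll(img, (round(n / 2 - cy), round(n / 2 - cx)), axis=(0, 1))
    # step 2: discard everything outside the inscribed circle
    mask = (xs - n / 2) ** 2 + (ys - n / 2) ** 2 <= (n / 2) ** 2
    return np.where(mask, shifted, 0.0)
```

A cyclic roll is crude (mass shifted off one edge wraps around to the other), but for patterns already well inside the circle it behaves like a plain translation.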
I found code online that does a radon transform and returns its results in a fairly usable manner here. It only calculates 8 angles, so the data isn't as precise as we would like; we will likely modify it to use far more angles, then port it to CUDA to run on the GPU.
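Since we plan to rewrite the angle count anyway, here is a hedged sketch of what a radon transform parameterized by the number of angles could look like (this is our own naive nearest-neighbour version, not the code we found online): for each angle, inverse-rotate the sampling grid about the image center and sum along columns. Each loop iteration is independent, which is also what makes the eventual CUDA port attractive.

```python
import numpy as np

def radon_transform(img, n_angles):
    """Naive Radon transform: one sinogram row per angle in [0, pi)."""
    n = img.shape[0]
    c = (n - 1) / 2.0                      # rotation center
    ys, xs = np.mgrid[0:n, 0:n]
    # zero out pixels outside the inscribed circle (paper step 2)
    mask = (xs - c) ** 2 + (ys - c) ** 2 <= (n / 2.0) ** 2
    img = np.where(mask, img, 0.0)
    sinogram = np.zeros((n_angles, n))
    for i, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        cos_t, sin_t = np.cos(theta), np.sin(theta)
        # inverse-rotate the sample coordinates about the center
        xr = cos_t * (xs - c) - sin_t * (ys - c) + c
        yr = sin_t * (xs - c) + cos_t * (ys - c) + c
        xi, yi = np.round(xr).astype(int), np.round(yr).astype(int)
        inside = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        rotated = np.zeros_like(img, dtype=float)
        rotated[inside] = img[yi[inside], xi[inside]]
        # each column sum approximates the line integral at one offset
        sinogram[i] = rotated.sum(axis=0)
    return sinogram
```

For the paper's setup we would call this with `n_angles = 2 * n`; nearest-neighbour sampling is the crudest interpolation choice and a real implementation would likely use bilinear sampling instead.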
We have gotten to step 3, though we are not yet projecting in as many orientations as our source paper calls for, so this is a modification we will need to make to the code. Here are some preliminary results. Note that all pixels in the gray area are discarded, the angle is displayed on the vertical axis, and the translation is displayed on the horizontal axis. I discarded 40% of the translations at the far left and far right because they carried no worthwhile information.
As you can see, with this information the B and the A appear to be identical. It is possible that putting these results through the last two steps, or analyzing the specific numbers, would distinguish the two. I think the biggest problem here is the small number of orientations in the radon transform; that is a limitation of the current implementation, not of the theory, and it's something we can fix fairly easily.
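For what it's worth, step 5 on its own is tiny and is also the step that buys rotation invariance: rotating the input pattern just cyclically shifts the sinogram rows, and the Fourier magnitude along the angle axis is unchanged by a cyclic shift. A sketch (applied here directly to a sinogram, skipping the wavelet step 4):

```python
import numpy as np

def angular_fft_magnitude(sinogram):
    """Step 5: 1-D FFT along the angle axis, keeping only the magnitude.
    The magnitude is invariant to cyclic shifts of the rows, i.e. to
    rotations of the original pattern."""
    return np.abs(np.fft.fft(sinogram, axis=0))
```

So even once we crank up the number of orientations, two patterns that are rotations of each other should still map to the same feature vector.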
In unrelated news, I hate the blogger posting system.