Uh, Joozey, nice! But how is this supposed to work out? What is the player doing, or supposed to do, smile? I'm just curious.

I finished the Gabor Wavelet Transform, which is basically the same as above, but parallelized and for a whole family of wavelets applied to one image. The goal is to find approximate positions of the features in an image. Today it's Tiffany's turn:



The upper section shows the filters used, which represent differently oriented edges at different scales (broader and thinner edges): the smaller kernels capture detail structures, while the bigger kernels capture larger changes in the underlying image.
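Such a filter family can be sketched like this: a complex plane wave under a Gaussian envelope, generated at a few scales and orientations. This is a minimal NumPy sketch, not my actual implementation; the function name and parameter choices are illustrative.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: a plane wave along angle `theta` under a
    Gaussian envelope. Small wavelength/sigma give thin kernels for fine
    edges; larger values capture broader structures."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the wave oscillates along theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    wave = np.exp(1j * 2.0 * np.pi * xr / wavelength)
    return envelope * wave

# a small family: 2 scales x 4 orientations
bank = [gabor_kernel(size=21, wavelength=w, theta=t, sigma=w / 2.0)
        for w in (4.0, 8.0)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
```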

The second section shows the complex responses re + im * i of the image convolved with the complex kernels (of which only the real parts are shown).
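One common way to get these complex responses is an FFT-based convolution, since the kernels are large. A hedged sketch (again NumPy, not the actual code; the impulse image at the end is just a sanity check):

```python
import numpy as np

def convolve_complex(image, kernel):
    """Circular FFT convolution of a real image with a complex kernel;
    the result keeps both real and imaginary parts (re + im*i)."""
    h, w = image.shape
    kh, kw = kernel.shape
    F_img = np.fft.fft2(image)
    # zero-pad the kernel to image size for frequency-domain multiplication
    F_ker = np.fft.fft2(kernel, s=(h, w))
    response = np.fft.ifft2(F_img * F_ker)
    # shift so the response aligns with the kernel center
    return np.roll(response, (-(kh // 2), -(kw // 2)), axis=(0, 1))

# sanity check: convolving an impulse reproduces the kernel at that spot
img = np.zeros((16, 16)); img[8, 8] = 1.0
ker = np.exp(1j * np.linspace(0.0, 1.0, 25)).reshape(5, 5)
resp = convolve_complex(img, ker)
```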

The third section shows the magnitude sqrt(re*re + im*im) and the orientation atan2(im, re) of the convolution results. The magnitudes form a kind of likelihood map in which each pixel stores its response to the filter: the higher the response, the more probable it is to find that feature at that position. For example, sharp borders like the eyelids, the eyebrows and the wrinkles between the nose and the mouth generate strong responses for the thin horizontally oriented wavelets; this doesn't happen, though, for the broader versions, which capture bigger structures like fingers, the arm, hair strands and the like.

The orientation map encodes, for each pixel, the orientation of the image structure at that very pixel, relative to the given orientation of the wavelet. These relative angles are very useful in the actual face detection step, when accurate landmarks of the face are to be found in the image.