Face Blind was my final project for Visualizing the Five Senses last semester. Somehow I never had a chance to document it, so to kick off my “blog catch-up” week I would like to start with this project.
Just as a quick recap: Face Blind is a visualization of the symptoms of prosopagnosia, and it calls for awareness of this perception disorder. Wikipedia’s prosopagnosia page gives this brief introduction:
Prosopagnosia (sometimes known as face blindness) is a disorder of face perception where the ability to recognize faces is impaired, while the ability to recognize other objects may be relatively intact. The term usually refers to a condition following acute brain damage, but recent evidence suggests that a congenital form of the disorder may exist. The specific brain area usually associated with prosopagnosia is the fusiform gyrus.
Few successful therapies have so far been developed for affected people, although individuals often learn to use ‘piecemeal’ or ‘feature by feature’ recognition strategies. This may involve secondary clues such as clothing, hair color, body shape, and voice. Because the face seems to function as an important identifying feature in memory, it can also be difficult for people with this condition to keep track of information about people, and socialize normally with others.
The difficulty face-blind people face is hard for the people around them to explain or understand, because we take face recognition so much for granted and seldom think about how faces are specially optimized in our memory system. Cecilia Burman used an analogy of stones on her website about prosopagnosia, and I found it a relatively effective way of keeping this explanation short. I decided to create a set of cloud masks for every student in the ITP community to illustrate this idea in a similar yet different way.
The clouds are generated from several different features of people’s faces and are overlaid on the faces to obscure them. The optimized face-recognition mechanism in our brain is not activated, since most of the facial features are masked out by the noise (clouds). However, because the clouds are generated from an abstraction of facial features, I could ideally still create a unique cloud mask for each person. In this way, we can experience the difficulty of face blindness, yet also try the “piecemeal” or “feature by feature” recognition strategy by observing the shape, texture and other features of the clouds.
My first difficulty in the process was creating the cloud texture programmatically. I first thought of using stock photos of cloud textures and trimming them for my use. After some research I decided to generate my clouds with Perlin noise, which seems to be a classic and efficient way of producing the texture. Since I’m not only looking for the randomness of the clouds, I also wanted control over the generated result through a set of configuration parameters (which would be the output of my face recognition/analysis tool).
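For anyone curious about the technique, here is a minimal sketch of a fractal (multi-octave) Perlin noise generator in Python with NumPy. It is not my original code, just an illustration of the idea; the parameters (`octaves`, `persistence`, `seed`) stand in for the kind of configuration knobs I mention above:

```python
import numpy as np

def perlin(shape, res, rng):
    """One octave of 2D Perlin (gradient) noise; shape must be divisible by res."""
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    # fractional coordinates of each pixel inside its lattice cell
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # random unit gradient vectors at the lattice points
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # dot products between offsets and corner gradients
    n00 = np.sum(np.dstack((grid[..., 0], grid[..., 1])) * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3  # fade curve
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def cloud_texture(shape=(128, 128), octaves=4, persistence=0.5, seed=0):
    """Sum several octaves of Perlin noise into a cloud-like texture in [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = np.zeros(shape)
    freq, amp = 1, 1.0
    for _ in range(octaves):
        noise += amp * perlin(shape, (freq * 4, freq * 4), rng)
        freq *= 2
        amp *= persistence
    return (noise - noise.min()) / (noise.max() - noise.min())
```

Summing octaves at doubling frequencies and halving amplitudes is what gives the texture its soft, cloud-like look; a single octave alone reads as smooth blobs.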
The first iteration and the later revision of the cloud generator looked like this. The clouds used in the final visualization were a composition of two layers of cloud highlights and another two layers of shadows, plus an adjusting filter to make sure the generated cloud covers at least 50% of the face area.
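The compositing and the coverage-adjusting filter can be sketched roughly like this (a hypothetical reconstruction, not the exact logic I shipped; the layers here are any grayscale arrays in [0, 1], e.g. Perlin noise):

```python
import numpy as np

def compose_mask(highlights, shadows, min_coverage=0.5):
    """Blend two highlight layers and two shadow layers into one alpha mask,
    then lower the opacity threshold until the mask covers at least
    `min_coverage` of the face area."""
    base = np.clip(sum(highlights) - 0.5 * sum(shadows), 0.0, 1.0)
    threshold = 0.6
    alpha = np.where(base >= threshold, base, 0.0)
    # the "adjusting filter": relax the cutoff until coverage is sufficient
    while alpha.astype(bool).mean() < min_coverage and threshold > 0:
        threshold -= 0.05
        alpha = np.where(base >= threshold, base, 0.0)
    return alpha
```

The subtraction of the shadow layers carves darker gaps into the highlights, and the loop guarantees the cloud is dense enough to actually hide the face.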
Since I’m not building a sophisticated computer-vision robot that watches your door or builds you an iron armor, my goal for the facial recognition program was simply to extract certain features from the face: the aspect ratio of the face shape, the positions of the eyes, nose and mouth, and whether the person is wearing glasses.
In fact it worked pretty well. Click on the thumbnails below to view larger pictures.
- Rectangle on the top left indicates the aspect ratio of the face shape;
- Red stripe – position of the eyes; brown if wearing glasses;
- Lower stripes – positions of the nose/mouth, not always accurate;
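To give an idea of the shape of the data, here is a small sketch of how detector output could be reduced to the features listed above. The bounding boxes are `(x, y, w, h)` tuples such as those returned by OpenCV’s Haar cascades; the record and function names are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    aspect_ratio: float  # face width / height
    eye_y: float         # vertical eye position, relative to face box (0 = top)
    mouth_y: float       # vertical mouth position, same scale
    glasses: bool

def features_from_boxes(face, eyes, mouth, glasses=False):
    """Reduce detector bounding boxes to the handful of features that
    parameterize a cloud. All boxes are (x, y, w, h)."""
    fx, fy, fw, fh = face
    # average the vertical centers of the detected eye boxes
    eye_y = (sum(y + h / 2 for _, y, _, h in eyes) / len(eyes) - fy) / fh
    _, my, _, mh = mouth
    return FaceFeatures(
        aspect_ratio=fw / fh,
        eye_y=eye_y,
        mouth_y=(my + mh / 2 - fy) / fh,
        glasses=glasses,
    )
```

Normalizing positions to the face box (rather than the full image) is what makes the resulting cloud parameters comparable across headshots of different sizes.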
After the preparation in the previous two steps, I tried to develop a showcase for the visualization. A performance problem emerged as I was putting everything together: the realtime rendering of the cloud for each individual took about 3-4 seconds on average, creating an unacceptable latency in the viewing experience. Due to the shortage of time, I pre-rendered all the graphics beforehand. I cannot argue this is the best way to present the final product; it might be better to cache the whole cloud space to optimize the calculation time instead of pre-rendering. Anyway, this is what I presented at the final.
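The caching alternative could look something like this sketch: quantize the feature parameters so that near-identical faces share a key, and memoize the expensive render behind that key (the `render_cloud` stub stands in for the 3-4 second Perlin render):

```python
import functools

def render_cloud(params):
    """Stand-in for the expensive cloud render."""
    return hash(params)  # placeholder result

@functools.lru_cache(maxsize=512)
def cached_render(key):
    return render_cloud(key)

def cloud_for(params, step=0.05):
    # Snap each parameter to a coarse grid so slightly different
    # feature vectors map to the same cache entry.
    key = tuple(round(p / step) * step for p in params)
    return cached_render(key)
```

With a coarse enough `step`, the "cloud space" is effectively finite, so after a warm-up period most lookups are cache hits rather than fresh renders.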
See if you could recognize anyone. Click on the thumbnail for higher resolution. All headshot images courtesy of Social Genius. If you’re interested in getting your “cloud” just drop me a message at leejayxia [at] gmail.com.