Visual attention analysis using eyetracker data
Little research has been done on how people visually search through a set of images, such as those returned by an image search engine. By investigating the properties of this search behaviour, we might be able to present the images in a different manner that eases the user's search for the image he or she is looking for. The work was performed at Chiba University under the supervision of Norimichi Tsumura and Reiner Lenz.
I created an experimental platform that first showed a target image and then a 7 × 4 grid of images in which the user's task was to locate the target. The experiment data was recorded with a NAC EMR-8B eye tracker, which saved the data as both a video and a serial data stream. The data was later used to extract characteristics for different image sets, such as how the eye fixates and how different image sets affect the scan.
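The thesis does not specify here how fixations were extracted from the recorded gaze stream, but a common approach is dispersion-based fixation detection: a run of gaze samples is treated as one fixation while the samples stay spatially compact. The sketch below illustrates this idea; the sample format, dispersion threshold, and minimum-duration parameter are all illustrative assumptions, not the method actually used with the EMR-8B data.

```python
def detect_fixations(samples, max_dispersion=30.0, min_samples=6):
    """Dispersion-threshold fixation detection (illustrative sketch).

    samples: list of (x, y) gaze points recorded at a fixed sampling rate.
    Returns a list of (centroid_x, centroid_y, n_samples) fixations.
    """
    fixations = []
    window = []
    for pt in samples:
        window.append(pt)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion: horizontal spread plus vertical spread of the window.
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # The window is no longer compact; if the compact part lasted
            # long enough, record it as a fixation at its centroid.
            if len(window) - 1 >= min_samples:
                done = window[:-1]
                cx = sum(p[0] for p in done) / len(done)
                cy = sum(p[1] for p in done) / len(done)
                fixations.append((cx, cy, len(done)))
            window = [pt]  # start a new window at the outlying sample
    # Flush a trailing fixation at the end of the stream.
    if len(window) >= min_samples:
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy, len(window)))
    return fixations
```

With two compact clusters of samples separated by a large jump, the function returns two fixations, one per cluster, which is the kind of summary data needed to study how fixation position depends on the previous fixation.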
The initial location where the user started his or her search depended on where the user had previously been fixating. It was also more probable that subsequent fixations were placed in close proximity to the previous fixation. My results also show that the search task was slightly faster when images were placed with high contrast between neighboring images, i.e. dark images next to bright ones.