Date of Award

12-31-2017

Document Type

Open Access Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Marc Pomplun

Second Advisor

Dan A. Simovici

Third Advisor

Craig Yu

Abstract

Eye trackers are used for measuring a person's eye movements, for example, while reading text on a screen. This information can be used by scientists to study the human visual system during object recognition or text comprehension. Moreover, engineers can build gaze-controlled interfaces that trigger a variety of actions when the user looks at pre-specified icons on a screen. In my research, I studied how gaze-contingent displays can be used to enhance the information that is provided by texts. First, I implemented a simple scripting language that allows even users without programming experience to set up gaze-contingent text displays. The language allows users to display any text on the screen and to define keywords that trigger actions when the user looks at them. These actions are either a specific sound file being played or a specific bitmap image being displayed at a given position on the screen. Second, using this scripting language, I conducted an experiment with 15 users. They saw displays of 20 written words, which they had to memorize. In a control condition, the display was static. In a second condition, whenever they looked at a word, that word was spoken; in a third condition, an image associated with the word was shown; and in a fourth condition, both effects occurred at the same time. Surprisingly, memory performance was reduced by all gaze-contingent effects, whereas subjects believed that the image condition in particular was helpful for memorization. The results suggest that gaze-contingent text enhancement is appreciated by its users but, instead of presenting identical information in different forms, should provide additional information related to the attended words.
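The core mechanism described above, keywords on screen firing sound or image actions when fixated, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the `Keyword` record, the bounding-box hit test, and the action names are all assumptions made for illustration.

```python
# Hypothetical sketch of a gaze-contingent keyword trigger (not the thesis code):
# each keyword occupies a screen region and may carry a sound and/or an image action.

from dataclasses import dataclass
from typing import Optional, List, Tuple

@dataclass
class Keyword:
    word: str                      # the on-screen keyword
    box: Tuple[int, int, int, int] # bounding box (x1, y1, x2, y2) in pixels
    sound: Optional[str] = None    # sound file played when the word is fixated
    image: Optional[str] = None    # bitmap shown when the word is fixated

def triggered_actions(gaze: Tuple[int, int],
                      keywords: List[Keyword]) -> List[Tuple[str, str]]:
    """Return the actions fired by a single gaze sample (x, y)."""
    x, y = gaze
    actions = []
    for kw in keywords:
        x1, y1, x2, y2 = kw.box
        if x1 <= x <= x2 and y1 <= y <= y2:   # gaze falls on this keyword
            if kw.sound:
                actions.append(("play_sound", kw.sound))
            if kw.image:
                actions.append(("show_image", kw.image))
    return actions

keywords = [
    Keyword("apple", (100, 50, 180, 80), sound="apple.wav", image="apple.bmp"),
    Keyword("house", (220, 50, 300, 80), sound="house.wav"),
]

# A fixation inside the "apple" box fires both its sound and its image action.
print(triggered_actions((140, 65), keywords))
```

In the experiment's four conditions this corresponds to leaving both fields unset (static control), setting only `sound`, only `image`, or both.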
