Sunday, March 20, 2011

Paper Reading #16: Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines

Comments
Evin Schuchardt
Luke Roberts

Reference Information
Title: Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines
Authors: Erin Treacy Solovey, Audrey Girouard, Krysta Chauncey, Leanne M. Hirshfield, Angelo Sassaroli, Feng Zheng, Sergio Fantini, Robert J.K. Jacob
Presentation Venue: UIST 2009: 22nd annual ACM symposium on User interface software and technology; October 4-7, 2009; Victoria, British Columbia, Canada

Summary
This paper explores the idea of using functional near-infrared spectroscopy (fNIRS) in HCI laboratory settings. The researchers believe that with added information collected from brain scanning and imaging, researchers could improve their evaluation process and design interfaces that use cognitive state information as input.

The researchers describe several sources of noise that may be present during fNIRS measurement: head movement, facial movement, ambient light, ambient noise, muscle movement, respiration, and heartbeat. These potential sources of noise are the focus of the paper.

Through five experiments, the researchers test how problematic it is for fNIRS when the user moves his head, makes facial expressions, types on a keyboard, and uses a mouse. In each experiment the user performs the same cognitive task, recalling a 7-digit number, so that cognitive state information can be measured. The fNIRS apparatus consisted of two probes attached to the forehead.
Image taken from paper: The user is wearing an fNIRS probe


In the first experiment they verified that, based on the scan alone, they could identify the point at which the user performs the cognitive task. In the second experiment they tested keyboard input by having the user type randomly on the keyboard for fifteen seconds, rest for fifteen seconds, type for fifteen more seconds while the number appears, recite the number, and then rest again. They found that typing is an acceptable form of interaction when using fNIRS: while typing can be picked up by the scan, they can still identify when the user performs the cognitive task.

In the third experiment they tested mouse input. It was very similar to the second experiment except that, instead of typing, the user was required to move the mouse cursor into a square and click. The square would then move, and the user would move the cursor inside it and click again. They found that clicking is also acceptable, but only in controlled experiments where the user clicks during the resting periods as well. If the user does not click during the resting periods, the researchers cannot distinguish the resting points from the cognitive task points on the scan.

In the fourth experiment they tested head movement. The process was similar to the experiments already described, except that instead of typing or clicking, the user moved his head up and down. The results were similar to those for clicking: the user has to move his head both during resting periods and during the cognitive task for the researchers to identify the point of the cognitive task. In the final experiment they tested facial movement by having the user frown during the test. They found that frowning data can always be distinguished from non-frowning data, and that frowning should therefore be avoided during experiments.

At the end of the paper the researchers suggest ways to reduce interference with fNIRS, such as chin rests, filtering algorithms, and isolating caps.
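The paper does not spell out the filtering algorithms here, but the general idea of removing rhythmic physiological noise can be sketched with a simple moving-average filter. This is only an illustration of the concept, not the authors' method, and every signal parameter below (sampling rate, frequencies, amplitudes) is a made-up assumption:

```python
import numpy as np

def moving_average_filter(signal, window):
    """Smooth a signal with a simple moving average.

    One of many possible filtering approaches; illustrative only.
    """
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Synthetic example: a slow hemodynamic-like wave (0.1 Hz) plus
# heartbeat-like noise (~1 Hz), sampled at 10 Hz.
fs = 10                                        # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
hemodynamic = np.sin(2 * np.pi * 0.1 * t)      # signal of interest
heartbeat = 0.3 * np.sin(2 * np.pi * 1.0 * t)  # physiological noise
raw = hemodynamic + heartbeat

# A window spanning one heartbeat period (fs samples = 1 s) averages
# the 1 Hz component toward zero while barely touching the slow wave.
clean = moving_average_filter(raw, window=fs)
```

Real fNIRS pipelines would use more principled filters (e.g., band-pass designs), but the sketch shows why knowing a noise source's signature makes it removable.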

Discussion
While this paper was more of a preliminary one meant to direct future studies involving fNIRS, the material was presented well and the experiments were thoroughly explained. I think researchers interested in this area will find this paper useful as they incorporate fNIRS into HCI. I especially liked that they gave suggestions for the problem areas they discovered.

One area of future study mentioned in the paper is to run these experiments in more realistic settings, with users doing more realistic activities such as typing real words and sentences rather than typing randomly. The researchers also explain that if they could create a database of these different sources of noise and what each looks like on the scan, they could devise algorithms to help remove the noise.

5 comments:

  1. Yeah, I agree this sounds pretty preliminary, and they will really need to add in real tasks, because random actions and actually typing coherently will be quite different. This definitely sounds like a crowdsourcing type of project, where a bunch of people use it and then a database can easily be created from their interactions.

  2. Though it is a preliminary study, I think it is awesome that this seems to be a landmark study and most if not all research in this area could stem from this paper! So I would agree with you, Cindy, that researchers of this area will find this paper useful.

  3. So, what exactly is fNIRS? You write out the acronym, but give us no idea beyond that. Is that a holdover from the paper?

  4. It stands for Functional Near-infrared Spectroscopy.

  5. Dr. Laura Ann Petitto has uncovered key brain structures underlying early human language processing and, with brain imaging technology called functional Near-infrared Spectroscopy (fNIRS), she has tracked the typical and atypical development of these brain structures across the human lifespan (infants through adults).
