Tuesday, January 25, 2011

Paper Reading #3: Recognizing shapes and gestures using sound as feedback

Comments
Luke Roberts - http://lroberts-tamuchi.blogspot.com/2011/01/paper-reading-3-robotany-breeze.html
Shena Hoffmann - http://csce436-hoffmann.blogspot.com/2011/01/paper-reading-3-manual-deskterity.html

Reference Information
Title: Recognizing shapes and gestures using sound as feedback
Author: Javier Sanchez
Presentation Venue: CHI 2010: 28th ACM Conference on Human Factors in Computing Systems; April 10-15, 2010; Atlanta, GA, USA

Summary
This paper describes a technique for recognizing shapes and gestures through sound as feedback, as the title suggests. The researcher, Javier Sanchez, begins the paper with several real-world examples of communicating information through nonspeech audio, including the Acoustic Parking System and the Geiger counter.

The system Sanchez has developed works with any of the common pointing devices, including a mouse or a pen tablet. The user explores the screen with the device and knows they are nearing a shape when a sound is generated. As the user gets closer to the curve, the sound grows louder, and its pitch and timbre change depending on how the pen moves relative to the shape, giving the user a spatial sense of its contour. The paper also describes the programming environment used, MAX/MSP, and the parametric curves used to define the shapes.
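To get a feel for the mapping, here is a rough sketch of the general idea in Python. This is not the author's MAX/MSP patch; the circle used as the test shape, the distance-to-loudness and parameter-to-pitch mappings, and all of the names here are just my own illustration.

```python
# Rough sketch (my own, not the paper's implementation): map the distance from
# a pointer position to a parametric curve onto sound parameters. Closer to the
# curve -> louder; the curve parameter at the nearest point -> pitch.
import numpy as np

def circle(t):
    """Parametric test shape: a unit circle, t in [0, 1)."""
    angle = 2 * np.pi * t
    return np.stack([np.cos(angle), np.sin(angle)], axis=-1)

def sound_params(pointer, curve=circle, n_samples=512,
                 max_dist=0.5, base_freq=220.0, freq_span=440.0):
    """Return (amplitude, frequency) for a 2-D pointer position.

    Amplitude grows from 0 to 1 as the pointer approaches the curve;
    frequency sweeps with the curve parameter of the nearest point, so
    moving the pen along the shape changes the pitch.
    """
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    pts = curve(t)                                   # (n_samples, 2)
    dists = np.linalg.norm(pts - np.asarray(pointer), axis=1)
    i = int(np.argmin(dists))
    amplitude = max(0.0, 1.0 - dists[i] / max_dist)  # silent beyond max_dist
    frequency = base_freq + freq_span * t[i]
    return amplitude, frequency

# Example: a pointer near the top of the circle is loud, pitched by position.
print(sound_params((0.05, 0.95)))   # high amplitude
print(sound_params((2.0, 2.0)))     # amplitude 0 (too far from the shape)
```

In this toy version, silence means the pen is far from the shape, and sweeping the pen along the contour sweeps the pitch, which is roughly the behavior the paper describes.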


Discussion
While the idea is an interesting one, I felt the paper was very rushed. There were a lot of grammar mistakes, which made it a little harder to follow, and some of the earlier pictures looked like they were hastily drawn in Paint. I also felt the researcher spent too much time describing other people's work with sonification in the introduction.

I did feel this is an interesting idea for people who cannot see, but I’m a little curious about how difficult it would be to understand what the different pitches mean. As for future work, the researcher mentions some applications that researchers are working on, but he did not give any details. I find the last idea about using these techniques in common tasks to be an interesting one. Perhaps it could be used to help someone navigate and interact with a user interface they cannot see.

An image in the paper showing what happens as the user moves the pen along a shape

5 comments:

  1. I agree that it was hard to follow. I appreciated, however, that in the introduction the author talked about many different aspects of how sound is used in our interactions. I thought it gave a good background on where sound is right now.

  2. Grammar errors in a presented paper are kind of surprising. You'd think he could find a proofreader...

    I think that, unless you are going to have either a very detailed, non-visual manual or tutorials from a person, it would be very difficult for a blind person to pick this up, barring *very* precise hardware. I'm a fan of intuitive interfaces, and I'm not sure how sold I am that one is feasible with just audio right now.

    On a somewhat related note, was there any discussion of merging this with tactile feedback?

  3. Nope, not that I can recall. The focus was totally on sound as the form of feedback.

  4. I'm surprised you noticed grammar mistakes. I did not look for that when I was reading the paper. I did, however, get the same impression. I wasn't sure where the paper was going, or why the research was necessary since solutions for visually impaired individuals already exist. Perhaps the novelty is the curvature recognition system.

  5. This is interesting, but I think that is about as far as it goes. Determining if someone is close or far away from an object on the screen is a pretty easy thing to do. It seems like this idea needs to be refined for other uses, rather than the ones that the author pointed out explicitly.
