Sunday, April 10, 2011

Paper Reading #21: Towards maximizing the accuracy of human-labeled sensor data

Comments
Evin Schuchardt
Luke Roberts

Reference Information
Title: Towards maximizing the accuracy of human-labeled sensor data
Authors: Stephanie L. Rosenthal and Anind K. Dey
Presentation Venue: IUI 2010: Proceedings of the 15th international conference on Intelligent user interfaces; February 7-10, 2010; Hong Kong, China

Summary
In this paper the researchers examine the impact that different types and amounts of information have on people's accuracy when they label sensor data. The main types of information given to labelers that the researchers explore are:
1. Different amounts of contextual information
2. High- and low-level explanations of the system's reasoning
3. The system's prediction
4. Requests for user feedback
5. The system's level of uncertainty

Image from paper: Users were asked questions about a task to help differentiate between tasks
To study the impact that the above types of information have on labelers, the researchers used the Wizard of Oz technique, presenting labelers with varying amounts of information and measuring how accurately they labeled. They also compared people labeling data they had not seen before with people labeling their own data.
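To make those conditions more concrete for myself, here is a rough, hypothetical sketch (not from the paper) of how a prompt shown to a labeler might combine the five kinds of information; the function name, wording, and example values below are all invented for illustration.

# Hypothetical sketch (not from the paper): assembling a labeling prompt that
# varies the five kinds of information the authors studied. All names, wording,
# and values here are invented for illustration only.

def build_prompt(activity_guess, confidence, context=None,
                 explanation="low", ask_feedback=False, show_uncertainty=False):
    """Return the text a labeler would see for one sensor event."""
    lines = ["Please label the activity for this sensor event."]

    # 1. Contextual information (e.g., time, location).
    if context:
        lines.append("Context: " + "; ".join(f"{k}={v}" for k, v in context.items()))

    # 2. High- vs. low-level explanation of the system's reasoning.
    if explanation == "high":
        lines.append("The system noticed repeated keyboard activity and an open "
                     "calendar event, which usually indicates this activity.")
    else:
        lines.append("The system based its guess on recent sensor readings.")

    # 3. The system's prediction.
    lines.append(f"System prediction: {activity_guess}")

    # 4. The system's level of uncertainty.
    if show_uncertainty:
        lines.append(f"Confidence: {confidence:.0%}")

    # 5. A request for feedback, prompting the labeler to reflect on the choice.
    if ask_feedback:
        lines.append("Why did you choose this label?")

    return "\n".join(lines)


if __name__ == "__main__":
    print(build_prompt(
        activity_guess="writing a report",
        confidence=0.72,
        context={"time": "14:05", "location": "office"},
        explanation="high",
        ask_feedback=True,
        show_uncertainty=True,
    ))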

After the study, they found that the five types of information had a positive effect on the labelers, because the extra information either told them more about the event or helped direct their thought processes. For example, when asked for feedback, some users changed their label to a better one as they thought more deeply about why they had labeled it the way they did. The researchers also found that whether or not the labeler was familiar with the data beforehand had no impact on the accuracy of the labels.

Discussion
After reading the first page of this paper, I just stopped and decided I would read it another day, because I had no idea what they were talking about at first: intelligent agents and data classification, say what? I think the only reason I caught on when I did was because someone had done a presentation on Amazon's Mechanical Turk, which is mentioned in this paper.

I didn't fully follow all of their studies or understand why they chose to do them. They jumped right into the topic, leaving someone like me, who knows little about this kind of work, confused. While I think it could be worthwhile to improve the accuracy of labelers, this paper did nothing to encourage the idea, in my opinion. For future work, the researchers mention focusing on other types of information besides the five given above, as well as running more long-term studies.

1 comment:

  1. I'm right there with you, Cindy. The paper was a bit hard to follow, and I think it is interesting that they did such a poor job with readability.
