Screen:
Paul Aoki, Palo Alto Research Center
Palo Alto, California
www2.parc.com/csl/members/aoki
Paul Dourish, University of California at Irvine
Irvine, California
www.ics.uci.edu/~jpd
Scott Hudson, Carnegie Mellon University
Pittsburgh, Pennsylvania
www.cs.cmu.edu/~hudson
James Landay, University of California at Berkeley
Berkeley, California
www.cs.berkeley.edu/~landay
Brad A. Myers, Carnegie Mellon University
Pittsburgh, Pennsylvania
www.cs.cmu.edu/~bam
Terry Winograd, Stanford University
Stanford, California
hci.stanford.edu/~winograd
Speech:
Julia Hirschberg, Columbia University
New York, New York
www1.cs.columbia.edu/~julia
Elizabeth Shriberg, SRI International
Menlo Park, California
www-speech.sri.com/people/ees
Victor Zue, Massachusetts Institute of Technology
Cambridge, Massachusetts
www.sls.lcs.mit.edu/zue/zue.html
Alex Waibel, Carnegie Mellon University
Pittsburgh, Pennsylvania
www-2.cs.cmu.edu/afs/cs.cmu.edu/user/ahw/www/
Gesture/Multimodal:
Gregory Abowd, Georgia Institute of Technology
Atlanta, Georgia
www.cc.gatech.edu/fac/Gregory.Abowd
Justine Cassell, Massachusetts Institute of Technology
Cambridge, Massachusetts
web.media.mit.edu/~justine
Joe Paradiso, Massachusetts Institute of Technology
Cambridge, Massachusetts
web.media.mit.edu/~joep
Rajeev Sharma, Pennsylvania State University
University Park, Pennsylvania
www.cse.psu.edu/~rsharma
Mandayam A. Srinivasan, Massachusetts Institute of Technology
Cambridge, Massachusetts
rleweb.mit.edu/rlestaff/p-srin.htm
Matthew Turk, University of California at Santa Barbara
Santa Barbara, California
www.cs.ucsb.edu/~mturk
Neural:
Peter Fromherz, Max Planck Institute for Biochemistry
Martinsried, Germany
www.biochem.mpg.de/mnphys
Miguel Nicolelis, Duke University
Durham, North Carolina
www.neuro.duke.edu/Faculty/Nicolelis.htm
What to Look For
Existing input devices:
Lower-cost, more accurate gaze-tracking systems
Lower-cost, more accurate gesture recognition systems
Lower-cost, more accurate object recognition systems
Combined Input:
Fully integrated speech, gesture, and object recognition
Interfaces that tap human subtleties:
Speech recognition software that uses prosody
Systems that recognize conversational gestures
Systems that recognize basic emotions
Direct connections:
A monkey using a brain implant to consciously control a robot arm
A brain implant connected to thousands of neurons
A brain implant that restores control of a paralyzed limb