Challenges in Multi-Modal and Context-Aware UI (the 5 minute version)
11/18/2018
Ken Fishkin; SoftBook Press; fishkin@softbook.com
Multi-Modal
- I'm using multiple output media (audio, visual, gestural, stance, posture, eye gaze, etc.).
- Challenge: apply this same fluidity, range, and artful redundancy/overlap to computer input techniques.
Context-Aware
- Where am I? What room? What noise level? What lighting level? What elevation?
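The location questions above amount to a small bundle of sensed attributes. A minimal sketch of such a context record (all field names and the 60 dB threshold are illustrative, not from the talk) might look like:

```python
from dataclasses import dataclass

@dataclass
class LocationContext:
    """One snapshot of 'where am I?' context (illustrative fields)."""
    room: str               # e.g. "conference room 3"
    noise_level_db: float   # ambient noise, dB SPL
    light_level_lux: float  # ambient illumination, lux
    elevation_m: float      # elevation above sea level, meters

    def is_quiet(self) -> bool:
        # A context-aware device might prefer audio output below
        # this (assumed) threshold and switch to visual output above it.
        return self.noise_level_db < 60.0

ctx = LocationContext(room="office", noise_level_db=45.0,
                      light_level_lux=300.0, elevation_m=12.0)
print(ctx.is_quiet())  # a quiet office -> True
```

A real device would populate such a record from its sensors and re-evaluate it as the user moves between rooms.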
Context-Aware (2)
- Who am I? (biometrics: best tailoring, extreme interfaces)
- Who else is nearby?
Context-Aware (3)
- What is around me? Other devices (IR/RF communication networks), true plug-n-play (Handspring).
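The "true plug-n-play" discovery that slide imagines could be sketched as a tiny announce/reply exchange over a local IR/RF link. The message format and field names here are hypothetical, purely to illustrate the handshake:

```python
import json

def make_announcement(device_id: str, services: list) -> bytes:
    """Frame a device broadcasts when it enters IR/RF range."""
    return json.dumps({"type": "HELLO",
                       "id": device_id,
                       "services": services}).encode()

def handle_announcement(frame: bytes, wanted: str):
    """Peer side: parse an announcement and return the sender's id
    if it offers a service we want, else None."""
    msg = json.loads(frame.decode())
    if msg.get("type") == "HELLO" and wanted in msg.get("services", []):
        return msg["id"]
    return None

frame = make_announcement("pda-17", ["print", "beam-file"])
print(handle_announcement(frame, "print"))  # -> pda-17
```

The point of the sketch is that no prior configuration is needed: a device that hears the broadcast can immediately decide whether the newcomer is useful to it.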
Context-Aware (4)
- How do you know what I mean? The clutching problem.
- How far back does context apply? This affects all the other issues (what, when, etc.).
Two Modest Proposals
- Get 20 Itsys (or similar technology, e.g. Lego Mindstorms) and "embrace and extend" them.
- Study sign language.
Hmmm….
When two fluent speakers of North American Indian Sign Language communicate:
- 60% redundant
- 30% augmenting
- 10% disjoint!