Human-Machine Reconfigurations: Plans and Situated Actions
Preface to 1st Edition
- European and Trukese navigators as caricatures of plans vs. situated action
- Interpretations of the contrast: cultural differences in purposeful action, or the nature of the activity and the navigator's level of expertise
- However planned, purposeful actions are inevitably situated actions
- Talk vs. walk: plans are also explanations of action, generated for communication
- The caricature of Western navigation is "reified in the design of intelligent machines"
Interactive Artifacts
Shared Understanding & Mutual Intelligibility
- Interpretation of action is the domain of the social studies
- Goal is to produce accounts of the significance of human action
- Part of this is the study of how individuals understand one another's actions
- Study how members of society accomplish the mutual intelligibility of action
- Relationship between observable behavior and the processes that make it meaningful
- A given behavior/action can carry an indefinite number of meanings/goals
- A given goal can be achieved through an indefinite number of behaviors
Understanding Mutual Intelligibility
- Relation between actions and the reasoning processes that make them meaningful
- In psychology: a cognitive focus on the formation of beliefs, desires, intentions, etc.
- In social studies: a focus on interpersonal (inter-agent) relationships and relations with the environment
- Relationship to Simon's notion of the interface
Practical vs. Theoretical Goals of AI
- Different meanings ascribed to strong AI:
  - Machines that reason in the same way humans do
  - Machines with an intelligence that matches or exceeds that of humans
- Weak AI: develop systems whose behavior appears intelligent, regardless of how it is achieved
- Perhaps deep understanding is required for either
Interactive Artifacts
- Computer as evocative object (Turkle)
- Children see computers as a blend of two kinds of things:
  - Physical: things we build, design, and use
  - Social: things we communicate with
- Human-computer interaction/communication implies mutual intelligibility
- Need to answer how this works for humans before considering machines
Cognitive Science and Automata
- Automata brought focus to how the structure of an agent generates observable behavior
- Mind viewed as neither substantial nor insubstantial, but as an abstraction
- Introspection -> behaviorism -> cognitive science
- Cognitive science combines talk of "beliefs, desires, symbols, schemata, planning, problem solving" with the scientific method
- The sufficiency of cognitive models is tested by running them on computers
- Often results in a view of intelligence as the manipulation of symbols
Human-Computer Interaction
- History: batch processing -> interactive computing -> shared languages
- Uses terms borrowed from human interaction
- Hayes and Reddy say the difference is robustness:
  - Ability to respond to unanticipated circumstances
  - Ability to detect and remedy troubles in communication (see the sketch below)
- They said no gracefully interacting systems exist yet, though the components are there
- The abilities cited are necessary but not sufficient, and the work done was in limited domains
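A minimal sketch of the "detect and remedy troubles in communication" idea, not anything from Hayes and Reddy's work: the command vocabulary, function name, and replies below are invented for illustration. The only point is that ambiguous or unrecognized input triggers an explicit request for repair instead of a silent guess.

# Toy illustration (invented): make communication trouble explicit
# instead of concealing it behind a guess.
KNOWN_COMMANDS = {"open", "save", "delete"}

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    recognized = [w for w in words if w in KNOWN_COMMANDS]

    if len(recognized) == 1:
        # One confident interpretation: act on it.
        return f"OK, performing '{recognized[0]}'."
    if len(recognized) > 1:
        # Ambiguity detected: remedy it by asking the user to choose.
        return "Did you mean " + " or ".join(recognized) + "?"
    # Nothing recognized: acknowledge the trouble and state what is possible.
    return "Sorry, I didn't understand that. I can open, save, or delete."

print(respond("please save my file"))     # OK, performing 'save'.
print(respond("open it then delete it"))  # Did you mean open or delete?
print(respond("make it fancier"))         # Sorry, I didn't understand that. ...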
Should Interaction Be Human-like?
- Benefits:
  - More natural
  - More accessible to those who are new to technology or shy away from it
- Costs:
  - Might conceal miscommunication
  - May not allow taking advantage of each partner's strengths
  - People tend to assume more capability than has actually been shown
  - The opaqueness of the computer also results in reification
- Is intentional vocabulary a shortcut?
Self-Explanatory Artifacts
- Machines should be able to explain their goals and how their actions relate to those goals
- Two senses of self-explanatory:
  - Obvious/discoverable, e.g. a hammer
  - Able to explain itself, e.g. training applications
- Need to know when not to say things
- WEST watched the student and interrupted only when it judged an interruption appropriate (see the sketch below)
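A minimal sketch of the "know when not to say things" coaching behavior attributed to WEST; the move scoring, thresholds, and function name are invented for illustration and are not taken from the actual WEST system.

# Toy coaching rule (invented): interrupt only when the student's move is
# clearly worse than the best available one, and not on every turn.
def should_interrupt(student_score: int, best_score: int,
                     turns_since_last_hint: int,
                     min_gap: int = 3, cooldown: int = 2) -> bool:
    clearly_suboptimal = (best_score - student_score) >= min_gap
    not_too_chatty = turns_since_last_hint >= cooldown
    return clearly_suboptimal and not_too_chatty

# Student scored 2 where 7 was possible and the coach has been quiet: hint.
print(should_interrupt(student_score=2, best_score=7, turns_since_last_hint=3))  # True
# Near-optimal move: stay silent even though a slightly better move existed.
print(should_interrupt(student_score=6, best_score=7, turns_since_last_hint=3))  # False

The structure of the decision is the point: the quality of the coaching depends as much on withholding comment as on offering it.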
Understanding Computers
- Computer as an artifact designed for a purpose
- Increasing use of computers means increasingly complex technology must be usable with decreasing training
- Purposes are not always obvious (e.g. archeology)
- https://www.youtube.com/watch?v=RUZ7-w1WBPc&feature=youtu.be&t=801
Instruction as a Goal
- Face-to-face training relies on specifics and context (different each time); tailored to current needs
- Written instruction relies on generalization; reusable across a large number of people and situations
- Interactive systems can be both reusable and individualized
- Example: WEST
Computers as Purposeful Artifacts
- Computers present not just the purposes of their users or designers but appear to have goals of their own
- The designer builds the system to be accountably rational
- History of the Turing test: it does not care about similarity of process
- ELIZA as a limited success (Weizenbaum denied it was intelligent)
- DOCTOR (Rogerian therapist): people assumed reasons behind its responses even when none existed
- ELIZA conceals its lack of understanding, where "graceful interaction" requires that it be made explicit (see the sketch below)
- https://www.youtube.com/watch?v=G7JNJRZszNE
- https://www.youtube.com/watch?v=1uDa7jkIztw
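A toy ELIZA/DOCTOR-style exchange, assuming nothing beyond the general keyword-plus-reflection technique; the patterns and replies below are invented, not Weizenbaum's script. Surface pattern matching produces plausible therapist-like replies with no understanding behind them, which is how the lack of understanding stays concealed.

import re

# Keyword patterns plus first-to-second person "reflection" (invented rules).
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def doctor(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return "Please go on."

print(doctor("I feel ignored by my computer"))  # Why do you feel ignored by your computer?
print(doctor("My program crashed again"))       # Tell me more about your program crashed again.
print(doctor("What should I do?"))              # Please go on.

The awkward echo in the second reply shows how little is going on underneath; the real DOCTOR script simply had many more keywords and templates with which to hide it.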