
1. © 2004, G.Tecuci, Learning Agents Center. CS 7850, Fall 2004. Learning Agents Center and Computer Science Department, George Mason University. Gheorghe Tecuci, tecuci@gmu.edu, http://lac.gmu.edu/

2. Overview
- Class introduction and course's objectives
- Overview of the course
- Artificial Intelligence and intelligent agents
- Knowledge acquisition for agent development
- Domain for hands-on experience

3. Cartoon (image slide)

4. Course Objectives
Present the principles and major methods of knowledge acquisition for the development of knowledge-based agents that incorporate the problem solving knowledge of a subject matter expert. Major topics include: overview of knowledge engineering; analysis and modeling of the reasoning process of a subject matter expert; ontology design and development; rule learning; problem solving and knowledge-base refinement. The course will emphasize the most recent advances in this area, such as agent teaching and learning, mixed-initiative knowledge base refinement, knowledge reuse, and frontier research problems.
In short: provide an overview of Knowledge Acquisition and Problem Solving.

5. Course Objectives (cont.)
Learn about all the phases of building a knowledge-based agent and experience them first-hand by using the Disciple agent development environment to build an intelligent assistant that helps students choose a Ph.D. Dissertation Advisor. Disciple has been developed in the Learning Agents Center of George Mason University and has been successfully used to build knowledge-based agents for a variety of problem areas, including: planning the repair of damaged bridges and roads; critiquing military courses of action; determining strategic centers of gravity in military conflicts; and generating test questions for higher-order thinking skills in history and statistics.
In short: link Knowledge Acquisition and Problem Solving concepts to hands-on applications by building a knowledge-based agent.

6. Course organization and grading policy
The classes will consist of:
- a theoretical part, in which the instructor will present and discuss the various methods and phases of building a knowledge-based agent;
- a practical laboratory part, in which the students will apply this knowledge to specify, design, and develop the Ph.D. advisor selection agent.
Grading policy:
- Exam, covering the theoretical aspects presented: 50%.
- Assignments, consisting of lab participation and contributions to the development of the Ph.D. advisor selection agent: 50%. Regular assignments will consist of incremental developments of the agent, which will be presented to the class.

7. Readings
- Lecture notes provided by the instructor (required).
- Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998 (recommended).
- Additional papers recommended by the instructor.

8. Overview
- Class introduction and course's objectives
- Overview of the course
- Artificial Intelligence and intelligent agents
- Knowledge acquisition for agent development
- Domain for hands-on experience

9. Artificial Intelligence and intelligent agents
- What is Artificial Intelligence?
- What is an intelligent agent?
- Characteristic features of intelligent agents
- Sample tasks for intelligent agents
- Why are intelligent agents important?

10. What is Artificial Intelligence
Artificial Intelligence is the science and engineering concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc.

11. Central goals of Artificial Intelligence
- Understanding the principles that make intelligence possible (in humans, animals, and artificial agents).
- Developing intelligent machines or agents (whether or not they operate as humans do).
- Formalizing knowledge and mechanizing reasoning in all areas of human endeavor.
- Making working with computers as easy as working with people.
- Developing human-machine systems that exploit the complementarity of human and automated reasoning.

12. Artificial Intelligence and intelligent agents
- What is Artificial Intelligence?
- What is an intelligent agent?
- Characteristic features of intelligent agents
- Sample tasks for intelligent agents
- Why are intelligent agents important?

13. What is an intelligent agent?
An intelligent agent is a system that:
- perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or another complex environment);
- reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and
- acts upon that environment to realize a set of goals or tasks for which it was designed.
(Diagram: the agent receives input from the user or environment through its sensors and produces output through its effectors.)

14. What is an intelligent agent? (cont.)
Humans, with multiple conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent. At the low end is a thermostat: it continuously senses the room temperature, starting or stopping the heating system whenever the current temperature falls outside a predefined range. The intelligent agents we are concerned with lie in between: clearly not as capable as humans, but significantly more capable than a thermostat.
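The thermostat example above can be sketched as a minimal perceive-reason-act loop (the class, method names, and threshold values are illustrative, not from the course software):

```python
class ThermostatAgent:
    """A minimal agent: senses temperature, reasons, acts on the heater."""

    def __init__(self, low, high):
        self.low, self.high = low, high  # goal range, in degrees Celsius

    def act(self, temperature):
        # Reason: compare the perceived temperature with the goal range,
        # then choose an action for the effectors.
        if temperature < self.low:
            return "start_heating"
        if temperature > self.high:
            return "stop_heating"
        return "do_nothing"

agent = ThermostatAgent(low=18, high=22)
print(agent.act(15))  # start_heating
```

An intelligent agent differs in degree, not in kind: richer percepts, a knowledge base instead of two thresholds, and deliberate reasoning instead of a fixed rule.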

15. What is an intelligent agent? (cont.)
An intelligent agent interacts with a human or with other agents via some kind of agent-communication language. It need not blindly obey commands: it may modify requests, ask clarification questions, or even refuse to satisfy certain requests. It can accept high-level requests indicating what the user wants and decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence.

16. What an intelligent agent can do
An intelligent agent can:
- collaborate with its user to improve the accomplishment of his or her tasks;
- carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;
- monitor events or procedures for the user;
- advise the user on how to perform a task;
- train or teach the user;
- help different users collaborate.

17. Artificial Intelligence and intelligent agents
- What is Artificial Intelligence?
- What is an intelligent agent?
- Characteristic features of intelligent agents
- Sample tasks for intelligent agents
- Why are intelligent agents important?

18. Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external application domain, in which the relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions.

Example rule: if an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.
  RULE: ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)

Ontology fragment: CUP1 is an instance of CUP, BOOK1 of BOOK, TABLE1 of TABLE; CUP, BOOK, and TABLE are subclasses of OBJECT. The facts (ON CUP1 BOOK1) and (ON BOOK1 TABLE1) represent the corresponding situation in the application domain.

This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model and transferring the conclusions back into the application domain:
  (cup1 on book1) & (book1 on table1) → (cup1 on table1)
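The transitivity inference on this slide can be sketched in a few lines (the tuple-of-strings representation of ON facts is illustrative, not Disciple's internal one):

```python
def infer_on(facts):
    """Apply the rule (ON x y) & (ON y z) -> (ON x z) to closure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Snapshot the current facts, then try every pair of ON facts.
        for (x, y) in list(derived):
            for (y2, z) in list(derived):
                if y == y2 and (x, z) not in derived:
                    derived.add((x, z))  # new fact: (ON x z)
                    changed = True
    return derived

facts = {("cup1", "book1"), ("book1", "table1")}
print(infer_on(facts))  # includes ("cup1", "table1")
```

Running the rule in the domain model derives (cup1 on table1), which is then interpreted as a conclusion about the physical cup and table.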

19. Basic agent architecture
The intelligent agent receives input from the user or environment through its sensors and produces output through its effectors. It consists of:
- a knowledge base (ontology plus rules/cases/...): data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.;
- a problem solving engine: implements a general method of interpreting the input problem based on the knowledge from the knowledge base.

20. An agent has two basic components: the knowledge base and the problem solving engine. The knowledge base contains data structures that represent the application domain. It includes representations of objects and their relations (the object ontology), but also representations of laws, actions, rules, cases, or elementary problem solving methods. The problem solving engine implements a problem solving method that manipulates the data structures in the knowledge base to reason about the input problem, to solve it, and to determine the actions to perform next. There is thus a clear separation between knowledge (contained in the knowledge base) and control (represented by the problem solving engine). This separation allows the development of general tools, or shells, whose knowledge bases contain no domain-specific knowledge. By defining this knowledge, one can develop a specific agent: the problem solving engine is reused for a new application by defining the appropriate content of the knowledge base.
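The separation of knowledge and control described above can be sketched as a generic forward-chaining "shell" that contains no domain knowledge and is specialized by supplying a knowledge base of rules (the toy rules below are illustrative):

```python
def forward_chain(facts, rules):
    """Generic engine (control): fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Domain-specific knowledge base (toy rules, separate from the engine):
kb_rules = [
    (lambda f: "raining" in f, "ground_wet"),
    (lambda f: "ground_wet" in f, "slippery"),
]

print(forward_chain({"raining"}, kb_rules))
```

Swapping in a different `kb_rules` list yields a different agent while the engine stays untouched, which is exactly the reuse the shell idea aims at.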

21. Transparency and explanations
The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should be able to explain its behavior: what decisions it is making and why. Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities, not to replace human activity.

22. Ability to communicate
An agent should be able to communicate with its users or with other agents. The communication language should be as natural to the human users as possible; ideally, it should be free natural language. The problem of natural language understanding and generation is very difficult because of the ambiguity of words and sentences, and the paraphrases, ellipses, and references used in human communication.

23. Use of huge amounts of knowledge
To solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base). Example of human-agent dialog:
  User: The toolbox is locked.
  Agent: The key is in the drawer.
To understand such sentences and respond adequately, the agent needs a lot of knowledge about the user, including the goals the user might want to achieve.

24. Use of huge amounts of knowledge (example)
  User: The toolbox is locked.
  Agent (reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. Getting in requires a key. He knows it, and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.
  Agent: The key is in the drawer.

25. Exploration of huge search spaces
An intelligent agent usually needs to search huge spaces in order to find solutions to problems. Example: a search agent on the Internet.

26. Use of heuristics
Intelligent agents generally attack problems for which no algorithm is known or feasible: problems that require heuristic methods. A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device that drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions; in fact, they do not guarantee any solution at all. A useful heuristic is one that offers solutions that are good enough most of the time.
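One concrete way a heuristic limits search is greedy best-first search, which always expands the node its heuristic rates best; it finds solutions quickly but, as the slide notes, guarantees neither optimality nor success. A minimal sketch (the toy graph and heuristic values are assumptions for illustration):

```python
import heapq

def greedy_search(graph, h, start, goal):
    """Expand the frontier node with the best (lowest) heuristic value."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None  # a heuristic may find no solution at all

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}  # estimated distance to D
print(greedy_search(graph, h, "A", "D"))  # ['A', 'C', 'D']
```

The heuristic steers the search straight through C and never explores B, which is the point: most of the space is pruned at the price of any optimality guarantee.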

27. Reasoning with incomplete or conflicting data
The ability to provide some solution even if not all the data relevant to the problem is available when a solution is required. Examples: the reasoning of a physician in an intensive care unit; planning a military course of action.
The ability to take into account data items that are more or less in contradiction with one another (conflicting data, or data corrupted by errors). Example: the reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.

28. Ability to learn
The ability to improve its competence and performance. An agent improves its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. It improves its performance if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space).

29. Extended agent architecture
The extended architecture adds a learning engine to the knowledge base (ontology plus rules/cases/methods) and the problem solving engine. The learning engine implements methods for extending and refining the knowledge in the knowledge base.

30. Artificial Intelligence and intelligent agents
- What is Artificial Intelligence?
- What is an intelligent agent?
- Characteristic features of intelligent agents
- Sample tasks for intelligent agents
- Why are intelligent agents important?

31. Sample tasks for intelligent agents
Planning: finding a set of actions that achieve a certain goal. Example: determining the actions that need to be performed in order to repair a bridge.
Critiquing: expressing judgments about something according to certain standards. Example: critiquing a military course of action (or plan) based on the principles of war and the tenets of Army operations.
Interpretation: inferring situation descriptions from sensory data. Example: interpreting gauge readings in a chemical process plant to infer the status of the process.

32. Sample tasks for intelligent agents (cont.)
Prediction: inferring likely consequences of given situations. Examples: predicting the damage to crops from some type of insect; estimating global oil demand from the current geopolitical world situation.
Diagnosis: inferring system malfunctions from observables. Examples: determining the disease of a patient from the observed symptoms; locating faults in electrical circuits; finding defective components in the cooling system of nuclear reactors.
Design: configuring objects under constraints. Example: designing integrated circuit layouts.

33. Sample tasks for intelligent agents (cont.)
Monitoring: comparing observations to expected outcomes. Examples: monitoring instrument readings in a nuclear reactor to detect accident conditions; assisting patients in an intensive care unit by analyzing data from the monitoring equipment.
Debugging: prescribing remedies for malfunctions. Examples: suggesting how to tune a computer system to reduce a particular type of performance problem; choosing a repair procedure to fix a known malfunction in a locomotive.
Repair: executing plans to administer prescribed remedies. Example: tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.

34. Sample tasks for intelligent agents (cont.)
Instruction: diagnosing, debugging, and repairing student behavior. Examples: teaching students a foreign language; teaching students to troubleshoot electrical circuits; teaching medical students in the area of antimicrobial therapy selection.
Control: governing overall system behavior. Example: managing the manufacturing and distribution of computer systems.
Any useful task: information fusion; information assurance; travel planning; email management; help in choosing a Ph.D. Dissertation Advisor.

35. Artificial Intelligence and intelligent agents
- What is Artificial Intelligence?
- What is an intelligent agent?
- Characteristic features of intelligent agents
- Sample tasks for intelligent agents
- Why are intelligent agents important?

36. Why are intelligent agents important
Humans have limitations that agents may alleviate (e.g., memory for details that is not affected by stress, fatigue, or time constraints). Humans and agents can engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.

37. Why are intelligent agents important (cont.)
The evolution of information technology makes intelligent agents essential components of our future systems and organizations. Our future computers and most of our other systems and tools will gradually become intelligent agents. We have to be able to deal with intelligent agents, whether as users, as developers, or as both.

38. Intelligent agents: Conclusion
Intelligent agents are systems that can perform tasks requiring knowledge and heuristic methods. They are helpful, enabling us to do our tasks better, and they are necessary to cope with the increasing complexity of the information society.

39. Overview
- Class introduction and course's objectives
- Overview of the course
- Artificial Intelligence and intelligent agents
- Knowledge acquisition for agent development
- Domain for hands-on experience

40. Problem: Choosing a Ph.D. Dissertation Advisor
Choosing a Ph.D. Dissertation Advisor is a crucial decision for a successful dissertation and for one's future career. An informed decision requires a lot of knowledge about the potential advisors. In this course we will develop an agent that interacts with a student to help select the best Ph.D. advisor for that student. See the project notes: "1. Problem".

41. Overview
- Class introduction and course's objectives
- Overview of the course
- Artificial Intelligence and intelligent agents
- Knowledge acquisition for agent development
- Domain for hands-on experience

42. Knowledge acquisition for agent development
- Approaches to knowledge acquisition
- Disciple approach to agent development
- Demo: Agent teaching and learning
- Research vision on agent development

43. How are agents built: Manual knowledge acquisition
A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems, and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base. (Diagram: the subject matter expert and the knowledge engineer interact through dialog; the knowledge engineer programs the knowledge base; the agent returns results.)

44. Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem solving knowledge; this takes time and effort. Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details they consider obvious. This form of knowledge is very different from the form in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete). The resulting transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient; it is known as the "knowledge acquisition bottleneck" of the AI systems development process.

45. Mixed-initiative knowledge acquisition
The expert teaches the agent how to perform various tasks, much as an expert would teach a human apprentice while solving problems in cooperation. The process is based on mixed-initiative reasoning that integrates the complementary knowledge and reasoning styles of the subject matter expert and the agent, and on a division of responsibility for those elements of knowledge engineering for which each has the most aptitude, so that together they form a complete team for knowledge base development. (Diagram: the subject matter expert interacts through dialog with an intelligent learning agent whose learning engine builds the knowledge base used by the problem solving engine; the agent returns results.)

46. Mixed-initiative knowledge acquisition (cont.)
This is the most promising approach to overcoming the knowledge acquisition bottleneck. DARPA's Rapid Knowledge Formation Program (2000-2004) emphasized the development of knowledge bases directly by subject matter experts.
- Central objective: enable distributed teams of experts to enter and modify knowledge directly and easily, without prior knowledge engineering experience. The emphasis was on content and on the means of rapidly acquiring it from the individuals who possess it, with the goal of gaining a scientific understanding of how ordinary people can work with formal representations of knowledge.
- Primary requirement: develop functionality enabling experts to understand the contents of a knowledge base, enter new theories, augment and edit existing knowledge, test the adequacy of the knowledge base under development, receive explanations of the theories contained in the knowledge base, and detect and repair errors in content.

47. Autonomous knowledge acquisition
The learning engine builds the knowledge base from a database of facts or examples. In general, the learned knowledge consists of concepts, classification rules, or decision trees, and the problem solving engine is a simple one-step inference engine that classifies a new instance as an example of a learned concept or not. Defining the database of examples is a significant challenge, and current practical applications are limited to classification tasks.
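A minimal sketch of this scheme, assuming a toy database of attribute-value examples: the learning engine induces a concept description from positive examples, and a one-step inference engine classifies new instances against it.

```python
def learn_concept(positives):
    """Learning engine: keep attribute values shared by all positives."""
    concept = dict(positives[0])
    for ex in positives[1:]:
        concept = {k: v for k, v in concept.items() if ex.get(k) == v}
    return concept

def classify(concept, instance):
    """One-step inference: does the instance match the learned concept?"""
    return all(instance.get(k) == v for k, v in concept.items())

# Toy database of positive examples (illustrative attributes):
positives = [
    {"shape": "round", "color": "red", "size": "small"},
    {"shape": "round", "color": "green", "size": "small"},
]
concept = learn_concept(positives)  # {'shape': 'round', 'size': 'small'}
print(classify(concept, {"shape": "round", "color": "blue", "size": "small"}))
```

The single generalization step and the single matching step illustrate why such agents are limited to classification: there is no multi-step problem solving over the learned knowledge.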

48. Autonomous knowledge acquisition (cont.)
Here the knowledge base is built by the learning engine from data provided by a text understanding system able to understand textbooks. In general, the data consists of facts acquired from the books. This is not yet a practical approach, even for simpler agents.

49. Knowledge acquisition for agent development
- Approaches to knowledge acquisition
- Disciple approach to agent development
- Demo: Agent teaching and learning
- Research vision on agent development

50. Disciple approach to agent development
Research problem: elaborate a theory, methodology, and family of systems for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation. The expert teaches the agent how to perform various tasks in a way that resembles how the expert would teach a person, and the agent learns from the expert, building, verifying, and improving its knowledge base. Key capabilities: 1. mixed-initiative problem solving; 2. teaching and learning; 3. multistrategy learning. (Architecture: interface, problem solving, and learning modules over a knowledge base of ontology plus rules.)

51. Sample Disciple agents
Disciple-WA (1997-1998): estimates the best plan for working around damage to a transportation infrastructure, such as a damaged bridge or road. It demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.
Disciple-COA (1998-1999): identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations. It demonstrated the generality of Disciple's learning methods, which used an object ontology created by another group (TFS/Cycorp), and that a knowledge engineer and a subject matter expert can jointly teach Disciple.

52. A Disciple agent for Center of Gravity determination
The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed. (Carl von Clausewitz, On War, 1832.)
If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster. (Giles and Galvin, USAWC 1996.)

53. Synergistic collaboration and transition to the USAWC (George Mason University - US Army War College)
The collaboration links Artificial Intelligence research, military strategy research, and military education and practice around Disciple:
- Knowledge bases and agent development by subject matter experts, using learning agent technology.
- Formalization of the center of gravity determination process.
- Experiments in USAWC courses: use of Disciple in a sequence of two joint warfighting courses, 319jw "Case Studies in Center of Gravity Analysis" (students developed scenarios) and 589jw "Military Applications of Artificial Intelligence" (students developed agents).

54. Approach to Center of Gravity (COG) determination
The approach is based on the concepts of critical capabilities, critical requirements, and critical vulnerabilities, which have recently been adopted into the joint military doctrine of the USA (Strange, 1996). It has been applied to current war scenarios (e.g., War on Terror 2003, Iraq 2003) with state and non-state actors (e.g., Al Qaeda).
Identification of COG candidates: identify potential primary sources of moral or physical strength, power, and resistance from the government, military, people, economy, alliances, etc.
Testing of COG candidates: test each identified candidate to determine whether it has all the necessary critical capabilities. Which are the critical capabilities? Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate. If yes, do these capabilities have any vulnerability?

55. Problem Solving Approach: Task Reduction
A complex problem solving task is performed by:
- successively reducing it to simpler tasks;
- finding the solutions of the simplest tasks;
- successively composing these solutions until the solution to the initial task is obtained.
The knowledge base consists of an object ontology, reduction rules, and composition rules.
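The task-reduction paradigm above can be sketched recursively (the trip-planning tasks, reductions, and leaf solutions are illustrative, not from the Disciple knowledge base):

```python
reductions = {  # reduction rules: task -> simpler subtasks
    "plan_trip": ["book_flight", "book_hotel"],
}
leaf_solutions = {  # solutions of the simplest tasks
    "book_flight": "flight FL123",
    "book_hotel": "hotel H1",
}

def solve(task):
    """Reduce a task to subtasks, solve them, compose their solutions."""
    if task in leaf_solutions:
        return [leaf_solutions[task]]
    solution = []
    for sub in reductions[task]:  # successively reduce...
        solution += solve(sub)    # ...and compose the sub-solutions
    return solution

print(solve("plan_trip"))  # ['flight FL123', 'hotel H1']
```

In Disciple the reductions are chosen by reduction rules with applicability conditions, and composition can be richer than concatenation; the sketch only shows the reduce/solve/compose skeleton.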

56. Problem Solving and Learning
Example of a reasoning step (informal structure):
  We need to: Identify and test a strategic COG candidate corresponding to a member of Allied_Forces_1943.
  Question: Which is a member of Allied_Forces_1943? Answer: US_1943.
  Therefore we need to: Identify and test a strategic COG candidate for US_1943.
Formal structure:
  IF: Identify and test a strategic COG candidate corresponding to a member of ?O1
  Question: Which is a member of ?O1? Answer: ?O2
  THEN: Identify and test a strategic COG candidate for ?O2
Learned rule:
  IF: Identify and test a strategic COG candidate corresponding to a member of a force; the force is ?O1
  THEN: Identify and test a strategic COG candidate for a force; the force is ?O2
  Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force
  Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force
(An ontology fragment relates Allied_Forces_1943 and US_1943 to these concepts.)
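The plausible-bound conditions of the learned rule can be sketched with a toy ontology: an instance pair is a confident match if it satisfies the specific lower bound, and only a plausible match if it satisfies just the general upper bound (the `is_a` table below approximates the ontology fragment on the slide):

```python
is_a = {  # toy ontology: instance or class -> its (super)class
    "US_1943": "single_state_force",
    "single_state_force": "force",
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "equal_partners_multi_state_alliance": "multi_member_force",
}

def instance_of(x, cls):
    """Walk up the is_a chain to test class membership."""
    while x is not None:
        if x == cls:
            return True
        x = is_a.get(x)
    return False

def classify(member, alliance):
    lower = (instance_of(alliance, "equal_partners_multi_state_alliance")
             and instance_of(member, "single_state_force"))
    upper = (instance_of(alliance, "multi_member_force")
             and instance_of(member, "force"))
    if lower:
        return "covered by lower bound (confident match)"
    if upper:
        return "covered by upper bound only (plausible match)"
    return "not covered"

print(classify("US_1943", "Allied_Forces_1943"))
```

During learning, positive and negative examples push the two bounds toward each other, narrowing the "plausible" gap between them.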

57. Use of Disciple at the US Army War College: 319jw Case Studies in Center of Gravity Analysis
Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis; it then helps the students perform a center of gravity analysis of an assigned war scenario (teaching/learning followed by problem solving over the KB). Global evaluations of Disciple by officers from the Spring 03 course addressed the statements:
- Disciple helped me to learn to perform a strategic COG analysis of a scenario.
- The use of Disciple is an assignment that is well suited to the course's learning objectives.
- Disciple should be used in future versions of this course.

58. Use of Disciple at the US Army War College: 589jw Military Applications of Artificial Intelligence
Students teach Disciple their COG analysis expertise using sample scenarios (Iraq 2003, War on Terror 2003, Arab-Israeli 1973), then test the trained Disciple agent on a new scenario (North Korea 2003). Three experiments were conducted: Spring 2001 (COG identification), Spring 2002 (COG identification and testing), and Spring 2003 (COG testing based on critical capabilities). Global evaluation by the officers addressed the statement: "I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer."

59. Parallel development and merging of knowledge bases (DISCIPLE-COG)
Critical capabilities of leaders covered: be protected, have support, be a driving force, stay informed, communicate, be influential, be irreplaceable.
- Domain analysis and ontology development (knowledge engineer, KE, with all subject matter experts, SMEs): initial KB of 432 concepts and features, 29 tasks, and 18 rules for COG identification for leaders, plus 37 acquired concepts and features for COG testing.
- Parallel KB development (five SME teams assisted by a KE) on the training scenarios Iraq 2003, Arab-Israeli 1973, and War on Terror 2003, learning features, tasks, and rules for COG identification and testing for leaders. Team 1: 5 features, 10 tasks, 10 rules; Team 2: 14 tasks, 14 rules; Team 3: 2 features, 19 tasks, 19 rules; Team 4: 35 tasks, 33 rules; Team 5: 3 features, 24 tasks, 23 rules. Average training time: 5h 28min per team; average rule learning rate per team: 3.53.
- KB merging (KE): unified 2 features, deleted 4 rules, refined 12 rules. Final KB: +9 features (478 concepts and features), +105 tasks (134 tasks), +95 rules (113 rules).
- Testing of DISCIPLE-COG on a new scenario (North Korea 2003): correctness = 98.15%.

60. Knowledge acquisition for agent development
- Approaches to knowledge acquisition
- Disciple approach to agent development
- Demo: Agent teaching and learning
- Research vision on agent development

61. Demonstration
Disciple demo: teaching Disciple how to determine whether a strategic leader has the critical capability to be protected.

62. Knowledge acquisition for agent development
- Approaches to knowledge acquisition
- Disciple approach to agent development
- Demo: Agent teaching and learning
- Research vision on agent development

63. Vision on the future of software development
- Mainframe computers: software systems developed and used by computer experts.
- Personal computers: software systems developed by computer experts and used by persons who are not computer experts.
- Learning agents: software systems developed and used by persons who are not computer experts.

64. The military applications presented in this session show that Disciple has reached a significant level of maturity, being usable to rapidly develop complex knowledge-based agents. However, these are only initial results of long-term research aimed at changing the way intelligent agents are built: from being "programmed" by a knowledge engineer to being "taught" by a user who has no prior knowledge engineering experience. Making this vision a reality would allow a normal computer user, who is not a trained knowledge engineer, to build an intelligent assistant as easily as he or she now uses a word processor to write a paper. This research is expected to contribute to a new revolution in the use of computers, perhaps even more important than the creation of personal computers: it would allow every person to be not only a user of programs developed by others, but also an agent developer.

65. Overview
- Class introduction and course's objectives
- Overview of the course
- Artificial Intelligence and intelligent agents
- Knowledge acquisition for agent development
- Domain for hands-on experience

66. Overview of the course
- Overview of knowledge engineering and of the manual knowledge acquisition methods.
- Mixed-initiative knowledge acquisition. Overview of the Disciple approach.
- Problem solving through task reduction.
- Modeling the reasoning of subject matter experts.
- Ontology design and development.
- Scripts development for scenario elicitation.
- Agent teaching and multistrategy learning.
- Mixed-initiative problem solving and knowledge base refinement.
- Knowledge bases integration.
- Discussion of frontier research problems.
- Development of an assistant for choosing a Ph.D. Dissertation Advisor.

67. Additional recommended reading
G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 1-12.

