Artificial Intelligence (Yapay Zeka)
January 2007
Natural Language Processing (Doğal Dil İşleme)
Natural Language Understanding: concerned with understanding sentences that are spoken or written for particular purposes.
Natural Language Generation (the reverse): concerned with expressing what is to be said in a natural language such as Turkish or English.
Natural Language Processing
Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. It studies the problems of automated generation and understanding of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, and natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate.
Natural Language Understanding
Stages of Natural Language Understanding
The 4 Stages of Natural Language Understanding
Speech Recognition: the raw speech signal is analyzed to obtain the sequence of spoken words.
Syntactic Analysis: the words are analyzed using the grammar of the language, and the sentence structure is obtained.
The 4 Stages of Natural Language Understanding (continued)
Semantic Analysis: a partial meaning of the whole sentence is derived from the sentence structure and the meanings of the words in it.
Pragmatic Analysis: the meaning of the sentence is completed by considering the overall situation and context: who said it, to whom, when, and where.
Reasoning (Akıl Yürütme)
Reasoning is the act of using reason to derive a conclusion from certain premises, using a given methodology. The two most commonly used explicit methods to reach a conclusion are deductive reasoning and inductive reasoning. (From Wikipedia)
Chaining (Zincirleme)
There are two main methods of reasoning when using inference rules: forward chaining and backward chaining.
Forward chaining starts with the available data and uses the inference rules to conclude more data until a desired goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose if-clause is known to be true. It then concludes the then-clause and adds this information to its data, and continues to do so until a goal is reached. Because the available data determines which inference rules are used, this method is also called data-driven.
Backward chaining starts with a list of goals and works backwards to see if there is data that allows it to conclude any of these goals. An inference engine using backward chaining searches the inference rules until it finds one whose then-clause matches a desired goal. If the if-clause of that inference rule is not known to be true, it is added to the list of goals. (From Wikipedia)
Forward Chaining (İleri Zincirleme)
For example, suppose that the goal is to conclude the color of my pet Fritz, given that he croaks and eats flies, and that the rulebase contains the following two rules:
1. If Fritz croaks and eats flies, then Fritz is a frog.
2. If Fritz is a frog, then Fritz is green.
The given facts (that Fritz croaks and eats flies) are first added to the knowledgebase, and the rulebase is searched for a rule whose antecedent matches them. This is true of the first rule, so its conclusion (that Fritz is a frog) is also added to the knowledgebase, and the rulebase is searched again. This time the second rule's antecedent matches the newly derived fact, so its conclusion (that Fritz is green) is added to the knowledgebase. Nothing more can be inferred from this information, but the goal of determining the color of Fritz has been accomplished.
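The Fritz example can be sketched as a tiny forward chainer in Python. This is a minimal illustration, not an inference-engine implementation; the rule encoding (a set of antecedent facts paired with one consequent) is my own.

```python
def forward_chain(facts, rules, goal):
    """Repeatedly fire any rule whose antecedents are all known facts,
    adding its consequent, until the goal is derived or nothing new fires."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for antecedents, consequent in rules:
            # if-clause known to be true -> conclude the then-clause
            if consequent not in facts and antecedents <= facts:
                facts.add(consequent)
                changed = True
    return goal in facts

# The two rules from the slide, as (antecedents, consequent) pairs:
fritz_rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
    ({"Fritz is a frog"}, "Fritz is green"),
]
```

Starting from the facts that Fritz croaks and eats flies, the chainer derives "Fritz is a frog" and then "Fritz is green", exactly as traced above.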
Backward Chaining (Geri Zincirleme)
Using the same goal (to conclude the color of Fritz, given that he croaks and eats flies) and the same rulebase:
1. If Fritz croaks and eats flies, then Fritz is a frog.
2. If Fritz is a frog, then Fritz is green.
The rulebase is searched and the second rule is selected, because its conclusion (the then-clause) matches the goal (that Fritz is green). It is not yet known that Fritz is a frog, so the if-clause is added to the goal list (for Fritz to be green, he must be a frog). The rulebase is searched again and this time the first rule is selected, because its then-clause matches the new goal that was just added to the list (whether Fritz is a frog). Its if-clause (Fritz croaks and eats flies) is known to be true, so the chain of goals can be concluded: Fritz croaks and eats flies, so he must be a frog; Fritz is a frog, so he must be green.
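The same example can be run goal-first. A minimal recursive sketch (my own encoding, assuming an acyclic rulebase, so no cycle handling is needed):

```python
def backward_chain(goal, facts, rules):
    """Prove a goal: either it is a known fact, or some rule concludes it
    and each antecedent of that rule can itself be proved as a subgoal."""
    if goal in facts:
        return True
    return any(
        consequent == goal
        and all(backward_chain(a, facts, rules) for a in antecedents)
        for antecedents, consequent in rules
    )

fritz_rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
    ({"Fritz is a frog"}, "Fritz is green"),
]
```

Asking whether "Fritz is green" first selects the second rule, spawns "Fritz is a frog" as a subgoal, and discharges it against the known facts via the first rule, mirroring the trace on the slide.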
Comparing Forward Chaining & Backward Chaining
Forward Chaining:
- "left to right": work forward from conditions to conclusion
- data-driven
- bottom-up: gather facts to infer a conclusion, i.e. build up from smaller pieces into a bigger one
Backward Chaining:
- "right to left": work backward from hypothesis to subgoals
- goal-driven
- top-down: break down the hypothesis into subgoals, i.e. decompose a big piece into smaller ones
Forward vs. Backward
Forward Chaining: this data-driven, bottom-up approach is commonly used in expert systems, such as CLIPS.
Backward Chaining: because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. This top-down approach is also often employed by expert systems. Programming languages such as Prolog or Eclipse support backward chaining.
Propositional logic: Syntax
Propositional logic is the simplest practical logic.
- The proposition symbols P1, P2, etc. are sentences
- If S is a sentence, ¬S is a sentence (negation)
- If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
- If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
- If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
- If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)
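This recursive grammar maps directly onto a recursive evaluator. A minimal sketch, representing sentences as nested tuples (this tuple encoding and the operator names are my own, not from the slide):

```python
def pl_eval(sentence, model):
    """Evaluate a sentence such as ('and', 'P', ('not', 'Q')) in a model,
    where the model maps proposition symbols to truth values."""
    if isinstance(sentence, str):  # a proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_eval(args[0], model)
    if op == "and":
        return pl_eval(args[0], model) and pl_eval(args[1], model)
    if op == "or":
        return pl_eval(args[0], model) or pl_eval(args[1], model)
    if op == "implies":  # P => Q is false only when P is true and Q is false
        return (not pl_eval(args[0], model)) or pl_eval(args[1], model)
    if op == "iff":  # biconditional: same truth value on both sides
        return pl_eval(args[0], model) == pl_eval(args[1], model)
    raise ValueError(f"unknown connective: {op}")
```

Each clause of the evaluator corresponds to one rule of the syntax above, which is why the grammar can express arbitrarily nested sentences.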
Truth tables for connectives
P     Q     | ¬P    | P ∧ Q | P ∨ Q | P ⇒ Q | P ⇔ Q
false false | true  | false | false | true  | true
false true  | true  | false | true  | true  | false
true  false | false | false | true  | false | false
true  true  | false | true  | true  | true  | true
Logical equivalence
Two sentences are logically equivalent if and only if they are true in the same models.
Some well-known equivalences:
(α ∧ β) ≡ (β ∧ α)  commutativity of ∧
(α ∨ β) ≡ (β ∨ α)  commutativity of ∨
¬(¬α) ≡ α  double-negation elimination
(α ⇒ β) ≡ (¬β ⇒ ¬α)  contraposition
(α ⇒ β) ≡ (¬α ∨ β)  implication elimination
¬(α ∧ β) ≡ (¬α ∨ ¬β)  De Morgan
¬(α ∨ β) ≡ (¬α ∧ ¬β)  De Morgan
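Because propositional models are finite, the definition of equivalence can be checked mechanically by enumerating every model. A minimal sketch, assuming each sentence is given as a Python function of a model (this functional encoding is my own choice):

```python
from itertools import product

def equivalent(f, g, symbols):
    """Two sentences are equivalent iff they have the same truth value
    in every model, i.e. every assignment of truth values to the symbols."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if f(model) != g(model):
            return False  # found a model where they disagree
    return True
```

For two symbols this checks all four rows of the truth table, which is exactly how the equivalences listed above can be verified.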
First-order logic
Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains:
- Objects: people, houses, numbers, colors, baseball games, wars, ...
- Relations: red, round, prime, brother of, bigger than, part of, comes between, ...
- Functions: father of, best friend, one more than, plus, ...
Syntax of FOL: Basic elements
Constants: KingJohn, 2, SNU, ...
Predicates: Brother, sibling, >, ... (sibling means either brother or sister)
Functions: Sqrt, LeftLegOf, ...
Variables: x, y, a, b, ...
Connectives: ¬, ∧, ∨, ⇒, ⇔
Equality: = (can be treated as just a special predicate)
Quantifiers: ∀, ∃
Note: we follow standard mathematical terminology, so constants are capitalized and variables are lower case; this is the opposite of the Prolog convention.
Semantic Networks
Knowledge is represented as a network (graph). Example network (nodes Animal, Mammal, Reptile, Elephant, Nellie, with labeled edges):
- Mammal subclass Animal
- Reptile subclass Animal
- Elephant subclass Mammal
- Nellie instance Elephant
- Nellie likes apples
- Elephant size large
- Elephant haspart head
- Elephant livesin Africa
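The network above can be encoded directly as a labeled graph. A minimal sketch (the adjacency-list encoding and the `is_a` helper are my own illustration, not from the slide):

```python
# Each node maps to a list of (relation, target) edges.
network = {
    "Nellie":   [("instance", "Elephant"), ("likes", "apples")],
    "Elephant": [("subclass", "Mammal"), ("size", "large"),
                 ("haspart", "head"), ("livesin", "Africa")],
    "Mammal":   [("subclass", "Animal")],
    "Reptile":  [("subclass", "Animal")],
}

def is_a(node, category, net):
    """Follow instance/subclass links upward through the network."""
    if node == category:
        return True
    return any(rel in ("instance", "subclass") and is_a(target, category, net)
               for rel, target in net.get(node, []))
```

Queries like "is Nellie an animal?" become reachability questions along the instance/subclass edges, which is the main inference a semantic network supports.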
Frames
Frames were the next development, allowing more convenient "packaging" of facts about an object. Frames look much like modern classes, without the methods. We use the terms "slots" and "slot values".
mammal:
  subclass: animal
elephant:
  subclass: mammal
  size: large
  haspart: trunk
Nellie:
  instance: elephant
  likes: apples
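Frames with slot inheritance can be sketched as nested dictionaries. A minimal illustration (the `slot_value` lookup with inheritance along instance/subclass links is my own, not from the slide):

```python
frames = {
    "animal":   {},
    "mammal":   {"subclass": "animal"},
    "elephant": {"subclass": "mammal", "size": "large", "haspart": "trunk"},
    "Nellie":   {"instance": "elephant", "likes": "apples"},
}

def slot_value(frame, slot, frames):
    """Look up a slot, inheriting along instance/subclass links if absent."""
    f = frames.get(frame, {})
    if slot in f:
        return f[slot]
    for link in ("instance", "subclass"):
        if link in f:
            value = slot_value(f[link], slot, frames)
            if value is not None:
                return value
    return None
```

Asking for Nellie's size finds nothing in the Nellie frame and falls back to the elephant frame, which is the "packaging plus inheritance" behavior that makes frames resemble classes.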