1
Conceptual Role Semantics - and a Model for Cross-System Translations
Presented by Ellie Hua Wang
2
The background
- Traditional philosophy of language: meaning as a symbol-world relation
- The Frege problem: Hesperus and Phosphorus
- Philosophy of mind: the content of thought as a concept-world relation
- External grounding account vs. conceptual web account
- The Putnam problem: the Twin Earth case
3
Conceptual role semantics (CRS)
- The meanings of expressions (concepts) depend on their connections to each other within a system
- Defenders in philosophy: Wilfrid Sellars (1963), Ned Block (1986, 1999), Gilbert Harman (1974, 1987), as well as Michael Devitt, Brian Loar, William Lycan, and Hartry Field
4
Two theories for CRS in philosophy (both responding to the Putnam problem)
- Two-factor semantic theory (Block 1986, Field 1977, Lycan 1984): a theory of meaning consists of two components, a theory of truth and CRS
- One-factor (long-arm) semantic theory (Harman 1987): conceptual roles reach out into the world of referents
5
Applications of CRS
- Linguistics - Saussure (1915, 1959): concepts are negatively defined
- Philosophy of science - Kuhn (1962): the incommensurability thesis
- Psychology - Barr and Caplan (1987), Goldstone (1993, 1995, 1996): concepts are frequently characterized by their associative relations to other concepts
6
More applications
Computer science:
- Lenat and Feigenbaum (1991)
- Landauer and Dumais (1997): Latent Semantic Analysis
- Rapaport (2002): SNePS
7
External grounding account
Are roles in a conceptual network sufficient for meaning?
- The symbol grounding problem
- Putnam's problem
- Arguments from psychology and computer science
8
Criticisms of CRS
- Concept identity: a problem with meaning holism
- Concept similarity
- The problems with the two theories (two-factor and one-factor)
9
Cross-system translation (Goldstone and Rogosky 2002)
- The notion of conceptual correspondence: two concepts correspond to each other if they play equivalent roles within their respective systems
- A neural network, ABSURDIST (Aligning Between Systems Using Relations Derived Inside Systems for Translation), provides a formal method for deciding conceptual correspondence across systems solely on the basis of relations between concepts within a system
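One way to make "equivalent roles" precise (a formalization sketch, not stated on the slides): given within-system relations $r_A$ and $r_B$ and a candidate translation $m \colon A \to B$, a concept $a$ corresponds to $m(a)$ when

$$r_A(a, a') \approx r_B\big(m(a), m(a')\big) \quad \text{for all } a' \in A,$$

i.e. the relations $a$ bears to the other concepts of its own system are mirrored by the relations $m(a)$ bears to their images.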
10
ABSURDIST
Goals:
- to illustrate the sufficiency of a conceptual web account for translating between systems
- to indicate synergistic interactions between within-system information and extrinsic information
11
What ABSURDIST is not:
- It does not connect concepts to the external world
- It does not aim to create rich translations between systems; it explores only the simplest representations of concepts
- It is not a complete model of meaning
- It is not a simulation of human translation
12
ABSURDIST - input: two 2D proximity matrices (one for each system)
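As a concrete illustration of this input format, here is a minimal sketch in Python; the point representation, toy data, and noise level are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def proximity_matrix(points):
    """Pairwise Euclidean distances between a system's elements."""
    points = np.asarray(points, dtype=float)
    diffs = points[:, None, :] - points[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# System A: concepts as points in a 2D feature space (toy data).
system_a = [(0.1, 0.2), (0.8, 0.3), (0.4, 0.9)]

# System B: a noisy copy of A, standing in for another agent's concepts.
rng = np.random.default_rng(0)
system_b = np.asarray(system_a) + rng.normal(0.0, 0.02, size=(3, 2))

dist_a = proximity_matrix(system_a)   # within-system relations for A
dist_b = proximity_matrix(system_b)   # within-system relations for B
# dist_a and dist_b are the two proximity matrices the model takes as input.
```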
13
ABSURDIST - the algorithm
14
ABSURDIST - the algorithm (cont.)
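The algorithm itself is not spelled out in the slide text above. As a rough, hedged sketch of the general idea: correspondence units gain activation to the extent that the within-system relations they align agree, and lose activation through competition with units proposing alternative mappings. The update rule, parameter names, and constants below are illustrative assumptions, not the published ABSURDIST equations.

```python
import numpy as np

def absurdist_sketch(dist_a, dist_b, ext=None, alpha=0.0, iters=200, rate=0.1):
    """Toy sketch of the ABSURDIST idea (illustrative, not the published equations).

    dist_a, dist_b : within-system proximity matrices for systems A and B
    ext            : optional extrinsic (cross-system) similarity matrix
    alpha          : weight on the extrinsic term; 0 means purely intrinsic
    Returns C, where C[i, j] is the activation of the hypothesis that
    element i of A corresponds to element j of B.
    """
    dist_a, dist_b = np.asarray(dist_a, float), np.asarray(dist_b, float)
    n, m = dist_a.shape[0], dist_b.shape[0]
    C = np.full((n, m), 0.5)                      # neutral initial correspondence units

    def relation_match(i, j, k, l):
        # Agreement between the A-relation (i,k) and the B-relation (j,l); 1 = identical distances.
        return 1.0 - min(abs(dist_a[i, k] - dist_b[j, l]), 1.0)

    for _ in range(iters):
        net = np.zeros_like(C)
        for i in range(n):
            for j in range(m):
                # Excitation: support from other units whose within-system relations agree.
                excite = sum(C[k, l] * relation_match(i, j, k, l)
                             for k in range(n) for l in range(m)
                             if k != i and l != j) / max((n - 1) * (m - 1), 1)
                # Inhibition: competition from units mapping i or j elsewhere (one-to-one pressure).
                inhibit = (C[i, :].sum() + C[:, j].sum() - 2 * C[i, j]) / max(n + m - 2, 1)
                extrinsic = alpha * ext[i, j] if ext is not None else 0.0
                net[i, j] = extrinsic + excite - inhibit
        C = np.clip(C + rate * net, 0.0, 1.0)     # gradual, bounded activation update
    return C

# Reading off a translation: each A element's best-supported counterpart in B.
# mapping = absurdist_sketch(dist_a, dist_b).argmax(axis=1)
```

After the network settles, the strongest unit in each row of C can be read off as that element's proposed counterpart, as in the commented line above.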
15
ABSURDIST - assessment
The distances between every pair of elements within a system, D(x, y), are computed by:
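(The formula is not reproduced in the slide text; a minimal reconstruction, assuming elements are represented as points in an n-dimensional feature space as in Goldstone and Rogosky's simulations, would be the standard Euclidean distance.)

$$D(x, y) = \sqrt{\sum_{d=1}^{n} (x_d - y_d)^2}$$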
16
ABSURDIST's tolerance to distortion
- The algorithm's ability to recover correspondences generally increases with the number of elements in each system, at least for small levels of noise.
- Performance gradually deteriorates as noise is added, but the algorithm is robust to at least modest amounts of noise.
- Performance remains well above chance.
17
ABSURDIST's tolerance to distortion (cont.)
- Partially correct translations are rarely obtained: with relatively few exceptions, ABSURDIST either finds all of the correct correspondences or finds none.
18
Further results
- The number of iterations required for good performance is not appreciably affected by the number of items per system.
19
Different-sized systems
- When different-sized systems are compared, ABSURDIST's correspondences are still typically one-to-one, but not all elements of the larger system are placed in correspondence.
20
Subset matching
- In an example with two four-element systems, ABSURDIST draws correspondences between the three pairs of elements that share the majority of their roles, but not between the fourth, mismatching elements.
21
Indirect similarity relations
- Even if two elements within a system enter into the same set of similarity relations, they may still be disambiguated, because all correspondences are worked out simultaneously.
22
Integrating internal and external information
- One way to incorporate extrinsic biases into the system is by initially seeding correspondence units with values (a sketch of this follows below).
- The improvement in translation accuracy beyond what intrinsic or extrinsic information alone would predict generally increases as a function of system size.
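As a purely illustrative picture of the seeding idea: if correspondence units are stored as an activation matrix, extrinsic evidence can be injected by raising the starting activation of the favored units before the network settles. The matrix shape, values, and variable names below are assumptions for illustration.

```python
import numpy as np

n, m = 5, 5                       # elements in systems A and B
C = np.full((n, m), 0.5)          # neutral starting activations for all correspondence units

# Extrinsic evidence (e.g., perceptual similarity) suggests A's element 0
# corresponds to B's element 0: seed that unit above its competitors.
C[0, 0] = 0.9

# The network then settles as usual, but the seeded unit biases the
# solution toward translations consistent with the external evidence.
```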
23
Integrating internal and external information (cont.)
- Using only information intrinsic to a system results in better correspondences than using only extrinsic information.
- The superior performance of the network that uses both intrinsic and extrinsic information derives from its robustness in the face of noise.
24
Conclusions
- Translations between two systems can be found using only information about the relations between elements within each system.
- Intrinsic relations suffice to determine cross-system translations, but if extrinsic information is available, more robust, noise-resistant translations can be found.
25
Discussion
- The correspondence between concepts within an internal system and physically measurable elements of an external system
- The analytic/synthetic distinction
- How psychological distances are determined
- The model's assumption that the systems have similar structures