SAND2009-2391C 1/20 ParaText™ Leveraging Scalable Scientific Computing Capabilities for Large-Scale Text Analysis and Visualization Daniel M. Dunlavy, Timothy M. Shead, Patricia J. Crossno Sandia National Laboratories SIAM Conference on Computational Science and Engineering March 2, 2009 Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
2/20 Motivation
(Diagram.) Unstructured text (gigabytes to terabytes) is loaded into a database, processed and analyzed, and visualized for the data analyst (few and overworked).
– Scalable processing and analysis: ParaText™
– Scalable visualization: VTK / ParaView / Titan
3/20 Text Analysis Pipeline
– Ingestion: file readers (ASCII, UTF-8, XML, PDF, ...)
– Pre-processing: tokenization, stemming, part-of-speech tagging, named entity extraction, sentence boundaries
– Transformation: data modeling, dimensionality reduction, feature weighting, feature extraction/selection
– Analysis: information retrieval, clustering, summarization, classification, pattern recognition, statistics
– Post-processing: visualization, filtering, summary statistics
– Archiving: database, file, web site
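A minimal sketch of such a staged pipeline, with purely illustrative stage names (these are not ParaText™ or Titan APIs):

```python
# Minimal sketch of a staged text-analysis pipeline.
# All names here are illustrative; they are not ParaText(TM) or Titan APIs.
from typing import Callable, Iterable, List

def preprocess(docs: List[str]) -> List[List[str]]:
    """Crude tokenization; a real pipeline adds stemming, POS tagging, NER, ..."""
    return [doc.lower().split() for doc in docs]

def transform(token_docs: List[List[str]]) -> List[dict]:
    """Toy 'feature weighting': term frequencies per document."""
    features = []
    for tokens in token_docs:
        counts: dict = {}
        for tok in tokens:
            counts[tok] = counts.get(tok, 0) + 1
        features.append(counts)
    return features

def run_pipeline(data, stages: Iterable[Callable]):
    """Feed the output of each stage into the next one."""
    for stage in stages:
        data = stage(data)
    return data

docs = ["An earthquake is a catastrophe.", "A hurricane is a catastrophe."]
models = run_pipeline(docs, [preprocess, transform])
```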
4/20 Vector Space Model
Vector Space Model for Text
– Terms (features): t_1, ..., t_m
– Documents (objects): d_1, ..., d_n
– Term-Document Matrix: A, where A_ij is a measure of the importance of term i in document j
Term Examples
– Sentence: "Danny re-sent $1."
– Words: danny, sent, re [# chars?], $ [sym?], 1 [#?], re-sent [-?]
– n-grams (3): dan, ann, nny, ny_, _re, re-, e-s, sen, ent, nt_, ...
– Named entities (people, orgs, money, etc.): danny, $1
Document Examples
– Documents, paragraphs, sentences, fixed-size chunks
[G. Salton, A. Wong, and C. S. Yang (1975), "A Vector Space Model for Automatic Indexing," Comm. ACM.]
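As a concrete illustration (not ParaText™ code), a term-document count matrix can be assembled like this, assuming simple whitespace tokenization and a small stopword list:

```python
# Build a small term-document count matrix from tokenized documents.
# Illustrative sketch; ParaText(TM) assembles its matrices in parallel with Titan/Trilinos.
import numpy as np

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]
stopwords = {"the", "on", "and"}

# Vocabulary of kept terms, in first-seen order.
terms: list = []
for doc in docs:
    for tok in doc.split():
        if tok not in stopwords and tok not in terms:
            terms.append(tok)

# A[i, j] = number of occurrences of term i in document j.
A = np.zeros((len(terms), len(docs)))
for j, doc in enumerate(docs):
    for tok in doc.split():
        if tok in terms:
            A[terms.index(tok), j] += 1

print(terms)  # ['cat', 'sat', 'mat', 'dog', 'log', 'cats', 'dogs']
print(A)      # 7 terms x 3 documents
```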
5/20 Latent Semantic Analysis (LSA)
– SVD of the term-document matrix
– Boolean query (query as new "doc")
– Truncated SVD
– LSA query
(Diagram: the truncated SVD factors the term-document matrix into a terms × concepts matrix, a diagonal matrix of singular values, and a concepts × documents matrix.)
[Deerwester, S. C., et al. (1990), "Indexing by latent semantic analysis." J. Am. Soc. Inform. Sci.]
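The slide's formula images are not preserved here; in the standard LSA notation these operations are usually written as:

```latex
% Standard LSA formulas (reconstruction; the original slide images are not preserved).
\begin{align*}
  \text{SVD:}            \quad & A = U \Sigma V^{T}, \qquad A \in \mathbb{R}^{m \times n} \\
  \text{Boolean query:}  \quad & s = q^{T} A \quad \text{(query } q \text{ treated as a new ``document'')} \\
  \text{Truncated SVD:}  \quad & A_k = U_k \Sigma_k V_k^{T} \\
  \text{LSA query:}      \quad & s_k = q^{T} A_k = (q^{T} U_k)\, \Sigma_k V_k^{T}
\end{align*}
```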
6/20 ParaText™: Scalable Text Analysis
ParaText™
– Scalable client-server system; leverages existing capabilities (ParaView, VTK, Trilinos)
– Latent semantic analysis: term-document matrix analysis, document similarities, term (concept) similarities
– Algebraic Engine: general object-feature analysis
General Use
– OverView: open-source information visualization
– Timeline Treemap Browser: temporal information visualization
– LSAView: text algorithm analysis
– LDRDView: funding portfolio analysis
– ParaSpace: simulation data analysis
7/20 Leveraging VTK / ParaView
Useful capabilities already in VTK / ParaView
– Pipeline architecture
– Components for reading, processing, and visualizing (scientific) data sets
– Distributed-memory parallel processing
– Streaming data
– ParaView client/server architecture
Informatics capabilities recently added by Titan
– Data structures: tables, trees, graphs
– Data storage: delimited text files, common graph formats, SQL databases
– Algorithms: graph, statistics, Matlab and R integration
– Visualization: informatics-oriented paradigms
Recent Titan features driven by ParaText™
– Dense and sparse N-way arrays for algebraic / tensor analysis
– Unicode text storage, manipulation, and rendering
– Text ingestion, LSA, creating/storing algebraic models, performing algebraic analyses
8/20 Leveraging Trilinos
Useful capabilities already in Trilinos
– Sparse matrix distributed data structure (Epetra)
– SVD (Anasazi)
– Incremental SVD (RBGen)
– Load balancing (Isorropia/Zoltan)
Remaining issues for ParaText™
– Data layout: different needs throughout the pipeline
– Data storage: Epetra only provides doubles (Tpetra)
– Data storage: compressed row storage
– Target platforms: Linux focus (Autotools → CMake)
9/20 ParaText™ Server (PTS)
(Architecture diagram.) The ParaText™ Client (OverView) talks to a Master ParaText™ Server over XML/HTTP. The master coordinates a fully scalable pipeline on an HPC resource (cluster, multicore server, etc.), where processes P0 ... Pk each run a Reader → Parser → Matrix → SVD stage chain. Artifacts and matrices are stored in an Artifact DB and a Matrices DB on database servers.
10/20 ParaText™ Scaling Examples
Types of scaling
– Strong: fixed data size, increasing number of processors
– Weak: increasing data size, increasing number of processors
– Fixed resource: increasing data size, fixed number of processors
Data
– Subset of Spock Challenge data (~10 GB of HTML documents)
– 64 MB, 3,318 documents, 281,904 unique terms
Hardware
– 16 Dell 1850 servers, dual 3.6 GHz EM64T, 6 GB RAM
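For reference (these definitions are not stated on the slide), scaling results of this kind are usually quantified as:

```latex
% Common definitions used to report scaling results (not taken from the slide).
\begin{align*}
  \text{Strong-scaling speedup:}  \quad & S(p) = \frac{T(1)}{T(p)}, \qquad
  \text{efficiency: } E(p) = \frac{S(p)}{p} \\
  \text{Weak-scaling efficiency:} \quad & E_w(p) = \frac{T(1)}{T(p)}
  \quad \text{with problem size grown in proportion to } p
\end{align*}
```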
11/20 Strong Scaling
12/20 Weak Scaling
13/20 Weak Scaling by Filter
14/20 Fixed Resource Scaling
15/20
APW19990519.0113 1999-05-19 21:11:17 usa
Pulses May Ease Schizophrenic Voices
WASHINGTON (AP) Schizophrenia patients whose medication couldn't stop the imaginary voices in their heads gained some relief after researchers repeatedly sent a magnetic field into a small area of their brains. About half of 12 patients studied said their hallucinations became much less severe after the treatment, which feels like ``having a woodpecker knock on your head'' once a second for up to 16 minutes, said researcher Dr. Ralph Hoffman. The voices stopped completely in three of these patients. The effect lasted for up to a few days for most participants, and one man reported that it lasted seven weeks after being treated daily for four days. Hoffman stressed that the study is only preliminary and can't prove that the treatment would be useful. ``We need to do much more research on this,'' he said in an interview. Hoffman, deputy medical director of the Yale Psychiatric Institute, is scheduled to present the work Thursday at the annual meeting of the American Psychiatric Association. Not all people with schizophrenia hear voices, and of those who do, Hoffman estimated that maybe 25 percent can't control them with medications even when other disease symptoms abate. So the work could pay off for ``a small but very ill group of patients,'' he said. The treatment is called transcranial magnetic stimulation, or TMS. While past research indicates it might be helpful in lifting depression, it hasn't been studied much in schizophrenia. In TMS, an electromagnetic coil is placed on the scalp and current is turned on and off to create a pulsing magnetic field that reaches into a small area of the brain. The goal is to make brain cells underneath the coil fire messages to adjoining cells. The procedure is much different from electroconvulsive therapy, called ECT, which applies pulses of electricity rather than a magnetic field to the brain. Unlike TMS, ECT creates a brief seizure and is performed under general anesthesia. ECT is used most often for treating severe depression. In TMS, the magnetic pulses are thought to calm the affected part of the brain if they're given as slowly as once per second, Hoffman said. He and colleagues targeted an area involved in understanding speech, above and behind the left ear, on the theory that hallucinated voices come from overactivity there. The treatment can make scalp muscles contract, leading to the woodpecker feeling, he said, but patients could tolerate it. Headache was the most common side effect, and there was no sign that the treatment affected the ability to understand speech, he said. To make sure the study results didn't reflect just the psychological boost of getting a treatment, researchers gave sham and real treatments to each study participant and studied the difference in how patients responded. <s docid="APW19990519.0113" num="26" starting with
16/20 LSAView: Algorithm Analysis / Development
LSAView
– Analysis and exploration of the impact of informatics algorithms on end-user visual analysis of data
– Aids the process of discovering optimal algorithm parameters for given data and tasks
Features
– Side-by-side comparison of visualizations for two sets of parameters
– Small-multiple view for analyzing 2+ parameter sets simultaneously
– Linked document, graph, matrix, and tree data views
– Interactive, zoomable, hierarchical matrix and matrix-difference views
– Statistical inference tests used to highlight novel parameter impact
– Used in developing and understanding ParaText™ algorithms
17/20 LSAView
18/20 LSAView Impact
– Document similarities
– Inner product view
– Scaled inner product view
What is the best weighting of singular values for document similarity graph generation?
– Weightings compared: original scaling, no scaling, inverse square root, inverse
[Leisure Studies of America Data: 97 documents, 335 terms]
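The slide's formulas are not preserved; one common way to form these weighted document similarities from a truncated SVD (a sketch, not ParaText™ code) is:

```python
# Document-document similarities under different singular-value weightings.
# Sketch only; the exact formulas used on the original slide are not preserved.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 20))          # stand-in term-document matrix (50 terms x 20 docs)
k = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Vk, sk = Vt[:k].T, s[:k]          # document coordinates in concept space, singular values

weightings = {
    "original": sk,               # scale by singular values
    "none": np.ones_like(sk),     # unweighted concept coordinates
    "inverse sqrt": sk ** -0.5,
    "inverse": sk ** -1.0,
}

for name, w in weightings.items():
    D = Vk * w                    # scale each concept coordinate
    sim = D @ D.T                 # document similarity matrix
    print(name, sim.shape)
```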
19/20 Summary and Future Work
ParaText™
– Leverages VTK / ParaView / Trilinos (scientific computing)
– Scalable information visualization / text analysis pipeline
– General data object-feature analysis capability
More work ahead ...
– Scaling experiments: bigger data, layout constraints, etc.
– Extensions: load balancing, incremental SVD, etc.
– Other applications: clustering, summarization, classification, etc.
– Relevance feedback: automatically incorporating user feedback to improve analysis (e.g., priors, metric learning)
20/20 Thank You ParaText™: Leveraging Scalable Scientific Computing Capabilities for Large-Scale Text Analysis and Visualization Danny Dunlavy dmdunla@sandia.gov http://www.cs.sandia.gov/~dmdunla
21/20 Backup Slides
22/20 Feature Weighting
Term-document matrix scaling (a common example is sketched below).
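The slide's weighting formula is not reproduced here; log-entropy weighting is one common choice in LSA systems, and a minimal sketch of it (not necessarily the scheme ParaText™ uses) is:

```python
# Log-entropy term weighting for a term-document count matrix.
# One common LSA weighting; not necessarily the scheme on the original slide.
import numpy as np

def log_entropy(counts: np.ndarray) -> np.ndarray:
    """counts[i, j] = raw frequency of term i in document j."""
    n_docs = counts.shape[1]
    p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)  # term's distribution over docs
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    global_weight = 1.0 + plogp.sum(axis=1) / np.log(n_docs)           # entropy-based term weight
    local_weight = np.log(counts + 1.0)                                # damped local frequency
    return local_weight * global_weight[:, None]

counts = np.array([[2.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 2.0],
                   [1.0, 1.0, 0.0, 1.0]])
print(log_entropy(counts))
```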
23/20 LSA Example 1
Documents (stopwords removed; only the terms hurricane, earthquake, catastrophe are kept):
– d1: Hurricane. A hurricane is a catastrophe.
– d2: An example of a catastrophe is a hurricane.
– d3: An earthquake is bad.
– d4: Earthquake. An earthquake is a catastrophe.

Raw term counts:
              d1   d2   d3   d4
hurricane      2    1    0    0
earthquake     0    0    1    2
catastrophe    1    1    0    1

Query q (the term "hurricane"): hurricane = 1, earthquake = 0, catastrophe = 0.

A (normalization only, columns scaled to unit length):
              d1    d2    d3    d4
hurricane    .89   .71    0     0
earthquake    0     0     1    .89
catastrophe  .45   .71    0    .45

A_2 (rank-2 approximation of A):
              d1    d2    d3    d4
hurricane    .78   .78  -.11   .11
earthquake  -.03   .02   .96   .92
catastrophe  .59   .60   .15   .30

Query scores: q^T A = (.89, .71, 0, 0); q^T A_2 = (.78, .78, -.11, .11).
The rank-2 approximation captures the link to document d4, which the raw matrix misses.
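A small numpy sketch reproducing this example (illustrative; values agree with the table above up to rounding):

```python
# Rank-2 LSA on the hurricane/earthquake example: normalize, truncate, query.
import numpy as np

# Raw term counts; rows = [hurricane, earthquake, catastrophe], columns = d1..d4.
counts = np.array([[2.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 2.0],
                   [1.0, 1.0, 0.0, 1.0]])

A = counts / np.linalg.norm(counts, axis=0)      # normalize each document (column)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A2 = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]          # rank-2 approximation

q = np.array([1.0, 0.0, 0.0])                    # query: "hurricane"
print(np.round(q @ A, 2))                        # [0.89 0.71 0.   0.  ]
print(np.round(q @ A2, 2))                       # d3, d4 now get nonzero scores (~-0.11 and ~0.11)
```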
24/20 LSA Example 2
(Concept-space plot; ∆ = terms, o = documents. Terms appear near the documents they characterize.)
Terms (∆): policy, planning, politics, tomlinson, 1986
Nearby documents (o):
– Sport in Society: Policy, Politics and Culture, ed A. Tomlinson (1990)
– Policy and Politics in Sport, PE and Leisure, eds S. Fleming, M. Talbot and A. Tomlinson (1995)
– Policy and Planning (II), ed J. Wilkinson (1986)
– Policy and Planning (I), ed J. Wilkinson (1986)
– Leisure: Politics, Planning and People, ed A. Tomlinson (1985)
Terms (∆): parker, lifestyles, 1989, part
Nearby documents (o):
– Work, Leisure and Lifestyles (Part 2), ed S. R. Parker (1989)
– Work, Leisure and Lifestyles (Part 1), ed S. R. Parker (1989)
[Leisure Studies of America Data: 97 documents, 335 terms]
25/20 Modeling Text
Model document meaning as a linear equation
– meaning(document) = Σ_j meaning(term_j)
– Induces a high-dimensional vector space model
Create a term-document (occurrence) matrix
– Examples of "terms":
– Words
– Word stems, lemmas
– Entities (person, organization, location, etc.)
– N-grams (characters or words)
(Diagram: term-document matrix with terms t1 ... tm as rows and documents d1 ... dn as columns.)
26/20 Latent Semantic Analysis
Conceptual searching
– Higher rank k: more exact data similarities
– Lower rank k: more conceptual data similarities
– Compute a larger rank, then use a smaller rank
(Diagram: term views at k = 6 and k = 24; increasing k moves from more conceptual toward more exact.)
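One way to realize "compute a larger rank, then use a smaller rank" (a sketch, not ParaText™ code) is to compute the SVD once at the largest rank of interest and slice it for smaller ranks:

```python
# Compute a truncated SVD once at a large rank, then reuse smaller ranks by slicing.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((300, 120))                       # stand-in term-document matrix
k_max = 50

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, Vt = U[:, :k_max], s[:k_max], Vt[:k_max]   # keep only the largest rank needed

def rank_k_approx(k: int) -> np.ndarray:
    """Rank-k approximation reusing the stored factors (k <= k_max)."""
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

A6, A24 = rank_k_approx(6), rank_k_approx(24)    # more conceptual vs. more exact
```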
27/20 ParaText™ Operations
– Document parsing, matrix creation and weighting
– SVD and truncated SVD
– Query scores (query as new "doc")
– LSA ranking
– Document similarities
– Term similarities
(Diagram: truncated SVD of the term-document matrix into terms × concepts, singular values, and concepts × documents factors.)
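The formula images for the similarity operations are not preserved; in the usual LSA notation they are commonly written as:

```latex
% Common LSA similarity formulas (reconstruction; not copied from the slide).
\begin{align*}
  \text{Document similarities:} \quad & S_d = (\Sigma_k V_k^{T})^{T} (\Sigma_k V_k^{T}) = V_k \Sigma_k^{2} V_k^{T} \\
  \text{Term similarities:}     \quad & S_t = (U_k \Sigma_k)(U_k \Sigma_k)^{T} = U_k \Sigma_k^{2} U_k^{T} \\
  \text{Query scores:}          \quad & s = q^{T} U_k \Sigma_k V_k^{T}
\end{align*}
```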
28/20 Document Similarity Graphs
Document similarity matrix → document similarity graph
– Each document (or term, entity, etc.) is a vertex
– Each row defines an edge
(Diagram: the document similarity matrix, formed from the concept-space document coordinates and singular values, is thresholded and stored in sparse coordinate format.)
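A minimal sketch of turning a similarity matrix into a sparse graph by thresholding (illustrative only; scipy's COO matrix stands in for the slide's "sparse coordinate format"):

```python
# Threshold a document similarity matrix into a sparse graph (COO format).
# Illustrative sketch; not the ParaText(TM) implementation.
import numpy as np
from scipy.sparse import coo_matrix

rng = np.random.default_rng(2)
D = rng.random((20, 5))                 # document coordinates in concept space
sim = D @ D.T                           # dense document similarity matrix

threshold = 0.8
keep = np.triu(sim, k=1) > threshold    # upper triangle only: one edge per document pair
rows, cols = np.nonzero(keep)
graph = coo_matrix((sim[rows, cols], (rows, cols)), shape=sim.shape)

# Each nonzero entry (i, j) is an edge between documents i and j.
print(graph.nnz, "edges")
```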
29/20 Graph Similarities
Statistics on edges
– One graph: one-sample t statistic
– Two graphs: two-sample t statistic
(Plot: edge-weight distributions for edges from graph 1 and edges from graph 2.)
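As an illustration of these edge statistics (a sketch, not the ParaText™/LSAView code), scipy provides both tests:

```python
# One-sample and two-sample t statistics on graph edge weights.
# Illustrative sketch of the statistics named on the slide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
edges_g1 = rng.normal(loc=0.55, scale=0.1, size=200)   # edge weights from graph 1
edges_g2 = rng.normal(loc=0.50, scale=0.1, size=180)   # edge weights from graph 2

# One graph: test whether the mean edge weight differs from a reference value.
t1, p1 = stats.ttest_1samp(edges_g1, popmean=0.5)

# Two graphs: test whether the two graphs' edge-weight means differ.
t2, p2 = stats.ttest_ind(edges_g1, edges_g2, equal_var=False)

print(f"one-sample t = {t1:.2f} (p = {p1:.3f}), two-sample t = {t2:.2f} (p = {p2:.3f})")
```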