Slide 1: The New "Bill of Rights" of the Information Society
Raj Reddy and Jaime Carbonell
Carnegie Mellon University
March 23, 2006, Talk at Google

Slide 2: New Bill of Rights
- Get the right information (e.g. search engines)
- To the right people (e.g. categorizing, routing)
- At the right time (e.g. Just-in-Time: task modeling, planning)
- In the right language (e.g. machine translation)
- With the right level of detail (e.g. summarization)
- In the right medium (e.g. access to information in non-textual media)

Slide 3: Relevant Technologies
- "right information": search engines
- "right people": classification, routing
- "right time": anticipatory analysis
- "right language": machine translation
- "right level of detail": summarization
- "right medium": speech input and output

Slide 4: "right information": Search Engines

Slide 5: The Right Information
- Right information from future search engines: how to go beyond just "relevance to query" (all) and "popularity"
- Eliminate massive redundancy. E.g. "web-based email" should not result in multiple links to different Yahoo sites promoting their email, or even non-Yahoo sites discussing just Yahoo email; it should result in a link to Yahoo email, one to MSN email, one to Gmail, one that compares them, etc.
- First show trusted info sources and user-community-vetted sources. At least for important info (medical, financial, educational, ...), I want to trust what I read. E.g., for new medical treatments: first, info from hospitals, medical schools, the AMA, medical publications, etc., and NOT from Joe Shmo's quack practice page or from the National Enquirer.
- Maximal Marginal Relevance; Novelty Detection; Named Entity Extraction

Slide 6: Beyond Pure Relevance in IR
- Current information retrieval technology only maximizes relevance to the query
- What about information novelty, timeliness, appropriateness, validity, comprehensibility, density, medium, ...?
- Novelty is approximated by non-redundancy!
- We really want to maximize relevance to the query, given the user profile and interaction history:
  P(U(f_i, ..., f_n) | Q & C & U & H)
  where Q = query, C = collection set, U = user profile, H = interaction history
- ...but we don't yet know how. Darn.
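The preference for relevance plus novelty sketched on the preceding slides is usually operationalized as Maximal Marginal Relevance (MMR) re-ranking. Below is a minimal sketch in Python; the term-weight vectors, the lambda trade-off of 0.7, and the toy query are all illustrative assumptions, not material from the talk:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_rerank(query, docs, lam=0.7, k=3):
    """Greedy MMR: each step picks the document maximizing
    lam * sim(doc, query) - (1 - lam) * max sim(doc, already selected)."""
    selected = []
    remaining = list(docs)
    while remaining and len(selected) < k:
        def score(d):
            redundancy = max((cosine(d, s) for s in selected), default=0.0)
            return lam * cosine(d, query) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy collection: two identical "yahoo email" pages, one gmail page, one off-topic.
query = {"web": 1, "email": 1}
docs = [
    {"yahoo": 2, "email": 2, "web": 1},
    {"yahoo": 2, "email": 2, "web": 1},   # redundant duplicate
    {"gmail": 2, "email": 2, "web": 1},
    {"weather": 3},
]
ranked = mmr_rerank(query, docs, lam=0.7, k=2)
```

With pure relevance ranking, the duplicate Yahoo page would take second place; MMR's redundancy penalty promotes the Gmail page instead, matching the "web-based email" example on the slide.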
Slide 7: Maximal Marginal Relevance vs. Standard Information Retrieval
[Diagram: a query posed against a document collection, with the standard IR ranking shown beside the MMR ranking.]

Slide 8: Novelty Detection
- Find the first report of a new event
- (Unconditional) dissimilarity with the past; decision threshold on the most-similar story
- (Linear) temporal decay
- Length filter (for teasers)
- Cosine similarity with standard weights
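The novelty-detection ingredients listed above (dissimilarity with past stories, a decision threshold on the most-similar story, linear temporal decay) can be combined into a toy first-story detector. The threshold and decay constants below are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def is_first_story(story, past, threshold=0.5, decay=0.01):
    """Flag `story` as the first report of a new event when its best
    decayed similarity to every past story stays below the threshold."""
    best = 0.0
    for age, old in past:  # age = how many stories ago `old` arrived
        sim = cosine(story, old) * max(0.0, 1.0 - decay * age)  # linear decay
        best = max(best, sim)
    return best < threshold

past = [(0, {"election": 2, "vote": 1})]
first = is_first_story({"airplane": 2, "crash": 1}, past)    # unrelated topic
followup = is_first_story({"election": 1, "vote": 1}, past)  # same event
```

The decay term makes old stories count less, so a long-dormant topic can resurface as "new", which is the intended behavior for event (rather than topic) tracking.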
Slide 9: New First-Story Detection Directions
- Topic-conditional models: e.g. "airplane," "investigation," "FAA," "FBI," "casualties" indicate the topic, not the event; "TWA 800," "March 12, 1997" indicate the event
- First categorize into a topic, then use maximally discriminative terms within that topic
- Rely on situated named entities, e.g. "Arcan as victim," "Sharon as peacemaker"

Slide 10: Link Detection in Texts
- Find texts (e.g. news stories) that mention the same underlying events
- Could be combined with novelty (e.g. something new about an interesting event)
- Techniques: text similarity, NEs, situated NEs, relations, topic-conditioned models, ...

Slide 11: Named-Entity Identification
Purpose: to answer questions such as:
- Who is mentioned in these 100 Society articles?
- What locations are listed in these 2000 web pages?
- What companies are mentioned in these patent applications?
- What products were evaluated by Consumer Reports this year?

Slide 12: Named Entity Identification (example text)
"President Clinton decided to send special trade envoy Mickey Kantor to the special Asian economic meeting in Singapore this week. Ms. Xuemei Peng, trade minister from China, and Mr. Hideto Suzuki from Japan's Ministry of Trade and Industry will also attend. Singapore, which is hosting the meeting, will probably be represented by its foreign and economic ministers. The Australian representative, Mr. Langford, will not attend, though no reason has been given. The parties hope to reach a framework for currency stabilization."

Slide 13: Methods for NE Extraction
- Finite-state transducers with variables; example output: FNAME: "Bill", LNAME: "Clinton", TITLE: "President"
- FSTs learned from labeled data
- Statistical learning (also from labeled data): Hidden Markov Models (HMMs), exponential (maximum-entropy) models, Conditional Random Fields (Lafferty et al.)

Slide 14: Named Entity Identification (extracted NEs)
People: President Clinton, Mickey Kantor, Ms. Xuemei Peng, Mr. Hideto Suzuki, Mr. Langford
Places: Singapore, Japan, China, Australia

Slide 15: Role-Situated NEs
Motivation: it is useful to know the roles of NEs:
- Who participated in the economic meeting?
- Who hosted the economic meeting?
- Who was discussed in the economic meeting?
- Who was absent from the economic meeting?
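The finite-state pattern approach listed on the methods slide can be crudely approximated with a regular expression over title-plus-capitalized-name sequences. This is a toy stand-in for illustration, not the FST-with-variables formalism itself; the honorific list and pattern are my assumptions:

```python
import re

# Toy pattern: an honorific or title followed by one or two capitalized words.
NE_PATTERN = re.compile(
    r"\b(President|Ms\.|Mr\.|Envoy)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)"
)

def extract_titled_people(text):
    """Return (TITLE, NAME) pairs for title-prefixed person mentions."""
    return NE_PATTERN.findall(text)

sample = ("President Clinton decided to send special trade envoy Mickey Kantor "
          "to the meeting. Ms. Xuemei Peng and Mr. Hideto Suzuki will also attend.")
people = extract_titled_people(sample)
```

Note the pattern misses "Mickey Kantor", whose cue phrase ("special trade envoy") is lowercase and unlisted; this brittleness is exactly why the slide moves on to learned statistical models (HMMs, maxent, CRFs).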
Slide 16: Emerging Methods for Extracting Relations
- Link parsers at the clause level: based on dependency grammars; probabilistic enhancements (Lafferty, Venable)
- Island-driven parsers: GLR* (Lavie), Chart (Nyberg, Placeway), LC-Flex (Rosé)
- Treebank-trained probabilistic CF parsers (IBM, Collins)
- These herald the return of deep(er) NLP techniques
- Relevant to the new Q/A-from-free-text initiative
- Too complex for inductive learning (today)

Slide 17: Relational NE Extraction
Example (who does what to whom): "John Snell reporting for Wall Street. Today Flexicon Inc. announced a tender offer for Supplyhouse Ltd. for $30 per share, representing a 30% premium over Friday's closing price. Flexicon expects to acquire Supplyhouse by Q4 2001 without problems from federal regulators."

Slide 18: Fact Extraction Application
Useful for relational DB filling, to prepare data for "standard" DM/machine-learning methods:

Acquirer   Acquiree     Sh. price  Year
Flexicon   Logi-truck   18         1999
Flexicon   Supplyhouse  30         2000
...

Slide 19: "right people": Text Categorization

Slide 20: The Right People
- User-focused search is key. If a 7-year-old working on a school project about taking good care of one's heart types in "heart care", she will want links to pages like "You and your friendly heart", "Tips for taking good care of your heart", "Intro to how the heart works", etc., NOT the latest New England Journal of Medicine article on "Cardiological implications of immuno-active proteases". If a cardiologist issues the query, exactly the opposite is desired.
- Search engines must know their users better, and the users' tasks
- Social affiliation groups for search and for automatically categorizing, prioritizing and routing incoming info or search results: family group, organization group, country group, disaster-affected group, stockholder group
- New machine learning technology allows for scalable, high-accuracy hierarchical categorization

Slide 21: Text Categorization
- Assign labels to each document or web page
- Labels may be topics, such as Yahoo categories: finance, sports, News > World > Asia > Business
- Labels may be genres: editorials, movie reviews, news
- Labels may be routing codes: send to marketing, send to customer service

Slide 22: Text Categorization: Methods
- Manual assignment, as in Yahoo
- Hand-coded rules, as in Reuters
- Machine learning (the dominant paradigm): words in the text become predictors; category labels become "to be predicted"; predictor-feature reduction (SVD, chi-squared, ...); apply any inductive method: kNN, NB, DT
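The machine-learning recipe above (words as predictors, the category label as the thing predicted) is what, for example, a multinomial Naive Bayes classifier ("NB" on the slide) implements. A small self-contained sketch with invented training documents; the tiny corpus and labels are illustrative only:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Multinomial Naive Bayes with add-one smoothing:
    words are the predictors, the category label is predicted."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequency
        self.label_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, -math.inf
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:  # add-one smoothed log likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayesTextClassifier().fit(
    ["stocks fell sharply on wall street",
     "the team won the championship game",
     "quarterly earnings beat estimates"],
    ["finance", "sports", "finance"],
)
label = clf.predict("earnings on wall street")
```

The same fit/predict shape accommodates the other inductive methods named on the slide (kNN, decision trees); only the scoring rule changes.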
Slide 23: Multi-tier Event Classification

Slide 24: "right timeframe": Just-in-Time (no sooner, no later)

Slide 25: Just-in-Time Information
- Get the information to the user exactly when it is needed
- Immediately, when the information is requested
- Prepositioned, if it requires time to fetch and download (e.g. HDTV video); requires anticipatory analysis and pre-fetching
- How about "push technology" for, e.g., stock alerts, reminders, breaking news? Depends on user activity:
  - Sleeping, or Do Not Disturb, or in a meeting: wait your chance
  - Reading email: now if the info is urgent, later otherwise
  - Group info before delivering (e.g. show 3 stock alerts together)
  - Info directly relevant to the user's current task: immediately

Slide 26: "right language": Translation

Slide 27: Access to Multilingual Information
- Language identification (from text, speech, handwriting)
- Trans-lingual retrieval (query in one language, results in multiple languages): requires more than query-word out-of-context translation to do it well (see the Carbonell et al. 1997 IJCAI paper)
- Full translation (e.g. of a web page, of search-result snippets, ...): general reading quality (as targeted now); focused on getting entities right (who, what, where, when mentioned)
- Partial on-demand translation. Reading assistant: translation in context while reading an original document, by highlighting unfamiliar words, phrases, passages
- On-demand text-to-speech
- Transliteration

Slide 28: "in the Right Language"
- Knowledge-engineered MT: transfer-rule MT (commercial systems); high-accuracy interlingual MT (domain-focused)
- Parallel-corpus-trainable MT: statistical MT (noisy channel, exponential models); example-based MT (generalized G-EBMT); transfer-rule-learning MT (corpus & informants)
- Multi-engine MT: an omnivorous approach that combines the above to maximize coverage and minimize errors

Slide 29: Types of Machine Translation
[Diagram (MT triangle): source (Arabic) to target (English), with direct EBMT at the base, transfer rules above syntactic parsing, semantic analysis rising to an interlingua, then sentence planning and text generation on the way back down.]

Slide 30: EBMT Example
- English: I would like to meet her. / Mapudungun: Aykefun trawael fey engu.
- English: The tallest man is my father. / Mapudungun: Chi doy ftra chi wentru fey ta inche i chaw.
- English: I would like to meet the tallest man. / Mapudungun (new): Aykefun trawael Chi doy ftra chi wentru / Mapudungun (correct): Ayken i trawael chi doy ftra wentruengu.
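The generalization step in the EBMT example above (reusing fragments of known example pairs to translate a new sentence) can be sketched as greedy longest-match fragment substitution. The English-to-Spanish fragment table below is invented for illustration; it is not the Mapudungun data from the slide:

```python
# Toy EBMT: translate by stitching together fragments of known example pairs.
# This table stands in for fragments mined from a parallel corpus (assumed data).
EXAMPLES = {
    "i would like to meet": "me gustaria conocer a",
    "the tallest man": "el hombre mas alto",
    "her": "ella",
}

def ebmt_translate(sentence):
    """Greedy longest-match fragment substitution over the example base."""
    words = sentence.lower().rstrip(".").split()
    out, i = [], 0
    while i < len(words):
        match = None
        for j in range(len(words), i, -1):  # try the longest source fragment first
            frag = " ".join(words[i:j])
            if frag in EXAMPLES:
                match = (EXAMPLES[frag], j)
                break
        if match:
            out.append(match[0]); i = match[1]
        else:
            out.append(words[i]); i += 1  # pass unknown words through untranslated
    return " ".join(out)

t = ebmt_translate("I would like to meet the tallest man.")
```

The stitched output contains "a el", which correct Spanish contracts to "al": the same kind of fragment-boundary error the slide shows between the "new" and "correct" Mapudungun renderings.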
Slide 31: Multi-Engine Machine Translation
- MT systems have different strengths. Rapidly adaptable: statistical, example-based; good grammar: rule-based (linguistic) MT; high precision in narrow domains: KBMT; minority-language MT: learnable from an informant
- Combine the results of parallel-invoked MT engines
- Select the best of multiple translations, based on optimizing a combination of: a target-language joint-exponential model; the confidence scores of the individual MT engines

Slide 32: Illustration of Multi-Engine MT

Slide 33: State of the Art in MEMT for New "Hot" Languages
We can do now:
- Gisting MT for any new language in 2-3 weeks (given parallel text)
- Medium-quality MT in 6 months (given more parallel text, an informant, a bilingual dictionary)
- Improve-as-you-go MT
- Field MT systems on PCs

We cannot do yet:
- High-accuracy MT for open domains
- Cope with spoken-only languages
- Reliable speech-to-speech MT (but BABYLON is coming)
- MT on your wristwatch
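The selection step on the Multi-Engine MT slide (optimize a combination of a target-language model score and per-engine confidence) can be sketched as follows. The bigram table, the interpolation weight, and the hypothesis list are all illustrative assumptions; a real system would use a trained joint-exponential target-language model:

```python
def lm_score(sentence, bigram_logprobs):
    """Sum of log-probabilities of adjacent word pairs (unseen pairs penalized)."""
    words = sentence.lower().split()
    return sum(bigram_logprobs.get((a, b), -5.0) for a, b in zip(words, words[1:]))

def select_translation(hypotheses, bigram_logprobs, weight=0.5):
    """Pick the hypothesis maximizing weight*LM + (1-weight)*engine confidence."""
    def score(h):
        text, confidence = h
        return weight * lm_score(text, bigram_logprobs) + (1 - weight) * confidence
    return max(hypotheses, key=score)[0]

BIGRAMS = {("the", "meeting"): -1.0, ("meeting", "starts"): -1.2,
           ("starts", "today"): -1.1}
hypotheses = [
    ("meeting the today starts", 0.9),  # confident engine, broken word order
    ("the meeting starts today", 0.6),  # fluent output, lower self-confidence
]
best = select_translation(hypotheses, BIGRAMS)
```

Here the language model overrules the over-confident engine, which is the point of combining the two signals rather than trusting either alone.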
Slide 34: "right level of detail": Summarization

Slide 35: Right Level of Detail
- Automate summarization, with hyperlinked one-click drilldown on user-selected section(s)
- Purpose-driven: summaries are in service of an information need, not one-size-fits-all (as in Shaom's outline and the DUC NIST evaluations)
- Example: a summary of a 650-page clinical study can focus on the effectiveness of the new drug for the target disease; the methodology of the study (control group, statistical rigor, ...); deleterious side effects, if any; or the target population of the study (e.g. acne-suffering teens, not eczema-suffering adults), depending on the user's task or information query

Slide 36: Information Structuring and Summarization
- Hierarchical multi-level pre-computed summary structure, or on-the-fly drilldown expansion of info:
  - Headline: 20 words
  - Abstract: 1% or 1 page
  - Summary: 5-10% or 10 pages
  - Document: 100%
- Scope of summary:
  - A single big document (e.g. a big clinical study)
  - A tight cluster of search results (e.g. Vivisimo)
  - A related set of clusters (e.g. conflicting opinions on how to cope with Iran's nuclear capabilities)
  - A focused area of knowledge (e.g. what's known about Pluto? Lycos has a good project in this via HotBot)
  - Specific kinds of commonly asked information (e.g. synthesize a bio on person X from any web-accessible info)

Slide 37: Types of Summaries
[Diagram: document summarization overview.]

Slide 38: "right medium": Finding Information in Non-textual Media

Slide 39: Indexing and Searching Non-textual (Analog) Content
- Speech → text (speech recognition)
- Text → speech (TTS: FESTVOX, by far the most popular high-quality system)
- Handwriting → text (handwriting recognition)
- Printed text → electronic text (OCR)
- Picture → caption keywords (automatically), for indexing and searching
- Diagrams, tables, graphs, maps → caption keywords (automatically)

Slide 40: Conclusion

Slide 41: What is Text Mining?
- Search: documents, web, news
- Categorize: by topic, taxonomy; enables filtering, routing, multi-text summaries, ...
- Extract: names, relations, ... (who did what to whom, and where?)
- Summarize: text, rules, trends, ...
- Detect: redundancy, novelty, anomalies, ...
- Predict: outcomes, behaviors, trends, ...

Slide 42: Data Mining vs. Text Mining
Data mining:
- Data: relational tables
- DM universe: huge
- DM tasks: DB "cleanup"; taxonomic classification; supervised learning with predictive classifiers; unsupervised learning (clustering, anomaly detection); visualization of results

Text mining:
- Text: HTML, free form
- TM universe: 10^3 × the DM universe
- TM tasks: all the DM tasks, plus: extraction of roles, relations and facts; machine translation for multilingual sources; parsing NL queries (vs. SQL); NL generation of results
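The "extract names, relations" task above ("who did what to whom") can be illustrated with a single hand-written pattern run over the tender-offer text from the Relational NE Extraction slide. Real systems use parsers or learned extractors; the company-suffix list and verb alternatives here are my assumptions:

```python
import re

# Toy "who did what to whom" triple extraction over company-style names.
RELATION = re.compile(
    r"([A-Z][A-Za-z]+(?:\s+(?:Inc|Ltd|Corp)\.?)?)\s+"
    r"(announced a tender offer for|acquired|expects to acquire)\s+"
    r"([A-Z][A-Za-z]+(?:\s+(?:Inc|Ltd|Corp)\.?)?)"
)

def extract_relations(text):
    """Return (agent, predicate, patient) triples matched by the pattern."""
    return [(a.strip(), p, b.strip()) for a, p, b in RELATION.findall(text)]

news = ("Today Flexicon Inc. announced a tender offer for Supplyhouse Ltd. "
        "Flexicon expects to acquire Supplyhouse by Q4 2001.")
triples = extract_relations(news)
```

Each triple drops straight into a relational row of the kind shown on the Fact Extraction Application slide (acquirer, predicate, acquiree), which is what makes the extracted text usable by "standard" DM and machine-learning methods.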