Introduction to Text Categorization
Lecturer: Paul Bennett
20-760: Web-Based Information Architectures
July 23, 2002

Lecture Overview
- Motivation for Automatic Text Categorization
- A Text Categorization Example
- Approaches to Automatic Text Categorization (emphasizing the "k-Nearest Neighbor approach")
- Evaluation Methodology and Performance Metrics
- Summary

Motivation for Automatic Text Categorization

Shopping on the Web
- Suppose you want to buy a cappuccino maker as a gift:
  - try Google for "cappuccino maker"
  - try "Yahoo! Shopping" for "cappuccino maker"

Observations
- Broad indexing & speedy search alone are not enough.
- An organizational view of the data is critical for effective retrieval.
- Categorized data are easy for a user to browse.
- Category taxonomies have become central to well-known web sites (Yahoo!, Lycos, ...).

Text Categorization Applications
- Web pages organized into category hierarchies
- Journal articles indexed by subject categories (e.g., the Library of Congress, MEDLINE, etc.)
- Responses to Census Bureau occupation questions
- Patents archived using the International Patent Classification
- Patient records coded using international insurance categories
- E-mail message filtering
- News events tracked and filtered by topic

Cost of Manual Text Categorization
- Yahoo!: ~200 (?) people for manual labeling of Web pages, using a hierarchy of 500,000 categories
- MEDLINE (National Library of Medicine): $2 million/year for manual indexing of journal articles using Medical Subject Headings (18,000 categories)
- Mayo Clinic: $1.4 million annually for coding patient-record events using the International Classification of Diseases (ICD) for billing insurance companies
- US Census Bureau, decennial census (1990: 22 million responses): 232 industry categories and 504 occupation categories; $15 million if fully done by hand

What does it take to compete?
- Suppose you were starting a web search company: what would it take to compete with established engines?
- You need to be able to establish a competing hierarchy fast.
- You will need a relatively cheap solution. (Unless you have investors who want to pay millions of dollars just to get off the ground.)

Why not a semi-automatic text categorization tool?
- Humans can encode knowledge of what constitutes membership in a category.
- This encoding can then be automatically applied by a machine to categorize new examples.
- For example...

Rule-based Approach to TC
- Text in a Web page: "Saeco revolutionized espresso brewing a decade ago by introducing Saeco SuperAutomatic machines, which go from bean to coffee at the touch of a button. The all-new Saeco Vienna Super-Automatic home coffee and cappucino machine combines top quality with low price!"
- Rules:
  - Rule 1. (espresso or coffee or cappucino) and machine* → Coffee Maker
  - Rule 2. automat* and answering and machine* → Phone
  - Rule ...
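To make the rule format concrete, here is a minimal sketch (not from the original slides) of how such hand-written rules might be applied by a machine; the regular expressions, rule encoding, and helper names are illustrative assumptions, with the slide's "machine*" wildcard rendered as an unanchored regex prefix.

```python
import re

# Illustrative hand-written rules: each category maps to a boolean test
# over the (lowercased) document text.
RULES = {
    "Coffee Maker": lambda t: (re.search(r"\b(espresso|coffee|cappucino)\b", t)
                               and re.search(r"\bmachine", t)),
    "Phone":        lambda t: (re.search(r"\bautomat", t)
                               and re.search(r"\banswering\b", t)
                               and re.search(r"\bmachine", t)),
}

def categorize(text):
    """Return every category whose rule fires on the text."""
    t = text.lower()
    return [category for category, rule in RULES.items() if rule(t)]

print(categorize("The all-new Saeco Vienna Super-Automatic home coffee "
                 "and cappucino machine combines top quality with low price!"))
# -> ['Coffee Maker']
```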

Expert System for TC (late 1980s)

Defining Rules By Hand
- Experience has shown this to be:
  - too time consuming
  - too difficult
  - prone to inconsistency issues (as the rule set gets large)

Replace Knowledge Engineering with a Statistical Learner
- Knowledge Engineering vs. Statistical Learning

For the US Census Bureau Decennial Census (1990)
- 232 industry categories and 504 occupation categories
- $15 million if fully done by hand
- Define classification rules manually: Expert System AIOCS
  - Development time: 192 person-months (2 people, 8 years)
  - Accuracy = 47%
- vs. learn a classification function: Nearest Neighbor classification (Creecy '92: 1-NN)
  - Development time: 4 person-months (Thinking Machines)
  - Accuracy = 60%

A Text Categorization Example

Predicting Topics of News Stories
- Given: a collection of example news stories already labeled with a category (topic).
- Task: predict the category for news stories not yet labeled.
- For our example, we'll only get to see the headline of the news story.
- We'll represent categories using colors. (All examples with the same color belong to the same category.)

Our Labeled Examples

What to predict before seeing the document?

Predict with Evidence
- "Senate Panel Studies Loan Rate, Set Aside Plans"

The Actual Topic
- "Senate Panel Studies Loan Rate, Set Aside Plans"

A few practical details ...

Handling Documents with Multiple Classes

Representing Documents
- Usually, an example is represented as a series of feature-value pairs. The features can be arbitrarily abstract (as long as they are easily computable) or very simple.
- For example, the features could be the set of all words, and the values their number of occurrences in a particular document.
- Representation: "Japan Firm Plans to Sell U.S. Farmland to Japanese" → Farmland:1 Firm:1 Japan:1 Japanese:1 Plans:1 Sell:1 To:2 U.S.:1
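A minimal sketch of this word-count representation, assuming simple whitespace tokenization and lowercasing (the function name is illustrative, not from the lecture):

```python
from collections import Counter

def bag_of_words(text):
    """Map a document to {word: number of occurrences} feature-value pairs."""
    return Counter(text.lower().split())

print(bag_of_words("Japan Firm Plans to Sell U.S. Farmland to Japanese"))
# Counter({'to': 2, 'japan': 1, 'firm': 1, 'plans': 1, 'sell': 1,
#          'u.s.': 1, 'farmland': 1, 'japanese': 1})
```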

Approaches to Text Categorization (emphasizing the "k-Nearest Neighbor approach")

1-Nearest Neighbor
- Looking back at our example: did anyone try to find the most similar labeled item and then just guess the same color? This is 1-Nearest Neighbor.

Key Components of Nearest Neighbor
- "Similar" item: we need a functional definition of "similarity" if we want to apply this automatically.
- How many neighbors do we consider?
- Does each neighbor get the same weight?
- All categories in the neighborhood, or the most frequent only?
- How do we make the final decision?

Nearest Neighbor Classification
- Also known as Instance-Based Learning or Lazy Learning
- A well-known approach to pattern recognition:
  - introduced by Fix and Hodges (1951)
  - theoretical error-bound analysis by Cover & Hart (1967)
  - applied to text categorization in the early '90s
- A strong baseline in benchmark evaluations; among the top-performing methods in TC evaluations
- Scalable to large TC applications

1-Nearest Neighbor (graphically)

K-Nearest Neighbor using a majority voting scheme

K-NN using a weighted-sum voting scheme

Category Scoring for Weighted-Sum
- The score for a category is the sum of the similarity scores between the point to be classified and all of its k neighbors that belong to the given category. To restate:

  score(x, c) = Σ_{d ∈ kNN(x)} sim(x, d) · I(d, c)

  where x is the new point; c is a class (e.g. black or white); d is a classified point among the k nearest neighbors of x; sim(x, d) is the similarity between x and d; I(d, c) = 1 iff point d belongs to class c, and I(d, c) = 0 otherwise.
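A direct translation of this scoring rule into code might look like the sketch below; the (document, label) training-set layout and the pairwise `sim` function are assumptions, not part of the lecture.

```python
from collections import defaultdict

def category_scores(x, training_set, sim, k):
    """Weighted-sum scoring: score(x, c) = sum of sim(x, d) over the
    k nearest neighbors d of x whose label is c (i.e., I(d, c) = 1)."""
    # training_set is a list of (document, label) pairs; with a similarity
    # measure, the "nearest" neighbors are those with the largest sim(x, d).
    neighbors = sorted(training_set, key=lambda dc: sim(x, dc[0]), reverse=True)[:k]
    scores = defaultdict(float)
    for d, c in neighbors:
        scores[c] += sim(x, d)
    return dict(scores)
```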

The kth Nearest Neighbor Rule (Fix and Hodges, 1951)
- Define a metric to measure "closeness" between any two points.
- Fix k (empirically chosen).
- Given a new point x and a training set of classified points:
  - find the k nearest neighbors (kNN) to x in the training set;
  - classify x as class y if more of the nearest neighbors are in class y than in any other class (majority vote).
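The majority-vote rule admits an equally short sketch, under the same assumed data layout as the weighted-sum example above (ties are broken arbitrarily here, which the original rule leaves unspecified):

```python
from collections import Counter

def knn_classify(x, training_set, sim, k):
    """Fix-and-Hodges majority vote: classify x as the class that occurs
    most often among its k nearest (most similar) training points."""
    neighbors = sorted(training_set, key=lambda dc: sim(x, dc[0]), reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```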

kNN for Text Categorization (Yang, SIGIR-1994)
- Represent documents as points (vectors).
- Define a similarity measure for pairwise documents.
- Tune the parameter k to optimize classification effectiveness.
- Choose a voting scheme (e.g., weighted sum) for scoring categories.
- Threshold on the scores for classification decisions.

Thresholding for Classification Decisions
- Alternative thresholding strategies:
  - Rcut: for each document to be categorized, rank the candidate categories by score and assign YES to the top-m categories (where m is some fixed number).
  - Pcut: applies only when we have a whole batch of documents to be categorized. Make the category assignments proportional to the category distribution in the training set (i.e., if 1/4 of the training documents were in the category "Coffee Maker", then we will assign 1/4 of the documents in this batch to the "Coffee Maker" category).
  - Scut: for each category, choose a threshold score (empirically). Any document with a category score that surpasses its respective threshold will be predicted to be a member of that category.

Possible Similarity Measures
- Cosine similarity
- Euclidean distance
- Kernel functions
- Kullback-Leibler distance (distance between two probability distributions)
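As one example, cosine similarity over the sparse word-count dictionaries built earlier could be written as the sketch below; plugged in as the `sim` argument of the kNN sketches above, it completes a minimal end-to-end classifier.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two sparse vectors, each represented
    as a dict mapping term -> weight (e.g., a word count)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```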

Key Components (revisited)
- Functional definition of "similarity": e.g. cosine, Euclidean distance, kernel functions, ...
- How many neighbors do we consider? The value of k is determined empirically (see the methodology section).
- Does each neighbor get the same weight? Weighted-sum or not.
- All categories in the neighborhood, or the most frequent only?
- How do we make the final decision? Rcut, Pcut, or Scut.

Approaches to Automated Text Categorization
- Regression based on Least Squares Fit (1991)
- Nearest Neighbor Classification (1992)
- Bayesian Probabilistic Models (1992)
- Symbolic Rule Induction (1994)
- Neural Networks (1995)
- Rocchio approach (traditional IR, 1996)
- Support Vector Machines (1997)
- Boosting or Bagging (1997)
- Hierarchical Language Modeling (1998)
- First-Order-Logic Rule Induction (1999)
- Maximum Entropy (1999)
- Hidden Markov Models (1999)
- Error-Correcting Output Coding (1999)
- ...

Evaluation Methodology and Performance Metrics

How are we doing?
- Suppose we have a set D of labeled documents that we use as our training set for 1-NN.
- We need an idea of how well this system will perform in the future.
- So, we go through D and make predictions for each document. What will our accuracy be?
- Is this a fair assessment of its performance? (i.e., is it likely that the performance will be within a small tolerance of what we've estimated?)

Hold-out Sets
- Estimating our performance on data we used in training is likely to give us a very skewed estimate of the final system's performance.
- As a result, if we have a set of labeled data D, we typically split it into a training set, Dtrain, and a hold-out set, Dtest.
- Dtrain is the only data given to the classifier for training. Dtest can then be used to estimate performance independently.
- Once performance estimates have been used to choose the best classifier, the final classifier is usually trained over all of D before deployment (more data generally means better performance, so our estimate was pessimistic).
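A minimal sketch of such a split, assuming an 80/20 division and a fixed shuffle seed (both arbitrary choices, not from the lecture):

```python
import random

def split_holdout(D, test_fraction=0.2, seed=0):
    """Shuffle the labeled data D and split it into (Dtrain, Dtest)."""
    data = list(D)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]
```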

Empirically tuning parameters
- When parameters need to be empirically tuned as part of training (e.g., choosing k), the performance of each possible choice needs to be estimated.
- For the same reasons as above, the classifier cannot simply check the performance on Dtrain to estimate future performance.
- Therefore Dtrain is usually subdivided into a portion used to train and another portion used for picking optimal parameters (usually referred to as the validation set).
- After setting the parameters, the classifier trains over all of Dtrain before returning to the function that will evaluate its performance over Dtest.
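For example, choosing k could look like the sketch below. The candidate values and the 75/25 split are illustrative, and `classify(x, train, k)` stands for any kNN classifier, e.g. the majority-vote sketch earlier with a similarity measure bound in: `lambda x, train, k: knn_classify(x, train, cosine_sim, k)`.

```python
import random

def tune_k(Dtrain, classify, candidate_ks=(1, 3, 5, 10, 30), seed=0):
    """Hold out part of Dtrain as a validation set and return the k
    with the highest validation accuracy."""
    data = list(Dtrain)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * 0.75)            # 75/25 train/validation split
    sub_train, validation = data[:cut], data[cut:]

    def accuracy(k):
        hits = sum(classify(x, sub_train, k) == y for x, y in validation)
        return hits / len(validation)

    return max(candidate_ks, key=accuracy)
```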

Classification Performance Measures
- Given n test documents and m classes in consideration, a classifier makes n × m binary decisions. A two-by-two contingency table can be computed for each class:

                   truly YES    truly NO
    predicted YES      a            b
    predicted NO       c            d

- Recall = a/(a+c), where a+c > 0 (otherwise undefined). Did we find all of those that belonged in the class?
- Precision = a/(a+b), where a+b > 0 (otherwise undefined). Of the times we predicted "in class", how often were we correct?
- Accuracy = (a+d)/n. When one class is overwhelmingly in the majority, this may not paint an accurate picture.
- Others: miss, false alarm (fallout), error, F-measure, break-even point, ...
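These formulas translate directly to code; the counts in the example call are invented to show how a heavily skewed class can yield high accuracy alongside mediocre recall.

```python
def per_class_metrics(a, b, c, d):
    """Metrics from one class's two-by-two contingency table:
    a = predicted YES & truly YES,  b = predicted YES & truly NO,
    c = predicted NO  & truly YES,  d = predicted NO  & truly NO."""
    n = a + b + c + d
    recall = a / (a + c) if (a + c) > 0 else None      # undefined otherwise
    precision = a / (a + b) if (a + b) > 0 else None   # undefined otherwise
    accuracy = (a + d) / n
    return recall, precision, accuracy

print(per_class_metrics(a=40, b=10, c=20, d=930))
# -> (0.6666666666666666, 0.8, 0.97)
```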

Summary

Pros of kNN
- Simple and effective (among the top 5 in benchmark evaluations)
- Non-linear classifier (vs. linear)
- Local estimation (vs. global)
- Non-parametric (very few assumptions about the data)
- Reasonable similarity measures (borrowed from IR)
- Computation (time & space) linear in the size of the training data
- Low cost for frequent re-training, i.e., when categories and training documents need to be updated (common in Web environments and e-commerce applications)

Cons of kNN
- Online response is typically slower than for eager learning algorithms (a trade-off between off-line training cost and online search cost)
- Scores are not normalized (not probabilities); comparing directly to, and combining with, the scores of other classifiers is an open problem
- Output is not good at explaining why a category is relevant (compared to a decision tree, for example; take this with a grain of salt)

Summary
- Organizational view of data → effective navigation.
- TC studies how to automatically classify documents by concepts.
- kNN is one of the most effective methods (and arguably the simplest).
- Many algorithms have been developed for TC in machine learning.
- More operational TC systems in large applications are needed. Automated TC for Yahoo!, etc., is an open challenge.

Other Resources & Notes
- "Machine Learning in Automated Text Categorization", F. Sebastiani, ACM Computing Surveys, Vol. 34, No. 1, 2002. To access, go to www.acm.org and follow the links: Digital Library → Journals → ACM Computing Surveys.
- Much of the material in this lecture was borrowed with permission from Yiming Yang's lectures: www.cs.cmu.edu/~yiming. Her page has links to many other useful papers and text categorization resources.
