Bayesian Learning

- Provides practical learning algorithms:
  - Naive Bayes learning
  - Bayesian belief network learning
  - combining prior knowledge (prior probabilities) with observed data
- Provides foundations for machine learning:
  - evaluating learning algorithms
  - guiding the design of new algorithms
  - learning from models: meta-learning

Bayesian Classification: Why?

- Probabilistic learning: calculates explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems.
- Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct. Prior knowledge can be combined with observed data.
- Probabilistic prediction: predicts multiple hypotheses, weighted by their probabilities.
- Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured.

Basic Formulas for Probabilities

- Product rule: the probability of a conjunction of two events A and B:
  P(A ∧ B) = P(A | B) P(B) = P(B | A) P(A)
- Sum rule: the probability of a disjunction of two events A and B:
  P(A ∨ B) = P(A) + P(B) - P(A ∧ B)
- Theorem of total probability: if events A1, ..., An are mutually exclusive with Σi P(Ai) = 1, then
  P(B) = Σi P(B | Ai) P(Ai)

Basic Approach

- Bayes rule:
  P(h | D) = P(D | h) P(h) / P(D)
- P(h) = prior probability of hypothesis h
- P(D) = prior probability of training data D
- P(h | D) = probability of h given D (the posterior)
- P(D | h) = probability of D given h (the likelihood of D given h)
- The goal of Bayesian learning: the most probable hypothesis given the training data (the Maximum A Posteriori hypothesis):
  hMAP = argmax over h in H of P(h | D) = argmax over h in H of P(D | h) P(h)
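A minimal sketch of the rule in code (the hypothesis names and numbers are made up for illustration): the posterior of each hypothesis is computed with Bayes rule, with P(D) obtained from the theorem of total probability over mutually exclusive, exhaustive hypotheses.

def posteriors(prior, likelihood):
    # prior[h] = P(h), likelihood[h] = P(D | h)
    # P(D) = sum_h P(D | h) P(h)   (theorem of total probability)
    p_D = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / p_D for h in prior}

post = posteriors(prior={"h1": 0.7, "h2": 0.3},
                  likelihood={"h1": 0.1, "h2": 0.5})
print(post)                      # posteriors P(h | D)
print(max(post, key=post.get))   # hMAP, the Maximum A Posteriori hypothesis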

An Example

- Does the patient have cancer or not?
- A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, 0.008 of the entire population have this cancer.
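Working the numbers through Bayes rule makes the MAP decision explicit (this step is implicit in the slide):

P(+ | cancer) P(cancer) = 0.98 × 0.008 = 0.0078
P(+ | ¬cancer) P(¬cancer) = 0.03 × 0.992 = 0.0298
P(cancer | +) = 0.0078 / (0.0078 + 0.0298) ≈ 0.21

So hMAP = ¬cancer even after a positive test: the very low prior outweighs the test's accuracy.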

MAP Learner

- For each hypothesis h in H, calculate the posterior probability:
  P(h | D) = P(D | h) P(h) / P(D)
- Output the hypothesis hMAP with the highest posterior probability.
- Comments:
  - Computationally intensive.
  - Provides a standard for judging the performance of learning algorithms.
  - Choosing P(h) and P(D | h) reflects our prior knowledge about the learning task.

Bayes Optimal Classifier

- Question: given a new instance x, what is its most probable classification?
- hMAP(x) is not necessarily the most probable classification!
- Example: let P(h1 | D) = 0.4, P(h2 | D) = 0.3, P(h3 | D) = 0.3. Given new data x, we have h1(x) = +, h2(x) = -, h3(x) = -. What is the most probable classification of x?
- Bayes optimal classification:
  argmax over vj in V of Σ over hi in H of P(vj | hi) P(hi | D)
- Example:
  P(h1 | D) = 0.4, P(- | h1) = 0, P(+ | h1) = 1
  P(h2 | D) = 0.3, P(- | h2) = 1, P(+ | h2) = 0
  P(h3 | D) = 0.3, P(- | h3) = 1, P(+ | h3) = 0
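A short sketch of the computation with the numbers above (the dictionaries are just a convenient encoding, not notation from the slides):

# Bayes optimal classification: argmax_v sum_h P(v | h) P(h | D)
p_h = {"h1": 0.4, "h2": 0.3, "h3": 0.3}    # P(h | D)
p_v = {"h1": {"+": 1, "-": 0},             # P(v | h): each hypothesis
       "h2": {"+": 0, "-": 1},             # votes deterministically
       "h3": {"+": 0, "-": 1}}
score = {v: sum(p_v[h][v] * p_h[h] for h in p_h) for v in ("+", "-")}
print(score)                       # {'+': 0.4, '-': 0.6}
print(max(score, key=score.get))   # '-', even though hMAP = h1 predicts '+'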

Naive Bayes Learner

- Assume a target function f: X -> V, where each instance x is described by attributes <a1, a2, ..., an>.
- The most probable value of f(x) is:
  vMAP = argmax over vj in V of P(vj | a1, ..., an) = argmax over vj in V of P(a1, ..., an | vj) P(vj)
- Naive Bayes assumption (attributes are conditionally independent given the class):
  P(a1, ..., an | vj) = Πi P(ai | vj)

Bayesian Classification

- The classification problem may be formalized using a-posteriori probabilities: P(C | X) = the probability that the sample tuple X = <x1, ..., xk> is of class C.
- E.g. P(class = N | outlook = sunny, windy = true, ...)
- Idea: assign to sample X the class label C such that P(C | X) is maximal.

Estimating A-Posteriori Probabilities

- Bayes theorem: P(C | X) = P(X | C) P(C) / P(X)
- P(X) is constant for all classes.
- P(C) = the relative frequency of class C samples.
- The C that maximizes P(C | X) is therefore the C that maximizes P(X | C) P(C).
- Problem: computing P(X | C) directly is infeasible!

Naive Bayesian Classification

- Naive assumption: attribute independence,
  P(x1, ..., xk | C) = P(x1 | C) · ... · P(xk | C)
- If the i-th attribute is categorical: P(xi | C) is estimated as the relative frequency of samples having value xi for the i-th attribute within class C.
- If the i-th attribute is continuous: P(xi | C) is estimated through a Gaussian density function, as sketched below.
- Computationally easy in both cases.
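For the continuous case, a minimal sketch (with made-up attribute values) of estimating P(xi | C) by fitting a Gaussian per class:

import math

def gaussian_density(values, x):
    # Fit mean and variance to the attribute values seen in one class,
    # then evaluate the Gaussian density at x.
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical temperature readings observed in class C:
print(gaussian_density([20.0, 22.5, 19.0, 21.5], x=21.0))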

Naive Bayesian Classifier (II)

- Given a training set, we can compute the probabilities. Play-tennis example: estimating P(xi | C).

Example: Naive Bayes

Predict playing tennis on a day with the conditions (outlook = sunny, temperature = cool, humidity = high, wind = strong), i.e. find argmax over v of P(v | o = sunny, t = cool, h = high, w = strong), using the following training data:

Day  Outlook   Temperature  Humidity  Wind    Play Tennis
1    Sunny     Hot          High      Weak    No
2    Sunny     Hot          High      Strong  No
3    Overcast  Hot          High      Weak    Yes
4    Rain      Mild         High      Weak    Yes
5    Rain      Cool         Normal    Weak    Yes
6    Rain      Cool         Normal    Strong  No
7    Overcast  Cool         Normal    Strong  Yes
8    Sunny     Mild         High      Weak    No
9    Sunny     Cool         Normal    Weak    Yes
10   Rain      Mild         Normal    Weak    Yes
11   Sunny     Mild         Normal    Strong  Yes
12   Overcast  Mild         High      Strong  Yes
13   Overcast  Hot          Normal    Weak    Yes
14   Rain      Mild         High      Strong  No

We have:
P(Yes) = 9/14, P(No) = 5/14
P(sunny | Yes) = 2/9,  P(sunny | No) = 3/5
P(cool | Yes) = 3/9,   P(cool | No) = 1/5
P(high | Yes) = 3/9,   P(high | No) = 4/5
P(strong | Yes) = 3/9, P(strong | No) = 3/5
P(Yes) P(sunny|Yes) P(cool|Yes) P(high|Yes) P(strong|Yes) ≈ 0.0053
P(No) P(sunny|No) P(cool|No) P(high|No) P(strong|No) ≈ 0.0206
so Naive Bayes predicts Play Tennis = No.
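The whole calculation can be checked with a short script (a sketch, not code from the slides; it reproduces the numbers above from raw counts):

# Naive Bayes: score each class v by P(v) * prod_i P(a_i | v).
data = [  # (outlook, temperature, humidity, wind, play)
    ("Sunny","Hot","High","Weak","No"),     ("Sunny","Hot","High","Strong","No"),
    ("Overcast","Hot","High","Weak","Yes"), ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"),  ("Rain","Cool","Normal","Strong","No"),
    ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"), ("Rain","Mild","Normal","Weak","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
    ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
]
x = ("Sunny", "Cool", "High", "Strong")
for v in ("Yes", "No"):
    rows = [r for r in data if r[4] == v]
    score = len(rows) / len(data)                             # P(v)
    for i, a in enumerate(x):
        score *= sum(r[i] == a for r in rows) / len(rows)     # P(a_i | v)
    print(v, round(score, 4))                                 # Yes 0.0053, No 0.0206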

The Independence Hypothesis

- makes computation possible,
- yields optimal classifiers when satisfied,
- but is seldom satisfied in practice, as attributes (variables) are often correlated.
- Attempts to overcome this limitation:
  - Bayesian networks, which combine Bayesian reasoning with causal relationships between attributes;
  - decision trees, which reason on one attribute at a time, considering the most important attributes first.

Naive Bayes Algorithm

Naive_Bayes_Learn(examples):
  for each target value vj, estimate P(vj)
  for each attribute value ai of each attribute a, estimate P(ai | vj)

Classify_New_Instance(x):
  vNB = argmax over vj in V of P(vj) Π over ai in x of P(ai | vj)

Typical estimation of P(ai | vj) (the m-estimate, sketched below):
  P(ai | vj) = (nc + m·p) / (n + m)
where n = number of examples with v = vj; nc = number of those examples with a = ai; p = prior estimate for P(ai | vj); m = the weight given to the prior (equivalent sample size).
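The m-estimate in code (a sketch; the uniform prior p = 1/3 in the usage line is an assumption for illustration):

def m_estimate(nc, n, p, m):
    # nc: examples with a = ai and v = vj;  n: examples with v = vj
    # p:  prior estimate of P(ai | vj);     m: equivalent sample size
    return (nc + m * p) / (n + m)

# Smoothing P(outlook = Sunny | Yes) from the tennis data, with a
# uniform prior over the 3 outlook values and m = 3:
print(m_estimate(nc=2, n=9, p=1/3, m=3))   # (2 + 1) / (9 + 3) = 0.25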

Bayesian Belief Networks

- The Naive Bayes assumption of conditional independence is too restrictive, but learning is intractable without some such assumption.
- A Bayesian belief network (Bayesian net) describes conditional independence among subsets of variables (attributes), combining prior knowledge about dependencies among variables with observed training data.

Bayesian Networks: Multi-variables with Dependency

- Node = variable; each variable has a finite set of mutually exclusive states.
- Arc = dependency; the graph is a DAG, with the direction of an arc representing causality.
- To each variable A with parents B1, ..., Bn there is attached a conditional probability table P(A | B1, ..., Bn).

Bayesian Belief Networks: Example

- Age, Occupation and Income determine whether a customer will buy this product. Given that the customer buys the product, whether there is interest in insurance is now independent of Age, Occupation and Income.
- Network: Age, Occ, Income -> Buy -> Interested in Insurance
- P(Age, Occ, Inc, Buy, Ins) = P(Age) P(Occ) P(Inc) P(Buy | Age, Occ, Inc) P(Ins | Buy)
- Current state of the art: given the structure and the probabilities, existing algorithms can handle inference with categorical values and limited representation of numerical values.
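A sketch of evaluating that factored joint for one assignment; the structure is the slide's, but all probability values here are made up:

# P(Age, Occ, Inc, Buy, Ins) = P(Age) P(Occ) P(Inc) P(Buy|Age,Occ,Inc) P(Ins|Buy)
p_age = {"old": 0.3, "young": 0.7}
p_occ = {"tech": 0.4, "other": 0.6}
p_inc = {"high": 0.2, "low": 0.8}
p_buy = {("old", "tech", "high"): 0.9}    # P(Buy = yes | age, occ, inc); one entry shown
p_ins = {True: 0.8, False: 0.1}           # P(Ins = yes | Buy)

def joint(age, occ, inc, buy, ins):
    pb = p_buy[(age, occ, inc)]
    pi = p_ins[buy]
    return (p_age[age] * p_occ[occ] * p_inc[inc]
            * (pb if buy else 1 - pb) * (pi if ins else 1 - pi))

print(joint("old", "tech", "high", buy=True, ins=True))   # 0.3*0.4*0.2*0.9*0.8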

General Product Rule

- The joint probability factorizes over the network:
  P(x1, ..., xn) = Π over i of P(xi | Parents(Xi))

Nodes as Functions

- Input: the parents' state values. Output: a distribution over the node's own value. A node in a BN is a conditional distribution function.
- Example CPT for a node X with values {l, m, h} and parents A, B (with states a/¬a and b/¬b):
  P(X | a, b)   = (0.1, 0.3, 0.6)
  P(X | a, ¬b)  = (0.7, 0.2, 0.1)
  P(X | ¬a, b)  = (0.4, 0.4, 0.2)
  P(X | ¬a, ¬b) = (0.2, 0.5, 0.3)
  E.g. given A = a and B = b, the node outputs P(X | A = a, B = b) = (0.1, 0.3, 0.6) over (l, m, h).
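In code, a node really is a function from parent states to a distribution (a sketch using the CPT above):

# CPT of X as a function: parents' states in, distribution over {l, m, h} out.
cpt_X = {
    ("a", "b"):   {"l": 0.1, "m": 0.3, "h": 0.6},
    ("a", "~b"):  {"l": 0.7, "m": 0.2, "h": 0.1},
    ("~a", "b"):  {"l": 0.4, "m": 0.4, "h": 0.2},
    ("~a", "~b"): {"l": 0.2, "m": 0.5, "h": 0.3},
}

def node_X(a_state, b_state):
    return cpt_X[(a_state, b_state)]

print(node_X("a", "b"))   # P(X | A=a, B=b) = {'l': 0.1, 'm': 0.3, 'h': 0.6}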

Special Case: Naive Bayes

- Structure: a single class node h whose children are e1, e2, ..., en.
- P(e1, e2, ..., en, h) = P(h) P(e1 | h) · ... · P(en | h)

Inference in Bayesian Networks

- Network over Age, Income, House Owner, EU Voting Pattern, Newspaper Preference, Living Location.
- How likely are elderly rich people to buy the Sun?
  P(paper = Sun | Age > 60, Income > 60k)
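Queries like this can be answered exactly by enumeration: sum the factored joint over all assignments of the unobserved variables, then normalise. A minimal sketch on a cut-down version of the network (only Age, Income and Paper, with made-up CPTs):

p_age = {"old": 0.3, "young": 0.7}
p_inc = {"high": 0.2, "low": 0.8}
p_sun = {("old", "high"): 0.1, ("old", "low"): 0.5,     # P(Paper = Sun | Age, Income)
         ("young", "high"): 0.3, ("young", "low"): 0.7}

def joint(age, inc, paper):
    ps = p_sun[(age, inc)]
    return p_age[age] * p_inc[inc] * (ps if paper == "Sun" else 1 - ps)

# P(Paper | Age = old): Income is unobserved, so sum it out, then normalise.
score = {p: sum(joint("old", inc, p) for inc in p_inc) for p in ("Sun", "DM")}
z = sum(score.values())
print({p: round(s / z, 3) for p, s in score.items()})   # {'Sun': 0.42, 'DM': 0.58}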

Inference in Bayesian Networks

- How likely are elderly rich people who voted Labour to buy the Daily Mail?
  P(paper = DM | Age > 60, Income > 60k, v = labour)

Bayesian Learning

- Example network: Burglary, Earthquake, Alarm, Call, Newscast; training data are cases over (B, E, A, C, N), e.g. (b, e, a, c, n), ...
- Input: fully or partially observable data cases.
- Output: the parameters AND also the structure.
- Learning methods:
  - EM (Expectation Maximisation): use the current approximation of the parameters to estimate the filled-in data; use the filled-in data to update the parameters (ML). See the counting sketch below.
  - Gradient ascent training.
  - Gibbs sampling (MCMC).
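For fully observed cases, the ML parameter update is just conditional counting; EM wraps exactly this step, replacing hard counts with expected counts computed from the current parameters. A sketch of the counting step on hypothetical complete data cases:

# ML estimate of P(A = 1 | B, E) from complete cases (b, e, a, c, n).
cases = [  # hypothetical observations of (Burglary, Earthquake, Alarm, Call, Newscast)
    (1,0,1,1,0), (0,1,1,0,1), (0,0,0,0,0), (0,0,0,0,0),
    (1,1,1,1,1), (0,1,0,0,1), (0,0,1,1,0), (0,0,0,0,0),
]

def p_alarm_given(b, e):
    matching = [c for c in cases if (c[0], c[1]) == (b, e)]
    return sum(c[2] for c in matching) / len(matching)

print(p_alarm_given(0, 0))   # relative frequency of Alarm among B=0, E=0 cases: 0.25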
