Bayesian Learning.ppt

Bayesian Learning
- Provides practical learning algorithms: Naive Bayes learning, Bayesian belief network learning; combines prior knowledge (prior probabilities) with observed data
- Provides foundations for machine learning: evaluating learning algorithms, guiding the design of new algorithms, learning from models (meta learning)

Bayesian Classification: Why?
- Probabilistic learning: calculates explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems
- Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
- Probabilistic prediction: predicts multiple hypotheses, weighted by their probabilities
- Standard: even when Bayesian methods are computationally intractable, they provide a standard of optimal decision making against which other methods can be measured

Basic Formulas for Probabilities
- Product rule: the probability P(A and B) of a conjunction of two events A and B: P(A and B) = P(A | B) P(B) = P(B | A) P(A)
- Sum rule: the probability of a disjunction of two events A and B: P(A or B) = P(A) + P(B) - P(A and B)
- Theorem of total probability: if events A1, ..., An are mutually exclusive with sum_i P(Ai) = 1, then P(B) = sum_i P(B | Ai) P(Ai)
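A quick numeric check of these rules, with hypothetical probabilities chosen only so the arithmetic is easy to follow:

    # Hypothetical numbers, chosen only to make the arithmetic easy to verify.
    p_a = 0.4                  # P(A)
    p_b_given_a = 0.3          # P(B | A)
    p_b_given_not_a = 0.2      # P(B | not A)

    # Product rule: P(A and B) = P(B | A) P(A)
    p_a_and_b = p_b_given_a * p_a                              # 0.12

    # Theorem of total probability: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)      # 0.24

    # Sum rule: P(A or B) = P(A) + P(B) - P(A and B)
    p_a_or_b = p_a + p_b - p_a_and_b                           # 0.52

    # Bayes rule follows from the product rule: P(A | B) = P(B | A) P(A) / P(B)
    p_a_given_b = p_b_given_a * p_a / p_b                      # 0.5

    print(p_a_and_b, p_b, p_a_or_b, p_a_given_b)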

Basic Approach
- Bayes rule: P(h | D) = P(D | h) P(h) / P(D)
- P(h) = prior probability of hypothesis h
- P(D) = prior probability of training data D
- P(h | D) = probability of h given D (posterior probability)
- P(D | h) = probability of D given h (likelihood of D given h)
- The goal of Bayesian learning: find the most probable hypothesis given the training data (the Maximum A Posteriori hypothesis): h_MAP = argmax_{h in H} P(h | D) = argmax_{h in H} P(D | h) P(h)

An Example
- Does the patient have cancer or not? (worked through in the sketch below)
- A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, 0.008 of the entire population have this cancer.

MAP Learner
- For each hypothesis h in H, calculate the posterior probability P(h | D) = P(D | h) P(h) / P(D)
- Output the hypothesis h_MAP with the highest posterior probability
- Comments: computationally intensive; provides a standard for judging the performance of learning algorithms; choosing P(h) and P(D | h) reflects our prior knowledge about the learning task
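A minimal sketch of the MAP decision for the cancer example above; the unnormalised products P(D | h) P(h) are enough to pick the MAP hypothesis:

    # Cancer example: prior P(cancer) = 0.008, sensitivity P(+ | cancer) = 0.98,
    # specificity P(- | no cancer) = 0.97, so the false-positive rate is 0.03.
    p_cancer = 0.008
    p_pos_given_cancer = 0.98
    p_pos_given_no_cancer = 1 - 0.97

    # Unnormalised posteriors P(D | h) P(h) for a positive test result.
    score_cancer = p_pos_given_cancer * p_cancer               # 0.98 * 0.008 = 0.00784
    score_no_cancer = p_pos_given_no_cancer * (1 - p_cancer)   # 0.03 * 0.992 = 0.02976

    # MAP hypothesis: "no cancer", despite the positive test, because the prior is so small.
    h_map = "cancer" if score_cancer > score_no_cancer else "no cancer"

    # Normalising gives the posterior probability of cancer, roughly 0.21.
    p_cancer_given_pos = score_cancer / (score_cancer + score_no_cancer)
    print(h_map, round(p_cancer_given_pos, 3))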

Bayes Optimal Classifier
- Question: given a new instance x, what is its most probable classification?
- h_MAP(x) is not necessarily the most probable classification!
- Example: let P(h1 | D) = .4, P(h2 | D) = .3, P(h3 | D) = .3. Given new data x, we have h1(x) = +, h2(x) = -, h3(x) = -. What is the most probable classification of x? (see the sketch below)
- Bayes optimal classification: v_OB = argmax_{vj in V} sum_{hi in H} P(vj | hi) P(hi | D)
- Example: P(h1 | D) = .4, P(- | h1) = 0, P(+ | h1) = 1; P(h2 | D) = .3, P(- | h2) = 1, P(+ | h2) = 0; P(h3 | D) = .3, P(- | h3) = 1, P(+ | h3) = 0

Naive Bayes Learner
- Assume a target function f: X -> V, where each instance x is described by attributes <a1, a2, ..., an>
- The most probable value of f(x) is v_MAP = argmax_{vj in V} P(vj | a1, ..., an) = argmax_{vj in V} P(a1, ..., an | vj) P(vj)
- Naive Bayes assumption (the attributes are conditionally independent given the class): P(a1, ..., an | vj) = prod_i P(ai | vj), giving v_NB = argmax_{vj in V} P(vj) prod_i P(ai | vj)
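A small sketch of the Bayes optimal vote for the example above: h1 alone is the MAP hypothesis and predicts +, but the posterior-weighted vote over all three hypotheses gives -.

    # Posterior weights of the hypotheses and their predictions on x, from the example above.
    posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
    predictions = {"h1": "+", "h2": "-", "h3": "-"}

    # Bayes optimal classification: weight each hypothesis's vote by P(h | D).
    votes = {"+": 0.0, "-": 0.0}
    for h, p_h in posteriors.items():
        votes[predictions[h]] += p_h

    # votes == {'+': 0.4, '-': 0.6}: the Bayes optimal classification is '-',
    # even though the single MAP hypothesis h1 predicts '+'.
    print(votes, max(votes, key=votes.get))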

Bayesian Classification
- The classification problem may be formalized using a-posteriori probabilities: P(C | X) = the probability that the sample tuple X = <x1, ..., xk> is of class C
- E.g. P(class = N | outlook = sunny, windy = true, ...)
- Idea: assign to sample X the class label C such that P(C | X) is maximal

Estimating A-Posteriori Probabilities
- Bayes theorem: P(C | X) = P(X | C) P(C) / P(X)
- P(X) is constant for all classes
- P(C) = relative frequency of class C samples
- The C that maximises P(C | X) is the C that maximises P(X | C) P(C)
- Problem: computing P(X | C) directly is infeasible!

Naive Bayesian Classification
- Naive assumption: attribute independence, P(x1, ..., xk | C) = P(x1 | C) ... P(xk | C)
- If the i-th attribute is categorical, P(xi | C) is estimated as the relative frequency of samples having value xi for the i-th attribute within class C
- If the i-th attribute is continuous, P(xi | C) is estimated through a Gaussian density function (sketched below)
- Computationally easy in both cases

Naive Bayesian Classifier (II)
- Given a training set, we can compute the probabilities
- Play-tennis example: estimating P(xi | C)
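For a continuous attribute, the class-conditional likelihood is usually modelled by fitting a Gaussian per class; a minimal sketch with hypothetical sample values:

    import math

    # Hypothetical values of one continuous attribute observed in class C.
    values_in_class = [66.0, 70.0, 68.0, 72.0, 69.0]

    # Fit a Gaussian to the class: sample mean and variance.
    mu = sum(values_in_class) / len(values_in_class)
    var = sum((v - mu) ** 2 for v in values_in_class) / len(values_in_class)

    def gaussian_likelihood(x, mu, var):
        """Estimate P(xi | C) with the fitted Gaussian density."""
        return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

    # The density value plays the role of the relative frequency used for categorical attributes.
    print(gaussian_likelihood(71.0, mu, var))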

Example: Naive Bayes
- Predict playing tennis on a day with the conditions <outlook = sunny, temperature = cool, humidity = high, wind = strong>, i.e. compare P(v | o = sunny, t = cool, h = high, w = strong) for each value v of PlayTennis, using the following training data:

Day  Outlook   Temperature  Humidity  Wind    Play Tennis
1    Sunny     Hot          High      Weak    No
2    Sunny     Hot          High      Strong  No
3    Overcast  Hot          High      Weak    Yes
4    Rain      Mild         High      Weak    Yes
5    Rain      Cool         Normal    Weak    Yes
6    Rain      Cool         Normal    Strong  No
7    Overcast  Cool         Normal    Strong  Yes
8    Sunny     Mild         High      Weak    No
9    Sunny     Cool         Normal    Weak    Yes
10   Rain      Mild         Normal    Weak    Yes
11   Sunny     Mild         Normal    Strong  Yes
12   Overcast  Mild         High      Strong  Yes
13   Overcast  Hot          Normal    Weak    Yes
14   Rain      Mild         High      Strong  No

- From the table we have P(yes) = 9/14 and P(no) = 5/14, with each P(xi | v) estimated as a relative frequency within the class (worked through in the sketch below)

The Independence Hypothesis
- makes computation possible
- yields optimal classifiers when satisfied
- but is seldom satisfied in practice, as attributes (variables) are often correlated
- Attempts to overcome this limitation: Bayesian networks, which combine Bayesian reasoning with causal relationships between attributes; and decision trees, which reason on one attribute at a time, considering the most important attributes first
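A minimal sketch of the play-tennis computation: each P(xi | v) is estimated as a relative frequency from the table above, and the unnormalised scores P(v) prod_i P(xi | v) are compared for the query day.

    # Play-tennis training data from the table above: (outlook, temp, humidity, wind, play).
    data = [
        ("sunny", "hot", "high", "weak", "no"), ("sunny", "hot", "high", "strong", "no"),
        ("overcast", "hot", "high", "weak", "yes"), ("rain", "mild", "high", "weak", "yes"),
        ("rain", "cool", "normal", "weak", "yes"), ("rain", "cool", "normal", "strong", "no"),
        ("overcast", "cool", "normal", "strong", "yes"), ("sunny", "mild", "high", "weak", "no"),
        ("sunny", "cool", "normal", "weak", "yes"), ("rain", "mild", "normal", "weak", "yes"),
        ("sunny", "mild", "normal", "strong", "yes"), ("overcast", "mild", "high", "strong", "yes"),
        ("overcast", "hot", "normal", "weak", "yes"), ("rain", "mild", "high", "strong", "no"),
    ]
    query = ("sunny", "cool", "high", "strong")   # the new day to classify

    def relative_freq(attr_index, value, label):
        """Estimate P(xi = value | v = label) as a relative frequency within the class."""
        in_class = [row for row in data if row[-1] == label]
        return sum(1 for row in in_class if row[attr_index] == value) / len(in_class)

    scores = {}
    for label in ("yes", "no"):
        prior = sum(1 for row in data if row[-1] == label) / len(data)   # 9/14 or 5/14
        likelihood = 1.0
        for i, value in enumerate(query):
            likelihood *= relative_freq(i, value, label)
        scores[label] = prior * likelihood

    # scores come out around {'yes': 0.0053, 'no': 0.0206}, so the prediction is "no".
    print(scores, max(scores, key=scores.get))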

Naive Bayes Algorithm
- Naive_Bayes_Learn(examples): for each target value vj, estimate P(vj); for each attribute value ai of each attribute a, estimate P(ai | vj)
- Classify_New_Instance(x): v_NB = argmax_{vj in V} P(vj) prod_{ai in x} P(ai | vj)
- Typical estimation of P(ai | vj) is the m-estimate (nc + m p) / (n + m), where n: examples with v = vj; nc: examples with v = vj and a = ai; p: prior estimate for P(ai | vj); m: the weight given to the prior (sketched below)

Bayesian Belief Networks
- The Naive Bayes assumption of conditional independence is too restrictive, but the problem is intractable without some such assumption
- A Bayesian belief network (Bayesian net) describes conditional independence among subsets of variables (attributes), combining prior knowledge about dependencies among variables with observed training data
- Bayesian net: node = variable; arc = dependency; a DAG, with the direction of an arc representing causality
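A small sketch of the m-estimate above, using the play-tennis count for P(outlook = sunny | play = yes) (n = 9, nc = 2), a uniform prior p = 1/3 over the three outlook values, and a hypothetical weight m = 3:

    def m_estimate(nc, n, p, m):
        """m-estimate of P(ai | vj): blends the observed relative frequency nc/n
        with a prior estimate p, weighted by an equivalent sample size m."""
        return (nc + m * p) / (n + m)

    # P(outlook = sunny | play = yes): 2 of the 9 "yes" days are sunny.
    print(m_estimate(nc=2, n=9, p=1/3, m=3))   # (2 + 1) / 12 = 0.25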

Bayesian Networks: Multi-Variables with Dependency
- A Bayesian belief network (Bayesian net) describes conditional independence among subsets of variables (attributes), combining prior knowledge about dependencies among variables with observed training data
- Bayesian net: node = variable, and each variable has a finite set of mutually exclusive states; arc = dependency; a DAG, with the direction of an arc representing causality
- To each variable A with parents B1, ..., Bn there is attached a conditional probability table P(A | B1, ..., Bn)

Bayesian Belief Networks
- Nodes: Age, Occ (occupation), Income, Buy X, Interested in Insurance
- Age, Occupation and Income determine whether the customer will buy this product. Given that the customer buys the product, whether there is interest in insurance is now independent of Age, Occupation and Income.
- P(Age, Occ, Inc, Buy, Int) = P(Age) P(Occ) P(Inc) P(Buy | Age, Occ, Inc) P(Int | Buy)
- Current state of the art: given the structure and the probabilities, existing algorithms can handle inference with categorical values and a limited representation of numerical values
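A minimal sketch of how the factorisation above turns a query on the customer network into arithmetic over the local tables; the CPT numbers are hypothetical placeholders, only the structure comes from the slide.

    # Hypothetical local distributions for the customer network above
    # (variables binarised only to keep the sketch short).
    p_age_old = 0.3       # P(Age = old)
    p_occ_pro = 0.4       # P(Occ = professional)
    p_inc_high = 0.2      # P(Inc = high)

    def p_buy(age_old, occ_pro, inc_high):
        """Hypothetical CPT P(Buy = yes | Age, Occ, Inc)."""
        if occ_pro and inc_high:
            return 0.8
        return 0.3 if (age_old or inc_high) else 0.1

    p_int_given_buy = {True: 0.6, False: 0.05}    # P(Int = yes | Buy)

    # Joint probability of one full assignment, via the factorisation
    # P(Age, Occ, Inc, Buy, Int) = P(Age) P(Occ) P(Inc) P(Buy | Age, Occ, Inc) P(Int | Buy).
    joint = (p_age_old * p_occ_pro * p_inc_high
             * p_buy(True, True, True)            # Buy = yes
             * p_int_given_buy[True])             # Int = yes
    print(joint)                                  # 0.3 * 0.4 * 0.2 * 0.8 * 0.6 = 0.01152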

General Product Rule
- For a Bayesian network over variables x1, ..., xn: P(x1, ..., xn) = prod_i P(xi | Parents(xi))

Nodes as Functions
- A node in a BN is a conditional distribution function: input = the parents' state values, output = a distribution over its own values
- Example: a node X with parents A and B and states (l, m, h) stores one distribution per combination of parent states, e.g. P(X | A = a, B = b) = (0.1, 0.3, 0.6)

Special Case: Naive Bayes
- A root h with children e1, e2, ..., en: P(e1, e2, ..., en, h) = P(h) P(e1 | h) ... P(en | h)

Inference in Bayesian Networks
- Network over Age, Income, House Owner, EU, Voting Pattern, Newspaper Preference, Living Location
- How likely are elderly rich people to buy the Sun? P(paper = Sun | Age > 60, Income > 60k)
- How likely are elderly rich people who voted Labour to buy the Daily Mail? P(paper = DM | Age > 60, Income > 60k, v = labour)

Bayesian Learning
- Example network: Burglary, Earthquake, Alarm, Call, Newscast, with data cases listing values for (B, E, A, C, N)
- Input: fully or partially observable data cases
- Output: parameters, and also structure
- Learning methods: EM (Expectation Maximisation), which uses the current approximation of the parameters to estimate the filled-in data and then uses the filled-in data to update the parameters (ML); gradient ascent training; Gibbs sampling (MCMC)
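To make the inference queries concrete, here is a sketch of inference by enumeration on the Burglary/Earthquake/Alarm/Call/Newscast structure from the last slides; all CPT numbers are hypothetical placeholders, only the graph comes from the slide.

    import itertools

    # Hypothetical CPTs for Burglary -> Alarm <- Earthquake, Alarm -> Call,
    # Earthquake -> Newscast. The numbers are made up for illustration.
    P_B = 0.01
    P_E = 0.02
    P_A = {(True, True): 0.95, (True, False): 0.94,
           (False, True): 0.29, (False, False): 0.001}   # P(Alarm = true | B, E)
    P_C = {True: 0.90, False: 0.05}                      # P(Call = true | Alarm)
    P_N = {True: 0.80, False: 0.01}                      # P(Newscast = true | Earthquake)

    def joint(b, e, a, c, n):
        """General product rule: multiply each node's CPT entry given its parents."""
        def pr(p, x):   # probability of boolean value x when P(true) = p
            return p if x else 1 - p
        return (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
                * pr(P_C[a], c) * pr(P_N[e], n))

    def query_burglary_given_call():
        """P(Burglary = true | Call = true), summing the joint over the hidden variables."""
        num = den = 0.0
        for b, e, a, n in itertools.product([True, False], repeat=4):
            p = joint(b, e, a, True, n)
            den += p
            if b:
                num += p
        return num / den

    print(query_burglary_given_call())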
