Support Vector Machines and Kernels
Adapted from slides by Tim Oates
Cognition, Robotics, and Learning (CORAL) Lab
University of Maryland Baltimore County

Doing Really Well with Linear Decision Surfaces

Outline
- Prediction
- Why might predictions be wrong?
- Support vector machines: doing really well with linear models
- Kernels: making the non-linear linear

Supervised ML = Prediction
- Given training instances (x, y)
- Learn a model f such that f(x) = y
- Use f to predict y for new x
- Many variations on this basic theme
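Each step of this loop maps onto the fit/predict pattern of standard ML libraries. A minimal sketch in scikit-learn; the toy data and the choice of library are illustrative additions, not part of the slides:

```python
# Minimal fit/predict loop: learn f from (x, y) pairs, then predict y for new x.
from sklearn.svm import SVC

X_train = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]]  # toy x's
y_train = [-1, -1, 1, 1]                                    # matching y's

f = SVC(kernel="linear")  # the model f (any classifier fits this pattern)
f.fit(X_train, y_train)   # learn f such that f(x) = y on the training data
print(f.predict([[2.5, 2.5]]))  # use f to predict y for a new x -> [1]
```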

Why might predictions be wrong?
- True non-determinism: flip a biased coin with p(heads) = θ
- Estimate θ; if θ > 0.5 predict heads, else tails
- Lots of ML research on problems like this
- Learn a model; do the best you can in expectation

Why might predictions be wrong?
- Partial observability: something needed to predict y is missing from the observation x
- N-bit parity problem: x contains only N−1 of the bits (hard PO)
- x contains all N bits, but the learner ignores some of them (soft PO)
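The hard-PO case can be checked by brute force. A short sketch (N = 4 is an arbitrary choice) showing why no learner can beat chance here: every observable prefix is consistent with both labels.

```python
import itertools

# Hard partial observability: with only N-1 of the N parity bits observed,
# every observation is consistent with BOTH labels.
N = 4
for prefix in itertools.product([0, 1], repeat=N - 1):
    labels = {(sum(prefix) + hidden) % 2 for hidden in (0, 1)}
    assert labels == {0, 1}  # both parities occur for this observation
print("every (N-1)-bit observation is consistent with both labels")
```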

Why might predictions be wrong?
- True non-determinism
- Partial observability (hard, soft)
- Representational bias
- Algorithmic bias
- Bounded resources

Representational Bias
- Having the right features (x) is crucial
[Figure: the same X/O data shown under two feature representations; with the right features the classes separate]

Support Vector Machines
Doing Really Well with Linear Decision Surfaces

Strengths of SVMs
- Good generalization in theory
- Good generalization in practice
- Work well with few training instances
- Find the globally best model
- Efficient algorithms
- Amenable to the kernel trick

Linear Separators
- Training instances: x ∈ ℝⁿ, y ∈ {−1, 1}
- Parameters: w ∈ ℝⁿ, b ∈ ℝ
- Hyperplane: ⟨w, x⟩ + b = 0, i.e., w1x1 + w2x2 + … + wnxn + b = 0
- Decision function: f(x) = sign(⟨w, x⟩ + b)
- Math review, inner (dot) product: ⟨a, b⟩ = a · b = Σi ai bi = a1b1 + a2b2 + … + anbn
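The decision function is one line of code. A sketch with hypothetical w and b (any values work; these are not from the slides):

```python
import numpy as np

def f(x, w, b):
    """Decision function f(x) = sign(<w, x> + b)."""
    return np.sign(np.dot(w, x) + b)

w = np.array([1.0, -2.0])  # hypothetical weights, for illustration only
b = 0.5                    # hypothetical bias
print(f(np.array([3.0, 1.0]), w, b))  # sign(3 - 2 + 0.5) -> 1.0
```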

Intuitions
[Figure slides: the same X/O data with several candidate linear separators]

A "Good" Separator / Noise in the Observations / Ruling Out Some Separators / Lots of Noise / Maximizing the Margin / "Fat" Separators
[Figure slides: the X/O data with a well-placed separator, noisy observations, separators ruled out by the noise, and progressively fatter margins]

Why Maximize Margin?
- Increasing the margin reduces capacity, and capacity must be restricted to generalize
- m training instances can be labeled in 2^m ways
- What if the function class can separate them all? Then it shatters the training instances
- VC dimension: the largest m such that the function class can shatter some set of m points
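The shattering claim is easy to verify for lines in the plane, whose VC dimension is 3. A brute-force sketch; the very large C stands in for a hard margin:

```python
import itertools
import numpy as np
from sklearn.svm import SVC

# Lines shatter 3 non-collinear points in the plane: every one of the
# 2^3 labelings is realized by some linear separator.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
for labels in itertools.product([-1, 1], repeat=3):
    if len(set(labels)) == 1:
        continue  # one-class labelings are trivially realized
    clf = SVC(kernel="linear", C=1e9).fit(X, np.array(labels))
    assert (clf.predict(X) == labels).all()
print("all 2^3 labelings of 3 points realized by a line")
```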

VC Dimension Example
[Figure: three points labeled in all 2³ = 8 possible ways; a line realizes every labeling]

Bounding Generalization Error
- R[f] = risk (test error); Remp[f] = empirical risk (training error)
- h = VC dimension; m = number of training instances; η = probability that the bound does not hold
- R[f] ≤ Remp[f] + sqrt( (h(ln(2m/h) + 1) − ln(η/4)) / m )
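Given the bound reconstructed above (the capacity term was lost in extraction; this is the classic Vapnik form matching the slide's variable list), it is a one-liner to evaluate; the numbers are illustrative:

```python
import math

def vc_confidence(h, m, eta):
    """Capacity term: sqrt((h*(ln(2m/h) + 1) - ln(eta/4)) / m)."""
    return math.sqrt((h * (math.log(2 * m / h) + 1) - math.log(eta / 4)) / m)

# With the VC dimension h held fixed, more training data tightens the bound.
print(round(vc_confidence(h=10, m=1_000, eta=0.05), 3))    # ~0.26
print(round(vc_confidence(h=10, m=100_000, eta=0.05), 3))  # ~0.034
```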

Support Vectors
[Figure: the maximum-margin separator; the instances lying on the margin are the support vectors]

The Math
- Training instances: x ∈ ℝⁿ, y ∈ {−1, 1}
- Decision function: f(x) = sign(⟨w, x⟩ + b), with w ∈ ℝⁿ, b ∈ ℝ
- Find w and b that perfectly classify the training instances (assuming linear separability) and maximize the margin

The Math
- For perfect classification, we want yi(⟨w, xi⟩ + b) ≥ 0 for all i. Why?
- To maximize the margin, we want the w that minimizes |w|²

Dual Optimization Problem
- Maximize over α: W(α) = Σi αi − (1/2) Σi,j αi αj yi yj ⟨xi, xj⟩
- Subject to: αi ≥ 0 and Σi αi yi = 0
- Decision function: f(x) = sign(Σi αi yi ⟨x, xi⟩ + b)

What if Data Are Not Perfectly Linearly Separable?
- Cannot find w and b that satisfy yi(⟨w, xi⟩ + b) ≥ 1 for all i
- Introduce slack variables ξi: yi(⟨w, xi⟩ + b) ≥ 1 − ξi for all i
- Minimize |w|² + C Σi ξi

Strengths of SVMs
- Good generalization in theory and in practice
- Work well with few training instances
- Find the globally best model
- Efficient algorithms
- Amenable to the kernel trick …

What if the Surface is Non-Linear?
[Image: a class boundary no hyperplane can match; credit URL truncated in the source ("Image from http:/…")]

Kernel Methods
Making the Non-Linear Linear
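In SVM libraries this trade-off surfaces as the C hyperparameter multiplying the slack penalty. A sketch on noisy synthetic data (the data and C values are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
y[:3] *= -1  # mislabel a few points so the data is not perfectly separable

# C weights the slack penalty: small C tolerates margin violations,
# large C pays in |w|^2 to reduce them.
for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(C, clf.n_support_)  # number of support vectors per class
```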

When Linear Separators Fail
[Figure: X/O data in the (x1, x2) plane that no line separates]

Mapping into a New Feature Space
- Rather than run the SVM on xi, run it on Φ(xi)
- Find a non-linear separator in input space
- What if Φ(xi) is really big?
- Use kernels to compute it implicitly!
- Φ: x → X = Φ(x), e.g., Φ(x1, x2) = (x1, x2, x1², x2², x1x2)
- Image from http://web.engr.oregonstate.edu/afern/classes/cs534/

Kernels
- Find a kernel K such that K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩
- Computing K(x1, x2) should be efficient, much more so than computing Φ(x1) and Φ(x2)
- Use K(x1, x2) in the SVM algorithm rather than ⟨x1, x2⟩
- Remarkably, this is possible
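To see the mapping at work, here is a sketch applying the slide's Φ to classic XOR-style data and running a linear SVM in the lifted space (the data is our illustrative choice):

```python
import numpy as np
from sklearn.svm import SVC

def phi(x):
    """The slide's explicit map: (x1, x2, x1^2, x2^2, x1*x2)."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x1, x2 * x2, x1 * x2])

# XOR-style data: no line separates it in the original (x1, x2) space,
# but in phi-space the x1*x2 coordinate alone separates the classes.
X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])

lifted = np.array([phi(x) for x in X])
clf = SVC(kernel="linear", C=1e6).fit(lifted, y)
print(clf.predict(lifted))  # [ 1  1 -1 -1]
```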

The Polynomial Kernel
- K(x1, x2) = ⟨x1, x2⟩²
- x1 = (x11, x12), x2 = (x21, x22)
- ⟨x1, x2⟩ = x11x21 + x12x22
- ⟨x1, x2⟩² = (x11x21 + x12x22)² = x11²x21² + x12²x22² + 2 x11x12x21x22
- Φ(x1) = (x11², x12², √2 x11x12); Φ(x2) = (x21², x22², √2 x21x22)
- K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩

The Polynomial Kernel
- Φ(x) contains all monomials of degree d
- Useful in visual pattern recognition: a 16×16 pixel image has about 10¹⁰ monomials of degree 5
- Never explicitly compute Φ(x)!
- Variation: K(x1, x2) = (⟨x1, x2⟩ + 1)²

Kernels
- What does it mean to be a kernel? K(x1, x2) = ⟨Φ(x1), Φ(x2)⟩ for some Φ
- What does it take to be a kernel? A positive definite Gram matrix Gij = K(xi, xj): Σi,j ci cj Gij ≥ 0 for all ci, cj ∈ ℝ
- Positive definite kernel: for all samples of size m, K induces a positive definite Gram matrix
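Both claims, the kernel/feature-map identity and the positive definiteness of the Gram matrix, can be spot-checked numerically. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x):
    """Explicit map for the degree-2 kernel: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

# 1) K(x1, x2) = <x1, x2>^2 equals <phi(x1), phi(x2)>.
a, b = rng.normal(size=2), rng.normal(size=2)
print(np.isclose(np.dot(a, b) ** 2, np.dot(phi(a), phi(b))))  # True

# 2) On any sample, a valid kernel's Gram matrix has no negative eigenvalues.
X = rng.normal(size=(20, 2))
G = (X @ X.T) ** 2  # Gram matrix G_ij = K(x_i, x_j)
print(np.linalg.eigvalsh(G).min() >= -1e-9)  # True, up to round-off
```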

A Few Good Kernels
- Dot product kernel: K(x1, x2) = ⟨x1, x2⟩
- Polynomial kernel: K(x1, x2) = ⟨x1, x2⟩^d (monomials of degree d)
- K(x1, x2) = (⟨x1, x2⟩ + 1)^d (all monomials of degree 1, 2, …, d)
- Gaussian kernel: K(x1, x2) = exp(−|x1 − x2|² / 2σ²) (radial basis functions)
- Sigmoid kernel: K(x1, x2) = tanh(κ⟨x1, x2⟩ + θ) (neural networks)
- Establishing "kernel-hood" from first principles is non-trivial

The Kernel Trick
"Given an algorithm which is formulated in terms of a positive definite kernel K1, one can construct an alternative algorithm by replacing K1 with another positive definite kernel K2"
- SVMs can use the kernel trick
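Since any function inducing a positive definite Gram matrix will do, SVM libraries typically let you plug in your own. A sketch of the Gaussian kernel written by hand and passed to scikit-learn as a callable (data illustrative; σ is a free parameter):

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_kernel(X1, X2, sigma=1.0):
    """K(x1, x2) = exp(-|x1 - x2|^2 / (2 sigma^2)) for all pairs of rows."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 4.0], [5.0, 4.0]])
y = np.array([-1, -1, 1, 1])

# scikit-learn accepts a callable that returns the Gram matrix, so a
# hand-written kernel drops straight into the SVM.
clf = SVC(kernel=gaussian_kernel).fit(X, y)
print(clf.predict(X))  # [-1 -1  1  1]
```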

Using a Different Kernel in the Dual Optimization Problem
- For example, use the polynomial kernel with d = 4 (including lower-order terms): replace each ⟨xi, xj⟩ with (⟨xi, xj⟩ + 1)⁴
- Maximize over α: W(α) = Σi αi − (1/2) Σi,j αi αj yi yj (⟨xi, xj⟩ + 1)⁴
- Subject to: αi ≥ 0 and Σi αi yi = 0
- Decision function: f(x) = sign(Σi αi yi (⟨x, xi⟩ + 1)⁴ + b)
- So by the kernel trick, we just replace them!

Exotic Kernels
- Strings
- Trees
- Graphs
- The hard part is establishing kernel-hood

Application: "Beautification Engine" (Leyvand et al., 2008)

Conclusion
- SVMs find the optimal linear separator
- The kernel trick makes SVMs non-linear learning algorithms
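In library form this substitution is a keyword argument. A closing sketch on the XOR data from earlier; scikit-learn's polynomial kernel is (gamma·⟨xi, xj⟩ + coef0)^degree, so gamma=1, coef0=1, degree=4 gives (⟨xi, xj⟩ + 1)⁴ exactly:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])

# Same dual problem, different kernel: <xi, xj> becomes (<xi, xj> + 1)^4.
clf = SVC(kernel="poly", degree=4, gamma=1.0, coef0=1.0).fit(X, y)
print(clf.predict(X))  # [ 1  1 -1 -1]
```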
