Inter-rater Reliability of Clinical Ratings: A Brief Primer on Kappa

Slide 1: Inter-rater Reliability of Clinical Ratings: A Brief Primer on Kappa
Daniel H. Mathalon, Ph.D., M.D.
Department of Psychiatry, Yale University School of Medicine

Slide 2: Inter-rater Reliability of Clinical Interview-Based Measures
- Ratings of clinical severity for specific symptom domains (e.g., PANSS, BPRS, SAPS, SANS)
  - Continuous scales
  - Use intraclass correlations to assess inter-rater reliability.
- Diagnostic assessment
  - Categorical / nominal-scale data
  - How do we quantify reliability between diagnosticians?
  - Candidate measures: percent agreement, chi-square, kappa

Slide 3: [table: two raters' classifications by category, Rater 1 x Rater 2]
Two raters classify n cases into k mutually exclusive categories, giving a k x k table of joint classifications:
- n_ij = number of cases falling into cell (i, j) = frequency of the joint event ij
- n = total number of cases
- p_ij = n_ij / n = proportion of cases falling into a particular cell
- Reliability by percentage agreement = sum_i p_ii = (1/n) sum_i n_ii
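As a concrete illustration of this notation, here is a minimal Python sketch (the counts are hypothetical) computing percentage agreement from a k x k table:

```python
import numpy as np

# Hypothetical counts: rows = Rater 1's category, columns = Rater 2's.
table = np.array([[40,  5],
                  [10, 45]])

n = table.sum()                   # total number of cases
p = table / n                     # p_ij = n_ij / n
percent_agreement = np.trace(p)   # sum_i p_ii = (1/n) sum_i n_ii
print(percent_agreement)          # 0.85
```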

Slide 4: Percent Agreement Fails to Consider Agreement by Chance
- Assume two raters whose judgments are completely independent (i.e., not influenced by the true diagnostic status of the patient) each diagnose 90% of cases as schizophrenia and 10% as not schizophrenia (i.e., Other).
- Expected agreement by chance for each category is obtained by multiplying the marginal probabilities together: .90 x .90 = .81 (schizophrenia) and .10 x .10 = .01 (Other).
- Proportion agreement = .81 + .01 = .82, so a percentage agreement of 82% can be obtained strictly by chance.
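The slide's arithmetic is easy to reproduce; a short sketch with the same 90/10 marginals:

```python
# Two independent raters who each call 90% of cases schizophrenia.
rater1_marginals = [0.90, 0.10]   # P(schizophrenia), P(other) for Rater 1
rater2_marginals = [0.90, 0.10]   # same marginals for Rater 2

# Chance agreement per category = product of the marginal probabilities.
chance = [p1 * p2 for p1, p2 in zip(rater1_marginals, rater2_marginals)]
print(chance)        # [0.81, 0.01]
print(sum(chance))   # 0.82 -> 82% agreement purely by chance
```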

Slide 5: Chi-Square Test of Association as a Proposed Solution
- Can perform a chi-square test of association to test the null hypothesis that the two raters' judgments are independent.
- To reject independence, show that the observed agreement departs from what would be expected by chance alone.
- Chi-square = sum over cells of (Observed - Expected)^2 / Expected
- Problem: in the slide's example table (all cases off the diagonal), there is a perfect association between the raters with zero agreement. Chi-square is a test of association, not agreement; it is sensitive to any departure from chance, even when the dependency between the raters' judgments is perfect non-agreement.
- So we cannot use the chi-square test to assess agreement between raters.
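The problem is easy to demonstrate. Below is a sketch with a hypothetical table in which the raters never agree yet are perfectly associated; the chi-square statistic, computed from the (Observed - Expected)^2 / Expected formula above, is as large as it can be for this sample size:

```python
import numpy as np

# Perfect association, zero agreement: every case Rater 1 calls
# category 1, Rater 2 calls category 2, and vice versa.
observed = np.array([[ 0, 50],
                     [50,  0]])

n = observed.sum()
row = observed.sum(axis=1)
col = observed.sum(axis=0)
expected = np.outer(row, col) / n   # expected counts under independence

chi_square = ((observed - expected) ** 2 / expected).sum()
agreement = np.trace(observed) / n

print(chi_square)   # 100.0 -- the maximum possible for n = 100, yet...
print(agreement)    # 0.0   -- the raters never agree
```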

Slide 6: Kappa Coefficient (Cohen, 1960)
- High reliability requires that frequencies along the diagonal be greater than chance and that off-diagonal frequencies be less than chance.
- Use the marginal frequencies/probabilities to estimate chance agreement.
- Proportion agreement observed: p_o = sum_i p_ii = (1/n) sum_i n_ii
- Proportion agreement expected by chance: p_c = sum_i p_i. x p_.i
- Kappa = (p_o - p_c) / (1 - p_c)
- Slide table (Rater 1 x Rater 2, three categories): marginal products p_i. x p_.i = .39, .075, .01, so p_c = .475.
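Putting the definitions together, a sketch of Cohen's kappa; the table is hypothetical but constructed so that its marginals reproduce the slide's numbers (p_i. x p_.i = .39, .075, .01, hence p_c = .475) and its diagonal gives p_o = .70:

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's (1960) kappa for a k x k table of counts
    (rows = Rater 1, columns = Rater 2)."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()                          # joint proportions p_ij
    p_o = np.trace(p)                     # observed agreement
    p_c = p.sum(axis=1) @ p.sum(axis=0)   # chance agreement: sum_i p_i. * p_.i
    return (p_o - p_c) / (1 - p_c)

# Hypothetical 3-category table, n = 1000: row marginals (.65, .25, .10),
# column marginals (.60, .30, .10), so p_c = .39 + .075 + .01 = .475;
# the diagonal sums to 700, so p_o = .70.
table = [[500, 110, 40],
         [ 90, 150, 10],
         [ 10,  40, 50]]

print(cohen_kappa(table))   # (.70 - .475) / (1 - .475) = 0.4285...
```

The result, about .43, matches the worked interpretation on the next slide.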

Slide 7: Interpretations of Kappa
- K = P(agreement | no agreement by chance).
- 1 - p_c = 1 - .475 = .525 is the proportion of cases with no agreement expected by chance.
- p_o - p_c = .70 - .475 = .225 is the proportion of cases that are non-chance agreements, i.e., cases where the observers agreed beyond chance.
- Kappa = .225 / .525 = .43: the probability that the judges agree, given no agreement by chance.
- Can test H0 that kappa = 0: kappa is normally distributed in large samples, so significance can be tested against the normal distribution and confidence intervals can be constructed for kappa.
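The slide tests kappa with a large-sample normal approximation; as a simple alternative (a substitution, not the slide's method), a nonparametric bootstrap confidence interval can be sketched using the same hypothetical table:

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa_from_pairs(r1, r2, k):
    """Kappa from paired category labels (0..k-1) for the two raters."""
    table = np.zeros((k, k))
    np.add.at(table, (r1, r2), 1)
    p = table / table.sum()
    p_o = np.trace(p)
    p_c = p.sum(axis=1) @ p.sum(axis=0)
    return (p_o - p_c) / (1 - p_c)

# Expand the hypothetical table from the previous sketch into per-case labels.
table = np.array([[500, 110, 40],
                  [ 90, 150, 10],
                  [ 10,  40, 50]])
cases = [(i, j) for i in range(3) for j in range(3)
         for _ in range(table[i, j])]
r1, r2 = map(np.array, zip(*cases))

# Resample cases with replacement and recompute kappa each time.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(r1), len(r1))
    boot.append(kappa_from_pairs(r1[idx], r2[idx], 3))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa_from_pairs(r1, r2, 3):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```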

Slide 8: Weighted Kappa Coefficient
- Can assign weights, w_ij, to classification errors according to their seriousness, using ratio-scale weights.
- Weighted kappa = (p_o(w) - p_c(w)) / (1 - p_c(w))
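A sketch of the weighted version; the weight matrix is hypothetical, giving full credit (w = 1) to exact agreement and half credit to adjacent-category confusions:

```python
import numpy as np

def weighted_kappa(table, weights):
    """Weighted kappa: weights w_ij give the agreement credit
    (1 = full credit) assigned to cell (i, j)."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    w = np.asarray(weights, dtype=float)
    row, col = p.sum(axis=1), p.sum(axis=0)
    p_o_w = (w * p).sum()                    # weighted observed agreement
    p_c_w = (w * np.outer(row, col)).sum()   # weighted chance agreement
    return (p_o_w - p_c_w) / (1 - p_c_w)

# Hypothetical table (as before) and weights: adjacent-category confusions
# treated as half as serious as two-category confusions.
table = [[500, 110, 40],
         [ 90, 150, 10],
         [ 10,  40, 50]]
weights = [[1.0, 0.5, 0.0],
           [0.5, 1.0, 0.5],
           [0.0, 0.5, 1.0]]

print(weighted_kappa(table, weights))
```

With the identity weight matrix (1 on the diagonal, 0 elsewhere), this reduces to the unweighted kappa above.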

Slide 9: Kappa Rules of Thumb
- K > .75 is considered excellent agreement.
- K < .40 is considered poor agreement.

Slide 10: Weighted Kappa and the ICC
- Weighted kappa is an intraclass correlation coefficient (except for a factor of 1/n) when the weights have the following property: w_ij = 1 - (i - j)^2 / (k - 1)^2
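The quadratic weight matrix on this slide is easy to generate and apply; a sketch, reusing the hypothetical table and a compact weighted-kappa helper:

```python
import numpy as np

def quadratic_weights(k):
    """Weight matrix w_ij = 1 - (i - j)^2 / (k - 1)^2 from the slide."""
    i, j = np.indices((k, k))
    return 1 - (i - j) ** 2 / (k - 1) ** 2

def weighted_kappa(table, weights):
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    w = np.asarray(weights, dtype=float)
    p_o_w = (w * p).sum()
    p_c_w = (w * np.outer(p.sum(axis=1), p.sum(axis=0))).sum()
    return (p_o_w - p_c_w) / (1 - p_c_w)

table = [[500, 110, 40],   # hypothetical table from the earlier sketches
         [ 90, 150, 10],
         [ 10,  40, 50]]

print(quadratic_weights(3))
# [[1.   0.75 0.  ]
#  [0.75 1.   0.75]
#  [0.   0.75 1.  ]]
print(weighted_kappa(table, quadratic_weights(3)))
```

With these quadratic weights, the result tracks the intraclass correlation up to the 1/n factor noted on the slide.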

Slide 11: Problems with Kappa
- Affected by the base rates of the diagnoses: kappas cannot easily be compared across studies that have different base rates, whether in the population or in the reliability study.
- Is chance agreement itself a problem? When the null hypothesis of rater independence is not met (which is most of the time), the estimate of chance agreement is inaccurate and possibly inappropriate.
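The base-rate problem can be made concrete: two hypothetical 2 x 2 tables with identical observed agreement (p_o = .90) but different base rates yield very different kappas:

```python
import numpy as np

def cohen_kappa(table):
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    p_o = np.trace(p)
    p_c = p.sum(axis=1) @ p.sum(axis=0)
    return (p_o - p_c) / (1 - p_c)

balanced = [[45,  5],   # base rate ~50%: p_o = .90, p_c = .50
            [ 5, 45]]
skewed   = [[85,  5],   # base rate ~90%: p_o = .90, p_c = .82
            [ 5,  5]]

print(cohen_kappa(balanced))   # 0.80
print(cohen_kappa(skewed))     # 0.44 -- same observed agreement, lower kappa
```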
