NISO RP-25-2016

Outputs of the NISO Alternative Assessment Metrics Project

A Recommended Practice of the National Information Standards Organization

About NISO Recommended Practices

A NISO Recommended Practice is a recommended "best practice" or "guideline" for methods, materials, or practices in order to give guidance to the user. Such documents usually represent a leading edge, exceptional model, or proven industry practice. All elements of Recommended Practices are discretionary and may be used as stated or modified by the user to meet specific needs. This recommended practice may be revised or withdrawn at any time. For current information on the status of this publication contact the NISO office or visit the NISO website (www.niso.org).

Published by
National Information Standards Organization (NISO)
3600 Clipper Mill Road, Suite 302
Baltimore, MD 21211
www.niso.org

Copyright 2016 by the National Information Standards Organization

All rights reserved under International and Pan-American Copyright Conventions. For noncommercial purposes only, this publication may be reproduced or transmitted in any form or by any means without prior permission in writing from the publisher, provided it is reproduced accurately, the source of the material is identified, and the NISO copyright status is acknowledged. All inquiries regarding translations into other languages or commercial reproduction or distribution should be addressed to: NISO, 3600 Clipper Mill Road, Suite 302, Baltimore, MD 21211.

ISBN: 978-1-937522-71-1

Contents

Section 1: Altmetrics Definitions and Use Cases ..... 1
  1.1 Purpose and Scope ..... 1
  1.2 A Definition of Altmetrics ..... 1
    1.2.1 What is Altmetrics? ..... 1
    1.2.2 Scholarly Impact and the Role of Altmetrics in Research Evaluation ..... 1
  1.3 Main Use Cases ..... 2
    1.3.1 Persona #1: Librarians ..... 3
    1.3.2 Persona #2: Research Administrators ..... 3
    1.3.3 Persona #3: Member of a Hiring Committee ..... 4
    1.3.4 Persona #4: Member of a Funding Agency ..... 4
    1.3.5 Persona #5: Academics / Researchers ..... 5
    1.3.6 Persona #6: Publishers / Editors ..... 5
    1.3.7 Persona #7: Media Officers / Public Information Officers / Journalists ..... 6
    1.3.8 Persona #8: Content Platform Provider ..... 6
Section 2: Alternative Outputs in Scholarly Communications ..... 7
  2.1 Background and Context ..... 7
  2.2 Alternative Scholarly Outputs ..... 7
  2.3 Implications for Future Research ..... 7
Section 3: Study and Recommendations on Data Metrics ..... 9
  3.1 Summary ..... 9
  3.2 Key recommendations ..... 9
  3.3 Background and Context ..... 10
  3.4 Data Metrics Definitions ..... 11
    3.4.1 What is a Published Dataset? ..... 11
    3.4.2 What is Data Citation? ..... 11
    3.4.3 What is Data Usage? ..... 11
    3.4.4 What are altmetrics for data? ..... 12
    3.4.5 What are persistent identifiers for data? ..... 12
  3.5 Organizations Involved in Research Metrics ..... 12
    3.5.1 Method ..... 12
    3.5.2 Consulted bodies ..... 12
  3.6 Findings ..... 13
  3.7 Recommendations ..... 14
    3.7.1 Access to Research Data Metrics ..... 14
    3.7.2 Data Citation ..... 15
    3.7.3 Machine-actionable Persistent Identifiers ..... 15
    3.7.4 Required Metadata ..... 15
    3.7.5 Landing Pages ..... 15
    3.7.6 Reference Lists ..... 15
    3.7.7 Research Data Usage Statistics ..... 15
    3.7.8 Altmetrics for Datasets ..... 16
Section 4: Persistent Identifiers in Scholarly Communications ..... 17
  4.1 Background and Context ..... 17
  4.2 Persistent Identifiers in Scholarly Communications Document ..... 17
  4.3 Implications for Future Research ..... 18
Section 5: Altmetrics Data Quality Code of Conduct ..... 19
  5.1 Purpose and Scope ..... 19
  5.2 Data Quality Code of Conduct Terminology ..... 19
  5.3 Recommendations ..... 19
    5.3.1 Transparency ..... 19
    5.3.2 Replicability ..... 20
    5.3.3 Accuracy ..... 20
  5.4 Annual Report ..... 20
Appendix A: NISO Altmetrics Working Group C "Data Quality" Code of Conduct Self-Reporting Table ..... 21
Appendix B: NISO Altmetrics Working Group C "Data Quality" Code of Conduct Self-Reporting Table: Samples ..... 24
Appendix C: Glossary ..... 70
Appendix D: Combined Bibliography ..... 72

Foreword

About this Recommended Practice

Altmetrics are increasingly being used and discussed as an expansion of the tools available for measuring the scholarly impact of research in the knowledge environment. The NISO Alternative Assessment Metrics Project was begun in July 2013 with funding from the Alfred P. Sloan Foundation to address several areas of limitations and gaps that hinder the broader adoption of altmetrics. This document, the output from the project, was created by three working groups.

- "Working Group A" extensively studied the altmetrics literature and other communications and discussed in depth various stakeholders' perspectives and requirements for these new evaluation measures.
- "Working Group B" created documents that are intended to help users better understand the landscape of data metrics and thus offer recommendations toward improvements, and to help organizations that wish to use altmetrics effectively communicate about them with each other and with those outside the community.
- "Working Group C" studied and discussed issues of data quality in the altmetrics realm, an essential aspect of evaluation before metrics can be used for research and practical purposes.

NISO Business Information Topic Committee

This recommended practice is part of the portfolio of the Business Information Topic Committee. At the time the Topic Committee approved this recommended practice for publication, the following individuals were committee members:

Yanick Beaudoin, International Development Research Centre
Anne Campbell (co-chair), EBSCO Information Services
Todd Carpenter, National Information Standards Organization (NISO)
Suzanne Hopkins, Thomson Reuters
Nettie Lagace, National Information Standards Organization (NISO)
Stuart Maxwell, Scholarly iQ
Greg Raschke, North Carolina State University Libraries
Angela Riggio, University of California, Los Angeles
Sin Romaine, University of Washington
Christine Stamison (co-chair), NERL Program, Center for Research Libraries
Gavin Swanson, Cambridge University Press
Elizabeth Winter, Georgia Tech University Libraries

NISO Altmetrics Initiative Working Group Members

The following individuals served on NISO Altmetrics Initiative Working Group A, responsible for the content of Section 1 of this recommended practice:

Rachel Borchardt, American University Library
Robin Chin Roemer (co-chair), University of Washington
Dianne Cmor, Nanyang Technological University
Rodrigo Costas, Centre for Science and Technology Studies, University of Leiden
Tracey DePellegrin, Genetics Society of America
Sharon Dyas-Correia, University of Toronto
Martin Fenner, DataCite
Karen Gutzman, Galter Health Sciences Library at Northwestern University
Michael Habib (co-chair), Independent, scholarly communications, publishing, library markets
Kazuhiro Hayashi, NISTEP (National Institute of Science and Technology Policy)

Section 2: Alternative Outputs in Scholarly Communications

2.2 Alternative Scholarly Outputs

The working group's table of alternative scholarly outputs organizes them into the following categories: Capacity; Code and Software; Communications; Data; Education and Training Materials; Events; Grey Literature; Images, Diagrams, and Video; Industry; Instruments, Devices, and Inventions; Methodologies; Publications; Regulatory, Compliance, and Legislation; Standards; and Other. This work is not complete, as the very nature of scholarly activity continually evolves. However, the rich array of outputs represented in this table helps to better establish the breadth and depth of scholarly work that may be produced by an investigator or by a research team. Through this effort, the working group hopes to generate discussion about how we may begin to leverage integrated data, persistent identifiers, and automated workflows to better capture and track the full complement of research activity, as is possible for publication data.

2.3 Implications for Future Research

Future work in this area requires a more comprehensive inventory of output types, and the work presented here can serve as a springboard for these efforts. This future comprehensive inventory should include additional stakeholders, including funders from countries where research assessment exercises are underway (e.g., in the UK and Australia), to gain their perspectives about research outputs of interest. Other key areas of work include the integration of various research assessment frameworks [1]; a formal assessment of the extent of nontraditional research output types not yet managed with persistent identifiers, but necessary for long-term management of access and relationships; and creation of a priority list for incorporating these output types into existing information systems across the research spectrum and workflow. These activities could help guide development of comprehensive alternative metrics measures and methodologies reaching far beyond the traditional academic publication.

[1] Graham, K.E.R., H.L. Chorzempa, P.A. Valentine, et al. Research Evaluation 21(5): 354-67. https://doi.org/10.1093/reseval/rvs027
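The automated workflows imagined above can already be sketched wherever outputs carry registered persistent identifiers. The following minimal Python sketch is one illustration, not part of the recommended practice: it queries DataCite's public REST API (https://api.datacite.org) for DOIs whose creator metadata carries a given ORCID iD and tallies the results by general resource type (Dataset, Software, Audiovisual, and so on), types that map loosely onto the categories above. The ORCID iD shown is ORCID's documentation example, and the query syntax reflects the API as publicly documented at the time of writing.

```python
from collections import Counter

import requests

# ORCID's documentation example iD; substitute a real researcher's iD.
ORCID = "0000-0002-1825-0097"

# Ask DataCite for DOIs whose creator metadata includes this ORCID iD.
# The API returns JSON:API records; each record's attributes carry the
# registered metadata, including types.resourceTypeGeneral.
resp = requests.get(
    "https://api.datacite.org/dois",
    params={
        "query": f"creators.nameIdentifiers.nameIdentifier:*{ORCID}*",
        "page[size]": 100,
    },
    timeout=30,
)
resp.raise_for_status()

# Tally the researcher's registered outputs by general resource type.
counts = Counter(
    record["attributes"].get("types", {}).get("resourceTypeGeneral", "Unknown")
    for record in resp.json()["data"]
)
for resource_type, n in counts.most_common():
    print(f"{resource_type}: {n}")
```

Gaps in such a tally are usually metadata gaps: outputs without persistent identifiers, or identifiers without creator ORCIDs, simply do not appear, which underlines the coverage work this section calls for.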

Section 3: Study and Recommendations on Data Metrics

3.1 Summary

This section reports on the working group's study of the state of metrics for published research data, which gathered input from organizations involved in research metrics; from representatives of other ongoing efforts; and from within the working group itself. Section 3 also presents a set of recommendations for data metrics, directed toward the spectrum of groups working with research data, including institutions and repository managers, international research organizations, and funders.

3.2 Key recommendations

- Metrics on research data should be made available as widely as possible.
- Data citations should be implemented following the FORCE11 Joint Declaration of Data Citation Principles (https://www.force11.org/group/joint-declaration-data-citation-principles-final), in particular:
  - Use machine-actionable persistent identifiers
  - Provide metadata required for a citation
  - Provide a landing page
  - Data citations should go into the reference list or similar metadata.
- Standards for research-data-use statistics need to be developed. They should be based on the COUNTER Code of Practice (https://www.projectcounter.org/code-of-practice-sections/general-information/) but should also take into consideration some special aspects of research data usage. There should be two formulations for data download metrics, to examine both "human" downloads and research-focused non-human agents (a toy illustration follows this list).
- Research funders should provide mechanisms to support data repositories in implementing standards for interoperability and obtaining metrics.
- Data discovery and sharing platforms should support and monitor "streaming" access to data via API queries.

There have been recent attempts to define and identify large-scale data queries, but there is, as yet, no consensus in this area. This form of data is therefore explicitly excluded from these recommendations. For more on this group's recommendations, see "Findings" (3.6).
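One way to picture the recommended two formulations of the download metric is a small log-classification sketch. The Python below is a toy illustration only, with invented log records and hypothetical user-agent patterns; the COUNTER Code of Practice itself prescribes detailed processing rules, including a maintained list of robots and crawlers. The sketch first discards crawler traffic, then tallies remaining downloads separately for interactive ("human") clients and scripted, research-focused agents.

```python
import re
from collections import Counter

# Toy access-log records: (user_agent, dataset_id). In practice these
# would be parsed from repository web-server logs.
LOG = [
    ("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "ds-001"),
    ("python-requests/2.31.0", "ds-001"),
    ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)", "ds-002"),
    ("Googlebot/2.1 (+http://www.google.com/bot.html)", "ds-001"),
    ("curl/8.4.0", "ds-002"),
]

# Hypothetical patterns. A real implementation would use a maintained
# robots/crawlers list, as COUNTER-style processing prescribes.
CRAWLER = re.compile(r"bot|crawl|spider", re.I)                  # excluded entirely
SCRIPTED = re.compile(r"curl|wget|python-requests|java", re.I)   # non-human agents

human, machine = Counter(), Counter()
for user_agent, dataset_id in LOG:
    if CRAWLER.search(user_agent):
        continue  # crawler traffic does not count as a download at all
    bucket = machine if SCRIPTED.search(user_agent) else human
    bucket[dataset_id] += 1

# Report the two formulations of the download metric side by side.
for ds in sorted(set(human) | set(machine)):
    print(f"{ds}: human={human[ds]} machine={machine[ds]}")
```

A production report would also apply COUNTER's other normalizations, such as double-click filtering, before totals are published.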

3.3 Background and Context

Research data is now understood to be a primary goal of academic endeavor. This section of the Recommended Practice reports on the state of metrics relating to published research data, and makes recommendations to those stakeholders operating in this area of work. Publishing datasets is an integral part of scholarly communication, and libraries need to consider digital datasets alongside journal articles and other resources. However, it has been reported that data are often used in publications without being properly cited. One of the reasons for the lack of data citation in the literature is the lack of standards for data publication. In order to increase the visibility of datasets, the international consortium known as DataCite (https://www.datacite.org) was created. It allocates unique digital object identifiers (DOIs) and metadata for digital and physical objects with a focus on research data. The unique identifiers and associated metadata provided by DataCite should promote a culture of reusing data, as there is a significant correlation between data documentation quality and data reuse satisfaction. However, the absence of widely accepted rules for citing data as separate academic artifacts has been identified as a significant reason behind the lack of research data citations. In response to the challenge, in 2014 FORCE11: The Future of Research Communication and e-Scholarship and others published the Joint Declaration of Data Citation Principles with the aim of increasing data citation adoption.

45、ther forms of publications and growing transparency in scholarly communication ecosystem are additional major reasons for measuring metrics around datasets. Although measuring impacts of research data is complicated, some efforts have started to identify data citations. From a conventional citation

46、based perspective, Thomson Reuters launched the Data Citation Index (http:/ in 2012, which indexes datasets and their citations from main repositories across different disciplines. Using statistics for searches, viewing, and downloads, Ingwersen and Chavan2suggested a Data Usage Index, which could r

47、eveal the impact of datasets from novel points of view. In the same way, Fear3came to a conclusion that the impact of scholarly datasets cannot be measured through a single indicator, and as a result, suggested multiple metrics for measuring the value of datasets. Strasser, Kratz, and Lin4reported t

48、hat data citation was still underused, and that the second most valued metric after data citation would be derived from repository download data. 2Ingwersen, P., and V. Chavan. “Indicators for the Data Usage Index (DUI): An Incentive for Publishing Primary Biodiversity Data Through Global Informatio

49、n Infrastructure. BMC Bioinformatics 12 (Suppl 15), S3. (2011). http:/doi.org/10.1186/1471-2105-12-S15-S3 3Fear, K. M. “Measuring and anticipating the impact of data reuse.” University of Michigan. (2013). http:/deepblue.lib.umich.edu/handle/2027.42/102481 4Strasser, C., John Ernest Kratz, and Jennifer Lin “Make Data Count - Unit 1 Final Report.” (2015). Figshare. http:/dx.doi.org/10.6084/m9.figshare.1328291 NISO RP-25-2016 Alternative Assessment Metrics Project 11 3.4 Data Metrics Definitions 3.4.1 What is a Published Da
