Open Research

Move from authorship to contributorship

What ways are there to open up collaboration and acknowledge all those contributing to the production of research outputs? 

  • You could acknowledge all contributors to the research in your published research outputs, rather than just the authors.  

  • Where authors are those who have been significantly involved in all aspects of the research project, contributors are all those involved in planning, conducting, and writing up but who have not necessarily participated in all aspects of the research process.  

  • You could acknowledge their contribution using the Contributor Roles Taxonomy (CRediT), allocating the terms appropriately to your contributors within research outputs. 

  • A move to acknowledging contributorship invites us to collectively explore attitudes and behaviours around authorship and rethink assumptions about ‘who counts’ in our research activities. 

One way to open your research is to re-think who matters in your research work. You could move from only acknowledging the authors to acknowledging all those contributing to the production of your research output.

Authors are those who have been significantly involved in all aspects of the research project. Contributors are those involved in planning, conducting, and writing up but who have not necessarily participated in all aspects of the research process.  

Why?

A move to acknowledging contributorship rather than authorship invites us to collectively explore attitudes and behaviours around authorship and rethink assumptions about ‘who counts’ in our research activities. This will also facilitate better collaboration.

How?

You could acknowledge everyone’s contribution using the Contributor Roles Taxonomy (CRediT), allocating standard terms that describe types of contribution appropriately to your contributors within research outputs. 

Sample CRediT author statement (from CRediT author statement | Elsevier)

Zhang San: Conceptualization, Methodology, Software
Priya Singh: Data curation, Writing – original draft preparation
Wang Wu: Visualization, Investigation
Jan Jansen: Supervision
Ajay Kumar: Software, Validation
Sun Qi: Writing – review and editing

Want to know more?

Allen, L., O’Connell, A. and Kiermer, V. (2019), How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learned Publishing, 32: 71-74. https://doi.org/10.1002/leap.1210

Pre-register your study design

If you undertake empirical research, you can pre-register your research plan, such as your hypotheses and analysis plan, before the research commences, using a public registration platform such as the Open Science Framework.

Alternatively, you can publish the research question and proposed methodology of your study as a registered report. This is a variation of preregistration: a registered report is a journal article whose methods and proposed analyses are peer-reviewed, with the results accepted for publication in principle before the data are collected and analysed. The article will be published provided the authors follow through with the registered methodology. 

Covidence provides some helpful guidance for systematic and related reviews in health.

https://www.cos.io/initiatives/prereg  

https://plos.org/open-science/preregistration/  

https://www.hra.nhs.uk/planning-and-improving-research/research-planning/research-registration-research-project-identifiers/ 

Why?

Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. This particularly helps to avoid Questionable Research Practices (QRPs) such as reporting exploratory research as confirmatory (Hypothesising After the Results are Known, a.k.a. HARKing), collecting data until analyses return statistically significant effects, and selectively reporting analyses that reveal desirable outcomes (p-hacking).

Registering your work helps to reduce research duplication and, for participatory work, means your research can be found by prospective participants.

Pre-registration is mandatory for clinical trials and good practice for other forms of research.

How?

Want to know more?

Nosek, B.A., Ebersole, C.R., DeHaven, A.C., Mellor, D.T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences (PNAS), 115(11): 2600-2606. https://www.pnas.org/doi/10.1073/pnas.1708274114

Chambers, C.D., Feredoes, E., Muthukumaraswamy, S.D., Etchells, P.J. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1): 4-17. https://orca.cardiff.ac.uk/id/eprint/59475/1/AN2.pdf

Think about how you assess the impact of research

The Research Impact at SHU webpages, provided by Research and Innovation Services, include information about what impact is, training and resources on impact, tools and services, and sources of support with impact.

You may come across quantitative metrics when considering the impact of research work. The San Francisco Declaration on Research Assessment (DORA) underlines the need to assess research on its own merits rather than on the basis of the journal in which the research is published. It therefore recommends against the use of journal-based metrics as a surrogate measure of the quality of individual research articles, of an individual academic’s contributions to their field, or in hiring, promotion, and funding decisions. SHU is a signatory of DORA and has also developed guidance on the responsible use of not only journal-based metrics but also other citation-based metrics, altmetrics, and other quantitative measures. For more information, please see our pages on responsible metrics and the SHU Guide to responsible metrics.

You can use academic social networks such as ResearchGate, Academia.edu, and Mendeley to discuss open science and/or draw attention to your project and its findings. Alternatively, you could use peer reviewers’ reports, published as part of open peer review, as a more qualitative indicator for your evaluation. 


Why?

First, journal-based metrics such as the journal Impact Factor say nothing about the quality of individual contributions published in a journal. The Journal Impact Factor was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article.

Secondly, any assessment based on citation metrics has significant limitations: citations reflect academic rather than popular interest, authors cite for varied motivations, counts can be manipulated (“gaming”), author ordering is not accounted for, and only citations appearing in “indexed” journals are counted.

Lastly, there is a need to assess research on its own merits rather than based on the journal in which the research is published.

How?

  • Always rely on qualitative expert judgement and peer review, and use appropriate quantitative indicators only to support this, not to supplant it.
  • To counteract the shortcomings of metrics based on citations, you could look at the wider impact a piece of research has had. You could use alternative metrics, which are not based on traditional citations but on online citations to digital research, as well as usage-based indicators such as views, downloads, comments, bookmarks, or mentions on social media. You can find some of these alternative metrics in Altmetric Explorer.

According to NISO, there are three main use cases for alternative metrics:

  1. showcase achievements (highlight the positive achievements garnered by one or more scholarly outputs by assessing their reach, engagement and influence; you could, for example, incorporate alternative metrics in your portfolio to complement your other accomplishments)
  2. evaluate research (assess the impact or reach of scholarly outputs)
  3. discover research (discover scholarly outputs or researchers, or make them more discoverable; for example, to identify potential collaborators at other institutions or to identify influential research and debate)

Want to know more?

National Information Standards Organization (2016). Outputs of the NISO Alternative Assessment Metrics Project. A Recommended Practice of the National Information Standards Organization. NISO RP-25-2016. Baltimore, MD: National Information Standards Organization. https://groups.niso.org/higherlogic/ws/public/download/17091.

Worrall, J. L., & Cohn, E. G. (2023). Citation Data and Analysis: Limitations and Shortcomings. Journal of Contemporary Criminal Justice, 39(3), 327-340. https://doi.org/10.1177/10439862231170972

Incorporate participatory methods

In this context, collaborative research refers to projects in which members of the public are involved in some way as active participants, rather than simply being the subject of research or the source of data. 


Why?

Opening up research in this way invites new and innovative collaborations. It acts as an engagement tool in and of itself, increasing the reach of the research.

How?

Methods include:

  • Co-production – with an emphasis on public service development and improvement 

  • Community-based research – the emphasis here being on collaboration with community groups to work on issues 

  • Citizen science – here members of the public gather data and perform analyses as part of a collaborative effort to research scientific problems 

For more information, see: 

 





Sheffield Hallam University
City Campus, Howard Street
Sheffield S1 1WB