Community consensus on core open science practices to monitor in biomedicine


In November 2021, UNESCO adopted its Recommendation on Open Science, defining open science “as an inclusive construct that combines various movements and practices aiming to make multilingual scientific knowledge openly available, accessible and reusable for everyone, to increase scientific collaborations and sharing of information for the benefits of science and society, and to open the processes of scientific knowledge creation, evaluation and communication to societal actors beyond the traditional scientific community” [1]. UNESCO recommends that its 193 member states take action towards achieving open science globally. The recommendation emphasizes the importance of monitoring policies and practices in achieving this goal [1]. Open science provides a means to improve the quality and reproducibility of research [2,3], and a mechanism to foster innovation and discovery [4,5]. The UNESCO Recommendation has cemented open science’s position as a global science policy priority. It follows other initiatives from major research funders, such as the Open Research Funders Group, as well as national efforts to implement open science via federal open science plans [6,7].

Despite these commitments from policymakers and funders, adopting and implementing open science has not been straightforward. There remains debate about how to motivate and incentivize individual researchers to adopt open science practices [8–10], and how best to track open science practices within the community. A key concern is the need for funding to cover the additional fees and time costs of adhering to some open science best practices, when the academic reward system and career advancement still incentivize traditional, closed research practices. What “counts” in the tenure process is typically the outwardly observable number of publications in prestigious—typically high impact factor and often paywalled—journals, rather than efforts towards making research more accessible, shareable, transparent, and reusable. Monitoring open science practices is essential if the research community intends to evaluate the impact of policies and other interventions to drive improvements, and to understand the current adoption of open science practices in a research community. To improve their open science practices, institutions need to measure their performance; however, no system presently exists for efficient, large-scale monitoring.

Consider the example of open access publishing. A large, researcher-led analysis of researchers’ compliance with funder mandates for open access publishing showed that the rate of adherence varied considerably by funder [11]. In Canada, the Canadian Institutes of Health Research (CIHR) had an open access requirement for depositing articles between 2008 and 2015. This deposit requirement was modified when CIHR and the other two major Canadian funding agencies harmonized their policies. The result was a drop in openly available CIHR-funded research from approximately 60% in 2014 to approximately 40% in 2017 [11]. In the absence of monitoring, it is not possible to evaluate the impact of introducing a new policy or to measure how other changes in the scholarly landscape affect open science practices.

The Coronavirus Disease 2019 (COVID-19) pandemic has created increased impetus for, and attention to, open science, which has contributed to the development of new discipline-specific practices for openness [12–14]. The current project aimed to establish a core set of open science practices within biomedicine to implement and monitor at the institutional level (Box 1). Our vision to establish a core set of open science practices stems from the work of Core Outcome Measures in Effectiveness Trials (COMET) [15]. If trialists agree on a few core outcomes to assess across trials, it strengthens the totality of evidence, enables more meaningful use in systematic reviews, promotes meta-research, and may subsequently reduce waste in research. We sought to apply this concept of community-agreed standardization to open science in biomedical research, which currently lacks consensus on best practices, and to work to operationalize different open science practices.

Box 1. Summary of key points

  • Funders and other stakeholders in the international research ecosystem are increasingly introducing mandates and guidelines to encourage open science practices.
  • Research institutions cannot currently monitor compliance with open science practices without engaging in time-consuming manual processes that many lack the expertise to undertake.
  • We conducted an international Delphi study to agree on which open science practices would be valuable for research institutions to monitor, with a view to creating an automated dashboard to support monitoring.
  • We report 19 open science practices that reached consensus for institutional monitoring in an open science dashboard and describe how we intend to implement these.
  • The open science practices identified may be of broader value for developing policy, education, and interventions.

The core set of open science practices identified here will serve the community in many ways, including in developing policy, education, or other interventions to support the implementation of these practices. Most immediately, the practices can inform the development of an automated open science dashboard that can be deployed by biomedical institutions to efficiently monitor adoption of (and provide feedback on) these practices. By establishing what should be reported in an institutional open science dashboard through a consensus building process with relevant stakeholders, we aim to ensure the tool is appropriate to the needs of the community.


Ethics statement

This study received ethical approval from the Ottawa Health Science Network Research Ethics Board (20210515-01H). Participants were presented with an online consent form prior to viewing round 1 of the Delphi; completion of the survey was considered implied consent.

For complete study methods, please see S1 Text. We conducted a 3-round modified Delphi survey study. Delphi studies structure communication between participants to establish consensus [16]. Typically, Delphi studies use several rounds of surveys in which participants, experts in the topic area, vote on specific issues. Between rounds, votes are aggregated, anonymized, and presented back to participants alongside their own individual scores and anonymized feedback on others’ voting decisions [17,18]. This gives participants the opportunity to consider the group’s thoughts and to compare and adjust their own assessment in the next round. A strength of this method of communication is that it allows all individuals in a group to communicate their views. Anonymous voting also limits direct confrontation among individuals and the influence of power dynamics and hierarchies on the group’s decision.

Participants in our Delphi were from a convenience sample obtained through snowball sampling of academic institutions interested in open science. The individuals from the institutions represented any/all of the following groups:

  1. Library or scholarly communication staff (e.g., responsible for purchasing journal content, responsible for facilitating data sharing or management).
  2. Research administrators or leaders (e.g., head of department, CEO, senior management).
  3. Staff involved in researcher assessment (e.g., appointment and tenure committee members).
  4. Individuals involved in institutional metrics assessment or reporting (e.g., performance management roles).

Because titles and roles differ from institution to institution, we left it to the discretion of the institution to identify participants. Broadly, we aimed to include people who either knew about scholarly metrics or made decisions regarding researcher assessment or hiring. We also explicitly encouraged the institutions to consider diversity (including gender and race) when inviting participants to contribute. However, there are a variety of stakeholders that may influence institutional monitoring of open science practices. A limitation of the current work is that we included only participants directly employed by academic institutions. While our intention is to implement the proposed dashboard at biomedical institutions, it is possible we missed nuance or richness, for example, by failing to include representatives from scholarly publishers, academic societies, or funding agencies.

The first two rounds of the Delphi were online surveys administered using Surveylet, a purpose-built platform for developing and administering Delphi surveys [19]. To start, participants were presented with an initial set of 17 potential open science practices generated by the project team through discussion. Round 3 took the form of two half-day meetings hosted on Zoom [20]. Hosting round 3 as an online meeting is a modification of the traditional Delphi approach, intended to provide an opportunity for more nuanced discussion among participants about the potential open science practices while still retaining anonymized online voting. We opted for a virtual meeting given the COVID-19 pandemic restrictions at the time and the cost effectiveness of enabling international participation. However, while this modified round 3 provided the opportunity for more nuanced discussion prior to voting, it also meant that we reduced the overall number of participants in that round to keep the online meeting a manageable size. This methodological choice may have reduced some of the diversity of responses despite providing greater richness in responses.

While the structured, anonymous, and democratic approach of the Delphi process offers many advantages for reaching consensus, it is not without limitations. The methods used here may have affected our outcome. For example, the use of a forced choice item rather than a scale in rounds 2 and 3 may have contributed to a greater likelihood of items reaching consensus in these rounds. While we endeavored to attract a diverse and representative sample of institutions, ultimately, given our sampling approach, the participants and institutions that agreed to take part may not be as representative of global biomedical research culture as we desired, and may have a stronger interest in or commitment to open science than is typical. While the sample may not be generalizable, these institutions likely represent early adopters or willing leaders in open science. Further, our Delphi surveys and consensus meetings were conducted in English only, and the meeting was not conducive to attendance across all time zones. These factors will have created barriers to participation for some institutions or participants. Defining who is an “expert” to provide their views in any Delphi exercise is an inherent challenge [21]. We faced this challenge here, especially considering the diversity of open science practices and the nuances of applying these practices in distinct biomedical subdisciplines. For example, our vision to create a single biomedical dashboard to deploy at the institutional level may mean we have missed nuances in open science practices in preclinical as compared to clinical research.

Round 1

Participants: We excluded participants who did not complete 80% or more of the survey in this round. A total of 80 participants from 20 institutions in 13 countries completed round 1. Full demographics are described in Table 1. A total of 44 (55.0%) participants identified as men, 35 (43.8%) as women, and 1 (1.3%) as another gender. Of the 32 research institutions that were invited to contribute to the study, 20 (62.5%) ended up contributing, and 1 to 7 participants from each organization responded to our survey. Researchers (N = 31, 38.8%) and research administrators (N = 18, 22.5%) comprised most of the sample.

Voting: Of the 17 potential core open science practices presented in round 1, two reached consensus. Participants agreed that “registering clinical trials on a registry prior to recruitment” and “reporting author conflicts of interest in published articles” were essential to include. See full results in Table 2.

Participants suggested 10 novel potential core open science practices to include in round 2 for voting; they were as follows: use of Research Resource Identifiers (RRIDs) where relevant biological resources are used in a study; inclusion of funder statements; information on whether a published paper has open peer reviews available (definitions vary for open peer review [22], but we define this as having transparent peer reviews available); sharing a data management plan; use of open licenses when sharing data/code/materials; use of nonproprietary software when sharing data/code/materials; use of persistent identifiers when sharing data/code/materials; sharing research workflows in computational environments; reporting on the gender composition of the authorship team; and reporting results of trials in a manuscript-style publication (peer reviewed or preprint) within 2 years of study completion.

Round 3

Participants: Twenty-one participants were present on day 1 and 17 on day 2 of the consensus meeting. Full demographics are described in Table 1. One participant on each day did not provide any demographic information.

Voting: There were 19 items that had not reached consensus in round 2. After discussing each item, some were reworded slightly, expanded into two items, or collapsed into a single item (see notes on modifications made in Table 2). Ultimately, participants voted on 22 potential open science practices in round 3. One of these items asked participants to vote on “reporting whether registered clinical trials were reported in the registry within 1 year of study completion.” An item describing “reporting that registered clinical trials were reported in the registry within 2 years of study completion” reached consensus in round 2; however, several participants commented that the timeframe was inconsistent with requirements of funders that have signed the World Health Organization joint statement on public disclosure of results from clinical trials, which specified 12 months. Based on this, participants were asked to revote on this item using the 1-year cutoff.

Of the 22 potential items voted on in round 3, 12 reached consensus for inclusion: whether systematic reviews have been registered; whether there was a statement about study materials sharing with publications; the use of persistent identifiers when sharing data/code/materials; whether data/code/materials are shared with a clear license; whether the data/code/materials license is open or not; citations to data; what proportion of articles are published open access with a breakdown of time delay; the number of preprints; that registered clinical trials were reported in the registry within 1 year of study completion; trial results in a manuscript-style publication (peer reviewed or preprint); systematic review results in a manuscript-style publication (peer reviewed or preprint); and whether research articles include funding statements. One item reached consensus for exclusion from the dashboard: Reporting whether workflows in computational environments were shared. Participants agreed this item should be a component of the existing item, “reporting whether code was shared openly at the time of publication (with limited exceptions).”

Participants discussed how some of the items that reached consensus for inclusion represented essential practices more broadly related to transparency or reporting than practices generally considered traditional open science procedures. Following round 3, items that reached consensus were grouped based on these broad categories (traditional open science versus broader transparency practices for reporting) and participants were asked to rank the practices based on how they should be prioritized for programming for inclusion in our proposed dashboard (Table 3). Items with higher scores represent those that were given a higher priority. The top two traditional open science practices by priority were reporting whether clinical trials were registered before they started recruitment, and reporting whether study data were shared openly at the time of publication (with limited exceptions). The top two broader transparency practices by priority were reporting whether author contributions were described, and reporting whether author conflicts of interest were described.

Traditional open science practices

  1. Reporting whether clinical trials were registered before they started recruitment. This practice is required by several organizations and funders internationally. Despite clear mandates for registration, adherence to this practice remains suboptimal [23]. Standardized reporting of trial registration will allow trial outputs to be linked to the registry and help reduce selective outcome reporting and non-reporting.
  2. Reporting whether study data were shared openly at the time of publication (with limited exceptions). Policies encouraging and mandating open data are growing. This practice considers whether there is a statement about open data in a publication. It does not require that this statement indicate that data are in fact publicly available. As the culture around data sharing becomes more normative, it may be of value to reevaluate whether tracking the proportion of openly available data is worthwhile. To do so effectively will require changes in the culture around and use of DOIs. Information on the data available and their usability would be essential to provide quality control and for an individual to determine not just if data can be used, but whether they should be used for the intended purpose. Exceptions would include nonempirical pieces (e.g., a study protocol).
  3. Reporting what proportion of articles are published open access with a breakdown of time delay. This practice reports on the proportion of articles published open access (i.e., publicly available without restriction). Part of this reporting will include the timing of the open access from first publication (e.g., immediate open access versus delayed open access publication).
  4. Reporting whether study code was shared openly at the time of publication (with limited exceptions). Similar to practice 2, this practice considers whether there is a statement about open code sharing in the publication. It does not require that this statement indicate that code is in fact publicly available. As culture around code sharing becomes more normative, information about the quality and type of code shared and compliance to best practices (e.g., FAIR principles) may be valuable to monitor. Exceptions would include nonempirical pieces.
  5. Reporting whether systematic reviews have been registered. This practice is required by some journals and is common within knowledge synthesis projects. Standardized reporting of systematic review registration will allow review outputs to be linked to the registry and help reduce unnecessary duplication in reviews.
  6. Reporting that registered clinical trials were reported in the registry within 1 year of study completion. The practice of reporting trial results in the registry they were first registered in is required by several organizations and funders. This practice would track the proportion of trials in compliance with reporting results within 1 year of study completion.
  7. Reporting whether there was a statement about study materials sharing with publications. This practice considers whether there is a statement about materials sharing with a publication. It does not consider whether or not materials are indeed shared openly. As with data and code sharing, materials sharing is not yet widespread across biomedicine. As a starting point, statements about materials sharing will be monitored, but in time, it may be of value to track the frequency of materials sharing at an institution. This could inform infrastructure needs.
  8. Reporting whether study reporting guideline checklists were used. Reporting guidelines are checklists of essential information to include in a manuscript; these are widely endorsed by medical journals and have been shown to improve the quality of reporting of publications [24]. This item would track whether reporting guidelines were cited in a publication. In the future, tracking actual compliance to reporting guideline items may be more relevant.
  9. Reporting citations to data. This practice monitors whether a given dataset shared by researchers at an institution has received citations in other works. This is a proxy for data reuse and may be a relevant metric to consider alongside others when assessing study impact.
  10. Reporting trial results in a manuscript-style publication (peer reviewed or preprint). This practice would report whether a trial registered on a trial registry had an associated manuscript-style publication within 1 year of study completion. This will include reporting in the form of preprints.
  11. Reporting the number of preprints. This practice reports the frequency of preprints produced at the institution over a given timeframe.
  12. Reporting systematic review results in a manuscript-style publication (peer reviewed or preprint). This practice would report whether a registered systematic review had an associated manuscript-style publication within 1 year of study completion. This will include reporting in the form of preprints.
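To make practice 3 (the proportion of articles published open access, with a time-delay breakdown) concrete, the following is a minimal sketch of how such a metric might be computed once article metadata have been collected. The record fields (`published`, `oa_date`) are illustrative assumptions for this sketch, not part of any dashboard specification or registry schema.

```python
from datetime import date

def oa_breakdown(articles):
    """Classify articles as immediately open, delayed open, or closed,
    and return the overall open access proportion."""
    total = len(articles)
    counts = {"immediate": 0, "delayed": 0, "closed": 0}
    for art in articles:
        oa = art.get("oa_date")  # date the article became openly available, if ever
        if oa is None:
            counts["closed"] += 1
        elif oa <= art["published"]:
            counts["immediate"] += 1
        else:
            counts["delayed"] += 1
    proportion_oa = (counts["immediate"] + counts["delayed"]) / total if total else 0.0
    return proportion_oa, counts

# Made-up example records for illustration only.
articles = [
    {"published": date(2021, 1, 1), "oa_date": date(2021, 1, 1)},  # immediate OA
    {"published": date(2021, 1, 1), "oa_date": date(2022, 1, 1)},  # delayed OA
    {"published": date(2021, 1, 1), "oa_date": None},              # closed
]
prop, counts = oa_breakdown(articles)
```

In practice, a dashboard would populate such records from bibliographic sources rather than hand-built dictionaries, and the delay could be binned more finely (e.g., 6, 12, 24 months).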

Broader transparency practices

  1. Reporting whether author contributions were reported. Journals are increasingly requiring or permitting authors to make statements (e.g., using the CRediT taxonomy) about their role in the publication. This helps to clarify the diversity of contributions each author has made. This practice would track the presence of these statements in publications. Monitoring the use of author contribution statements may help institutions to devise ways to recognize individuals’ skills when hiring and promoting researchers.
  2. Reporting whether author conflicts of interest were reported. Reporting of conflicts of interest is a standard practice at many journals, but this practice is not uniform, with some publications lacking statements altogether. Monitoring conflicts of interest reporting helps to ensure transparency. In the absence of a statement of conflicts of interest, the reader cannot assume none exist. For this reason, we reached consensus that all papers should have such a statement irrespective of whether conflicts exist.
  3. Reporting the use of persistent identifiers when sharing data/code/materials. Persistent identifiers such as DOIs are digital codes for online objects that remain consistent over time. Use of persistent identifiers for research outputs such as data, code, and materials fosters collation and linkage.
  4. Reporting whether ORCID identifiers were reported. ORCID identifiers are persistent researcher identifiers. This practice would track whether publications report these. Knowledge about use of ORCID will help inform iterations of our open science dashboard. While our dashboard will focus at the research institution level, ORCIDs may be relevant to use to collate institution publications, or to produce researcher-level outputs.
  5. Reporting whether data/code/materials are shared with a clear license. This practice monitors whether licenses are used when research outputs like data, code, and materials are shared (e.g., use of creative commons licenses).
  6. Reporting whether research articles include funding statements. Reporting on funding is a standard practice at many journals and required by some funders, but this practice is not uniform, with some publications lacking statements altogether. Monitoring funding statements helps to ensure transparency and provide linkage between funding and research outputs. For this reason, we reached consensus that all papers should have funder statements irrespective of whether funding was received. In the future, knowledge of what types of funding a publication received may foster meta-research on funding allocation and research outputs.
  7. Reporting whether the data/code/materials license is open or not. Among research outputs shared with a license, this practice monitors the proportion of these that are “open” (i.e., publicly available with no restrictions to access when appropriate to the data).
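As a hedged illustration of how the persistent-identifier practice above might be detected automatically, the sketch below scans a data-availability statement for a DOI-like string. The pattern follows the regular expression commonly recommended by Crossref for matching modern DOIs; the statement text is a made-up example, and a real implementation would also need to handle other identifier schemes (e.g., accession numbers, handles).

```python
import re

# Crossref's commonly recommended pattern for modern DOIs.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

def has_persistent_identifier(statement: str) -> bool:
    """Return True if the statement contains at least one DOI-like identifier."""
    return bool(DOI_RE.search(statement))

# Hypothetical data-availability statements.
with_doi = has_persistent_identifier(
    "Data are available on Zenodo at https://doi.org/10.5281/zenodo.1234567."
)
without_doi = has_persistent_identifier("Data available upon request.")
```

A simple presence check like this deliberately says nothing about whether the identifier resolves or the deposit is usable; those would require follow-up queries to the identifier registry.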

Future directions

The next phase of this research program will involve developing the open science dashboard interface and its programming. While we aim to create a fully automated tool, some core open science practices that reached consensus for inclusion in the dashboard may not lend themselves to reliable, automated analysis. For example, the fact that digital identifiers are not widely used on some research outputs (e.g., when sharing code or study materials) may create challenges in accurate measurement. If we find this to be the case, in these instances, we will exclude the open science practice from monitoring. We chose not to restrict Delphi participants to practices that would be easy to automate in the tool—we encouraged participants to “think big.” Ultimately, some items may not be possible to include due to feasibility. We anticipate iterative consultation with the community as we work to develop a dashboard that best meets their needs. As infrastructure and the use of identifiers evolve within the biomedical community, there will be a need to refresh consensus and reconsider the processes used to best automate the core open science practices.
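To make the feasibility question concrete, a check such as practice 6 (results posted in the registry within 1 year of study completion) could in principle be automated once registry dates are available. The sketch below uses hypothetical inputs rather than a real registry API; a production version would fetch completion and results-posting dates from the relevant trial registry.

```python
from datetime import date, timedelta
from typing import Optional

def reported_within_one_year(completion: date,
                             results_posted: Optional[date]) -> bool:
    """True if results were posted in the registry within ~12 months
    of study completion (365 days used as a simple proxy)."""
    if results_posted is None:
        return False  # no results posted at all
    return results_posted <= completion + timedelta(days=365)

# Hypothetical trials: one reported on time, one reported late.
on_time = reported_within_one_year(date(2020, 6, 1), date(2021, 3, 1))
late = reported_within_one_year(date(2020, 6, 1), date(2022, 1, 15))
```

Even this simple rule embeds policy choices (calendar days versus months, how to treat missing completion dates) that would need community agreement before deployment.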

We anticipate that the open science dashboard will serve as a tool for institutions to track their progress in adopting the agreed open science practices, but also to assess their performance relative to existing mandates. For example, the dashboard will enable institutions to monitor their adherence to mandates related to open access publishing, clinical trial registration and reporting, and data sharing, all of which are commonly mandated by funders globally and related stakeholders in the research ecosystem [25–27]. We also anticipate that several of the open science practices included in the dashboard will not reflect practices that are widely performed or mandated. Some items may therefore reflect aspirational practices for the community. The dashboard can be used to benchmark improvements in these areas.

The proposed dashboard is a necessary precursor for providing institutional feedback on performance of the agreed open science practices. As we pilot implementation of the dashboard, we will consider how the tool can provide tailored feedback to individual institutions or distinct settings. The central goal of the dashboard is not to facilitate comparison between institutions (i.e., where adherence to practices can be directly compared within the dashboard across different institutions). This type of ranking is counter to our community-driven initiative, which seeks to provide a tool for institutional-level improvement in open science rather than to pit organizations, which are often situated quite differently, against one another. Our vision is that the tool will not develop to be punitive, competitive, or a prestige indicator, as this is likely to further contribute to the systematic enablement of high-resource institutions. Nonetheless, a core set of agreed practices is helpful for comparative meta-research around open science.

We intend for the dashboard to be implemented at the individual institution level. Understanding a given institution’s setting, current norms, and resource circumstances will be critical to deciding how to best implement the dashboard in that environment. A key step in the program to develop the proposed dashboard will be to carefully consider the appropriateness of the dashboard being publicly available versus hosted internally by biomedical institutions. Preference is likely to vary across institutions based on their circumstances. As we implement the proposed open science dashboard, it will also be important to measure how nuances in language, geographic location, discipline, and other institutional differences impact optimal local adoption. Even subtle differences in understanding of, and experiences with, open science at different institutions may have an important impact on how an eventual dashboard can be implemented to best meet institutional needs while still retaining a core set of practices to monitor.

Over time, we will also need to monitor the dashboard itself. As open science becomes increasingly embedded in the research ecosystem, the core practices of today may differ from those of the future. During implementation, we will evaluate how the tool is affected by subtleties and practical constraints that differ between institutions, countries, and geographical regions (for example, how appropriate the tool is in a Global North versus Global South setting). Addressing these distinct challenges will help to foster harmonization in measuring open science practices in the biomedical community. We will need to monitor and stay abreast of the global community’s needs and practices to ensure the dashboard remains sustainable and relevant over time.


  1. UNESCO Recommendation on Open Science [Internet]. UNESCO. 2020 [cited 2021 Dec 17]. Available from: https://en.unesco.org/science-sustainable-future/open-science/recommendation.
  2. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1:1–9. pmid:33954258
  3. Errington TM, Denis A, Perfito N, Iorns E, Nosek BA. Challenges for assessing replicability in preclinical cancer biology. eLife. 2021 Dec 7;10:e67995. pmid:34874008
  4. Dahlander L, Gann DM. How open is innovation? Res Policy. 2010;39(6):699–709.
  5. Bogers M, Chesbrough H, Moedas C. Open Innovation: Research, Practices, and Policies. Calif Manage Rev. 2018;60(2):5–16.
  6. Government of Canada. Roadmap for Open Science—Science.gc.ca [Internet]. [cited 2020 Sep 16]. Available from: http://science.gc.ca/eic/site/063.nsf/eng/h_97992.html.
  7. Second National Plan for Open Science: INRAE to manage the Recherche Data Gouv national research-data platform [Internet]. INRAE Institutionnel. [cited 2022 Jan 8]. Available from: https://www.inrae.fr/en/news/second-national-plan-open-science-inrae-manage-recherche-data-gouv-national-research-data-platform.
  8. Moher D, Goodman SN, Ioannidis JPA. Academic criteria for appointment, promotion and rewards in medical research: Where’s the evidence? Eur J Clin Invest. 2016;46(5):383–385. pmid:26924551
  9. The San Francisco Declaration on Research Assessment (DORA). Available from: http://www.ascb.org/dora/.
  10. Ali-Khan SE, Harris LW, Gold ER. Motivating participation in open science by examining researcher incentives. eLife. 2017;6:e29319. pmid:29082866
  11. Larivière V, Sugimoto CR. Do authors comply when funders enforce open access to research? Nature. 2018;562(7728):483–486. pmid:30356205
  12. Policy on data, software and materials management and sharing | Wellcome [Internet]. [cited 2018 Jun 19]. Available from: https://wellcome.ac.uk/funding/managing-grant/policy-data-software-materials-management-and-sharing.
  13. Open Access and Altmetrics in the pandemic age: Forecast analysis on COVID-19 literature | bioRxiv [Internet]. [cited 2020 Sep 10]. Available from: https://www.biorxiv.org/content/10.1101/2020.04.23.057307v1.abstract
  14. Kupferschmidt K. ‘A completely new culture of doing research.’ Coronavirus outbreak changes how scientists communicate. Science [Internet]. 2020 Feb 26 [cited 2020 Dec 21]. Available from: https://www.sciencemag.org/news/2020/02/completely-new-culture-doing-research-coronavirus-outbreak-changes-how-scientists.
  15. Prinsen CAC, Vohra S, Rose MR, King-Jones S, Ishaque S, Bhaloo Z, et al. Core Outcome Measures in Effectiveness Trials (COMET) initiative: protocol for an international Delphi study to achieve consensus on how to select outcome measurement instruments for outcomes included in a ‘core outcome set’. Trials. 2014;15(1):247.
  16. Linstone HA, Turoff M. Delphi: A brief look backward and forward. Technol Forecast Soc Change. 2011;78(9):1712–1719.
  17. Dalkey N, Helmer O. An Experimental Application of the Delphi Method to the Use of Experts. Manag Sci. 1963;9(3):458–467.
  18. McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38:655–662. pmid:26846316
  19. Calibrum. DELPHI SURVEYS [Internet]. Calibrum. [cited 2020 Dec 22]. Available from: https://calibrum.com/features.
  20. Video Conferencing, Web Conferencing, Webinars, Screen Sharing [Internet]. Zoom Video. [cited 2020 Dec 22]. Available from: https://zoom.us/.
  21. Pill J. The Delphi method: Substance, context, a critique and an annotated bibliography. Socioecon Plann Sci. 1971;5(1):57–71.
  22. Ross-Hellauer T. What is open peer review? A systematic review [version 2; referees: 4 approved]. F1000Res. 2017;6:588.
  23. Alayche M, Cobey KD, Ng JY, Ardern CL, Khan KM, Chan AW, et al. Evaluating prospective study registration and result reporting of trials conducted in Canada from 2009–2019 [Internet]. medRxiv; 2022 [cited 2022 Oct 25]. Available from: https://www.medrxiv.org/content/10.1101/2022.09.01.22279512v1.
  24. Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1(1):60.
  25. World Medical Association. World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. JAMA. 2013;310(20):2191–2194. pmid:24141714
  26. ICMJE | About ICMJE | Clinical Trials Registration [Internet]. [cited 2022 Mar 17]. Available from: http://www.icmje.org/about-icmje/faqs/clinical-trials-registration/.
  27. Joint statement on public disclosure of results from clinical trials [Internet]. Available from: http://www.who.int/ictrp/results/jointstatement/en/.
  28. French SD, Green SE, O’Connor DA, McKenzie JE, Francis JJ, Michie S, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci. 2012;7(1):38.
