IADR and AADOCR Policy Statement on Ethics in Artificial Intelligence (AI) in Dental, Oral, and Craniofacial Research
J. Cunha-Cruz, N. Dame-Teixeira, M. Daou, I. Garcia, C. Godoy, P. Gupta, A. Hbibi, G. Kotsakis, S. Naavaal, B.F. Nicolau, L. Pinzon, R. Sherwood, A. Stavropoulos, M. Tatullo, L.M.A. Tenuta, O. Uti, M. Charles-Ayinde, and C. Fox.
Artificial intelligence (AI) is rapidly being integrated into dental, oral, and craniofacial research and clinical practice, with applications including image analysis for early disease detection, predictive modeling of treatment outcomes, personalized oral health care planning, large-scale data mining for epidemiological studies, and the writing and reviewing of research reports. In light of this, the IADR and AADOCR affirm their commitment to its ethical implementation. This policy aligns with IADR’s mission to drive dental, oral, and craniofacial research for health and well-being worldwide and reflects its core values of scientific excellence, social responsibility, and commitment to a diverse and inclusive scientific community. Accordingly, the IADR and AADOCR endorse the following recommendations for the ethical use of AI in dental, oral, and craniofacial research and practice:
1. Uphold Core Ethical Principles
Artificial Intelligence (AI) is defined as the capability of a device to perform functions that are normally associated with human intelligence, such as reasoning, learning, and self-improvement1. Generative AI has the capability to generate new content, including text, code, imagery, video, and speech, based on human prompts2. AI technologies in dental research and practice must be developed and applied according to foundational principles of transparency, integrity, accountability, equity, privacy, and human oversight. AI tools should be explainable and interpretable for researchers, clinicians, patients, and research participants, supporting rather than replacing clinical judgment and research conduct. AI must be designed and used to promote human well-being, protect human dignity and privacy, and uphold professional and research integrity3,4,5.
2. Recognize and Respect the Rights and Empowerment of Patients and Research Participants
IADR and AADOCR affirm a commitment to upholding the principles of autonomy, beneficence, non-maleficence, and justice for patients and research participants involved in AI-related dental, oral, and craniofacial research and practice. This includes:
(i). Ensuring informed consent processes explicitly convey the role, benefits, and limitations of AI technologies in research and clinical care, empowering individuals to make knowledgeable decisions3,4,5. Where feasible and appropriate, efforts should be made to re-consent participants or provide notification when their data is repurposed for AI-driven analyses, especially if the original consent did not explicitly cover such uses.
(ii). Providing clear communication and accessible explanations about AI’s function and implications, tailored to the needs of different stakeholders3,4.
(iii). Guaranteeing transparency about data collection, privacy protections, secondary uses, and potential risks3,4.
(iv). Implementing rigorous confidentiality and data security measures in compliance with relevant local and international laws3,4.
(v). Minimizing risks and promoting equitable benefit sharing, with heightened attention to vulnerable and underserved populations, including the provision of tools to empower patients to engage with AI-driven dental care and research3,4.
(vi). Ensuring accountability through independent ethical review and oversight from Research Ethics Committees or Institutional Review Boards5,6.
(vii). Engaging patients, research participants, and community representatives early in the design, implementation, and evaluation of AI tools, where applicable, to ensure alignment with user values, needs, and expectations.
3. AI as a Data Analytic Tool and Unique Ethical Challenges
AI should be recognized as a powerful data analytic tool integral to dental research, subject to the same standards of scientific rigor, reproducibility, and ethical review as other research methodologies. However, AI’s scale, autonomy, and opacity introduce distinct challenges that require specific ethical safeguards, transparency, and ongoing vigilance3,4,5. To this end, researchers and institutions should:
(i). Follow recognized AI reporting guidelines and clearly acknowledge and differentiate any AI-generated data from real-world data to prevent misinterpretation7.
(ii). Protect the accuracy, reliability, and confidentiality of datasets, including sensitive clinical, research, and educational records, while ensuring compliance with local and international data protection standards.
(iii). Address risks of algorithmic bias by promoting diverse and representative datasets, while requiring AI models to be transparent, explainable, and reproducible to support equitable outcomes.
(iv). Ensure AI supports, rather than replaces, professional and scholarly judgment, with clear accountability structures and human review of AI-generated outputs.
(v). Provide safeguards against the inappropriate use of AI in academic settings, and equip trainees and professionals with training to apply AI responsibly and ethically in their work.
4. Responsible Governance, Continuous Monitoring, and Implementation
Institutions and governments should establish governance frameworks ensuring responsible AI use in research and practice, which include:
(i). Robust guidelines for ethical data sourcing, consent, and privacy protection3,4.
(ii). Promotion of inclusive, representative datasets alongside diverse, multidisciplinary research teams, employing participatory design approaches that involve stakeholders from diverse and underserved communities to reduce bias and enhance fairness3,4.
(iii). Rigorous validation and continuous post-deployment monitoring of AI tools’ safety, effectiveness, fairness, and performance in real-world conditions, including mechanisms to detect model drift and rapidly respond to any ethical or safety concerns8,9,10.
(iv). Training and capacity-building initiatives for trainees and professionals focused on ethical AI use and the protection of patients and research participants3,4.
(v). Consideration of the environmental sustainability of AI technologies, promoting efforts to minimize computational resource consumption and carbon footprint3.
5. Interdisciplinary Collaboration for Systemic Impact
Dental research societies should collaborate with health, scientific, regulatory, and policy communities to develop evidence, policies, and guidelines on AI ethics. This collaboration should aim to:
(i). Harmonize AI ethical frameworks across dental and medical disciplines to enhance interdisciplinary collaborations.
(ii). Address social determinants of health and mitigate structural inequities related to AI implementation3,4.
(iii). Advance universal health coverage and oral health equity in the AI era3,4.
(iv). Identify and address emerging research gaps concerning AI’s impact on oral health outcomes and ethical standards3,4.
(v). Enhance international cooperation with low-resource settings to promote global digital and scientific equity in the AI era.
(vi). Develop interdisciplinary task forces that include researchers, ethicists, regulators, and tech developers to co-develop agile guidelines and anticipate ethical challenges in emerging AI applications.
6. Ethical Use of AI in Scientific Publication
The use of AI in the writing, review, and publication of scientific manuscripts must uphold the highest standards of publication ethics and integrity, such as those outlined in the Committee on Publication Ethics (COPE) guidelines11. In addition to adhering to journal-specific requirements and reporting guidelines, authors, editors/reviewers, and publishers should:
(i). Ensure human accountability by having contributors review and verify all AI-assisted content for accuracy, originality, and integrity, and by recognizing that AI tools or models do not meet authorship criteria and cannot be credited as authors11.
(ii). Ensure reproducibility and transparency in AI use in manuscript preparation by explicitly disclosing and attributing any application of AI in creation, analysis, review, or editorial processes1. Describe the purpose, scope, known limitations, risks, and potential biases, and maintain records as required1.
(iii). Safeguard confidentiality and fairness by responsibly using AI in handling unpublished work and peer review processes according to editorial policies, ensuring privacy, impartiality, and integrity are maintained.
(iv). Uphold ethical standards in citations by ensuring that all references generated or suggested with AI are accurate, verifiable, and appropriately sourced, avoiding fabricated or misattributed references.
(v). Promote ongoing education, policy compliance, and adaptability so all stakeholders remain informed of evolving ethical standards and AI capabilities and regularly review and update policies to reflect new developments.
(vi). Use AI responsibly to support, not substitute, critical thinking, domain expertise, and scholarly judgment, ensuring disclosure of AI assistance. Peer reviewers must not use AI tools in ways that compromise confidentiality or ethical standards, and should not process unpublished or sensitive materials with AI.
7. Compliance with Regulatory and International Ethical Standards
AI applications must comply with all applicable regulatory frameworks and standards, including evolving guidelines such as the FDA’s Artificial Intelligence/Machine Learning (AI/ML) regulatory framework8, the European Union’s AI Act9, international ethical instruments such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence10, and publication ethics standards from bodies such as COPE11. Adherence to these frameworks reinforces accountability, safety, and public trust.
Statement of Support
Based on the best available evidence, global ethical frameworks, and established standards for human subjects research, IADR and AADOCR support the responsible, transparent, and equitable use of AI in dental, oral, and craniofacial research and practice. This includes an enduring commitment to uphold ethical standards, safeguard the rights and welfare of patients and research participants, promote diversity and inclusivity, ensure continuous oversight, embrace sustainability, and maintain scientific integrity.
*To ensure ongoing relevance as AI technologies and norms evolve, these guidelines will be periodically reviewed and updated to reflect best practices in research, clinical care, and publication ethics.
Adopted 2026
References
1. Schwendicke F, Singh T, Lee J-H, Gaudin R, Chaurasia A, Wiegand T, Uribe S, Krois J. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J Dent. 2021;107:103610.
2. Schwendicke F, Sidhu SK, Ferracane JL, Tichy A, Jakubovics NS. Generative AI: Opportunities, Risks, and Responsibilities for Oral Sciences. J Dent Res. 2025;0(0).
3. American Medical Association. Advancing health care AI through ethics, evidence and equity [Internet]. Chicago: AMA; 2025 [cited 2025 Jul 30]. Available from: https://www.ama-assn.org/practice-management/digital-health/advancing-health-care-ai-through-ethics-evidence-and-equity
4. World Health Organization. Ethics and governance of artificial intelligence for health [Internet]. Geneva: WHO; 2021 [cited 2025 Jul 30]. Available from: https://www.who.int/publications/i/item/9789240029200
5. International Association for Dental Research. IADR Code of Ethics [Internet]. [cited 2025 Jul 30]. Available from: https://www.iadr.org/resources/code-of-ethics
6. Coleman CH, Khadem A, Reeder JC, et al. A World Health Organization tool for assessing research ethics oversight systems. Bull World Health Organ. 2025;103(5):403-409. doi:10.2471/BLT.24.292219
7. Blau W, Cerf VG, Enriquez J, Francisco JS, Gasser U, Gray ML, Greaves M, Grosz BJ, Jamieson KH, Haug GH, Hennessy JL, Horvitz E, Kaiser DI, London AJ, Lovell-Badge R, McNutt MK, Minow M, Mitchell TM, Ness S, Parthasarathy S, Perlmutter S, Press WH, Wing JM, Witherell M. Protecting scientific integrity in an age of generative AI. Proc Natl Acad Sci. 2024;121(22):e2407886121.
8. U.S. Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) [Internet]. Silver Spring (MD): FDA; 2021 [cited 2025 Jul 30]. Available from: https://www.fda.gov/media/122535/download
9. European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) [Internet]. Brussels: EC; 2021 [cited 2025 Jul 30]. Available from: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
10. United Nations Educational, Scientific and Cultural Organization. Recommendation on the ethics of artificial intelligence [Internet]. Paris: UNESCO; 2021 [cited 2025 Jul 30]. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000373434
11. Committee on Publication Ethics (COPE). COPE position - Authorship and AI - English [Internet]. 2025 [cited 2025 Jul 30]. Available from: https://doi.org/10.24318/cCVRZBms