
Restor Dent Endod : Restorative Dentistry & Endodontics

OPEN ACCESS

Editorial

Artificial intelligence hallucinations in endodontics: implications for scientific integrity and clinical decision-making

Emmanuel João Nogueira Leal da Silva1,2,*, Fernanda Nehme Simão Jorge Riche2
Restor Dent Endod [Epub ahead of print]
DOI: https://doi.org/10.5395/rde.2026.51.e18
Published online: April 7, 2026

1Postgraduate Program in Translational Biomedicine (BIOTRANS), Grande Rio University (UNIGRANRIO), Rio de Janeiro, RJ, Brazil

2Department of Endodontics, Rio de Janeiro State University (UERJ), Rio de Janeiro, RJ, Brazil

*Correspondence to: Emmanuel João Nogueira Leal da Silva, PhD, Department of Endodontics, Rio de Janeiro State University (UERJ), Rua Herotides de Oliveira, Blvd. 28 de Setembro, 157, Vila Isabel, RJ 20551-030, Brazil. Email: nogueiraemmanuel@hotmail.com

Citation: Silva EJNL, Riche FNSJ. Artificial intelligence hallucinations in endodontics: implications for scientific integrity and clinical decision-making. Restor Dent Endod 2026;51(2):e18.

• Received: March 4, 2026   • Accepted: March 10, 2026

© 2026 The Korean Academy of Conservative Dentistry

This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Artificial intelligence (AI) is rapidly transforming how knowledge is generated, accessed, and communicated. As with previous technological shifts, AI should not be viewed as a replacement for human expertise, but as a tool that can expand creativity, support problem-solving, and facilitate learning. However, its integration into scientific and clinical practice requires careful scrutiny to preserve ethical standards, originality, and critical thinking. At the same time, the rapid incorporation of AI tools into scientific writing has begun to raise important concerns for journals, editors, and reviewers regarding the reliability of submitted manuscripts and the integrity of the scientific record.
Unlike human intelligence, which arises from complex biological processes, large language models operate through statistical pattern recognition. These systems are probabilistic models trained to predict the most likely sequence of words based on vast datasets. Consequently, they do not truly “understand” concepts; instead, they simulate knowledge by reproducing linguistic patterns. Their outputs are therefore constrained by the quality of training data, algorithmic design, and even commercial or institutional influences that may shape which information is emphasized or omitted. These limitations raise important concerns about intellectual autonomy and the preservation of analytical reasoning in scientific environments.
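The next-token mechanism described above can be illustrated with a deliberately minimal sketch. Everything in it — the bigram vocabulary, the probability values, the function names — is invented purely for illustration (real models operate over billions of parameters, not a lookup table); the point is only that generation selects the statistically likeliest continuation, with no notion of factual truth:

```python
# Toy illustration of next-token prediction: the "model" always picks the
# most statistically likely continuation, with no notion of factual truth.
# The probability table below is invented for illustration only.
NEXT_TOKEN_PROBS = {
    ("the", "root"): {"canal": 0.6, "cause": 0.3, "beer": 0.1},
    ("root", "canal"): {"treatment": 0.7, "therapy": 0.25, "recipe": 0.05},
}

def predict_next(prev: str, curr: str) -> str:
    """Return the highest-probability next token for a two-token context."""
    probs = NEXT_TOKEN_PROBS.get((prev, curr), {})
    if not probs:
        return "<unknown>"
    return max(probs, key=probs.get)

def generate(start: tuple, steps: int) -> list:
    """Greedily extend a two-token prompt: fluent output, chosen by
    probability alone -- which is why a plausible-sounding but false
    continuation is just as reachable as a true one."""
    tokens = list(start)
    for _ in range(steps):
        nxt = predict_next(tokens[-2], tokens[-1])
        if nxt == "<unknown>":
            break
        tokens.append(nxt)
    return tokens

print(generate(("the", "root"), 2))
```

A fluent sequence emerges because each step maximizes linguistic likelihood; nothing in the procedure checks whether the resulting statement is true, which is the structural root of hallucination.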
One of the most widely discussed limitations of large language models is the phenomenon known as “hallucination”. Originally described in research on neural machine translation and later adopted to characterize generative AI systems, hallucination refers to the confident production of factually incorrect or unsupported information [1,2]. Because these models prioritize linguistic plausibility rather than factual verification, they may generate coherent yet erroneous responses instead of acknowledging uncertainty. As a result, outputs may appear convincing and well-structured while containing fabricated details, inaccurate claims, or nonexistent references. In healthcare, such hallucinations extend beyond technical inaccuracies; they can reinforce misinformation, create misplaced confidence in algorithmic authority, and ultimately influence professional judgment, potentially affecting diagnostic reasoning and treatment decisions.
In endodontics, the implications of hallucinated information can be particularly concerning. In the scientific domain, increasing reliance on AI for drafting manuscripts, summarizing literature, or assisting in the development of research protocols introduces the risk of fabricated references, distorted interpretations, or oversimplified representations of complex findings. In this context, manipulated citations generated by AI in endodontic research can distort literature reviews, mislead readers, and compromise the integrity of scientific discussion. Moreover, the literature is beginning to witness the emergence of highly speculative conceptual publications proposing novel terminologies, theoretical constructs, or mechanistic explanations that remain entirely unvalidated. When such frameworks are generated or amplified through AI-assisted writing, they may create an illusion of scientific novelty while lacking biological plausibility or empirical support. Without rigorous verification, these ideas may introduce misleading concepts into the literature. Once inaccurate information enters the scientific record, it may propagate through secondary citations, gradually contaminating the evidence base and conferring unwarranted legitimacy to incorrect claims. Because academic publications form the foundation for guidelines, education, and clinical decision-making, the introduction of hallucinated content threatens not only individual studies but also the reliability of the broader knowledge framework that supports the discipline. Editors and reviewers must therefore remain particularly vigilant when evaluating manuscripts that may involve AI-assisted writing or AI-generated content.
The potential consequences extend to clinical practice. Endodontic diagnosis relies on the careful integration of clinical findings, radiographic interpretation, and patient history. AI-generated responses that present confident but oversimplified conclusions risk masking uncertainty and discouraging differential diagnosis. Similarly, treatment suggestions generated without appropriate contextual understanding may promote generalized protocols that overlook anatomical variability and patient-specific conditions. When such outputs are accepted without verification, clinical reasoning and ultimately patient care may be compromised.
Addressing these challenges requires more than technological refinement; it demands a culture of critical appraisal. AI-generated information must be interpreted through the lens of professional expertise and verified against primary scientific evidence. Cross-checking claims, maintaining rigorous peer review, and restricting AI to supportive roles—such as information retrieval or hypothesis generation—are essential safeguards.
AI undoubtedly offers important opportunities to enhance access to information, accelerate knowledge exchange, and support education. Yet its limitations, including hallucinations, bias, and the risk of misinformation, require sustained vigilance. Dentistry must therefore engage with AI in a balanced and reflective manner: embracing its benefits while preserving the methodological rigor and critical reasoning that underpin evidence-based practice. Safeguarding the reliability of the scientific record must remain a shared responsibility among authors, reviewers, and editors as AI tools become increasingly integrated into the research ecosystem. Only through such collective vigilance can AI strengthen—rather than undermine—the scientific and clinical foundations of endodontic practice.

CONFLICT OF INTEREST

Emmanuel João Nogueira Leal da Silva is an Associate Editor of Restorative Dentistry and Endodontics and was not involved in the review process of this article. The authors declare no other conflicts of interest.

FUNDING/SUPPORT

None.

AUTHOR CONTRIBUTIONS

Conceptualization: Silva EJNL. Project administration: Silva EJNL, Riche FNSJ. Writing - original draft: Silva EJNL, Riche FNSJ. Writing - review & editing: Silva EJNL, Riche FNSJ. All authors read and approved the final manuscript.

REFERENCES

1. Koehn P, Knowles R. Six challenges for neural machine translation. In: Luong T, Birch A, Neubig G, Finch A, eds. Proceedings of the First Workshop on Neural Machine Translation. Vancouver, Canada: Association for Computational Linguistics; 2017. p. 28-39.
2. Dziri N, Milton S, Yu M, Zaiane O, Reddy S. On the origin of hallucinations in conversational models: is it the datasets or the models? arXiv [Internet]. 2022 [cited 2026 Mar 6]. Available from: https://arxiv.org/abs/2204.07931
