Ethical Considerations in the Use of AI Tools Like ChatGPT and Gemini in Academic Research

Bornali Konwar

Abstract

The rapid integration of generative artificial intelligence (AI) tools, such as ChatGPT and Gemini, into academic research has transformed scholarly workflows, offering unprecedented efficiency in tasks like literature reviews, data analysis, and manuscript drafting. However, their adoption raises significant ethical concerns, including issues of authorship, plagiarism, data integrity, and bias perpetuation. This paper explores the ethical implications of using AI tools in research, drawing on Elsevier’s Responsible AI Principles, stakeholder theory, and empirical studies. It examines challenges such as the risk of fabricated references, lack of transparency in AI-generated outputs, and potential inequities in access to advanced AI tools. Recommendations are provided for researchers, institutions, and publishers to ensure ethical use, including transparent disclosure of AI involvement, rigorous validation of outputs, and adherence to academic integrity standards. This study underscores the need for balanced integration of AI to enhance research while safeguarding ethical principles.

Author Biography

Bornali Konwar

Librarian, Borholla College, Jorhat, Assam