Knowledge, Perceptions and Attitude of Researchers Towards Using ChatGPT in Research

This study investigated the perceptions of Egyptian researchers regarding the utilization of ChatGPT in academic research. The findings indicate that ChatGPT adoption remains in its nascent stages within this cohort. Despite this, awareness of its potential benefits is burgeoning, with many researchers expressing interest in leveraging it to enhance their work.

Despite the presence of earlier chatbot releases, ChatGPT has sparked a notable surge of interest and engagement within academic circles. This is reflected in the relatively high degree of familiarity with ChatGPT among our study participants compared to other chatbots. However, this awareness did not necessarily translate into widespread research utilization. Currently, the primary identified applications of ChatGPT in research involve paragraph rephrasing and reference retrieval. Data analysis also emerged as a potential function, although many participants expressed concerns about the accuracy and reliability of its outputs. Notably, our data revealed an inverse association between age and chatbot use, with younger researchers exhibiting a higher likelihood of engagement. This association can plausibly be attributed to the greater technological fluency and comfort often observed in younger generations, an interpretation further supported by the higher usage observed among participants with prior familiarity with chatbots.

While ChatGPT and other language models present potential benefits for research endeavors, inherent limitations require critical consideration. One primary concern lies in their restricted comprehension of the complex nuances within the published literature. This inadequacy can lead to erroneous analyses and potentially misleading conclusions drawn from the processed information. Furthermore, the absence of robust citation mechanisms within these models creates a significant risk of perpetuating misinformation [15, 16].

This concern is vividly illustrated by an anecdotal incident encountered during the preparation of the present study. The first author of this manuscript queried ChatGPT about a specific research topic of interest. While the model provided a seemingly plausible explanation, its attempts at referencing relevant sources proved unreliable. Of the two purportedly complete references (authors, year, journal, and DOI) offered by ChatGPT, one was entirely fabricated, with a DOI that linked to an unrelated article; the other pointed to an existing publication whose content bore no connection to the initial query and whose accompanying DOI was inaccurate.

While the identified limitations of ChatGPT, particularly its limited understanding of the literature and its unreliable citation mechanisms, pose significant challenges to its widespread adoption in research, we believe that advancements in AI technology and refined training methodologies hold the potential to mitigate these concerns over time. Until such advancements materialize, however, researchers and students should exercise caution when utilizing ChatGPT. Rigorous verification of the quality and accuracy of generated outputs is essential, and its application should be restricted to tasks that place minimal demands on literature analysis or citation accuracy. At present, tasks such as summarizing existing literature, enhancing written content, and conducting basic statistical analyses appear better suited to ChatGPT's capabilities.
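To illustrate the kind of verification we advocate, the sketch below shows one possible way to check a chatbot-supplied reference programmatically, using the public Crossref REST API. This is a minimal illustration rather than part of our study protocol; the DOI, claimed title, and function name are hypothetical placeholders, and a production workflow would likely add fuzzy title matching and author checks.

```python
# Minimal sketch: check whether a chatbot-suggested DOI actually resolves
# in Crossref, and whether the registered title resembles the claimed one.
# Requires the third-party "requests" package (pip install requests).
import requests

def verify_doi(doi: str, claimed_title: str) -> bool:
    """Return True if the DOI exists in Crossref and its registered
    title loosely matches the title the chatbot claimed."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        print(f"{doi}: not found in Crossref (possible fabrication)")
        return False
    titles = resp.json()["message"].get("title", [])
    registered = titles[0] if titles else ""
    # Crude containment check; real use would apply fuzzy string matching.
    match = (claimed_title.lower() in registered.lower()
             or registered.lower() in claimed_title.lower())
    if not match:
        print(f'{doi}: resolves, but to "{registered}" (title mismatch)')
    return match

# Example usage with a placeholder (hypothetical) reference:
verify_doi("10.1000/xyz123", "A study that ChatGPT claimed exists")
```

A check of this kind would have flagged both problematic references in the anecdote above: the fabricated DOI would fail to resolve, and the mismatched one would resolve to an unrelated title.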

More than one-third of participants in our study believed ChatGPT could be designated as an author on scientific publications, provided it contributed meaningfully to the research work. However, roughly half expressed concerns regarding the ethical implications of integrating AI applications into scientific research. More broadly, concerns about the ethical, legal, and social issues (ELSI) surrounding chatbot implementation in research have been raised, even as ChatGPT has already been listed as an author on several articles and preprints [10, 17].

Major publishers have adopted a range of responses to the question of ChatGPT's potential authorship, with some implementing restrictions on listing it as a co-author and others opting for a complete prohibition of its use. Similarly, the use of text generated by ChatGPT within research manuscripts is subject to varying degrees of scrutiny, with some publishers imposing outright bans and others permitting its use for stylistic improvements under specific conditions, such as excluding critical tasks like data analysis and interpretation and mandating transparent disclosure of its involvement [18]. A leading plagiarism detection software company has unveiled a novel technology capable of recognizing AI-assisted writing, including texts produced by ChatGPT [19].

The diverse array of responses to ChatGPT's utilization in research necessitates closer examination. In this context, it is pertinent to consider the International Committee of Medical Journal Editors (ICMJE) guidelines, which stipulate four essential criteria for authorship in scientific publications and academic works: (1) conceptualization and design, (2) data collection, analysis, and interpretation, (3) substantial contribution to writing, drafting, or critical revision of the intellectual content, and (4) final approval of the version intended for publication [20, 21]. Individuals who do not meet these criteria should be recognized solely in the acknowledgments section of the publication [21]. Furthermore, all authors bear the collective responsibility to ensure that any concerns regarding the accuracy or integrity of any aspect of the work are appropriately investigated and satisfactorily addressed [22]. Ethical and responsible authorship hinges upon three fundamental pillars: truthfulness, ensuring no falsity or misrepresentation is present; trustworthiness, demanding that authors diligently strive to minimize bias; and fairness, upholding objectivity and impartiality throughout the research process. Accountability, ethical conduct, and independence are further requisites for authors to fulfill their obligations [22, 23].

Based on the established criteria for authorship outlined above, two primary arguments preclude listing ChatGPT as an author on scientific publications. First, its capabilities do not align with the aforementioned requirements. Second, and more importantly, ChatGPT lacks the capacity to be held accountable for the presented work, a fundamental requirement for authorship as stipulated by the ICMJE guidelines, which state that “Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship.” [21].

In line with the ICMJE authorship criteria, the World Association of Medical Editors (WAME)'s recent recommendations on AI-assisted writing explicitly deny chatbots the status of author. This exclusion stems from their inability to fulfill crucial authorship responsibilities, such as approving the final version before publication, ensuring the work's integrity and accuracy, and comprehending and legally signing the conflict-of-interest statement. Consequently, WAME emphasizes that authors bear the ultimate responsibility for the accuracy of any material generated by a chatbot and included in their publications [24].

As technological advancements continue, chatbots may gradually acquire the capability to perform more complex research tasks and, consequently, raise the question of their accountability for their actions. This potential scenario necessitates a critical re-examination of authorship guidelines to address the issue and formulate clear recommendations regarding the attribution of authorship in such situations.

The prospect of widespread utilization of ChatGPT in research paper drafting also raises significant ethical concerns about text similarity among papers addressing the same field, which could manifest as high plagiarism rates flagged by detection software. Additionally, the possible designation of ChatGPT as a co-author presents a novel challenge for the research community, sparking debate between supporters and opponents [25, 26]. Furthermore, transparency regarding AI-generated content within research outputs necessitates clear disclosure practices [27].

A systematic review investigating the potential and pitfalls of ChatGPT in healthcare education and research found that concerns surrounding its use were prevalent in over 90% of analyzed publications. These concerns encompassed ethical considerations, copyright and plagiarism issues, lack of originality, inaccurate content, limited knowledge base, incorrect citations, and a propensity for “artificial hallucination” – the generation of misleading or factually incorrect outputs by AI models [28]. This phenomenon of AI-generated “hallucinations,” particularly pronounced in models trained on extensive unsupervised data, underscores the crucial role of human evaluation in ensuring the accuracy and validity of generated content [29].

Artificial intelligence (AI) is increasingly permeating diverse medical fields, exhibiting promising potential in areas such as basic research, disease diagnosis, patient risk identification, drug discovery, and clinical trials [30, 31]. The integration of AI into healthcare also raises a multitude of social concerns, foremost among them the anxiety that AI could replace doctors [32]. This concern is particularly salient for diagnosticians such as radiologists and pathologists, whose workflows may be significantly impacted by AI technologies. While complete automation may not be imminent, the rapid advancements in AI necessitate exploration of the timeline for transitioning to semi-autonomous and eventually fully autonomous diagnostic systems [32].

The emergence of advanced language models like ChatGPT raises analogous questions about their potential roles in research. Specifically, their capabilities can blur the lines of traditional researcher functions, leading to concerns about replacing or significantly automating tasks currently performed by human researchers and supporting personnel, such as data analysts and language editors. Similar anxieties regarding potential job displacement by AI have surfaced in other fields, notably among programmers concerned about models like Google DeepMind's AlphaCode [33, 34]. While current evidence suggests that AI is unlikely to entirely replace researchers in the near future, its growing capabilities necessitate a shift in focus from competition to collaboration. Researchers in the coming years should prioritize adapting to and effectively integrating AI into their workflows, rather than viewing it as a threat.

A key limitation of our study lies in the sampling and recruitment methods, as reliance on self-selection introduces the possibility of non-representative data. This potential bias, whereby only individuals already interested in the topic may have participated, limits the generalizability of our findings and necessitates caution in interpreting them.
