"[R]ecently I’ve enjoyed developing our Health and Wellbeing collection, creating some additional resources in the form of wellbeing bags for staff to borrow." This is a short mention in this blogpost - just wondered if it's something we could think about?
This article provides a brief overview of ChatGPT's capabilities for medical writing and their implications for academic integrity. It lists AI generative tools, describes their common uses in medical writing, and surveys tools for detecting AI-generated text. It also offers recommendations to policymakers, information professionals, and medical faculty on the constructive use of generative AI and related technology, and highlights the role of health sciences librarians and educators in deterring students from submitting ChatGPT-generated text in their academic work.
Conclusion
Unexpectedly, Grammarly was the most effective of the tools compared at detecting plagiarism in AI-generated articles, possibly because the different tools draw on different data sources. This highlights the potential for researchers to make use of lower-cost plagiarism detection tools.
In November, we held our inaugural gathering, welcoming 20 colleagues from various NHS trusts. Included as a reminder and inspiration in case anyone from our team is attending, or would consider attending.
Conclusion: The results of this study show heightened complexity in ChatGPT-generated SCI texts, exceeding recommended readability levels for health communication. ChatGPT currently cannot substitute for comprehensive medical consultations. Text quality could be improved through reliance on credible sources, the establishment of a scientific board, and collaboration with expert teams. Addressing these concerns could improve text accessibility, empowering patients and facilitating informed decision-making in SCI.
In summary, despite errors and miss rates with the current platform, systematic literature search using AI appears very promising, eliminating hours of human labor while improving search quality. As AI technology continuously evolves, efforts to refine and improve AI-based literature search platforms should be continued.
Results: The 100 systematic review articles contained 453 database searches. Only 22 (4.9%) database searches reported all six PRISMA-S items. Forty-seven (10.4%) database searches could be reproduced within 10% of the number of results from the original search; 6 searches differed by more than 1000% between the originally reported number of results and the reproduction. Only one systematic review article provided the necessary search details to be fully reproducible.
On 1 August, Dutch publishing giant Elsevier released a ChatGPT-like artificial-intelligence (AI) interface for some users of its Scopus database, and British firm Digital Science announced a closed trial of an AI large language model (LLM) assistant for its Dimensions database. Meanwhile, US firm Clarivate says it’s working on bringing LLMs to its Web of Science database.
Although search engines sometimes highlight specific search results relevant to health, many resources remain underpromoted.5 AI assistants may have a greater responsibility to provide actionable information, given their single-response design. Partnerships between public health agencies and AI companies must be established to promote public health resources with demonstrated effectiveness. For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies may lack the subject matter expertise to make these recommendations themselves, and these resources could be incorporated into fine-tuning responses to public health questions. New regulations could encourage AI companies to adopt government-recommended resources, for example by limiting liability for companies that implement these recommendations, since they may not be protected by 47 US Code § 230.
We examined how feelings shape people’s organizing and deleting practices, focusing on four affective aspects: anxiety, self-efficacy, belonging, and loss of control. We hypothesized that these affective aspects would predict the extent to which people utilize organizing and deleting practices. Access via CILIP subscription
Results
We included 79 studies and identified themes, including question realism, answer reliability, answer utility, clinical specialism, systems, usability, and evaluation methods. Clinicians’ questions used to train and evaluate QA systems were restricted to certain sources, types and complexity levels. No system communicated confidence levels in the answers or sources. Many studies suffered from high risks of bias and applicability concerns. Only 8 studies completely satisfied any criterion for clinical utility, and only 7 reported user evaluations. Most systems were built with limited input from clinicians.
Discussion
While machine learning methods have led to increased accuracy, most studies imperfectly reflected real-world healthcare information needs. Key research priorities include developing more realistic healthcare QA datasets and considering the reliability of answer sources, rather than merely focusing on accuracy.
Today, a perceived lasting legacy of the Covid-19 pandemic is that more information literacy instruction happens online than before the pandemic. This includes ongoing adoption of synchronous instruction in course-based and co-curricular contexts, and sustained integration of asynchronous learning resources, either standalone or as fundamental elements of what is described as a growing, more modular, scalable approach to information literacy instruction. At the same time, the role of in-person information literacy instruction has by no means been forgotten: all OCUL libraries offered a majority of instruction this way by Fall 2022, when pandemic restrictions eased. However, an ongoing legacy of the pandemic is a lasting change in how librarians teach, and in the collaborative partnerships shaping this instruction, which increasingly draw on a broader range of modalities to offer students a more flexible learning environment.
ChatGPT provides different answers to similar questions depending on the prompts used, and patients may lack the expertise to prompt ChatGPT to elicit the best answer. (Prompting large language models has been shown to be a learnable skill.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; it will improve in future, but today AI answers require physician expertise to interpret for patients.
A scoping review to determine how health service librarians instruct practicing clinicians and health sciences faculty in support of their continuing education.
Short, pithy, and practical article about the uses, and pitfalls, of AI. It includes some helpful suggestions about how to start using it, and some of the issues to look out for.
Floating, in its simplest terms, means that an item stays at whichever branch of the library network it is returned to. It is fascinating how such a simple change can produce such large overall and knock-on effects.