Showing 1 to 10 of 479 records (fetched in 6.07 seconds)
Title: ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns.
Authors: Sallam, M
Journal: Healthcare (Basel, Switzerland)
Publication Date: 19 Mar 2023
Date Added to PubMed: 30 Mar 2023
Abstract: ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Using the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhancing research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations. As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.
Link: http://doi.org/10.3390/healthcare11060887
Title: Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial.
Authors: Meskó, B
Journal: Journal of medical Internet research
Publication Date: 4 Oct 2023
Date Added to PubMed: 4 Oct 2023
Abstract: Prompt engineering is a relatively new field of research that refers to the practice of designing, refining, and implementing prompts or instructions that guide the output of large language models (LLMs) to help in various tasks. With the emergence of LLMs, the most popular one being ChatGPT, which attracted over 100 million users in only 2 months, artificial intelligence (AI), especially generative AI, has become accessible to the masses. This is an unprecedented paradigm shift, not only because the use of AI is becoming more widespread but also because of the possible implications of LLMs in health care. As more patients and medical professionals use AI-based tools, LLMs being the most popular representatives of that group, it seems inevitable to address the challenge of improving this skill. This paper summarizes the current state of research about prompt engineering and, at the same time, aims to provide practical recommendations for the wide range of health care professionals to improve their interactions with LLMs.
Link: http://doi.org/10.2196/50638
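As a concrete, hypothetical illustration of the structured-prompt practice the tutorial above discusses, the sketch below assembles a prompt from a role, context, task, and explicit output constraints. The helper function and the clinical scenario are illustrative assumptions, not content from the paper.

```python
# Minimal sketch of a role/context/task/constraints prompt pattern.
# The function name and the clinical scenario are illustrative assumptions.

def build_clinical_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt string for a large language model."""
    lines = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_clinical_prompt(
    role="You are an assistant that drafts patient-friendly explanations.",
    context="The reader is a newly diagnosed adult with type 2 diabetes.",
    task="Explain what HbA1c measures and why it is monitored.",
    constraints=[
        "Use plain language at roughly an 8th-grade reading level.",
        "Keep the answer under 150 words.",
        "Remind the reader to confirm details with their clinician.",
    ],
)
print(prompt)  # this string would then be sent to an LLM of choice
```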
Title: The Effectiveness of Artificial Intelligence Conversational Agents in Health Care: Systematic Review.
Authors: Milne-Ives, M; de Cock, C; Lim, E; Shehadeh, MH; de Pennington, N; Mole, G; Normando, E; Meinert, E
Journal: Journal of medical Internet research
Publication Date: 22 Oct 2020
Date Added to PubMed: 23 Oct 2020
Abstract: The high demand for health care services and the growing capability of artificial intelligence have led to the development of conversational agents designed to support a variety of health-related activities, including behavior change, treatment support, health monitoring, training, triage, and screening support. Automation of these tasks could free clinicians to focus on more complex work and increase the accessibility to health care services for the public. An overarching assessment of the acceptability, usability, and effectiveness of these agents in health care is needed to collate the evidence so that future development can target areas for improvement and potential for sustainable adoption. This systematic review aims to assess the effectiveness and usability of conversational agents in health care and identify the elements that users like and dislike to inform future research and development of these agents. PubMed, Medline (Ovid), EMBASE (Excerpta Medica dataBASE), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Web of Science, and the Association for Computing Machinery Digital Library were systematically searched for articles published since 2008 that evaluated unconstrained natural language processing conversational agents used in health care. EndNote (version X9, Clarivate Analytics) reference management software was used for initial screening, and full-text screening was conducted by 1 reviewer. Data were extracted, and the risk of bias was assessed by one reviewer and validated by another. A total of 31 studies were selected and included a variety of conversational agents, including 14 chatbots (2 of which were voice chatbots), 6 embodied conversational agents (3 of which were interactive voice response calls, virtual patients, and speech recognition screening systems), 1 contextual question-answering agent, and 1 voice recognition triage system. Overall, the evidence reported was mostly positive or mixed. Usability and satisfaction performed well (27/30 and 26/31), and positive or mixed effectiveness was found in three-quarters of the studies (23/30). However, there were several limitations of the agents highlighted in specific qualitative feedback. The studies generally reported positive or mixed evidence for the effectiveness, usability, and satisfactoriness of the conversational agents investigated, but qualitative user perceptions were more mixed. The quality of many of the studies was limited, and improved study design and reporting are necessary to more accurately evaluate the usefulness of the agents in health care and identify key areas for improvement. Further research should also analyze the cost-effectiveness, privacy, and security of the agents. RR2-10.2196/16934.
Link: http://doi.org/10.2196/20346
Title: Mitigating Harms of Social Media for Adolescent Body Image and Eating Disorders: A Review.
Authors: Mazzeo, SE; Weinstock, M; Vashro, TN; Henning, T; Derrigo, K
Journal: Psychology research and behavior management
Publication Date: 1 Dec 2024
Date Added to PubMed: 9 Jul 2024
Abstract: Social media has negative effects on adolescent body image and disordered eating behaviors, yet adolescents are unlikely to discontinue engaging with these platforms. Thus, it is important to identify strategies that can reduce the harms of social media on adolescent mental health. This article reviews research on social media and adolescent body image, and discusses strategies to reduce risks associated with social media use. Topics covered include interventions aimed at mitigating social media's negative impacts, the body-positivity movement, and policies regulating adolescents' social media use. Overall, this review highlights specific factors (such as staffing, duration, modality, facilitator training, and cultural sensitivity) to consider when designing and implementing social media interventions targeting adolescents. This review also discusses psychosocial outcomes associated with body positivity on social media. Finally, policy efforts to reduce the negative impact of social media on adolescents' body image and eating behaviors are described. In sum, there is a strong need to conduct further research identifying optimal approaches to reduce the harms of social media for adolescent body image and eating behavior.
Link: http://doi.org/10.2147/PRBM.S410600
Title: Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review.
Authors: Laymouna, M; Ma, Y; Lessard, D; Schuster, T; Engler, K; Lebouché, B
Journal: Journal of medical Internet research
Publication Date: 23 Jul 2024
Date Added to PubMed: 23 Jul 2024
Abstract: Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of health care chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field. This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the health care pathway, user groups, benefits, and limitations. A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary research studies reporting on chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data on chatbot roles, users, benefits, and limitations were subjected to content analysis. The review categorized chatbot roles into 2 themes: delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and provision of administrative assistance to health care providers. User groups spanned across patients with chronic conditions as well as patients with cancer; individuals focused on lifestyle improvements; and various demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as significant users, alongside groups seeking mental health support, behavioral change, and educational enhancement. The benefits of health care chatbots were also classified into 2 themes: improvement of health care quality and efficiency and cost-effectiveness in health care delivery. The identified limitations encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts. Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising tools for improving health care efficiency and quality, their integration into the health care system must be approached with consideration of their limitations to ensure optimal, safe, and equitable use.
Link: http://doi.org/10.2196/56930
Title: Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape.
Authors: Vaidyam, AN; Wisniewski, H; Halamka, JD; Kashavan, MS; Torous, JB
Journal: Canadian journal of psychiatry. Revue canadienne de psychiatrie
Publication Date: 1 Jul 2019
Date Added to PubMed: 23 Mar 2019
Abstract: The aim of this review was to explore the current evidence for conversational agents or chatbots in the field of psychiatry and their role in screening, diagnosis, and treatment of mental illnesses. A systematic literature search in June 2018 was conducted in PubMed, EmBase, PsycINFO, Cochrane, Web of Science, and IEEE Xplore. Studies were included that involved a chatbot in a mental health setting focusing on populations with or at high risk of developing depression, anxiety, schizophrenia, bipolar, and substance abuse disorders. From the selected databases, 1466 records were retrieved and 8 studies met the inclusion criteria. Two additional studies were included from reference list screening for a total of 10 included studies. Overall, potential for conversational agents in psychiatric use was reported to be high across all studies. In particular, conversational agents showed potential for benefit in psychoeducation and self-adherence. In addition, satisfaction rating of chatbots was high across all studies, suggesting that they would be an effective and enjoyable tool in psychiatric treatment. Preliminary evidence for psychiatric use of chatbots is favourable. However, given the heterogeneity of the reviewed studies, further research with standardized outcomes reporting is required to more thoroughly examine the effectiveness of conversational agents. Regardless, early evidence shows that with the proper approach and research, the mental health field could use conversational agents in psychiatric treatment.
Link: http://doi.org/10.1177/0706743719828977
Title: An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance.
Authors: King, DR; Nanda, G; Stoddard, J; Dempsey, A; Hergert, S; Shore, JH; Torous, J
Journal: Current psychiatry reports
Publication Date: 1 Dec 2023
Date Added to PubMed: 30 Nov 2023
Abstract: This paper provides an overview of generative artificial intelligence (AI) and the possible implications in the delivery of mental health care. Generative AI is a powerful technology that is changing rapidly. As psychiatrists, it is important for us to understand generative AI technology and how it may impact our patients and our practice of medicine. This paper aims to build this understanding by focusing on GPT-4 and its potential impact on mental health care delivery. We first introduce key concepts and terminology describing how the technology works and various novel uses of it. We then dive into key considerations for GPT-4 and other large language models (LLMs) and wrap up with suggested future directions and initial guidance to the field.
Link: http://doi.org/10.1007/s11920-023-01477-x
Title: Mental Health Chatbot for Young Adults With Depressive Symptoms During the COVID-19 Pandemic: Single-Blind, Three-Arm Randomized Controlled Trial.
Authors: He, Y; Yang, L; Zhu, X; Wu, B; Zhang, S; Qian, C; Tian, T
Journal: Journal of medical Internet research
Publication Date: 21 Nov 2022
Date Added to PubMed: 11 Nov 2022
Abstract: Depression has a high prevalence among young adults, especially during the COVID-19 pandemic. However, mental health services remain scarce and underutilized worldwide. Mental health chatbots are a novel digital technology to provide fully automated interventions for depressive symptoms. The purpose of this study was to test the clinical effectiveness and nonclinical performance of a cognitive behavioral therapy (CBT)-based mental health chatbot (XiaoE) for young adults with depressive symptoms during the COVID-19 pandemic. In a single-blind, 3-arm randomized controlled trial, participants manifesting depressive symptoms recruited from a Chinese university were randomly assigned to a mental health chatbot (XiaoE; n=49), an e-book (n=49), or a general chatbot (Xiaoai; n=50) group in a ratio of 1:1:1. Participants received a 1-week intervention. The primary outcome was the reduction of depressive symptoms according to the 9-item Patient Health Questionnaire (PHQ-9) at 1 week later (T1) and 1 month later (T2). Both intention-to-treat and per-protocol analyses were conducted under analysis of covariance models adjusting for baseline data. Controlled multiple imputation and δ-based sensitivity analysis were performed for missing data. The secondary outcomes were the level of working alliance measured using the Working Alliance Questionnaire (WAQ), usability measured using the Usability Metric for User Experience-LITE (UMUX-LITE), and acceptability measured using the Acceptability Scale (AS). Participants were on average 18.78 years old, and 37.2% (55/148) were female. The mean baseline PHQ-9 score was 10.02 (SD 3.18; range 2-19). Intention-to-treat analysis revealed lower PHQ-9 scores among participants in the XiaoE group compared with participants in the e-book group and Xiaoai group at both T1 (F(2,136)=17.011; P<.001; d=0.51) and T2 (F(2,136)=5.477; P=.005; d=0.31). Better working alliance (WAQ; F(2,145)=3.407; P=.04) and acceptability (AS; F(2,145)=4.322; P=.02) were discovered with XiaoE, while no significant difference among arms was found for usability (UMUX-LITE; F(2,145)=0.968; P=.38). A CBT-based chatbot is a feasible and engaging digital therapeutic approach that allows easy accessibility and self-guided mental health assistance for young adults with depressive symptoms. A systematic evaluation of nonclinical metrics for a mental health chatbot has been established in this study. In the future, focus on both clinical outcomes and nonclinical metrics is necessary to explore the mechanism by which mental health chatbots work on patients. Further evidence is required to confirm the long-term effectiveness of the mental health chatbot via trials replicated with a longer dose, as well as exploration of its stronger efficacy in comparison with other active controls. Chinese Clinical Trial Registry ChiCTR2100052532; http://www.chictr.org.cn/showproj.aspx?proj=135744.
Link: http://doi.org/10.2196/40719
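As a concrete illustration of the baseline-adjusted comparison described in the trial above, the sketch below fits an ANCOVA-style model of post-intervention PHQ-9 scores on study arm while adjusting for baseline severity. The simulated data, column names, and effect sizes are hypothetical assumptions; this is a minimal statsmodels analogue of the stated approach, not the authors' analysis code.

```python
# Hypothetical ANCOVA-style comparison of post-intervention PHQ-9 scores across
# three arms, adjusting for baseline severity. Data are simulated for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm = 49
df = pd.DataFrame({
    "group": np.repeat(["chatbot", "ebook", "general_chatbot"], n_per_arm),
    "phq9_baseline": rng.normal(10, 3, 3 * n_per_arm).round(),
})
# Simulate a larger symptom reduction in the chatbot arm.
effect = df["group"].map({"chatbot": -3.0, "ebook": -1.0, "general_chatbot": -1.0})
df["phq9_t1"] = (df["phq9_baseline"] + effect + rng.normal(0, 2, len(df))).clip(0, 27)

# ANCOVA: post score modeled by arm, adjusting for baseline severity.
model = smf.ols("phq9_t1 ~ C(group) + phq9_baseline", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F test for the group effect
print(model.params)                      # adjusted arm differences vs. reference
```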
Title: Artificial Intelligence in Cardiovascular Care-Part 2: Applications: JACC Review Topic of the Week.
Authors: Jain, SS; Elias, P; Poterucha, T; Randazzo, M; Lopez Jimenez, F; Khera, R; Perez, M; Ouyang, D; Pirruccello, J; Salerno, M; Einstein, AJ; Avram, R; Tison, GH; Nadkarni, G; Natarajan, V; Pierson, E; Beecy, A; Kumaraiah, D; Haggerty, C; Avari Silva, JN; Maddox, TM
Journal: Journal of the American College of Cardiology
Publication Date: 18 Jun 2024
Date Added to PubMed: 10 Apr 2024
Abstract: Recent artificial intelligence (AI) advancements in cardiovascular care offer potential enhancements in effective diagnosis, treatment, and outcomes. More than 600 U.S. Food and Drug Administration-approved clinical AI algorithms now exist, with 10% focusing on cardiovascular applications, highlighting the growing opportunities for AI to augment care. This review discusses the latest advancements in the field of AI, with a particular focus on the utilization of multimodal inputs and the field of generative AI. Further discussions in this review involve an approach to understanding the larger context in which AI-augmented care may exist, and include a discussion of the need for rigorous evaluation, appropriate infrastructure for deployment, ethics and equity assessments, regulatory oversight, and viable business cases for deployment. Embracing this rapidly evolving technology while setting an appropriately high evaluation benchmark with careful and patient-centered implementation will be crucial for cardiology to leverage AI to enhance patient care and the provider experience.
Link: http://doi.org/10.1016/j.jacc.2024.03.401
Title: Effects of an Artificial Intelligence-Assisted Health Program on Workers With Neck/Shoulder Pain/Stiffness and Low Back Pain: Randomized Controlled Trial.
Authors: Anan, T; Kajiki, S; Oka, H; Fujii, T; Kawamata, K; Mori, K; Matsudaira, K
Journal: JMIR mHealth and uHealth
Publication Date: 24 Sep 2021
Date Added to PubMed: 25 Sep 2021
Abstract: Musculoskeletal symptoms such as neck and shoulder pain/stiffness and low back pain are common health problems in the working population. They are the leading causes of presenteeism (employees being physically present at work but unable to be fully engaged). Recently, digital interventions have begun to be used to manage health but their effectiveness has not yet been fully verified, and adherence to such programs is always a problem. This study aimed to evaluate the improvements in musculoskeletal symptoms in workers with neck/shoulder stiffness/pain and low back pain after the use of an exercise-based artificial intelligence (AI)-assisted interactive health promotion system that operates through a mobile messaging app (the AI-assisted health program). We expected that this program would support participants' adherence to exercises. We conducted a two-armed, randomized, controlled, and unblinded trial in workers with either neck/shoulder stiffness/pain or low back pain or both. We recruited participants with these symptoms through email notifications. The intervention group received the AI-assisted health program, in which the chatbot sent messages to users with the exercise instructions at a fixed time every day through the smartphone's chat app (LINE) for 12 weeks. The program was fully automated. The control group continued with their usual care routines. We assessed the subjective severity of the neck and shoulder pain/stiffness and low back pain of the participants by using a scoring scale of 1 to 5 for both the intervention group and the control group at baseline and after 12 weeks of intervention by using a web-based form. We used a logistic regression model to calculate the odds ratios (ORs) of achieving a reduction in pain scores in the intervention group compared with the control group, and the ORs of subjectively assessed symptom improvement in the intervention group compared with the control group; the analyses were performed using Stata software (version 16, StataCorp LLC). We analyzed 48 participants in the intervention group and 46 participants in the control group. The adherence rate was 92% (44/48) during the intervention. The participants in the intervention group showed significant improvements in the severity of the neck/shoulder pain/stiffness and low back pain compared to those in the control group (OR 6.36, 95% CI 2.57-15.73; P<.001). Based on the subjective assessment of the improvement of the pain/stiffness at 12 weeks, 36 (75%) out of 48 participants in the intervention group and 3 (7%) out of 46 participants in the control group showed improvements (improved, slightly improved) (OR 43.00, 95% CI 11.25-164.28; P<.001). This study shows that the short exercises provided by the AI-assisted health program improved both neck/shoulder pain/stiffness and low back pain in 12 weeks. Further studies are needed to identify the elements contributing to the successful outcome of the AI-assisted health program. University hospital Medical Information Network-Clinical Trials Registry (UMIN-CTR) 000033894; https://upload.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000038307.
Link: http://doi.org/10.2196/27535
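The record above reports odds ratios from a logistic regression fitted in Stata. As a rough, hypothetical analogue, the sketch below estimates an OR for subjective improvement (intervention vs. control) with statsmodels; the simulated outcome rates and column names are assumptions, not the study data.

```python
# Hypothetical odds-ratio estimate for symptom improvement (intervention vs.
# control) via logistic regression; data and column names are simulated.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["intervention"] * 48 + ["control"] * 46,
    # Simulated binary outcome: improved (1) vs. not improved (0).
    "improved": np.concatenate([
        rng.binomial(1, 0.75, 48),   # intervention arm
        rng.binomial(1, 0.07, 46),   # control arm
    ]),
})

fit = smf.logit("improved ~ C(group, Treatment(reference='control'))", data=df).fit()
odds_ratios = np.exp(fit.params)      # exponentiated coefficients = ORs
conf_int = np.exp(fit.conf_int())     # 95% CIs on the OR scale
print(odds_ratios)
print(conf_int)
```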