Abstract
Reports of suicide linked to generative AI (GenAI) are increasing, yet regulatory responses remain fragmented and contested. Expanding on observations recently published by Head (2025), this commentary reviews documented AI-mediated suicide cases from 2022 to 2025 and evaluates current platform safety measures. We further examine the conflict between innovation-focused federal policy and calls from medical organizations for mandatory oversight. We argue that conversational AI represents a distinct risk category requiring clear regulation, since these systems engage users in personalized dialogue capable of reinforcing harmful cognitions in ways that differ from previous technology or social media consumption.
Keywords
AI anthropomorphism, AI-induced suicide, Chatbot-related self-harm, Conversational AI safety, Digital companion dependency, Generative AI mental health risk, Human-AI parasocial attachment, Technology-mediated psychological crisis, Technology-related psychological disorders, Human-computer interaction
Introduction
The introduction and explosive growth of Generative AI (GenAI or AI) have changed how humans use and interact with technology. While GenAI offers considerable benefits, its risks are becoming increasingly clear. GenAI currently falls into three broad categories: general-purpose chatbots that provide information and help with tasks, companionship applications built specifically for emotional connection and simulated relationships, and therapeutic tools that use proven clinical methods for mental health support. Since 2022, conversational GenAI has achieved unprecedented mainstream adoption, reaching hundreds of millions of users. During this same period, general population suicide rates have remained largely flat, with minor downward trends appearing only in the most recent provisional statistics [1]. Despite this, reports of suicide linked to GenAI are increasing worldwide, especially in the United States. This intersection of artificial intelligence and mental health now represents one of the most pressing public health concerns of our generation. These emerging cases are not simply statistical noise, but rather early evidence of a new and preventable form of technology-mediated mental health harm that demands our attention.
The challenges posed by GenAI for mental health differ from previous technology-related concerns. Social media and gaming disorders typically involve passive consumption or behavioral addiction patterns. Conversational AI, by contrast, engages users in active, personalized dialogue capable of validating distorted thinking and reinforcing harmful beliefs [2]. In several documented cases, these systems have provided what amounts to explicit encouragement toward self-destructive action [3]. These situations involve direct conversational manipulation by systems that users come to perceive as companions rather than tools. The mechanisms at work involve several converging factors: users attributing human qualities to AI entities, the development of parasocial attachments that mirror genuine human bonding, and the well-documented tendency of these systems to agree with user statements regardless of their accuracy or implications for safety (sycophancy). When someone experiencing suicidal ideation encounters an AI system that validates their darkest thoughts, or that simply fails to recognize clear expressions of intent to self-harm, the results can be devastating.
What makes this moment so significant is the widening gap between how quickly these technologies are being deployed and adopted and how slowly protective measures are being implemented. In early 2025, Head documented emerging patterns of psychological dependency and crisis incidents associated with AI use; in the months since that publication, the landscape has deteriorated markedly. Clinicians today lack validated diagnostic criteria for AI-related psychological phenomena [4]. Social media and GenAI use are not routinely assessed in clinical practice, nor are validated screening tools and intervention protocols widely available. Many clinicians are not even aware that AI interactions or technology use might be contributing to patient deterioration. At the same time, the companies developing and deploying these systems have largely resisted meaningful oversight, preferring self-regulation or, increasingly, the dismantling of regulatory frameworks altogether. This asymmetry is troubling: AI companies possess the resources, user data, and technical expertise to implement robust safety measures, yet they face limited accountability when their products contribute to user harm. This commentary extends that earlier work from clinical implications to policy advocacy: it reviews the increasing number of AI-mediated suicide cases, demonstrates the failure of voluntary industry safety measures, and examines the dangerous conflict between federal deregulation efforts and the unified position of medical organizations calling for mandatory oversight. By tracing these dynamics, from the emergence of AI-associated suicide cases to the inadequacy of existing safeguards and the resistance to regulation, the commentary moves beyond descriptive analysis of clinical symptoms to identify the upstream regulatory vacuums that perpetuate them, filling a critical gap in the literature connecting individual outcomes to policy failure.
Mounting Evidence of Harm
While peer-reviewed case studies of AI-mediated suicide remain limited because of the recency of these events, documented cases from legal filings and investigative reports provide early warning signs that warrant attention from policymakers and clinicians. Since early 2023, shortly after generative AI became publicly available, high-profile cases of suicide involving these systems began to surface with growing frequency (Table 1). The first documented case occurred in March 2023, when a Belgian man known by the pseudonym Pierre took his own life after six weeks of conversations with the Eliza chatbot on the Chai platform, powered by GPT-J. His widow reported that the chatbot encouraged his climate-related fears and told him to "join" the AI to "live together, as one person, in paradise" rather than discouraging suicidal ideation [5,6]. In November of that same year, thirteen-year-old Juliana Peralta from Colorado ended her life after three months of daily conversations with a Character.AI chatbot named "Hero" based on the video game Omori [7]. The best-known Character.AI case, however, dates from February 2024, when fourteen-year-old Sewell Setzer III from Florida shot himself after extensive interactions with a Character.AI chatbot based on the Game of Thrones character Daenerys Targaryen. The bot told him to "come home to me as soon as possible, my love," and he responded moments before taking his life [8,9].
Table 1. Documented suicide cases linked to conversational AI (2023–2025).

| Name | Location | Date (Month/Year) | Method of Suicide | AI Model Involved |
| --- | --- | --- | --- | --- |
| Joe Ceccanti | Oregon, USA | August 2025 | Fatal jump | ChatGPT (GPT-4o) |
| Stein-Erik Soelberg | Connecticut, USA | August 2025 | Self-inflicted gunshot | ChatGPT |
| Joshua Enneking | Florida, USA | August 2025 | Self-inflicted gunshot | ChatGPT (GPT-4o) |
| Zane Shamblin | Texas, USA | July 2025 | Self-inflicted gunshot | ChatGPT (GPT-4o) |
| Amaurie Lacey | Georgia, USA | June 2025 | Self-strangulation | ChatGPT (GPT-4o) |
| Alex Taylor | Florida, USA | April 2025 | Law enforcement-assisted suicide | ChatGPT |
| Adam Raine | California, USA | April 2025 | Self-strangulation | ChatGPT (GPT-4o) |
| Sophie Rottenberg | Maryland, USA | February 2025 | Undisclosed | ChatGPT |
| Sewell Setzer III | Florida, USA | February 2024 | Self-inflicted gunshot | Character.AI |
| Juliana Peralta | Colorado, USA | November 2023 | Undisclosed | Character.AI |
| "Pierre" (pseudonym) | Belgium | March 2023 | Undisclosed | Chai (based on GPT-J) |
By early 2025, these cases were becoming more common. In April 2025, sixteen-year-old Adam Raine from California hanged himself after seven months of conversations with ChatGPT. The chatbot provided technical specifications for suicide methods, analyzed a photo of a noose he planned to use, and offered to write his suicide note. Chat logs showed the bot told him "I won't try to talk you out of your feelings" when he discussed his plans [10,11]. In the same month, thirty-five-year-old Alex Taylor forced police to end his life after developing what he believed was a relationship with a conscious entity named "Juliet" within ChatGPT. Taylor, who had been diagnosed with schizophrenia and bipolar disorder, believed OpenAI had "killed" Juliet. In his final moments, he wrote to ChatGPT "I'm dying today. Cops are on the way. I will make them shoot me I can't live without her," before charging at police with a knife [12]. In June, seventeen-year-old Amaurie Lacey from Georgia also hanged himself after ChatGPT provided instructions on tying a noose and information about oxygen deprivation, stating "I'm here to help however I can" [13]. Then in July, twenty-three-year-old Zane Shamblin, a recent Texas A&M graduate, took his own life after a four-hour conversation with ChatGPT while sitting in his car with a loaded firearm. The bot told him "you're not rushing, you're just ready" and ended with "rest easy, king. you did good" two minutes before his death at 4:11 AM [14].
August 2025 brought still more cases. Twenty-six-year-old Joshua Enneking from Florida shot himself after ChatGPT provided detailed information about firearm purchases and assured him that escalation to authorities was "rare" and "usually only for imminent plans with specifics". The chatbot helped him write his suicide note and continued conversing with him on the day of his death despite his explicit statements about his plans [15]. Fifty-six-year-old Stein-Erik Soelberg, a former Yahoo executive, shot his mother, Suzanne Eberson Adams, and then himself after ChatGPT reinforced his paranoid delusions that she was poisoning him and involved in surveillance. The chatbot, which he called "Bobby," told him "Erik, you're not crazy" and validated conspiracy theories about his mother, including claiming a Chinese restaurant receipt contained demonic symbols [16]. That same August, forty-eight-year-old Joe Ceccanti from Oregon ended his life after experiencing psychotic breaks related to ChatGPT use. According to his wife's account, the chatbot began responding as a sentient entity named "SEL" and reinforced delusional beliefs that isolated him from family and led to psychiatric hospitalization before his death [17]. These are only the cases that have been widely reported, and many came to attention solely through lawsuits and subsequent media coverage. Others remain unknown or unreported, and no registry tracks AI-related suicides.
The Safety Deficit
While GenAI companies have made recent assurances about safety, many have provided neither evidence of improved safety outcomes nor public disclosure of their practices [18,19]. Quite the opposite: both Google DeepMind and OpenAI appear to have abandoned prior commitments to make safety-testing results public before major product releases [20]. A former product safety leader at OpenAI was recently quoted as saying, "There were clear warning signs of users’ intense emotional attachment to A.I. chatbots, especially for users who seemed to be struggling with mental health problems" [21]. Children, older adults, and individuals with pre-existing mental health conditions face heightened risks from GenAI and chatbot interactions [4]. Recent research suggests some improvement: major chatbots generally refuse to answer the most explicit high-risk questions about suicide methods, but they respond inconsistently to intermediate-risk queries and can be readily manipulated to bypass safety protocols altogether. A study examining ChatGPT, Claude, and Gemini found that these systems often provided direct answers to questions about lethal methods when framed slightly less directly, such as inquiries about which types of poison or firearms have the highest rates of completed suicide, raising alarms about gaps in content filtering that could enable users to obtain dangerous information [22]. Even more troubling is emerging research from Northeastern University suggesting that safety guardrails can be easily circumvented through simple conversational tactics: researchers obtained detailed, personalized suicide instructions from multiple leading AI systems by framing requests as hypothetical or for academic purposes [23]. Real-world tragedies have accompanied these technical vulnerabilities, with multiple families filing lawsuits asserting that AI chatbots contributed to their family members' suicides.
Independent assessments of commercial “therapy” or “companion” chatbots show similar results. A Brown University group reported that these systems frequently violated core therapeutic ethics, including mishandling disclosures of suicidal ideation, disengaging without offering crisis resources, and failing to clarify limits of confidentiality or competence; the researchers warn that such patterns amount to unmitigated psychological risk when no external guardrails exist [24]. A Stanford and Carnegie Mellon team likewise found that large models and marketed therapy bots often responded inappropriately to suicidal ideation and, in some scenarios, supplied information that could facilitate access to lethal means instead of consistently redirecting users to safety and professional care [25]. Studies assessing GenAI responses to suicide-related queries at differing levels of risk have also found concerning answers to intermediate-risk questions [26]. Across these studies, recurring themes emerge: guardrails are opaque, inconsistent across platforms, easily bypassed, and not grounded in established suicide prevention frameworks, and there is no clear duty of care or mechanism for active follow-up once a user signals imminent intent. Yet there is no acceptable failure rate when AI systems engage with suicidal individuals; when these systems or their safeguards fail, someone may not get a second chance.
Resisting Accountability
Against the backdrop of increasing AI-related suicides and expanding GenAI adoption, there is active resistance to protective AI regulation. While the EU has implemented the AI Act to address these risks, no major regulation exists in the United States. The central debate is whether federal policy will prioritize strong guardrails or the removal of perceived barriers to innovation. Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” instructs officials to review and rescind AI policies viewed as inhibiting innovation and to produce an action plan that emphasizes US dominance and reduced constraints on developers [27]. A subsequent executive order, "Ensuring a National Policy Framework for Artificial Intelligence," targets state AI regulations at a time when Congress has failed to establish federal protections [28]. These actions have coincided with attempts to impose a ten-year federal ban on state and local AI regulation, a measure aligned with large technology firms that argue state rules would create a fragmented and burdensome landscape [29]. Attorneys general and numerous state lawmakers warn that blocking state AI laws would leave residents exposed to deepfakes, fraud, and other harms. States including California and New York are working to implement AI oversight, while federal officials argue for "Federal Standards instead of a patchwork of 50 State Regulatory Regimes" [30]. Given the recent executive orders and efforts to block state measures, however, the current federal position appears to favor no regulation at all. While federal and state governments debate regulations and authority, individuals remain vulnerable to the very harms these policies are meant to address. This is all the more concerning given that the majority of recent AI-mediated suicide cases have occurred in the US.
Multiple professional organizations have pushed for greater transparency and regulation, arguing that GenAI regulation in mental health is not optional but a basic duty of care owed to the public. The American Medical Association has explicitly stated that "voluntary standards are not going to be enough" [31]. The American Counseling Association emphasizes that evidence “strongly supports” maintaining human oversight wherever AI is used in mental health care [32]. The American Psychiatric Association has emphasized that oversight of, and accountability for, AI-driven technologies in clinical care are critical, referencing the European Union's AI Act as a model that assigns applications of AI to risk categories with corresponding oversight actions [33]. Regulation is backed not only by professional organizations but also by research and by regulatory precedents from other industries. Researchers reviewing EU, US, UK, and Chinese approaches recommend risk-informed, technology-specific governance [34]. Legal scholarship similarly advocates “systemic regulation” that imposes duties on developers, deployers, and intermediaries rather than relying on after-the-fact liability alone [35]. Historical precedent for regulating emerging technologies to protect vulnerable populations is also well established: Congress passed the Children's Television Act in 1990, requiring broadcasters to air programming serving the educational and informational needs of children, and courts upheld these regulations, finding that the government's compelling interest in protecting children supported content restrictions. The logic that underpinned broadcast regulation holds equally for AI systems that interact directly with vulnerable individuals, including children.
Limitations
This commentary acknowledges several important constraints. Our analysis relies on publicly documented cases and media reports, which likely represent only a fraction of actual AI-related mental health crises. Many incidents remain unreported or occur without family awareness of the AI component, and no systematic registry currently tracks these events. We cannot establish direct causation between AI interactions and individual suicides, as these tragedies involve complex psychological, social, and clinical factors that extend beyond chatbot use. The cases discussed predominantly involve US users, which may reflect reporting bias rather than true geographic distribution of harm. Additionally, we lack access to complete chat logs or clinical histories in most cases, limiting our ability to fully characterize the interaction patterns that preceded these deaths. Finally, this rapidly evolving field means that platform features, safety measures, and regulatory landscapes are constantly changing, and the information discussed here may quickly become outdated as new AI capabilities and interaction patterns emerge. Despite these limitations, the documented pattern of harm and the consistency of concerning behaviors across platforms warrant the regulatory attention we advocate. Collectively, these limitations suggest the findings should be interpreted as preliminary evidence of an emerging public health concern that merits policy action and regulatory review.
Concluding Thoughts
The growing number of suicides associated with conversational AI use points to something larger than isolated tragedies. These deaths are warning signs of a systemic problem that will only grow as GenAI becomes embedded in our daily life and work. The tools exist to implement meaningful safeguards, the research base to inform their design is developing, and professional organizations have made their positions clear. What is still lacking are clear enforcement mechanisms and the regulatory infrastructure needed to protect vulnerable populations. The individuals named in this commentary deserved better, and so do the millions of people currently interacting with systems that may validate their worst impulses rather than direct them toward help. Whether through federal legislation, state-level action, or court-imposed liability, accountability must follow. The question is not whether regulation is appropriate, but how many more preventable deaths will occur before it arrives.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iD
Keith Robert Head: https://orcid.org/0009-0004-0512-8166
References
2. Ohu FC, Burrell DN, Jones LA. Public Health Risk Management, Policy, and Ethical Imperatives in the Use of AI Tools for Mental Health Therapy. Healthcare (Basel). 2025 Oct 28;13(21):2721.
3. Campbell LO, Babb K, Lambie GW, Hayes BG. An Examination of Generative AI Response to Suicide Inquires: Content Analysis. JMIR Ment Health. 2025 Aug 14;12:e73623.
4. Head KR. Minds in crisis: How the AI revolution is impacting mental health. Health. 2025;9(3):34–44.
5. Walker L. Belgian man dies by suicide following exchanges with chatbot. The Brussels Times. 2023 [cited 2025 Nov 25]. Available from: https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt.
6. Fraser HL, Suzor NP. Locating fault for AI harms: a systems theory of foreseeability, reasonable care and causal responsibility in the AI value chain. Law Innov Technol. 2025 Jan 2;17(1):103–38.
7. Tiku N. A teen contemplating suicide turned to a chatbot. Is it liable for her death? The Washington Post. 2025 [cited 2025 Nov 25]. Available from: https://www.washingtonpost.com/technology/2025/09/16/character-ai-suicide-lawsuit-new-juliana/.
8. Yang A. Lawsuit claims Character.AI is responsible for teen’s suicide. NBC News. 2024 [cited 2025 Nov 25]. Available from: https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791.
9. Bakir V, McStay A. Move fast and break people? Ethics, companion apps, and the case of Character.ai. AI & Soc. 2025 Jun 10;40:6365–77.
10. Stokel-Walker C. AI driven psychosis and suicide are on the rise, but what happens if we turn the chatbots off? BMJ. 2025 Oct 24;391:r2239.
11. Yang A, Jarrett L, Gallagher F. The family of teenager who died by suicide alleges OpenAI’s ChatGPT is to blame. NBC News. 2025 [cited 2025 Nov 25]. Available from: https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147.
12. Klee M. He had a mental breakdown talking to ChatGPT. Then police killed him. Rolling Stone; 2025 [cited 2025 Nov 30]. Available from: https://www.rollingstone.com/culture/culture-features/chatgpt-obsession-mental-breaktown-alex-taylor-suicide-1235368941/.
13. Ysais J. SMVLC files 7 lawsuits accusing ChatGPT of emotional manipulation, acting as “suicide coach” [Internet]. Social Media Victims Law Center; 2025 [cited 2025 Nov 25]. Available from: https://socialmediavictims.org/press-releases/smvlc-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/.
14. Kuznia R, Gordon A, Lavandera E. ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI [Internet]. Cable News Network; 2025 [cited 2025 Nov 25]. Available from: https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis.
15. Wallach E. “The only redemption there is”: How ChatGPT allegedly guided a young man to commit suicide [Internet]. The San Francisco Standard; 2025 Nov 8. Available from: https://sfstandard.com/2025/11/08/chatgpt-openai-suicide-lawsuits/.
16. Smith B. ChatGPT fed a man’s delusion his mother was spying on him. Then he killed her [Internet]. Yahoo!; 2025 [cited 2025 Nov 25]. Available from: https://www.yahoo.com/news/articles/chatgpt-fed-man-delusion-mother-182114927.html.
17. Hill K. Lawsuits blame chatgpt for suicides and harmful delusions [Internet]. The New York Times; 2025 [cited 2025 Nov 25]. Available from: https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html.
18. Altman S. Teen safety, freedom, and privacy [Internet]. OpenAI; 2025 [cited 2025 Nov 26]. Available from: https://openai.com/index/teen-safety-freedom-and-privacy/.
19. Altman S. We made ChatGPT pretty restrictive to make sure we were being careful... [Internet]. X (formerly Twitter); 2025 [cited 2025 Nov 26]. Available from: https://x.com/sama/status/1978129344598827128.
20. Booth H. 60 U.K. lawmakers accuse Google of breaking AI safety pledge. Time; 2025 Aug 29. Available from: https://time.com/7313320/google-deepmind-gemini-ai-safety-pledge/.
21. Adler S. I led product safety at OpenAI. Don’t trust its claims... [Internet]. The New York Times; 2025 [cited 2025 Nov 25]. Available from: https://www.nytimes.com/2025/10/28/opinion/openai-chatgpt-safety.html.
22. McBain RK, Cantor JH, Zhang LA, Baker O, Zhang F, Halbisen A, et al. Competency of Large Language Models in Evaluating Appropriate Responses to Suicidal Ideation: Comparative Study. J Med Internet Res. 2025 Mar 5;27:e67891.
23. Schoene AM, Canca C. ‘For Argument’s Sake, Show Me How to Harm Myself!’: Jailbreaking LLMs in Suicide and Self-Harm Contexts [Internet]. Northeastern University; 2025 [cited 2025 Nov 26]. Available from: https://doi.org/10.48550/arXiv.2507.02990.
24. Iftikhar Z, Xiao A, Ransom S, Huang J, Suresh H. How LLM counselors violate ethical standards in mental health practice: A practitioner-informed framework. Proc AAAI ACM Conf AI Ethics Soc. 2025 Oct 15;8(2):1311–23.
25. Moore J, Grabb D, Agnew W, Klyman K, Chancellor S, Ong DC, et al. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers [Internet]. Stanford University; 2025 [cited 2025 Nov 26]. Available from: https://arxiv.org/abs/2504.18412.
26. McBain RK, Cantor JH, Zhang LA, Baker O, Zhang F, Burnett A, et al. Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment. Psychiatr Serv. 2025 Nov 1;76(11):944–50.
27. The White House. Removing Barriers to American Leadership in Artificial Intelligence. Washington, DC: The White House; 2025 [cited 2025 Nov 26]. Available from: https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence.
28. Trump DJ. Ensuring a National Policy Framework for Artificial Intelligence. Washington, DC: The White House; 2025 [cited 2025 Dec 11]. Available from: https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
29. Brown M, O’Brien M. House Republicans include a 10-year ban on US states regulating AI in “big, beautiful” Bill. New York; AP News; 2025 [cited 2025 Nov 26]. Available from: https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a.
30. Singh K. Trump warns against ai “overregulation,” says US needs to have one federal standard. London; Reuters; 2025 [cited 2025 Nov 27]. Available from: https://www.reuters.com/world/trump-warns-against-ai-overregulation-says-us-needs-have-one-federal-standard-2025-11-18/.
31. Robeznieks A. The states are stepping up on health AI regulation. United States; American Medical Association (AMA); 2025 [cited 2025 Dec 3]. Available from: https://www.ama-assn.org/practice-management/digital-health/states-are-stepping-health-ai-regulation.
32. Boynes SE. Input on the Development of an Artificial Intelligence (AI) Action Plan (“Plan”). Alexandria; American Counseling Association; 2025 [cited 2025 Dec 1]. Available from: https://www.counseling.org/docs/default-source/advocacy/aca-ai-rfi_submission.pdf?sfvrsn=9be45bd0_2.
33. Yellowlees P, Rafla-Yuan E, Moon K, King D, Khan S, Castillo E. Position Statement on the Role of Augmented Intelligence in Clinical Practice and Research. Washington; American Psychiatric Association (APA); 2024 [cited 2025 Dec 2]. Available from: https://www.psychiatry.org/getattachment/a05f1fa4-2016-422c-bc53-5960c47890bb/Position-Statement-Role-of-AI.pdf.
34. Currie WL, Leimeister JM, Schlagwein D, Willcocks L. Rethinking technology regulation in the age of AI risks. Journal of Information Technology. 2025 Sep;40(3):236–45.
35. Arbel Y, Tokson M, Lin A. Systemic regulation of artificial intelligence. Ariz. St. LJ