New Zealand Law Students' Journal (2025)


De Jongh, Jonathan --- "Who bears the cost of artificial intelligence harms in Aotearoa New Zealand healthcare?" [2024] NZLawStuJl 6; (2024) 5 NZLSJ 95

Last Updated: 28 February 2025

Who Bears the Cost of Artificial Intelligence Harms in Aotearoa New Zealand Healthcare?

JONATHAN DE JONGH*

Abstract—Healthcare has always been a primary locus for technological innovation, and artificial intelligence is no exception. As artificial intelligence proliferates in New Zealand healthcare, the law must be equipped to respond when these systems cause harm to patients. The law should continue eliminating tortious liability and distributing the cost collectively on society. In the select cases where liability persists, the costs should be placed on health professionals and healthcare providers.

The Accident Compensation Act 2001 governs New Zealand’s collective accident compensation scheme. The scheme provides compensation for certain accidental personal injuries, including medical harms. However, it is uncertain whether such harms arising from treatment from artificial intelligence could fall within the Accident Compensation Act’s cover provisions. Reform should refocus statutory cover for treatment injury on the service provided to the patient, rather than the party providing the service, so the legal status of the artificial intelligence system does not prevent cover.

Where the Accident Compensation Act does not operate, the law of medical negligence provides recourse for patients. In these scenarios, liability and cost should both be attributed to the supervising health professional and the healthcare provider employing that professional. These parties have the assets available to compensate the patient, and there are established common law principles available to hold them liable. In contrast, artificial intelligence systems cannot currently be sued nor own assets to use for compensation.

* LLB(Hons), BSc Auck. Solicitor, MinterEllisonRuddWatts, Auckland. With tremendous thanks to my supervisor, Dr Joshua Yuvaraj, for providing the inspiration for this topic and supporting the article with his guidance and wisdom, and to Dr Christopher Boniface, whose work was an extensive source of knowledge to draw from.

© The Author 2024. Published by New Zealand Law Students’ Journal Ltd. (2024) 5 NZLSJ 95

De Jongh • Who Bears the Cost of AI Harms in New Zealand Healthcare?

These collective reforms and preferred approaches will protect against the legal perils of artificial intelligence harms, while encouraging innovation to reap the benefits of these technologies.

I INTRODUCTION

“We are all, by any practical definition of the words, foolproof and incapable of error.” – HAL 9000, Artificially Intelligent Supercomputer1

Despite the visionary musings of Stanley Kubrick and Arthur C Clarke, an all-powerful, transcendent supercomputer was not achieved by 2001. Even in 2024, the prospect of this kind of machine remains in the realm of science fiction. Indeed, in their current form, artificial intelligence systems have proven they are prone to mistake and capable of error, as any technology is. The known and unknown risks of advanced artificial intelligence technologies have prompted alarm even within the sector itself.2 These technologies’ capabilities are not features of a distant dystopian setting like 2001: A Space Odyssey—the future has already arrived.3 Pressures like healthcare workforce shortages and the threat (and fruition) of global pandemics have accelerated artificial intelligence integration in healthcare. In a world of developer error and imperfect data, artificial intelligence systems in healthcare will inevitably err and harm patients. The law of compensation for treatment harm must be equipped to meet the consequences of this widespread adoption of artificial intelligence systems.

This article examines the mechanisms of compensation for patient treatment injuries caused by artificial intelligence systems operating in

  1. Stanley Kubrick (dir) 2001: A Space Odyssey (film, 1968).
  2. An open letter to pause all giant AI experiments for at least six months until further regulatory measures are implemented includes signatories like tech magnates Elon Musk and Steve Wozniak, and over 30,000 other executives, professors, tech developers, entrepreneurs and citizens have also signed: Future of Life Institute “Pause Giant AI Experiments: An Open Letter” (22 March 2023) <futureoflife.org>. Calls were renewed in May 2023 by prolific artificial intelligence developers, saying artificial intelligence is an extinction-level threat: Center for AI Safety “Statement on AI Risk” <www.safe.ai>.
  3. While the adoption of artificial intelligence is in early stages in New Zealand, globally the revolution has already begun: AI Forum New Zealand Artificial Intelligence for Health in New Zealand: Hauora i te Atamai Iahiko (21 October 2019) [Artificial Intelligence for Health] at 17–21. See generally PricewaterhouseCoopers What doctor? Why AI and robotics will define New Health (June 2017); and Antonio Martinez-Millana and others “Artificial intelligence and its impact on the domains of universal health coverage, health emergencies and health promotion: An overview of systematic reviews” (2022) 166(104855) Int J Med Inform 1.


healthcare. The central questions driving this discussion are: who does and who should bear the cost of harm to patients by artificial intelligence, and how should that harm be addressed through legal reform? As with any treatment injury suffered in New Zealand since 1974,4 the first port of call for artificial intelligence treatment harm is the accident compensation scheme. This scheme distributes the cost of harm across society and should continue to do so with harm inflicted by artificial intelligence, with reforms addressing the scheme’s application to this novel technological context. Where cover is unavailable under the accident compensation scheme, the common law of medical negligence dictates which party is responsible for rectifying the harm. In these cases, the costs and liabilities should be borne by the presiding health professional or the healthcare provider utilising the artificial intelligence system, not the artificial intelligence system itself.5 New Zealand’s legal and cultural environment makes holding these systems personally liable an unsuitable and problematic option, at least for now.

II ARTIFICIAL INTELLIGENCE IN NEW ZEALAND HEALTHCARE

A What is Artificial Intelligence?

Cross-disciplinary discourse has only reached the consensus that there is no consensus on how to define artificial intelligence.6 On a broad view, artificial intelligence systems operate as large-scale data processors that develop informational and analytical functions mirroring cognitive functions in humans, including learning, problem solving, perception, reasoning and inferencing, and
  4. The Accident Compensation Act 1972 came fully into force on 1 April 1974 when the Accident Compensation Corporation was established: Accident Compensation Act Commencement Order 1973; and Accident Compensation Commencement Order (No 2) 1973.
  5. For clarity, in this article “health professional” refers generally to a human qualified medical or health practitioner; “registered health professional” refers to the specific definition contained in the accident compensation legislation discussed further in Part III; and “healthcare provider” refers to the larger hospital, clinic or other service providing the patient with care through its health professionals.
  6. Colin Gavaghan and others Government Use of Artificial Intelligence in New Zealand: Final Report on Phase 1 of the New Zealand Law Foundation’s Artificial Intelligence and Law in New Zealand Project (New Zealand Law Foundation, Wellington, 2019) at 5; Pei Wang “On Defining Artificial Intelligence” (2019) 10(2) JAGI 1 at 1; and David Kirsh “Foundations of AI: the big issues” (1991) 47 Artif Intell 3 at 3–4.


communicating.7 These functions compound to allow artificial intelligence machines to complete a broad range of tasks, including many healthcare-related treatments and procedures.8 The key aspects common to any definition are inherent in the name—the system must be artificial (with digital, software-based mechanisms) and replicate human intelligence capabilities.

While characteristically diverse, many artificial intelligence systems adopt machine learning—essentially operating as feedback loops of learning, driven by algorithms and underpinned by mathematical modelling.9 Artificial intelligence systems receive massive input datasets to train and form their algorithm: a series of computational instructions to generate desired outputs. Each output is a prediction based on detected patterns in the training data. With every correct prediction, as assessed either by a human in supervised learning or by the machine itself in unsupervised learning, the algorithm is updated for more accurate predictions.10
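The supervised feedback loop described above can be illustrated in a few lines of code. The following Python fragment is a minimal sketch, not any real clinical system: the data, function names and update rule are illustrative assumptions, with a single diagnostic threshold standing in for the “algorithm” and each human-labelled example nudging it toward more accurate predictions.

```python
# Illustrative sketch of a supervised-learning feedback loop.
# The "algorithm" is reduced to one threshold over a risk score;
# human-supplied labels (the supervision) correct its predictions.
# All names and data here are hypothetical.

def train_threshold(examples, threshold=0.5, step=0.05, epochs=20):
    """Update a diagnostic threshold from (risk_score, is_positive) pairs."""
    for _ in range(epochs):
        for score, is_positive in examples:
            predicted_positive = score >= threshold
            if predicted_positive and not is_positive:
                threshold += step   # false positive: raise the bar
            elif not predicted_positive and is_positive:
                threshold -= step   # missed case: lower the bar
    return threshold

def predict(score, threshold):
    """Apply the trained "algorithm" to a new presenting patient."""
    return score >= threshold

# Training data: risk scores with human-assessed labels.
training = [(0.9, True), (0.8, True), (0.3, False), (0.2, False)]
trained = train_threshold(training)
```

A real system replaces the single threshold with millions of learned parameters, but the structure — predict, compare against feedback, adjust — is the same loop the text describes.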

Artificial intelligence systems fall on a spectrum of autonomy based on power and function,11 from narrow or weak artificial intelligence (systems comprised of simple algorithms generating deterministic outputs) to general or strong artificial intelligence (systems developed to think, reason and act independently).12 A predictive algorithm assessing input patient data to provide a provisional diagnosis for a doctor to review and validate or reject is a largely non-autonomous system.13 A hypothetical (and yet unrealised) example of a

  7. BJ Copeland “artificial intelligence” (10 August 2024) Encyclopedia Britannica <www.britannica.com>; and Matthew U Scherer “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” at 360.

  8. Artificial Intelligence for Health, above n 3, at 13–14. For examples, see Robert H Chen and Chelsea Chen Artificial Intelligence: An Introduction for the Inquisitive Reader (Chapman & Hall, New York, 2022) at 21–26.
  9. Google Cloud “What is Artificial Intelligence (AI)?” <https://cloud.google.com>.
  10. MI Jordan and TM Mitchell “Machine learning: Trends, perspectives, and prospects” (2015) 349(6245) Science 255 at 257–259.
  11. Danielle S Bitterman, Hugo JWL Aerts and Raymond H Mak “Approaching autonomy in medical artificial intelligence” (2020) 2 Lancet Digital Health e447 at e447–e448.
  12. Some argue that realising strong artificial intelligence may not even be possible at all considering irreconcilable differences between the human and digital experience: see generally Ragnar Fjelland “Why general intelligence will not be realized” (2020) 7 Humanit Soc Sci Commun 10; and Adriana Braga and Robert K Logan “The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence” (2017) 8 Information 156.
  13. See for example Claire Felmingham and others “Improving skin cancer management with ARTificial intelligence: A pre-post intervention trial of an artificial intelligence system used


machine gathering information from a patient in a virtual clinic, processing treatment options and actioning next steps for the patient without a doctor’s involvement is a largely autonomous system. As autonomy is a spectrum, systems can have varying degrees of autonomy, with different aspects of functionality being independent or controlled/supervised by humans.14 For example, a robotic surgeon may take initial input instructions from the human surgeon as to the issue and what surgery is required, but may complete the surgery by itself with limited directions from the attending surgeon.15 Artificial intelligence system applications in New Zealand healthcare tend to be more non-autonomous.16

B Applications of Artificial Intelligence in Healthcare

The applications of artificial intelligence systems in healthcare are immense.17 As a key sector for improving wellbeing, productivity and quality of life, and as a data-driven practice itself, healthcare has always been a primary locus for artificial intelligence applications.18 Artificial intelligence healthcare systems can provide healthcare services from administrative to specialist levels with increased power and efficiency. The yielded benefits include: the automation of or assistance in labour; quicker and more accurate processing of high volumes of data; and expedited progress in new innovations to address the world’s most

as a diagnostic aid for skin cancer management in a real-world specialist dermatology setting” (2023) 88 JAAD 1138.

  14. Bitterman, Aerts and Mak, above n 11, at e448.
  15. This could fall somewhere within Levels 2–4 of the proposed six levels of autonomy for surgical robotics: see Guang-Zhong Yang and others “Medical robotics—Regulatory, ethical and legal considerations for increasing levels of autonomy” (2017) 2 Sci Robot 1 at 1; and Aleks Attanasio and others “Autonomy in Surgical Robotics” (2021) 4 Annu Rev Control Robot Auton Syst 651 at 661–668.
  16. Key use cases in the New Zealand context all involve supportive tools and systems, not fully autonomous healthcare service delivery systems: Artificial Intelligence for Health, above n 3, at 25 and following; and see Neil Jacobstein, Aaron Ibbotson and Andy Bowley Artificial Intelligence: A guide to AI and implications for New Zealand (Forsyth Barr, February 2022) at 62.
  17. Healthcare is one of two sectors with the greatest potential for disruption and revolution by artificial intelligence: PwC Sizing the prize: What’s the real value of AI for your business and how can you capitalise? (2017) at 6 and 11–12; and Artificial Intelligence for Health, above n 3, at 24.
  18. Kun-Hsing Yu, Andrew L Beam and Issac S Kohane “Artificial intelligence in healthcare” (2018) 2 Nat Biomed Eng 719 at 721.


pressing issues.19 Artificial intelligence has the potential to transform healthcare, for better or for worse, hence urgent calls for confronting its risks.20

Diagnosis and screening are prime candidates for automation. Artificial intelligence systems can use algorithms to analyse medical data in the same way a health professional would, scanning for patterns of symptomology that reveal injury, illness or predisposition to health issues. For example, a new artificial intelligence-assisted tool uses image processing capabilities to scan pictures of skin areas of concern, determine potential risks of melanoma and help speed up cancer diagnoses.21 By processing extensive sets of patient data and recognising the repeating patterns of illness and injury represented in that data, artificial intelligence systems can apply this trained algorithm to novel data—new presenting patients—and receive feedback on accuracy. Diagnoses can be made more quickly and, theoretically over time, with greater accuracy.22

In other areas, such as robot-assisted surgery, uptake has been slower. Contributing factors include the technological limitations of integration into out-of-date clinical environments, rising costs and supply chain pains, and scarce training opportunities available for using these technologies.23 Nevertheless, robot-assisted surgery has entered New Zealand healthcare, largely in urological, gynaecological, prostate and orthopaedic operations.24 Artificial intelligence only plays a supporting role in these surgeries, where doctors control the machinery with some autonomic input from the artificial intelligence for

  19. Royal Society Te Apārangi The age of Artificial Intelligence in Aotearoa (July 2019) at 4 and 8.
  20. Mangor Pedersen “AI has potential to revolutionise health care – but we must first confront the risk of algorithmic bias” (4 May 2023) The Conversation <https://theconversation.com>.
  21. 1News “New tech promises to cut wait times for skin cancer diagnoses” (1 January 2023) <www.1news.co.nz>.
  22. Olivier Elemento and others “Artificial intelligence in cancer research, diagnosis and therapy” (2021) 21 Nat Rev Cancer 747 at 750–751.
  23. Giri Krishnan and others “The acceptance and adoption of transoral robotic surgery in Australia and New Zealand” (2019) 13 J Robot Surg 301 at 305–306.
  24. See for example Urology Associates “Robotic Surgery” <www.urology.co.nz>; Wakefield Hospital “Robotic Hysterectomy Surgery” <https://wakefield.co.nz>; Jim Duthie “Prostate cancer treatment – Da Vinci robot assisted prostatectomy (RALP)” Prostate Matters <https://prostatematters.co.nz>; and MercyAscot “Orthopaedic Robotics to Assist with Knee Surgeries at MercyAscot Private Hospitals” <www.mercyascot.co.nz>.


precision.25 Autonomous robotic surgery has not yet occurred in New Zealand, but is under development overseas.26

Artificial intelligence can also assist health professionals with in-patient and out-patient care management. Where staff resources are limited, artificial intelligence systems can facilitate regular monitoring and information exchange with patients regarding medication and symptoms.27 Efficient triaging can help prioritise high-need patients, while low-level health issues can be filtered and addressed by artificial intelligence-recommended interventions.28

Algorithms used in healthcare have become even more accessible in Aotearoa New Zealand due to the necessities of the COVID-19 pandemic. Te Pokapū Hātepe o Aotearoa, the New Zealand Algorithm Hub, was formed in 2020 to collate medical algorithms developed from research and practice.29 New Zealand is the “first country, globally, to deploy a national solution of this kind” for everyone from health professionals to consumers to use.30 Consumers have greater access to artificial intelligence than ever with the launch of Chat-GPT, a generative artificial intelligence chatbot that offers, among other things, generalist health advice.31 Such accessibility may serve to increase the rate of adoption in all aspects of healthcare, underscoring the urgent need for action in this space.32

  25. Sandip Panesar and others “Artificial Intelligence and the Future of Surgical Robotics” (2019) 270 Ann Surg 223 at 223; and Artificial Intelligence for Health, above n 3, at 31.
  26. Artificial Intelligence for Health, above n 3, at 31.
  27. At 32; Cristiano André da Costa and others “Internet of Health Things: Toward intelligent vital signs monitoring in hospital wards” (2018) 89 Artif Intell Med 61; and Samer Ellahham and Nour Ellahham “Use of Artificial Intelligence for Improving Patient Flow and Healthcare Delivery” (2019) 12(3/1000303) J Comput Sci Syst Biol 1.
  28. Adam Baker and others “A Comparison of Artificial Intelligence and Human Doctors for the Purpose of Triage and Diagnosis” (2020) 3(543405) Front Artif Intell 1 at 6; and Scott Levin and others “Machine-Learning-Based Electronic Triage More Accurately Differentiates Patients With Respect to Clinical Outcomes Compared With the Emergency Severity Index” (2018) 71 Ann Emerg Med 565 at 571–572. Artificial intelligence can even serve to triage patients before they encounter the healthcare system; the use of Chat-GPT for personal medical questions has grown with its popularity (but not without accompanying risks of incorrect information and potential harms): see Niki Bezzant “ChatGPT or GP: Can I trust AI for health advice?” (11 May 2023) RNZ <www.rnz.co.nz>.
  29. Te Pokapū Hātepe o Aotearoa—New Zealand Algorithm Hub “About” <https://algorithmhub.co.nz>.
  30. Te Pokapū Hātepe o Aotearoa—New Zealand Algorithm Hub, above n 29.
  31. Bezzant, above n 28.
  32. Daniel Wilson and others “Lessons learned from developing a COVID-19 algorithm governance framework in Aotearoa New Zealand” (2023) 53 JRSNZ 82 at 90–91.


C Threats of Tortious Harm: The Need for Reform

The developments set out in the preceding section demonstrate how artificial intelligence has cemented its presence in New Zealand healthcare, notwithstanding the early stage of its integration. It yields profound potential and realised benefits, but is no silver bullet. While there are innumerable (and many yet undiscovered) possibilities for artificial intelligence in healthcare, its potential for harms and liabilities is evident. Health professionals are acutely aware of this issue; many consider these risks to be as prominent a factor in implementation as any potential benefits.33 Reliance on such machines as they become commonplace could lead to complacency, poor outcomes and liabilities.34 Furthermore, consumers could soon circumvent the health system altogether to seek free advice from artificial intelligence services, which could prove treacherous without proper quality control.35

Artificial intelligence systems are reliant on, and therefore limited by, the data they receive. If the data is not accurate, systems may make inappropriate and harmful decisions underpinned by bias.36 Without full patient data, artificial intelligence systems could discount ethnicity, age and sex-related data in health screening or diagnosis.37 In the context of tortious liability, poor data could affect decisions made by artificial intelligence systems, ultimately leading to personal injury. For example, an algorithm designed to recommend treatment for hypertension that is trained on data from a young, healthy population could misdiagnose and provide inappropriate recommendations to patients not in those

  33. Jane Scheetz and others “A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology” (2021) 11(5193) Nat Sci Rep 1 at 5 and 8. For a global review see Ian A Scott, Stacy M Carter and Enrico Coiera “Exploring stakeholder attitudes towards AI in clinical practice” (2021) 28(e100450) BMJ HCI 1 at 2–3.
  34. Robert Challen and others “Artificial intelligence, bias and clinical safety” (2019) 28 BMJ Qual Saf 231 at 234; and Raja Parasuraman and Dietrich H Manzey “Complacency and Bias in Human Use of Automation: An Attentional Integration” (2010) 52 Hum Factors 381 at 394–395.
  35. Bezzant, above n 28.
  36. For a multidisciplinary overview see Eirini Ntoutsi and others “Bias in data-driven artificial intelligence systems—An introductory survey” (2020) 10(e1356) WIREs Data Mining Knowl Discov 1; and for a discussion in the New Zealand health context see Vithya Yogarajan and others “Data and model bias in artificial intelligence for healthcare applications in New Zealand” (2022) 4(1070493) Front Comput Sci 1.
  37. Challen and others, above n 34, at 232–233.


demographics, resulting in poor health outcomes and potentially personal injury.38

A key issue with artificial intelligence algorithms is their opacity. Developers can ascertain whether an artificial intelligence has correctly constructed an algorithm to connect the input and output but cannot see how it has done so.39 This phenomenon, known as the “black box”, creates difficulties for developers and end users in assessing the algorithms’ accuracy and validity.40 To contextualise the issue, a system may train itself to link particular patient factors present in the training dataset as hidden layers, and may misdiagnose a skin lesion, for example, if that lesion presents differently to those in the training dataset.41 If a health professional relies on those outputs to the patient’s detriment, it is difficult to attribute liability where the factors sitting behind that diagnosis cannot be dissected and explained.42

In regulating these harms, New Zealand law separates the market regulation of these systems and the mechanisms for addressing their resulting tortious liabilities. Regulation of artificial intelligence systems in healthcare currently only occurs through their classification as medical devices, which has historically aligned more with traditional medical tools like surgical mesh and pacemakers than artificial intelligence.43 The enforcement provisions in the

  38. George Maliha and others “Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation” (2021) 99 Milbank Q 629 at 632–633.
  39. Aleksandre Asatiani and others “Challenges of Explaining the Behavior of Black-Box AI Systems” (2020) 19 MIS Q Exec 259 at 259–260; Aaron FI Poon and Joseph JY Sung “Opening the black box of AI-Medicine” (2021) 36 JGH 581 at 582–583; and Challen and others, above n 34, at 233.
  40. For an example in the legal context related to black box issues in algorithmic administrative decision-making see generally Jessica Palairet “Reason-giving in the age of algorithms: how artificial intelligence challenges the legitimacy and proper functioning of administrative law, and why giving reasons is at the heart of the solution” (LLB (Hons) Dissertation, University of Auckland, 2019).
  41. Challen and others, above n 34, at 232–233.
  42. Stacy M Carter and others “The ethical, legal and social implications of using artificial intelligence systems in breast cancer care” (2020) 49 The Breast 25 at 27.
  43. Section 3A(a) of the Medicines Act 1981 provides that a “medical device” is a “device, instrument, apparatus, appliance, or other article” that does not achieve its intended action “by pharmacological, immunological, or metabolic means”. Anything intended to be used with such a device also falls within the definition under para (c). There is no specific provision for software, but it could feasibly meet this definition. Any artificial intelligence as a medical device would still need to be used for a “therapeutic purpose” under s 4.


Medicines Act 1981 do not deal with harm associated with medical devices.44 They focus on enforcement measures against healthcare companies or providers importing, advertising and dealing with medical devices and medicines as products.45 The responsibility of compensating for harm lies with the accident compensation scheme, or, should that fail, in tort. The author considers these are the proper pathways for assessing who bears the liability and cost.

Artificial intelligence may give rise to numerous actionable harms—not only those arising from personal injury, but also where patient data is used without consent in breach of privacy, or where artificial intelligence health professionals breach human rights.46 Nevertheless, as artificial intelligence frequently outperforms its human counterparts, its adoption continues at pace.47 It is critical to assess how New Zealand law addresses harms by artificial intelligence currently and how it could operate proactively against these threats in the future, particularly as it lags behind other jurisdictions in this area.48

III THE ACCIDENT COMPENSATION SCHEME: NO(?) LIABILITY & THE COST FOR SOCIETY

The accident compensation scheme makes New Zealand a notable outlier in the common law world. By enacting the Accident Compensation Act 2001 (ACA),
  44. The Medicines Act was intended to be replaced with the Therapeutic Products Act 2023, which has received royal assent and is due to fully come into force by 2026: Therapeutic Products Act, s 2(3). However, the current Government has introduced the Therapeutic Products Act Repeal Bill 2024 (67-1), which is currently due for its second reading: New Zealand Parliament | Pāremata Aotearoa “Therapeutic Products Act Repeal Bill” (1 July 2024) <https://bills.parliament.nz/>. It is expected that this Bill will pass into law before the end of 2024. Therefore, the provisions of the Therapeutic Products Act dealing with medical devices are not relevant, except to the extent those provisions are resurrected in any future replacement regime.

  45. See provisions relating to general enforcement in Part 5 and the various offence provisions in other parts of the Medicines Act.
  46. See generally Christopher Ryan Boniface “The Legal Impact of Artificial Intelligence on the New Zealand Health System” (PhD Thesis, University of Canterbury, 2021) at chs 4–7. These causes of action are not discussed in this article.
  47. At 45 and following; and Mitchell Hageman “Digital humans to provide better healthcare services in NZ” (22 August 2022) ITBrief New Zealand <https://itbrief.co.nz>.
  48. The Law Commission has no capacity in its law reform programme in the next few years to look into this area despite acknowledging its pressing nature: see Sarah Putt “AI in healthcare neglected area in New Zealand law” (20 July 2020) Computer World <www.computerworld.com>. The Australian Medical Association has called for national regulation of artificial intelligence in healthcare following the explosion of Chat-GPT’s popularity and use: see Claire Moodie “Australian Medical Association calls for national regulations around AI in health care” (28 May 2023) ABC News <www.abc.net.au>.


Parliament effectively (but not entirely) eliminated personal injury actions for compensation. The accident compensation scheme has placed the collective cost of personal injury, including injury caused by medical treatment, on society by imposing levies to finance centralised compensation funds. Parliament’s deliberate lawmaking has ensured cover extends to all accident-related injuries during any step of the healthcare process. It should therefore extend this cover to future-proof the scheme in the face of the artificial intelligence revolution.

Treatment injury cover under s 32 of the ACA provides patients with statutory accident insurance, compensating them on a no-fault basis for medical personal injury.49 Under the current statutory definitions, the actions or omissions of artificial intelligence would likely be covered as a form of “treatment” per s 33 of the ACA. However, examining the definition of “treatment injury” and its requirement of treatment from a “registered health professional”, it is dubious whether artificial intelligence systems would fall within such a narrowly defined and specialised term. The courts could remedy this issue, at least temporarily, by connecting the actions of the artificial intelligence to a human registered health professional. Legislative amendments will be essential to ensure cover remains and common law actions are minimised, maintaining the scheme’s founding principle of comprehensive entitlement.50

A “Treatment Injury”

While defined in s 32, s 20 governs the cover for treatment injury. To receive cover, the patient’s harm must be personal injury as described in s 26,51 and not fall into a category excluded from cover, like chronic or long-term illness unrelated to work or previous medical treatment.52 It must also be a personal injury of a type described in s 20(2),53 including the common head of direct
  49. Notably, even if a claim for treatment injury fails, cover could feasibly be available for personal injury caused by an accident under s 20(2)(a) or personal injury by gradual process, disease or infection consequential to an already covered personal injury under ss 20(2)(f)–(h). These select exceptions are not discussed as treatment injury provisions are generally sufficiently comprehensive, but they are available. For a discussion of these alternative categories of cover see Boniface, above n 46, at 193–210.
  50. Owen Woodhouse, HL Bockett and GA Parsons Compensation for Personal Injury in New Zealand: Report of the Royal Commission of Inquiry (Government Printer, December 1967) [Woodhouse Report] at [57].
  51. Accident Compensation Act 2001 [ACA], s 20(1)(b).
  52. Personal injury “does not include personal injury caused wholly or substantially by a gradual process, disease, or infection”: s 26(2). The exceptions are found at ss 20(2)(e)–(h).
  53. Section 20(1).


treatment injury under s 20(2)(b) and other adjacent heads of cover involving harm from medical treatment.54

“Treatment injury” as defined in s 32 means “personal injury ... suffered by a person” while “seeking treatment from” or “receiving treatment from, or at the direction of, 1 or more registered health professionals”.55 A causal link is also required between the treatment and the personal injury under s 32(1)(b). The treatment injury must not be a “necessary part, or ordinary consequence, of the treatment” considering the person’s health condition and prevailing clinical knowledge.56 Injuries resulting from a person’s pre-existing health condition, lack of staff or other resources available to provide treatment, and a patient’s unreasonable withholding of consent to treatment are disqualified from cover.57

B “Treatment”

Section 33(1) provides a non-exhaustive and intentionally broad definition of treatment for the purposes of s 32. The definition includes, among other things: the provision or omission of treatment; diagnosis; the failure to obtain informed consent to undergo treatment; and the use of support and administrative systems in care that “directly support the treatment”. It covers health professional decision-making throughout primary healthcare, from hospital administration to medical care. This was Parliament’s intention, to avoid technical arguments over what is defined as treatment that could exclude individuals from cover.58 Because the definition of “treatment” does not depend on the party providing it, s 33(1) could cover artificial intelligence decision-making integrated at different stages without issue.59

Judges have broadly interpreted “treatment” under s 33(1) to give effect to the ACA’s purpose.60 Many types of treatment covered under the s 33(1)

54 Sections 20(2)(c)–(d), (f) and (h)–(i).
55 Sections 32(1)(a)(i)–(ii).

56 Section 32(1)(c).
57 Section 32(2).
58 Joanna Manning “Treatment Injury” in Peter Skegg and Ron Paterson (eds) Health Law in New Zealand (Thomson Reuters, Wellington, 2015) 997 at 1014.

59 ACA, s 33(1).

60 Judges take a necessarily broad approach to the ACA in general to give effect to the regime’s foundational principle of comprehensive cover for personal injury: Accident Compensation Corporation v Mitchell [1991] NZCA 162; [1992] 2 NZLR 436 (CA) at 438–439; and Joanna Manning “Civil Proceedings in Personal Injury Cases” in Peter Skegg and Ron Paterson (eds) Health Law in New Zealand (Thomson Reuters, Wellington, 2015) 1061 at 1062.


New Zealand Law Students’ Journal

definition from case law are akin to types of treatment artificial intelligence is providing now or could provide soon, as set out in Part II(B) above. This suggests that, whether by a human or machine, these forms of treatment will continue to be covered. Examples include treatment injuries sustained during or due to misdiagnosis,61 surgical site infections62 and medication mix-ups,63 among others. Notably, the failure of any “equipment, device or tool” is included in the definition,64 meaning any failure by the artificial intelligence system or its hardware would likely be in scope. However, interpretive issues could arise where a system was deemed to be both the “registered health professional” providing the treatment under s 32 and the “equipment, device or tool” involved in the treatment under s 33. This should be accounted for in any reform to these provisions to avoid uncertainty.

C “Registered Health Professional”

The definition of “registered health professional” poses issues for statutory cover of artificial intelligence-inflicted injury. The ACA and its legislative predecessors have uniformly accepted that a registered health professional is a qualified, certified, human practitioner, as this was common sense for the times. Now, the specific definition for the term could exclude patients from cover unless the courts determine that treatment is given “at the direction of” the human registered health professional to circumvent this interpretive issue. Where more autonomous systems operate and this interpretation is not available, it is unlikely
61 For example, a misdiagnosis of Crohn’s disease is considered treatment: Corbishley v Accident Compensation Corporation [2010] NZACC 192. Artificial intelligence systems can diagnose inflammatory bowel diseases including Crohn’s disease: John Gubatan and others “Artificial intelligence applications in inflammatory bowel disease: Emerging technologies and future directions” (2021) 27 World J Gastroenterol 1920.
62 For example, a post-operative wound infection from thigh surgery is considered treatment: Evans v Accident Compensation Corporation [2013] NZACC 13. Artificial intelligence systems can perform orthopaedic surgery, and indeed do in New Zealand: see for example MercyAscot, above n 24; and Lucy Warhurst “New Zealand carries out our first robot-assisted knee surgery” (26 September 2017) Newshub <www.newshub.co.nz>.
63 Injuries from incorrect medication prescriptions, medication mix-ups or other issues with medication can be considered treatment: Accident Compensation Corporation [ACC] Supporting Treatment Safety 2021: Using information to improve the safety of treatment (July 2021) at 14–15. Artificial intelligence systems, as part of the emerging concept of “robotic pharmacies”, could recommend, prescribe and dispense medication: Asmaa R Alahmari, Khawlah K Alrabghi and Ibrahim M Dighriri “An Overview of the Current State and Perspectives of Pharmacy Robot and Medication Dispensing Technology” (2022) 14(e28642) Cureus 1.

64 ACA, s 33(1)(g).


that an artificial intelligence system could meet the requirements to be a registered health professional per the statutory definition. On that basis, reform is required.

1 “At the direction of”

It is likely that artificial intelligence systems will be non-autonomous in the early stages of integration, and will not be responsible for providing autonomous full-service care.65 As such, the words “at the direction of” in the s 32 definition of treatment injury will likely maintain a safety net for claimants seeking cover. Judges often employ the phrase in plain terms—a simple factual inquiry of proximity between a registered health professional’s direction for treatment and the treatment being delivered to the patient.66 It can be hypothesised that a judge, wanting to give effect to the ACA and its bar on compensatory negligence actions,67 would interpret a non-autonomous artificial intelligence as providing care “at the direction of” a human registered health professional.68 Indeed, that interpretation would be reasonable for many, if not most, artificial intelligence systems in healthcare operating on human instructions. Robotic surgery apparatuses perform surgery “at the direction of” the surgeon operating the controls, with precision assistance from the artificial intelligence.69 Diagnoses provided by artificial intelligence systems are made “at the direction of” the supervising doctor who inputs the data and reviews the output diagnosis for accuracy.70

Such an expansive interpretation of “at the direction of” in s 32(1)(a)(ii) may become strained the more advanced and autonomous artificial intelligence becomes, but a judge could still draw reasonable connections to give effect to

65 Artificial Intelligence for Health, above n 3, at 25 and following; and Jacobstein, Ibbotson and Bowley, above n 16, at 62.
66 For example, a claimant was denied cover for treatment injury because his treatment came from a drug and alcohol counsellor who is not a registered health professional per the ACA; and nor was the treatment “at the direction of” his GP (a registered health professional) as the counsellor attended to the claimant separate to any input from the GP: L v Accident Compensation Corporation [2016] NZACC 195 at [19] and [30].

67 ACA, s 317(1).

68 Boniface, above n 46, at 203.
69 A surgeon must still guide the machine to make the incisions; it cannot do so autonomously: nib “Robotic Surgery in New Zealand” (18 December 2018) <www.nib.co.nz>.
70 MoleMap’s new skin cancer diagnostic aid artificial intelligence does not operate autonomously; it provides outputs for the dermatologist to review and either use or change, if necessary: Felmingham and others, above n 13, at 1139.


this wording. Even if an artificial intelligence system acts completely autonomously in its provision of care, the care could be “at the direction of” a registered health professional if they referred the patient to the system, followed up with the patient to confirm the treatment plan, or even granted implicit endorsement by allowing the patient to use the system in the first place.71 Provided the judge interprets the term this way, cover will likely still be available.

2 Personhood and qualifications

If an autonomous artificial intelligence system provided care and the court found no way to apply the term “at the direction of” feasibly, applying the “registered health professional” definition to the artificial intelligence itself would present difficulties.72 This is particularly so as the definition states that a registered health professional is both: (1) a “person”; and (2) a professional of the type listed in reg 7(a) of the Accident Compensation (Definitions) Regulations 2019 (Definitions Regulations).

New Zealand law has rejected the status of artificial intelligence as a legal person, albeit only in the context of intellectual property.73 Even if it could be considered a legal person, it is probable the Definitions Regulations refer to a natural person and therefore artificial intelligence is excluded. A purposive interpretation, as required by s 10 of the Legislation Act 2019, could include non-natural legal persons to support the ACA’s aims, especially where the alternative is no cover in what purports to be a comprehensive system.74 However, the legislative history and lack of evidence as to Parliament’s intention would make such an interpretation difficult for the court to justify. The select committee report on the original Injury Prevention and Rehabilitation Bill 2001 (that became the ACA) did not consider the scope of the term “registered health professional” beyond humans.75 The term is not discussed and is simply asserted as an inherently straightforward definition in Hansard for the Injury Prevention, Rehabilitation, and Compensation Amendment Act (No 2) 2005 (the amendment

71 Boniface, above n 46, at 206–207.
72 ACA, s 32(1)(a).

73 Thaler v Commissioner of Patents [2023] NZHC 554, [2023] 2 NZLR 603 [Thaler v Commissioner of Patents (NZHC)].
74 ACA, s 3.
75 Injury Prevention and Rehabilitation Bill 2001 (90-2) (select committee report) at 23–24.


that enacted the treatment injury provisions in their current form),76 and its three select committee reports.77

Leading texts offer no discussion of the term’s scope except to restate the professions covered in the statutory definition.78 Even Cabinet papers discussing the Definitions Regulations from 2019, when artificial intelligence had been established in healthcare, do not meaningfully consider the term’s scope.79 Considering the Definitions Regulations were enacted so recently, there has been no case law further clarifying the definition, except to confirm the list of occupations already contained in reg 7.80 Even when the definition was still contained in the ACA, no case law mentioned the term except when discussing whether the human practitioner in the case was a “registered health professional”.81

Even if an artificial intelligence system is considered a legal person and reg 7(a) of the Definitions Regulations is somehow interpreted to include legal persons, it must be a professional of a type listed exhaustively in reg 7(a). Fitting the artificial intelligence system into one of the established professional categories would likely be impossible, as these professions are defined by the qualifications and certifications they require—which an artificial intelligence system cannot currently obtain.82 No other legal persons currently in existence

76 See for example (5 August 2004) 619 NZPD 14702: “Health professionals are often reluctant to cooperate in determining claims, because of the ... consequences to them personally” (emphasis added); and see (4 May 2005) 625 NZPD 20290: “[B]ecause medical professionals, like everybody else, can learn from their mistakes” (emphasis added).
77 Injury Prevention, Rehabilitation and Compensation Amendment Bill (No 3) 2004 (165-1) (select committee report) at 6; Injury Prevention, Rehabilitation and Compensation Amendment Bill (No 3) 2004 (165-2) (select committee report) at 5–6; and Injury Prevention, Rehabilitation and Compensation Amendment Bill (No 3) 2005 (165-3) (select committee report).
78 Stephen Todd “Accident Compensation and the Common Law” in Stephen Todd (ed) Todd on Torts (9th ed, Thomson Reuters, Wellington, 2023) 31 at 68, n 191; Doug Tennent Accident Compensation Law (LexisNexis, Wellington, 2013) at 123; and Manning, above n 58, at 1019.
79 Cabinet Paper “Accident Compensation (Definitions) Regulations 2019” (1 August 2019); and Cabinet Minute “Minute of Decision: Accident Compensation (Definitions) Regulations 2019” (6 August 2019) LEG-19-MIN-0109.
80 See for example Roche Products (New Zealand) Ltd v Austin [2019] NZCA 660, (2019) 25 PRNZ 95 at [45].
81 See for example L v Accident Compensation Corporation, above n 66, at [30].
82 Regulations 3, 6 and 7 of the Accident Compensation (Definitions) Regulations 2019 [Definitions Regulations] define each profession by the registration, certification and qualifications required by each profession’s respective industry body, like the Medical Council


could gain these either; they are designed for natural persons. This further confirms Parliament’s intention that a registered health professional is a natural person in this statutory context.

3 Reform is needed

Whether non-autonomous or autonomous, there is little scope for any artificial intelligence healthcare system to be interpreted as a “registered health professional” under s 32 of the ACA and reg 7 of the Definitions Regulations. For s 32 of the ACA to continue operating effectively, judges will need to make full use of the phrase “at the direction of” to ensure artificial intelligence actions are covered. That will be feasible for non-autonomous systems. The more autonomous a system becomes, the more strained such interpretations could become, necessitating reform to reg 7 of the Definitions Regulations to include artificial intelligence systems or reform to s 32 of the ACA to re-define the requirements for cover. Regulation 7’s current drafting and the definition’s legislative background strongly exclude the possibility of artificial intelligence systems falling within the definition of “registered health professional”.

D Causation

Under s 32 of the ACA, whether by a human or artificial intelligence, a treatment injury must be caused by treatment to be covered.83 The test for causation is on a balance of probabilities and is focused on outcomes—a test of increased risk caused by the treatment is insufficient based on the statute’s language and purpose.84 The treatment by or at the direction of a registered health professional only needs to be involved in the chain of causation at some point. A causal link can be established regardless of whether others’ decisions or actions contributed to the injury.85

of New Zealand for medical practitioners. An artificial intelligence system cannot fulfil the strict requirements for qualification and registration as a health practitioner under the Health Practitioners Competence Assurance Act 2003 [HPCAA], as it does not have the ability to gain the appropriate degree for its field, and each definition in regs 3, 6 and 7 of the Definitions Regulations requires that professional to be a “person”, which may exclude an artificial intelligence system as discussed previously. See also Manning, above n 58, at 1019.

83 Subsection (1)(b).
84 Atkinson v Accident Rehabilitation Compensation and Insurance Corporation [2001] NZCA 335; [2002] 1 NZLR 374 (CA) at [24] as followed in Accident Compensation Corporation v Ambros [2007] NZCA 304, [2008] 1 NZLR 340 at [13]–[21].
85 Residual Health Management Unit v Dowie [2005] NZAR 298 (CA) at [27]–[29].


It follows that the chain of causation from the registered health professional’s actions, omissions or directions to the injury must remain unbroken.86 While an expansive reading of “at the direction of” could help link an artificial intelligence’s actions to a human registered health professional to establish cover, its balance with causation is delicate. Such a reading could counterproductively make establishing causation more difficult for claimants. If cover is dependent on establishing such a link, and that link is tenuous due to the registered health professional’s limited involvement, there may be a break in the chain on a causation analysis.87

Consider a general practitioner referring a patient to an artificial intelligence patient triage system for a checkup, during which a more serious ailment is discovered and treated incorrectly by that system, causing injury.88 It may be more difficult to establish that the doctor’s referral was more likely than not to have caused that injury as a factual inquiry, as they were completely unaware of the ailment or treatment. There is no question that the treatment itself caused injury; the question is whether the registered health professional provided or directed the artificial intelligence to provide that treatment, thereby establishing causation.89 While a pertinent issue, it is likely the courts will give effect to the ACA through a generous interpretation of causation if a human registered health professional was involved in any capacity. The courts already read causation broadly (but sensibly) under the ACA,90 and not doing so would obstruct claimants from establishing a treatment injury claim because of the failure of causation in fact. Nevertheless, this tension persists and justifies eventual reform, especially as artificial intelligence technologies advance in autonomy and their connections to registered health professionals’ directions become harder to identify.

86 Manning, above n 58, at 1018.
87 Scherer suggests the involvement of artificial intelligence under a traditional causation analysis could be seen as a superseding cause breaking the chain: Scherer, above n 7, at 365–366.
88 Artificial intelligence triage systems are under development, and in some cases are already implemented in clinical settings: see above at 7, n 28.
89 Scherer, above n 7, at 366.
90 Atkinson, above n 84, at [24]; and Ambros, above n 84, at [13]–[21].

E Society (and Patients) Bears the Cost


Where harm is covered as a treatment injury under the ACA, the Accident Compensation Corporation (ACC) subsidises any loss of income and funds additional treatment according to the entitlements in Part 4 of the ACA. Where treatment is not otherwise free as publicly funded healthcare, ACC’s liabilities to pay or contribute to specific treatments are set out in the Accident Compensation (Liability to Pay or Contribute to Cost of Treatment) Regulations 2003. Section 228(2) of the ACA provides for how the Treatment Injury Account is funded to provide compensation for successful treatment injury claimants. Paragraph (a) is not applicable as there is no specific levy for healthcare providers—they pay regular employer levies.91 Under paragraph (b), if a patient is employed, their compensation is drawn from the Earners’ Account, a fund of levies drawn from all employed workers in New Zealand.92 Otherwise, if a patient is not employed, their compensation is drawn from general tax.93 Effectively, all taxpayers contribute somewhat to the cost of these injuries.

Patients that seek private healthcare pay the difference, if any, between the cost of treatment and how much ACC is liable to pay. In this way, patients may also bear the cost of their treatment to an extent. Even if their insurance pays, they pay insurance premiums that still ultimately leave them bearing some cost. This mitigates the need for excessively high levies on income to cover all treatment for any personal injury that occurs—society is willing to pay for the scheme collectively, but only to a point.

IV MEDICAL NEGLIGENCE: HEALTHCARE PROVIDER LIABILITY & THE COST FOR CLINICS

The ACA is broad in scope and could feasibly cover artificial intelligence-inflicted treatment injury following the reforms this article recommends. The ACA therefore limits the place of negligence and other available civil actions in this context. However, while limited, it is important to consider how these actions operate and when. In those circumstances, the liable party should be the healthcare provider utilising the artificial intelligence system, not the system
91 For example, the levies set for healthcare providers for the 2024–2025 tax year are found in sch 3 of the Accident Compensation (Work Account Levies) Regulations 2022.
92 For example, the levy formulae set for earning New Zealanders for the 2024–2025 tax year are found in the Accident Compensation (Earners’ Levy) Regulations 2022.
93 ACC “What your levies pay for” (12 December 2018) <www.acc.co.nz>.


itself.94 The rationale for healthcare providers continuing to bear the cost of artificial intelligence harm is sound. Healthcare providers have the assets available to compensate plaintiffs and the liability arrangement already exists to hold them responsible. Many of the argued benefits of the artificial intelligence system assuming liability also apply to the healthcare provider, with the added benefit of no novel legal or practical complications. In any event, as these claims are relatively rare, this arrangement will help fill in these gaps appropriately.

A What Actions Survive the ACA?

Section 317 of the ACA bars claims for compensation directly or indirectly arising out of a personal injury covered by the ACA. Therefore, a compensatory action in negligence for treatment injury (as personal injury) is only available where the patient’s claim is not covered by the ACA. Psychiatric injury is one key type of injury that is mostly excluded from the ACA and therefore could form the basis of a compensatory negligence claim.95 The operation of s 319 also permits actions claiming exemplary—not compensatory—damages, even for an injury covered by the ACA. Finally, claimants can sue for compensation for any treatment injury that does not fit within s 32 of the ACA.96 Such claims are likely to be rare, considering the courts’ anticipated broad approach to interpreting the term “at the direction of” to allow cover for most treatment injuries involving artificial intelligence. However, it remains possible that exceptional claims will not fall under s 32 nor other more general categories of cover in s 20(2).

Though unrelated to compensation for personal injury and therefore this discussion, other actions outside of negligence and not arising from personal injury are also available despite the ACA.97 Notably, damages are recoverable

94 The other potential candidate for liability attribution is the developer of the artificial intelligence system. Issues arise where, inevitably, liability is excluded through the contractual arrangements between the developer, any third-party importer, and the health professional or healthcare provider purchasing the system: see Courtney K Meyer “Exculpatory Clauses and Artificial Intelligence” (2021) 51 Stetson L Rev 259. Considering the rarity of claims and scope of the ACA, the key cause of action against the developer is probably in statute or a product liability action: see below at 20, n 99.
95 The exceptions are where the psychiatric injury is contingent on a physical injury, results from a sexual crime or is work-related: ACA, ss 26(1)(c)–(da). The leading authority for establishing a negligence action for psychiatric injury where the ACA does not provide cover is van Soest v Residual Health Management Unit [1999] NZCA 206; [2000] 1 NZLR 179 (CA).
96 Subject to the discussion above at 11, n 49. See also Manning, above n 58, at 1019.
97 The following actions are available to patients despite the s 317 bar on compensatory actions in the ACA: defamation, breach of contract, unjustified dismissal or personal grievance in an


against health professionals and healthcare providers under s 57(1) of the Health and Disability Commissioner Act 1994,98 but only for rights breaches or harm to dignity or feelings—not for personal injury. Another key area of litigation in this context is product liability. While technically available under s 317(2), these claims assess compensation arising from economic loss due to a faulty product, not compensation for personal injury.99 They are worth noting but not relevant in this context.

While exceptions remain, the courts will likely work to keep claims covered under the ACA in order to support the comprehensive nature of the scheme—making such exceptions rare. The rarity of these claims is also evident in reporting from ACC. In the 2019/2020 period, ACC reported that 68 per cent of treatment injury claims were accepted,100 with the acceptance rate increasing over time.101 Of the claims not accepted, there was no physical injury for 34 per cent, no causal link between the treatment and the injury for 28 per cent, claims were withdrawn for 13 per cent and insufficient information was received to make a decision within the statutory timeframe for 12 per cent.102 Hypothetically, these patients without cover could litigate their claims, but many of the reasons their claims were denied would likely be fatal to any civil claim, like failure of causation or having no physical injury to prove loss (unless the injury is a recognisable psychiatric injury or, for torts, actionable per se). These claims are simply not being brought, which is arguably attributable at least in

employment context and economic loss under s 317(2), so long as they do not arise from personal injury; breach of a right in the New Zealand Bill of Rights Act 1990 giving rise to public law compensation; and declarations, (non-compensatory) equitable remedies, court orders, or criminal or disciplinary proceedings: see Manning, above n 60, at 1065 and following.

98 It is highly unlikely an artificial intelligence system would qualify as a “health care provider” under the Health and Disability Commissioner Act 1994 [HDCA], in any event: see s 3.
99 They cannot operate to obtain compensation for personal injury by virtue of s 317(3) of the ACA. Claims under the Consumer Guarantees Act 1993, Fair Trading Act 1986, Contract and Commercial Law Act 2017, common law of contract and common law of product liability negligence would better serve patients seeking remedies from developers arising out of product faults, breaches of consumer guarantees or misleading conduct in trade (in the limited cases where they can apply, considering the statutory bar in s 317 of the ACA): see Trish O’Sullivan and Kate Tokeley “Consumer Product Failure Causing Personal Injury Under the No-Fault Accident Compensation Scheme in New Zealand—a Let-off for Manufacturers?” 41 J Consum Policy 211; and Omri Rachum-Twaig “Whose Robot is it Anyway?: Liability for Artificial-Intelligence-Based Robots” [2020] U Ill L Rev 1141 at 1154–1157.
100 ACC, above n 63, at 31.

101 At 31.

102 At 32.


part to their inherent deficiencies.103 Many cases could also be settled out of court, with those settlements being funded by healthcare providers’ insurance.104 This illustrates that exceptions are rare. Therefore, maintaining the status quo of holding healthcare providers liable is appropriate for these exceptions.

B Healthcare Provider Liability is Preferred

While not tested in New Zealand in a meaningful way since before the enactment of the original Accident Compensation Act 1972, the principle of liability for health professionals persists in other common law jurisdictions.105 These would theoretically apply in New Zealand in the limited scenarios where s 317 of the ACA does not bar compensatory actions. In situations without ACA cover, health professionals bear liability for their negligent treatment if they cause harm that is within the scope of their duty of care,106 including where they use or adopt machinery or tools like artificial intelligence systems.107 Healthcare providers
103 Key texts in tort and health law both note the extremely limited application of medical negligence law in New Zealand: Todd, above n 78, at 44–45; and Joanna Manning “The Required Standard of Care for Treatment” in Peter Skegg and Ron Paterson (eds) Health Law in New Zealand (Thomson Reuters, Wellington, 2015) 95 at 95–96. For example, a brief search of cases with the key words “medical negligence” OR “health negligence” on LexisAdvance turns up five case results between 2019 and 2022—the same period (plus an additional two years for patients to bring claims) during which 5,319 treatment injury claims were rejected: ACC, above n 63, at 31–32. On Westlaw, the same search parameters returned 14 results. On NZLII, they returned eight results. None of these results related to a compensatory negligence action for personal injury.

104 As an illustrative example, a contingency of $4 million was set aside to resolve “matters of a legal nature” for 2022 for Te Whatu Ora Counties Manukau, including any legal action relating to the entity. No further breakdown is available to ascertain how much, if any, went to resolving personal injury claims. As settlements are private and confidential anyway, there is no information available as to how much the entity actually spent resolving claims, if anything: see Te Whatu Ora Counties Manukau Annual Report: 2021/22 (6 March 2023) at 118.
105 In Australia: Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479 (HCA) at 489; and Wallace v Kam [2013] HCA 19, (2013) 250 CLR 375 at [8]. In the UK: Bolam v Friern Hospital Management Committee [1957] 1 WLR 582 (QB); and Khan v Meadows [2021] UKSC 21, [2022] AC 852 at [58]–[65]. The existence of the duty is so well-established, it is often accepted and admitted, with its scope and breach being the only true issue in the respective case (among the other elements): see Bolitho v City and Hackney Health Authority [1997] UKHL 46; [1998] AC 232 (HL) at 239.
106 Khan v Meadows, above n 105, at [58]–[65].
107 Frank Pasquale “Liability Standards for Medical Robotics and AI: The Price of Autonomy” in Larry A DiMatteo, Cristina Poncibò and Michel Cannarsa (eds) The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press, Cambridge, 2022) 200 at 211.


employing these systems also owe patients a duty of care and so may be liable,personally108 and vicariously.109

As medical professional indemnity insurance is common in New Zealand, there are assets available for plaintiffs to draw from, as the health professional can claim insurance for the amount required up to the relevant cap (usually the healthcare provider as employer organises such insurance as a term of the health professional’s employment agreement).110 Even if the healthcare provider bears the liability solely, it can still pursue internal or external disciplinary procedures for the health professional, ensuring they are punished to some degree, if not by having to pay damages.111

The nature of some claims will necessitate them being brought against the healthcare provider, as they cannot or are very unlikely to succeed against the artificial intelligence system. As noted above, s 319 of the ACA permits exemplary damages claims even for injuries with cover. Besides satisfying the usual elements, these claims require a defendant to have been intentionally or subjectively reckless and to have acted outrageously in causing loss to the plaintiff.112 In those situations, the court deems it appropriate to subvert the usual aim of civil damages—to compensate—by using damages to punish and deter conduct.113 Courts can punish and deter humans or even other non-natural legal

108 As an established category of duty of care, a duty is patently owed by those who “provide and run” a healthcare service directly: Darnley v Croydon Health Services NHS Trust [2018] UKSC 50, [2019] AC 831 at [15]–[16]; and see Cassidy v Ministry of Health [1951] 2 KB 343 (CA) at 360.
109 Various Claimants v Catholic Child Welfare Society [2012] UKSC 56, [2013] 2 AC 1 at [19]; and Stephen Todd “Vicarious Liability” in Stephen Todd (ed) Todd on Torts (9th ed, Thomson Reuters, Wellington, 2023) 1369 at 1370–1372, and 1376 and following. However, healthcare providers must be sued directly if exemplary damages are sought, as vicarious liability does not operate for those claims: at 1376. The vicarious liability of healthcare providers for health professionals’ breaches of the HDCA and its Code of Patient Rights is enshrined in s 72 of the HDCA; and see Ryan v Health and Disability Commissioner [2023] NZSC 42, [2023] 1 NZLR 77.
110 While official statistics are unavailable due to the diversity of healthcare providers and insurers, information from insurers suggests professional indemnity insurance is mandatory or at least strongly recommended for all health professionals: New Zealand Medical Indemnity Insurance “RMO Guides: Medical Indemnity Insurance in New Zealand” <https://nzmii.co.nz>. Te Whatu Ora arranges professional indemnity insurance for its employees: see for example Te Whatu Ora Counties Manukau, above n 104, at 125.

  1. Therelevant disciplinary procedures are those under the HDCA and HPCAA.
  2. Couchv Attorney-General (No 2) [2010] NZSC 27, [2010] 3 NZLR 149 at [2] andfollowing per Elias CJ, [58] and following per Blanchard J, and [92] andfollowing per Tipping J.

113 At [19] and [22].


persons with high damages awards because money has intrinsic value and inflicting financial loss has a deterrent effect.114 Artificial intelligence systems are not (yet) advanced enough to be punished or deterred, as they do not have the higher-order perception and understanding that something is valuable and that losing it is adverse to their interests.115 In contrast, courts can punish healthcare providers with exemplary damages, incentivising them to introduce safety measures and additional staff training to avoid further losses.

Many proposed benefits of attributing liability to artificial intelligence systems also arise by making the healthcare provider liable. Proponents of artificial intelligence liability argue it provides a central and accessible entity for plaintiffs to sue, avoiding cross-border, complex litigation arrangements.116 However, the healthcare provider also offers plaintiffs a certain and proximate defendant to sue without navigating the novel and complex legal issues posed by suing the artificial intelligence, such as how it would be legally represented and deliver instructions to its representatives.117 There are also no issues with compensation when suing healthcare providers—if artificial intelligence systems are not legal persons, they cannot possess property and so cannot compensate plaintiffs.118 Therefore, some fund would have to be set up to compensate plaintiffs.119 Alternatively, healthcare providers would need to obtain a new form

114 Catherine M Sharkey “Punitive Damages Transformed into Societal Damages” in Elise Bant and others (eds) Punishment and Private Law (1st ed, Hart Publishing, 2021) 155 at 164–166; and Keith N Hylton “Punitive Damages and the Economic Theory of Penalties” (1998) 87 Geo LJ 421 at 424 and following.
115 Ryan Abbott and Alex Sarch “Punishing Artificial Intelligence: Legal Fiction or Science Fiction” (2019) 53 UC Davis L Rev 323 at 364–367; and Gabriel Hallevy Liability for Crimes Involving Artificial Intelligence Systems (Springer, Cham (Switzerland), 2015) at 210. But see the approach in Mark A Lemley and Bryan Casey “Remedies for Robots” (2019) 86 U Chi L Rev 1311, which indicates that deterrence and cost internalization could theoretically be achieved by inputting the relevant quantified costs of harm into the artificial intelligence system to update its algorithms, but warns this is a near impossible exercise and does not achieve the goals of punishment: at 1350–1358.
116 Mark Fenwick and Stefan Wrbka “AI and Legal Personhood” in Larry A DiMatteo, Cristina Poncibò and Michel Cannarsa (eds) The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press, Cambridge, 2022) 288 at 294 and 302–303.
117 Joanna J Bryson, Mihailis E Diamantis and Thomas D Grant “Of, for, and by the people: the legal lacuna of synthetic persons” (2017) 25 Artif Intell Law 278 at 288.
118 But see Rafael Dean Brown “Property ownership and the legal personhood of artificial intelligence” (2021) 30 I & CTL 208 at 229–233.
119 Bryson, Diamantis and Grant, above n 117, at 302.


of artificial intelligence malpractice insurance (or extend existing cover).120 Either option would create significant administrative costs for a relatively low benefit considering the rarity of such civil claims in New Zealand.

C Artificial Intelligence Liability is Undesirable

To bear civil liability for tortious harms, artificial intelligence systems must be granted legal personhood: the rights conferred and duties imposed on an entity at law.121 Difficulties arise in determining whether an artificial intelligence system could technically and practically be granted legal personhood. This is reflected in numerous jurisdictions opting to reject granting personhood to artificial intelligence, including the European Union (EU),122 the United Kingdom,123 and Australia.124 In the latter two, and most other comparable common law jurisdictions, these debates on legal personhood have arisen only in the context of intellectual property rights. Their courts have not considered legal personhood for the purposes of liability attribution, nor in the tort law context.
120 Benedict See “Paging Doctor Robot: Medical Artificial Intelligence, Tort Liability, and Why Personhood May Be the Answer” (2021) 87 Brook L Rev 417 at 442; Resolution 2020/2014(INL) on the Civil liability regime for artificial intelligence [2021] OJ C404/107 [Civil Liability Resolution] at 113; and Fenwick and Wrbka, above n 116, at 302–303.
121 Peter Spiller (ed) New Zealand Law Dictionary (10th ed, LexisNexis, Wellington, 2022), definition of “legal person”.
122 The EU has directly considered liability for artificial intelligence and rejected it in favour of placing liability on parties in the development chain: Civil Liability Resolution, above n 120, at 110; and Expert Group on Liability and New Technologies—New Technologies Formation Liability for Artificial Intelligence and other emerging digital technologies (Publications Office of the European Union, 2019) at 38. See also European Commission, Directorate-General for Justice and Consumers Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) (COM (2022) 496, 28 September 2022) at 18, 20 and 25–27.

123 Thaler v Comptroller General of Patents, Trade Marks And Designs [2023] UKSC 49, [2024] 2 All ER 527 [Thaler (UKSC)] at [56]–[59], [63]–[65] and [75]. This decision largely reflected the Court of Appeal’s decision in reasoning and outcome: see Thaler v Comptroller General of Patents, Trade Marks And Designs [2021] EWCA Civ 1374, [2022] Bus LR 375 at [49]–[54], [102] and [116]. The Government of the United Kingdom responded to the Court of Appeal decision (as the Supreme Court decision had not yet been heard at the time of consultation), deciding that the issue was important but fell outside the scope of their specific review: Intellectual Property Office “Artificial Intelligence and Intellectual Property: copyright and patents—Government response to consultation” (28 June 2022) <www.gov.uk> at [8].

124 Commissioner of Patents v Thaler [2022] FCAFC 62, (2022) 289 FCR 45 at [108] and [113]–[116]. Leave to appeal to the High Court of Australia was declined: Thaler v Commissioner of Patents [2022] HCATrans 199.


Much of the global common law jurisprudence relating to legal personhood for artificial intelligence involves patent applications filed on behalf of the artificial intelligence system DABUS.125 New Zealand has followed other jurisdictions in refusing to accept patents listing DABUS as an inventor, because it is not a “person” and therefore not an “inventor” for the purposes of the Patents Act 2013.126 Palmer J in the New Zealand High Court opted to leave such a significant change to Parliament, noting the United Kingdom Government decided not to reform its patent legislation despite being faced with this issue.127 The current New Zealand position is clear, but as in other jurisdictions it may be limited to the intellectual property context with specific subject matter and ownership rights. Further, Palmer J’s decision in Thaler v Commissioner of Patents was specific to discussions of rights, not liabilities.128 Nevertheless, for now the United Kingdom and Australian positions align with New Zealand’s.129 Without direction from Parliament on the matter or a change of approach in the courts, artificial intelligence systems are likely not deemed legal persons in New Zealand and so cannot be sued in tort.

Considering the alternative where the position does change, there are still significant barriers preventing artificial intelligence systems from being suitable defendants for their own tortious treatment harms. The nature of artificial intelligence systems presents quandaries for applying the classic elements of negligence: duty of care, breach of the standard of care, loss, and causation and remoteness.130 In contrast, a wealth of established overseas case law allows the comfortable application of these elements to healthcare providers, making them more desirable defendants.

By way of example, consider the duty of care. As a novel and previously unconsidered scenario, assessing whether artificial intelligence systems could

125 See for example the decisions in Thaler (UKSC), above n 123, and Thaler (FCA), above n 124, which were both appeals of determinations on patent applications filed by Mr Thaler on behalf of DABUS.
126 Thaler v Commissioner of Patents (NZHC), above n 73, at [30]–[32].
127 At [33].
128 It is implicit in the decision and the wording of the relevant provisions of the Patents Act 2013 that the grant of a patent is a legal right (with underlying moral rights potentially implicated in the discussion): at [16], [22]–[23] and [32].
129 The current authoritative positions in the United Kingdom and Australia can be found in Thaler (UKSC), above n 123, and Thaler (FCA), above n 124, respectively.

130 Stephen Todd “Negligence: The Duty of Care” in Stephen Todd (ed) Todd on Torts (9th ed, Thomson Reuters, Wellington, 2023) 153 at 155–156.


owe a duty of care would necessitate applying the South Pacific duty test, which requires (among other elements) the duty holder to reasonably foresee harm to another person in the course of its actions.131 There is technical and philosophical uncertainty as to whether an artificial intelligence system has the awareness to reasonably foresee the consequences of its actions, due to fundamental differences in computational logic between artificial intelligence and the human brain.132 It may “understand” that processed inputs generate outputs and that these outputs can be objectively undesirable for the model based on a human’s recalibration. However, whether it could recognise that some of those outputs are harmful to humans in a moral sense, and understand why such harm is subjectively bad based on moral standards ingrained into the law, is doubtful.133

Another area of concern is causation. The black box phenomenon obscures the true cause of harm, affecting the inquiries of both cause in fact134 (where the harm actually originated) and cause in law135 (whether the risk of harm ought to be attributed to the party it originated with).136 The harm may in fact have originated in some external source, such as the inputs, which can be hard to identify without scrutinising the artificial intelligence’s hidden layers.137 Similarly, without such scrutiny, it may be difficult to determine whether the invisible string of culpability attaches to the artificial intelligence’s own learning

131 South Pacific Manufacturing Co Ltd v New Zealand Security Consultants & Investigations Ltd [1991] NZCA 551; [1992] 2 NZLR 282 (CA) at 305–306.

132 Boniface, above n 46, at 236; and Rachum-Twaig, above n 99, at 1152–1154 and 1160.
133 But consider that the moral aspect to foreseeability may be an entirely unfeasible approach for artificial intelligence actions anyway, and other options for legal remedy can be pursued without trying to apply a human behaviour-oriented legal principle on an unsuitable subject—a non-human: Lemley and Casey, above n 115, at 1378–1384.
134 Stephen Todd “Causation and Remoteness of Damage” in Stephen Todd (ed) Todd on Torts (9th ed, Thomson Reuters, Wellington, 2023) 1239 at 1241.
135 At 1274.

136 Zhao Yan Lee, Mohammad Ershadul Karim and Kevin Ngui “Deep learning artificial intelligence and the law of causation: application, challenges and solutions” (2021) 30 I & CTL 255 at 263–265.
137 This is not to say these evidential difficulties would be remedied by holding the health professional or healthcare provider liable, but it would be easier to draw a line between the harm and the health professional because the health professional actually delivers the treatment based on the artificial intelligence system’s generated output. Clearly the health professional’s reliance on those outputs will have caused the harm in fact, but again the cause in legal terms may be difficult to determine: Boniface, above n 46, at 229–232; and see Lee, Karim and Ngui, above n 136, at 271–272.


or pre-programmed aspects of the system.138 This can make the cause in law inquiry difficult for the courts, especially where the system itself cannot give evidence or be cross-examined like a health professional can.139

These issues are just a selection of the multitude posed by the novelty of artificial intelligence actors in healthcare spaces.140 The law will have to react even where the health professional or healthcare provider is the defendant because the problems created by artificial intelligence’s involvement persist. However, in those cases the elements are more easily proven with established case law. The courts can also interrogate the mental states and actions of these parties more readily with traditional adjudicative procedures. The complexity inherent in attributing liability to artificial intelligence provides an arguable basis to defer this option, at least for now, in favour of simpler liability arrangements that are tested and functional in compensating patients.

D Health Professionals and Healthcare Providers Bear the Cost

The burden of liability and cost rests on the health professional in some cases, but generally on the healthcare provider. If health professionals and healthcare providers cause patient harm that does not fall within the ACA’s cover provisions, and the necessary elements of the relevant cause of action are established, patients may sue for compensation. However, to a degree, the cost is also borne by the collective in taxes. Taxes fund Te Whatu Ora and all public healthcare providers, meaning any awards of damages or other monetary remedies are funded indirectly by taxpayers.141 The principle nonetheless holds for private healthcare providers; they must bear the cost, which is inevitably shared with their insurer but essentially borne by the healthcare provider, who pays the insurance premiums.
138 Lee, Karim and Ngui, above n 136, at 266–267 and 271.
139 At least not in the traditional sense according to the established rules of the Evidence Act 2006—inevitably new approaches to eliciting evidence from machines will need to be created in a rapidly changing digital world: see generally Andrea Roth “Machine Testimony” (2017) 126 Yale LJ 1972.
140 For wider implications in the healthcare sector see generally Boniface, above n 46, with the negligence context specifically addressed at 210 and following. See also Scherer, above n 7, at 362–372; and Pasquale, above n 107.
141 Most healthcare is publicly funded in New Zealand. For example, Te Whatu Ora Counties Manukau received approximately $2.4 billion in patient care revenue in the 2021/22 reporting period. More than $2.2 billion (approximately 90%) of that revenue was received from the Ministry of Health as part of the Government’s budget: see Te Whatu Ora Counties Manukau, above n 104, at 86 and 88.


Thaler v Commissioner of Patents sets out a clear but context-specific approach for the courts as to the legal personhood of artificial intelligence.142 Without legal personhood, an entity cannot be sued nor be found liable.143 However, as an artificial intelligence system’s liability in tort has not been tested in New Zealand, its ability to be sued and found liable remains uncertain. Even if this position changed, an artificial intelligence system could theoretically bear liability, but not the cost—it does not possess money or property. A separate insurance or funding scheme to address this would be impractical considering the relative lack of these claims in New Zealand. The impracticalities and complexities of attempting to make artificial intelligence systems bear the liability and cost of their own harm are insurmountable in the current legal and technological context.

V THE LIABILITY AND COST QUESTION: PRACTICAL CONSIDERATIONS AND REFORM

Having surveyed the current liability matrix for artificial intelligence harm in healthcare settings, this part reaffirms which parties should bear the liability and cost of the harm, and what reforms or judicial approach would give effect to that arrangement. The ACA should continue operating to exclude claims and spread the cost of artificial intelligence harm across wider society, as with all treatment injuries. This requires reforming the ACA and Definitions Regulations proactively to avoid uncertainty in the application of the term “at the direction of” in s 32. Otherwise, the current approach to negligence actions should be maintained: holding the health professional or healthcare provider liable. This ensures continuing compensation while avoiding legal issues of personhood and practical issues of redress from artificial intelligence systems. In the current context, there is unlikely to be a shift in the legal personhood or liability arrangements for artificial intelligence, for good reason. In other jurisdictions, or in a future where New Zealand undergoes significant technological advances, such a shift could be justified.
142 Thaler v Commissioner of Patents (NZHC), above n 73, at [27]–[33].
143 This is a procedural reality. In the District Court Rules 2014 and High Court Rules 2016, a defendant “means a person served or intended to be served with a proceeding” (emphasis added): see District Court Rules 2014, r 1.4 definition of “defendant”; and High Court Rules 2016, r 1.3 definition of “defendant”.


A ACA and Exclusion of Liability

1 Reforming the accident compensation legislation

Section 32 requires future-proofing. In the short term, most artificial intelligence-inflicted injury occurring in New Zealand healthcare should theoretically be covered. The definition of treatment in s 33, the causative link in s 32(1)(b) and the use of “at the direction of” for treatment by a registered health professional via non-autonomous artificial intelligence under s 32(1)(a) all arguably allow for cover. However, uncertainty is inevitable as artificial intelligence systems grow in autonomy and applications of the causative test and phrase “at the direction of” become strained and unjustifiable as a matter of policy. Therefore, reforms to s 32 would provide certainty that such treatment injuries are covered and do not circumvent the s 317 statutory bar, which would strengthen the accident compensation scheme.

The key issue with cover under s 32 is the definition of “registered health professional”. Either this definition or the wider provision in s 32 will need amendment to address this issue. A definitional change to “registered health professional” would only require modifying the Definitions Regulations.144 This would be faster than a change to s 32 itself; the former requires an Order in Council (following necessary consultation with the profession), while the latter requires the full legislative process of introducing and passing an amendment Bill in the House.145 However, the process of consultation and enactment is still relatively slow, and there is no guarantee that the Government will take this action.

Part of the reason for moving the definitions of “registered health professional” and other terms from the ACA to the Definitions Regulations was “to allow definitions to be more easily added and updated in the future”.146 Considering this intention and the broad nature of s 322 of the ACA, the Definitions Regulations could be updated to expand the scope of registered health professionals. Removing the dubious “person” would also avoid statutory interpretation hurdles of assessing whether an artificial intelligence system is a legal person,147 and whether the Definitions Regulations can be read to include

144 ACA, s 322(1)(e).
145 Section 322(3).
146 Cabinet Paper, above n 79, at [2]; and Cabinet Minute, above n 79, at [1].
147 The position is clear but context-specific in Thaler v Commissioner of Patents (NZHC), above n 73.


non-human legal persons. Even then, the artificial intelligence system would still need to be classified as a professional within the list in regs 7(a)(i)–(xxiv), which is untenable as these professions are defined by their qualification and registration requirements contained across regs 3, 6, and 7. There is no feasible way to amend the qualification and registration requirements—they are critical for gatekeeping the ACA’s treatment injury cover as only applying to legitimate medical treatment. That is why a focus on the treatment instead has merit.

Boniface suggests that the term “registered health professional” be removed altogether from the s 32 definition of “treatment injury” in the ACA, substituting a definition of treatment injury as personal injury “suffered by a person ... seeking treatment through any service or provider approved and accredited by the Ministry of Health”.148 The Minister may then choose to define what services and providers are so approved and accredited, providing these in the Definitions Regulations or, for ease of amendment, in a separate register maintained and administered by the Ministry of Health. A similar consultation process as under s 322(3) of the ACA for changes to the Definitions Regulations could be retained and adopted for changes to this register. Setting up this register would require a Bill amending the ACA in the House, which may take longer than amendments to the Definitions Regulations. However, it would remove the need for future legislative amendments and allow for flexibility as technologies change, as updating the register would be a regulatory power of the Ministry.

Parliament should pursue this option. Focusing on the service and the provider of the service helps maintain broad cover for treatment while allowing for situations where medical treatment is delivered in novel settings without human health professionals. It has the added benefit of remedying potential issues with causation and with determining whether a sufficient link exists between the registered health professional and the injury as artificial intelligence grows increasingly autonomous. The underpinnings of the entire accident compensation scheme justify these amendments to ensure comprehensive cover continues.

2 Rationale

The recommended reforms above aim to maintain and strengthen the foundational principle of comprehensive cover ingrained in the ACA. Under the accident compensation scheme, society bears the cost of patient harm arising from health professionals’ actions. This should be the case whether that

148 Boniface, above n 46, at 262.


professional is human or not. Not having statutory cover available for artificial intelligence treatment while having it for human health professional treatment would deter patients from using those systems, defeating the point of their development and integration to provide better health outcomes. Reflecting the no-fault nature of the accident compensation scheme, it should not matter how or why the harm occurred or who caused it, but that it did occur (subject to the other necessary qualifications of s 32), at least for the purposes of compensation or redress.

This approach could make health professionals complacent by removing concerns of liability, but evidence suggests that harsh malpractice environments—like in the United States—yield no observable health benefits for patients.149 In fact, the opposite can occur.150 Importantly, disciplinary procedures and actions for exemplary damages remain available to punish negligent health professionals and provide deterrence anyway.151 Spreading the harm collectively could also have a positive effect, as health professionals may be encouraged to act according to their professional expertise for the benefit of the patient without concerns of liability risk.152 However, evidence of this particular benefit is inconclusive; while accountability has been reduced through the introduction of the accident compensation scheme, the net effects are unprovable.153

This arrangement is uncontroversial and should be maintained for the benefit of New Zealanders. While claimants may still need to litigate aspects of their cover, these cases are relatively rare compared to the number of accepted claims per year, reflecting the general cost-effectiveness of the

149 Christina A Minami and others “Association Between State Medical Malpractice Environment and Postoperative Outcomes in the United States” (2017) 224 J Am Coll Surg 310 at 315.
150 Osman Ortashi and others “The practice of defensive medicine among hospital doctors in the United Kingdom” (2013) 14(42) BMC Med Ethics 1 at 4–5. However, the same can occur even without threat of malpractice liability, such as in New Zealand, because risks of disciplinary action can still bear on health professionals’ actions (but note this study is limited by its self-reporting methods): see Wayne Cunningham and Susan Dovey “Defensive changes in medical practice and the complaints process: a qualitative study of New Zealand doctors” (2006) 119(1244) NZMJ 1 at 7.
151 The frameworks for this are set out in the HDCA and the HPCAA.
152 GG Powell “The Effect of the New Zealand Accident Compensation Legislation on Medical Malpractice” (1988) 16 Aust NZ J Ophthalmol 147 at 153; and Boniface, above n 46, at 256.
153 Katharine Wallis “New Zealand’s 2005 ‘no-fault’ compensation reforms and medical professional accountability for harm” (2013) 126 NZMJ 33 at 41–42.


system.154 The system operates well, relative to the alternative of forcing claimants to obtain costly health insurance or pursue litigation to recover personal injury damages. The Government has entered into a social contract with its citizens to operate such a scheme because citizens have agreed to bear the cost and forfeit their negligence claims, knowing they can draw on the scheme for free when they need to.155 This contract is reliable and long-tested.

There are significant detriments in the scheme not having such comprehensive coverage. This would harm the general predictability of receiving compensation for harm that the scheme currently offers. It would create administrative difficulties for the ACC in investigating claims for treatment injury cover, where the causal origin in human or machine actions could be the difference between having cover or not, particularly where both human and machine are involved. If artificial intelligence systems continue to be fundamentally integrated in medical practice, it would soon become impractical to exclude the resulting harm from the scheme as a fast-growing exception for cover. Such an exclusion would require patients to bring their claims to the courts instead, leading to increased delay, stress and cost. Permitting a substantial subset of negligence claims to filter through the statutory bar would yield a proverbial death by a thousand cuts to the scheme’s foundational principle of comprehensive cover.156

154 Conducting a search on Lexis Advance, for example, for the terms “accident compensation corporation” AND “treatment injury” between 1 January 2019 and 31 December 2020 yields 72 cases (not accounting for double-ups, potentially multiple judgments or appeals for one case, or cases not relating to cover for treatment injury specifically but still containing the term). In comparison, ACC accepted 11,285 claims for cover for treatment injury in that same period: see ACC, above n 63, at 4.
155 Woodhouse Report, above n 50, at [56]; Geoffrey Palmer “A Retrospective on the Woodhouse Report: The Vision, the Performance and the Future” (2019) 50 VUWLR 401 at 409; and Susan St John “Reflections on the Woodhouse Legacy for the 21st Century” (2020) 51 VUWLR 295 at 298. The idea of a “social contract” that underpins the scheme is contained in the ACA’s purpose: s 3; and see Davies (Peter) v Police [2009] NZSC 47, [2009] 3 NZLR 189 at [27]; Queenstown Lakes District Council v Palmer [1998] NZCA 190; [1999] 1 NZLR 549 (CA) at 555; and Brightwell v Accident Compensation Corporation [1985] 1 NZLR 132 (CA) at 139–140.
156 Comprehensive entitlement is one of the founding pillars of the accident compensation scheme: see Woodhouse Report, above n 50, at [57]. It is reflected in the ACA’s purpose: s 3.


B Tortious Liability for Healthcare Providers

Holding the healthcare provider liable is the most sensible approach, whether directly or vicariously through the health professional.157 The benefits of this approach have already been canvassed. At first instance, it makes sense that the health professional or healthcare provider should bear liability because artificial intelligence systems are largely non-autonomous. The current prominent role of artificial intelligence systems as assistive tools rather than substitute health professionals undercuts the argument for such systems bearing the cost of their own harm. Health professionals and healthcare providers remain the parties with direct responsibility for patient care.

Inevitably, many of the concerns arising with attributing liability to artificial intelligence—like the difficulty of assessing the source of harm in black box systems—persist if the liability is attributed to the health professional or healthcare provider instead.158 While the author does not suggest that these issues disappear when seeking compensation from health professionals or healthcare providers, there are practical advantages in doing so. While liability can be dubious for artificial intelligence actions before even attempting to establish the elements, the foundation of liability for health professionals and healthcare providers is clear from the outset. The hurdles lie in establishing the cause of action with adjustments to the novel context of artificial intelligence.

The broad operation of the ACA to limit these actions also makes this question, at least in New Zealand, of limited importance beyond a small pool of plaintiffs. In other jurisdictions, the argument for artificial intelligence systems assuming liability may gain more traction, as compensation for patient harm is only recoverable through insurance or litigation.159 Attributing liability to

157 Hannah R Sullivan and Scott J Schweikart “Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?” (2019) 21 AMA J Ethics E160 at E161–E162; Benedict See, above n 120, at 428–430; Todd “Vicarious Liability”, above n 109, at 1370–1372 and 1376 and following; Boniface, above n 46, at 221–222; Darnley, above n 108, at [15]–[16]; and Cassidy, above n 108, at 360.
158 For a proposed approach to healthcare provider liability for the use of artificial intelligence systems, see Oleh Zaiarnyi “Directions for Improving the Legal Liability of Medical Organizations for Artificial Intelligence Systems Application” (2018) 37 Med Law 363 at 373–378.
159 John D Banja, Rolf Dieter Hollstein and Michael A Bruno “When Artificial Intelligence Models Surpass Physician Performance: Medical Malpractice Liability in an Era of Advanced Artificial Intelligence” (2022) 19 J Am Coll Radiol 816 at 819; Benedict See, above n 120, at 437–442; and Jessica S Allain “From Jeopardy! to Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems” (2013) 73 La L Rev 1049 at 1078–1079.


artificial intelligence systems may be the sensible option available to ensure patients receive compensation, which may include some insurance or funding scheme.160 In contrast, New Zealand’s statutory and cultural environments do not justify implementing such a scheme, as healthcare providers offer a proximate and dependable party to sue for compensation, and the rarity of such compensation also means healthcare providers are not unfairly prejudiced in this arrangement.

VI CONCLUSION

I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you. – HAL 9000, Artificially Intelligent Supercomputer161

Despite HAL 9000’s initial promise, strong artificial intelligence systems cannot guarantee their own foolproof or error-free operation—2001: A Space Odyssey concludes these machines will inevitably err because their human creators err. Artificial intelligence systems present a potent risk to society. Their immense capabilities give rise to unimaginable benefits and unimaginable harms.162 In healthcare, that risk is amplified by the ultimate importance of health, wellbeing and life. Nevertheless, the benefits cannot be ignored in the face of acute stressors burdening the health system. Artificial intelligence systems are becoming fundamentally important tools for responding to the biggest issues of our time, whether in healthcare specifically or at large.

The accident compensation scheme, tort, and other areas of common law and statute must respond appropriately and proactively to compensate patients where these systems cause personal injuries. This is best achieved if Parliament reforms the ACA to bring artificial intelligence-inflicted harm within the scope of the treatment injury provisions. Until then, the courts should take an expansive approach to the term “at the direction of” in s 32 to avoid claimants losing cover. Though this is currently possible because artificial intelligence is only supplementary to practice, this approach will become harder to justify as artificial intelligence systems become more autonomous. Where the ACA does not apply, healthcare providers and health professionals should continue to bear

  160. Erik Leander “2022 Medical Malpractice Insurance Rates: What the data tells us” (19 November 2021) Cunningham Group <www.cunninghamgroupins.com>.
  161. Kubrick, above n 1.
  162. Pedersen, above n 20.


the liability and cost of artificial intelligence harm. This is the most practical arrangement available. It ensures patients are compensated without the practical issues and legal risks inherent in novel attempts to sue artificial intelligence systems.

This approach allows the collective to continue bearing most of the cost of these harms, maintaining the long-standing legacy of the accident compensation scheme and ensuring its efficacy. In select circumstances outside of the ACA’s cover, the status quo of healthcare provider liability is sufficient and sound, notwithstanding the novel challenges presented by artificial intelligence to existing legal principles grounded in a pre-modern world. Parliament and the courts must take up the gauntlet and proactively legislate, regulate and adjudicate. Critically, their approaches must prioritise due compensation for patients suffering personal injury in healthcare and minimise the potential risks before they crystallise into liabilities.

