Introduction

The application of artificial intelligence ("AI") technology in the healthcare industry has undoubtedly revolutionised clinical practice. There is great optimism that AI applications can deliver substantial improvements in all areas of healthcare, from automating daily drudgery and routine medical tasks to performing diagnostic and treatment functions. The upsurge in the usage of AI and medical robots in healthcare settings has even prompted certain agencies to devise specific rules and guidelines to govern the technology. In April 2021, the European Union ("EU") put forth its "Proposal for a Regulation on a European approach for AI" with the intention of establishing human-centric AI rules and "the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally."1 Subsequently, in June 2021, in an unprecedented move, the World Health Organization ("WHO") issued six guiding principles to serve as the basis for AI regulation and governance in its member countries.2 These developments, together with the eruption of 21st century global health crises such as Ebola and COVID-19, indicate that there is strong merit in discussing the ways in which AI-powered healthcare products and services are regulated in Malaysia, which this article seeks to explore.

Healthcare AI in the Pandemic Age

The application of AI in the healthcare scene is typically practised as a partnership between human and machine, each bringing complementary strengths. In December 2019, BlueDot, a Canada-based digital-health company that uses an AI-driven algorithm to predict the spread of infectious diseases, alerted its clients to the detection of unusual pneumonia cases in Wuhan, China. Several days later, WHO issued its first public warnings about a virus that we now know as COVID-19. The global spread of the COVID-19 outbreak has since accelerated the usage of AI in healthcare settings,3 owing to the deadly nature of the virus, which necessitates faster diagnosis and treatment.

AI has also been utilised for drug research by analysing available data on COVID-19. The technology has been used to accelerate drug testing in real time, outpacing standard testing conducted by humans, which takes considerably longer.4 Further, AI has become a powerful tool for diagnostic test design in this pandemic age. For instance, in mid-2020, Huawei Malaysia collaborated with the Malaysian Ministry of Health to contribute its Huawei Cloud AI-assisted diagnosis solution to a COVID-19 hospital in Malaysia in order to provide faster COVID-19 diagnosis. AI has also played its role in developing vaccines and treatments at a much faster rate than usual and has proven helpful in clinical trials during vaccine development.5

As regards the development of pharmaceuticals, clinical trials sometimes take more than a decade and cost billions of dollars. Speeding this up and making it more cost-effective would have a massive effect on everyday medicine. For instance, Atomwise uses supercomputers to root out potential therapies from a database of molecular structures. The company launched a virtual search for existing medicines that could be redesigned to treat the Ebola virus, and its AI technology identified two drugs predicted to significantly reduce Ebola infectivity. This analysis, which typically would have taken months or years, was completed in less than one day.6

The global AI-associated healthcare market size was valued at USD 6.7 billion in 2020 and is expected to expand at a compound annual growth rate ("CAGR") of 41.8% from 2021 to 2028.7 As this trend seems promising, it is all the more important for AI companies to demonstrate the efficacy of their methods and the safety of AI with peer-reviewed research, whilst ensuring full compliance with the law.
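To put that growth rate in perspective, the short sketch below compounds the cited 2020 base value at the cited CAGR. The two input figures come from the report cited above; the year-by-year extrapolation is our own illustration, not the report's forecast.

```python
# Illustrative compounding of the figures cited above: a USD 6.7 billion
# market in 2020 growing at a 41.8% compound annual growth rate (CAGR).
# The compounding method, not the exact output, is the point of this sketch.
base_year, base_value_usd_bn, cagr = 2020, 6.7, 0.418

for year in range(2021, 2029):
    projected = base_value_usd_bn * (1 + cagr) ** (year - base_year)
    print(f"{year}: ~USD {projected:.1f} billion")
```

On these inputs, the compounding works out to roughly USD 110 billion by 2028, which illustrates why the market is described as expanding so rapidly.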

WHO Guiding Principles on AI

The growing use of AI technology for healthcare purposes has prompted WHO to issue formal guiding principles for the proper design and usage of AI in the healthcare industry. Twenty experts reportedly spent two years developing the guidance, which marks the first ever consensus report on AI regulation and governance in healthcare settings. Through a guidance document issued in June 2021, WHO outlines the following six guiding principles, with the aim of limiting the risks and maximising the opportunities intrinsic to the use of AI for health:8

  • Protecting human autonomy: In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection. ("Principle 1")
  • Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available. ("Principle 2")
  • Ensuring transparency, explainability, and intelligibility: Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used. ("Principle 3")
  • Fostering responsibility and accountability: Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms. ("Principle 4")
  • Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes. ("Principle 5")
  • Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems. ("Principle 6")

In the said document, WHO also stated that these six principles were devised in the hope that they would serve as the foundation for how governments, developers and regulators approach the technology.

Legal Considerations

As it stands, Malaysia has no single regulatory framework governing the usage of AI generally, or its usage in the healthcare industry specifically. Notwithstanding that, there are several legal provisions from a patchwork of existing legislation that may be applicable to AI-powered healthcare products and services.

Medical Legislation

In Malaysia, the regulation of medical devices is overseen by the Medical Device Authority through the provisions of the Medical Device Act 2012 (MDA 2012) and its subsidiary legislation. The Act defines "medical device" to include, among others, any instrument, apparatus, implement, machine, appliance, software, material or other similar or related article intended by the manufacturer to be used, alone or in combination, for human beings for the purpose of, among others, diagnosis, prevention, monitoring, treatment or alleviation of disease or injury, or investigation, replacement, modification or support of the anatomy or of a physiological process.9

The term "medical device" is defined broadly under the MDA 2012 and the Act does not appear to distinguish between traditional software, which uses code as the primary artifact and which is programmed to perform a specific task, and AI-based software or robots which uses data as the primary artifact and is typically programmed to learn to perform a specific task. Accordingly, it could be deduced that any AI-powered instrument, apparatus, implement, machine, appliance, software, and material that possesses the capability of diagnosing, preventing, monitoring, treating or alleviating a disease or injury or investigating, replacing or modifying or supporting the anatomy or a physiological process would likely be covered by the term "medical device" under the MDA 2012.

By extension, this means that various legal requirements under the MDA 2012, including the rules on classification,10 grouping,11 and registration12 of medical devices, must be followed. The Medical Device Authority is empowered under the MDA 2012 to license and approve any medical device. The MDA 2012 requires a medical device to be registered under the Act before it can be imported, exported or placed in the market.13 For that purpose, an application for the registration of a medical device must be made in accordance with the requirements under the said Act and in the manner prescribed in the Medical Device Regulations 2012. At this juncture, it is worth noting that manufacturers of medical devices must adhere to specific obligations imposed under the MDA 2012, namely to ensure that their medical device (i) conforms to the prescribed essential principles of safety and performance, (ii) is manufactured in accordance with good manufacturing practice and any written directive issued by the Medical Device Authority, and (iii) is labelled, packaged, and marked as prescribed by the legislation. These rules also indicate that Malaysia's current legislation stands in line with Principle 2 outlined by WHO, which demands that designers of AI technologies satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications.

Further, the Medicine Advertisements Board of Malaysia is empowered under the Medicines (Advertisement and Sale) Act 1956 to approve any advertisement relating to the sale of medicine, among others, including any online display. Marketers of AI-powered healthcare products and services should therefore take heed of the advertisement-related prohibitions under the Act, which dictate that no person shall take any part in the publication of any advertisement referring to any article, or articles of any description, in terms calculated to lead to the use of that article, or articles of that description, as a medicine, an appliance or a remedy: (i) for the purpose of prevention or treatment of the diseases and conditions of human beings specified in the Act; (ii) for the purpose of practising contraception among human beings; (iii) for the purpose of improving the condition or functioning of the human kidney or heart, or improving the sexual function or sexual performance of human beings; or (iv) for the purpose of diagnosis of a disease specified in the Act.14

In addition to the statutes mentioned above, other existing healthcare-related legislation may also apply to the AI-powered medical device in question, depending on the nature and type of the technology. Accordingly, compliance assessment of various parts of these AI-powered healthcare offerings will be necessary.

Protection of Health Data

AI-powered health technologies are mostly data-rich.15 Software such as Enlitic and Tempus, for instance, assesses unstructured datasets and synthesises disease predictions to provide personalised, patient-specific healthcare recommendations. From a human rights perspective, an individual should always control his or her personal data. Individuals' right to their own data is grounded in concepts that are related to, but distinct from, ownership, including control, agency, privacy, autonomy and human dignity. Control may include various approaches to individual consent, as well as collective mechanisms to ensure that data are used appropriately by third parties. This is perhaps why WHO places special emphasis on preserving data protection rights in Principle 1.

Data protection in Malaysia is primarily governed by the Personal Data Protection Act 2010 (PDPA 2010) and its subsidiary legislation. The Act is designed to safeguard personal data by requiring data users to comply with certain obligations and conferring certain rights on the data subject in relation to his personal data. In order for data to be considered personal data under the PDPA 2010: (i) the data must be information in respect of commercial transactions; (ii) the information must be processed wholly or partly by means of equipment operating automatically in response to instructions given for that purpose, be recorded with the intention that it should be wholly or partly processed by such equipment, or be recorded as part of a relevant filing system or with the intention that it should form part of a relevant filing system; and (iii) the information must relate directly or indirectly to a data subject who is identified or identifiable from that information or from other information in the possession of the data user.16

Health data is not specifically defined under the PDPA 2010, but such data would fall within the scope of sensitive personal data, which is defined to include information as to the "physical or mental health or condition of a data subject."17 At this juncture, it must be noted that the processing of sensitive personal data is specifically governed by section 40 of the PDPA 2010. The said section prescribes that sensitive personal data may only be processed in one of the following situations: (i) with the explicit consent of the data subject; (ii) where the sensitive personal data has been made public by the data subject; or (iii) where the processing satisfies certain statutory conditions set out in the PDPA 2010. The statutory conditions include situations where the processing is necessary for medical purposes and is undertaken by a healthcare professional, or by a person who owes a duty of confidentiality equivalent to that which would arise if that person were a healthcare professional.18
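For developers of data-rich health AI, these grounds translate naturally into an access-control check at the point of processing. The sketch below is a deliberate simplification under our own assumptions (hypothetical field and function names); a code-level gate is no substitute for a proper legal compliance assessment.

```python
# Hypothetical gate reflecting the section 40 PDPA 2010 grounds summarised
# above. Field names are illustrative; real compliance needs legal review.
from dataclasses import dataclass

@dataclass
class ProcessingRequest:
    explicit_consent: bool           # ground (i): explicit consent obtained
    made_public_by_subject: bool     # ground (ii): data publicised by the subject
    medical_purpose: bool            # ground (iii): processing for medical purposes...
    confidential_professional: bool  # ...by a healthcare professional or a person
                                     # owing an equivalent duty of confidentiality

def may_process_sensitive_data(req: ProcessingRequest) -> bool:
    # Any one ground suffices; ground (iii) requires both elements together.
    return (
        req.explicit_consent
        or req.made_public_by_subject
        or (req.medical_purpose and req.confidential_professional)
    )

# Example: processing by a hospital clinician for treatment, without separate
# explicit consent, would pass via the medical-purpose ground.
print(may_process_sensitive_data(ProcessingRequest(False, False, True, True)))
```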

It is worth mentioning that the PDPA 2010 does not define the terms "consent" or "explicit consent", nor does it specifically prescribe any minimum standard or formalities for the consent that needs to be acquired. However, data users are reminded that there is a requirement to keep a record of consents obtained from data subjects.19 Clearly, the consent requirement imposed by the PDPA 2010 on health data shows that our current legislation stands in line with Principle 1 outlined by WHO.

Consumer Protection Act

The principal law for consumer protection in Malaysia is the Consumer Protection Act 1999 (CPA 1999). However, this Act applies to goods and services that are offered or supplied to one or more consumers in trade20 and does not apply to healthcare services provided or to be provided by healthcare professionals or healthcare facilities.21 Under the CPA 1999, a "consumer" is defined as a person who acquires or uses goods or services of a kind ordinarily acquired for personal, domestic or household purpose, use or consumption, and who does not acquire or use the goods or services, or hold himself out as acquiring or using them, primarily for the purpose of resupplying them in trade, consuming them in the course of a manufacturing process or, in the case of goods, repairing or treating, in trade, other goods or fixtures on land.22

Even though the provision of healthcare services by healthcare professionals or healthcare facilities does not fall under the purview of the CPA 1999, the provisions of the Act may still apply to manufacturers of certain AI-powered health technologies that do not involve services provided by a healthcare professional or healthcare facility. In particular, wearable health technologies in the form of electronic devices that consumers can wear, such as wearable fitness trackers, health-tracking smart watches and wearable ECG monitors, are likely to fall within the remit of the CPA 1999.

Liability

This revolution in AI technology also merits assessment in the areas of clinical and product development and liability. Principle 4 serves as a reminder that stakeholders bear the responsibility of ensuring that AI technologies are used under appropriate conditions and by appropriately trained people, but because the application of AI is not without flaws, the question of who should be held responsible for any fault that leads to inaccuracy cannot be left unanswered. As it stands, clinical negligence law is arguably ill-fitted for a world where diagnoses and treatment decisions are made by AI.

Generally, the primary defendants in medical negligence claims are healthcare professionals, healthcare service providers and their employees. However, the usage of AI in healthcare settings may challenge this status quo and extend liability to various other parties. Such a scenario may arise where algorithmic inaccuracy within the AI system leads to suboptimal clinical decision-making and adverse patient outcomes, or where medical and surgical robots fail and cause malpractice incidents.23

Claims resulting from AI errors may differ significantly from typical clinical negligence claims, which at present mostly involve human medical judgments or medical mishaps. It is no secret that AI technology relies heavily on data. As AI algorithms are highly data-rich, often requiring millions of observations to reach acceptable performance levels,24 AI mistakes are more likely to involve errors in the utilisation and quality of the data fed into the software programs, algorithms and hardware setup of the AI system. Accordingly, the determination of liability is likely to depend heavily on the cause of the injury suffered and on whether that cause can be traced back to the use of the AI. If the data introduced into the AI system are erroneous or inaccurate, the software will produce a mistaken diagnosis or treatment which, if not detected by the physician using the technology, will likely result in damage to the patient's health or even death. If the damage done can be traced back to the AI and its manufacturer's fault, then liability will likely fall on the AI manufacturer or supplier. Even where robotics and AI only complement a human professional, there must still be opportunities for plaintiffs and courts to discover whether the technology's developers and vendors acted reasonably.
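To make the data-quality point concrete, the sketch below trains the same toy model twice, once on clean labels and once with a record mislabelled at data entry. Everything here is our own illustrative assumption (the toy readings and labels); it simply shows how a data error, rather than a coding error, can shift an AI system's output for the same patient.

```python
# Illustrative only: how erroneous training data, not faulty code, can change
# what an AI system predicts for the very same patient input.
from sklearn.linear_model import LogisticRegression

readings = [[36.5], [37.0], [37.5], [38.0], [38.5], [39.0]]  # toy temperatures
clean_labels = [0, 0, 0, 1, 1, 1]   # labels as the clinician intended
dirty_labels = [0, 0, 0, 0, 1, 1]   # one record (38.0) mislabelled at data entry

clean_model = LogisticRegression().fit(readings, clean_labels)
dirty_model = LogisticRegression().fit(readings, dirty_labels)

borderline_patient = [[37.9]]
print(clean_model.predict(borderline_patient))  # output from well-curated data
print(dirty_model.predict(borderline_patient))  # likely differs: the bad label
                                                # shifts the learned boundary
```

In a negligence claim, tracing an outcome like the second prediction back to corrupted input data, rather than to the physician's judgment, is precisely the exercise described above.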

The liable party would likely be determined by the AI's capacity to function autonomously or semi-autonomously. Given the intricacies of AI, such as the coding language and the numerous formulaic algorithms on which it is based, courts may well find it unjust to hold a healthcare professional accountable for an AI system whose errors are beyond the professional's control or rectification. Although, as previously mentioned, the CPA 1999 does not apply to healthcare services provided by healthcare professionals or healthcare facilities, tortious liability in negligence may still extend to the AI manufacturer or supplier should the facts support a claim.

As regards the standard of care for diagnosis or treatment in Malaysia, the test expounded in Bolam v Friern Hospital Management Committee25 would apply. Under the test, whether there has been a breach of the standard of care of medical practitioners will generally be determined by reference to the standards of such medical practitioners.26 It can be seen that the law currently assumes that doctors, not robots or machines, make choices. Because humans are error-prone and their thinking may be affected by cognitive bias, doctors are expected to make mistakes.27 Thus, the Bolam test distinguishes between decisions that are reasonable, even if incorrect, and those that no rational doctor would make. Whilst the former can be excused, the latter counts as negligence. Accordingly, one could argue that AI-based decisions do not fit this Bolam paradigm, and it remains to be seen how the Malaysian courts would adjudicate cases in this area.

Further, the Bolam test also assumes that medical questions can have multiple valid answers. As a result, practice may differ between responsible doctors who hold opposing viewpoints. Where that happens, the courts do not adjudicate between the different approaches, provided each withstands logical analysis. But should the law not demand a higher standard if AI has the capacity to make better decisions than human doctors, given its access to vast amounts of data and its freedom from cognitive bias?28 Again, the Bolam test is arguably inadequate in this situation.

Ultimately, the issue of legal liability will be resolved on a case-by-case basis. The courts will likely consider the extent of the healthcare professionals' involvement in the AI technology's decision-making process. If there is a demonstrable technological flaw, blame will likely fall on the AI supplier; if there is a component of human error, however, blame may need to be apportioned accordingly. Regardless, claimants seeking compensation are likely to target first the healthcare professionals and healthcare providers who use such technology, even if liability may ultimately be transferred to, or shared with, other parties.

Conclusion

Hippocrates once stated, "healing is a matter of time, but it is sometimes also a matter of opportunity." The opportunity, and the rush, to implement AI in various aspects of healthcare wherever possible is certainly welcome, particularly following the upsurge in COVID-19 cases. Even though Malaysia has yet to devise an AI-specific regulatory framework, the existing legislation that may be applicable to this subject matter appears to stand in line with the guiding principles outlined by WHO. As regards the issue of liability, our current legal framework presumes that human doctors make decisions and allows for the possibility of human error. In a world where robotic surgery has been rapidly adopted by hospitals across the globe, and where the data programmers, algorithm designers and software designers behind AI technologies also affect clinical decisions and outcomes, this standard may need to be reshaped. Finally, given the patchwork nature of the laws regulating AI in Malaysia, businesses seeking to bring AI-powered healthcare products and services into the Malaysian market are advised to seek professional legal guidance to avoid falling foul of the law.

Footnotes

1 "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts."(The European Commission, (21st April 2021).

2 "Ethics and Governance of Artificial Intelligence for Health" (WHO, 28th June 2021).

3 Danai Khemasuwan et al. "Applications and challenges of AI-based algorithms in the COVID-19 pandemic" (2021).

4 Biswas K et al. "Space-time dependence of coronavirus (COVID-19) outbreak" (6th March 2020).

5 Raju Vaishya et al. "Artificial Intelligence (AI) applications for COVID-19 pandemic." (2020).

6 "Atomwise finds first evidence towards new Ebola treatments." (Atomwise, 24th March 2015).

7 "Artificial Intelligence in Healthcare Market Size, Share & Trends Report: Artificial Intelligence in Healthcare Market Size, Share & Trends Analysis Report By Component." (Grand View Research, May 2021).

8 WHO (n 2).

9 Section 2, Medical Device Act 2012.

10 Section 3, Medical Device Act 2012.

11 Second Schedule, Medical Device Regulations 2012.

12 Sections 5 and 6, Medical Device Act 2012.

13 Section 5(1), Medical Device Act 2012.

14 Section 3(1), Medicines (Advertisement and Sale) Act 1956.

15 Chaitanya Mamillapalli et al. "Use of Artificial Intelligence in the Screening and Treatment of Chronic Diseases" (2020).

16 Section 4, Personal Data Protection Act 2010.

17 Ibid.

18 Section 40, Personal Data Protection Act 2010.

19 Regulation 3(1), Personal Data Protection Regulations 2013.

20 Section 2(1), Consumer Protection Act 1999.

21 Section 2(2), Consumer Protection Act 1999.

22 Section 3, Consumer Protection Act 1999.

23 See WN Price, Sara Gerke and Glenn Cohen, "Potential liability for physicians using artificial intelligence" (4th October 2019).

24 Ziad Obermeyer and EJ Emanuel, "Predicting the Future - Big Data, Machine Learning, and Clinical Medicine" (29th September 2016).

25 [1957] 1 WLR 582.

26 As affirmed in Zulhasnimar Hasan Basri & Anor v Dr Kuppu Velumani P & Ors [2017] 8 CLJ 605. The Bolam test applies subject to the qualifications outlined in Bolitho v City & Hackney Health Authority [1997] 3 WLR 1151, which states that the body of medical opinion needs to withstand logical analysis. Please note that in relation to the duty of a medical practitioner to advise patients of risks associated with their treatment, the applicable test is the test laid down in Rogers v Whitaker [1992] HCA 58, as adopted by the Malaysian Federal Court in Foo Fio Na v Dr. Soo Fook Mun & Anor [2007] 1 MLJ 593.

27 See Paul Sankey, "AI Medical Diagnosis and Liability When Something Goes Wrong" (Enable Law, 24th February 2021).

28 Ibid.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.