Emerging AI Issues Affecting EU & UK Life Science Companies | Hogan Lovells

How the EU is leading the way in developing AI regulation

The panel began by noting that it has been a year since the European Commission (Commission) published its proposal for an EU regulatory framework on artificial intelligence (AI). The proposal, published in April 2021, represents the first cross-sectoral regulation of its kind, creating a comprehensive framework that will address difficult ethical issues such as bias and transparency, as well as the risks arising from automated decision-making. According to the panel moderator, Ms Ireland, a senior partner in Hogan Lovells' Intellectual Property, Media and Technology group, the AI legal landscape demands that we step outside the vacuum of our own sectors and work across silos.

Dan Whitehead, a lawyer in Hogan Lovells' Privacy and Cybersecurity practice, noted that in recent years the EU has focused significantly on digital regulation and that the proposed AI Act will have a profound impact on the governance of AI in healthcare. Mr Whitehead noted that the penalties under the proposed AI Act are even greater than those under the General Data Protection Regulation (GDPR) and could reach EUR 30 million or 6% of annual global turnover. He pointed out that the GDPR and other existing regulations (such as anti-discrimination and product safety laws) already indirectly address some of the key risks associated with AI, such as the risk of bias, performance inaccuracy (false positives or negatives), the risks to patient safety when AI is used in a healthcare setting, and the challenges of explaining complex technologies and their impact on decisions and actions in the real world. The new EU AI regulatory framework will go further in managing these risks, especially in the context of AI technology.

Bonella Ramsay, Senior Counsel in the firm's Global Regulatory practice, provided an overview of AI regulation in the context of medical devices and in vitro diagnostics (IVDs). She noted that the new EU Medical Devices Regulation (EU MDR) has applied since May 2021 and the Regulation on in vitro diagnostic medical devices (IVDR) since May 2022. Both are subject to transitional provisions, but in the context of software as a medical device, AI will typically be classified as at least a Class IIa medical device, possibly Class IIb or even Class III, making conformity assessment for CE marking more complex. However, the EU regulations do not expressly address AI as a medical device, raising questions as to how AI products will be treated under the current regulatory framework and the proposed AI Act.

Is the UK keeping pace with the EU?

Mr Whitehead pointed out that while the EU is leading the way, the UK also published a National AI Strategy last year, which contains ambitious plans for investment in and regulation of AI across all sectors. It remains to be seen how these plans will be implemented in practice.

Ms Ireland noted that in the context of IP, the UK is keeping pace and is already looking at ways in which IP laws can and/or should respond to the complexities presented by AI. In 2021, the UK Intellectual Property Office (UK IPO) opened a consultation asking, among other questions, whether inventions devised by AI should be protected by a patent and, if so, how. UK IPO guidance is expected shortly.

Practical next steps

Louise Crawford, a senior partner in Hogan Lovells' technology practice, provided an overview of how the current liability regime, which relies on a patchwork of laws on tort, product liability, discrimination, confidentiality and contracts, may not be sufficient to provide adequate remedies for those who suffer losses due to AI errors or faults. In this analysis, it is important to establish a causal link between the fault and the loss, which can be particularly difficult when several parties have been involved in the development and operation of a complex solution. This liability regime is currently under review by the European Commission, and proposals for significant changes are expected in the near future.

Reflecting on a 2020 European Commission White Paper, Ms Crawford noted that although thinking in this area is still in its infancy, the EU will likely take a two-pronged approach of 1) extending the current product liability regime to encompass digital products and 2) introducing a specific regime for AI operators that distinguishes between high-risk and low-risk systems and allocates liability accordingly. As regards legal reform in this area, the Commission's priorities will be 1) harmonisation between Member States; and 2) ensuring that the liability framework is robust enough to foster trust in AI technology and encourage continued development in this area.

Moving on to their "best advice" for clients seeking to manage AI-related liability risks, the panel emphasised that having the right governance framework in place will be critical. Whether the technology is developed in-house or sourced from a third-party vendor, businesses using AI will need a framework of policies and protocols covering cost/benefit analyses and mechanisms for evaluating AI risk factors. For companies that use vendors to deliver AI technology, it is also essential to perform due diligence on the vendor, the technology, and the data. Suppliers must be able to explain the effectiveness of their product as well as its risks, and be able to demonstrate how those risks are mitigated. Appropriate contractual obligations should also be in place to address liability risks associated with the technology.

The panel closed its session by advising stakeholders to pay close attention to developments in this rapidly changing space: significant investments in AI technology are taking place alongside an evolving regulatory environment. Ms Ramsay advised life science companies to stay abreast of the regulations that apply at every stage of designing and implementing an AI product. Mr Whitehead said he advises clients to take AI governance seriously and not simply tuck their AI compliance regime into another policy bucket. He also advised stakeholders to engage with regulators on policy proposals while it is still possible to do so: the AI Act is not yet in place, and the proposals could change significantly before they become law. Ms Ireland said people continue to be central, not only to the R&D process, but also to monitoring and developing assets such as intellectual property.