EU Artificial Intelligence Act – Future-proof safe AI development for 2023

We detail everything EHS teams need to know about the EU Artificial Intelligence Act.

August 24, 2022
4 min

What is the current AI regulatory landscape? 

Artificial intelligence (AI) is a diverse and transformative set of technologies evolving at a rapid pace. Still in its relative infancy, AI offers significant environment, health and safety (EHS) benefits, whether that is enhancing monitoring and regulatory compliance; reducing risk by potentially removing human error and improving the detection of faults; or optimising performance through better resource allocation. 

Although the UK is one of the global trailblazers in AI technology and policy, and published a national AI strategy in September 2021, it currently has no legislative framework regulating AI. In the US, another global leader in AI development, the National Institute of Standards and Technology (NIST) is working on a voluntary AI risk management framework as part of a wider federal response to the growth in AI products, services and systems.

However, it is the European Union that has arguably taken the boldest step so far with a proposed law on artificial intelligence that not only promises to be the first-ever legal framework to govern AI but also could potentially be adopted as a future global standard in the same way that the General Data Protection Regulation (GDPR) has been.

What is the EU Artificial Intelligence Act and what is its objective? 

Announced in April 2021, the proposal for an EU Artificial Intelligence Act has been driven by a realisation from European policymakers that AI development not only promises a wealth of benefits but also poses new risks for users. For this reason, mandatory safeguards are needed before AI systems can be placed on the EU market. 

Considering the speed of technological change taking place, the European Commission recognises that a balanced, flexible and proportionate risk-based regulatory approach is critical to stimulating and supporting the safe development of AI. 

Building on the European Commission’s February 2020 white paper on AI, the proposed regulatory framework specifically seeks to:

  1. ensure that AI systems placed on the EU market and used are safe and respect existing law on fundamental rights and EU values;
  2. ensure legal certainty to facilitate investment and innovation in AI;
  3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Not only does the proposed law pull together existing EU rules and regulations on AI under a single legislative umbrella, but it also spells out the regulation’s definition of an AI system.  

This is ‘software that is developed with one or more of the techniques and approaches listed in Annex I (machine learning; logic- and knowledge-based approaches; or statistical approaches), and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.’

The EU Artificial Intelligence Act outlines a ‘future-proof’ definition of AI that categorises the risks of specific applications into three levels: unacceptable risk, high risk and low or minimal risk. 

This is a significant move because it places limitations and obligations on businesses that supply AI systems into the EU and on businesses that use them, depending on the level of risk identified.   
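
As a rough illustration of how a business might map its own systems against these tiers, here is a minimal sketch in Python. The tier labels mirror the act's language, but the record fields, system names and the obligations helper are purely illustrative assumptions, not terms from the legislation.

```python
from dataclasses import dataclass
from enum import Enum

# The three risk tiers described in the proposal; the enum and field names
# below are illustrative assumptions, not terms defined in the act itself.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"      # banned from the EU market outright
    HIGH = "high risk"                      # strict requirements and obligations
    LOW_OR_MINIMAL = "low or minimal risk"  # voluntary codes of conduct encouraged

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    tier: RiskTier

def obligations(record: AISystemRecord) -> str:
    """Map a system's risk tier to the broad compliance posture it implies."""
    if record.tier is RiskTier.UNACCEPTABLE:
        return f"{record.name}: may not be placed on the EU market."
    if record.tier is RiskTier.HIGH:
        return f"{record.name}: conformity checks, documentation and human oversight required."
    return f"{record.name}: lighter obligations; a voluntary code of conduct is encouraged."

# Example: a hypothetical camera-based safety monitoring system.
print(obligations(AISystemRecord(
    name="site-camera-monitor",
    intended_purpose="detect unsafe behaviour on a factory floor",
    tier=RiskTier.HIGH,
)))
```

Keeping an inventory like this makes it easier to see at a glance which systems will attract the heaviest obligations once the act applies.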

What we know is that the proposed legislation will ban harmful AI practices that pose a clear threat to safety; these systems will not be allowed to be placed on the market, put into service or used in the EU. 

The legislation also details what are defined as high-risk AI systems. These are classified according to their intended purpose and whether they fall under existing product safety legislation. Eight specific areas are identified, including the management and operation of critical infrastructure, which has a direct bearing on EHS. 

Once the EU Artificial Intelligence Act comes into force, businesses that develop or use AI applications defined as posing a ‘high risk’ to safety will have to comply with specific requirements and obligations, and there are compliance costs to factor in as well.  

How can businesses best prepare so they can comply with its requirements? 

Taking into account the amendments to the proposed law that will be put forward by the Council of Ministers and the European Parliament, as well as the differing views among EU member states on how to regulate AI, most experts do not believe the Artificial Intelligence Act will be adopted before 2023. Businesses will then have a transitional period of around two years to ensure they can comply with its requirements before the act takes effect. 

So how might the EU Artificial Intelligence Act affect your business’s operations, and how can you best prepare when the legislation’s final wording will only emerge at a later date? 

  • Factor in the costs of complying with high-risk application requirements

It is important that developers and users of AI systems set aside resources, as there will be costs associated with placing high-risk applications on the EU market and using them, as well as with requirements such as human oversight. 

Businesses that develop or use AI applications that are not classified as high risk will face lower costs. They are encouraged to work with others and adopt a code of conduct, agreeing to follow suitable requirements and to ensure their AI systems are trustworthy.

  • Make sure you can verify that a high-risk AI system complies 

To ensure that risks to health, safety and fundamental rights are effectively mitigated, the legislation requires detailed information on how high-risk AI systems were developed and how they perform throughout their lifecycle. This means keeping detailed records so compliance can be assessed and making sure technical documentation is kept up to date.
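
One lightweight way to approach this record-keeping is sketched below, assuming a simple append-only log of lifecycle events; the file name, field names and event labels are illustrative assumptions, not a format prescribed by the act.

```python
import json
from datetime import datetime, timezone

# A minimal, illustrative audit trail: each lifecycle event for a high-risk
# system is appended as one JSON line, so its history can be reviewed later.
# The file name, field names and event labels are assumptions, not terms
# prescribed by the act.
def log_lifecycle_event(path: str, system: str, event: str, detail: dict) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,  # e.g. "training-data-updated", "model-retrained"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a retraining event so assessors can trace how the
# deployed model changed over its lifecycle.
log_lifecycle_event(
    "audit_trail.jsonl",
    system="site-camera-monitor",
    event="model-retrained",
    detail={"dataset_version": "2022-08", "validation_accuracy": 0.94},
)
```

An append-only record like this is easy to keep alongside the technical documentation and gives assessors a tamper-evident view of how the system performed over time.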

  • Ensure high-risk AI systems have human oversight measures built-in

It’s important that systems are designed and developed so that humans can oversee their operation before they are placed on the market. The legislation highlights built-in operational constraints that cannot be overridden by the system itself. It also notes the system must remain responsive to a human operator, who needs to be competent, thoroughly trained and given the authority to carry out their role.  
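
As an illustration of what such a built-in constraint might look like in practice, here is a minimal human-in-the-loop sketch; the severity threshold and function names are assumptions made for illustration, not requirements taken from the legislation.

```python
from typing import Callable

# An illustrative human-in-the-loop gate: automated decisions above a severity
# threshold are held for operator confirmation instead of being acted on
# directly. The threshold and names are assumptions for illustration only.
SEVERITY_THRESHOLD = 0.8

def requires_operator_review(severity: float) -> bool:
    """A hard operational constraint the system itself cannot bypass."""
    return severity >= SEVERITY_THRESHOLD

def handle_detection(severity: float, operator_confirms: Callable[[float], bool]) -> str:
    if requires_operator_review(severity):
        # The trained, authorised operator makes the final call.
        return "action taken" if operator_confirms(severity) else "action vetoed"
    return "handled automatically"

# Example: a supervisor reviews any detection scored at 0.8 or above.
print(handle_detection(0.92, operator_confirms=lambda s: True))  # action taken
print(handle_detection(0.35, operator_confirms=lambda s: True))  # handled automatically
```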

  • Make sure you can manage any AI-related risks and avoid penalties

The legislation’s intention is to strengthen oversight of AI systems placed on the EU market, so it is essential that you are able to manage any risks effectively and can demonstrate that you have done so. Penalties will be designed to be ‘effective, proportionate and dissuasive’, and businesses can expect to face heavy fines for any violations once the act comes into force.

To learn more about how EU-compliant Protex AI uses camera software to detect risk before an accident occurs, encouraging businesses to embrace a proactive safety culture, chat to one of our product experts here 👈🏼