Artificial Intelligence (AI) is intelligence demonstrated by machines. Today AI is widely adopted by organisations ranging from start-ups to large enterprises, in both the private and public sectors. A key group of players in the market consists of cloud providers, who have rapidly integrated AI capabilities into their offerings. Use cases for AI include marketing, profiling, efficiency, optimisation, and problem detection.
The primary concerns around unfettered use of AI and profiling centre on breaches of human rights and discriminatory treatment of people based on gender, age, culture, religion, health, and other protected characteristics. Because of these risks, the EU has begun a regulatory process that aims to balance human rights with commercial needs, with the intention of establishing a common regulatory framework for AI in the EU. It follows preparatory work that began with a 52-member high-level expert group in 2018, which drew on more than 500 stakeholder submissions and included a public consultation on the trustworthiness and negative aspects of the development and use of AI.
The AI Act Proposal
The proposal classifies AI systems into three categories, derived from a risk-based analysis.
- Low-risk AI systems, where the application of the technology is considered harmless and entirely lawful in a business context, such as some marketing profiling systems or systems intended to analyse large-scale data more effectively for optimisation or related business objectives.
- High-risk AI systems, which meet both of the following conditions:
- the AI system is intended to be used as a safety component of a product or is itself a product.
- the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment.
- Last are prohibited AI practices, which have a significant impact on individuals, such as the use of subliminal techniques to distort a person's behaviour beyond their consciousness, systems that evaluate the trustworthiness of individuals in ways that can lead to discriminatory treatment, and real-time biometric identification systems. There are a limited number of exceptions where such systems may be used.
Note: a fourth category, which is out of scope of this new law, is the use of AI systems for policing, intelligence gathering, and military applications.
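The three-tier classification above can be sketched as a simple decision rule. This is purely illustrative: the tier names, function signature, and example triggers are our own assumptions, not definitions from the Act. Note in particular that high risk requires both conditions to hold.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the proposal's categories."""
    LOW = "low"                # e.g. marketing profiling, business optimisation
    HIGH = "high"              # safety components subject to conformity assessment
    PROHIBITED = "prohibited"  # e.g. subliminal manipulation techniques


def classify(is_safety_component: bool,
             needs_third_party_assessment: bool,
             uses_prohibited_practice: bool) -> RiskTier:
    """Toy triage: prohibited practices dominate; high risk requires
    BOTH high-risk conditions; everything else defaults to low."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_safety_component and needs_third_party_assessment:
        return RiskTier.HIGH
    return RiskTier.LOW
```

For example, a system that is a safety component but is not required to undergo third-party conformity assessment would fall outside the high-risk tier under this sketch: `classify(True, False, False)` returns `RiskTier.LOW`.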
Key Takeaways
Any system that creates a high risk to the health, safety, or fundamental rights of natural persons must comply with design and development requirements. The core of the regulation is a comprehensive risk management system: a continuous process that runs through the entire lifecycle of a high-risk AI system. It reinforces best practices for data consumption and AI projects:
- Data quality, data governance, and data management
- A risk management framework and continuous monitoring
- Documentation, record keeping, and traceability
- Transparency and provision of guidance to users and relevant bodies
- Human in the loop
- Robustness, accuracy, resilience, and security
Our view is that this act will force a focus on the risks associated with the use of AI, not only within organisations but also across the population of users, consumers, and citizens. We expect that recipients of the output of AI systems will demand to know more about how AI is used and will expect ethical, fair treatment from organisations. This proposal makes business sense: the most valuable business commodity is trust. To sustain trust, organisations should treat the creation of their own “AI codes of conduct”, enabling compliance with good AI behaviour, as a strategic business decision to compete, continue serving their communities, and grow.
If you are interested in learning more about our offer, or would like to have a conversation with one of our experts, please send an email to email@example.com with “AI ACT Proposal” as the subject, and you will be contacted promptly.