There’s no denying that artificial intelligence is currently the talk of the town, especially since AI tools have become accessible to the general public and not just tech experts. While playing with tools like ChatGPT or DALL-E can be entertaining, they are also being utilized in the business environment to enhance the productivity of everyday employees.
A survey conducted by Deloitte found that 94% of business leaders consider AI to be critical to success over the next five years [1]. At the same time, however, policymakers are raising questions about its ethical use and the need to regulate it. And that’s precisely what the AI Act aims to address.
What the AI Act is
The AI Act is a proposed regulation by the European Union that aims to establish a comprehensive legal framework for artificial intelligence systems. The act is designed to provide rules and guidelines for the development, deployment, and use of AI systems in the EU to ensure their safety, transparency, and ethical operation.
It’s worth noting that the AI Act is blazing a trail because, currently, no other country or region has an equivalent regulation. However, some countries have already introduced guidelines or recommendations related to AI development and use. For example, in the United States, there is the National AI Initiative Act of 2020 (NAIIA) [2], but it promotes AI development rather than limiting it. In addition, some states, such as California [3] and New York [4], have passed laws that regulate AI use in automated employment decision-making.
After the AI Act is introduced in the EU, other countries will likely enact similar regulations as well.
The reasoning behind AI regulation
Before we jump into the tools that tackle the proposed law, let’s look under the hood of the AI Act itself to figure out which areas require close attention. The EU is taking steps to regulate the AI market for three primary reasons:
- To ensure that AI systems are developed and then used responsibly, ethically, and transparently. For example, if you’re creating an AI recruitment tool, you have to make sure the algorithms are not biased against certain groups of people.
- To promote innovation and competitiveness in the industry. Once AI law is unified, investors get legal certainty on the solutions they put money in.
- To balance innovation and fundamental human rights in line with European values.
Why is it important for startups and investors?
Of course, we don’t know whether the AI Act will be passed in the form we know today. But if it is, it will significantly affect tech startups and their investors. Here are some things you should prepare your business for:
- Compliance costs: expenses on legal assessment and complying with the new regulations
- Limited market access: barriers to market entry or scaling, particularly for startups and small businesses
- Increased liability: greater responsibility which, for example, raises insurance costs
- Ethical considerations: the need to verify AI systems for bias and discrimination
How to prepare your startup for the EU AI Act?
Here are some steps startups and investors can take to prepare for the AI Act. We divided them based on the stage a startup is at.
Concept/ideation stage
At this point, you should confirm whether you should use AI technology in your product and to what extent. Then, you must validate your concept on legal and technical grounds to confirm it meets the EU AI regulation. If you’re an investor, it’s better to verify that before you put money into the startup to avoid problems later. To do so, consider the following measures:
- AI concept validation: the process of evaluating a concept/idea for an AI-based product or service to determine its feasibility and potential. It involves assessing its technical and commercial viability.
- AI concept assessment: the process of concept/idea evaluation that involves a detailed analysis of the technologies and data sources required. Concept assessment helps to identify any technical challenges at an early stage.
- Artificial Intelligence legal assessment: the process of evaluating the compliance of data processing activities with regulatory requirements of the Artificial Intelligence laws.
MVP stage
When your AI startup moves beyond the concept stage, you start exploring what an MVP should look like so you can prepare for its development. At this point, you should evaluate the solution’s purpose, tech choices, improvements, and the legal documentation required. The standard practices are as follows:
- Technical validation: the process of evaluating the technical feasibility of a startup’s product. It involves assessing whether the required technology is feasible, scalable, and meets the target group’s needs. Technical validation may involve developing prototypes or proofs of concept to test the product before you build an MVP.
- AI training validation: the process of evaluating an AI model’s performance and accuracy by testing it with a validation dataset. We do this to ensure the model performs accurately on new data it hasn’t been trained on.
- AI Act evaluation: the process of assessing the solution’s compliance with the requirements of the proposed EU AI Act
- Assessment report: a formal document that summarizes the findings of validation and assessment; it includes recommended corrective actions if necessary.
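The AI training validation practice above can be sketched with a simple holdout split: hold back part of the data, fit on the rest, and measure accuracy only on the held-out records. This is a minimal illustration in plain Python; the toy dataset, the 80/20 split, and the threshold “model” are hypothetical stand-ins, not part of the AI Act or any specific toolchain:

```python
import random

def train_validation_split(data, validation_ratio=0.2, seed=42):
    """Shuffle and split records into a training set and a held-out validation set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - validation_ratio))
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, dataset):
    """Fraction of records the model labels correctly."""
    correct = sum(1 for features, label in dataset if model(features) == label)
    return correct / len(dataset)

# Toy dataset: (feature, label) pairs where the true rule is "feature > 0.5"
data = [(x / 100, x / 100 > 0.5) for x in range(100)]
train, validation = train_validation_split(data)

# "Trained" model: a threshold classifier fitted on the training set only
threshold = sum(feature for feature, _ in train) / len(train)
model = lambda feature: feature > threshold

# The number that matters for validation is accuracy on unseen data
print(f"validation accuracy: {accuracy(model, validation):.2f}")
```

The key point is that the model never sees the validation records during fitting, so the reported accuracy approximates how it will behave on genuinely new data.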
Growth stage
Startups that are well along in developing their AI solution should fully review its ethical, legal, and technical aspects to comply with the AI Act. The best way to do this is to get a summarized report on how the solution functions from a technical standpoint, along with an assessment and recommendations for the actions and improvements needed to comply with the regulations. The elements of the evaluation include:
- AI stack review: evaluation of the technical architecture and components of the system. The examination typically applies to data collection and storage, data preprocessing, data protection, machine learning algorithms, model training, and system performance.
- Dataset and model revision: evaluation of the quality and relevance of the data used to train the AI system.
- Validity and reliability assessment: the process of evaluating the quality of data and measurements to ensure that they are accurate and meaningful.
- Bias and discrimination verification: the process of evaluating the system to identify potential sources of prejudice and discrimination.
- Legal and regulatory compliance assessment: the process of ensuring that the system complies with applicable laws and regulations.
- Report and recommendations: a document summarizing the findings and providing guidance on improving the AI system at the growth stage.
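Bias and discrimination verification can start with something as simple as comparing per-group selection rates in the system’s decisions. The sketch below is a hypothetical illustration: the groups and decision records are invented, and the 0.8 “four-fifths” threshold is borrowed from informal US hiring guidance, not from the AI Act itself:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of positive outcomes from (group, selected) decision records."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += selected
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions, privileged, unprivileged):
    """Ratio of the unprivileged group's selection rate to the privileged group's.
    The informal 'four-fifths rule' flags ratios below 0.8 for closer review."""
    rates = selection_rates(decisions)
    return rates[unprivileged] / rates[privileged]

# Hypothetical screening decisions: (group, was_selected)
decisions = (
    [("A", 1)] * 60 + [("A", 0)] * 40 +   # group A: 60% selected
    [("B", 1)] * 30 + [("B", 0)] * 70     # group B: 30% selected
)

ratio = disparate_impact_ratio(decisions, privileged="A", unprivileged="B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50, below 0.8
```

A ratio well below 0.8, as in this example, would not prove discrimination on its own, but it is exactly the kind of signal a bias verification step should surface for human review.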
A wrap-up
There’s no denying that interest in AI regulation is growing among policymakers, and startups and investors should stay informed and prepare to comply with any future laws that may be enacted. Although the AI Act hasn’t been passed yet, we can expect it to be introduced in one form or another in the near future.
Overall, the AI Act represents a general direction in AI regulation and will likely significantly impact the tech industry in Europe and beyond. Thus, businesses should take a proactive approach to the AI Act and make the necessary investments to comply with it early on. By doing so, they can mitigate risks in a rapidly evolving regulatory environment.
With that in mind, we offer extensive legal and tech compliance services to give you a hand. Go to AI Compliance Services to learn more.
Sources:
- State of AI in the Enterprise, 5th Edition, Deloitte
- National Artificial Intelligence Initiative, United States Patent and Trademark Office (USPTO)
- California Issues Regulations on Artificial Intelligence in Hiring, The National Law Review
- New York State Passes AI-in-Employment Legislation, Lexology
- The AI Act proposal