Regulating AI: Key Reasons and State of Affairs

The impact of Artificial Intelligence (AI) on our lives and the business world is nothing short of transformative. Since early 2023, the rise of Generative AI technologies such as ChatGPT has made significant headlines. The remarkable ease with which these AI models generate content has sparked both fascination and concern. Consequently, there has been a decisive push for AI regulation to address the potential risks and ethical implications. In this article, we explore the reasons behind the need for AI regulation, the current state of affairs, and recent developments in regulatory frameworks.

Why do we actually need AI regulation?

With notable business figures like Elon Musk, Steve Wozniak, and OpenAI CEO Sam Altman having already voiced their concerns over AI’s impact on our lives, some voices in the industry, particularly among AI aficionados, are asking what all the fuss is about. According to some, regulating AI would put up barriers to technological progress in general. However, the impact of AI is likely to be far more wide-ranging than that of any other major technology of the past. Some of the key areas of concern include:

1. Challenge to Data Privacy

The advent of AI has led to the collection and processing of vast amounts of personal and sensitive data. AI models often rely on extensive data sets to train and improve their performance. However, access to and utilisation of this data raise legitimate concerns about data privacy and security. To protect individuals' rights and ensure responsible data practices, regulations are necessary to establish clear guidelines for data collection, usage, and retention by AI systems.

2. Challenge of Bias in AI Models

AI models can inadvertently perpetuate biases present in the data used for training. This bias can, and often does, lead to skewed or plainly incorrect content generated by these models. To address this challenge, calls for transparency and accountability in AI models have grown louder. Regulations can require organisations to provide insights into the decision-making process of AI systems, including the sources of data used, the algorithms employed, and the methods of bias mitigation.

3. Challenge of Deception and Malicious Intent

Inadvertent errors aside, the opacity of many AI models raises concerns about their potential for deception and malicious manipulation. When users cannot fully comprehend how AI systems arrive at their decisions, it creates opportunities for manipulation or misuse. Instances of deepfakes, false information, and social media manipulation using these models abound online.

In some areas, such as elections, this intentionally deceptive AI-generated content can lead to disastrous consequences for society as a whole.

4. Ethical Issues

Generative AI, which can produce output that closely mimics human-created content, raises ethical considerations. For instance, AI-generated financial advice online may require proper disclosure to ensure individuals are aware that the recommendations are not provided by human experts. Transparency and clear communication about the involvement of AI in content generation are essential to maintain trust, protect consumers, and ensure responsible use of AI technologies.

What is being done currently in the area of AI regulation?

While there’s been much talk about AI regulation, legislative actions in this domain are still being developed. In other words, as of today, no jurisdiction has passed a comprehensive AI act into law. The most advanced piece of legislation pertaining to AI regulation is currently being considered in the European Union (EU).

The legislation, called the EU AI Act, seeks to establish a comprehensive regulatory framework for AI across the EU member states. Last month, the European Parliament adopted its negotiating position on the Act, marking a significant milestone in the legislative process. The Act must still be negotiated with the Council of the EU before final adoption; as an EU regulation, it would then apply directly across member states without needing to be signed into law by individual jurisdictions. It is envisioned that the Act will be passed into law by the end of the year.

Some of the critical points in the EU AI Act include:

- A complete ban on the use of AI for biometric surveillance, emotion recognition, and predictive policing.

- Mandatory disclosure requirements for generative AI systems, such as ChatGPT, to indicate that content is AI-generated.

- Classification of AI systems used to influence voters in elections as high-risk, subjecting them to additional scrutiny and safeguards.

While passage of the Act is, naturally, not a guaranteed outcome, the EU’s track record with technologies that affect privacy, such as the General Data Protection Regulation (GDPR), suggests that this piece of legislation is highly likely to be adopted, probably around late 2023 to early 2024.

Outside the EU, discussions regarding AI regulation are ongoing in nearly all major jurisdictions. However, a comprehensive legislative act similar to the EU AI Act has not yet been adopted anywhere.


Regulating AI is a critical priority in light of its pervasive impact on society and business. The need for AI regulation stems from concerns related to data privacy, bias, deception, and ethical considerations, among other things. While the EU is moving in this direction with its AI Act and other countries are considering their own AI regulations, there is a legitimate concern that the pace at which regulators and governments move does not match the pace at which AI technology advances.