
THE REGULATION OF AI AND ITS RISKS

The rapid arrival and consolidation of AI concerns governments around the world, which are looking for ways to regulate this type of technology

Artificial intelligence (AI) has advanced rapidly in recent years, especially this year with the emergence of ChatGPT. Its application across many areas of society has led to growing concern about the risks associated with this technology. Governments around the world are recognizing the need to regulate the development and use of AI in order to secure its benefits and minimize its potential dangers. But how is this situation being addressed?

ChatGPT, the leading example of AI

ChatGPT, the language model developed by OpenAI, has become the most popular tool for interacting naturally with AI, though Microsoft’s Bing and Google’s Bard also stand out. All these developments have raised concerns about misinformation, privacy and bias.

In response to these risks, governments and regulatory bodies have begun to implement measures to control its use. Italy, in fact, temporarily banned the chatbot for failing to comply with data protection law, and other countries, such as Germany and Ireland, have considered following its lead. In France, the Commission Nationale de l’Informatique et des Libertés (CNIL) reported that it was investigating several complaints about ChatGPT, although it did not detail them. The Cyberspace Administration of China, for its part, drafted measures that generative AI services must follow, including rules on the type of content these products can generate. The White House, through the National Science Foundation, plans to invest an additional $140 million to create seven new National AI Research Institutes that promote the public good. In addition, Vice President Kamala Harris met with the heads of Google, Microsoft, OpenAI and Anthropic, a startup specializing in AI, to discuss the new technological reality.

One of the most common approaches has been to establish ethical and legal standards for AI. These regulations focus on transparency in decision-making, the explainability of algorithms, and liability in case of errors or harm. In addition, specific requirements are being established to safeguard user privacy and prevent the misuse of personal data.

For this very reason, the European Union (EU) is working toward a law that will subject the use of AI to specific regulation. After months of debate, the lawmakers drafting the regulation adopted the negotiating mandate last Thursday, May 11, for the European Parliament to vote on in mid-June. What obligations will the popular chatbots have to meet? They will have to comply with additional transparency requirements: large companies will have to disclose whether they use copyrighted material, clearly label content generated with AI, and design these systems so that they do not generate illegal content. The approved mandate thus aims to ensure that AI systems are safe, transparent, traceable, non-discriminatory and subject to human oversight. In turn, the idea is for European law to prohibit uses of AI that pose an intolerable risk, such as biometric surveillance, facial recognition, predictive policing systems, intentional manipulation, or the categorization of people according to their social behavior, socioeconomic status or personal characteristics.

Is this control really necessary?

The risks associated with AI go beyond ChatGPT. The central concern lies in its potential to make biased or discriminatory decisions: biases inherent in the training data can carry over into AI systems, leading to unfair outcomes. To address this problem, governments are demanding greater transparency in AI algorithms and conducting ethical impact assessments before automated systems are deployed.

In addition, there are concerns that AI could be used for malicious purposes, such as developing autonomous weapons or manipulating public opinion. To counter these risks, some governments are exploring stricter legal limits and regulations on AI, especially in sensitive areas such as national security and democracy.

In this context, Europe and other jurisdictions want to oversee the inputs AI systems work with: the identity and data of the users who interact with them, how the questions and answers they process are handled, as well as their environmental impact and possible sanctions for harmful uses. The collection and processing of this data may have legal implications for copyright, industrial property, privacy and data protection.

In the end, we find ourselves at a delicate moment, exploring unknown terrain without laws to establish order and prevent disasters: an area full of opportunities for progress, but also of risks to combat and mitigate. The EU seems positioned to lead the way toward more reliable and secure AI, but in this uncertain landscape (where it is still unclear whether regulation should be regional or global), the only sure thing is that a tool has arrived that is changing everything, and it is our responsibility to try to measure and steer the impact it has, and will have, on the reality we know.

Listen to our AI podcast episode

You already know what the AI hurricane is doing to governments, but how is this tool affecting startups? Don’t miss the episode of our podcast, created by and for entrepreneurs, in which we talk about exactly that. Sergi Vila, co-founder and CEO of Bcombinator, talks with Miquel Mora, co-founder of Bcombinator and an expert in agile methodologies and business idea validation, about the new role AI plays in startups and how it is revolutionizing business and society in general. If you want to stay on the cutting edge of innovation and understand how AI is changing the business game, give us a listen. The episode “How AI will change the startup world” is available on our YouTube channel and Spotify.