Opinion
The Urgency of Regulation for AI

Mawla Robbi, Thursday, 20 July 2023

Artificial intelligence (AI) makes it easier for humans to create digital content and make decisions. However, we should never fully entrust our work to AI. Otherwise, we risk ending up like the lawyer who cited ChatGPT's "work," only to find it had supplied fictitious case citations.

The incident occurred in the United States (US), where a federal court was reviewing a personal injury lawsuit against the airline Avianca.

This case shows that AI is a double-edged sword. On one side, the technology can support human work, such as producing digital content including text, images, and video. On the other, its existence remains contentious, particularly with respect to law and social norms.

The Magic of AI

AI could be called the latest wonder of the world. How could it not be, when anyone can produce all kinds of writing, edit images, and process video without possessing those technical skills? One only needs to type specific prompts into an AI-integrated application, and the algorithm does the work and delivers what the user wants.

AI is also changing the global information-technology landscape, which for the past two decades has been dominated by search engines. According to Tech Jury data, by early 2023 some 97% of smartphone users were already using AI technology, and 40% of them accessed it through voice assistants.

AI's role has grown to the point that many people now use it not only to create digital products but also as an analytical tool for making decisions, including business decisions.

Unlike search engines, AI does not merely present a list of options; it can also process information scattered across the internet, sorting and synthesizing it to produce advice that reads as though it were written by a professional.

AI can also be far more efficient than a professional doing similar work. IBM, for instance, has stated that work done by AI is 30% more efficient than the same work done by humans.

How AI Works

Basically, AI works by taking large samples of data in real time. The data is then processed and analyzed by algorithms using an iterative approach, meaning that AI reads data based on repeated usage patterns.

Thus, the more data samples it ingests, the more accurate its output becomes. The more people use it, the more intelligent the AI grows: errors are minimized and accuracy keeps improving.

This way of working resembles the human brain, which likewise gathers information, processes it, and then draws conclusions and makes decisions.
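
To make this "more data, more accuracy" pattern concrete, below is a minimal sketch, not drawn from any system mentioned in this article, that assumes the scikit-learn Python library and a purely synthetic dataset. It trains a simple classifier on progressively larger samples and reports how its accuracy changes.

    # Minimal illustration (hypothetical): a simple model generally becomes more
    # accurate as it is trained on more data, echoing the iterative,
    # data-driven pattern described above.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic data standing in for "information scattered on the internet".
    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Train on progressively larger samples and evaluate on the same held-out set.
    for n in (100, 1_000, 8_000):
        model = LogisticRegression(max_iter=1_000)
        model.fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n:>5} samples -> test accuracy {acc:.3f}")

On a typical run, accuracy rises as the training sample grows, which is the simple intuition behind the claim that more usage and more data make AI "smarter," although production AI systems are vastly more complex than this toy example.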

With such a system, it is natural that AI can take over jobs that require human intelligence. In fact, AI can produce work that minimizes the errors likely to occur when the same tasks are done by people (human error).

In data analysis, for example, AI can readily select the relevant information and produce accurate conclusions in a short time, whereas a human doing the same work would take longer and would be far more likely to make mistakes at the data-processing stage.

Legal Aspects

Behind all this brilliance and sophistication, we must remain aware that AI still has a number of issues, because it is ultimately just a technology built to read human behavior in the digital sphere and use that information as fuel for its processing.

Keep in mind that technology is created to accomplish the goals set by its creator, and it lacks the constraints that serve as a frame of reference for humans, such as morality. As a result, AI cannot discern whether the data it analyzes is morally, normatively, or legally problematic.

There are several legal issues related to the use of AI. First, confidential data can be accessed by AI without the owner's consent, to the detriment of the owner of that information. This is why a class action lawsuit was filed against Microsoft, GitHub, and OpenAI.

The AI technology providers were accused of using confidential information without permission, violating intellectual property rights by failing to credit creators, and being opaque about how data was collected, all on a massive scale.

In various legal instruments, from the international level, such as the 1966 International Covenant on Civil and Political Rights (ICCPR), down to the corporate level, privacy is something to be protected: no party may access confidential information without the owner's consent.

Meanwhile, AI's indiscriminate data-sampling methods allow it to capture information that is confidential and should not be disclosed to the public (undisclosed information).

Second, AI may also violate the intellectual property rights attached to content or information. In creating works such as text, paintings, drawings, lyrics, and videos, AI may sample works that already exist and transform them into new works without crediting the original creators.

Third, AI can also trigger legal issues when the information it provides is inaccurate or outright wrong and is then used as the basis for legally binding decisions. This is what happened in the case described at the beginning of this article.

Using ChatGPT to construct the legal basis of a case being reviewed by a US federal court was a fatal misuse of AI, given that the US follows a common law system in which previous court decisions serve as precedent for deciding similar cases.

Fourth, AI also proved problematic when it was used in the 2016 US presidential election. Besides being used by candidates as a tool for smear and negative campaigns, AI is also considered to have delivered biased, non-neutral information, including manipulated claims about the day and location of the election.

Now imagine that such biased information is routinely relied on by consumers in making decisions and later turns out to cause harm. The Consumer Protection Law could not be applied, because the element of a consumer relationship with a producer or service provider would not be fulfilled.

Of all the legal issues surrounding AI, the most important is that open-source AI platforms are designed to be accessible to everyone, and the AI learns from its users. Under this model, AI has no "absolute controller" who is responsible for what it does.

Broadly speaking, this nullifies the two essential elements of being a legal subject: the party responsible and the capacity to bear responsibility.

In both civil and criminal law, these two elements must be present to determine whether a legal subject is at fault and can be held responsible for an allegation. If they are not fulfilled, the defendant or the accused cannot be found guilty by the court.

If left unchecked, these issues could intensify the debate over AI's existence, even though it cannot be denied that AI makes human life more efficient.

However, for AI to play a positive role and for the risk of conflict to be reduced, a formula is needed to minimize every potentially harmful aspect: Regulation, Education, Standards, and Together (REST).

Regulatory Challenges

Even though Indonesia has not yet experienced AI-related legal disputes, our government needs to start preparing for and addressing the issue seriously.

Moreover, to date we have no legal umbrella that specifically and comprehensively regulates the use of AI. So far, Indonesia has only regulated AI in a limited way, as an electronic agent: a device within an electronic system created to perform actions on electronic information automatically (Pratidina, 2017).

Under Article 21 of the Electronic Information and Transactions Law, electronic agents are tied to the definition of electronic system providers, which must be legal subjects such as the state, individuals, legal entities, or communities.

The problem is that AI is an open-source platform, meaning that it is not operated by a specific legal subject. This is different from other digital platforms, such as marketplaces. For example, Bukalapak or Tokopedia are business entities that operate electronic systems.

Meanwhile, open-source artificial intelligence is not a company or an individual controlling a business unit that operates an electronic system. In other words, current rules do not actually reach AI technology.

Under the theory of law as a tool of social engineering put forward by Professor Mochtar Kusumaatmadja, the law should be "out in front." In other words, regulation should create a sound socio-technological climate so that stakeholders can follow the path the regulation lays out (Regulation).

One crucial role of regulation is to establish guidelines on how AI can be treated as a legal subject that can be held responsible for its "actions." Treating AI as a legal subject will make it easier to exercise control and enforce the law when violations occur.

The role of regulators is therefore vital: to protect the rights of the public while also fostering a favorable climate for the development of AI technology.

Using AI Wisely

In addition to regulation, large-scale public education about AI is also needed (Education), because the value of sophisticated technology depends on how wisely it is used.

A more knowledgeable society will use AI more wisely and responsibly, so that the legal issues described above are less likely to arise.

AI development must also rest on generally accepted ethical and legal standards (Standards). These can be applied by codifying the procedures and the algorithmic ethics used in AI development.

Finally, to face the legal challenges of AI in Indonesia, collaboration among the government, legal institutions, academia, and industry is essential. With good cooperation, an adequate legal framework can be built that supports the development of both society and technology and gives all stakeholders a sense of security (Together).

 



Disclaimer: This article is a personal opinion and does not reflect the policies of the institution where the author works.
