Around the world, AI-based technological solutions are being introduced and adopted at an accelerating pace across various sectors of the economy and public life. The use of these new technologies raises many legal issues that require regulation at the legislative level.
What about AI in Russia?
To date, the legislative base of the Russian Federation regulating relations in the field of AI consists of only two documents:
- Decree of the President of the Russian Federation of October 10, 2019 №490 “On the development of artificial intelligence in the Russian Federation”
- Federal Law №123-FZ of April 24, 2020 “On conducting an experiment to establish special regulation in order to create the necessary conditions for the development and implementation of Artificial Intelligence technologies in the constituent entity of the Russian Federation — the city with federal status — Moscow, and amendments to Articles 6 and 10 of the Federal Law “On Personal Data””.
The Federal Law establishes the legal meaning of the term Artificial Intelligence:
“a set of technological solutions that makes it possible to simulate human cognitive functions (including self-learning and search for solutions without a predetermined algorithm) and to obtain, when performing specific tasks, results comparable, at least, with the results of human intellectual activity.”
Law №123-FZ, which consists of 8 articles, grants companies included in a special register exemptions in the field of personal data processing. Such companies are not required to obtain the consent of personal data subjects, provided the data are depersonalized — for example, in an access control system based on face recognition. To be included in the register, a company must meet fairly simple criteria: the legal entity must be registered in Moscow, its declared type of activity must be AI development, and its director must have no criminal record.
However, the implementation of AI technologies, both in Russia and abroad, raises far more legal issues than the processing of personal data alone.
In my opinion, the main legal issue is whether AI can be recognized as a subject of law. In other words: who is to blame if there is an incident involving AI?
Is Artificial Intelligence a thing, or is it a person (by analogy with a legal entity)? Answering this central question with legal certainty will clarify the derivative issues, above all responsibility for harm caused by the use of Artificial Intelligence.
For example, when Artificial Intelligence is used in medicine, the question arises of who bears responsibility for the decision to use AI in diagnostics or treatment if the system makes a mistake.
It should also be noted that the legal status of an autonomous system with elements of Artificial Intelligence (a “smart” electric car) and an autonomous object with full-fledged artificial intelligence (a cyber-robot) cannot be the same. Likewise, different legal regulations should apply to the use of AI in the civilian sphere and, for example, in the military or space industries.
What about AI in other countries?
As in Russia, the national legislation of other countries is still in its infancy. In 2008, South Korea passed a Law on the Promotion and Distribution of Smart Robots, which defines a “smart robot” as “a mechanical device that independently perceives the external environment, recognizes the circumstances in which it operates, and moves independently.” In more than 30 countries, including the UAE, China, France, Sweden, Mexico, Singapore, Japan and Germany, the legal framework for AI is limited to a development strategy.
At the same time, most of these countries have a strategic focus on becoming leaders in AI technology. In the United States, the development of AI technology is a national priority; the founding document is the National Strategic Plan for Research and Development in AI. In March 2018, the country established an interim National Security Commission on AI and created an interagency structure to review progress in AI technology.
Of all the countries involved in the development of AI, the UK is currently considered the leader in ethical standards. In April 2018, the UK government released a strategy document on AI, which states that an AI Council will be established in collaboration with industry and academia. The Council’s functions are to guide the development of AI, oversee government policy in this area, encourage industry, and advise the government on AI issues.