Deputy Head of the Ministry of Economic Development: global experience will help Russia implement artificial intelligence

As technology develops, artificial intelligence (AI) is becoming ever more firmly embedded in many areas of society, from healthcare to transport. In Russia, the use and implementation of artificial intelligence is not yet well regulated by law, so there are barriers to the technology's penetration into everyday processes. The authorities are currently finalizing regulations so that these barriers can be removed over time. Deputy Head of the Ministry of Economic Development Oksana Tarasenko told TASS in an interview which legal conflicts will have to be overcome on the way to introducing artificial intelligence into the lives of Russians, what hinders the development of AI in Russia, and which countries' experience the ministry considers the most successful.

— Oksana Valeryevna, artificial intelligence already exists, but there are no laws about it, right?

— So far, there is only the national strategy for the development of AI until 2030, approved by the President of Russia, and the recently adopted Concept for regulating AI and robotics technologies. Right now we are working on a schedule of regulatory acts for the development of AI for the period up to 2024.

It contains about 80 legal acts, all divided into groups. The first category includes general draft laws related to personal data, regulatory "sandboxes", civil liability, and so on. The second concerns the effective implementation of AI in those industries where regulatory barriers currently exist. We plan to finalize the schedule soon and approve it at the end of this year or early next year.

— What issues do you consider important to address first? Does the lack of regulation really hinder anything? Doesn't it give you more freedom?

— No, it does not. Take, for example, the regulation of legal relations in the field of personal data. Today's standards are somewhat outdated and often fall short of best global practices. We need to regulate this sensitive area so as to maintain a balance between protecting the interests of individuals and developing the technology.

In particular, we want to create a mechanism for anonymizing data so that it is impossible to determine the person to whom the data belongs, even when data sets are compared.

That's what we're working on now.

Another area is developing the law on so-called regulatory sandboxes, adopted in July. We are now actively working on related legal acts that will detail regulation in specific areas, in particular industry legislation affected by the introduction of AI technologies.

— What else hinders the development of AI in Russia?

— There are difficulties associated with the low efficiency of integrating AI systems with existing information systems, problems in assessing the economic effect of introducing AI systems, and risks of artificial monopolization of certain intelligent technologies. We are now developing standards that will support the "Artificial Intelligence" area by removing such regulatory and technical barriers.

— In this regard, is Russia going its own way or is it borrowing the experience of other countries?

— Of course, we are also guided by the experience of our foreign colleagues, analyzing both their positive and negative results.

In China, for example, experiments are already underway with algorithms and business models that use AI. But, as experience shows, a national strategy for the development of artificial intelligence is only a catalyst for social relations; the actual development, widespread implementation, and use of AI is still ahead. The Chinese plan for the development of AI technologies only notes the need to develop laws and ethical standards for this technology and provides for the creation of dedicated AI legislation by 2024.

India's 2018 national strategy emphasizes that artificial intelligence is a "black box", that AI should be controlled and its development should be responsible, and it considers regulation in the context of ethics and security issues. Only in 2019 did another Indian strategic planning document include a separate section noting the need to adjust data legislation, antitrust and consumer-protection legislation, and individual industry legal acts.

— Which model of regulation is closer to us?

— For us, of course, the closest is the experience of our European colleagues.

The White Paper on Artificial Intelligence, adopted by the European Commission in 2020, can safely be called the only analogue of the Russian concept.

It sets out the rules for using artificial intelligence.

In particular, it states that when implementing AI in high-risk areas such as health and transport, artificial intelligence systems should be transparent so that they can be subject to human oversight.

In parallel with the European Commission, very important work is being carried out by CAHAI, which is responsible for developing a legal framework for artificial intelligence technologies, taking into account European standards. This work includes developing a generally accepted definition of AI, mapping the risks and opportunities associated with AI, especially with regard to its impact on human rights, the rule of law, and democracy, and assessing the possibility of moving toward a legally binding framework.
