Ukraine rolls out its AI regulation roadmap
The roadmap is based on a bottom-up approach that suggests providing businesses with tools to prepare for future requirements before adopting any laws.
Ukraine’s Ministry of Digital Transformation presented its regulatory roadmap for artificial intelligence (AI) on Oct. 7. The document, published on the Ministry’s website, is aimed at helping local companies prepare for the adoption of a law analogous to the European Union’s AI Act. It also seeks to educate citizens on how to protect themselves from AI risks.
According to the announcement accompanying the roadmap, it follows a bottom-up approach that moves from less to more: businesses will be given tools to prepare for future requirements before any laws are adopted.
Related: Ukraine demands local crypto businesses provide financials
The roadmap allots the next two to three years for this preliminary period, during which companies can adapt to potential laws. As Deputy Minister of Digital Transformation Oleksandr Borniakov specifies:
“We plan to create a culture of business self-regulation in several ways. In particular, by signing voluntary codes of conduct that will testify to companies’ ethical use of AI. Another tool is a White Paper that will familiarise businesses with the approach, timing, and stages of regulatory implementation.”
According to the roadmap, a draft of the Ukrainian AI legislation is expected in 2024, but not before the EU’s AI Act comes into effect, so that the national law can take it into account.
In June, the EU AI Act passed the European Parliament. Once implemented, the act would prohibit certain types of artificial intelligence services and products outright while restricting others.
Among the technologies outright banned are biometric surveillance, social scoring systems, predictive policing, so-called “emotion recognition,” and untargeted facial recognition systems. Generative AI models, such as OpenAI’s ChatGPT and Google’s Bard, would be allowed to operate under the condition that their outputs be clearly labeled as AI-generated.