European Union representatives are reportedly negotiating a plan for additional regulations on the most powerful artificial intelligence (AI) systems in a far-reaching attempt to address the technology's potentially harmful effects.
According to a Bloomberg report, the European Parliament, the European Commission and EU member states are said to be in discussions about the potential effects of large language models (LLMs) such as Meta's Llama 2 and OpenAI's GPT-4, and about possible additional restrictions to be imposed on them beyond the upcoming AI Act.
According to the Bloomberg report, a source close to the matter said the main objective is not to overburden new startups with regulation but to keep the most prominent models in check, adding that any agreement among negotiators on the topic is still at a preliminary stage. The report highlighted that the proposed regulations for LLMs would take a similar approach to the EU's Digital Services Act (DSA). EU lawmakers recently implemented the DSA to ensure that platforms and websites meet standards for protecting user data and scanning for illegal activities. The law forces companies to rethink their policies on advertising, moderation and transparency.
Additionally, the DSA requires online platforms to provide more transparency about how their algorithms work, with the web's most prominent platforms subject to stricter controls. Online platforms with over 45 million monthly users are affected and must update their user numbers at least every six months. The EU has designated 19 platforms and search engines that fall into this category; a platform that has fewer than 45 million monthly users for an entire year will be removed from the list.
In August 2023, Google revealed plans to update some of its service policies to comply with the EU's DSA, highlighting an expansion of its Ads Transparency Center to increase visibility into content moderation and creation. In its blog post, Google also flagged trade-offs, "such as the risk of making it easier for bad actors to abuse our services and spread harmful misinformation by providing too much information about our enforcement approach." All the companies on the list had until August 28, 2023, to update their service practices to comply with the EU standards.
The EU's AI Act is considered one of the first sets of mandatory AI rules from a Western government. In August, in the wake of the recent boom in AI development, China implemented its own AI regulations, a joint effort between six government agencies; the rules, published a month earlier, were referred to as the "Generative AI Measures". The 24 guidelines include measures requiring platforms providing AI services to register them and undergo a security review before public release. The Chinese government also mandates labels for artificially created content and has banned AI-generated images of its president, Xi Jinping.
Additionally, China requires that all training data and foundation models come from legitimate sources. The country has been actively developing its AI scene, led by local tech giant Alibaba, and has been locked in a quiet standoff with the United States over high-performing AI systems and the chips that power them. Since China implemented its AI laws, reports state that more than 70 new AI models have been released.
During a joint meeting of the EU-US Trade and Technology Council in Sweden on May 31, 2023, EU tech chief Margrethe Vestager emphasised that the EU and US should push the AI industry to adopt a voluntary code of conduct to create safeguards while new laws are being developed. Vestager said, "If the EU and US take lead, they can create a code of conduct that would make everyone more comfortable with the trajectory of AI development. We need to act now." She added, "That is the kind of speed you need to discuss in the coming weeks, a few months, and of course involve industry in order for society to trust what is ongoing."
Ukraine has also just rolled out an AI regulation roadmap. Published by Ukraine's Ministry of Digital Transformation, the roadmap is designed to provide a clear framework for AI development, business growth and the protection of human rights, preparing companies for future requirements before any laws are adopted.
Additionally, the roadmap aims to educate citizens on protecting themselves from AI risks while helping businesses prepare for a law similar to the European Union's AI Act and addressing the needs and concerns of various stakeholders in the AI ecosystem. A draft of Ukraine's AI legislation is expected in 2024, but not before the EU's AI Act is finalised, so that the national law can take it into account. Once the EU AI Act is implemented, certain AI services and products will be prohibited, while others will be limited or restricted. Members of the European Parliament have agreed on banning the use of facial recognition in public spaces. Generative AI models, such as OpenAI's ChatGPT and Google's Bard, would be allowed to operate provided their outputs are labelled as AI-generated.
The legislation has yet to be enacted, and member states can still disagree with any of the proposals set forth by Parliament. For now, the industry watches and waits as European Union representatives deliberate additional regulations for the most prominent artificial intelligence systems.
Hannah Parker is a technology writer specialising in artificial intelligence regulations and digital policy developments.