How the EU's AI Act will be a landmark legislative line in the sand for AI in the EU

Authors:

William O’Gorman

AI hub officer, Ulysseus
Haaga-Helia ammattikorkeakoulu

Published: 16.06.2023

Just two years ago, in April 2021, the European Commission published a proposal for “Laying down harmonized rules on Artificial Intelligence and amending certain union legislative acts” (European Parliament). In the same year the Ulysseus Artificial Intelligence Innovation Hub was founded at Haaga-Helia, and few could have predicted the rise and growth of AI between then and now.

Keeping AI in check

In an effort to keep pace with this unprecedented growth in technology, the EU has issued a new draft of the legislation to keep the growth and use of AI in check. This will have a direct impact on how AI is developed and used in business and in higher education practices.

It is no stretch of the imagination to say that there has been an explosion of AI in the short time since this act was penned: terms such as generative foundation models and large language models are not mentioned at all, the only related term being “chatbot”. Did these terms not yet exist when the act was drafted, or has everything developed so rapidly that the Commission did not consider them relevant? Whatever the case, as with everything dealing with AI, acts and policies seemingly become outdated faster than new applications can be discovered.

World’s first rules on AI

This new amendment to the above act, “Laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative acts” (EUR-Lex), zeroes in on the EU’s desire to keep budding AI services under control for the protection of the public, and it will directly influence how AI can be used in higher education. Universities and other higher education institutions will need to carefully consider the implications of the AI Act and ensure compliance with its provisions.

The host of amendments that will enforce the upcoming AI Act will be the world’s first rules (Sciencebusiness.net) on artificial intelligence, and these rules will directly shape whether AI systems used in education are safe, transparent and free from bias, and whether they protect the privacy and data rights of students and faculty.

The upcoming law will assign AI applications to the following risk categories:

Risk based approach to AI – Prohibited AI practices

AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).

High-risk AI

The classification of high-risk areas is expanded to include harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters in political campaigns and recommender systems used by social media platforms.

General-purpose AI – transparency measures

Providers of foundation models – a new and fast-evolving development in the field of AI – would face obligations to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.

Future development of AI practices in higher education

How will this relate to the development of AI-based practice in higher education? Most relevantly, the act could require educational institutions to ensure that AI systems used in teaching are transparent, explainable and free from bias. How this will be achieved remains to be seen; however, it will be important to be prepared to adapt to these laws in the not-too-distant future. Guidelines already exist in the form of the “Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators” (European Commission), but these are just guidelines and do not put legislation in place to ensure compliance.

In addition, generative foundation models, such as GPT, would have to comply with additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

Human-centric by design

In order to meet this unprecedented expansion, the EU is taking a strong line to ensure that the development of AI in Europe is ethically advanced and human-centric in its design. New tools, practices and procedures will need to be developed and integrated into higher education, with the Ulysseus AI Innovation Hub strategically placed to address these challenges both for Haaga-Helia and through collaboration with our six Ulysseus European University partners and their innovation hubs.

Utilizing the ecosystem of six innovation hubs is a key means of developing Satellite Projects under Ulysseus: a Satellite Project consists of two or more Ulysseus partners focusing on a common theme and aiming to solve key EU challenges. As AI is seen as a transformative technology with the perceived ability to advance many of the EU’s key goals, especially those related to the Green Deal and the Twin Transition, it will be important to consider the legislative repercussions of this act in all project preparations. With the act planned to come into force in late 2023 (European Commission), staying up to date with its progress and possible impact will be essential.

The AI Act will be one of the most important pieces of legislation of its kind, aiming to make AI human-centric, trustworthy and safe. Higher education will be required to embrace these laws and adhere to the legislation, which will have a direct impact on how new teaching processes are developed utilising this rapidly advancing technology.