The AI Forum Seminar, organized by the AI Forum project, brought together almost a hundred people from different universities and companies, all eager to hear and share current perspectives on AI-related business and research. We heard AI researchers’ and business experts’ views on the hot topics of AI and the lessons learned during the AI Forum project. The seminar took place at the Pasila Campus on 2 June 2023, as well as remotely via the Liveto platform. The seminar also offered a venue for networking and an opportunity to develop future cooperation between universities and companies.
A core part of the seminar was a panel discussion with representatives from different companies. The discussion focused especially on the ethical aspects of AI, upcoming EU legislation, and insights into the future. The panelists were a representative group of experts from companies connected to AI: Expert Lead Juho Vainio and Lead AI Scientist Elin Ehsani from Silo AI; CEO and Partner Teemu Heikkilä from Emblica; CEO and co-founder Anna Seppänen from CoHumans; and Professor Caj Södergård, acting Secretary General of Adra – the European AI, Data and Robotics Association – and founder of NextAI.
Next, we summarize the main points and thoughts of this lively panel discussion.
How do we experience AI?
AI is often seen as a tool for automating decision-making. On the other hand, if it “misjudges us”, it often feels unfair. The panel discussed the common problems caused by biased and imbalanced training data, which can lead, for example, to recruitment decisions based on ethnicity or gender. An interesting thought raised in the discussion was that since we humans also make mistakes, how many mistakes do we allow AI to make?
A huge challenge lies in where we draw the line and what level of information we use, even when artificial intelligence makes it possible to use it. It is important to remember that considering ethical aspects is part of designing AI systems.
How will the regulations affect the development of AI applications?
In Europe, we have the advantage that the EU is a forerunner in AI legislation. Building trustworthy systems is valuable even though we are still in the experimental phase of AI, and the rules are being built case by case. One question raised was whether we should somehow regulate decision-making. Should it be somebody’s responsibility to solve the problems in AI?
Ethics always goes deeper than legislation: it means doing what is right, not only what the law forces us to do.
Many legislative changes are occurring at the same time, which also poses a practical challenge. How can companies prepare? One proposed answer is that the EU should provide some kind of centralized support and evaluation system for analyzing the data; otherwise, it is too costly for companies to build their own systems.
And what about the question of whether we can use AI to fight disinformation? It is difficult, but the issue is also being addressed as part of the EU AI Act. It is equally important to use AI for societal betterment. As one panelist put it, promoting good is important – preventing bad is only one part of ethics.
Should we stop and wait or go forward?
Instead of stopping and waiting, the panelists saw it as better to go forward in developing and experimenting with AI. All of them agreed on this and offered some good advice. It is worth learning AI methods now, while doing so still offers a competitive advantage. It is also important to keep an eye on the systems and to have some “fact checker agents also in the system”. And one should remember to use common sense, too.
Ethical aspects should be seen much more broadly than they are today. Technology developers in particular should be regarded as ethical agents, as development is now moving forward so quickly. Ethics is often approached through various guidelines, but these offer only narrow views because they do not apply directly to everyday life. We should not think of ethics merely as a list of principles, but as a common everyday skill. There is therefore still a lot to be done to truly implement ethics in organizations that use or develop AI.
How to envision the human role in the coming era of AI?
In summary, the panelists recommended using AI as a tool, while noting that this is often a tricky proposition that can lead to philosophical questions. Like any tool, AI can be thought of as an augmentation of ourselves. In this respect, there is no difference between AI and other technologies: there is no bad technology, only bad applications and bad ways of using AI. We should fight against the bad applications, not against the technology itself.
So, let’s enjoy the new tools and enjoy the ride, but let’s also find the brakes somewhere. AI is amplifying our capabilities and intelligence. The future is very exciting, but we must make sure that everybody stays on the same page.
We may conclude that if AI ethics was once considered a theoretical or philosophical exercise of interest to only a few, those days are long gone. As AI has become an integral part of the digital services and tools most of us use daily, either indirectly (e.g., recommender systems) or directly (e.g., ChatGPT), it affects us all and must be taken seriously. As was evident from our panel, this is a topic that brings together, and requires the perspectives of, multiple disciplines: not just the researchers and engineers developing AI models, but also those who design and use AI-enabled applications and services, and those who make policy.