As AI development and data science reach new peaks, the conversation surrounding their potential, limits and functionality intensifies. This is the main theme of this new interview, in which Prof Michael Rovatsos, Professor of Artificial Intelligence at the University of Edinburgh and Director of the Bayes Centre, the University's innovation hub for Data Science and AI, tells us more about the ever-evolving AI landscape, the current state of AI development and the collaboration between academia and industry needed to drive innovation.
· About the importance of education and technology literacy. “I think education should give students the basic principles so that they have the means to truly understand technology. And I believe technology should be taught on the same level as other subjects such as biology. This is because we need citizens who are empowered to make the right decisions and to share in all the innovation brought by technology. It is more important to understand it and make it our own than to see it as a competition, a sort of race run at different speeds where success is limited to the few who achieve it.”
· About the AI landscape. AI always goes through these hype cycles. When I came into this field in the 90s it was all about agents and autonomy: the idea that we could create individual software agents or robots that encapsulate much of the capability for decision making, reasoning and analysing the world, so that we could devolve some intelligence from people to these agents. Machine learning was already big back then, but we didn’t have the computing power or the amount of data that we have now, so where my field fell short back then was in achieving the kind of adoption we are seeing right now.
But it is also important to distinguish between AI as a scientific discipline and the AI technology we see flourishing in industry and in real-world applications. That said, AI is a broad discipline that covers many different aspects, such as robotics, language understanding, learning and reasoning, and all of these have been studied and applied at different paces over the last 20 years. Language processing, for instance, has seen massive development and widespread adoption.
· AI in the future. “Data science is becoming more and more important now, mainly because in recent years we have collected more data than humanity had gathered in all of its previous history. That is why I see data analytics and machine learning being adopted on a mass scale in the next few years, and they will greatly benefit companies and organizations around the world.”
· About multi-agent systems. I have been in AI for 20 years now. Specifically, my research is about what are called multi-agent systems, that is, systems that involve different artificial or human agents, and I am mainly concerned with building the architectures and algorithms that allow people or multiple agents to cooperate and collaborate. A good example of my work would be the algorithms needed to run e-commerce applications or supply chain logistics: basically anything that involves different stakeholders with different priorities. This draws on a lot of methods from economics, game theory, operations research, etc.
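As a toy illustration of the game-theoretic flavour of this kind of work (a minimal sketch under invented assumptions, not any of Prof Rovatsos's actual algorithms: the two-action coordination game and agent behaviour below are made up for this example), here is how two self-interested agents can settle on a stable joint action by taking turns playing a best response to each other:

```python
# Hypothetical coordination game: both agents prefer to choose the
# same action. payoffs[(a1, a2)] = (reward to agent 1, reward to agent 2)
payoffs = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}

def best_response(my_index, other_action):
    """Best action for agent my_index, assuming the other agent
    keeps playing other_action."""
    def my_payoff(action):
        joint = (action, other_action) if my_index == 0 else (other_action, action)
        return payoffs[joint][my_index]
    return max(("A", "B"), key=my_payoff)

# Start from a miscoordinated joint action and alternate best responses.
a1, a2 = "A", "B"
for _ in range(10):
    new_a1 = best_response(0, a2)      # agent 1 reacts to agent 2
    new_a2 = best_response(1, new_a1)  # agent 2 reacts to the update
    if (new_a1, new_a2) == (a1, a2):
        break  # fixed point: a pure Nash equilibrium
    a1, a2 = new_a1, new_a2

print(a1, a2)  # prints: B B
```

Here the agents converge on ("B", "B"), a pure Nash equilibrium: neither agent can improve its payoff by unilaterally switching action. That stability notion is the basic building block that game-theoretic methods in multi-agent systems reason about when stakeholders have different priorities.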
· Making AI trustworthy. “Trust is one of the most important aspects for a system to thrive. We have put lots of different technologies in place, like cloud computing and servers, but the major challenge is making them trustworthy. What makes people trust technology? That is the challenge we are trying to crack right now. I believe that one important way to achieve this is by following strict ethical standards, and also by encouraging public debate to make people participants in these innovations.”
· Data arms race. Regarding the data arms race, there are many conflicts worldwide, but the way forward is to create standards through collaboration between governments, companies and other actors, to make sure everyone follows the same rules. In that regard, Europe has made a massive effort with the GDPR, passed a few years ago.
· About the Bayes Centre. The Bayes Centre is an innovation hub with a very ambitious target: to create hundreds of companies and work with thousands of organizations, public and private. What we want is to create an ecosystem that drives innovation in data sharing and data centers for citizens, boosts entrepreneurship, helps SMEs, etc.
· How AI can help with the coronavirus pandemic. This is a bit of a “now or never” moment. We need to learn from it, from an academic perspective, to make sure that we will be prepared for the next crisis. In particular, I believe that young people are becoming more conscious of the importance of collaboration and social impact. One example is the contact-tracing apps: people everywhere immediately started engaging with them, sharing their concerns but also offering solutions. That public debate is empowered by technology.
Michael Rovatsos is Professor of Artificial Intelligence at the University of Edinburgh and Director of the Bayes Centre, the University's innovation centre for Data Science and AI. He obtained his PhD in Informatics from the Technical University of Munich in 2004, after which he went straight into a full-time academic position at Edinburgh. He has a track record of over 90 publications in AI, and has been involved in externally funded projects worth over £17m, of which he has personally held £2.5m as PI.
He recently led technical work in the EPSRC-funded UnBias project, developing fair resource allocation algorithms and conducting empirical research into users' perceptions of algorithmic fairness. This work toward developing AI-assisted methods for ethical self-regulation of online platforms continues in the follow-up project ReEnTrust. He is an Associate Editor of the Knowledge and Information Systems Journal, was recently Blue Sky Track Co-Chair of the AAMAS 2018 conference, and is Conference Coordinator for ACM's AI Special Interest Group.
His research interests are in Artificial Intelligence with a specific focus on multiagent systems, automated planning, and human-friendly and ethical algorithm design. His recent involvement in large, interdisciplinary projects has led to a major shift of his research agenda toward ethical AI, developing intelligent decision-making algorithms and platform architectures that support the moral values of their stakeholders. In this context, he is particularly interested in diversity-awareness, i.e. the ability of systems to deal with conflicting user preferences regarding the properties that algorithmic decisions made by these systems should exhibit.
Hernaldo Turrillo is a writer and author specialised in innovation, AI, DLT, SMEs, trading, investing and new trends in technology and business. He has been working for the ztudium group since 2017. He is the editor of openbusinesscouncil.org, tradersdna.com and hedgethink.com, and writes regularly for intelligenthq.com and socialmediacouncil.eu. Hernaldo was born in Spain and finally settled in London, United Kingdom, after a few years of personal growth. He completed his bachelor's degree in Journalism at the University of Seville, Spain, and began working as a reporter at the newspaper Europa Sur, writing about politics and society. He also worked as a community manager and marketing advisor in Los Barrios, Spain. Innovation, technology, politics and the economy are his main interests, with a special focus on new trends and ethical projects. He enjoys getting lost in words, explaining what he understands of the world and helping others. Besides being a journalist, he is also a thinker, proactive in digital transformation strategies. Knowledge and ideas have no limits.