CIMPLE: Countering Creative Information Manipulation with Explainable AI
What’s the role that Artificial Intelligence can play in fighting misinformation?
The CIMPLE project aims to research and develop innovative social and knowledge-driven creative AI explanations, and to test them in the domain of detection and tracking of manipulated information, while taking into account social, psychological, and technical explainability needs and requirements.
“Our domain is the manipulation of information that we see on social media, in the news… it is about the dissemination of information or news that is factually wrong because it does not tell the truth or has been manipulated, misleading the public”, said Sofia Pinto, the INESC-ID researcher involved in the project.
The CIMPLE project is all about Explainable Artificial Intelligence (XAI): requirements related to AI-driven misinformation detection, XAI by design using Knowledge Graphs, and XAI models for detecting information manipulation. The researchers involved aim to generate creative and engaging explainability visualisations and to personalise XAI to end-users’ skills and topic affinity.
“INESC-ID’s role is to look for ways to explain the manipulation of information, in a way that the interlocutor is able to listen and to question themselves. It is more than just presenting facts, as it is very difficult to convince someone who believes otherwise. And this is where creativity comes in. We are going to work on this explanation of manipulation using computational creativity to attract people’s attention, so that they can visualise where there was manipulation and reach their own conclusions”, adds the researcher.
Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on the increased transparency of AI model design and training data, and on statistics-based interpretations of resulting decisions (interpretability). Explainability, in contrast, considers how AI can be understood by human users. The understandability of such explanations and their suitability to particular users and application domains have received very little attention so far. Hence there is a need for a drastic, interdisciplinary evolution in XAI methods.
CIMPLE will draw on models of human creativity, both in manipulating and understanding information,
to design more understandable, reconfigurable and personalisable explanations. Human factors are key
determinants of the success of relevant AI models. In some contexts, such as misinformation detection,
existing XAI technical explainability methods do not suffice as the complexity of the domain and the
variety of relevant social and psychological factors can heavily influence users’ trust in derived
explanations.
Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. Knowledge Graphs offer significant potential to better structure the core of AI models, using semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches.
To this end, CIMPLE aims to experiment with innovative social and knowledge driven AI explanations,
and to use computational creativity techniques to generate powerful, engaging, and easily and quickly
understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social,
psychological and technical explainability needs and requirements.
The project is a partnership between INESC-ID, EURECOM (Sophia Antipolis, France), The Open University (UK), the Prague University of Economics and Business (Czech Republic) and webLyzard technology (WLT, Vienna, Austria).
CIMPLE was one of the CHIST-ERA projects approved under the 2019 call “Explainable Machine
Learning-based Artificial Intelligence”.
CHIST-ERA is a network of funding organisations in Europe and beyond supporting long-term research on digital technologies with a high potential impact. Every year it selects two new topics of emerging importance and launches a call for transnational research projects on these topics.
Upcoming Events
NII International Internship Programme Presentation and Q&A by Emmanuel Planas
On April 30, Emmanuel Planas, the acting director of the Global Liaison Office (GLO) and responsible for the internationalisation program at the National Institute of Informatics (NII) in Tokyo, Japan, will give a presentation to introduce the NII and its internship program to INESC-ID students and IST’s Master’s in Computer Science students.
Date & Time: April 30, 14h00
Where: Sala Polivalente, Técnico – Taguspark
“The NII International Internship Program is an exchange activity with students from institutions with which NII has concluded a Memorandum of Understanding (MOU) agreement. This incentive program aims at giving interns the opportunity for professional and personal development by engaging in research activities under the guidance and supervision of NII researchers.
The NII Internship Program is open to Research Master’s and PhD students who are currently enrolled at one of the partner institutions that have signed an MOU agreement with NII.”
Educational Workshop on Responsible AI for Peace and Security (UNODA)
On June 6 and 7, The United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) are offering a selected group of technical students the opportunity to join a 2-day educational workshop on Responsible AI for peace and security.
The third workshop in the series will be held in Porto Salvo, Portugal, in collaboration with GAIPS, INESC-ID, and Instituto Superior Técnico. The workshop is open to students affiliated with universities in Europe, Central and South America, the Middle East and Africa, Oceania, and Asia.
Date & Time: June 6 to 7
Where: IST – Tagus Park, Porto Salvo
Registration deadline: April 8
Summary: “As with the impacts of artificial intelligence (AI) on people’s day-to-day lives, the impacts for international peace and security include wide-ranging and significant opportunities and challenges. AI can help achieve the UN Sustainable Development Goals, but its dual-use nature means that peaceful applications can also be misused for harmful purposes such as political disinformation, cyberattacks, terrorism, or military operations. Meanwhile, those researching and developing AI in the civilian sector remain too often unaware of the risks that the misuse of civilian AI technology may pose to international peace and security, and unsure about the role they can play in addressing them.

Against this background, UNODA and SIPRI launched, in 2023, a three-year educational initiative on Promoting Responsible Innovation in AI for Peace and Security. The initiative, which is supported by the Council of the European Union, aims to support greater engagement of the civilian AI community in mitigating the unintended consequences of civilian AI research and innovation for peace and security.

As part of that initiative, SIPRI and UNODA are organising a series of capacity-building workshops for STEM students (at PhD and Master levels). These workshops aim to provide the opportunity for up-and-coming AI practitioners to work together and with experts to learn about a) how peaceful AI research and innovation may generate risks for international peace and security; b) how they could help prevent or mitigate those risks through responsible research and innovation; and c) how they could support the promotion of responsible AI for peace and security.”