AI and the Future of Holocaust Memory
by Prof Victoria Grace Richardson-Walden, in conversation with Dr Mykola Makhortykh and Maryna Sydorova
In November, Mykola Makhortykh and Maryna Sydorova from the University of Bern will join the Landecker Digital Memory Lab as visiting researchers. In this interview, our Director, Dr Victoria Grace Richardson-Walden discusses a core focus of their research with them: AI and Holocaust memory.
Dr Victoria Grace Richardson-Walden: We’re really looking forward to hosting you here in the autumn semester. Mykola, we have of course worked closely together on a number of ventures over the past few years. Your research with Maryna explores the significance of the algorithmic organisation of memory culture online. What first inspired you to recognise the significance of algorithms more broadly, and AI more specifically, in relation to Holocaust memory?
Mykola Makhortykh: As often happens in the creative process, my inspiration for studying the significance of algorithms and AI for Holocaust memory originates in frustration. Back in the day, when I was doing my PhD on the platformisation of Second World War memory in Ukraine, I relied primarily on qualitative methods. However, at some point, it became obvious that I had more data than I could realistically process qualitatively.
I was frustrated that the data was there but I could not use it to its full potential. That was the moment I started looking into computational approaches for processing platform-related data about the Holocaust, and into database structures for organising it.
For some (brief) time, I was very happy about these new capacities, but then I also started recognising many shortcomings of using automated approaches for analysing data dealing with very sensitive aspects of the past, such as genocides: from the ethical challenges of storing or processing data to the often deficient insights which can be generated computationally (at least without major effort).
It was at this point that I became interested in how big tech companies, which are trailblazers in processing and ranking large volumes of data, deal with these challenges.
Maryna Sydorova: My inspiration originates from my interest in how societies remember and memorialise the past: when choosing a university programme, I first considered enrolling in archaeology or history but eventually ended up studying medicine and then computer science.
While I was always more interested in ancient history, I think that the case of the Holocaust is very special and distinct. It is not just a historical event but one of those focal points in history that shape how we, as a society, understand many present issues, including human rights, justice, and memory.
In this sense, the development and analysis of algorithms used to process information about the Holocaust is not only about optimising processes and uncovering patterns in data. It is also about the ethics of technology and countering many risks associated with the use of algorithms and AI in this very sensitive context.
For me as a data engineer, the process of working on this topic is very challenging but also very inspiring.
VGRW: Your work sits at the intersection of different disciplines: memory studies, media and communication studies, and computer science. How do you both perceive yourselves as ‘researchers’, and how would you define your research identity?
MM: For me, research identity is more about the research subject than a disciplinary framing. When studying a complex subject such as AI, it is almost inevitable that you will have to cross disciplinary boundaries at some point.
For instance, computer science can provide an excellent understanding of more functional aspects of an AI system and a solid methodological toolbox for conducting the research.
However, it will not necessarily provide a critical perspective for assessing the normative and ethical implications of the system, which is where humanities and media studies excel. I would feel very limited if I had to study AI and Holocaust memory within one discipline instead of trying to bring what I see as the advantages of different disciplines together.
Consequently, I see myself more as a researcher who works on AI and memory and less as a memory studies or media studies scholar.
MS: In terms of research identity, I primarily see myself as a computer scientist. In Bern, I usually work on data engineering tasks, but I also have the skills required to analyse the performance of algorithms and AI systems through systematic audits.
I am also very much interested in the ethical design and implementation of these technologies.
So, despite being first and foremost a computer scientist, I also can call myself an interdisciplinary scholar.
VGRW: Do you feel frustrations with disciplinary silos? Does this hamper your work at all – if so, how? What do you feel are the benefits and limitations of interdisciplinary research?
MM: I think almost everyone who works in academia at some point gets frustrated with disciplinary silos. I am personally rather frustrated about the discrepancy between the common calls for interdisciplinarity in academia and the lack of interdisciplinary academic positions (at least beyond the postdoctoral level) in many European countries.
I feel it is unfair that academia often nudges people, especially younger scholars, towards working across disciplines, and then puts them at a disadvantage in the job market when they start applying for tenured positions.
On a more practical level, disciplinary silos make it more difficult to bring together people from different fields, which is again crucial for studying topics such as the impact of AI on Holocaust memory.
We first need to find a common language to ensure that everyone is on the same page about what we mean when we talk about AI or memory: this is often a very productive and intellectually stimulating endeavour, but it requires a lot of time and energy (and, ideally, funding to bring people together).
MS: My major concern about disciplinary silos relates to the difficulty of communication and collaboration. Researchers from different fields often have diverse expectations about research goals and methods, and occasionally it creates difficulties.
However, despite these challenges, I believe that the benefits of interdisciplinary research outweigh the limitations. The most significant advantage is the ability to approach complex problems holistically as well as more creatively.
When you bring together diverse perspectives, it opens up new ways of thinking and encourages flexibility and adaptability, as it requires constant learning and the ability to integrate new methods and ideas.
“Projects using digital technology are integral for discovering how the benefits of AI for the sector can be realised and how the associated risks can be countered. However we cannot expect that Holocaust memory and education organisations will solve all the challenges associated with AI on their own.”
VGRW: Words like ‘algorithm’, ‘AI’ and ‘machine learning’ have become common parlance. However, as we’ve discussed in a forthcoming article (co-authored by myself and Mykola), both the terms and the history of their development seem to be grossly misunderstood. Could you give our readers a brief definition of these terms?
MM: This is a tough question. Personally, I treat the concept of an algorithm in a rather reductionist way. For me, it is a sequence of (computational) steps required to achieve a certain result.
AI is a different beast. I like the definition of AI by Nilsson, who, in his 1998 book Artificial Intelligence: A New Synthesis, defined AI as the ability of human-made artefacts to engage in intellectual behaviour.
To practically apply this definition, we would need first to decide what we mean by intellectual behaviour (or, in the case of the use of AI for commemorative purposes, by individual or collective memory).
The need to make such decisions is certainly a challenge for conceptual and practical work on AI applications in the context of Holocaust memory and education, but it is also a possibility for us to clarify (or maybe even reconsider) concepts which we sometimes take for granted.
MS: I like the following brief definitions: an algorithm is a set of instructions; AI is the field that seeks to emulate human-like intelligence in machines (not a single technology, but a collection of approaches and techniques); and machine learning is a technique within the AI field that enables systems to learn and improve from data without being explicitly programmed for each specific task.
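To make the machine-learning part of this definition concrete, here is a minimal, hypothetical sketch (the data, learning rate, and iteration count are illustrative assumptions, not anything from the interview): rather than hard-coding the rule y = 2x, a one-parameter model recovers it from example data.

```python
# "Learning from data": instead of programming the rule y = 2x explicitly,
# a single-parameter model discovers it from examples via gradient descent.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0    # the single learnable parameter
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# w converges towards 2.0, the rule implicit in the examples
```

The same principle, scaled up to billions of parameters and vast text corpora, underlies the AI systems discussed in this interview.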
VGRW: Could you give a brief overview of some of the findings from your research and how this might challenge presumptions about algorithms, AI, and machine learning and their consequences for Holocaust memory and education?
MM: This is a difficult question, considering that presumptions about AI in the context of Holocaust memory and education vary rather broadly. For me personally, one of the most surprising findings concerns the scale of the differences in the performance of (commercial) algorithm- and AI-driven systems across languages.
Somehow, there is still a common assumption that the performance of search engines or chatbots in English is indicative of how these systems will perform in German, Swedish, Hebrew, or Ukrainian, but this is not the case.
Our research highlights that the differences in performance can be dramatic, which raises many questions about how such performance can be unified (and whether that is actually desirable).
Another finding that I find important is that many algorithm- and AI-driven systems are inherently stochastic, and their performance in the context of Holocaust memory and education can produce unexpected outcomes simply due to randomisation. This has many implications for integrating these systems into educational and commemorative routines.
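The stochasticity Mykola describes can be illustrated with a toy sketch (my own illustration, not part of the research itself): many generative systems sample their next token from a temperature-scaled softmax over scores, so identical inputs can yield different outputs on different runs.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from a temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for index, weight in enumerate(weights):
        cumulative += weight
        if r < cumulative:
            return index
    return len(weights) - 1

# Identical scores, different random seeds: the sampled token varies.
logits = [2.0, 1.5, 0.5]
samples = [sample_token(logits, 1.0, random.Random(seed)) for seed in range(20)]
```

Greedy decoding (always taking the highest-scoring token) removes this run-to-run variation, which is one reason deployments in sensitive contexts may prefer deterministic or low-temperature settings.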
VGRW: How do you work around the biases in your own search histories and user profiles when doing internet research? What methodologies do you use, and what are their opportunities and limitations?
MM: It depends on the method and the purpose of the specific research project we are pursuing. Often, we rely on virtual agent-based audits, where we build agents, i.e. automated scripts, from scratch and teach them how to perform certain routines.
Examples of such routines can be searching for specific Holocaust-related keywords at certain times of the day or engaging in dialogue with an AI-driven chatbot. In this way, we are able to control (even if in a rather reductionist way) for different forms of bias and track their impacts.
VGRW: In the recent UNESCO report AI and the Holocaust: rewriting history? The impact of artificial intelligence on understanding the Holocaust, which, Mykola, you co-authored with Heather Mann, you offer recommendations based on the challenges AI poses for Holocaust education. Do you think Holocaust memory and education organisations are currently well-equipped to confront these challenges?
MM: I think there is certainly a capacity within heritage and educational organisations to counter some of these challenges.
We both know some incredible people working in Holocaust memory and education who are doing fantastic projects using digital technology, including algorithms and AI.
Such projects, in my view, are integral for discovering how the benefits of AI for the sector can be realised (and, simultaneously, how the associated risks can be countered).
However, it does not mean that we can expect that Holocaust memory and education organisations will solve all the challenges associated with the use of AI on their own.
Some of these challenges, for instance, the use of AI for propagating antisemitic hate speech or for generating fake historical evidence, require a very different level of regulatory intervention.
In these cases, the responsibility certainly lies not with memory and education institutions but with national governments and transnational organisations. Of course, the memory and education sector can provide expertise that is invaluable for developing better regulation, but this requires regulators to be willing to hear the sector’s voices, and I am not certain that this is always the case.
Furthermore, it is very important to recognise that even for the interventions which are possible within the sector, more support is needed.
On a very practical note, these interventions are expensive, both in terms of infrastructure costs and technical expertise required for deployment, maintenance, and monitoring. To realise these interventions, there is an urgent need for more funding for Holocaust memory and education organisations (as well as heritage and education more generally).
Without such funding, I have strong doubts about how feasible the sustainable adoption of AI innovations will be; it would also be difficult to develop the set of best practices required to counter the possible risks AI poses to the sector.
Similarly, there is a need for more spaces for Holocaust heritage and education practitioners to discuss emerging uses of AI and the associated possibilities and risks. I think the series of participatory workshops implemented by the Digital Holocaust Memory project was incredibly useful in this sense, and I hope to see more similar initiatives in future (these workshops led to the publication of a series of recommendations, accessible here).
VGRW: Are we facing a crisis in Holocaust memory and education, or is the impact of AI overblown?
MM: I do think that there is a crisis, but in my view, AI is not necessarily the core reason for it. There are several factors contributing to it: the rise of antisemitism (especially following the October 7 attack by Hamas and Israel’s response to it); the passing of Holocaust survivors, engagement with whom was, in my view, one of the cornerstones of Holocaust education and memory; the growing incivility of online spaces and the intensification of digital disinformation; and, finally, the deteriorating funding for humanities educational programmes and the heritage sector around the world.
MS: The impact of AI is neither inherently good nor bad; it depends on how we use the related technologies. AI, of course, can contribute to a crisis in Holocaust memory: for instance, the rise of misinformation and deepfakes poses significant threats to historical facts.
However, when used thoughtfully, AI can help preserve and enhance Holocaust education and memory. For example, AI can assist in the digitisation and analysis of vast volumes of archival data, making them more accessible to researchers and the public. It can also help create interactive and immersive educational experiences that engage younger generations in ways that traditional approaches may not.
VGRW: In the recommendations report that we collaborated on with a range of participants, we considered how AI and machine learning could offer productive ways to review and rearrange historical data related to the Holocaust. Do you still think AI technologies offer potential opportunities for Holocaust education and memory?
MM: In my view, AI offers immense opportunities for Holocaust education and memory. It can be an incredible asset for assisting individuals seeking information about the Holocaust and encouraging such endeavours by making the exploration of the past more engaging.
AI has the potential to help us counter risks to Holocaust memory, for instance, by detecting different forms of denial and propagating factual information instead. It can also enable new forms of representing memory about the Holocaust by helping individuals create textual and visual expressions for commenting on the past.
Of course, it is going to be a challenging process of trial and error to map the best practices for using AI in Holocaust education and remembrance, with many questions to address (Which uses of AI are appropriate and which are not? How does the use of AI interact with the notion of historical authenticity?), but in my view, it is well worth the effort.
The Landecker Digital Memory Lab is able to host visiting researchers thanks to its funding from the Alfred Landecker Foundation, and The Isaacsohn and André Families’ Visiting Fellowship scheme, supported through the Sussex Weidenfeld Institute of Jewish Studies.
Want to know more?
See Mykola’s previous blog on Algorithmic auditing, the Holocaust and search engine bias