
Thing 15: AI and Its Research Applications

Updated: May 4, 2023

Dr Yann Ryan, Lecturer in the Centre for the Arts in Society, University of Leiden


The hype around ‘artificial intelligence’ (AI), after several false starts, has reached a crescendo in the past year. Proponents tell us AI is going to solve traffic jams with self-driving cars, fix the climate, take care of our elderly, and free us up from mundane tasks. Its detractors argue that true self-driving cars distract us from proper public transport, and that in the workplace AI is going to shift the balance between workers and those who own the capital further in favour of the latter. The ‘AI is good’ group point to its uncanny ability to create art, music, and poetry; its detractors dismiss it as merely a ‘stochastic parrot’: at best an expert mimicker of human speech, at worst naturally inclined towards racist, harmful speech.


The field of AI is developing very quickly, and it can be difficult to keep up. In this blog post I’ll give a very general overview of the topic, try to separate the reality from the hype, and, most importantly, discuss what it all means for you as a researcher.


So what exactly is artificial intelligence? Three widely-used terms have become synonymous with AI, but they actually have quite different meanings, and it’s worth defining each of them before moving forward. Artificial intelligence itself is most commonly used in a very general sense to signal any approach to developing computer algorithms which do tasks typically thought to require human intelligence: problem-solving and reasoning, for instance, or writing complex language. The most widely applied method for achieving artificial intelligence these days is a branch of computer science known as machine learning (ML): itself a family of techniques which take input data of some kind and use it to ‘learn’ how to achieve a particular goal.

An important member of this ML family is deep learning, a technique using multiple ‘layers’ of neural networks, which are loosely modelled on the neurons of the brain: input data is broken down into a set of connected mathematical objects, and an algorithm adjusts the connections to produce the desired outcome. Neural networks have been around for almost half a century, but their application in deep learning in recent years has come to dominate the cutting edge of the research we refer to as artificial intelligence.

To be even more specific, the latest AI hype revolves around large language models (LLMs) built on a neural-network architecture known as the Transformer. LLMs can be thought of as a complex statistical representation (or model) of language, typically trained by showing billions of words of written text to a neural network. Once trained, these models use their statistical knowledge of language to predict, or ‘generate’, the most likely words for any given sequence.
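To make that ‘predict the most likely words’ idea concrete, here is a minimal sketch using the Hugging Face transformers library (introduced further below) and the small, freely available GPT-2 model, a much smaller relative of the models behind ChatGPT; the prompt is just an illustration:

```python
# A minimal sketch of next-word prediction: GPT-2 stands in for the
# much larger LLMs behind tools like ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The invention of the printing press"
# The model continues the sequence with its statistically most likely words.
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```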


These developments have been particularly visible in the areas of natural language processing and computer vision. We now use the results of machine and deep learning algorithms widely in our daily lives: when we search the internet, go through an electronic passport gate, switch a new car to cruise control, or hit the ‘translate’ button in an internet browser. As a researcher, you are also likely using these technologies every day, whether you are aware of it or not. Modern Optical Character Recognition and Handwritten Text Recognition engines, for example, now almost invariably use deep learning methods.

An AI-generated image of a confused monkey holding a mobile phone

How is it useful for me?

Every since ChatGPT was released to the public, a whole cottage industry of associated applications has sprung up using its API, which promise to help with all sorts of tasks, boost productivity, and so forth. I think the utility of these apps is still unproven, but it’s worth keeping an eye on them, as long as we take their claims with a pinch of salt. ChatGPT, for example, can be very useful for typical mundane academic writing tasks such as reformatting references, but you’ll need to closely supervise the results to make sure it doesn’t change anything. One area where AI has proven useful is in finding relevant literature. While it’s not really using deep learning under the hood but nevertheless might be considered AI, a popular example is Research Rabbit: using co-citations and similarity scores, you can enter a number of papers and it will give you similar ones. I have found it more useful for scientific-type research, where the citations of a paper seem to reflect some kind of similarity. If you are a coder, certain AI tools (specifically ChatGPT and Copilot) are already pretty useful for help with writing code. I use it to translate code between languages, or to suggest how to code particular problems.
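For a feel of what those apps are doing under the hood, here is a sketch of the reference-reformatting task using the ChatGPT API via the openai Python package (as it stood in early 2023); the API key and the reference are placeholders:

```python
# A sketch of asking the ChatGPT API to reformat a reference,
# using the openai package's chat endpoint as of early 2023.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

reference = "ryan, y. networks maps and readers, phd thesis 2020"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Reformat this reference in Chicago style: {reference}",
    }],
)
print(response["choices"][0]["message"]["content"])
# As noted above: check the output by hand, since models can
# silently alter names, titles, or dates.
```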


Other AI methods are more suited to particular domains. For historians, Handwritten Text Recognition (HTR) engines, which use deep learning, have been very successful and quite widely adopted. The gold standard is Transkribus, which has developed into a whole platform for transcription, allowing you to upload, recognise, and edit handwritten text, as well as manually transcribe documents and train a model specific to the hand used. Recognising patterns in images or data is a task deep learning models are particularly suited to, and they have been used widely in biomedical research: one of the stand-out achievements of Google’s research, DeepMind’s AlphaFold, predicts the structure of complicated folded proteins. Computer vision can also be used to identify medical issues in images or x-rays.
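Transkribus is a platform rather than a code library, but if you want to see the kind of deep learning that powers HTR, here is a sketch using Microsoft’s freely available TrOCR handwriting-recognition model from the Hugging Face hub (a different engine from Transkribus; the image filename is a placeholder for a photograph of a single handwritten line):

```python
# A sketch of handwritten text recognition in code, using the
# microsoft/trocr-base-handwritten model from the Hugging Face hub.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("manuscript_line.png").convert("RGB")  # placeholder file
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```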


AI and large language models can also help us understand more about very specific sets of language. The historical research project I have been working on has trained a large language model, called ECCO-BERT, on 30 million pages of eighteenth-century text. We can use the knowledge it has gained about this particular corpus – its word use, patterns, and sequences – to ask questions, understand concepts, and trace changes in style within it. So far, we have been using ECCO-BERT for some quite straightforward tasks, such as predicting the year of a particular document or classifying its genre, but there is the potential to use it for more interesting and nuanced tasks.
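To give a flavour of what ‘knowledge of word use’ means in practice, here is a sketch of the fill-mask task, where a model guesses a hidden word based on what it saw in training. ECCO-BERT is a project-specific model, so generic BERT is used here as a stand-in; a corpus-specific model would give answers flavoured by its eighteenth-century training text:

```python
# Probing what a language model has learned about word use:
# BERT guesses the masked word and reports its confidence.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for guess in unmasker("The [MASK] of the British nation."):
    print(guess["token_str"], round(guess["score"], 3))
```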


An AI-generated image from the prompt “Caravaggio taking a bath”. It’s a confused cluster of figures in a Renaissance-style painting, and the central figure is a nude muscular man sitting on a bench.
In my research I want to understand what Caravaggio was drinking (created using https://deepai.org/machine-learning-model/renaissance-painting-generator, Mike Rose 2023)

How can you use these in your own research?

If you don’t have a background in computer science, the barrier to entry might seem impossibly high. But if you are reasonably computer-savvy, you can probably build, or at least adapt, existing models to use in your own research. Services such as Hugging Face have developed APIs for deep learning models, which essentially means they have wrapped the algorithms in an interface that makes them more accessible and easier to use. Hugging Face is probably best known as a place to store and share large language models, but it also hosts models for other purposes, such as computer vision.
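As an example of how little code such a wrapped interface needs, here is a sketch of off-the-shelf image classification with a Hugging Face pipeline (the filename is a placeholder, and the default model is a general-purpose one rather than anything research-specific):

```python
# Off-the-shelf image classification in a few lines: the pipeline
# downloads a default vision model the first time it runs.
from transformers import pipeline

classifier = pipeline("image-classification")
for prediction in classifier("archive_photo.jpg"):  # placeholder file
    print(prediction["label"], round(prediction["score"], 3))
```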


Google provides free computing power, without the need to install anything locally, through its service Google Colab. A background in programming will help, and you’ll need to learn a bit of Python specifically: if you are interested in doing this work seriously, especially deep learning, Python is by far the most popular programming language for it. But Hugging Face and similar APIs mean you don’t need a degree in computer science to get started, and there are lots of free online Python courses you can access.
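In a Colab notebook, getting set up for the sketches above is a single cell; the leading ‘!’ runs a shell command rather than Python:

```python
# Install the libraries used in the examples in this post.
!pip install transformers torch
```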


What are the problems?

Of course, the other side of these developments in AI is the set of problems associated with them. Researchers releasing language models trained on the internet have noticed that they very quickly began to turn out racist and hateful speech. Even a model that is not visibly racist or sexist likely contains more subtle biases: machine learning tends to reinforce existing stereotypes, affecting everything from algorithmic policing to medical research. It’s worth reading Emily Bender et al.’s article ‘On the Dangers of Stochastic Parrots’ for a discussion of the ugly side of AI; the fallout from it got one of its authors, Timnit Gebru, fired from Google. On top of this, the danger to the climate is alarming: these models need huge amounts of energy to be trained, and their use seems to be growing exponentially.


The most obvious concern is AI-generated work being misrepresented by students as their own; at the moment it seems we are in the early stages of a moral panic amongst universities. Many of the readers of this blog will at some point teach, and while it’s still too early to tell, it seems likely that ChatGPT and similar models will change how we teach and design assignments. However, we’ve been here before: many new technologies, mobile devices for example, were at first seen as spelling the end of teaching, before we integrated them into our own practice. In my view, our stronger responsibility is to educate students about the limitations and biases of this new world, rather than to worry about policing their use of it.


Where is this all leading? Artificial intelligence, almost by definition, is a moving goalpost. For example, recognising faces used to be something I would have considered to require human intelligence; having gone through so many passport eGates over the past few years, I now think of it as mundane and mechanical. We all use AI every day in some small way, whether it be the filters in the phone camera you use to take pictures in an archive, a Google search, or a rough translation of a text in a language you don’t speak. The paradox is that once these technologies become familiar, we tend to stop thinking of them as ‘artificial intelligence’ and see them as just more advanced mechanical processes. Perhaps the language generation of ChatGPT and the like will go the same way in a few years. If there is anything surprising about these models, it’s the extent to which our practice of writing is so easily statistically guessable, and how easily we are ‘tricked’ into conflating coherent language with ‘intelligence’. At the same time, it does seem likely that we will be using these tools in some form or another on a regular basis, quite soon.


Discussion points for your pod

  • Are you already using AI in your research – perhaps in ways you hadn’t really noticed until now?

  • Is there potential for harnessing AI to boost your research outputs or to redesign how you approach a problem? What are the benefits, challenges, and risks?

  • Why not spend some time playing around with ChatGPT or a similar tool to see what it can do? Ask it to write a poem about your research topic, for example, and share it on the forum!

An AI-generated image from the prompt “children’s book for researchers”. It shows a closeup of a cute panda-like creature with a red ball in its mouth.
Play with me!! (Created using https://deepai.org/machine-learning-model/cute-creature-generator Mike Rose 2023)

Author bio:

Yann Ryan is a Lecturer in the Centre for the Arts in Society, University of Leiden, working on the project “High Performance Computing for the Detection and Analysis of Historical Discourses”. Previous postdoctoral work includes the AHRC-funded ‘Networking Archives’ project, based at Queen Mary, University of London. In 2020 he completed a Ph.D. thesis, ‘Networks, Maps and Readers: Foreign News Reporting in London Newsbooks, 1645–1649’, and previously worked at the British Library as a Curator of newspaper data. He publishes work in Media History, Publishing History, and Digital Humanities Methods.

