
Artificial Intelligence and Information Literacy: On the Art of Quartering Broccoli
How valid are concepts for teaching information literacy in the light of current developments in artificial intelligence? In this guest post, Lukas Tschopp from the University Library of Zurich illustrates his assessment with a culinary comparison.
by Lukas Tschopp
With the Framework for Information Literacy for Higher Education, created by the Association of College and Research Libraries (ACRL Framework), there has been a shift from the concrete learning objectives of earlier standards to a concept-based framework. With Artificial Intelligence (AI) raising great hopes and concerns, the question arises to what extent these concepts are still valid today. In this article, using the frame “authority is constructed and contextual”, I attempt to show why, from my perspective, the ACRL Framework remains important: it can provide valuable impetus for the handling of information in higher education.
Exposing the absurdity of Riz Casimir
An analogy may help illustrate what this frame is about. As long as the only curry I know that exudes a hint of exoticism is Riz Casimir (a classic of Swiss cuisine), I will believe that maraschino cherries belong in every curry. This is not only unfortunate, but also a good reason to try a masala dosa in a South Indian restaurant. Only in the context of the masala dosa and its potato curry do I realise the absurdity of Riz Casimir and its maraschino cherries.
When it comes to AI, the problem is similar. As long as I haven’t come across a better answer, I assume that what the AI has written is grammatically correct and true. As in the first example, context is needed to assess the answer. Or, to put it in terms of the ACRL Framework: critical thinking is the tool for understanding that knowledge is always constructed in a context. What does this mean when I talk about AI? It means being aware of who (more precisely: what) I am dealing with. Generative AI is a technology that uses vast amounts of data to form answers according to the laws of probability. It produces language, not truths. Or, to stay with the culinary image: although we eat with our eyes, the food can still taste terrible.
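To make the probability point concrete, here is a minimal, purely illustrative Python sketch. The tiny probability table is invented for this example and merely stands in for what a real model learns from vast amounts of training data; the point is that the output follows learned frequencies, with no notion of truth anywhere in the loop.

```python
import random

# Hypothetical toy probability table, standing in for what a real
# language model learns from its training data. The words and the
# numbers are invented purely for illustration.
next_word_probs = {
    "curry": {"contains": 0.6, "tastes": 0.4},
    "contains": {"cherries": 0.5, "potatoes": 0.5},
    "tastes": {"terrible": 0.3, "wonderful": 0.7},
}

def continue_text(word, steps=2):
    """Extend a phrase by repeatedly sampling a likely next word."""
    words = [word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Sample in proportion to the learned probabilities. Nothing
        # here checks whether the resulting statement is true.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(continue_text("curry"))  # e.g. "curry contains cherries"
```

Real models work with vastly larger vocabularies and contexts, but the principle stands: plausible-sounding continuations are rewarded, verified facts are not.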
Playing Russian roulette with generative AI
To understand this better, let me turn to the data. It seems to me like an ocean – deep and unfathomable. With some of the companies developing generative AI tools, it is impossible to see what is floating in this ocean of data. I have mixed feelings about that, especially since I recently saw a short video on Instagram of a woman in her underwear quartering broccoli. Beyond the obvious question of gender stereotypes, I wonder about the informational content of such videos and what AI systems trained on them will make of it. Although it is currently unclear whether the use of such data to train AI systems will be permitted (Swiss Federal Data Protection and Information Commissioner, 2024; Süddeutsche Zeitung, 2024 (German)), the idea leaves me with a queasy feeling in my stomach.
On the other hand, I am fascinated by what can emerge from such data. I like the back and forth that unfolds in the chat window when I draft a text contribution with Claude AI. From my point of view, however, there is a significant difference between refining a text whose content is somewhat familiar to me and asking Claude AI about drug interactions – an example from my teaching with bachelor students. The latter is a highly complex subject: every body reacts differently to a single drug, let alone to a combination of dozens of drugs. Relying on an AI-generated answer here is, in my opinion, a new form of Russian roulette.
Ridiculous and not credible
Which brings me to another point. The world has become so complex that I lost track of it long ago. Maybe that is part of what fascinates me about AI: the world suddenly seems manageable because someone (or rather, something) appears able to explain it to me. I am willing to overlook the fact that the content is only partially accurate, because at least I can read it, have it broken down into digestible chunks, and suddenly I believe: yes, you’ve got it!
Usually, disillusionment follows in time. There have been moments when I showed a draft to a colleague and noticed a mixture of regret and mischief in her expression. In short: if I do not check the content carefully, if I do not rely on reliable sources, I not only make myself look ridiculous, I also lose credibility. Only in context, which my colleague provides in this example, does it become clear how limited my knowledge is. This also illustrates that the frame “authority is constructed and contextual” remains relevant even when a new technology changes text production.
Financial commitment without self-interest?
I don’t want to denigrate the use of AI. Rather, I want to emphasise something that is all too often forgotten in everyday life: critical reflection on content, and the question of where an answer comes from.
I recently took a closer look at who is behind Claude AI. Anthropic, the company that develops it, has received substantial investments (German) from Google and Amazon (German). Although Anthropic emphasises its independence, this assurance seems flimsy to me when hundreds of millions of dollars are being invested by companies that want to make a profit.
This doesn’t mean that I will stop editing this text with the help of Claude AI, but it does mean that I must remain vigilant and careful (German) about what I reveal to whom. And it means that I take a close look at the results. Recently, Claude AI invented a quote to support one of my theses. I found this disturbing, not least because no reference was given.
Now, after all I’ve written, my stomach is rumbling. I need to eat a masala dosa.
This might interest you as well:
- Human-Centred AI: First Steps for the Enrichment of Library Work
- Discrimination Through AI: To What Extent Libraries are Affected and how Staff can Find the Right Mindset
- Tracking Science: How Libraries can Protect Data and Scientific Freedom
- AI in Academic Libraries, Part 1: Areas of Activity, Big Players and the Automation of Indexing