Bicycles of the mind
My notes for a panel discussion on AI at Stellenbosch University's Research Indaba
‘I think one of the things that really separates us from the high primates is that we’re tool builders’, said Steve Jobs in a 1981 interview.
‘I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.
‘And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds.’
Jobs’ image captures something profound about technology: it multiplies our capacity without changing who we are. A bicycle doesn’t alter our biology; it lets us travel further, faster, and with less effort. Artificial intelligence is now emerging, I will argue, as a new bicycle of the mind – a technology that could speed up teaching, research, administration, and public engagement. The goal here is not to sell hype or warn of doom, but to ask how universities might use it wisely: to improve quality, hold down costs, and rebuild trust, all while testing each step through evidence.
AI in Teaching
Teaching sits at the heart of any university’s purpose, and this is where AI’s impact feels most immediate. Imagine every student having access to a tireless tutor: one that never grows impatient, adjusts explanations on the fly, and stays with a learner until understanding takes hold. Platforms like Mindjoy, which our department is piloting, let teachers design such tutors. They are not replacements for educators but extensions, digital companions that handle the routine work of diagnosing, explaining, and practising, while human teachers focus on motivation, inspiration, and depth.
Early evidence shows promise. A Harvard study found that students using an AI tutor learned more, in less time, than those in conventional active-learning classes.1 They scored higher and felt more engaged. The researchers called it a compelling case for ‘[AI’s] broad adoption in learning environments’. On this evidence, even the best teachers may struggle to match the endless patience and adaptability of a well-trained algorithm.
This does not mean universities should tear up their systems and start again. Education has seen its share of promised revolutions, most fading once novelty gave way to habit. Early AI chatbots had uneven results: some improved learning, others encouraged students to skip hard thinking or trust wrong answers. The difference came down to design. Tools used in structured, active learning produced benefits; those treated as shortcuts did not.
My point is that AI in education needs the same rigour as medical science. Each tool should be tested through randomised controlled trials that measure not just exam scores but genuine comprehension. Results will differ across subjects. An AI tutor might excel in algebra but struggle in philosophy. Only systematic testing will reveal where the technology adds value. Universities that treat teaching as a continuous experiment will build a culture of improvement.
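The analysis behind such a trial can be surprisingly simple. A minimal sketch, with entirely invented scores and a hand-rolled Welch’s t-statistic (the data, group sizes, and numbers are all hypothetical):

```python
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical comprehension scores (out of 100) from a small pilot:
ai_tutor = [78, 85, 82, 90, 74, 88, 81, 79]
control  = [70, 75, 68, 80, 72, 77, 69, 74]

t = welch_t(ai_tutor, control)
print(f"mean difference: {mean(ai_tutor) - mean(control):.1f} points, t = {t:.2f}")
```

A real trial would of course need proper randomisation, pre-registration, and far larger samples, but the point stands: the statistical machinery is cheap; what is scarce is the institutional habit of running the experiment at all.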
AI in Research
Universities exist to expand knowledge, but researchers now face an unmanageable flood of information. AI promises to act as a tireless assistant: scanning papers, finding connections, generating hypotheses, even sketching outlines of articles. This week, Sam Altman unveiled ChatGPT Pulse, a daily idea engine that ‘proactively does research to deliver personalised updates based on your chats, feedback, and connected apps like your calendar’. Its purpose is to cut the cost of searching, linking, and testing ideas.
Cognitive scientist Margaret Boden offers a useful way to think about this. She distinguishes three kinds of creativity: combinational, exploratory, and transformational. Today’s AI excels at the first two: combining old ideas in new ways and exploring variations within existing rules. It might link a linguistic model to genetics or borrow a physics framework for economics. These are bicycles of the mind in motion, helping scholars move quickly and occasionally uncover neglected paths. The third kind – transformational creativity – is different. It reshapes the rules themselves, as in the birth of quantum mechanics or the discovery of DNA. AI, trained on the past, has not yet reached that level. It can accelerate our journey, but not chart entirely new maps.
Economic theory helps clarify this. Agrawal, McHale, and Oettl model innovation as a search through a vast landscape of hypotheses.2 Traditionally, scientists relied on intuition to decide what to test. AI can guide that search, pointing toward more promising terrain. In principle, this means faster discovery and higher returns. But as the authors warn, prediction is worthless without verification. If AI suggests a hundred new drug compounds, human labs still have to test them. Without investment in experimental capacity, insight remains theoretical. The message is simple: AI complements, but cannot replace, the infrastructure of research.
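The logic of prioritised search can be illustrated with a toy simulation (everything here is invented: the payoff distribution, the noise level, and the ‘predictor’, which is just the true payoff plus noise, standing in for an AI’s imperfect forecast):

```python
import random
from statistics import mean

random.seed(42)

# A hypothetical landscape of 1,000 candidate hypotheses with hidden payoffs,
# heavily skewed so that most are near-worthless.
payoffs = [random.random() ** 4 for _ in range(1000)]

# A stand-in "AI predictor": the true payoff plus noise.
predicted = [p + random.gauss(0, 0.05) for p in payoffs]

BUDGET = 50  # the lab can only run 50 verification experiments

# Unguided search: verify 50 hypotheses chosen at random.
unguided = random.sample(range(1000), BUDGET)

# AI-guided search: verify the 50 with the highest predicted payoff.
guided = sorted(range(1000), key=lambda i: predicted[i], reverse=True)[:BUDGET]

print(f"mean verified payoff, unguided: {mean(payoffs[i] for i in unguided):.2f}")
print(f"mean verified payoff, guided:   {mean(payoffs[i] for i in guided):.2f}")
```

The guided strategy tests far more promising terrain with the same experimental budget, but note what the simulation also makes plain: the budget itself does not change. Prediction narrows the search; it does not replace the lab.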
The focus should be on quality research. The number of papers indexed in major databases grew by nearly half in the last decade. By making it easier to produce text, AI could flood journals with mediocre or fabricated work. The strain on peer review is already visible. This has two consequences: limited resources are spent on worthless research, and, as one recent study shows, innovation slows because researchers find it harder to ascertain what is novel and meaningful.3 That is why Stellenbosch (and South African universities more generally) should shift to rewarding substance, not volume. Still, AI can be a force for quality. Refine.ink, an AI-based reviewer, checks equations and consistency, catching errors that humans overlook. Other tools help editors spot formulaic writing. But the final judgement – on originality, insight, and importance – must stay human. The lesson from the economics of science is old but apt: reward a metric, and people will game it; so reward what truly matters.
AI in Administration
If teaching is a university’s soul and research its engine, administration is the machinery that keeps both running. Admissions, timetables, HR, finance, translation, and student support all produce mountains of repetitive work. With budgets tightening, efficiency is no longer optional. AI can process forms, schedule meetings, generate reports, and flag students at risk of dropping out.
Used wisely, this need not mean fewer jobs. Automation can strip away drudgery, freeing staff for the work that requires empathy and judgement. A chatbot can handle routine queries so that advisors spend time with those in real need. Financial systems can audit transactions in real time, leaving staff to plan long-term.
Resistance is understandable. Economist Joshua Gans has shown that people often resist technologies that threaten their indispensability.4 Yet clinging to repetitive tasks can weaken an organisation. Universities that reward staff for redesigning their roles – rather than punishing them for becoming more efficient – will gain both morale and productivity.
Again, evidence matters. Pilot projects should be measured by hard data: hours saved, errors reduced, satisfaction improved. Tools that work should be scaled; those that fail should be scrapped. The aim is not austerity for its own sake, but efficiency that creates room for better teaching, research, and support. In the long run, efficiency is not a luxury but survival.
AI in Social Impact
If administration is the machinery of a university, communication is its voice. Too often that voice turns inward, leaving the public to fill the silence with suspicion. As Steven Pinker has noted, mistrust grows when institutions stop explaining themselves. In the United States, this retreat has coincided with falling budgets, declining legitimacy, and a rise in misinformation.
AI might help universities reverse that decline. One study found that when scientific papers were rewritten in plain language by LLMs, readers both understood them better and trusted them more.5 Tools like ElevenLabs can already translate lectures into multiple languages while keeping the lecturer’s own voice, a powerful possibility in multilingual societies. Stellenbosch could, at low cost, deliver lectures in English, Afrikaans, and isiXhosa. For the Language Centre, this could mean not obsolescence but transformation: from translators into a Science Communication Centre dedicated to understanding and testing how ideas travel and build trust.
Language is only one frontier. Midjourney can visualise complex data; Veo can turn findings into short films; Suno can compose songs that carry research into popular culture. (For this talk, I created an AI-generated song about it.) Academics may scoff at such tools, but if universities are serious about public trust, they cannot ignore the media through which most people now learn.
As with every other domain, the rule should be experimentation and evidence. A Science Communication Centre could test which messages, formats, and platforms work best, training staff and students to use them responsibly. If universities invest in communication with the same seriousness as they do in labs and classrooms, AI could help them speak more clearly and more persuasively than ever before.
Conclusion
Artificial intelligence is a general-purpose technology that will reshape universities in ways we cannot yet predict. The real question is whether they will retreat from it or engage with disciplined curiosity. The wiser path is the latter: experiment, evaluate, and learn from both success and failure.
At their best, universities are not just places where students learn but learning organisations themselves. With imagination, AI can indeed become a bicycle for the mind, helping us travel further and faster in the pursuit of knowledge, and perhaps, in doing so, restoring some of the trust on which higher education depends.
1. Kestin, Greg, Kelly Miller, Anna Klales, Timothy Milbourne, and Gregorio Ponti. “AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting.” Scientific Reports 15, no. 1 (2025): 17458.
2. Agrawal, Ajay, John McHale, and Alexander Oettl. “Artificial intelligence and scientific discovery: A model of prioritized search.” Research Policy 53, no. 5 (2024): 104989.
3. Chu, Johan S.G., and James A. Evans. “Slowed canonical progress in large fields of science.” Proceedings of the National Academy of Sciences 118, no. 41 (2021): e2021636118.
4. Gans, Joshua S. When Will Entrepreneurs Choose to Make Themselves Replaceable? NBER Working Paper No. w34271. National Bureau of Economic Research, 2025.
5. Markowitz, David M. “From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science.” PNAS Nexus 3, no. 9 (2024): pgae387.