I err, therefore I am
Or why a bibliography is the cornerstone of science
Over the years, I have heard a great many apologies from students who missed a test or a deadline. Some are reasonable. Some are creative. I once had a student who, even after I had granted an extension, returned with a doctor’s certificate. Her finger was injured. She could not type. She showed me the plaster. I was close to saying I would happily accept an essay produced by the other nine fingers, even if there was the occasional letter missing, but feared she might escalate the matter.
But the most common excuse, by some distance, is this one: ‘Sir, it was a human error.’
I have always found that fascinating. On the surface, it is a kind of shrug. There was no good reason. Move along. But it carries something useful: there is a person in the room. There is a doctor, a finger, a certificate and a story that ties it all together. You can question them. You can correct them. That is what makes it human.
On Sunday, 26 April 2026, South Africa’s Minister of Communications and Digital Technologies, Solly Malatsi, withdrew the Draft National Artificial Intelligence Policy. Hardly anyone had read it. The Minister was withdrawing it because the bibliography was full of, well, ghosts. Several of the references at the back did not exist. They had been hallucinated by an LLM, copied into a Cabinet-approved policy, and waved through every layer of review until the public got hold of them.
A second fact, less remarked on, matters more. The body of the policy carried no in-text citations. The references at the back were never linked to claims at the front. Even the real ones had not been used. The bibliography was decoration before it was hallucinated.
Together, these two facts mean something simple. The document was not built on research at all.
The most common public reaction was to demand that someone be fired; the Minister has already suspended two officials.
My contrarian view is that this is the wrong response, because the hallucinated references are the only reason we now know that the policy was not built on research. A more careful drafter would have produced a polished bibliography of real papers, none of them actually used to write the text, and the Cabinet would have approved an AI policy that had been generated rather than researched. We would have read it, found nothing obviously wrong, and let it pass.
Firing the people who let the hallucinations through punishes the one feature of the document that gave its emptiness away. The lesson the next drafter will learn is cosmetic: make the bibliography look real next time.
The hallucinated references were, in this respect, a gift. They told us, before any expert could, that the underlying analysis was not real.
The philosopher of science Douglas Allchin reminds us, in his excellent recent book Toward a Philosophy of Error in Science, of something the school textbooks tend to obscure. Science is a long sequence of errors, each one slowly displaced by a less wrong successor. The Earth sat at the centre of the universe for fourteen centuries on the authority of Ptolemy, until Copernicus put the Sun there instead. Doctors bled their patients on the authority of Galen for nearly two thousand years, until germ theory replaced the four humours. Geologists read the rocks off the Book of Genesis until nineteenth-century fieldwork showed they could not have been laid down by a flood. Stomach ulcers were caused by stress, every doctor knew, until two Australians proved they were caused by a bacterium. Errors are how the system makes progress. What the system requires is that errors be locatable – visible enough that someone can come along and correct them.
The chain of correction is held together by citations. A bibliography is the visible portion of that chain – an audit trail of error. It is how a careful reader locates the basis of a claim and, once located, questions it.
A bibliography unconnected to the text cannot do this work. There is no path back, because there was no path forward in the first place. What you have is a list of names at the end of a document, and the presence of the list is meant to substitute for the work the list refers to.
Faking such a list used to be more expensive than producing it honestly. It took longer to invent plausible references than to cite real ones. AI inverts that asymmetry: faking is now cheaper than honesty. This is George Akerlof's market for lemons, applied to scholarship: once readers know some references may be fake, the signal value of all of them drops. The only thing that distinguishes a fake bibliography from a real one, at a glance, is whether the names happen to exist. In the draft AI policy, several of them did not. That is how we caught it.
The fix is not to be more careful with AI. The fix is to do the research.
That means reading the studies that bear on the question. It means asking what the evidence actually says about, say, the labour-market consequences of automation in middle-income countries, and citing the answer. AI can help with parts of this. It can summarise long papers. It can draft. It can even verify references; there are dozens of online tools for the verification step, and I will soon report on skills I’ve written to help with this. But AI cannot decide which papers bear on your claim and read them for you. That part is still ours.
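The verification step mentioned above can be sketched in a few lines. This is a hypothetical helper, not one of the tools the post refers to: it pulls DOI-shaped strings out of a bibliography and asks Crossref's public REST API whether each record exists. It assumes the references carry DOIs; entries without one still need a human eye.

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Matches DOI-shaped strings, e.g. 10.1000/xyz123
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")


def extract_dois(bibliography: str) -> list[str]:
    """Return every DOI-shaped string found in the text,
    with trailing punctuation stripped."""
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(bibliography)]


def doi_resolves(doi: str) -> bool:
    """Ask Crossref whether a record exists for this DOI.
    Returns False on a 404 (a possible ghost reference) and,
    conservatively, on any network failure."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False
```

Running `extract_dois` over the reference list and `doi_resolves` over each hit flags the ghosts automatically; what it cannot do is tell you whether a real paper was ever actually read, which is the point of the post.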
What such a policy might actually look like, built on the research we already have, is the subject of next week’s post.
In the meantime, I will be grading student essays, waiting for the inevitable email from those who missed the deadline.
‘Sir, it was a human error.’
It always is.



