November means time, finally, for research. Exams are done, and even the university committees, with their endless meetings, are in hibernation. With a month or so before the holidays, the hope is always to make good headway on my ambitious to-do list of papers – each a promise made to collaborators in the flush of optimism past.
Yet, reality usually has other plans. Here’s the truth: I’ve always been a terrible coder. With almost all of my research requiring coding, I spend weeks cleaning and describing data, tasks that should take someone with more aptitude days or even hours. By the eve of the holidays, I’m usually drafting familiar apologies to my co-authors, each accompanied by that perennial pledge: ‘First thing next year. Promise.’
But not this year.
Tasks that would usually take weeks now require just days. And the quality of the end product is vastly superior.
What happened? Simple: ChatGPT.
I read a lot of the scientific literature on the impact of AI, but none of the statistics – and there are many – could have prepared me to witness the power of AI coding. Whereas I’ve always had to Google endlessly in search of the correct code structure, I now simply describe what I want and ChatGPT delivers the code instantly. (There are others, like Copilot, I know, but cut me some slack.) Encounter an error? Just paste the error message back into ChatGPT and ask for a fix. Sure, it requires some double-checking (as my self-written code always did too), but the speed and ease of this error-correction process are on an entirely different level.
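To give a flavour of what this looks like in practice, here is the kind of snippet ChatGPT typically hands back when I describe a routine data-cleaning step in plain English. It is a minimal, illustrative sketch: the file name, column names and cleaning rules below are invented for the example, not taken from any actual project.

```python
# Illustrative sketch only: the file and column names are hypothetical,
# standing in for the sort of cleaning task I would describe to ChatGPT.
import pandas as pd

# Load a (hypothetical) survey extract
df = pd.read_csv("survey_2024.csv")

# Standardise column names: lower-case, underscores instead of spaces
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Drop exact duplicates and rows missing the key identifier
df = df.drop_duplicates().dropna(subset=["household_id"])

# Coerce income to numeric, turning stray text entries into missing values
df["monthly_income"] = pd.to_numeric(df["monthly_income"], errors="coerce")

# Quick description of the cleaned data, then save
print(df.describe(include="all"))
df.to_csv("survey_2024_clean.csv", index=False)
```

Whether or not it runs perfectly on the first attempt, the scaffolding arrives in seconds rather than after an afternoon of Googling – and fixing it is a matter of pasting the error back into the chat.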
When I share this with colleagues, their reactions usually fall into two camps. A rare few respond with, ‘What took you so long?’ But the majority say something more sceptical: ‘I already code pretty fast – I’m not sure AI will make much of a difference for me.’
This, to me, sums up the distributional impact of AI. And, fittingly for this story, it is backed up by evidence. The first comes from a seminal 2023 NBER working paper by Erik Brynjolfsson and co-authors, which studied the introduction of generative AI among over 5,000 customer support agents. They found that while AI boosted overall productivity by 14%, its greatest impact was on the least skilled workers, who saw a remarkable 34% increase in issues resolved per hour. In contrast, experienced agents saw minimal gains. The study also showed how AI accelerates learning: new agents achieved the productivity levels of seasoned colleagues in just two months, compared to six months for those without AI.
It is this learning ability that holds the greatest promise for a country like South Africa, with our poor education outcomes. One of generative AI’s main benefits is that it lowers the barriers to entry in complex fields like coding, allowing people without technical training to participate. This is best demonstrated by the authors of a new paper in Nature Human Behaviour, who argue that generative AI systems have evolved into ‘thought partners’ that help users engage more effectively with their tasks. The remarkable thing is that these systems adjust to individual needs by understanding what the user is trying to do and filling in knowledge gaps, which is especially helpful for beginners in fields like programming. They also take over repetitive tasks and support decision-making by giving explanations that fit the user’s goals and prior experience. This ability to make technical skills more accessible, the authors argue, is likely to change how people work with technology, making it more inclusive. That’s exactly the power of generative AI I discovered this November.
But, as one would expect, not everyone agrees with this rosy picture. The 2024 Nobel Economics laureate Daron Acemoglu has raised concerns that AI may not be as beneficial for those at the bottom of the income distribution as it appears. His research shows that while AI can increase productivity, its effects on inequality depend heavily on how tasks are allocated between labour and capital. Automation tends to displace workers, particularly those in low-skill roles, and shifts value towards capital, exacerbating income disparities. Acemoglu argues that the labour share of income has been steadily declining in many economies because rapid automation has not been counterbalanced by the creation of new, labour-intensive tasks. This trend, he warns, may continue with AI unless its development and deployment prioritise complementing human workers rather than replacing them.
So far, though, the evidence points to generative AI playing a complementary rather than a substituting role. But governments can certainly take proactive steps to help entrepreneurs and workers maximise this complementarity. First, ensuring widespread, affordable internet access is critical. This includes not only supporting fibre expansion across South Africa but, most importantly, also embracing innovative solutions like Elon Musk’s Starlink. Last week, SpaceX completed the first orbital shell of its direct-to-cell Starlink constellation, launching 20 satellites into Earth’s orbit to enable direct-to-cellphone connectivity anywhere on the planet. While bandwidth per beam is currently limited to about 10 Mbps, future constellations promise significantly improved capabilities, according to Musk. It is entirely possible that within a year or two, phone calls from anywhere between Kruger and Kgalagadi, Pongola and Sea Point, won’t require any terrestrial mobile infrastructure. Talk about leapfrogging technology. To enable this, however, will require tackling the entrenched interests of South Africa’s oligopolistic mobile providers, which will demand significant political resolve.
Second, regulatory frameworks need to encourage innovation while balancing risks. There will undoubtedly be Luddites who resist these changes. Academics, in particular, may be critical of the benefits, partly because, I suspect, they fear that tools like ChatGPT could displace them. University policies already reflect this reluctance, discouraging AI use by students or failing to support it. The irony is that these highly skilled students, the very ones expected to lead, may be the ones left behind, unprepared for a workplace that will increasingly demand collaboration with robot ‘thought partners’.
Third, the government must itself invest in AI tools, including Digital Public Infrastructure (DPI), to improve service delivery and drive inclusion. By automating processes, securing biometrics, and eliminating identity fraud, these systems could ensure access to essential services, empower citizens and bring many into the formal economy. ‘Digital transformation’, Home Affairs Minister Leon Schreiber said in a speech to the National Council of Provinces in November, ‘holds the key to modernising every government service in South Africa’. That’s a good start.
Will AI reduce inequality? It depends. The evidence so far suggests this is not akin to earlier waves of automation, where low-skilled jobs were most at risk. Generative AI will be spectacularly beneficial to skilled workers in a range of domains, from software programming to content production to medicine. It is most likely to replace tasks rather than jobs. For example, doctors rarely have enough time to spend with each patient. A robot ‘thought partner’ that is fed the patient’s medical history can free doctors to spend more time with their patients and communicate better with them. It has even helped this economic historian focus on the things I really enjoy about research, like thinking through the most compelling research question and crafting a convincing argument.
Its impact will, therefore, depend on who uses it. If it remains the domain of high-income individuals, AI will likely have little effect on inequality – or worse, deepen it. But if we can extend access to those at the bottom of the income distribution, the gains could be transformative, not only for those individuals but for the country as a whole. Of all economic policies, this should be at the top of the government’s list, because the potential rewards are exponential: a more inclusive economy where technology empowers rather than divides.
An edited version of this article was published on News24. Support more such writing by signing up for a paid subscription. The images were created with Midjourney v6.1.
It is indeed great what GenAI is bringing to the masses.
What must be remembered, though, is that these machines are ultimately predictors of the next word with a certain probability, and that hallucinations are fundamentally part of their "character".
That said, these things are very, very useful – especially for routine tasks (as you also mention).
When it comes to coding, though, one must be careful: these things can make subtle mistakes that will only be spotted by an expert eye. In many cases this won't matter, but when it does, it can land you in hot water. While I believe these tools are useful to full-time junior software engineers, such people must be careful. AI really is an expert tool in some cases.
It's also useful for rubber ducking (https://en.wikipedia.org/wiki/Rubber_duck_debugging).
Then there's the issue of copyright. We have some interesting years ahead.