Working to rule: humanity and the history of the algorithm
Can all intellectual endeavour be explained, and pursued, in terms of algorithms? Read Sandy Starr's expert lecture from the Battle of Ideas festival 2024...
In many ways, we seem to live our lives according to the output of algorithms. They determine the suggestions for the next thing to watch on Netflix and the next thing to buy on Amazon. Our social-media timelines throw up posts that algorithms deem to be popular or likely to be of particular interest to us – there are claims this process stoked the summer riots in the UK – along with a side order of personalised adverts. One way or another, algorithms are blamed for many of society's ills.
Over the course of their history, algorithms have proved to be an increasingly useful and powerful tool, while confronting humanity with an unsettling question – can all intellectual endeavour be explained, and pursued, in terms of algorithms? Answers to this question, once discovered, gave birth to the rich field of computer science. However, these answers were themselves strange and unsettling, and have been cast in new light by subsequent developments.
At the Battle of Ideas festival this year, deputy director of the Progress Educational Trust Sandy Starr gave a lecture on the history of the algorithm. We're delighted to publish Sandy's expanded speech from the weekend's discussion in full, below. You can also get a copy of his Letter on Liberty – AI: Separating Man from Machine – here.
This lecture was given at the Battle of Ideas festival on 19 October 2024, and was dedicated to the memory of Professor Ross Anderson (1956-2024) and Dr Helene Guldberg (1965-2022). Ross Anderson established the Foundation for Information Policy Research, and did much to promote public understanding of algorithms. Helene Guldberg was a founder of the online publication spiked, and her final published work was a spiked essay about the history of Islam, including a period that we discuss in this lecture.
Working to rule: humanity and the history of the algorithm
Artificial intelligence (AI) dominates the news, and has seen some astonishing advances in recent years, but is not itself new.
This year's Nobel Prizes in physics and chemistry were awarded for AI-related breakthroughs dating as far back as 1982, while the phrase 'artificial intelligence' is older still. The current meaning of the phrase can be traced back to a 1955 proposal, for investigation of 'the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it'.
Fundamental to AI, and to electronic computing and information technology more generally, is an even older idea – that of the algorithm. An algorithm is a self-contained and precisely defined procedure, made up of a finite number of ordered steps, that can be used to accomplish a task.
The eminent American computer scientist Donald Knuth has discussed algorithms that are described in cuneiform on clay tablets from the First Babylonian Empire, dating from 1800-1600 BC. The term 'algorithm' had yet to be coined (we will discover where it came from shortly), but Knuth argues that 'the Babylonian procedures are genuine algorithms, and we can commend the Babylonians for developing a nice way to explain an algorithm by example'.
The oldest Babylonian algorithms concern something that would later become known as mensuration – taking certain measurements, and inferring other measurements from them. The 'nice way' that the ancient Babylonians explained their algorithms was with reference to ascertaining the height, width, cross-sectional area or volume of water cisterns, in situations where some but not all of these measurements were known at the outset.
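To give a flavour of the genre – and this is only a modern paraphrase in Python, not a transcription of any actual tablet – such a procedure might look like the following:

```python
# A modern paraphrase of the kind of mensuration procedure the
# tablets describe -- illustrative only, not a rendering of an
# actual Babylonian text.

def cistern_depth(length: float, width: float, volume: float) -> float:
    """Given a rectangular cistern's length, width and volume,
    recover its depth in two ordered steps."""
    base_area = length * width   # step 1: multiply length by width
    return volume / base_area    # step 2: divide the volume by the base area

print(cistern_depth(5.0, 3.0, 30.0))  # depth of 2.0
```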
If we jump forward more than a thousand years – from ancient Babylon to Alexandria circa 300 BC, a few decades after that city was founded by Alexander the Great – we find the oldest algorithm commonly attributed to a named individual. The individual in question is the Greek mathematician Euclid and the algorithm in question is described in Euclid's Elements, perhaps the most enduringly successful non-religious book in all of human history.
The first and second propositions in the seventh volume of the Elements set out an algorithm that we now call 'Euclid's algorithm'. If you take two different whole numbers and apply Euclid's algorithm to them, then you will find their highest common factor – that is, the largest whole number that divides both of the initial numbers without leaving a remainder. Several interesting aspects of algorithms are already apparent from this example.
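Before turning to those aspects, it is worth pausing on how compact the procedure is. Here is a minimal sketch in Python of the remainder-based variant (Euclid's own text proceeds by repeated subtraction, but the underlying logic is the same):

```python
def highest_common_factor(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the last non-zero
    value is the highest common factor."""
    while b != 0:
        a, b = b, a % b
    return a

print(highest_common_factor(1071, 462))  # 21
```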
For one thing, there is a reassuring certainty that comes from employing certain algorithms in certain contexts. If Euclid's algorithm is applied correctly to two different whole numbers, then it is guaranteed to yield a correct result. (An important caveat is that if you apply the algorithm to two numbers that are sufficiently large, then you may not arrive at this result within your lifetime – the efficiency of Euclid's algorithm would later become a rich topic of discussion in itself.)
For another thing, one can already sense – in embryonic form – the idea of automation in an algorithm. Obviously, the ancients had no recourse to electronic calculating or computing, and so they had to work by hand. But once you have a reliable algorithm, then from that point forward you can go on proverbial autopilot. Human ingenuity and insight have already been baked into the algorithm, and your task now is to competently carry out the instructions.
Then there is the larger context within which Euclid's algorithm exists – the context of his book, the Elements. This was a foundational text, exemplifying methods of reasoning and proof not just in mathematics, but in science and thought more broadly. The Elements would even go on to have a powerful influence on politics, shaping (for example) the way that Thomas Jefferson and his associates drafted the United States Declaration of Independence.
The Elements shows that you can begin with a very small collection of axioms – things you assume to be true at the outset, without having to prove them – and then build a great edifice of demonstrable truths that proceed from these initial axioms. A question naturally arises from this. Is the process of reasoning and proof itself equivalent to, or reducible to, the carrying out of algorithms? In other words, could the entire thing – all of the reasoning contained in Euclid's Elements, or a comparable text – be automated? Hold that thought.
If we fast forward 700 years, from 300 BC to the fifth century AD, then we meet a philosopher – Proclus Lycius – who studied in Alexandria before going on to settle in Athens, and who wrote an influential commentary on the Elements. When this commentary was translated from Greek into English in the late eighteenth century, the translator (Thomas Taylor) used the phrase 'artificial intelligence' in a passage where Proclus quotes another thinker, Carpus of Antioch.
This is a candidate for the first ever appearance of the phrase 'artificial intelligence' in English. The phrase had yet to acquire its present-day meaning – here, it means something akin to 'expert ability' or 'special insight' – but its appearance is relevant to our theme.
Carpus, as quoted by Proclus (and then translated by Taylor), says that setting out a means of solving a practical problem 'is simple, and requires no artificial intelligence' because it 'commands us to accomplish something evident'. By contrast, setting out a more abstract insight 'is difficult, and requires the most accurate power, and a judgment productive of science'. Proclus draws upon these ideas of Carpus while observing that Euclid's first three propositions in the first volume of the Elements offer algorithm-like solutions to practical problems, whereas Euclid's fourth proposition offers a different (deeper?) sort of truth.
Let us jump forward another few centuries, from the fifth century AD to the eighth and ninth centuries, and let us move from Alexandria back to Babylon – or rather to Baghdad, which by this point in history had been established by an Islamic dynasty (the Abbasids) north of Babylon's ruins. We find ourselves now in a period of flourishing that later became known as the Golden Age of Islam (a phrase coined in the nineteenth century – funnily enough, by an Irish Presbyterian missionary).
It was in Baghdad, under the fifth Abbasid caliph (Hārūn al-Rashīd), that a scholar named al-Ḥajjāj created the first known translation of Euclid's Elements from Greek into Arabic. This translation was substantially revised by al-Ḥajjāj under the seventh Abbasid caliph (al-Ma'mūn), and these and other Arabic versions of the Elements played an important role in the transmission of Euclid's text. A complex web of medieval translations, retranslations and adaptations of the Elements emerged, encompassing not just Greek and Arabic but also Hebrew, Latin, Persian and Syriac.
Another significant development under the seventh caliph was the appointment of Muḥammad al-Khwārizmī as head of Baghdad's caliphal library, the House of Wisdom. This scholar – whose name means 'Muḥammad from Khwārizm' (Khwārizm in Central Asia is where he spent his earlier life, according to some accounts) – wrote influential books, including one whose (abbreviated) title is al-Jabr. This Arabic word means 'completion' or 'restoration', and its use in the title of al-Khwārizmī's book is the origin of our modern word 'algebra'.
Like Euclid's Elements, al-Khwārizmī's books were translated into medieval Latin, where the author's name was rendered as 'Algoritmi' or 'Algorismi'. These renderings of his name were then adapted into words describing numerals and numerical tools, such as the Middle English word 'augrym'. In the late fourteenth century, we find Geoffrey Chaucer discussing 'Augrym stones' in his Canterbury Tales and 'nowmbres of augrym' in his Treatise on the Astrolabe.
'Augrym' and related words evolved in turn into 'algorithm', and by the mid-seventeenth century, we find 'algorithm' included in Edward Phillips' New World of English Words, where it is described as 'a word compounded of Arabick and Spanish' and is defined as 'the art of reckoning by Cyphers'. By the late nineteenth century, the meaning of the word 'algorithm' stabilises, and we start to see references to 'Euclid's algorithm'. In short, Euclid became associated with an algorithm, but al-Khwārizmī effectively gave his name to all algorithms (Euclid's included).
The late nineteenth century also saw one of the world's most prominent thinkers, German mathematician David Hilbert, re-axiomatise the work of Euclid from more than 2,000 years earlier. Hilbert revised the axioms of the Elements – the book's starting assumptions and definitions – as he put it, 'in such a manner as to bring out as clearly as possible the significance of the different groups of axioms and the scope of the conclusions to be derived from the individual axioms'.
Hilbert went on to give a landmark lecture in Zürich during the First World War (later translated by William Ewald), which was entitled 'Axiomatic Thought' and which described problems including the following:
'The problem of the solvability in principle of every mathematical question.'
'The problem of the subsequent checkability of the results of a mathematical investigation.'
'The problem of the decidability of a mathematical question in a finite number of operations.'
These sorts of problems culminated in Hilbert's Entscheidungsproblem, or 'decision problem'. A 1928 book written jointly by Hilbert and his colleague Wilhelm Ackermann posed this problem as follows: 'Is it possible to determine whether or not a given statement pertaining to a field of knowledge is a consequence of the axioms?'
This is arguably tantamount to asking whether an entire province of intellectual endeavour can be reduced to settling upon appropriate axioms and then carrying out certain algorithms – in other words, advancing human knowledge on autopilot. Another way of expressing this idea, in relation to Hilbert's discipline of mathematics, is to ask the following. Is there really a difference between a mathematician and a computer? Or is a mathematician just a very sophisticated type of computer?
It is important to remember that electronic computers had not yet been developed in the 1920s (notwithstanding nineteenth-century proposals for comparable devices that I have discussed previously), and a 'computer' at the time meant a human computer – a person employed to carry out rote, algorithm-like procedures. Hilbert wanted to find out whether mathematical advancement was, deep down, of a piece with computing.
It transpires that there are a few specific contexts (including one particular approach to re-axiomatising Euclid) in which the Entscheidungsproblem can be said to have a solution, but that the problem can have no general solution. The latter was proved in the 1930s, first by the American logician Alonzo Church and then – more famously – by the British computing pioneer Alan Turing.
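Turing's route to this result ran through what we now call the halting problem. The diagonal argument at its heart can be loosely sketched in Python – loosely, because the decider 'halts' below is hypothetical, and the whole point of the proof is that no such function can exist:

```python
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical decider: returns True if the given program halts
    on the given input. Church and Turing proved that no such
    function can exist; it is stubbed here only so that the
    contradiction below can be stated."""
    raise NotImplementedError("no general halting decider can exist")

def paradox(program_source: str) -> None:
    """Does the opposite of whatever the assumed decider predicts."""
    if halts(program_source, program_source):
        while True:
            pass  # loop forever if the decider predicted halting
    # otherwise, halt immediately -- contradicting the decider again
```

A 'yes' from the decider makes 'paradox' loop forever, and a 'no' makes it halt at once – so when 'paradox' is asked about its own source code, no consistent answer is possible.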
It is from these proofs of Church and Turing that the discipline we now call 'computer science' was born. The proofs established that a general-purpose computing device can emulate everything that a human computer was previously employed to do, which is why 'computer scientist' became an honourable job description for a human while 'computer' per se ceased to be so (and humanity is the better for it).
At the same time, Church and Turing established that a general-purpose computing device cannot emulate everything that a human mathematician does. In other words, we cannot successfully reduce the advancement of human knowledge to deciding upon axioms and then carrying out algorithms. 'Artificial intelligence', in the old-fashioned sense of Proclus and Carpus – 'a judgment productive of science' – is something that mathematicians (and humans more generally) must continue to bring to the table.
Turing later became synonymous with the idea that machines might become indistinguishable from humans (at least in certain contexts), following a 1950 paper in which he set out what became known as the 'Turing test'. This idea of Turing's helped inspire the 1955 proposal for research into a different notion (nowadays the prevailing notion) of 'artificial intelligence', which is where our lecture began.
It is a testament to Turing's wide-ranging thought and restless curiosity that he established a distinction between machines and humans, only to then consider how this distinction might be overcome. Still, it is worth remembering that computer science owes its existence not just to his discovery of how algorithms can emulate and serve humans, but also to his discovery of limits to such emulation.
Both halves of Turing's insight were required to bring about our algorithm-saturated world. Both halves of his insight remain relevant today, as we grapple with the latest advances in AI.
Sandy Starr is deputy director of the Progress Educational Trust and is author of AI: Separating Man from Machine, a pamphlet in the Letters on Liberty series.