Same but more-ism
When you rise to the top of an institution, you rarely aim to change the structure of that institution. What does this imply for public policy?
Views my own, not those of my employer
A 1960s IBM mainframe
I spend a lot of time asking people much smarter than me what we should do to fix certain problems in science and technology. Many of the solutions offered are promising.
Yet there is also a category of proposals that I find both fascinating and frustrating. They come down to what I would call 'same-but-more-ism'. Several times in recent years, I have heard people at the top of big organisations that influence the UK government say that the way for the UK to lead in AI, and other critical fields, is “more PhD students...and we should rejoin Horizon”.
Now, there is a strong case for having more computational PhD students, but much more fundamental industrial, regulatory and political challenges will determine whether the UK can master AI and other emerging technologies. Yet again and again, this is what you hear from some leaders in the field (especially in academia).
This 'same-but-more-ism' is a perennial feature of public policy discourse. Medical practitioners insist that "to fix the NHS we need more GPs". Teaching tsars claim "we just need more teachers". How many times have we heard that what we need to improve policing is "more bobbies on the beat"?
When you hear senior leaders of big institutions talk about how governments could support their sector, bear this in mind. These answers may simply reflect a lack of imagination, but in many cases the unwillingness to question the underlying DNA of the institutions they represent shows how pervasive the problem is.
It is very difficult to see the world beyond what it currently is. James Phillips, a colleague of mine, recently gave a very interesting internal presentation to TBI colleagues exploring these themes.
We often lack the conceptual language to describe how something could be done differently, because the innovation hasn't been made yet. In the 1960s, before the personal computer had been developed, it was very difficult to persuade White House officials to invest in small, individualised machines, because they simply did not exist in the mainstream. At that point in time, the computing paradigm was the huge mainframe: one large processor carrying out an office's worth of requests.
What IBM told the government it needed is much like what many people tell governments today about their own domains: "What we need is more, and bigger, mainframes". IBM was then the market leader of the computing world (producing 70% of all mainframes worldwide), but a market leader that had become bloated and was stuck in the previous paradigm. Moving past mainframes was beyond their conception, and bad for their business model.
Innovation is bad for the ‘personal status’ business model of those who have risen to the top of bureaucracies. Why would they change the rules of the game that got them to the top of the tree? As a result, policy advice and action often veer away from disrupting the status quo and instead reinforce 'same-but-more-ism'. The antidote is to look beyond the traditional institutions for advice, to people who have a language for things that aren't quite yet in the mainstream. A mentor's advice rings loudly in my ear: "You must look to the edges to find the centre".
Wholly agree in conceptual terms, but this must co-exist with the fact that sometimes you do also need more of things. To take one of my main preoccupations, the armed forces: there are many innovations in technology and doctrine that need to be absorbed, but the UK also currently just has too few sailors, soldiers and airmen. I think many would argue the same is true of the police.