Left-shift views - Algorithmic thinking and decision making

Our correspondent Melissa Harvard looks outside the box to provide a radical solution for healthcare

If senior health and social care leaders were to behave more like computers, would services improve? Specifically, if they were to unpack the processes through which they make decisions, would they be able to learn, improve and share lessons in ways that improve service outcomes? 

All too often, decisions appear to be made in a black box – invisible to outsiders and shrouded in mystery. It's perhaps for this reason that marginal decisions are criticised as being affected by postcode lotteries. 

Detailed scrutiny of complex decisions often only occurs under internal pressure, from members in local government or from central government and boards in health. And when things go horribly wrong, inner workings can be exposed through serious case reviews. Even so, misdirection can interfere with detailed analysis. 

The reality is that decision making relies upon algorithmic thinking – following a series of steps from problem to solution in a pre-determined order, taking into account a range of factors and criteria. This is also what AI does, as well as drawing on data to predict the best next steps in any given scenario. 
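To make the analogy concrete, an algorithmic decision can be sketched as a fixed sequence of checks applied in a pre-determined order. The criteria, field names and thresholds below are purely hypothetical, chosen only to illustrate the step-by-step structure the article describes:

```python
# A minimal sketch of algorithmic decision making: ordered criteria
# applied one after another. All criteria and thresholds here are
# illustrative assumptions, not a real funding policy.

def decide_funding(request):
    """Walk a request through ordered checks; return (approved, reason)."""
    steps = [
        ("within_budget", lambda r: r["cost"] <= r["budget_limit"]),
        ("meets_policy",  lambda r: r["policy_compliant"]),
        ("clinical_need", lambda r: r["need_score"] >= 7),
    ]
    for name, check in steps:
        if not check(request):
            return False, f"failed step: {name}"
    return True, "all criteria met"

request = {"cost": 5000, "budget_limit": 10000,
           "policy_compliant": True, "need_score": 8}
print(decide_funding(request))  # (True, 'all criteria met')
```

Because every step is explicit and ordered, the reasoning behind an outcome can be inspected, logged and compared across cases – exactly the transparency the black-box criticism says is missing.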

But there are several key differences between humans and AI here, and they matter. First, computers can make decisions faster and more efficiently than people. And they can do it all day and all night without tiring or letting fatigue-driven biases creep in. 

Second, computers are cheaper. Increasingly, for example, AI is being used to sift job candidates, ruthlessly applying a range of criteria and filtering out unsuitable candidates, saving money and time. 

Finally, computers can learn quickly. Proponents of AI-driven cars argue that in a computer-controlled world, AI could instantly share lessons across an AI network after incidents and accidents, gradually making traffic safer and removing risk from the system. People don't learn in the same way. 

And this is where a shift in thinking could vastly improve the quality and shareability of decision-making. If each element of a decision was unpacked, logged and analysed over a period of, say, three months, AI could be used to pull out the key lessons. 
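The logging idea above can be sketched in a few lines: record each element of a decision (when it was made, who was present, the outcome) and then aggregate the log to surface patterns. The field names and the before/after-lunch split are illustrative assumptions, not a prescribed schema:

```python
# A hedged sketch of decision logging: capture each decision's context,
# then aggregate to surface patterns such as time-of-day effects.
# Field names and the lunch cut-off are illustrative assumptions.

from collections import Counter
from datetime import datetime

decision_log = []

def log_decision(when, attendees, outcome):
    """Record one element of a decision for later analysis."""
    decision_log.append({"when": when, "attendees": attendees,
                         "outcome": outcome})

def deferrals_by_period(log):
    """Count deferred decisions made before versus after lunch."""
    counts = Counter()
    for entry in log:
        period = "morning" if entry["when"].hour < 13 else "afternoon"
        if entry["outcome"] == "deferred":
            counts[period] += 1
    return dict(counts)

log_decision(datetime(2025, 1, 6, 11, 30), ["CEO", "CFO"], "approved")
log_decision(datetime(2025, 1, 6, 12, 45), ["CEO"], "deferred")
log_decision(datetime(2025, 1, 13, 14, 0), ["CEO", "CFO"], "deferred")
print(deferrals_by_period(decision_log))  # {'morning': 1, 'afternoon': 1}
```

In practice the analysis would run over a much larger log and richer features, but even this toy aggregation shows how systematic capture makes patterns queryable rather than anecdotal.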

Of course, the analysis would start by ensuring that all proper processes were followed – adherence to regulations, the law, local policies and financial rules. But what if the research went deeper and considered other factors not normally exposed? 

For example, is the quality or nature of the decision affected by the time of the day at which it is made? It's been well-documented that people's decision-making capacity will shift depending on a range of biases, a significant one of which is hunger; our decisions before and after lunch can be wildly different. 

What about the people who are in the room – does the composition of the meeting, the presence of particular people or the inclusion of any form of external scrutiny affect the quality of decision-making? 

And then there are external factors. Difficult decisions can, for example, be bumped until after elections, particularly those involving cuts. And the noise surrounding other organisations' challenges or crises might affect the way that decisions are taken. This zeitgeist effect can influence senior leaders' thinking, as can 'fear of the Daily Mail front page' syndrome. Neither should have anything to do with the decisions at hand and yet, surprisingly, they may. 

AI can be used to detect patterns that are invisible to participants. Such insight may be powerful in enabling better decisions – or at the very least, help to avoid slots when the chief executive may be in a bad mood, for whatever reason. 

Publicly unpacking decision-making processes, routinely learning from other organisations and using this shared insight to anticipate emerging problems might not only prevent untoward incidents but could increase accountability. 

Culturally, it could be a stretch. Some organisations appear to live by the maxim: never apologise, never explain. Certainly, opaqueness gives senior leaders cover and wriggle room when there are fears that things might not work out as planned. 

But where decision-making is unpacked, open to scrutiny and able to be shared, lessons could be learned before things go awry. All too often, those in the lesson-learning business do so after the fact, which is of little comfort to service users. 
