AI vs machine learning: what’s the story behind the data?

Organisations in almost every industry – including the public sector – are increasingly discovering new opportunities through the connection between artificial intelligence (AI) and machine learning. Rarely will a local government, healthcare or policing digital strategy fail to include the terms, with AI and machine learning often used interchangeably to pursue all sorts of business and end-user benefits.

When drawing up these strategies, the AI vs machine learning question – and which is best for an organisation – is frequently asked. It’s a fair question, given that the two are distinct concepts. The general definition is that AI is the broad idea of computers being trained to perform tasks, while machine learning is the subset of AI in which those machines are trained on past data. That’s why I like to describe machine learning as “applied statistics”.

Driving better outcomes by connecting AI and machine learning

The truth is that AI and machine learning are very closely related. There is no hard line separating them, so when you weigh AI vs machine learning you are really examining how the two interconnect. To realise their full benefits, organisations need to consider the wider field of data science.

Facilitating decision-making through automated processes benefits any public sector organisation that wants to unlock internal efficiencies, avoid costs, and drive better outcomes for its citizens. Examples include surfacing commonly used phrases in unstructured call-centre text to identify failure demand, or risk-stratifying individuals with a higher likelihood of moving from domiciliary to residential care.
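
To make the first example concrete, here is a minimal sketch of surfacing frequent phrases from call-centre notes. The sample notes, the two-to-three-word phrase window, and the reading of repeat-contact phrases as failure demand are illustrative assumptions, not a description of any production pipeline.

```python
# A minimal sketch of surfacing common phrases from call-centre notes.
# The sample notes and the idea of flagging repeat-contact phrases as
# possible "failure demand" are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer

notes = [
    "caller chasing a missed bin collection again",
    "second call about a missed bin collection",
    "update on housing repair still not completed",
    "housing repair still not completed after two weeks",
]

# Count two- and three-word phrases across all notes.
vectoriser = CountVectorizer(ngram_range=(2, 3), stop_words="english")
counts = vectoriser.fit_transform(notes).sum(axis=0).A1

# Rank phrases by frequency; recurring phrases about chasing earlier
# requests can hint at failure demand (repeat contact caused by an
# earlier service failure).
ranked = sorted(zip(vectoriser.get_feature_names_out(), counts),
                key=lambda pair: -pair[1])
for phrase, count in ranked[:5]:
    print(f"{count:2d}  {phrase}")
```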

When striving for these benefits, there is a misconception that the larger the dataset, the better. That only holds where data formats and processes are consistent; in practice, AI’s capability is often limited by the quality and standardisation of the data rather than by its volume.

It’s the story behind the data that matters

Understanding the “story” behind the data is half – arguably more – of a data scientist’s job. It is easy to build a model that is more intelligent than the data can support; the real challenge is being creative with methods and tailoring the solution to the problem.

What is the reason behind missing data? What happens when a model makes an incorrect crime-rate forecast? How much of the decision-making should rest with the AI, as opposed to the clinician, service, or end consumer of the model’s insights? These are the questions a data scientist should ask before thinking about ‘solutionising’ and building an AI system.

For example, Agilisys has used a competition-based model to forecast crime rates at different city locations. Candidate forecasting algorithms of varying intricacy compete for each location, which makes the overall model adaptable to the complexity of crime. City centres tend to have higher crime rates and some seasonality, which calls for a more complex model, whereas rural areas with only one or two crimes a month can be served by a simple straight-line average.
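
The sketch below shows the competition idea in miniature: for each location, candidate models of different complexity are backtested on a hold-out period and the most accurate one wins. The two candidates and the synthetic data are illustrative assumptions, not the actual Agilisys implementation.

```python
# A minimal sketch of a "competition-based" forecaster: per location,
# candidate models compete on a hold-out period and the winner is kept.
import numpy as np

def mean_model(history, horizon):
    """Straight-line average: fine for quiet areas with sparse counts."""
    return np.full(horizon, history.mean())

def seasonal_naive(history, horizon, period=12):
    """Repeat last season's values: suits areas with clear seasonality."""
    last_season = history[-period:]
    return np.array([last_season[i % period] for i in range(horizon)])

def compete(history, horizon=6):
    """Backtest each candidate on the final `horizon` points and return
    the winner's name plus its forecast for the next `horizon` points."""
    train, test = history[:-horizon], history[-horizon:]
    candidates = {"mean": mean_model, "seasonal": seasonal_naive}
    errors = {name: np.abs(model(train, horizon) - test).mean()
              for name, model in candidates.items()}
    winner = min(errors, key=errors.get)
    return winner, candidates[winner](history, horizon)

# Synthetic monthly crime counts: a seasonal city centre vs a quiet rural ward.
rng = np.random.default_rng(0)
city = 50 + 10 * np.sin(np.arange(48) * 2 * np.pi / 12) + rng.normal(0, 2, 48)
rural = rng.poisson(1.5, 48).astype(float)

for name, series in [("city centre", city), ("rural ward", rural)]:
    winner, _ = compete(series)
    print(f"{name}: best model = {winner}")
```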

Another example is the work we have completed with hospital records, summarising uploaded text to detect and surface medical concepts, assign assertions, infer relations between them, and link them to common medical ontologies. This condenses large bodies of text for clinicians who are already working at a demanding pace.
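
As a rough illustration of the concept-detection and ontology-linking steps, the sketch below pairs a dictionary matcher with a crude negation rule. The mini-ontology, SNOMED-style codes, and three-token negation window are illustrative assumptions; a production system would use trained clinical NER, an assertion classifier, and a full ontology linker.

```python
# A minimal sketch: detect medical concepts in a note, assign a simple
# present/absent assertion, and link each concept to an ontology code.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # tokeniser only; no trained model required

# Hypothetical mini-ontology mapping surface forms to codes.
ONTOLOGY = {
    "atrial fibrillation": "SNOMED:49436004",
    "chest pain": "SNOMED:29857009",
    "hypertension": "SNOMED:38341003",
}

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("CONCEPT", [nlp.make_doc(term) for term in ONTOLOGY])

NEGATION_CUES = {"no", "denies", "without"}

def annotate(text):
    doc = nlp(text)
    for _, start, end in matcher(doc):
        span = doc[start:end]
        # Crude assertion rule: negated if a cue appears shortly before.
        window = {t.lower_ for t in doc[max(0, start - 3):start]}
        assertion = "absent" if window & NEGATION_CUES else "present"
        yield span.text, assertion, ONTOLOGY[span.lower_]

note = "Patient denies chest pain. History of hypertension and atrial fibrillation."
for concept, assertion, code in annotate(note):
    print(f"{concept:20s} {assertion:8s} {code}")
```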

These examples have been successful for two reasons. Firstly, the technology has been applied correctly. Secondly, deep technical knowledge has been combined with domain expertise to understand the nuances and context of the problem we are trying to solve and the outcomes we seek to achieve with the technology.

Take a ‘fail fast’ approach to data science

It is imperative to understand that machine learning is not a one-stop solution for everything, and businesses should take a ‘fail fast’ approach to data science. In a field where it is easy to reach false and biased conclusions, a dangerous attitude is to force a solution by reaching for ever more complex machine learning models.
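
In practice, ‘fail fast’ can be as simple as checking whether a complex model actually beats a trivial baseline before investing further. The sketch below shows one way to run that check; the synthetic dataset and choice of models are illustrative assumptions.

```python
# A minimal "fail fast" check: compare a complex model against a trivial
# baseline under cross-validation before committing to it.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Small, noisy dataset of the kind common in the public sector.
X = rng.normal(size=(80, 5))
y = (X[:, 0] + rng.normal(scale=2.0, size=80) > 0).astype(int)

models = [
    ("baseline", DummyClassifier(strategy="most_frequent")),
    ("gradient boosting", GradientBoostingClassifier()),
]
for name, model in models:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:18s} accuracy = {score:.2f}")

# If the complex model cannot clearly beat the baseline, stop and revisit
# the data and problem framing rather than adding model complexity.
```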

Andrew Ng, the co-founder of Google Brain, has stated that the next frontier for artificial intelligence as a whole is the ability to work with small datasets.

This is crucial for the public sector, where there is less standardisation of data formats and collection, making the implementation of this technology much more difficult.

However, this does not mean innovation in the public sector is doomed. We’ve seen that creativity and comprehension are just as crucial a part of a data scientist’s skillset as mathematical and technical knowledge. Investing in people with both technical and domain expertise, and harnessing a culture of curiosity and innovative thinking, can help shape a successful future for AI in the public sector.

Book your complimentary AI and machine learning workshop

We’re offering a free workshop for all public sector organisations to discuss how AI and machine learning can take your services to the next level and transform your organisation – and answer the AI vs machine learning question.

In this workshop, we will collaborate with you to define problem statements where the impact for your organisation would be high, then identify an insight gap that lends itself to an advanced analytics approach using machine learning and artificial intelligence (ML/AI).

Click here to book your workshop with our specialist public sector AI team.