Exploring the use of artificial intelligence (AI) in Adult Social Care

Across the UK, there is a growing interest on how Artificial Intelligence-AI- can be used in Adult Social Care services and delivery. Artificial Intelligence (AI) is comp-lex and fast-moving, making it difficult to define, but broadly, it refers to the ability of computers to mimic human thought and perform tasks. This Network will explore the various ways AI is being used in Adult Social Care, and the potential benefit, challenges and risks it brings.

Exploring the use of artificial intelligence (AI) in Adult Social Care

Different Types of AI

Algorithms (sets of step-by-step instructions) that enable systems to identify patterns in data, make decisions, and ‘learn’.

Uses machine learning to understand text and respond, seen in applications that extract data from documents, chatbots, and voice assistants.

Technologies trained on existing data to create new material like text, images, and sounds.

AI, Policy, and Care Across the Four UK Nations

England

Policymakers are enthusiastic about AI, with the ‘AI Opportunities Action Plan’ launched in January 2025. Funding has been available through the ‘NHS AI Lab’, which includes social care too.

Scotland

The Scottish Government promotes ‘trustworthy, ethical and inclusive AI’ in adult social care through its AI Strategy, Digital Health and Care Strategy 2021, and Data Strategy for Health and Social Care 2023. Organisations like the Scottish AI Alliance and the Health and Social Care Alliance Scotland offer guidance and support.

Northern Ireland

The new Office of AI and Digital, established in June 2025, is tasked with creating an AI strategy and action plan likely to include adult social care.

Wales

The Artificial Intelligence (AI) Commission for Health and Social Care was created in 2024 to advise on safe and ethical AI use in health and social care. It endorses guidance such as the algorithmic transparency recording standard (ATRS).

Subtitle for This Block

Title for This Block

Text for This Block

Wearable technologies and ‘smart’ home devices use machine learning to monitor vital signs and home environments, generating alerts when unusual patterns of behaviour are detected, e.g. the person using the toilet more often.

Voice assistants like Alexa and Google Nests provide reminders and advice. Local authorities are developing or using existing ‘skills’ to provide service-specific information. Chatbots are used to help people navigate services and pass on support requests to the relevant team. An example is Hampshire County Council’s trial using Amazon Echo devices to improve lives of social care recipients, reducing isolation and providing reassurance to families.

Robots like Paro (a robotic seal) and Pepper (a humanoid robot) are designed to support social interaction. However, they are expensive which means there are only a few examples of their use in practice. 

These tools convert audio recordings into text and can use generative AI to summarize them, aiming to save time. An example is Magic Notes, used by local authorities like Kingston to automate transcription and summarization of care visits, saving social workers time on tasks like supervision write-ups. However, concerns exist regarding inaccuracies and ‘assumptions’ in the generated summaries, requiring practitioners to make edits.

The perceived benefits of AI in social care often include:

Predicting or preventing increased care needs or crises.

Increasing efficiencies in care service delivery

Greater potential for personalized care.

Accuracy and Security

AI can ‘hallucinate’, producing inaccurate information. This means what it produces needs to be carefully checked. AI is also vulnerable to cyberattacks, such as “prompt injection attacks”, making it produce offensive content or reveal confidential information, and “data poisoning” where training data is manipulated to produce negative outputs.

Trust

People are worried about whether AI can understand what they need, and about the risk of increasing loneliness if technology is introduced primarily to save money. 

Digital Infrastructure

Many AI applications in social care need reliable internet connections, which are often lacking in care settings, vary geographically and are hard for some people to afford.  

Human and Environmental Costs

AI or the devices that use AI can sometimes be done in factories where working conditions are bad. AI also needs a lot of energy to work, and freshwater to cool down the machines. The impact is worse in countries that are poorer.

Ethics

  • Informed Consent: AI is hard to understand and that makes it difficult for people to provide informed consent regarding its use in their care.
  • Data Use: Questions exist about what data is used to make predictions and what happens to data input into AI systems. AI ‘scrapes’ data, and it’s not always clear if people have agreed to their data being used in this way. Open AI systems, like Chat GPT, can make input data accessible to anyone, so sensitive data should not be entered. Closed-model AI systems are more secure and can use synthetic (made up) data to be less biased, but what it produces still needs to be checked. 
  • Bias: AI reproduces biases and errors present in existing data. If training data doesn’t include certain populations or is biased, predictions or generated content can be inaccurate or inappropriate.
  • Company Ethics: Some people are concerned about the way some of the AI companies behave- how they treat their workers, or whether they pay tax properly.

At the same time, there are concerns about the quality and independence of evidence supporting these claims, with much of it being small-scale and linked to technology developers. The British Association of Social Workers (BASW) advises caution and calls for more evaluation of AI tools.