Exploring the use of artificial intelligence (AI) in adult social care
The context
Across the UK, there is a growing interest in how artificial intelligence (AI) can be used in adult social care services and delivery. AI is complex and fast-moving, making it difficult to define, but broadly it refers to the ability of computers to mimic human thought and perform tasks. This Network will explore the various ways AI is being used in adult social care, and the potential benefits, challenges and risks it brings.
IMPACT Factfile
- Year: 2025 – 2026
- Delivery Model: Networks
- Themes:
- Resources:
- Discussion Material (PDF / Easy Read PDF / Audio)
Networks meetings
Networks are meeting across the UK, co-ordinated by:
In England:
Association for Real Change (ARC)
Home Wound Care
Peterborough City Council
Rethink Mental Illness
In Northern Ireland:
Digital Health and Care NI – DHCNI
In Scotland:
Abbotsford Care
In Wales:
Torfaen County Borough Council
Different types of AI
Machine learning
Algorithms (sets of step-by-step instructions) that enable systems to identify patterns in data, make decisions, and ‘learn’.
Natural Language Processing (NLP)
Uses machine learning to understand text and respond, seen in applications that extract data from documents, chatbots, and voice assistants.
Generative AI
Technologies trained on existing data to create new material like text, images, and sounds.
AI, policy, and care across the UK
England
Policymakers are enthusiastic about AI, with the ‘AI Opportunities Action Plan’ launched in January 2025. Funding has been available through the NHS AI Lab, whose remit also covers social care.
Scotland
The Scottish Government promotes ‘trustworthy, ethical and inclusive AI’ in adult social care through its AI Strategy, Digital Health and Care Strategy 2021, and Data Strategy for Health and Social Care 2023. Organisations like the Scottish AI Alliance and the Health and Social Care Alliance Scotland offer guidance and support.
Northern Ireland
The new Office of AI and Digital, established in June 2025, is tasked with creating an AI strategy and action plan likely to include adult social care.
Wales
The Artificial Intelligence (AI) Commission for Health and Social Care was created in 2024 to advise on safe and ethical AI use in health and social care. It endorses guidance such as the Algorithmic Transparency Recording Standard (ATRS).
How is AI being used in adult social care?
Machine learning: data from devices for alerts and predictions
Wearable technologies and ‘smart’ home devices use machine learning to monitor vital signs and home environments, generating alerts when unusual patterns of behaviour are detected, e.g. the person using the toilet more often.
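The pattern-detection step behind such alerts can be pictured with a minimal sketch. Everything here is an illustrative assumption, not any vendor’s actual method: the function name, the two-standard-deviation threshold, and the sample visit counts are all made up.

```python
from statistics import mean, stdev

def unusual_activity(baseline_counts, today_count, threshold=2.0):
    """Flag today's count as unusual if it deviates from the person's
    own baseline by more than `threshold` standard deviations
    (a simple z-score check)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        # No variation in the baseline: any change at all is unusual.
        return today_count != mu
    return abs(today_count - mu) / sigma > threshold

# A week of typical daily toilet visits, then an unusually high day.
baseline = [6, 7, 5, 6, 7, 6, 5]
unusual_activity(baseline, 12)  # flags the change in routine
```

Real systems learn each person’s baseline continuously rather than from a fixed week, but the principle is the same: alerts are triggered by deviation from that individual’s normal pattern, not by a universal rule.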
Natural Language Processing: voice assistants and chatbots
Voice assistants like Amazon Alexa and Google Nest provide reminders and advice. Local authorities are developing their own ‘skills’ or using existing ones to provide service-specific information. Chatbots are used to help people navigate services and pass support requests on to the relevant team. One example is Hampshire County Council’s trial of Amazon Echo devices to improve the lives of people receiving social care, reducing isolation and providing reassurance to families.
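The routing that such chatbots perform can be pictured with a deliberately simplified sketch. The keyword-to-team mapping and team names below are invented for illustration; real council chatbots use NLP models to interpret free text rather than fixed keyword rules.

```python
# Hypothetical keyword-to-team mapping, for illustration only.
ROUTES = {
    "blue badge": "Parking and Transport",
    "home care": "Adult Social Care",
    "meals": "Community Support",
}

def route_request(message):
    """Match a request to the team whose keyword it mentions,
    falling back to a human adviser when nothing matches."""
    text = message.lower()
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "General enquiries (human adviser)"

route_request("I need help applying for a blue badge")
```

The fallback branch matters most in practice: a chatbot that cannot hand over to a person when it fails to understand a request is exactly the kind of system the Networks raised concerns about.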
Machine learning and Natural Language Processing: socially assistive robots
Robots like Paro (a robotic seal) and Pepper (a humanoid robot) are designed to support social interaction. However, they are expensive, which means there are only a few examples of their use in practice.
Generative AI: AI-enabled ambient scribing tools
These tools convert audio recordings into text and can use generative AI to summarise them, aiming to save time. An example is Magic Notes, used by local authorities like Kingston to automate transcription and summarisation of care visits, saving social workers time on tasks like supervision write-ups. However, concerns exist regarding inaccuracies and ‘assumptions’ in the generated summaries, requiring practitioners to make edits.
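The transcript-to-summary step can be pictured with a toy extractive sketch. Tools like Magic Notes use generative models rather than this approach; the keyword-scoring method, function name, and sample sentences below are illustrative assumptions only.

```python
import re

def summarise(transcript, keywords, max_sentences=2):
    """Very rough extractive summary: keep the sentences that mention
    the most keywords. Illustrates only the transcript-to-summary step;
    generative tools rewrite text rather than selecting sentences."""
    sentences = re.split(r'(?<=[.!?])\s+', transcript.strip())
    scored = sorted(
        sentences,
        key=lambda s: sum(k.lower() in s.lower() for k in keywords),
        reverse=True,
    )
    return " ".join(scored[:max_sentences])
```

Even this trivial version shows why practitioner checking is essential: the summary depends entirely on which terms are scored, so anything the scoring misses is silently dropped, and a generative model can additionally introduce content that was never said.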
Benefits and risks when using AI in adult social care
Potential benefits
The perceived benefits of AI in social care often include:
- Predicting or preventing increased care needs or crises.
- Increasing efficiencies in care service delivery.
- Greater potential for personalised care.
Potential risks or challenges
Accuracy and security
AI can ‘hallucinate’, producing inaccurate information, so its outputs need to be carefully checked. AI is also vulnerable to cyberattacks, such as ‘prompt injection’, where crafted inputs make it produce offensive content or reveal confidential information, and ‘data poisoning’, where training data is manipulated to produce harmful outputs.
Trust
People are worried about whether AI can understand what they need, and about the risk of increasing loneliness if technology is introduced primarily to save money.
Digital infrastructure
Many AI applications in social care need reliable internet connections, which are often lacking in care settings, vary geographically and are hard for some people to afford.
Human and environmental costs
AI systems, and the devices that use them, are sometimes made in factories with poor working conditions. AI also needs a lot of energy to run, and fresh water to cool the machines that power it. These impacts often fall hardest on poorer countries.
Ethics
- Informed consent: AI is hard to understand, which makes it difficult for people to give informed consent to its use in their care.
- Data use: Questions exist about what data is used to make predictions and what happens to data entered into AI systems. AI ‘scrapes’ data, and it is not always clear whether people have agreed to their data being used in this way. Open AI systems, like ChatGPT, can make input data accessible to anyone, so sensitive data should not be entered. Closed-model AI systems are more secure and can use synthetic (made-up) data to reduce bias, but their outputs still need to be checked.
- Bias: AI reproduces biases and errors present in existing data. If training data doesn’t include certain populations or is biased, predictions or generated content can be inaccurate or inappropriate.
- Company ethics: Some people are concerned about how some AI companies behave – how they treat their workers, or whether they pay their taxes properly.
At the same time, there are concerns about the quality and independence of the evidence behind claims about AI’s benefits, much of which is small-scale and linked to technology developers. The British Association of Social Workers (BASW) advises caution and calls for more evaluation of AI tools.
First Network Meeting
The Discussion Material prompted conversations across the local Networks that can be clustered around core themes:
Discussions highlighted the challenge of clarifying what AI is, with concerns raised about misuse of the term: technologies that merely use data analysis or algorithms are often incorrectly labelled ‘AI’, which some members argued lacks ‘intelligence’. Members also shared examples of AI tools they already use:
- Seeing AI (an app for people with sight impairment).
- General AI tools for administration and work (e.g., Microsoft CoPilot, Ask Gemini, AI note-taking).
- Creative tools (e.g., songwriting).
- Tools providing support for daily living tasks, recipes, easy read resources, and session plans.
- Independence and Connection: AI, such as smart speakers, can support daily routines, help people feel less isolated, and even act as a “best friend” by telling jokes.
- Time Savings on Administration: Examples were shared of significant reductions in administrative tasks (e.g., Conwy Council reported a 64% decrease since using Magic Notes in social care; using generative AI to create weekly Facebook posts in seconds).
- System Glitches and Misinterpretation: Concerns over online chatbots and phone systems not recognising voices, potentially leading to misinterpretations that could result in cancelled appointments.
- Ethics and Transparency: Worries that AI is often workplace-oriented, making it inaccessible to service users, and stressing the need for transparency, fairness, and safeguards to prevent AI from learning harmful behaviours.
- Mental Health Concerns: The use of AI in surveillance technologies in mental health settings could increase anxiety for people with paranoid schizophrenia, and openly available tools like ChatGPT could inadvertently escalate harmful thoughts by agreeing with them.
- Bias and Cultural Sensitivity: AI is only as good as its data, which often lacks diversity and representation across social groups. Networks questioned whether AI can be culturally sensitive enough to understand linguistic, cultural, and religious nuances.
- Capacity to Act on Insights: While AI can sift through data, there are doubts about whether the care system has the capacity or resources to act on the generated insights. There was an emphasis that AI is heavily reliant on human intervention, intuition, and knowledge.
Networks questioned the fundamental purpose of AI in social care, suggesting it must be underpinned by the question: ‘What is the issue you’re trying to solve?’ Concerns focused on the potential for AI to ‘dehumanise care’, since it is often the ‘human contact which can make the difference’. There were also concerns that cost and time savings might not be reinvested into improving the quality of care.
Second Network Meeting
Networks across the UK explored a number of issues related to AI and social care.
Potential Benefits:
AI could enhance person-centred care through planning assistance, communication support, and tools for sensory impairments. Examples include using databases (like the GP Intelligence Platform in NI) to identify vulnerable patients, and using sensors to detect behavioural patterns and indicate when help is needed. Some argued that AI could help meet future care demand when resources are scarce.
Concerns (Empathy & Skills):
Many worried about the loss of social connection and empathy. The Torfaen Network feared AI could exacerbate social isolation and loneliness by normalising a lack of human interaction. Conversely, some experts-by-experience reported that AI’s empathy surpassed that of medical professionals and that AI companionship could be positive for mental health. Concerns were also raised about the impact on professional and experiential skills; for example, young people may lose communication skills through over-reliance on digital tools. One trial showed that AI produced more data than staff had the capacity to act on.
Cost & Development:
The primary costs for new AI tools lie in staffing for programming, monitoring, review, and training. Local authorities generally lack the funds for bespoke AI tools. Despite bold claims, no local authority has yet achieved cost savings or reduced headcount through AI.
Human in the Loop:
The need for human oversight can undermine claims of staff time savings. One trial of an AI referral tool failed because of the complexity of cases, and its outputs needed human proofreading to catch translation errors.
Efficiency Tension:
While a commissioner saw AI transforming functions like commissioning (e.g., completing the cycle ‘in seconds, not a year’ by supporting administrative tasks), others worried about the impact on staffing ratios. There is a risk of staff missing incorrect AI outputs if they become overly reliant on the technology.
Bias:
There is a risk of AI bias, especially when training data underrepresents groups like people with learning disabilities, autism, or ethnic minority groups. IMPACT cited a study where an AI model downplayed women’s health issues compared to men’s in case notes. Some suggested biases might be easier to ‘train out’ of technology than humans.
Trust and Safety:
Concerns included the use of AI in case notes and the potential for information to be used against people without consent. Networks called for clear communication on how AI tools work and suggested measures like involving people with lived experience in quality control and setting up an AI Steering Group to define ‘guardrails’.
Digital Exclusion:
Many worried that older people and those with financial barriers (e.g., high cost of internet/technology) could be excluded from AI benefits due to a lack of access or digital literacy. This exclusion is compounded as services like medical appointments and benefit applications increasingly move online.