
AI Safety Policy

How Brolly Technologies Ltd delivers AI safely in the Brolly platform. The principles every customer can rely on, in plain English.

Last updated: April 2026 · Version 1.0 · 6 min read
Contents
  1. Scope and purpose
  2. AI tools we use
  3. Human in the loop
  4. Limitations and risks
  5. Data protection
  6. Customer rights and choices
  7. Incident response
  8. Regulatory compliance
  9. Training and governance
  10. Monitoring, review and contact
TL;DR

Five things we promise about AI in Brolly

  • Your team always decides. Every AI output is treated as a draft and is reviewed by a qualified human before it becomes part of the record.
  • UK data residency. Your service-user data stays in the UK or, where infrastructure requires, the European Economic Area.
  • Redaction of PII. Personal identifiers are stripped or pseudonymised before content is passed to an external AI provider.
  • No data used to train models. We work only with providers whose terms guarantee this.
  • Audit trail on every AI action. Who reviewed what, when, and what they did about it, all recorded automatically.

1. Scope and purpose

Brolly Technologies Ltd ('Brolly') provides AI-assisted care management software to UK adult social care providers. This policy explains how we use AI safely and responsibly in the products we deliver, and the principles every customer can rely on.

It applies to every AI-powered feature in the Brolly platform, to all Brolly staff and contractors who build, operate or support those features, and to the third-party AI providers we work with.

Brolly customers (registered care providers) remain the data controller for the personal data of their service users. Brolly is the processor. This policy sits alongside the Data Processing Agreement that forms part of every customer contract.

2. AI tools we use

Brolly uses AI to support care delivery in specific, scoped ways. The current AI capabilities are:

  • 15 custom skills within the Brolly AI Assistant covering compliance, scheduling, recruitment, eMAR oversight, care planning, evidence drafting and more.
  • The Brolly AI Assistant, fine-tuned on UK regulatory frameworks (CQC quality statements, Mental Capacity Act, Care Act 2014) rather than generic GPT prompting.
  • Conversational training avatars that deliver mandatory courses one-to-one to care staff.
  • Voice transcription for handovers, daily notes and inbound calls.
  • Document and report generation for care plans, PIRs, evidence packs and Local Authority returns — always as drafts.

External AI providers

These capabilities are powered, where applicable, by the following external AI providers. All have been through Brolly's due diligence process, have signed enterprise data agreements and meet UK data protection standards.

  • Anthropic Claude. Used for natural language understanding and content drafting. Retention: up to 30 days, with automatic deletion thereafter; no training on customer data.
  • OpenAI Whisper. Used for speech-to-text transcription. Retention: API-only; nothing retained beyond the request.
  • HeyGen. Used for avatar generation for AI training delivery. Retention: 90 days for generated assets; manual deletion available on request.

We add or change providers only after the new provider has met the same standard. Customers are notified of material changes in advance.

3. Human in the loop

Every AI feature in Brolly is built on the same principle: the AI drafts, suggests or prepares, and a qualified human always reviews and signs off before the output becomes part of the record.

Brolly Assistant works the way your team works. It drafts, suggests and prepares. Your team always leads.

This applies to every output Brolly produces, including:

  • Care plans and risk assessments
  • PIR drafts, Local Authority reports and evidence packs
  • Suggested rota changes and shift assignments
  • Compliance-statement gap analysis
  • Automated responses to service-user enquiries
Every output is a draft. You review, you edit, you sign off. Brolly Assistant never acts without your green light.

What 'qualified human' means

The reviewer must hold the appropriate professional qualification for the care context (registered manager, nominated individual, qualified nurse, support worker as applicable), have completed Brolly's AI literacy training, and have authority to approve or reject the recommendation. Their decision is recorded automatically in the audit trail.
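For illustration only, the sketch below shows the shape of the audit-trail entry described above: an AI draft enters the record only once a qualified reviewer's decision has been logged. The class, field names and decision labels are our illustrative assumptions, not the actual Brolly schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch of an audit-trail entry for human review of an AI draft.
# Field names and decision labels are assumptions for this example.
@dataclass(frozen=True)
class AIReviewRecord:
    draft_id: str       # which AI output was reviewed
    reviewer_id: str    # who reviewed it
    reviewer_role: str  # e.g. "registered_manager", "qualified_nurse"
    decision: str       # "approved", "edited" or "rejected"
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_review(log: list, draft_id: str, reviewer_id: str,
                  reviewer_role: str, decision: str) -> AIReviewRecord:
    """Append an immutable review record. In this sketch, an AI draft would
    only be promoted into the care record once such a record exists."""
    if decision not in {"approved", "edited", "rejected"}:
        raise ValueError(f"unknown decision: {decision}")
    entry = AIReviewRecord(draft_id, reviewer_id, reviewer_role, decision)
    log.append(entry)
    return entry
```

The point of the frozen dataclass is that an entry cannot be edited after the fact, which is what makes the trail auditable.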

4. Limitations and risks

Hallucination

AI systems can produce inaccurate, incomplete or fabricated information ('hallucinations'). Brolly's design assumes this and surrounds every AI output with verification steps. We mitigate the risk in three ways:

  • AI outputs are presented as drafts, never as authoritative answers.
  • Care decisions and clinical judgements never rely solely on AI output.
  • Verification of AI-generated information by a qualified human is mandatory before action.

Bias

AI models can reflect bias from their training data, which is a particular concern in care settings serving diverse populations. Brolly mitigates bias through:

  • Regular bias testing of AI outputs against representative service-user profiles.
  • Diverse data inputs and culturally competent review processes.
  • Alternative, non-AI assessment pathways for any service user where AI assessment is unsuitable.
Human professional judgement always takes precedence over AI recommendations. If a Brolly AI output and a qualified care professional disagree, the professional decides — and the system records that they did.

5. Data protection

What data may be sent to external AI providers

Within the Brolly platform, the following categories of data may be sent to external AI providers, always with personal identifiers removed or pseudonymised first:

  • Anonymised care record content for assessment and care-plan drafting.
  • Voice recordings for transcription, with speaker identifiers removed.
  • Demographic and care-context information for personalised responses.
  • Care planning data for intervention recommendations.

What does not leave your tenant

  • Direct personal identifiers (name, NHS number, address, date of birth) are redacted from prompts wherever possible.
  • Whole-record exports, financial detail and family contact information are not sent to external AI providers.
  • No customer or service-user data is used to train external models.
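To make the redaction step concrete, here is a deliberately minimal sketch of pseudonymising direct identifiers before a prompt leaves the platform. A production system would use a vetted redaction service; the patterns, placeholder format and function name here are assumptions for illustration only.

```python
import re

# Illustrative patterns only: a 10-digit NHS number (optionally spaced)
# and a dd/mm/yyyy date of birth.
NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")
DOB = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def pseudonymise(text: str, known_names: list[str]) -> str:
    """Replace direct identifiers with stable placeholders before the
    text is sent to an external AI provider."""
    text = NHS_NUMBER.sub("[NHS_NUMBER]", text)
    text = DOB.sub("[DATE_OF_BIRTH]", text)
    for i, name in enumerate(known_names, start=1):
        text = text.replace(name, f"[PERSON_{i}]")
    return text
```

Stable placeholders such as `[PERSON_1]` keep the text usable for drafting while the real identity never leaves the tenant.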

UK data residency and GDPR

Brolly stores customer data in UK data centres. Where a chosen AI provider's infrastructure requires processing in the EU or US, this is disclosed in the customer Data Processing Agreement and is covered by appropriate transfer safeguards (UK IDTA, EU SCCs).

All AI processing complies with UK GDPR including lawful basis (typically legitimate interests or contract), data minimisation, purpose limitation, transparency, and the full set of data subject rights.

6. Customer rights and choices

Right to opt out

Brolly customers can disable any AI feature at the registered manager level. Service users have the absolute right to opt out of AI-assisted features and request human-only interactions. This includes:

  • Declining AI-generated care suggestions in their care plan.
  • Requesting human-only interactions for assessments and reviews.
  • Refusing voice transcription on calls and meetings.
  • Avoiding avatar-based training or communications.
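The two levels of opt-out described above can be sketched as a simple gate that every AI feature passes through before it runs. The function and setting names are illustrative assumptions, not the platform's actual configuration API.

```python
# Illustrative sketch: an AI feature runs only if the registered manager
# has enabled it AND the individual service user has not opted out.
def ai_feature_enabled(feature: str,
                       provider_settings: dict,
                       service_user_opt_outs: set) -> bool:
    if not provider_settings.get(feature, False):
        return False  # disabled at registered-manager level
    if feature in service_user_opt_outs:
        return False  # service user requested human-only interaction
    return True
```

Defaulting to `False` for unknown features means a feature is off unless it has been explicitly switched on.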

Equivalent service without AI

Brolly is designed so that providers can deliver equivalent quality care to service users who opt out of AI assistance. Traditional assessment, care-planning and training pathways remain available in the platform. Care staff are trained to deliver them.

Informed consent

Brolly provides care providers with the materials needed to obtain informed consent before using AI tools in care delivery: plain-English explanations of each AI system, descriptions of how the AI supports care, an outline of any data sharing implications, and a record of the consent decision in the care record.

7. Incident response

What we treat as an AI incident

  • Inaccurate or hallucinated AI output that reaches a service user without human review.
  • Bias-related concerns raised by staff, customers or service users.
  • Data-protection issues involving an AI provider or AI feature.
  • Technical failures of an AI feature that affect care delivery.

Our response

  1. Immediate action. Ensure service-user safety. Take the affected feature offline if needed.
  2. Notification. Affected customers are notified within 2 hours, and our AI Safety Officer is engaged.
  3. Investigation. Root cause analysis within 48 hours.
  4. Remediation. Corrective action implemented, with verification.
  5. Learning. The incident is reviewed in the next AI Oversight Committee meeting and an anonymised report is shared with all customers.

8. Regulatory compliance

UK framework

Brolly aligns with the UK government's pro-innovation approach to AI regulation, including:

  • UK AI White Paper principles (safety, transparency, fairness, accountability, contestability)
  • ICO guidance on AI and data protection
  • MHRA guidance for AI used in medical-device contexts (where applicable)

EU AI Act

For customers operating in or with the EU, we monitor the EU AI Act's transparency obligations, human-oversight mandates and risk-management requirements, and adapt the platform accordingly. We classify our current features as limited-risk or minimal-risk under the Act.

CQC fundamental standards

AI in Brolly supports rather than replaces the CQC fundamental standards:

  • Person-centred care. AI personalises but never replaces human connection.
  • Dignity and respect. Service-user autonomy and choice are preserved.
  • Safety. Human oversight prevents AI-related care risks.
  • Safeguarding. AI features include safeguarding-alert mechanisms that surface concerns to qualified staff.

9. Training and governance

Brolly staff training

Every Brolly staff member who builds, operates or supports an AI feature completes:

  • AI Basics — capabilities and limitations of large language models in care contexts
  • Ethical AI Use — bias recognition and mitigation strategies
  • Data Protection and AI — privacy implications of AI processing
  • Tool-specific training for each AI feature they work on
  • Annual refresher on regulatory changes and best practice

AI Oversight Committee

An internal AI Oversight Committee meets monthly and includes a senior management representative, a clinical lead, the Data Protection Officer, the AI Safety Officer, and a service-user advisory representative. Its responsibilities are:

  • Strategic oversight of AI implementation and roadmap
  • Review of all AI incidents and bias-test results
  • Risk assessment of new AI features before release
  • Policy development and updates
  • Customer feedback review

10. Monitoring, review and contact

Review schedule

  • Annual policy review to assess effectiveness and incorporate regulatory change.
  • Quarterly system review of AI feature performance, bias testing and incident analysis.
  • Monthly metrics on AI usage, recommendation accuracy and customer feedback.
  • Ad-hoc reviews after significant incidents or material regulatory change.

What we measure

  • AI recommendation accuracy and reviewer override rates
  • Customer satisfaction with AI-assisted features
  • Service-user opt-out rates per feature
  • Incident rates, severity and time-to-resolution
  • Compliance audit outcomes
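As an illustration of one of these metrics, the reviewer override rate is the share of AI drafts that a qualified human edited or rejected rather than approving unchanged. The sketch below assumes the illustrative decision labels used elsewhere on this page; it is not the actual reporting code.

```python
# Illustrative sketch: reviewer override rate over a list of review decisions.
def override_rate(decisions: list) -> float:
    """Fraction of reviews where the human changed or rejected the draft."""
    if not decisions:
        return 0.0
    overridden = sum(1 for d in decisions if d in {"edited", "rejected"})
    return overridden / len(decisions)
```

A rising override rate is a signal worth investigating: either the drafts are getting worse, or reviewers are (rightly) scrutinising them more closely.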

Material changes to this policy are communicated to existing customers by email at least 30 days before they take effect. The 'Last updated' date at the top of this page shows when the latest revision was made.

AI Safety contact

Questions about this policy, or the way an AI feature behaved? Use the contact form on this site with 'AI Safety' in the subject line and the AI Safety Officer will respond within two working days.

Brolly Technologies Ltd · Leeds, United Kingdom · Version 1.0

Related policies: Terms of Service · Privacy Policy
© 2026 Brolly Technologies Ltd. All rights reserved.
