
Compliance & Safety Standards

Meeting the DfE's new Generative AI Product Safety Standards and UK legal requirements for child safety


DfE Generative AI Product Safety Standards

In January 2026 the Department for Education published its first Generative AI Product Safety Standards for tools used with children in education. The standards set six expectations: filtering, privacy and data protection, mental health, manipulation, emotional and social development, and monitoring and reporting.

Quinly was designed from day one with child safety as its primary principle, well before these standards were published. The summary below sets out, area by area, exactly how Quinly meets them. Quinly is not a substitute friend, counsellor or Designated Safeguarding Lead. It is a calm first voice that helps a child put feelings into words and points them towards a trusted human who can help.

1. Filtering

Filtering is part of Quinly's core response engine, not a layer on top. Every reply is generated through a Constitutional AI model trained to refuse or de-escalate harmful content, then passed through a second safety check before it reaches the child. The same standard is applied across English, Welsh, Urdu, Polish and Arabic. Filters are continually updated for new patterns of harm, including AI and deepfake harm, sextortion and coercive content.
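The two-stage design described above (generate, then independently check before delivery) can be sketched as follows. This is an illustrative sketch only: the function names, placeholder pattern list and fallback message are hypothetical stand-ins, not Quinly's actual implementation.

```python
# Illustrative two-stage safety pipeline (hypothetical names and logic,
# not Quinly's actual implementation).

BLOCKED_PATTERNS = ["example-harmful-phrase"]  # placeholder patterns

def generate_reply(message: str) -> str:
    """Stand-in for the Constitutional AI model call."""
    return f"I hear you. Thanks for telling me: {message}"

def passes_safety_check(reply: str) -> bool:
    """Second, independent check applied before any reply reaches the child."""
    return not any(p in reply.lower() for p in BLOCKED_PATTERNS)

SAFE_FALLBACK = (
    "I want to make sure I say this carefully. "
    "Let's talk to a trusted adult together."
)

def respond(message: str) -> str:
    """Generate a candidate reply, then gate it through the second check."""
    reply = generate_reply(message)
    return reply if passes_safety_check(reply) else SAFE_FALLBACK
```

The key design point is that the second check runs on the model's output, so a reply that slips past the generator's own refusals is still caught before delivery.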

2. Privacy and Data Protection

Quinly is stateless by design. We do not retain transcripts, store names, contact details, accounts or login identifiers, build profiles, or share data with advertisers. The only information that leaves the chat is an aggregate count of crisis categories at the school level, bucketed into 15-minute windows so that no individual conversation can be identified. Categories with fewer than five conversations in a period are treated as low-volume.
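The bucketing and low-volume handling described above can be sketched as a small aggregation step. This is a minimal illustration: the function names, event format and "<5" mask are assumptions for the sketch, not Quinly's actual schema; only the 15-minute window and the five-conversation threshold come from the text.

```python
from collections import Counter
from datetime import datetime

WINDOW_SECONDS = 15 * 60       # 15-minute aggregation windows
LOW_VOLUME_THRESHOLD = 5       # counts below this are masked

def bucket(ts: datetime) -> int:
    """Round a timestamp down to the start of its 15-minute window."""
    return int(ts.timestamp()) // WINDOW_SECONDS * WINDOW_SECONDS

def aggregate(events):
    """events: iterable of (timestamp, category) pairs.

    Returns per-window category counts, with small counts masked so that
    no individual conversation can be inferred from the figures.
    """
    counts = Counter((bucket(ts), cat) for ts, cat in events)
    return {
        key: (n if n >= LOW_VOLUME_THRESHOLD else "<5")
        for key, n in counts.items()
    }
```

Because only (window, category, count) tuples leave the aggregation step, no transcript or identifier is ever part of the reported data.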

3. Mental Health

Mental health is Quinly's entire purpose. Quinly recognises 30 categories of crisis, from anxiety and bullying to suicide and self-harm, sexual abuse, and AI-related harm. Every detection comes with calm, age-appropriate language and a clear signpost to UK services: Childline, Samaritans, Papyrus HOPELINE247, NSPCC, the Revenge Porn Helpline and others.

4. Manipulation

Quinly's opening line is “Hello, I'm Quinly. I'm not a real person, but I am here to help you.” The same disclosure is repeated on the splash page, the safety page and in the privacy notice. Quinly does not use first-person emotional language designed to imply a relationship, does not pretend to remember previous conversations, and does not flatter or affirm in ways that could foster dependence. The mascot is intentionally drawn as a stylised, non-human character.

5. Emotional and Social Development

Quinly is not a substitute friend. It avoids agency-implying language throughout, both in user-facing copy and in the underlying prompt design. After a sustained period of conversation (15 minutes by default, configurable per school for younger pupils), Quinly itself gently encourages the child to take a short break, do something calming, or speak to someone they trust. The reminder is a soft prompt, not a forced cut-off, because for some children the conversation may be the moment of disclosure.
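The soft break reminder described above can be sketched as a simple check. Illustrative only: the function name and signature are assumptions; the 15-minute default, per-school configurability and no-forced-cut-off behaviour come from the text.

```python
from datetime import datetime, timedelta

# 15 minutes by default; intended to be configurable per school
# for younger pupils.
DEFAULT_BREAK_AFTER = timedelta(minutes=15)

def break_reminder_due(session_start: datetime,
                       now: datetime,
                       break_after: timedelta = DEFAULT_BREAK_AFTER,
                       already_shown: bool = False) -> bool:
    """Return True when the one-off soft break prompt should be shown.

    The prompt is a gentle nudge, never a forced cut-off: the
    conversation continues regardless of the return value, because
    for some children it may be the moment of disclosure.
    """
    return not already_shown and (now - session_start) >= break_after
```

Note that the function only decides when to show the reminder; it never ends the session.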

6. Monitoring and Reporting

Quinly meets this standard through the data-minimisation balance the standard itself recognises. We monitor and report at the aggregate level rather than the individual level, because individual monitoring would destroy the anonymity that makes disclosure possible in the first place. Our pilot data of 3,289 conversations demonstrates the point: that volume of disclosure happened precisely because children trusted their privacy. The DSL dashboard provides a strong aggregate view: crisis volumes by severity, categories by week, peak times of use including out-of-hours patterns, and average session duration. Every figure is presented with its denominator and a low-volume threshold, and the methodology is published on the dashboard itself.

The principle behind the design: Quinly maintains child trust through confidential conversations while giving DSLs the aggregate safeguarding intelligence they need to act. Anonymity is what enables disclosure. We are happy to discuss our approach openly with the Department, with NSPCC, and with any school that wants to understand it in more detail.

DSIT Consultation on Children, AI Chatbots and the Online Safety Act

Quinly's position on the government's AI chatbot consultation

In April 2026 the Department for Science, Innovation and Technology launched a public consultation on children, AI chatbots and the Online Safety Act. Technology Minister Liz Kendall has confirmed the government is actively considering an under-16 age restriction on AI chatbots, alongside measures to limit addictive design features.

Quinly welcomes the consultation. We agree that the question is not whether government acts, but how. Companion-style chatbots that simulate friendship and harvest engagement pose real risks to children, and we have always said so. Quinly is the deliberate opposite: anonymous, single-purpose, designed against addictive features, and built to the DfE's January 2026 standards.

We are submitting a full response to the consultation, asking government to define safeguarding-purpose AI as a distinct category in any new regulation, with a demanding standard, and to exempt qualifying services from any blanket age restriction.

Define safeguarding-purpose AI

A clear and demanding test, distinct from companion and general-purpose chatbots, with appropriate exemption from any blanket age restriction.

Prohibit addictive features

Persistent memory, first-person emotional language, streaks and variable-reward pacing should be prohibited in any AI product reasonably likely to be accessed by under-18s.

Proportionate Online Safety Act duties

So that small, specialist UK safeguarding services are not regulated out of existence in favour of large overseas general-purpose providers.

Our full position statement and consultation response are available on request from hello@quinly.ai.

Beyond Legal Compliance: Children & AI Design Code

While meeting all legal requirements, we're also pursuing alignment with voluntary best practice standards.

What is the Children & AI Design Code?

The Children & AI Design Code is a comprehensive framework developed by the 5Rights Foundation that sets best practice standards for AI systems that children interact with. While not currently a legal requirement in the UK, it represents the gold standard for ethical, safe, and child-appropriate AI design.

The Code covers critical areas including:

  • Safety: Protecting children from harmful content and interactions
  • Privacy: Minimising data collection and ensuring transparency
  • Fairness: Ensuring non-discriminatory AI systems
  • Transparency: Making AI understandable to children and families
  • Accountability: Clear governance and oversight structures
  • Participation: Involving children in design and testing

Why Quinly is pursuing this voluntary standard

At Quinly Ltd, we believe that building the safest possible AI for children isn't optional; it's essential. The Children & AI Design Code is not a legal requirement, but we are committed to meeting it because:

  • Children deserve the highest level of protection when using AI systems
  • Schools and organisations need confidence in the safety and ethics of tools they deploy
  • Proactive compliance demonstrates our commitment to child welfare beyond regulatory minimums
  • We want to lead the industry in responsible AI development for vulnerable populations

Children & AI Design Code Progress

Quinly 2.0 is currently at ~70% alignment with the voluntary Children & AI Design Code, with active work underway to achieve full compliance by Q2 2026.

70% Compliant

Technical Safety (✓ Achieved)

  • Comprehensive crisis detection
  • Constitutional AI safety guardrails
  • Content filtering and PII redaction
  • Zero data retention architecture

Real-World Validation (✓ Achieved)

  • 3,289 child conversations (July 2025 to January 2026)
  • Field testing across multiple schools
  • 100% system uptime
  • Comprehensive usage monitoring

Privacy & Compliance (✓ Achieved)

  • Full DPIA completed
  • UK Children's Code compliant
  • GDPR Article 8 compliant
  • Child-appropriate privacy notice

Participatory Research (⚠ In Progress)

Formalising structured qualitative feedback from children and guardians through focus groups and interviews with proper consent frameworks.

Documentation & Transparency (⚠ In Progress)

Packaging existing evidence into formal monitoring documentation and preparing transparency reports for publication.

Governance Formalisation (→ Planned)

Documenting a formal governance structure with defined expert roles (AI Systems Expert, Age-Appropriate Expert, Child Rights Expert), Senior Accountable Leader designation, and a formal redress and complaints process for children.

How Quinly Differs from Entertainment Chatbots

Not all AI chatbots are the same. Here's how Quinly's professional safeguarding design differs from consumer entertainment chatbots:

| Aspect | Entertainment Chatbots | Quinly 2.0 |
| --- | --- | --- |
| Primary Purpose | Engagement, role-play, revenue | Child safeguarding, crisis support |
| AI Safety Model | Varied, engagement-optimised | Claude 4 Sonnet (Constitutional AI) |
| Data Retention | Persistent conversation history | Zero (stateless) |
| Crisis Response | May encourage harmful acts | Immediate professional referral |
| Professional Oversight | None (consumer app) | Real-time DSL dashboard |
| UK Compliance | Reactive (post-incident) | Designed-in from day one |
| Age Assurance | Self-declared (easily bypassed) | Institutional deployment |
| Content Filtering | Basic (often failed) | Multi-layer Constitutional AI |

Evidence-Based Safety

Real-world validation: Quinly has supported 3,289 child conversations (July 2025 to January 2026) with:

  • Zero incidents of harmful content
  • Zero grooming behaviours
  • Zero inappropriate responses
  • 100% appropriate crisis signposting to Childline, Samaritans, and UK support services

Compliance Documentation & Evidence

We believe in radical transparency. Download our compliance documentation to see the detailed work behind Quinly 2.0:

Data Protection Impact Assessment

Full DPIA following ICO guidance, documenting all privacy safeguards and risk mitigations.

Quinly Basic Usage Reports

Real pilot data: 3,289 child conversations from July 2025 to January 2026 across 6 schools.

Children & AI Design Code

The complete framework from 5Rights Foundation that we're working to fully align with.

Our Ongoing Commitment

Quinly Ltd is committed to achieving 100% compliance with the Children & AI Design Code by Q2 2026. We will:

  • Publish quarterly transparency reports on our compliance progress
  • Conduct formal participatory research with children and families
  • Maintain our existing technical safeguards and operational monitoring
  • Document our governance structures and accountability frameworks
  • Seek external validation and auditing of our practices

Questions about our compliance work? Contact us at hello@quinly.ai