Date: January 2026
Review Date: October 2026 (or sooner if statutory guidance changes)
Policy Owner: Designated Safeguarding Lead (DSL)
Nominated Governor: Safeguarding Governor
Version: v01.26
Refreshed for KCSIE 2025 and signed off on 29 April 2026 by the Proprietor and Governing Body.
This policy was refreshed to align with current statutory guidance: KCSIE 2025, the Online Safety Act 2023 (phased duties on user-to-user services), the DfE Filtering and Monitoring Standards 2024, the Worker Protection (Amendment of Equality Act 2010) Act 2023 (preventative duty regarding sexual harassment, including third-party harassment), and emerging AI safeguarding considerations (DfE Generative AI in Education 2025; ICO Children's Code).
Status: live.
1. Policy Statement and Purpose
The Haven is committed to ensuring that children and young people are kept safe when accessing education online. As a primarily online and hybrid provision, digital safety is integral to our safeguarding culture and practice.
This policy sets out how The Haven:
- safeguards learners in online environments;
- ensures platforms are used safely, responsibly, and securely;
- trains staff to manage online risk effectively;
- responds to online safety concerns, cyber incidents, or misuse.
This policy should be read alongside the Child Protection and Safeguarding Policy v01.26, Behaviour and Regulation Policy v10.25, Data Protection, Confidentiality & Privacy Policy v10.25, and Managing Allegations Against Staff Policy v01.26.
2. Scope
This policy applies to:
- all learners, staff, contractors, and volunteers;
- all online, hybrid, and digital learning activity;
- all platforms and tools used for teaching, communication, safeguarding, administration, and assessment.
3. Platforms Used by The Haven
The Haven uses a restricted, approved set of platforms. Use of unapproved platforms for learner contact is not permitted.
3.1 Core Teaching and Communication Platforms
3.2 Communication and Engagement Platforms
3.3 Assessment, Administration and Payment Systems
This policy applies to all use of the above platforms, whether during teaching, preparation, recording, or communication.
4. Roles and Responsibilities
4.1 Governing Body / Proprietor
- Ensures appropriate policies, oversight, and resourcing for online safety.
- Receives anonymised reports of incidents and audits.
4.2 Designated Safeguarding Lead (DSL)
The DSL:
- holds overall responsibility for online safety and safeguarding;
- responds to online safety concerns involving learners;
- liaises with Local Authorities, commissioners, and external agencies;
- oversees staff training and induction relating to online safety;
- coordinates communication in serious incidents.
4.3 Cyber Security and Platform Incidents
- Operational responsibility for responding to cyber-attacks, data breaches, or platform failures sits with senior leadership, in consultation with:
  - the DSL (where safeguarding risk exists);
  - the Data Protection Officer (DPO) (for data breaches);
  - platform providers where required.
- Incidents are logged, risk assessed, and escalated appropriately.
4.4 Staff and Tutors
Staff:
- follow the Acceptable Use Agreement (Appendix A);
- use only approved platforms;
- report online safety concerns immediately to the DSL;
- do not engage in private, unsupervised digital contact with learners.
4.5 Learners and Parent Carers
- Learners are supported to use platforms safely and appropriately.
- Parent carers are informed of expectations and responsibilities at induction.
5. Platform Safety, Security and Quality Assurance
The Haven undertakes proportionate due diligence to ensure platforms are safe and appropriate, including:
- use of reputable, education-grade platforms;
- confirmation of data encryption in transit and at rest;
- review of privacy notices and security documentation;
- role-based access and permission settings;
- regular review of safeguarding features and updates.
Access and Accounts
- Learners are issued with school-managed log-ins where appropriate.
- Platforms are accessed via secure log-in credentials.
- Personal accounts are not used for teaching or safeguarding communication.
6. Monitoring and Oversight
- Live sessions are supervised and moderated by staff.
- Platform features (chat, breakout rooms, permissions) are used proportionately.
- Concerns arising from platform use are recorded and reviewed.
- Patterns of concern inform training, platform settings, or policy review.
7. Training and Awareness
7.1 DSL Training Responsibilities
The DSL is responsible for ensuring that:
- staff receive annual safeguarding and online safety training;
- updates are provided in response to emerging risks or incidents;
- induction training includes platform-specific safeguarding guidance.
7.2 Staff Training
All staff receive:
- online safety training at induction;
- annual refresher training covering:
  - safeguarding in online environments;
  - professional boundaries and digital conduct;
  - platform-specific features and risks;
  - responding to online disclosures or incidents;
- additional training when new platforms are introduced.
8. Responding to Online Safety Concerns
Concerns may include:
- inappropriate online behaviour;
- exposure to harmful content;
- cyberbullying or peer-on-peer abuse;
- misuse of platforms;
- data security concerns.
All concerns are:
- reported immediately to the DSL;
- recorded factually;
- escalated according to safeguarding thresholds;
- shared with commissioning schools or LAs where required.
9. Online Safety Act 2023 — duties and alignment
The Online Safety Act 2023 places duties on regulated user-to-user and search services to protect children from illegal content and content harmful to children. While The Haven is itself an education provider rather than a regulated service, our duty of care to learners requires that we operate consistently with the Act and respond proportionately when learners encounter or are affected by content that the Act addresses.
9.1 Categories of harm we recognise
- Illegal content — including child sexual abuse material (CSAM, including AI-generated and digitally manipulated imagery), terrorism content, threats to kill, controlling or coercive behaviour, intimate image abuse, and content encouraging self-harm or suicide.
- Primary priority content harmful to children — pornography; content that encourages, promotes or instructs self-harm; suicide content; content that encourages eating disorders.
- Priority content harmful to children — abusive or hateful content (including based on protected characteristics); bullying content; serious violence; harmful substances content; dangerous stunts and challenges.
- AI-generated harm — the Online Safety Act applies equally to AI-generated content. Deepfakes, AI-generated CSAM, AI-generated grooming content, and synthetic intimate imagery are treated identically to non-AI counterparts.
9.2 Our operational response
When a learner discloses, encounters, or appears to have encountered content in any of these categories — on a Haven platform, on an external service, or in their broader online life — staff will:
- Treat the disclosure as a safeguarding concern under the Child Protection and Safeguarding Policy v01.26.
- Escalate to Kirsten Roy without delay.
- Where illegal content is involved, follow the escalation route to police or the Internet Watch Foundation as appropriate, and never attempt to view, download, or store the content for evidential purposes — this is a criminal offence and the responsibility of trained law enforcement.
- Support the learner therapeutically and report to commissioning schools / LAs in line with safeguarding agreements.
9.3 Filtering and monitoring
The Haven’s filtering and monitoring arrangements align with the DfE Filtering and Monitoring Standards (2024). Kirsten Roy retains operational responsibility, with technical configuration carried out by Kirstin Stevens and the outsourced Data Protection Officer, and termly oversight by Elliot Wassell as part of broader safeguarding governance. Filtering decisions balance the protection of children against over-blocking that would disrupt legitimate learning; the proportionality test is documented in the termly review.
10. Cyber-Incidents and Data Breaches
In the event of a cyber-attack or data breach:
- immediate steps are taken to secure systems;
- the DSL and DPO are informed;
- risks to learners are assessed;
- notifications are made in line with data protection law;
- lessons learned inform system improvements.
11. Review and Monitoring
- This policy is reviewed annually.
- Termly audits consider incidents, platform use, and training impact.
- Updates reflect statutory guidance and technological change.
Appendix A – Acceptable Use Agreement (Summary)
All staff and learners agree to:
- use only approved platforms;
- communicate respectfully and professionally;
- not share personal contact details;
- not record or distribute content without permission;
- report concerns immediately;
- comply with safeguarding and behaviour expectations.
A full Acceptable Use Agreement is issued and signed at induction.
Appendix B – Responsible Use of AI Policy v10.26 (Summary)
The Haven permits limited, ethical use of AI tools where they:
- support accessibility or workload reduction;
- do not replace professional judgement;
- do not process personal data without approval;
- are transparent and explainable.
AI must not be used to:
- generate safeguarding decisions;
- analyse learner behaviour without consent;
- replace relational judgement.
All AI use is overseen by senior leadership and aligned with data protection and safeguarding principles.
12. Related Policies and Guidance
- Online Safety & Acceptable Use
- KCSIE 2025
- UK GDPR / Data Protection Act 2018