Google Flow creates cinematic video that can be almost indistinguishable from real footage. As tools like this move closer to classroom use, schools need to understand why video realism changes the safeguarding risk profile and why clear boundaries must be set before students ever gain access.
TL;DR for SLT: If students can access Google Flow, schools must treat it as a high-risk video tool and put clear safeguarding limits in place before use, not after harm occurs.
Why this matters for schools
Google has indicated plans to make advanced AI tools, including Google Flow, available in educational contexts. This means that tools originally designed for adult filmmakers could realistically be introduced into schools or accessed by students through managed accounts.
Because Flow produces highly realistic video, its misuse carries higher safeguarding risk than many other AI tools. Without clear staff understanding and strong boundaries, there is potential for harm through bullying, non-consensual deepfake content, or emotionally manipulative material involving children.
This briefing explains why Google Flow requires particular safeguarding caution, how it could be misused by students, and what staff should be alert to if the tool becomes available in a school setting.
What is Google Flow?
Google Flow is an AI filmmaking tool that generates high-quality, realistic video from text prompts or images. It is designed for cinematic storytelling and uses advanced video models (e.g. Veo).
Unlike simple image generators, Flow specialises in:
- Realistic human movement
- Camera angles and framing
- Emotional tone and atmosphere
- Story-like sequences rather than single images
This makes it more persuasive, and therefore riskier, in a safeguarding context.
Why Flow presents higher safeguarding risk than many AI tools
1. Cinematic realism
Flow produces videos that look and feel like real footage.
Risk
- Children may struggle to tell AI-generated video from a real recording.
- Realistic output increases the scope for deception, humiliation, and emotional manipulation.
Safeguarding impact
Realistic fake videos can be used for bullying, coercion, or reputational harm.
2. Image-to-video animation
Flow can turn a still image into a moving video.
Risk
- A photo of a real child (e.g. classmate, sibling, influencer) could be animated without consent.
- This could be used to create humiliating, sexualised, or misleading content.
Safeguarding impact
This aligns with known patterns of non-consensual deepfake abuse, which UK safeguarding guidance treats as child-on-child abuse, not "just joking".
3. Story and scenario generation
Flow can generate scenes with implied relationships, emotions, or power dynamics.
Risk
- Videos can be created that imply intimacy, dependency, threat, or coercion.
- Even without explicit content, this can be grooming-adjacent or emotionally manipulative.
Safeguarding impact
This blurs boundaries around consent and appropriate relationships.
4. Bullying and harassment amplification
Because Flow outputs video rather than text:
- Harmful content spreads faster
- The impact on victims is greater
- Group sharing escalates quickly
Examples
- Fake “embarrassing” clips
- Impersonation videos
- AI-generated ridicule or shaming
Safeguarding impact
This can meet the threshold for child-on-child abuse, not just a behaviour management issue.
What Google says it does (important but not sufficient)
Google states that Flow includes:
- Blocking of sexual content involving minors
- Detection and reporting of CSAM attempts
- AI watermarking (SynthID)
- 18+ access requirements and account sanctions
Important safeguarding note
These safeguards reduce risk but do not remove it.
Schools must still manage contextual misuse, bullying, and non-sexual harms.
What schools should do now
- Brief senior leaders and safeguarding teams on Google Flow specifically, not just AI tools in general, highlighting its video realism and misuse risks.
- Brief Google Workspace administrators that Google Flow may become available to education accounts, and agree in advance whether it should be restricted, disabled, or tightly controlled.
- Update online safety and safeguarding policies to state that AI video tools must not be used with any child’s image, voice, or likeness.
- Set clear staff guidance on what is and is not permitted, including supervised use only and curriculum-linked purposes.
- Train staff to recognise misuse (bullying, impersonation, deepfake-style content, grooming-adjacent scenarios) and to escalate concerns immediately to the DSL.
- Communicate expectations to students clearly and early, framing misuse as a safeguarding issue rather than just a behaviour matter.
- Review filtering and monitoring arrangements regularly to ensure emerging AI video tools are covered and access is appropriate for age and context.
- Adopt a precautionary approach: if safeguards are unclear or access cannot be properly controlled, do not enable the tool for students.
