1. Scope of this Policy
This Policy applies to all AI-generated, AI-assisted, or synthetic media created or used by the HEADTURNED Foundation or HEADTURNED PPV, including:
- AI-generated images, videos, or audio
- synthetic reconstructions (e.g., “what happened here” ecological scenes)
- predictive or simulated environments (e.g., future habitat models)
- AI-assisted enhancements of real footage or photographs
- AI-generated educational animations
- digital doubles, avatars, or illustrative characters
It applies to Foundation communications, Insights articles, research explainers, education materials, and, where relevant, PPV creator-produced content.
2. Transparency & clear labelling
The Foundation commits to transparency wherever AI or synthetic media is used. This includes:
- labelling AI-generated or AI-altered content clearly;
- explaining where a scene or reconstruction represents an interpretation;
- identifying synthetic voices or digital doubles in PPV content; and
- avoiding the use of synthetic media in ways that could mislead audiences.
When AI media is used for storytelling, visualisation, or education, captions or on-page notes will clarify its purpose and the extent to which it is synthetic.
3. No fabrication of events, behaviours, or evidence
AI media must never be used to present events, animal behaviours, ecological impacts, or Sanctuary care scenarios as real if they did not actually happen.
Synthetic media may be used for illustrative or educational purposes—for example, demonstrating how a habitat could recover, how a future water system might work, or how a possible biodiversity scenario might unfold.
But it must not be used to fabricate real cases, misrepresent incidents, or create misleading narratives about wildlife, land, or care activities.
4. Ethical safeguards in content generation
The Foundation does not generate or publish AI media that:
- depicts harm to wildlife or people unless clearly educational;
- uses real individuals' likenesses (including staff, volunteers, or members of the public) without explicit consent;
- imitates creators or speakers without disclosure and written agreement;
- reinforces negative stereotypes or biases in animal welfare, ecology, or community contexts;
- uses private or sensitive data to train or refine AI models.
All AI generation must align with our safeguarding, privacy, and data protection commitments.
5. AI in research, education & conservation
AI may be used to support conservation and Sanctuary outcomes—for example:
- wildlife identification from camera traps;
- predictive modelling of habitat changes;
- disease or behaviour pattern detection;
- vertical farming optimisation;
- energy modelling and resource efficiency;
- educational visualisations.
Where AI models influence real decisions (e.g., habitat shaping, animal release timing, food systems), human oversight and validation are mandatory.
6. Synthetic scenes, reconstructions & “what if” visualisations
The Foundation may create synthetic scenes to visualise:
- ecosystem recovery or collapse scenarios;
- rewilding potential for degraded land;
- how Sanctuary spaces may evolve over time;
- Blueprints for Tomorrow concepts;
- vertical farming installations and layouts;
- educational or public explainer content.
These must be clearly identified as conceptual, predictive, or educational—not as photography or documentation of actual events.
7. AI use by PPV creators
PPV creators may use AI tools, including image generation, editing, storyboarding, animation, and automated translation. However, they must:
- disclose synthetic or AI content where it may affect viewer understanding;
- not use AI to impersonate real individuals without explicit consent;
- not create misleading wildlife or “documentary-style” synthetic scenes;
- comply with content rules, safeguarding, and ecological ethics.
Creators producing harmful, deceptive, or inappropriate synthetic content may face content removal or account action.
8. Data training, privacy & model governance
The Foundation does not use personal data, Sanctuary medical records, CCTV, or sensitive conservation data to train AI systems unless:
- explicit consent is obtained where required;
- data minimisation and anonymisation standards are applied;
- processing is lawful, ethical, and mission-aligned;
- access is strictly controlled and auditable.
AI models must be reviewed to ensure they do not introduce ecological, welfare, or safety risks.
9. Preventing harmful or deceptive uses
AI or synthetic media must never be used to:
- misrepresent conservation outcomes;
- fabricate animal welfare concerns or success stories;
- deceive funders or the public;
- create political messaging or campaigning material;
- harass individuals or communities.
Misuse may result in disciplinary, partnership, or platform action.
10. Review & updates
This Policy is reviewed regularly to reflect developments in AI, media practice, and regulatory guidance. Updated versions will be published on this page.
Questions about synthetic or AI-generated media should be directed through our contact form or raised via relevant governance channels.
