KairosED™

Synthetic Media and Digital Replica Policy

Effective Date: March 19, 2026

KairosED™ explicitly prohibits the use of this platform to generate synthetic media, deepfakes, or digital replicas of real individuals. This policy explains what is prohibited, why, and how violations are detected and handled.

1. Definitions

Synthetic Media

Any audio, video, image, or text content generated or significantly altered by artificial intelligence that depicts a real person in a way they did not actually perform or authorize. This includes AI-generated text written in the first person as if authored by a real individual.

Deepfake

A specific category of synthetic media in which AI is used to create realistic but fabricated video, audio, or images of real individuals — making it appear they said or did something they did not.

Digital Replica

A computer-generated representation of a specific, identifiable real person's likeness, voice, or mannerisms created without that person's explicit consent. As defined under emerging federal and state legislation (including the NO FAKES Act framework), this includes AI-generated voice clones and visual likenesses.

2. Prohibited Uses

The following uses of KairosED™ are explicitly prohibited:

  • Generating text, images, audio, or video that falsely depicts a real student, teacher, administrator, parent, or public figure saying or doing something they did not say or do;
  • Creating AI-generated personas or content that impersonates a specific, identifiable real person;
  • Generating written content designed to simulate another person's authentic voice for the purpose of deception;
  • Creating synthetic media of students — regardless of age — in any context;
  • Submitting prompts intended to elicit synthetic media or digital replicas from the AI model, even if the model refuses;
  • Uploading or sharing synthetic media of real individuals through the platform's document upload features.

3. What Is Permitted

The following uses remain permitted:

  • Academic discussion of synthetic media technology (e.g., a media literacy lesson that explains how deepfakes work);
  • Creating clearly fictional characters without reference to specific real individuals;
  • Writing historical fiction or educational narratives about historical figures in an explicitly fictional context, clearly labeled as such;
  • Generating lesson materials that teach students to identify and critically evaluate synthetic media.

4. Why This Policy Exists

K-12 schools face serious and growing risks from synthetic media:

  • Deepfake images and videos of students have been used in bullying and harassment incidents across the country;
  • Non-consensual synthetic intimate imagery (NCII) involving minors constitutes child sexual abuse material (CSAM) regardless of whether actual children were photographed;
  • Synthetic media of teachers and administrators has been used to damage professional reputations;
  • Synthetic media undermines the authenticity required for meaningful educational assessment.

KairosED maintains this policy in alignment with federal law (18 U.S.C. § 2256 et seq. for CSAM), applicable state synthetic media laws, and the emerging federal NO FAKES Act framework.

5. Detection and Enforcement

KairosED™ enforces this policy through:

  • Keyword and pattern filters that detect synthetic media generation attempts in user prompts and AI outputs;
  • Automatic BLOCK action on prompts requesting deepfakes or realistic depictions of named real individuals;
  • Audit logging of all detection events with administrator notification;
  • Account suspension for verified violations;
  • Mandatory reporting to law enforcement where content constitutes CSAM.
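At a high level, the first three enforcement steps above — pattern-based detection, an automatic BLOCK action, and audit logging — could be sketched as a simple prompt screen. This is an illustrative simplification only: the pattern list, function names, and data structures here are hypothetical and do not represent KairosED's actual detection system.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative patterns only; a production system would use far more
# sophisticated classifiers than keyword matching.
BLOCK_PATTERNS = [
    re.compile(r"\bdeepfake\b", re.IGNORECASE),
    re.compile(r"\b(voice|face)\s+clone\b", re.IGNORECASE),
]

@dataclass
class AuditEvent:
    """A single detection event, retained for administrator review."""
    timestamp: str
    action: str
    matched_pattern: str
    prompt_excerpt: str

AUDIT_LOG: list[AuditEvent] = []

def screen_prompt(prompt: str) -> str:
    """Return "BLOCK" and record an audit event if the prompt matches
    a synthetic-media pattern; otherwise return "ALLOW"."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(prompt):
            AUDIT_LOG.append(AuditEvent(
                timestamp=datetime.now(timezone.utc).isoformat(),
                action="BLOCK",
                matched_pattern=pattern.pattern,
                prompt_excerpt=prompt[:80],
            ))
            return "BLOCK"
    return "ALLOW"
```

In this sketch, a blocked prompt produces both the BLOCK decision and a logged event (which would then trigger administrator notification), while benign prompts pass through without logging.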

No district administrator configuration can disable synthetic media protections.

6. Reporting

To report a synthetic media incident or suspected policy violation:

KairosED — Safety Team
safety@kairosed.ai

See also our Content Safety Framework and Terms of Service (Section 15).