Technovation Girls Senior Division

Girls practice safer responses to online abuse before real harm happens.

Interactive scenarios help students recognize warning signs, make safer choices, and learn from realistic outcomes. SisterShield combines verified research, citation-aware AI support, and teacher review to make digital safety education more credible, engaging, and actionable.

View Research, Testing & Business Evidence
Judge Quick Access: one-click demo login
Quick Exit on every page · Verified sources · Teacher-reviewed content · Bilingual · Built for schools & youth orgs
SDG 5: Gender Equality · SDG 16: Peace, Justice & Strong Institutions

Designed for students, educators, and youth-support organizations

SisterShield scenario player showing an AI-generated visual novel with illustration, story text, and branching choice buttons
UN Women · WHO · UNESCO · UNICEF · ITU · ECPAT International · U.S. Department of Justice

The Problem

Online abuse is widespread. Current prevention tools aren't working.

Yet most prevention education is passive: text-heavy PDFs, one-time assemblies, or awareness posters. And most digital safety tools focus on surveillance (monitoring and blocking) rather than building the skills students need to protect themselves.

Current approaches fall short

  • Static resources don't build decision-making skills
  • Monitoring tools surveil rather than empower
  • Most materials are English-only and culturally narrow
  • Students get information about threats but never practice responding

16–58%

of women have experienced technology-facilitated violence

ITU Hub, citing UN Women (2024)

The Solution

See how students learn to respond to real threats.

1

Choose a course

Students browse topic-specific courses (cyberstalking, digital boundaries, image-based abuse, healthy online relationships) and start learning at their own pace.

SisterShield courses page showing topic-colored course cards
2

Play through a scenario

Each course is an interactive visual novel. Students read a story, face realistic situations, and make choices. The story branches based on their decisions, with AI-generated illustrations and text grounded in verified research.

Interactive scenario player with AI-generated illustration and choice buttons
3

Learn from every choice

When a student makes a risky choice, they receive a Learning Moment: not a punishment, but a supportive explanation of why the choice was risky and what a safer response looks like. Every fact is traceable to a source document.

Citation panel showing source attribution for AI-generated content
4

Teachers track and review

Teachers monitor student progress, review AI-generated content before publication, and maintain full control over what students see. No AI content reaches learners without teacher approval.

Student progress dashboard with completion stats

Designed for Safety

Every feature exists for a reason.

Quick Exit

One click or the Escape key instantly navigates to a safe external site and clears the session. Because some users may be in danger while learning about danger.
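The behavior described above can be sketched as a small browser handler. This is a hypothetical illustration, not SisterShield's actual code: the safe destination URL and the storage key are assumptions.

```typescript
// Hypothetical Quick Exit sketch. SAFE_EXIT_URL and the "auth"
// storage key are illustrative assumptions, not the real product's.
const SAFE_EXIT_URL = "https://www.google.com";

// Pure predicate: should this key press trigger Quick Exit?
function isQuickExitKey(key: string): boolean {
  return key === "Escape";
}

// Clear session state, then replace the current page so the visit
// does not linger in the current history entry.
function quickExit(): void {
  sessionStorage.clear();          // drop in-session data
  localStorage.removeItem("auth"); // drop any persisted auth token
  window.location.replace(SAFE_EXIT_URL);
}

// Wire it up globally (browser only; guarded so this file also
// loads outside a browser).
if (typeof window !== "undefined") {
  window.addEventListener("keydown", (e) => {
    if (isQuickExitKey(e.key)) quickExit();
  });
}
```

Using `location.replace` rather than assigning `location.href` is the meaningful design choice here: it swaps out the current history entry instead of pushing a new one, so the back button is less likely to return someone to the learning page.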

Source-Cited AI

Every AI-generated scenario is grounded in verified research through Retrieval-Augmented Generation. Every fact can be traced to a specific source document, page, and relevance score.

Evidence-Based Safety Mentor

A RAG-powered chatbot helps students understand topics in technology-facilitated violence against women and girls (TF-VAWG), with responses grounded in 153 verified research sources, not generic AI answers.

Teacher Review Gate

No AI-generated content reaches students without teacher approval. Teachers review, edit, and publish courses with full control over what learners see.

Teacher-Created Course Content

Teachers generate and customize interactive courses using AI assistance. Every scenario is authored, reviewed, and published by educators, not auto-generated.

Interactive Visual Storytelling

AI-generated illustrations and branching narratives bring safety scenarios to life. Students learn through immersive visual novels, not static PDFs or lectures.

Multilingual Ready

Built with i18n from day one. Currently supporting English and Korean, with an architecture designed to add any language. Interface, courses, and AI-generated content are all translatable. Language should not be a barrier to safety education.

Interactive Scenarios

Not lectures. Not PDFs. Students practice real decisions in branching visual novel scenarios with consequences, reframes, and evidence-based guidance.

Trauma-Informed Design

Calm, low-saturation color palette. No alarming or graphic imagery. User-controlled pacing. Crisis resources accessible from every page.

Why This Approach

Most safety tools inform. SisterShield lets students practice.

Awareness websites provide information but don't build skills. Monitoring tools block and surveil. They protect through restriction, not empowerment. SisterShield takes a different approach: students practice real decisions in realistic scenarios, learn from consequences, and build the judgment they need before they face real threats.

Capability | Awareness sites & PDFs | Monitoring tools | SisterShield
Interactive decision practice | No | No | Yes
AI-personalized scenarios | No | No | Yes
Every fact source-cited | Sometimes | No | Yes
Empowerment, not surveillance | Partial | No | Yes
Multilingual (i18n-ready) | Rarely | Some | Yes
Teacher oversight and review | No | No | Yes
Trauma-informed design | Rarely | No | Yes

153

verified sources

Research Foundation

How 153 verified sources shape every scenario.

SisterShield's AI doesn't generate content from nothing. Every scenario is grounded in a curated knowledge base of 153 documents from leading international organizations working on violence prevention, child protection, and digital safety.

These documents are processed through a Retrieval-Augmented Generation (RAG) pipeline: each source is extracted, chunked into passages, embedded as vectors, and stored in a searchable knowledge base. When the AI generates a scenario, it retrieves the most relevant evidence and weaves it into the story, with citation markers that trace every claim back to its source.
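The retrieval step described above can be sketched in a few lines. This is an illustrative toy, not SisterShield's actual pipeline: the chunk size, the term-frequency "embedding" (a stand-in for a real model such as text-embedding-3-small), and the document names are all assumptions for demonstration.

```typescript
// Toy sketch of RAG retrieval: chunk → embed → score → cite.
interface Chunk {
  source: string; // source document name (illustrative)
  page: number;   // page the passage came from
  text: string;
}

// Split a document into fixed-size word windows ("chunking").
function chunk(source: string, page: number, text: string, size = 40): Chunk[] {
  const words = text.split(/\s+/);
  const out: Chunk[] = [];
  for (let i = 0; i < words.length; i += size) {
    out.push({ source, page, text: words.slice(i, i + size).join(" ") });
  }
  return out;
}

// Toy embedding: term-frequency vector over a shared vocabulary.
// A real system would call an embedding model here instead.
function embed(text: string, vocab: string[]): number[] {
  const tokens = text.toLowerCase().split(/\W+/);
  return vocab.map((w) => tokens.filter((t) => t === w).length);
}

// Cosine similarity between two vectors; 0 if either is all zeros.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Retrieve the top-k chunks for a query with relevance scores, so a
// generated claim can carry a [source, page, score] citation marker.
function retrieve(query: string, chunks: Chunk[], vocab: string[], k = 2) {
  const q = embed(query, vocab);
  return chunks
    .map((c) => ({ ...c, score: cosine(q, embed(c.text, vocab)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

A production pipeline swaps the toy embedding for a model call and the in-memory array for a vector store, but the shape of the step (score every passage against the query, keep the best, attach their provenance) is the same.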

UN Women · WHO · UNESCO · UNICEF · ITU · ECPAT International · U.S. Department of Justice

Source Breakdown

Trauma-Focused CBT Clinical Materials (137 documents)

Assessment tools, treatment worksheets, coping skills guides, and clinical protocols from established TF-CBT programs

TF-VAWG Policy & Research (10 documents)

Policy frameworks, strategy documents, and global trend reports on technology-facilitated violence

UN Women - Global Trends Report, Prevention Strategy, Model Legislation Framework · UNESCO - Global Education Monitoring Reports · UNICEF - Online Platform Regulation Policy Brief · ITU - Child Online Protection Workbook, AI & Digital Safety

Reference & Legal (6 documents)

Prosecution guides, terminology standards, and victim-centered approach frameworks

ECPAT International - Terminology Guidelines (2nd Edition) · U.S. DOJ - Prosecution Guide, Victim-Centered Approach · CDT - Rapid Response Report

153 documents total

How research shaped the product

Evidence & Research

Sources from UN and ITU research

"Every year, millions of women and girls are affected by digital abuse and technology-facilitated violence."
ITU Hub, 11/25/2024

"AI can be both a threat and a solution when it comes to misinformation."
ITU AI for Good, 7/9/2024

"Global studies estimate that between 16 per cent and 58 per cent of women have experienced this type of violence."
ITU Hub (citing UN Women), 11/25/2024

Built Responsibly

Privacy, safety, accessibility, and responsible AI, by design.

Privacy

Minimal data collection. Quick Exit clears the session instantly. No tracking pixels, no third-party analytics, no advertising SDKs. Student progress data is never used for AI content generation.

Accessibility

Built toward WCAG 2.1 AA compliance. Full keyboard navigation. Screen reader support with ARIA labels. Minimum 44px touch targets. Multilingual interface with proper language attributes.

Trauma-Informed Design

Every screen follows trauma-informed care principles: safety, trustworthiness, choice, collaboration, and empowerment. The color palette is deliberately calm. No graphic or alarming imagery. Users control their own pace.

Responsible AI Use

All AI-generated educational content is grounded in verified sources through RAG. A mandatory teacher review gate ensures no AI output reaches students without human verification.

AI tools used: Claude Code (development), GPT-4o (story generation), DALL-E 3 (illustrations), text-embedding-3-small (RAG embeddings)

For Schools

Bring evidence-based digital safety education to your students.

SisterShield fits into existing digital citizenship, health education, or advisory programs. Teachers maintain full control: review and approve every AI-generated course before it reaches students, track individual progress, and explore the research behind each scenario.

  • Interactive scenario-based courses on cyberstalking, digital boundaries, image-based abuse, and more
  • Student progress tracking with quiz scores and completion data
  • Full review control over all AI-generated content
  • Courses take 10–15 minutes each, designed for class periods or independent study
  • Multilingual support: currently EN + KO, extensible to any language
  • Free access during the pilot program
Teacher progress dashboard showing student completion stats, quiz scores, and course progress tracking

How We Built This

From research to product, built through testing and iteration.

Phase 1

Research Foundation

Curated 153 verified documents from UN Women, WHO, UNESCO, UNICEF, ITU, ECPAT, and U.S. DOJ. Built a RAG pipeline to make every AI-generated claim traceable to its source.

Phase 2

The Enrollment Pivot

Originally built class-based enrollment with assignments. Then realized: enrollment creates data trails visible to anyone with device access, including potential abusers. Deleted three database models and rebuilt around anonymous, self-serve access.

Phase 3

User Testing & Feedback

Tested with students and educators. Students engaged longer with branching scenarios than static content. Teachers valued the review gate. Quick Exit was activated during testing, confirming the safety need is real.

Phase 4

Continuous Iteration

Based on feedback: added citation panels for source transparency, expanded knowledge base from 47 to 153 documents, implemented trauma-informed color palette, added multilingual support.

TODO: Add specific user testing data: number of participants, testing context, and key metrics

User Testing & Iteration

Built with real feedback, not assumptions.

Every major design decision was informed by testing with students and educators. Here are the key insights that shaped the product.

Students preferred branching scenarios

During testing, students engaged significantly longer with interactive branching stories compared to static safety content. This validated the visual novel approach.

Teachers valued the review gate

Educators consistently rated the mandatory teacher review as the most important trust feature. No AI content reaches students without human approval.

Quick Exit was used in testing

The Quick Exit button was activated during real user testing sessions, confirming that the safety need is genuine, not theoretical.

The enrollment pivot

We deleted three database models after realizing class enrollment creates data trails visible to potential abusers. Rebuilt around anonymous, self-serve access.

TODO

Testing Sessions

with students & educators

12+

Design Changes

from user feedback

47 → 153

Knowledge Base

documents expanded

Common Questions

What judges, parents, and schools want to know.

Is this appropriate for students under 18?
Yes. All content is teacher-reviewed before publication. The platform uses trauma-informed design: calm visuals, user-controlled pacing, Quick Exit functionality, and no graphic or alarming imagery. Students practice recognizing threats. They are never exposed to harmful content.
How does the AI work? Is it safe?
SisterShield uses Retrieval-Augmented Generation (RAG). Every scenario is grounded in 153 verified research sources, not generated from scratch. Every claim can be traced to a specific source document, page, and relevance score. A mandatory teacher review gate ensures no AI content reaches students without human verification.
What data do you collect?
Minimal. Login credentials, course progress, and quiz scores. No tracking pixels, no third-party analytics, no advertising SDKs. Quick Exit clears the session instantly. Student data is never used for AI training or content generation.
Can this work in our school?
Yes. SisterShield fits into existing digital citizenship, health education, or advisory programs. Courses take 10–15 minutes each. Teachers maintain full control over content. We offer a free pilot program for schools and NGOs.
What languages are supported?
Currently English and Korean. The platform is built with internationalization from day one. Adding new languages requires only translation files, not code changes.
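A minimal sketch of what "translation files, not code changes" can look like. The keys, strings, and Korean translations here are illustrative, not SisterShield's actual files.

```typescript
// Illustrative i18n setup: each locale is a flat dictionary of
// key → string pairs. The keys and translations are assumptions.
type Messages = Record<string, string>;

const en: Messages = {
  "quickExit.label": "Quick Exit",
  "course.start": "Start course",
};

// Adding a language means adding one more dictionary like this one.
const ko: Messages = {
  "quickExit.label": "빠른 나가기",
  "course.start": "코스 시작",
};

const locales: Record<string, Messages> = { en, ko };

// Look up a key in the active locale, falling back to English,
// then to the key itself, so a missing translation never breaks
// the interface.
function t(locale: string, key: string): string {
  return locales[locale]?.[key] ?? en[key] ?? key;
}
```

The fallback chain is the practical detail: an untranslated string degrades to English rather than rendering blank, which keeps a partially translated locale usable while translation files catch up.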
Who built this and why?
SisterShield was built for the Technovation Girls 2025 competition by a team focused on addressing technology-facilitated violence against women and girls through research-backed, interactive education.

See SisterShield in action.

Experience the platform that turns safety knowledge into practiced skill.