Innovation – PrivyGuard

Meet PrivyGuard: Listening, Learning, and Protecting with Love

PrivyGuard is a prototype built with OpenAI’s GPT-4. It is not intended for use by children in its current prototype form. Responses may include inaccuracies; please confirm all information independently (more details at the bottom of the page).


What Is PrivyGuard?

PrivyGuard is a prototype AI-powered privacy mentor that helps children and families make sense of digital privacy in a way that’s simple, safe, and emotionally attuned.

Instead of overwhelming legal language and endless “terms and conditions,” PrivyGuard translates app permissions, data risks, and digital footprints into clear, child-friendly conversations—so kids can understand what they’re agreeing to, and why it matters.

Built using the L.O.V.E.ai™ model, PrivyGuard listens with care, explains with clarity, and encourages curiosity. It turns privacy from a checkbox into a meaningful conversation—one grounded in relationship, trust, and reflection.


What PrivyGuard Offers

  • Kid-Friendly Privacy Rating system for apps and platforms
  • Plain-language translations of Terms of Service and permissions
  • Interactive tools like games, stories, and scenario prompts to build digital literacy
  • Support for relational conversations between children and caregivers
  • A focus on emotional safety, not fear—helping kids develop confidence and consent awareness
  • A foundation in the L.O.V.E.ai™ model, designed to nurture trust, empathy, and critical thinking
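
For readers who want a concrete picture of what a Kid-Friendly Privacy Rating could look like in code, here is a minimal sketch. The permission categories, weights, and traffic-light thresholds are assumptions invented for illustration; they do not reflect PrivyGuard’s actual rating criteria.

```python
from dataclasses import dataclass

# Illustrative rubric only: PrivyGuard's actual Kid-Friendly Privacy Rating
# criteria are not published here, so these categories, weights, and
# thresholds are assumptions made up for this sketch.
PERMISSION_WEIGHTS = {
    "location": 3,
    "contacts": 3,
    "microphone": 2,
    "camera": 2,
    "advertising_id": 2,
    "notifications": 1,
}


@dataclass
class AppProfile:
    name: str
    permissions: list[str]


def kid_friendly_rating(app: AppProfile) -> str:
    """Turn an app's requested permissions into a simple traffic-light rating."""
    risk = sum(PERMISSION_WEIGHTS.get(p, 1) for p in app.permissions)
    if risk <= 2:
        return "Green: asks for very little"
    if risk <= 5:
        return "Yellow: worth a conversation with a grown-up first"
    return "Red: asks for a lot of data; talk it through before agreeing"


if __name__ == "__main__":
    game = AppProfile(name="ExampleGame", permissions=["location", "advertising_id"])
    print(f"{game.name}: {kid_friendly_rating(game)}")
```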

PrivyGuard isn’t just a tool—it’s a companion for digital confidence.

Created within The Love Lab: Child Futures Innovation PlaySpace, it’s part of a larger movement to reimagine technology through love, trust, and the voices of children.


Why It Matters

In today’s digital world, children are asked to make choices they can’t possibly understand—clicking “I agree” on apps and platforms that collect their data, track their behavior, and shape their online experience. But true consent isn’t a checkbox. It’s a developmental process that requires understanding, trust, and reflection.

PrivyGuard is an AI-powered privacy mentor that meets children where they are—emotionally, cognitively, relationally, and developmentally. It translates complex privacy policies into clear, age-appropriate language, guides children through real-life digital dilemmas, and empowers them to ask thoughtful questions before sharing their data.

Built on the L.O.V.E.ai design model, PrivyGuard isn’t just a tool—it’s a new way of thinking about ethical technology for children. It protects not only data, but dignity. It nurtures agency, not anxiety.

And it gives children something they rarely receive online: a safe, trusted space to learn, reflect, and grow.

Because digital privacy isn’t just a policy issue—it’s a childhood issue.


What It Does

PrivyGuard is a conversational AI tool designed to empower kids and caregivers through co-learning. Using humor, simplicity, and emotional attunement, it transforms privacy from something confusing into something kids can truly understand.

  • Translates Terms of Service into playful, plain-language explanations
  • Rates apps with a simple Kid-Friendly Privacy Score
  • Engages kids with interactive tools like “Would You Rather” games and privacy storytelling
  • Fosters meaningful conversations about trust, safety, and digital choices
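
For technically curious readers, the short Python sketch below shows one plausible way a GPT-4-backed translation like this could be wired up through OpenAI’s API. The function name, prompt wording, and example excerpt are illustrative assumptions, not PrivyGuard’s actual code or prompts.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def explain_terms_for_kids(terms_excerpt: str, age: int = 10) -> str:
    """Ask GPT-4 to rewrite a terms-of-service excerpt in plain, kid-friendly language."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a kind, patient privacy mentor for children. "
                    f"Explain things simply for a {age}-year-old, "
                    "without legal jargon or fear-based language."
                ),
            },
            {
                "role": "user",
                "content": (
                    "Explain this part of an app's terms of service in friendly, "
                    "everyday words, then suggest one question the child could ask "
                    f"a caregiver about it:\n\n{terms_excerpt}"
                ),
            },
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    excerpt = "We may share your usage data with third-party advertising partners."
    print(explain_terms_for_kids(excerpt, age=9))
```

In any real, child-safe deployment, a step like this would sit behind caregiver supervision and content-safety checks, as the disclaimer below describes.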

Who It’s For

  • Parents & caregivers who want to talk with—not over—their children about privacy
  • Educators who teach digital citizenship or media literacy
  • Designers & policymakers who believe tech should be child-centered and emotionally intelligent


Disclaimer: Prototype Status

PrivyGuard is currently a prototype developed using OpenAI’s GPT-4 and is intended for educational use under adult supervision.

This tool is not designed for direct use by children and should only be accessed by caregivers, educators, or professionals for the purpose of guided learning, evaluation, or review. While PrivyGuard aims to provide developmentally appropriate, emotionally attuned responses, it remains subject to the known limitations of large language models—including occasional hallucinations, inaccuracies, or outdated information.

We strongly recommend confirming any critical information independently before taking action.

PrivyGuard does not collect or store personal data and is not intended to provide legal, clinical, or safety advice.

We are actively working toward a fully compliant, child-safe version grounded in ethical AI development and informed by child development science. If you’re interested in collaborating on testing, co-design, or research, please contact us at nikki@childart.ca.

