Safety & Trust

Your safety is our foundation

eStudent 360 combines verified mentor profiles, AI-powered content moderation, and dedicated human oversight to create the safest mentoring environment possible.

  • 100% of mentors verified before matching
  • Real-time AI content monitoring on all messages
  • 24/7 safety team review and response

Our Safety Framework

Three layers of protection

Every interaction on eStudent 360 passes through multiple layers of safety checks — from mentor verification to real-time AI moderation and human oversight.

Know Your Mentor

Every mentor undergoes a rigorous multi-step verification process before they can connect with students.

Manual Moderation

A dedicated safety team reviews flagged content and responds to reports within minutes, not hours.

AI-Assisted Moderation

Our AI system scans every message in real time, detecting inappropriate content before it reaches students.

Know Your Mentor Policy

Every mentor is verified before they connect with students

Our Know Your Mentor (KYM) policy ensures that every mentor on eStudent 360 has been verified through a rigorous multi-step process. Mentors progress through four verification tiers, and their access to students is gated by their verification level.

  • Pending: New sign-ups awaiting initial verification — cannot be matched with students.
  • Low: Identity confirmed — can participate in group sessions and content pathways only.
  • Medium: Background check cleared — can be matched for micro-mentoring and group sessions.
  • High: Full verification complete — approved for all mentoring types, including sustained one-on-one sessions.
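The tier gating described above can be pictured as a simple lookup: each mentoring type has a minimum verification tier, and access is granted only at or above it. A minimal sketch in Python, where the tier names come from the policy but the mentoring-type keys and function names are illustrative, not eStudent 360's actual API:

```python
from enum import IntEnum


class VerificationTier(IntEnum):
    # Ordered tiers mirroring the KYM policy above
    PENDING = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Minimum tier required for each mentoring type (illustrative keys)
REQUIRED_TIER = {
    "group_session": VerificationTier.LOW,
    "content_pathway": VerificationTier.LOW,
    "micro_mentoring": VerificationTier.MEDIUM,
    "one_on_one": VerificationTier.HIGH,
}


def can_access(tier: VerificationTier, mentoring_type: str) -> bool:
    """A mentor may access a mentoring type only at or above its minimum tier."""
    return tier >= REQUIRED_TIER[mentoring_type]
```

Because the tiers are ordered, a single comparison enforces the whole policy: a Pending mentor fails every check, and a High mentor passes all of them.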

Identity Verification

Government-issued ID verification confirms each mentor's real identity and prevents fraudulent accounts.

Background Check

Criminal record checks and reference verification ensure mentors meet our safety standards.

Credential Review

Professional credentials, qualifications, and employment history are verified by our team.

Ongoing Monitoring

Verified mentors are continuously monitored through session feedback, student ratings, and AI safety systems.

Content Filtering

Pre-compiled pattern matching catches inappropriate language, harassment, and explicit content across all messages.

Evasion Detection

Catches attempts to bypass filters using character substitution, spacing tricks, and coded language.

Pattern Analysis

Analyses conversation patterns over time to identify grooming behaviour and relationship boundary violations.

Self-Harm Detection

Specialised detection for self-harm indicators that immediately escalates to trained human reviewers.
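The filtering and evasion-detection ideas above can be illustrated with a tiny sketch: normalise a message to undo common character substitutions and inserted spacing, then match it against a pre-compiled pattern. The substitution map, the placeholder terms, and the function names are all illustrative; a production system would use far larger curated term lists and statistical models alongside this:

```python
import re

# Illustrative map for common character-swap evasion ("l33t"-style)
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}
)

# Pre-compiled pattern; "badword" and "slur" stand in for a real term list
BLOCKED = re.compile(r"badword|slur", re.IGNORECASE)


def normalise(text: str) -> str:
    """Undo simple evasion: character substitution and separators between letters."""
    text = text.lower().translate(SUBSTITUTIONS)
    # Collapse spacing/punctuation inserted between letters, e.g. "b.a.d w o r d"
    return re.sub(r"(?<=\w)[\s._\-*]+(?=\w)", "", text)


def is_flagged(message: str) -> bool:
    """Flag a message if the normalised form matches the blocked pattern."""
    return bool(BLOCKED.search(normalise(message)))
```

Normalising before matching is what defeats "b4d w0rd" or "b.a.d-w.o.r.d": both collapse to the same underlying token the filter already knows.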

AI-Assisted Moderation

Intelligent protection that never sleeps

Our AI moderation system analyses every message exchanged on the platform in real time. Using pattern recognition, linguistic analysis, and behavioural modelling, it detects and flags inappropriate content before it can cause harm.

  • Processes every message in under 50 ms — faster than you can read it
  • Detects evasion tactics like character substitution and coded language
  • Self-harm detection triggers immediate escalation to human reviewers
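The triage the bullets describe — self-harm flags escalate immediately, everything else joins the human review queue — can be sketched as a small router. The category labels and class names below are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Flag:
    message_id: str
    category: str  # e.g. "harassment", "explicit", "self_harm" (illustrative labels)


@dataclass
class ModerationRouter:
    """Sketch of AI-flag triage: self-harm escalates immediately to trained
    human reviewers; all other flags are queued for human review."""

    escalations: List[Flag] = field(default_factory=list)
    review_queue: List[Flag] = field(default_factory=list)

    def route(self, flag: Flag) -> str:
        if flag.category == "self_harm":
            self.escalations.append(flag)  # alert the on-call safety team now
            return "escalated"
        self.review_queue.append(flag)  # reviewed by a trained team member
        return "queued"
```

The design choice worth noting is that the AI never takes final action on its own: every path out of the router ends with a human.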

Manual Moderation

Trained humans behind every safety decision

AI catches threats fast, but humans make the final call. Our trained safety team reviews every flagged interaction, investigates reports, and takes action — ensuring no false positives harm innocent users and no real threats slip through.

  • Every AI flag is reviewed by a trained safety team member
  • User reports are investigated and resolved within hours
  • Fair appeals process for users who believe they were flagged incorrectly

Human Review

Every AI-flagged interaction is reviewed by a trained team member who makes the final decision.

Safety Team

Dedicated moderators with training in child safety, harassment prevention, and crisis response.

Instant Alerts

Critical flags trigger immediate alerts to the safety team with full conversation context.

Fair Appeals

Users can appeal moderation decisions through a transparent process with human review.

Guardian oversight for minors

For students under 18, guardians have full visibility and control over their mentoring experience. No session happens without guardian consent.

  • Guardian consent required before any mentoring session can be booked
  • Dedicated guardian dashboard with visibility into matches, sessions, and messages
  • Real-time notifications for all session bookings and mentor interactions
  • Guardians can revoke consent and end mentoring relationships at any time
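The consent rule above amounts to a simple gate: for a student under 18, no session can be booked without an active guardian consent, and revoking consent closes the gate again. A minimal sketch with illustrative names:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GuardianConsent:
    """Illustrative consent record for a student under 18."""

    student_id: str
    granted: bool = False

    def grant(self) -> None:
        self.granted = True

    def revoke(self) -> None:
        # Guardians can withdraw consent at any time
        self.granted = False


def can_book_session(student_age: int, consent: Optional[GuardianConsent]) -> bool:
    """No session is booked for a minor without an active guardian consent."""
    if student_age >= 18:
        return True
    return consent is not None and consent.granted
```

Keeping the check in one place means every booking path, match, or rebooking hits the same rule, so a revocation takes effect everywhere at once.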

Report anything, any time

If something doesn't feel right, we make it easy to report and fast to respond. Every report is taken seriously and investigated by our safety team.

1. Flag or Report

Use the one-click report button on any message, profile, or session to flag a concern.

2. Immediate Review

Our safety team is alerted instantly and begins investigating with full context.

3. Swift Action

Appropriate action is taken — from warnings to account suspension — and you're notified of the outcome.

Safe mentoring starts here

Join a platform where every conversation is protected, every mentor is verified, and every student matters.