The 2024 eSafety Commissioner Reporting Platform Redesign: A Broken Experience at a Critical Time

In 2024, the Australian eSafety Commissioner’s office unveiled a redesigned online reporting platform—intended to help Australians report cyberbullying, image-based abuse, harmful online content, and scams. This system plays a vital role in national digital safety efforts, especially for minors, vulnerable people, and victims of harassment.

But when the new UI went live, the user experience rapidly collapsed under its own complexity, leaving victims confused, unsupported, and—in many cases—unable to complete a report. The platform’s failure didn’t just impact usability—it compromised safety and trust in one of the most important digital protections in the country.


A Labyrinth of Report Types

At the heart of the platform was the new “Report Harm” interface, which attempted to simplify the reporting process by categorising abuse into neatly defined types:

  • “Cyberbullying”
  • “Image-Based Abuse”
  • “Adult Cyber Abuse”
  • “Illegal or Restricted Content”
  • “Online Scams”

Each option led to entirely different form trees, with little to no explanation of what qualified under each label. Victims often had to guess which path was appropriate. For example:

  • Was a revenge porn case “Image-Based Abuse” or “Adult Cyber Abuse”?
  • Was harassment via email a “Scam” or “Cyber Abuse”?
  • Could one report involve multiple types?

No real-time guidance was offered. Once users picked a category, they were locked into a rigid flow, and going back reset the form entirely.
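To make the problem concrete, here is a minimal sketch of the kind of state preservation that was missing: answers to fields shared across categories could survive a category switch instead of the whole form being discarded. The field ids, category names, and data shape below are illustrative assumptions, not the platform's actual implementation.

```typescript
// Hypothetical sketch: keep shared answers when the user switches report
// category, instead of resetting the entire flow.

type Category =
  | "cyberbullying"
  | "image-based-abuse"
  | "adult-cyber-abuse"
  | "illegal-content"
  | "online-scam";

interface ReportDraft {
  category: Category | null;
  answers: Record<string, string>; // keyed by field id (illustrative ids only)
}

// Fields that appear in every category's form tree and should survive a switch.
const SHARED_FIELDS = new Set(["platformName", "incidentUrl", "description"]);

function switchCategory(draft: ReportDraft, next: Category): ReportDraft {
  // Keep answers to shared fields; drop only the category-specific ones.
  const kept: Record<string, string> = {};
  for (const [field, value] of Object.entries(draft.answers)) {
    if (SHARED_FIELDS.has(field)) kept[field] = value;
  }
  return { category: next, answers: kept };
}

// Usage: a user who started under "adult-cyber-abuse" realises the incident is
// image-based abuse; their description and URL are not lost on the switch.
let draft: ReportDraft = {
  category: "adult-cyber-abuse",
  answers: { description: "Intimate images shared without consent", incidentUrl: "https://example.com/post/123" },
};
draft = switchCategory(draft, "image-based-abuse");
console.log(draft.answers.description); // still present
```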


Overwhelming Forms, Missing Empathy

The redesigned forms were exhaustive—often asking for:

  • Screenshots
  • URLs
  • Platform usernames
  • Precise date and time logs
  • Narratives explaining context

While such data may be necessary for investigation, the platform offered no contextual support or encouragement. There were no progress indicators, no save-draft functionality, and minimal guidance on what to do if a particular field could not be completed.

For people already in distress—especially teenagers or abuse victims—this cold, technical approach created anxiety, shame, and form abandonment.

Even more damaging, if a user timed out or accidentally closed the browser, all progress was lost. This was not a minor inconvenience; for people trying to seek help, it was a recurring trauma trigger.
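For context, an autosave safeguard of the kind that was absent can be sketched in a few lines. This is a minimal illustration assuming a single-page browser form; the storage key, data shape, and field selection are assumptions, not the platform's actual code.

```typescript
// Hypothetical sketch: autosave an in-progress report to localStorage so a
// timeout or closed tab does not wipe the user's answers.

const DRAFT_KEY = "esafety-report-draft"; // illustrative key name

interface SavedDraft {
  savedAt: string;                 // ISO timestamp of the last autosave
  answers: Record<string, string>; // field name -> entered value
}

function saveDraft(answers: Record<string, string>): void {
  const draft: SavedDraft = { savedAt: new Date().toISOString(), answers };
  localStorage.setItem(DRAFT_KEY, JSON.stringify(draft));
}

function loadDraft(): SavedDraft | null {
  const raw = localStorage.getItem(DRAFT_KEY);
  return raw ? (JSON.parse(raw) as SavedDraft) : null;
}

function clearDraft(): void {
  localStorage.removeItem(DRAFT_KEY);
}

// Autosave whenever any field changes; a saved draft can be offered for
// restoration on the next page load.
const fields = document.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea");
fields.forEach((el) =>
  el.addEventListener("input", () => {
    const answers: Record<string, string> = {};
    fields.forEach((f) => {
      if (f.name) answers[f.name] = f.value;
    });
    saveDraft(answers);
  })
);
```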


Accessibility Gaps in a Critical Platform

Despite serving children, the elderly, and people with disabilities, the redesigned site failed key accessibility metrics:

  • Input fields lacked descriptive labels for screen readers
  • Font scaling broke layout grids on mobile devices
  • Keyboard navigation failed in multi-step form modals
  • CAPTCHA security checks were not accessible to screen reader users

These barriers locked out users who needed the service the most. The very platform meant to support digital inclusion had excluded vulnerable demographics by default.
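The first of those gaps, missing programmatic labels, is also the cheapest to avoid. The sketch below shows one way a form field can be built so a screen reader announces it correctly; the field names and label text are illustrative assumptions, not the platform's real markup.

```typescript
// Hypothetical sketch: give every input an explicitly associated label so
// screen readers can announce what the field is for.

function labelledField(name: string, labelText: string): HTMLDivElement {
  const wrapper = document.createElement("div");

  const label = document.createElement("label");
  label.htmlFor = name;          // explicit association: label "for" -> input "id"
  label.textContent = labelText;

  const input = document.createElement("input");
  input.id = name;
  input.name = name;
  input.type = "text";
  input.setAttribute("aria-required", "true");

  wrapper.append(label, input);
  return wrapper;
}

// Usage: build the "incident URL" field with a label a screen reader will read out.
document.querySelector("form#report")?.append(
  labelledField("incidentUrl", "Link to the harmful content (if available)")
);
```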


Confusing Follow-Up and No Transparency

Once a report was submitted, users received a confirmation email—but there was:

  • No case number
  • No estimate of response time
  • No way to log in and view updates or messages

Victims were left in the dark. Did the system register their complaint? Would anyone contact them? If new evidence emerged, could they add it to their report?

In a time when reassurance is everything, the lack of transparency and follow-up tools made users feel abandoned. For a platform tasked with protecting users from online harm, this was a devastating failure.


Inconsistent Mobile Experience

Mobile users—a major demographic for online abuse reporting—were hit hardest. The platform:

  • Lagged during form transitions
  • Failed to upload images larger than 2MB (with no compression support)
  • Broke inside Android WebView browsers embedded in apps like Instagram or TikTok

Many young users tried to report bullying or abuse directly from the same app where the abuse occurred, only to hit broken pages or unsupported browser warnings.
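The upload limit in particular is the kind of failure that client-side handling can soften. Below is a minimal sketch of downscaling a screenshot in the browser before upload rather than rejecting it outright; the 2 MB cap comes from the figure cited above, while the dimensions, quality setting, and form field name are assumptions.

```typescript
// Hypothetical sketch: downscale an oversized screenshot in the browser so it
// fits under the upload limit instead of failing the whole report.

const MAX_UPLOAD_BYTES = 2 * 1024 * 1024; // the 2 MB limit cited above

async function compressImage(file: File, maxDimension = 1600, quality = 0.8): Promise<Blob> {
  if (file.size <= MAX_UPLOAD_BYTES) return file; // already small enough

  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxDimension / Math.max(bitmap.width, bitmap.height));

  const canvas = document.createElement("canvas");
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);

  return new Promise<Blob>((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("compression failed"))),
      "image/jpeg",
      quality
    )
  );
}

// Usage: compress the selected screenshot, then attach it to the report upload.
async function attachScreenshot(file: File, form: FormData): Promise<void> {
  const blob = await compressImage(file);
  form.append("screenshot", blob, file.name);
}
```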


Public Reaction and Criticism

Digital rights groups, school counsellors, and mental health advocates spoke out within weeks of the launch. The common themes:

  • Too technical
  • Emotionally insensitive
  • Functionally fragile

Even teachers began advising students to contact helplines or community services rather than use the platform—a crushing indictment for a federal initiative.

Despite growing concern, the eSafety Commissioner’s office initially described the redesign as “a bold leap forward” and “built with technical robustness in mind.”


What Went Wrong?

  • Over-categorisation made self-reporting confusing
  • No flexibility or empathy built into the user flow
  • No accessibility testing with actual victims or stakeholders
  • Lack of progress saving punished vulnerable users
  • No visibility into post-submission process created uncertainty

The platform attempted to be structured and secure—but forgot the real humans behind every report. The outcome: a clean interface that delivered a cold experience, in the very moments people needed warmth and support.


FAQs

1. What was the biggest issue with the new eSafety platform?
It was overly rigid and confusing, causing victims to abandon their reports or submit them under the wrong category.

2. Was it accessible to everyone?
No. It failed accessibility standards for screen readers, keyboard navigation, and visual clarity.

3. Could users track their reports?
No. After submission, there was no case tracking or transparent follow-up process.

4. How did users respond?
With frustration and concern. Many turned to third-party help services instead.

5. Has it been fixed since launch?
Some minor updates were applied, but most structural UX issues remain unaddressed.