This article was written by Kayelene Kerr from eSafeKids.
Overview of Online Harms
The internet and technology have transformed the ways we learn, work, create, play, connect and are entertained. It’s given our children access to the world, but it has also given the world access to our children. Children gain immense benefits from being online, but there are also risks. The internet and digital environments, including emerging environments, were not and are not designed for children or with children’s safety in mind, yet they’re an integral part of their lives.
Globally, the increasing number of children online has been matched by upward trends in online grooming, online child sexual abuse and exploitation, sextortion, youth-produced sexual content, image-based abuse, cyberbullying, exposure to pornography and other illegal, hurtful, harmful and age-inappropriate content, to name but a few. Much of what children are exposed to and are navigating is too much, too soon. Digital harm is occurring on apps, platforms and online services at unprecedented levels.
Safety has been, and in many cases continues to be, an afterthought for technology companies. Sadly, and to the detriment of the health, wellbeing and safety of children, technology companies clearly prioritise profits over people, profits over harm prevention, profits over child safety. Technology companies have demonstrated time and time again that they will not adhere to their civic responsibility to ensure their networks, platforms and services are not used in ways that cause or contribute to violating children’s rights to be protected from harm.
Technology companies’ priority is revenue-generating activity, not children’s safety. These services have been developed in such a way that they create a supply chain of commercial activity, data processing, advertising and marketing, persuasive technology and design features that anticipate and often guide a child towards more extreme and harmful content. While I acknowledge the above-mentioned features may not have been intended to cause harm to children, experience has shown us they have ultimately facilitated and perpetuated it. For years technology companies have engaged in wilful blindness, prioritising commercial gain ahead of children’s safety.
I support the ongoing ‘Safety by Design’ work being led by the Australian eSafety Commissioner; however, I believe it’s unrealistic to expect technology companies to voluntarily comply or engage in co-regulation. Their demonstrated history shows that legislation and independent regulation are required.
Children and young people’s exposure to harmful content online is not marginal; it’s mainstream. An Australian study of 14–17 year olds found 62% reported exposure to harmful content online. Harmful content includes:
Self-harm
Suicide – Ways to take one’s own life
Unhealthy eating – Ways to be ‘very thin’
Hate speech
Gory or violent material
Drug taking
Violent sexual images or videos
It’s important to note the internet is not segregated: what young people see is also what children see. In my experience the above-mentioned harms are regularly managed by primary schools.
For too long parents, carers, educators and other professionals have carried the responsibility of protecting children from online harms and for managing and mitigating the serious real-world consequences impacting children and young people’s health, wellbeing and personal safety.
My experience leads me to four clear conclusions:
It is manifestly unfair and unreasonable for children to be responsible for avoiding illegal and harmful content.
It is manifestly unfair and unreasonable for schools, parents/caregivers, educators, other professionals and community-based organisations to address these issues alone.
The Australian government must intervene to create a safer internet and online experiences for all Australians, particularly children and young people.
A public health approach is required.
For this submission, I will not discuss the industry codes and standards designed to protect Australians from illegal and restricted online content or the Online Safety Act as these are the eSafety Commissioner’s remit.
I acknowledge the outstanding work the eSafety Commissioner is doing, however, I believe the current legislation and regulatory approach needs strengthening to adequately address the technology industry’s response to illegal and restricted material.
I accept this is an incredibly complex landscape, but I do not accept technology companies operating with the relative impunity afforded by Section 230 of the United States Communications Decency Act 1996. Changes to this legislation would likely have positive and negative ripple effects that I will not discuss in this submission but that do require consideration.
For the purpose of this submission, I ask that consideration be given to technology companies failing to adequately moderate their income-generating algorithms and recommender systems that promote illegal and harmful online content.
TOR (a) The use of age verification to protect Australian children from social media.
In addressing TOR (a) I reflect on the conversations I've had with parents over the last 10 years, and something stands out. I've had thousands of parents tell me they wish they'd delayed their child’s access to social media. Never once has a parent told me they wish they'd given access sooner. For many, once the genie is out of the bottle it’s too late; it’s a Pandora’s box they wish they’d not opened so early. Given this, and the fact that I work at the forefront of online harms, there could be an assumption that I would automatically support using age verification to protect children from social media, but it’s not that simple.
To be clear, in principle I support anything that will reduce harm and improve the health, wellbeing and safety of children. However, I caution against fear-mongering, scare tactics, knee-jerk reactions and quick fixes. We need to teach, model and support social, emotional and relational skill development.
I speak to thousands of primary students every year and the majority use at least two social media platforms, mostly with parental consent. Existing measures are not effective at deterring or preventing parents from facilitating children under the age of 13 using these platforms. I believe ‘age verification’ without the necessary cultural change is unlikely to protect children in the way community members anticipate.
If parents continue supporting children in using platforms that are not safe for them, the unintended consequence may be that the platform believes the child is 16 and serves them recommender-driven content intended for users aged 16+. This may result in a situation that is worse than what is currently experienced.
Age verification in and of itself does not make children safe online. Nor does it address the risks to children that arise from the design of social media and other online services. The real conversation and consideration should centre on TOR (d); I’ll elaborate on this shortly.
Harmful content is not confined to social media platforms; it’s readily available via a browser search and is frequently shared with children via messaging apps that exist outside of social media. Using age verification for social media won’t remove harmful content or keep children safe from unsafe content, contact and conduct. Putting the onus onto children and parents, while absolving technology companies, is manifestly unfair and unreasonable.
Many of the mainstream social media platforms named in public discourse are amongst the safest. Albeit inadequate, they do offer family safety centres, reporting options and a degree of moderation. Many of the other platforms used by children and young people do not offer any of these features.
Whilst there is an array of application-level, device-level and network-level parental controls, increasingly younger children have learnt how to circumvent them.
Tools for managing and guiding a child's digital experience are not failsafe, nor a one-size-fits-all solution. These controls can help mitigate certain risks and provide a level of supervision, but they are most effective when used in conjunction with parental supervision, education, conversation and participation. A balanced approach that includes both technological safeguards and ongoing communication with children works best.
Please refer to Qoria’s submission on what could be done to improve on-device protections.
I acknowledge the benefits of online services and social media. They provide a platform for learning, self-expression, creativity, socialising, autonomy, agency and connection with peers. They can foster a sense of community and belonging, supporting positive interactions. Educating young users about safe, respectful, responsible, resilient, positive and moderate use is key to maximising these benefits.
An arbitrary restrictions approach may inadvertently and disproportionately prevent marginalised young people from accessing information, assistance and support.
Rather than fixating on age verification, our focus should be on mandating designs that are age-appropriate and prioritise children's health, wellbeing and safety over commercial interests. Robust legislation, regulation and effective enforcement measures that protect children from known and foreseeable emerging risks are needed. This will be difficult given the relentless drive for market dominance, profits and anticompetitive behaviour by technology companies, but it is necessary, because the alternative is to leave the health, wellbeing and safety of children in the hands of technology companies, and that will not end well for anyone.
TOR (d) The algorithms, recommender systems and corporate decision making of digital platforms in influencing what Australians see, and the impacts of this on mental health.
The ethics of social media algorithms and recommender systems is both complex and controversial. Given the sheer scale and volume of material shared online, including illegal and harmful content, algorithms and recommender systems can be helpful and improve users’ experience by showing content that is relevant and interesting to them.
Most online services use human moderation in combination with algorithmic moderation. While there is certainly room for improvement, reducing the harmful content that a human moderator would otherwise review is worthwhile. Algorithmic moderation allows services to identify, filter, flag and curate online information at speed and at scale.
Naturally, errors occur and these can lead to physical and psychological harms. For the purpose of this submission, I’ll focus on how algorithms and recommender systems can be used in ways that are problematic and harmful.
The power of algorithms and recommender systems to curate feeds and influence the content a user sees and consumes ought to be approached with caution; transparency and independent oversight are imperative. Left to their own devices, technology companies do not effectively self-regulate.
Algorithms can have a significant impact on what content a user sees and interacts with. This in turn has the power to shape attitudes, expectations, behaviours, beliefs, perceptions and practices.
Technology company revenue is generated from data collection and user engagement. This commercial priority can profoundly shape the design of online products and services, resulting in sensationalised content, the spread of fake news, misinformation, disinformation, malinformation, and harmful and illegal content.
Without realising it, users can find themselves in a filter bubble. A filter bubble refers to the way information is filtered by the algorithm before it reaches the user. Users are no longer confronted with information that could broaden their interests or challenge their beliefs or opinions. Rather, users are shown content that confirms existing beliefs, while information that might challenge them is hidden. This can lead to polarisation, the formation of echo chambers, confirmation bias and ultimately radicalisation. It can be a breeding ground for online harm, resulting in normalisation and desensitisation.
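To make this mechanism concrete, the toy sketch below shows how a naive engagement-driven recommender can narrow a feed. It is illustrative only, assuming a simplified catalogue and ranking rule of my own; it is not any platform’s actual algorithm.

```python
# Illustrative toy example only: this is not any platform's real algorithm.
# It ranks items purely by how similar they are to what the user has already
# engaged with, which is enough to show how a feed narrows into a "bubble".

from collections import Counter

CATALOGUE = [
    {"id": 1, "topic": "sport"},
    {"id": 2, "topic": "cooking"},
    {"id": 3, "topic": "extreme dieting"},   # stands in for harmful content
    {"id": 4, "topic": "music"},
    {"id": 5, "topic": "extreme dieting"},
    {"id": 6, "topic": "news"},
]

def recommend(history, items, k=3):
    """Rank items by how often their topic appears in the user's engagement history."""
    topic_counts = Counter(item["topic"] for item in history)
    return sorted(items, key=lambda item: topic_counts[item["topic"]], reverse=True)[:k]

# One engagement with a harmful topic...
history = [{"id": 3, "topic": "extreme dieting"}]

# ...and an objective that considers only engagement keeps serving more of it.
for round_number in range(3):
    feed = recommend(history, CATALOGUE)
    print(f"Round {round_number + 1}:", [item["topic"] for item in feed])
    history.extend(feed)  # the user "engages" with whatever is shown
```

Because the ranking objective considers only past engagement, a single interaction with harmful content is enough for that content to dominate every subsequent round of recommendations; nothing in the objective weighs the user’s wellbeing.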
Children and young people not only have access to illegal, harmful and age-inappropriate content; the intentional design means that if they view this content, whether intentionally or accidentally, it will be amplified and they will see more of it, to the point that it saturates the places where they spend time online.
The online harms I typically see promoted are: online hate and bullying, doxing, scams, violence, self-harm, suicidal ideation, disordered eating, pornography, sexualised children, and misogynistic, homophobic, racist and sexist content.
It is not sufficient that users are aware of how algorithms, filter bubbles and echo chambers work and why certain content is displayed for them. Online service providers should take reasonable steps to change their algorithms and recommender systems and limit how much harmful content users, particularly children, are exposed to. Failure to do so results in both individual and collective societal harm that can be far-reaching and costly.
For example, technology companies are promoting and profiting from disordered eating content. A report by Reset Australia found Instagram is promoting eating disorder content to underage users. One quarter of Instagram’s pro-eating disorder bubble in Australia is under 18 years of age, including Australian children as young as 10. This is not unique to Instagram; a poll of Australian 16–17 year olds revealed 24% of young people saw content that promoted extreme weight loss and unhealthy diets multiple times a day across a range of different social media platforms.
The ability to ‘reset’ the algorithm would also be helpful, particularly for those receiving therapeutic support.
Additionally, some of the impacts on children’s and young people’s health and wellbeing are a by-product of technology use; this is often referred to as the displacement effect. Online services seek to maximise user engagement and time spent on apps, platforms and services. The more time children and young people spend online, the more revenue technology companies make. A report by the Australian Institute of Family Studies found screen time may have a negative effect on weight, motor and cognitive development, social and psychological wellbeing, anxiety, hyperactivity and attention.
There are additional concerns in relation to self-esteem, social comparison, myopia and musculoskeletal disorders. Australian Bureau of Statistics data clearly shows young people’s mental health has declined since about 2012. While it is easy to blame technology, in my opinion, based on working with tens of thousands of children and young people over the last 10 years, one of the biggest issues is lack of sleep. Sleep is vital for health and wellbeing and is a protective factor that supports mental health.
Techniques such as push notifications, variable rewards, infinite scroll and autoplay can result in unhealthy habits. Some research suggests recommender systems contribute to excessive usage. I have seen this with children and young people broadly, and also with children and young people with heightened vulnerability, specifically autistic children, children with ADHD and children who are experiencing social isolation.
Without regulation, technology companies will continue to prioritise user engagement at the expense of children and young people’s health, wellbeing and safety. There should be greater transparency about algorithms and content distribution practices relating to children and young people.
TOR (e) Other issues in relation to harmful or illegal content disseminated over social media, including scams, age-restricted content, child sexual abuse and violent extremist material.
For this submission, I will not discuss the Online Content Scheme, Basic Online Safety Expectations and industry codes and standards as these are the eSafety Commissioner’s remit. I would like to suggest that these powers may need to be strengthened to make them more effective, and there are compelling reasons to consider this.
The United Nations Children’s Fund (UNICEF) reports that the online sexual abuse and exploitation of children is one of the fastest growing and increasingly complex threats to children’s safety in the digital world.
Digital environments have provided new ways for children to be sexually abused and exploited, including by live video streaming, production and distribution of child sexual abuse material, sextortion and Generative AI.
In 2023, WeProtect Global Alliance released its Global Threat Assessment which revealed an alarming escalation in online child sexual abuse and exploitation. Since 2019 there has been an 87% increase in reported online child sexual abuse material.
The internet poses a particular challenge, as those seeking to victimise children take advantage of the relative anonymity online interaction provides. As the internet and technology continues to advance, the opportunities for child sex offenders and other financially motivated criminals to sexually abuse and exploit children will continue to increase.
Stemming the tide of online child sexual abuse and exploitation will require collaboration between key global stakeholders, including online service providers, technology companies, government and non-government organisations.
The reluctance and in some cases refusal of technology companies to prioritise children’s safety is well documented in relation to global efforts to protect children from online child sexual abuse and exploitation and to disrupt the circulation of child sexual abuse material.
In 2023, the National Center for Missing & Exploited Children (NCMEC) received 36,210,368 reports related to the circulation of child sexual abuse material. These comprised over 105 million files from the public and electronic service providers. The reports predominantly related to child sexual abuse material, but there was also a rise in reports of Sextortion and the use of Generative AI.
From these reports, NCMEC identified 63,892 that were urgent or involved a child in imminent danger. In the last three years the number of urgent, time-sensitive reports has increased by 140%. To be clear, these are real children, in real danger, including Australian children in imminent danger in Australia.
Apple Inc made just 267 notifications. In comparison, the following numbers of notifications were made:
Google 1,470,958
Facebook 17,838,422 – Meta owned
Instagram Inc 11,430,007 – Meta owned
WhatsApp Inc 1,389,618 – Meta owned
Meta-owned companies account for a significant number of reports; I’ll return to this point shortly. The reason Apple Inc’s reporting was so low in comparison to other online services is not because this material isn’t shared on its services, but because Apple Inc chooses not to use tools (hash matching) to detect known child sexual abuse and exploitation material on iMessage and iCloud. Additionally, Apple Inc does not have a reporting option for users of iMessage, iCloud or FaceTime.
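For readers unfamiliar with hash matching, the minimal sketch below illustrates the general idea: a file’s digital fingerprint is compared against a list of fingerprints of known material. It is a simplified illustration only; the hash value and set name are hypothetical placeholders, and industry tools such as PhotoDNA use perceptual hashing, which also matches copies that have been resized or re-encoded.

```python
# Simplified illustration of exact hash matching against a known-hash list.
# The entry below is a hypothetical placeholder, not a real clearing-house value.

import hashlib

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known(path: str) -> bool:
    """Return True if the file's fingerprint appears in the known-hash set."""
    return sha256_of(path) in KNOWN_HASHES
```

The point of the sketch is simply that detection of known material is a solved, low-cost technique; choosing not to deploy it is a policy decision, not a technical limitation.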
Despite the notifications made by Google, there is inconsistency. For example, whilst YouTube takes steps to detect child sexual abuse, it does not use these tools on Chat, Gmail, Meet and Messages. Additionally, Google is not blocking links to known child sexual abuse and exploitation material.
For more than 10 years Meta has detected, removed and reported photos and videos of children being sexually abused. As you can see from the above data, the sharing of child sexual abuse material is prolific on these platforms. However, when content is detected on a Facebook user’s account, it does not mean the Meta-owned services Instagram and/or WhatsApp will be notified. Therefore, an account may be removed from one but can continue operating on the other. This points to gaps in safety measures.
Devastatingly, Meta is implementing end-to-end encryption on more of its products, meaning these platforms will no longer detect the sharing of child sexual abuse material. Meta cannot act or report on what it cannot see. Based on what it has already reported, this means over 30 million reports (rising annually) will no longer be received by global law enforcement, including the Australian Federal Police and the Australian Centre to Counter Child Exploitation.
Children will continue to be abused, distribution of photos and videos of abuse will continue and child sexual abusers and others who trade child sexual abuse material will be protected. Innocent children will not! In a civilised society how can we allow this to occur? How can commercial interests be prioritised over the safety and best interests of children?
I first studied online child sexual abuse and exploitation 27 years ago as a Law, Psychology and Criminology student. I then spent 21 years as a Detective and have worked at the coal face of online child sexual abuse and exploitation investigations. To be clear, we’re talking about crimes against children. The sexual abuse material I’ve watched includes children being raped, children drugged and raped by multiple people, children physically abused, children tortured, and children murdered. The child victims are getting younger and younger. As someone who has looked into the eyes of children being sexually abused by adults, I ask the decision makers to look into the eyes of those children and tell them that their sexual abuse matters less than the profits of technology companies.
While the eSafety Commissioner has used its reporting and enforcement powers to address online child sexual abuse and exploitation, these enforcement powers have yet to be truly tested. eSafety v X Corp and X Corp v eSafety, currently before the courts, will set the precedents.
What has been highlighted is that online services are not proactively addressing identified child safety issues. The extent to which these online services fail to protect the rights of children would not have been known without the relentless work of the eSafety Commissioner.
TOR (f) Any related matters
Online Grooming
Reporting data from the National Society for the Prevention of Cruelty to Children (NSPCC) shows online grooming crimes have risen by 80% in the past four years.
For simplicity, online grooming is when a person makes online contact with a child or young person using digital technologies with the intention of establishing a connection or relationship to enable their sexual abuse and exploitation.
In offline environments, grooming is commonly drawn out and gradual. It may take place over days, weeks, months or even years. However, online grooming sees several stages occur simultaneously, speeding up the process. Concerningly, it’s been revealed that conversations between children and offenders on social gaming platforms can escalate into high-risk grooming within 19 seconds, with an average grooming time of 45 minutes and the longest being 28 days. Social gaming environments that facilitate adult-child connection, communication and the exchanging of virtual gifts significantly increase these risks. Online grooming commonly occurs on social media platforms.
I’m particularly concerned because online grooming is linked to the rise in online child sexual abuse material previously mentioned and the rise in Financial Extortion (Sextortion).
Self-Generated and Youth-Produced Child Sexual Abuse Material
In 2022 alone, the Internet Watch Foundation, based in the United Kingdom, assessed a webpage showing child sexual abuse imagery every two minutes. The age group of children most commonly depicted was 11–13 years, with 7–10 year olds making up a third of all child sexual abuse material observed. For the 7–10 year olds there has been a 360% increase in self-generated sexual imagery from 2020 to 2022.
Concerningly, the Internet Watch Foundation has warned children as young as 3 – 6 years of age are being targeted and becoming victims of this crime.
For simplicity, ‘self-generated’ child sexual abuse material is content the child produces themselves. This distinguishes it from child sexual abuse and exploitation material that is typically adult produced, distributed and possessed by child sex offenders.
Typically, self-generated child sexual abuse material is created using a smartphone or webcam, technologies that are commonly owned and accessed by children. The child takes photos and videos of their private body parts at the direction of an online offender. I identified these emerging issues 10 years ago and have sourced and developed a range of resources to address these trends with primary school aged children.
Young people also self-generate sexual imagery, sometimes at the direction of an unknown person, but most commonly someone they know. The photos and videos are often shared with a similar-aged peer. The harm is typically caused when the content is reshared without consent. It can be reshared to peers, or uploaded to platforms including pornography sites, social media and a wide range of other online platforms and services. The harm is often amplified when technology companies fail to remove the content.
Sextortion
In recent years there has been a significant rise in online Financial Extortion, referred to as Sextortion.
In 2021, the National Center for Missing & Exploited Children received 139 reports of Sextortion, increasing to 10,731 reports in 2022 and 26,781 in 2023. In 2023, the Australian Centre to Counter Child Exploitation received over 300 reports per month, and it estimates this accounts for only 1 in 10 cases. This is continuing to rise at an alarming rate, with devastating consequences.
While the techniques used to entrap children are similar, the demands from financially motivated offenders are financial only, whereas sexually motivated offenders will coerce and threaten a child into producing increasingly extreme photos and videos, and this can continue for years.
Online Grooming, Self-Generated and Youth-Produced Child Sexual Abuse Material and Sextortion are examples of online harm. The activity and material that is being produced is unlawful and it’s occurring with the knowledge of technology companies.
Conclusion
Greater transparency and accountability is required, especially when products and services are likely to be used by children. Without the genuine commitment and measurable action of technology companies this is a constant uphill battle, and the price being paid by Australian children, families and communities cannot be overstated.
These are complex global issues that require global responses. I will continue to think globally and act locally, concentrating on the difference I can make in the life of one child. For 27 years I have done all that I can do; all I can now ask is that those who can do more don’t turn away from these challenges and the opportunities to improve the current threat landscape. This is for children, but it’s also for the young people and adults they’ll become.
Age verification and assurance measures may be somewhat effective at managing some online harms, but they will be ineffective in addressing others. For this reason, I implore the Inquiry to engage with Tim Levy from Qoria. His industry knowledge provides additional regulatory options that to date may not have been considered or explored.
Parents, carers, educators and other professionals play a key role in supporting children to develop the social, emotional, relational and technical skills needed to have safe, positive, respectful and secure online experiences. There is also a role for technology companies. For too long technology companies have not carried the responsibility of protecting children from online harms and for managing and mitigating the serious real-world consequences that are impacting the health, wellbeing and personal safety of children and young people. The current situation is untenable and can’t continue.
eSafeKids was cited 11 times in the Joint Select Committee on Social Media and Australian Society’s final report, Social media: the good, the bad, and the ugly.
To learn more about eSafeKids workshops and training visit our services page.
To view our wide range of child friendly resources visit our online shop.
Join the free eSafeKids online Members' Community. It has been created to support and inspire you in your home, school, organisation and/or community setting.
About The Author
Kayelene Kerr is recognised as one of Western Australia’s most experienced specialist providers of Protective Behaviours, Body Safety, Cyber Safety, Digital Wellness and Pornography education workshops. Kayelene is passionate about the prevention of child abuse and sexual exploitation, drawing on over 27 years’ experience of study and law enforcement, investigating sexual crimes, including technology-facilitated crimes. Kayelene delivers engaging and sought-after prevention education workshops to educate, equip and empower children and young people, and to help support parents, carers, educators and other professionals. Kayelene believes protecting children from harm is a shared responsibility and everyone can play a role in the care, safety and protection of children. Kayelene aims to inspire the trusted adults in children’s lives to tackle sometimes challenging topics.
About eSafeKids
eSafeKids strives to reduce and prevent harm through proactive prevention education, supporting and inspiring parents, carers, educators and other professionals to talk with children, young people and vulnerable adults about protective behaviours, body safety, cyber safety, digital wellness and pornography. eSafeKids is based in Perth, Western Australia.
eSafeKids provides books and resources to teach children about social and emotional intelligence, resilience, empathy, gender equality, consent, body safety, protective behaviours, cyber safety, digital wellness, media literacy, puberty and pornography.
eSafeKids books can support educators teaching protective behaviours and child abuse prevention education that aligns with the Western Australian Curriculum, Australian Curriculum, Early Years Learning Framework (EYLF) and National Quality Framework: National Quality Standards (NQS).