During a recent Senate Judiciary Committee hearing titled “Big Tech and the Online Child Sexual Exploitation Crisis”, senators criticized the chief executives of big tech companies for not doing enough to protect young people online. At one point, Meta CEO Mark Zuckerberg stood up and apologized to parents and families of children who had tragically died from harms linked to social media. “I’m sorry for everything you have all been through,” Zuckerberg said. “No one should have to go through the things that your families have suffered, and this is why we invested so much and are going to continue doing industry-leading efforts to make sure that no one has to go through the types of things that your families had to suffer.”
In recent years, tech companies have been under intense scrutiny for their policies regarding child online safety. For a look behind the curtain, k-ID spoke to Amber Hawkes, an online safety consultant with more than 15 years’ experience building strategy and programmes as well as conducting policy and legal advocacy across different sectors. Most recently, she was the Asia-Pacific Head of Safety for Meta (previously Facebook), where she led Meta’s engagement with NGOs, experts and policy makers across the region on issues including child safety and online well-being, working to improve the policies, products, tools, and resources that keep the online community safe.
Contrary to much of the narrative around these companies, they are actually setting out to provide fun and engaging products for kids and teens; they are not setting out to harm them at all. The experience of kids and teenagers online is actually really important to their bottom line. If kids are not enjoying it, and they are not feeling safe there, they are not going to come back and keep using that service.

The first key challenge is that regulation around the world is rapidly developing in this space. Companies are increasingly being asked to (and in fact, mandated to) design their products safely for children, to incorporate risk assessments, to ensure that they are enforcing their terms of service and their child online safety policies, and to provide various protections for children. These regulations are developing rapidly and, in some ways, in conflict with one another.

Another tricky thing is that minors are not all equal. We have long talked about the fact that what’s appropriate for an eight-year-old is not the same thing that might be appropriate for a 15-year-old. The level of supervision for each child is different and the level of what they are exposed to is different. It is not homogeneous. This is [one of the key challenges] that companies are trying to deal with at the moment – to allow for the agency and engagement of different age groups while providing the [age-appropriate] environment for them.

Finally, protecting users from harm is actually quite expensive and, depending on how big the service is, requires a trust and safety function. It requires investment. Smaller companies that are just starting up, in particular, are focused on how to get users in and how to make sure the product grows. Risk mitigation and online safety is of course great for business in the long term. At an earlier stage, however, it is just a competing factor that they have to take into account – on the one hand, certain tools, functions and restrictions are going to discourage users from using their products; on the other hand, they need to provide a safe environment. It is a complex thing.
That is a very tough question, but there are many trying to do it. I would say it definitely starts with a multi-stakeholder approach to the way that they are designing their products and services. It is very important that the solutions that are developed and designed take into account how people are being impacted in real life; it means actually designing the features with insights from youth, users, parents and experts themselves, not trying to design them in a vacuum. For example, there is a lot of A/B testing that can be done to test how [a particular feature] flows for users onboarding into a product.
This is definitely my area of passion. Traditionally, we have been in an environment where many of the larger tech companies have come from the US or a handful of other locations. They potentially have more of an imperative to initially engage and consider the constituents in their own jurisdiction. But these companies are expanding into, and making a profit from, markets across the world. Online safety risks can be very context specific. Local considerations can really influence the way users would respond to certain content or certain contacts, and even what kind of tooling might work for them and how it might come across. In my view, companies that expect to make a profit from users across different markets have a longer-term commercial imperative, and a moral imperative, to actually include communities and stakeholders from those markets in their assessments of their safety features and their risk assessments. When [a tech company] is planning to get users from a specific market, it will already know, for commercial purposes, how families or users there actually prefer to engage online – so apply that knowledge to the safety assessments as well. Is there one phone for the whole family? Will that impact how this product gets used online? These should be assessed before rolling out [a particular] product in that market.
There can be many different types of cross-sector collaboration. From a tech company’s perspective, or from the perspective of people designing products, I think you absolutely need this input. One way of doing that is to engage with local academics and experts. Also engage with local nonprofits on the ground, run some focus groups, do some research. All of this takes resources, so it can be quite tricky. But research and insights are already out there – lean into those. It is also imperative to have a connection and relationship with the authorities in the market as well. If [a tech company] has a safety risk or incident on its platform (for example, where a child is being abused or is in danger), it will need a way of reporting that.