Facebook, YouTube and Twitter, in collaboration with marketers and agencies through the Global Alliance for Responsible Media (GARM),
have agreed to adopt a common set of definitions for hate speech and other harmful content, and to collaborate on monitoring industry efforts to improve in this critical area.
The changes follow 15 months of intensive talks within GARM between major advertisers, agencies and key global platforms, with the first changes to be introduced this month. GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies, including ANA, ISBA and the 4A’s.
Four key areas for action have been identified, designed to boost consumer and advertiser safety, with agreed timelines for each platform to implement changes in each area.
The key areas of agreement are:
1. Adoption of GARM common definitions for harmful content
2. Development of GARM reporting standards on harmful content
3. Commitment to have independent oversight on brand safety operations, integrations and reporting
4. Commitment to develop and deploy tools to better manage advertising adjacency
“The issue of harmful content online has become one of the challenges of our generation. As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements. A safer social media environment will provide huge benefits not just for advertisers and society but also to the platforms themselves,” said Stephan Loerke, WFA CEO.
WFA believes that the standards should apply to all media, not just the digital platforms, given the increased polarisation of content regardless of channel. As such, it encourages members to apply the same adjacency criteria to all their media spend decisions.
Phil Smith, director general, ISBA, said: "The safety of social media platforms for users has become the primary concern of all responsible advertisers. ISBA strongly backs the goals of the Global Alliance, having long argued that advertisers have the right to demand clarity in community standards and reassurance as to the effectiveness of their enforcement. This announcement is a significant step towards these goals, raising the bar for the industry through the agreement to consistency in standards and independently verified reporting."
Common definitions will create a common baseline on harmful content
Today, advertising definitions of harmful content vary by platform, making it hard for brand owners to make informed decisions about where their ads are placed and hindering transparency and accountability industry-wide.
GARM has been working on common definitions for harmful content since November, and these have been developed to add depth and breadth on specific types of harm, such as hate speech, acts of aggression and bullying.
All platforms will now incorporate the common definitions into their advertising content standards and enforce them consistently.
Harmonised reporting will drive better behaviours
Today, each platform has its own methodologies to measure the occurrence of harmful content. There is a need to harmonise those methodologies and to focus on metrics that are truly meaningful from a brand and a societal perspective, namely how the presence of harmful content is measured and quantified on each platform.
Having a harmonised reporting framework is a critical step towards ensuring that policies around harmful content are enforced effectively. All parties have now agreed to pursue a set of harmonised metrics covering platform safety, advertiser safety and platform effectiveness in addressing harmful content.
Between September and November, work will continue to develop a set of harmonised metrics and reporting formats for approval and adoption in 2021.
Independent oversight will drive better implementation and build trust
With the stakes so high, brands, agencies, and platforms need an independent view on how individual participants are categorising, eliminating, and reporting harmful content. A third-party verification mechanism is critical to driving trust among all stakeholders. The goal is to have all major platforms audited for brand safety or have a plan in place for audits by year end.
Advertising adjacency solutions are necessary and will be developed
Advertisers need visibility and control to ensure their advertising does not appear adjacent to harmful or unsuitable content, and the ability to take corrective action quickly when it does. GARM is working to define adjacency with each platform, and then to develop standards that allow for a safe experience for consumers and brands. Platforms that have not yet implemented an adjacency solution will have a roadmap by year-end. Platforms will provide a solution through their own systems, via third-party providers or a combination thereof.
Raja Rajamannar, CMO at Mastercard and WFA president, said: “We are delighted that GARM has made such significant progress in such a short period of time. I know these discussions have not been easy, but these solutions, when implemented, will offer more choice and control for advertisers and their agencies by supporting content that aligns with their values.”
Luis Di Como, executive vice president, global media, Unilever, commented: “This is a significant milestone in the journey to rebuild trust online. Unilever has long championed a responsible and safe online environment through Unilever’s Responsibility Framework and as founding members of GARM, we are encouraged by the acceleration and focus to come together as an industry and agree on these four key areas of action. The issues within the online ecosystem are complicated, and whilst change doesn’t happen overnight, today marks an important step in the right direction.”
Jacqui Stephenson, global responsible marketing officer, Mars, added: “This is a meaningful milestone in our work with GARM and part of a longer journey that started over 18 months ago. Thanks to the uncommon collaboration of GARM’s diverse membership, we now have a time-bound roadmap for the development of foundational standards, definitions and reporting practices across social media platforms which will help make social media an experience that is safer for everyone, consumers and brands alike. This is not a declaration of victory as there is much work to be done and we rely on all of our platform partners to follow through on their commitments with the pace and urgency these issues demand. Nevertheless, this is an important step in making social media a safer place for society and it’s important to recognise the progress and build further momentum as a result.”
Marla Kaplowitz, president and CEO, 4A's, concluded: "The significant progress of GARM reflects the consumer imperative to ensure a safe media environment. By focusing on consistency and accountability, the team simplified the assessment of platforms ensuring advertisers and agencies can better assess the impact of harmful content. This work reflects the impact that industry leaders have when aligning on a common approach to ensure a better environment and experience for the most important constituent: the consumer."