This has not been a good year for YouTube, and Google is making a hash of it as it refuses to face up to reality. The company's failings when it comes to identifying and rectifying issues on YouTube reflect larger cultural attitudes in the tech industry, which relies heavily on algorithms, and on shrugging, when it comes to solving problems that are more human than technical in nature. Google's YouTube mess in 2017 should worry people whether or not they use the service, because Google's hegemonic grip on the internet means these problems are also being replicated elsewhere, to greater or lesser degrees.
Google has the ability to shape what people see and how they see it, to determine whether people can monetise their art, to influence the way that people interact with the internet. YouTube may have bred a sense of complacency as a fun-loving accumulation of videos, but beneath it lies a giant financial engine that is coming into increasing conflict with the interests of users — including both YouTubers and viewers.
It's time for a closer look at four mistakes Google made at YouTube in 2017, and at whether the company can fundamentally rethink the way it approaches the platform.
1. The hell that is YouTube comments
Anyone who's spent any amount of time on YouTube knows that the comments section is an unmitigated cesspool. YouTube commenters are among the most virulently vicious on the internet, even within generally supportive subcultures. Whether a video is about cooking, shelter cats, or politics, commenters are ruthlessly savage, trashing every element of the video in meticulous detail. For female YouTubers, those comments are often peppered with sexualised remarks and nasty comments about their looks.
Yet when it comes to comment moderation, creators have very few options. They can turn comments off altogether, a dangerous move on social media, where comment sections are an important part of the experience for some users. They can block or ban individual commenters, though it's easy enough to make a new YouTube account to get around this, and they can block specific words and phrases, though adroit commenters will do their best to work around such bans. Can't say 'cunt'? Just say 'c*nt.' Creators can't do things like delete individual abusive comments, a hallmark of successful comment moderation elsewhere on the internet; all they can do is flag them and hope someone at YouTube takes them down.
Despite knowing about the problems with internet comments in general and with YouTube's specifically, the company doesn't appear to have consulted experts on these issues, including people with substantial experience moderating online communities. The result is an environment so hostile that some YouTubers leave comments open but refuse to read or engage with them. This is a readily solvable problem, and one that's not algorithmic in nature, since savvy commenters are quick to adopt coded tactics that would avoid triggering even a well-trained, sensitive algorithm, which YouTube's evidently is not.
2. The adpocalypse
In early 2017, Google rolled out changes to clarify how it decides which videos are monetised (shown with ads) on the platform. The policy of quietly demonetising videos that didn't meet advertiser guidelines was an old one, but the company made it more transparent and updated its guidelines. Almost immediately, creators began reporting serious problems with the algorithm, which has since been updated multiple times with no remedy for some creators, despite the much-touted appeals process.
The company argues that advertisers are getting increasingly particular about what they advertise against, especially in an era when consumers are using targeted campaigns to pressure companies into pulling their advertising from hateful content. Advertisers don't want to be associated with, for example, Nazis. So YouTube's standards have toughened up not just on hate speech but also on 'controversial content' like suicide awareness, politics, and sexual education, even material designed to be friendly to all audiences.
Consequently, people with channels that discuss politics, videogames, disability rights, LGBTQ issues, sexuality, and a wide range of other topics are furious that the company is demonetising their content — and they're not talking about content they know violates guidelines. They're discussing videos about trans identity, conversations about disability pride, age-appropriate sex education, and gaming walkthroughs. For those who have been relying on YouTube ad revenue, this is a serious blow, and the company's repeated claims that it's addressing the problem are wearing thin.
3. Restricted mode
When YouTube rolled out a 'kid safe' restricted mode in early 2017, it thought it was mitigating some of the problems with adult content on the platform. Along the way, it also blocked LGBTQ content in a manner so systematic that it almost appeared calculated. Youth interested in that content couldn't access it in restricted mode, which deprived LGBTQ youth of affirming material that provided them with valuable information.
The company insisted it was a mistake and 'corrected' the algorithm, but the episode raised an important question: why was this content flagged in the first place? As with those caught up in the adpocalypse, YouTube can't explain why harmless educational content is being flagged as unsafe, and it appears uninterested in responding to criticism from users who are frustrated with hearing that their very identities are 'unsafe.'
This is not just a question of an algorithm run amok, or of bad flagging systems at work. YouTube's algorithms have systematically shown a deep contempt for people with marginalised identities, underscoring the fact that computers are only as unbiased as their creators. When gender-swapped videos with otherwise identical content are treated differently, that's a sign that someone, somewhere, thinks a phrase like 'I am trans' is harmful while 'I am cis' is not. Perhaps it's unsurprising that Google dropped 'don't be evil' from its mission.
4. Unsafe for children
After considerable embarrassing news coverage, YouTube has finally moved to eliminate hundreds of exploitative videos of children. These videos included children in dangerous situations, kids being humiliated for entertainment, young children in revealing garments, and children subjected to violence. The offending videos were initially only demonetised; after sustained pressure, the company finally started deleting them and the accounts behind them. The much-vaunted algorithm didn't catch these videos: this was the work of actual horrified humans. With hundreds of hours of video uploaded every minute, it's functionally impossible to rely on humans alone as moderators, but the sluggish response on this issue was deeply troubling.
Child exploitation isn't the only troubling content on YouTube when it comes to kids. A strange subgenre of videos splicing together footage from beloved children's cartoons, which initially seemed like an automated attempt at gaming advertising revenue, has morphed into the distribution of disturbing and upsetting cartoons. In both cases, the content targets children and flies under the radar of content filters because it features seemingly harmless cartoons.
YouTube was also criticised for disturbing autocomplete results on the site: users typing phrases like 'how to…' were offered suggestions such as 'how to have s*x with kids.' The company claims this was the result of a trolling campaign, and it moved to adjust the autocomplete suggestions, but the episode is a telling sign that YouTube is losing control of its platform.
What’s next for YouTube?
The company relies heavily on its deeply flawed algorithm, along with a network of 'trusted flaggers', treated as good citizens who act as watchdogs on content. Neither of these systems is working, though: those interested in exploiting the platform quickly find ways to dodge visibility and accountability. When the company does move to address exploitative, illegal, or offensive content, it often does so by restricting YouTubers from marginalised backgrounds, which increases stigma and places people in a difficult position. YouTube is effectively the only widely used video platform of its kind, so people whose content is suppressed can't simply migrate to another option.
Google's struggles here mirror larger issues in the tech industry, which has grown rapidly and largely without regulation. As a private company, Google has considerable latitude when it comes to free speech, which is a double-edged sword. It can, for example, remove offensive content that's not necessarily illegal, thanks to its content guidelines. But it can also suppress speech it doesn't like, relying on the same guidelines. The same holds true for Facebook, another industry giant that's had a terrible year of revelations about questionable business practices and truly terrible content moderation, and for Twitter, which is also struggling with moderation. Disturbingly, none of this appears to be hurting their bottom lines.
These companies have in some ways turned themselves into utilities, but without any of the regulation that utilities face. As the United States delves into a heated net neutrality debate that could change the face of the internet in the US forever, these questions of how tech companies conduct themselves become even more important. Should the FCC strike down net neutrality rules, companies like Google, Facebook, and Twitter could pay for premium access to users, strengthening their stranglehold on the internet ecosystem, while plucky upstarts aiming to rethink the way we create, share, and engage with content wouldn't be able to get a foot in the door on an internet where no one can see them. If that's not worrying to consumers, it should be.
Photo credit: medithIT/Creative Commons