

Reddit's moderation model#
Reddit rose to prominence, in part, due to the lack of gatekeeping on the platform. The site's previous content policy outlined eight basic rules detailing prohibited behavior for users, including harassment, impersonation, illegal content, and tampering with the site. Yet the lack of clarity surrounding what constituted hate speech, and the inconsistent enforcement of such violations, enabled users to abuse the platform to advance hateful ideologies, earning Reddit a reputation as a “cesspool of racism.” Reddit's situation is further complicated by its unusual content-moderation model. Most content on Reddit is hosted in “subreddits”: user-created and moderated discussion forums focused on a particular topic. Subreddits have several levels of privacy settings, and the platform provides the moderators of these subreddits a variety of tools to enforce both Reddit's global policies and optional subreddit policies. Rules governing subreddits often institute norms specific to the subreddit community; for example, the 25-million-member r/aww, a subreddit devoted to posting cute photos of “puppies, bunnies, babies, and so on,” has rules prohibiting “sad” content. While the Reddit model allows for positive self-regulation by user communities, it is also uniquely vulnerable to abuse by malicious unpaid moderators, in contrast to centrally moderated platforms such as YouTube and Facebook. Social media platforms all share the challenge of managing tradeoffs when weighing free expression against user protection.
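To make the enforcement side concrete, here is a minimal sketch of the kind of keyword rule a subreddit like r/aww might automate. Reddit's actual tooling (e.g. AutoModerator) works differently; the Post structure, banned-term list, and helper functions below are illustrative inventions, not Reddit's API.

```python
# Hypothetical sketch of subreddit-level rule enforcement via keyword
# matching. All names and data here are placeholders, not Reddit's API.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    title: str
    body: str

# r/aww-style rule: no "sad" content (terms are stand-ins).
BANNED_TERMS = {"sad", "injured", "passed away"}

def violates_rule(post: Post) -> bool:
    """Flag a post if its title or body contains a banned term."""
    text = f"{post.title} {post.body}".lower()
    return any(term in text for term in BANNED_TERMS)

def moderation_queue(posts: list[Post]) -> list[Post]:
    """Return the posts a human moderator should review or remove."""
    return [p for p in posts if violates_rule(p)]

if __name__ == "__main__":
    queue = moderation_queue([
        Post("alice", "Happy puppy!", "Look at this good boy."),
        Post("bob", "Sad news about my cat", "..."),
    ])
    for p in queue:
        print(f"flagged: {p.title!r} by u/{p.author}")
```

A rule this simple is exactly what volunteer moderators can configure without writing code, which is part of what makes the model scale, and part of what makes it brittle.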

Reddit's 2020 policy update#
On Monday, June 30, 2020, Reddit updated its policy on hate speech, an area of content moderation traditionally considered among the most difficult to regulate on platforms. Previously, Reddit's content policy was vague; according to co-founder and CEO Steve Huffman, the rules around hate speech were “implicit.” The new policy corresponds more closely with other major platforms' moderation policies by prohibiting content that “promote[s] hate based on identity or vulnerability” and by listing vulnerable groups. Reddit began to enforce the new policy immediately, removing 2,000 subreddits, including several notable communities such as r/The_Donald and r/chapotraphouse. This post outlines how platforms grapple with hate speech, one of many issues addressed in a forthcoming book based on the Stanford Internet Observatory's Trust and Safety Engineering course. We present a comparative assessment of platform policies and enforcement practices on hate speech, and discuss how Reddit fits into this framework. The absence of a clear legal standard upon which companies can base their policies, as well as the importance of context in determining whether a post containing known harmful words constitutes hate speech, makes finding technical solutions incredibly challenging. To this end, the Reddit policy update reveals the tradeoffs faced in identifying and stopping abuse of systems that allow for public and private conversation.
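A toy example makes the context problem concrete: a filter that flags any post containing a listed term cannot distinguish an attack from a quotation or from counter-speech. Everything below (the placeholder term, the sample posts, the naive_flag helper) is hypothetical.

```python
# Toy illustration of why keyword matching alone fails on hate speech:
# the same "harmful" term can appear in an attack, a victim's report,
# or counter-speech. Terms and posts are placeholders.
SLUR_LIST = {"<slur>"}  # stand-in for a real lexicon

def naive_flag(text: str) -> bool:
    """Flag any post containing a listed term, ignoring context."""
    return any(term in text.lower() for term in SLUR_LIST)

posts = [
    "You are a <slur>.",                          # genuine attack
    'He called me a "<slur>" and I reported it.', # victim's report
    "Using <slur> is never acceptable here.",     # counter-speech / mod notice
]

for post in posts:
    print(naive_flag(post), post)  # True for all three: two false positives
```

Fixing the false positives requires modeling context (speaker, target, intent), which is precisely where no clear legal or technical standard exists.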

Reddit's reliance on volunteer moderators has also drawn scrutiny. Based on the logs of those moderators who participated in a recent study, the moderation team worked for an estimated 466 hours every day in 2020. Charged at the median rate of $20 per hour for content moderators on the gig-work website Upwork in the US, such labour would cost Reddit $3.4 million a year, or 2.8 per cent of the company's total revenue in 2019. “The paper is extremely interesting,” says Carolina Are at Northumbria University, UK, who says it highlights the precarity of online labour. “It really shows how, still, internet labour of all sorts is really undervalued – not just by platforms, but by society as a whole.” She adds: “The fact such a crucial job like content moderation is either outsourced to commercial companies, or to volunteers who are not paid, shows how platforms are not really interested in investing in making the communities they create better.” A Reddit spokesperson told New Scientist: “We believe that our approach to community governance is the most sustainable and scalable model that exists online today.” They highlighted various initiatives the company offers, including a $1 million community fund. “We are always exploring ways to best support our moderators and communities,” the spokesperson added.
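The $3.4 million figure follows from straightforward arithmetic on the article's own numbers; the implied 2019 revenue in the sketch below is derived from the stated 2.8 per cent, not reported independently.

```python
# Back-of-the-envelope check of the article's figures.
hours_per_day = 466                 # estimated daily moderator hours in 2020
rate_per_hour = 20                  # median $/hour for moderators on Upwork (US)
annual_cost = hours_per_day * rate_per_hour * 365
print(f"${annual_cost:,}")          # -> $3,401,800, i.e. ~$3.4 million

revenue_share = 0.028               # stated share of 2019 revenue
implied_revenue = annual_cost / revenue_share
print(f"${implied_revenue:,.0f}")   # -> ~$121,492,857 implied 2019 revenue
```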
