XRP and Mass Media Censorship


The XRP community is no stranger to big tech censorship. From the shadow banning of Twitter accounts back in 2018 to the outright suspension of prominent XRP community members like @XRPTrump in early 2019, social media platforms have long enforced their peculiar and contradictory rules against XRP supporters. More recently, Twitter banned the legitimate XRPTipbot account for reasons of “impersonation.”

On April 20th, David Schwartz’s YouTube account was suspended for similar reasons. The suspension followed a lawsuit filed by Ripple over YouTube’s apparent inability to police the multitude of financial scams run by nefarious individuals targeting the XRP community. In many cases, the scammers impersonated prominent members of Ripple, like Brad Garlinghouse. In one such case, YouTube admins even verified one of the scammer’s accounts as authentic.

The most peculiar aspect of the social media bans is the inconsistent and seemingly targeted nature of rule enforcement. A multitude of impersonation and scam accounts are ignored while legitimate users are suspended and their accounts wiped out.

Prominent members of the community can often get their accounts reinstated, but for users with less pull, the suspensions remain in place to this day.

It is not clear why these accounts drew the ire of YouTube and Twitter admins. It could simply be that opponents of the protocol mounted a concerted effort to report XRP-related social media accounts, and novice moderators suspended and banned those accounts without scrutiny. These enforcement issues are made more peculiar by the fact that YouTube appears to have the technical capability to moderate the platform, yet enforcement, particularly in the political sphere, seems directed at individuals, candidates, and institutions that run afoul of the organization’s political leanings.

Censorship accusations were leveled against YouTube by the conservative political commentator and comedian Steven Crowder, who claimed that searches for Democratic presidential candidate Tulsi Gabbard were being suppressed on the platform in the United States.

According to the New York Times, a lawsuit filed by Representative Tulsi Gabbard against YouTube’s parent company, Google, alleges that her campaign’s advertising account was suspended for a key six-hour period between June 27 and 28 and that Google’s spam-filtering algorithm was routing her campaign emails to spam folders at a much higher rate than those of other candidates. These brief bursts of censorship coincided with a moment when Representative Gabbard’s campaign could have gained a lot of steam had it been able to advertise without restriction on the media giant’s platforms. Though the timing may have been mere coincidence, it all seems rather convenient.

In a similar vein, the mere mention of certain terms or names in a YouTube video can result in its demonetization or an outright ban.

As such, it seems the only thing that had been preventing YouTube from filtering or blocking these XRP scam videos was a lack of will to do so. One would expect the media giant to police criminal activity at least as aggressively as it polices conservative media hosts, undesirable presidential candidates, and disfavored channels. Why, then, did it take a lawsuit from Ripple to get YouTube to pay attention?

I am very hesitant to suggest that any of this was done deliberately, as there is no evidence to that effect, but it is not unprecedented for tech giants to weaponize their platforms against potential competitors. Back in 2015, the European Union accused Google of manipulating search results to favor its own products over those of competitors. A Wired article notes that the EU has hit Google with three separate billion-dollar fines for anticompetitive behavior.

Google was fined by the EU in 2017 for “…promoting its own comparison shopping service in its search results, and demoting those of competitors.” Is it so inconceivable that the tech giant could be dragging its feet in moderating scam videos that make a potential competitor look bad? Particularly a competitor heavily invested in ad-free monetization for third-party sites and in a video platform monetized through Coil rather than advertising and tracking revenue?

There is another obvious explanation that bears mentioning. These sprawling social media platforms often run into impersonation and cross-jurisdictional legal issues that make them difficult to police and moderate. The sheer size of Twitter’s or YouTube’s user base probably means that admins and staff are swamped as they attempt to keep up. It could simply be that, after a mass of reports, the admins gave the Ripple-related accounts a cursory glance and banned them without any careful examination. But this is difficult to accept with certainty when a multitude of scam, impersonation, and rule-breaking accounts operate unhindered and legitimate users are banned for running afoul of YouTube’s and Twitter’s selectively enforced rules. One would also expect actual criminal activity to be the sort of thing YouTube administrators would be most interested in curtailing. Instead, these scams persisted unabated until Ripple filed suit. Xrplorer, the XRP-focused data aggregator, indicates that in the 2019-2020 period scammers have thus far defrauded users of 8.5 million XRP, with some of the scams using the YouTube and Twitter platforms to trick users.

Nevertheless, both YouTube and Twitter host a staggering amount of daily user-generated content from contributors all over the world, which makes moderating to the satisfaction of every party difficult. How upset one becomes over the targeted bans on Twitter, or over YouTube’s foot-dragging on scam videos, depends, I suppose, on how much one trusts these mass media companies. The moderation is hardly transparent, and every group affected by these mysterious shadow bans, demonetizations, or de-platformings has every right to be annoyed at being swept up in bans while others guilty of breaking the same rules are left alone.

These platforms are privately run and, as such, do not run afoul of the First Amendment in the United States. But it is more difficult to argue that they do not violate reasonable standards of free speech. These platforms have tremendous power to shape public opinion by deciding which user-generated content gets seen, which searches return results, which companies and products appear at the top of those results, and which political candidates and movements have a voice. Giving a single company, or a small group of companies, the ability to shape public opinion to this degree cannot be a good idea.

Despite the difficulties of moderating content on such sprawling international platforms, the bare minimum we should expect from content moderation policies is that they effectively deal with criminal activity. Allowing users to impersonate public figures to extract money from unsuspecting victims is unacceptable, especially given the tenacity with which YouTube and Twitter target those who hold undesirable political leanings.

Header photo by Fred Kearney.
