

How Trolltrace became a real thing



The disinformation space is mutating before our very eyes and the consequences are getting hairy. Every day it becomes harder to tell the difference between a conventional troll and a weaponised troll (which may or may not be human).

In a case of life imitating parody, fans of the hit TV satire South Park might highlight that the show’s Trolltrace subplot, in Season 20 back in 2016, predicted much of this.

In the show, the character Lennart Bedrager, from Denmark, invents Trolltrace.com to rid the internet of harmful gang-stalking trolls, who have the capacity to inflict severe psychological damage upon their victims, in some cases driving them to suicide. A parallel plot, meanwhile, sees annoying digital “ads” evolve into autonomous human form, with the South Park community unable to differentiate the persuasion ads from real humans.

At first sight, the troll-hunter programme to name and shame truth-polluting trolls and bots seems an instinctively good idea. But as ever with South Park, the take-home message is more subtle. In the end it’s a matter of “be careful what you wish for”, because, as with most weapons, Trolltrace can be deployed to inflict harm on the good and the innocent as well as the guilty.

The show is thus a thought experiment in what might happen if a powerful truth-finding algorithm really were unleashed upon the world. Yes, trolls would be exposed and undermined, which is a good thing, but so would everyone else. And since “he that is without sin among you” should be the one to cast the first stone, the hunt eventually leads to a total breakdown of social cohesion and trust in South Park, as some form of dirt can be discovered about every character.

World War Three is only narrowly avoided when the internet is shut down: an act that wipes everyone’s internet histories forever and forces the town to accept that it’s better if we all just agree to forgive the dodgy online pasts we all have. Humanity restarts with a clean slate, albeit with the assumption it will soon make all the same mistakes again.

Bearing that in mind, the findings from the latest Rand Corporation report on foreign interference in the 2020 election struck us as mighty interesting.

For one thing, the report appears to depict Trevor’s axiom, as coined by South Park, to a tee:

For another, it gives such a great breakdown of how mass trolling has turned into modern information warfare:

Our analysis of early 2020 Twitter discourse about the general election found two kinds of suspicious accounts: trolls (fake personas spreading a variety of hyperpartisan themes) and superconnectors (highly networked accounts that can spread messages effectively and quickly).

We found both of these types of suspicious accounts to be over-represented in specific communities (two politically right-leaning communities and two politically left-leaning ones).

Troll accounts, with their nonstop partisan messaging, are well suited for stoking division; superconnectors, when they become active, are well suited for spreading messages because of the density of connections these accounts have.

An important caveat is that we cannot definitively conclude from any single part of our analysis that there was a co-ordinated foreign interference effort at the time we analysed this particular data set.

Our ML model was based on 2016 Russian tactics and those assumptions might not transfer fully if Russian tactics are dramatically different in 2020. Another possibility is that our model identified efforts that mimic 2016 Russian tactics. We also acknowledge that superconnectors occur naturally on Twitter, albeit in small numbers. We have inferred a co-ordinated effort based on the following intersecting findings:

The description of the Russian modus operandi, in particular, highlights that the strategy is focused on boosting hashtags that undermine or support specific candidates in order to sow ever greater division. The focus is very much on radicalising both sides of the political divide, rather than favouring one over the other:

Politically left-leaning trolls talked about Republicans or political conservatives as fascists or Nazis; politically right-leaning accounts talked about progressives as communists or socialists.

Politically right-leaning trolls spoke frequently about the deep state — an amorphous conspiracy theory that describes relationships among a variety of national security and law enforcement agencies — and plans to confront it, mixed with statements of support and reference to QAnon claims. These politically right-leaning accounts also shared a mix of candidate-specific criticism and mockery of former vice-president Biden, Senator Sanders, former secretary of state Hillary Clinton, and former President Barack Obama.

Politically left-leaning troll accounts shared specific themes around Trump as a Russian-owned traitor, criticism of Biden as insufficiently progressive (these trolls were in the Biden community), and messages sharing “peace,” “love,” and “good vibes.” Although this last theme might seem oddly non-political, that positive-affect language was a hallmark of politically left-leaning trolls in the 2016 election interference (Marcellino, Cox, et al., 2020).
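To make the report’s “superconnector” idea a little more concrete, here is a minimal, purely illustrative sketch of how highly networked accounts might be flagged by connection density. The account names, edge list and threshold below are invented for the example; this is a simplification, not the model Rand actually used:

```python
# Illustrative only: flag "superconnector"-style accounts by how densely
# connected they are in a retweet/mention graph. Account names, edges and
# the threshold are hypothetical; this is not Rand's actual pipeline.
import networkx as nx

# Hypothetical interaction edges: (account, account it retweeted/mentioned)
edges = [
    ("acct_a", "acct_b"), ("acct_a", "acct_c"), ("acct_a", "acct_d"),
    ("acct_a", "acct_e"), ("acct_b", "acct_c"), ("acct_d", "acct_e"),
    ("acct_f", "acct_a"), ("acct_f", "acct_b"), ("acct_f", "acct_c"),
    ("acct_f", "acct_d"), ("acct_f", "acct_e"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Degree centrality: the share of other accounts each account is linked to.
centrality = nx.degree_centrality(G)

# Flag accounts whose connectivity sits well above the average -- a crude
# stand-in for "highly networked accounts that can spread messages
# effectively and quickly".
mean_centrality = sum(centrality.values()) / len(centrality)
threshold = 1.25 * mean_centrality  # arbitrary cut-off, for illustration
superconnectors = sorted(a for a, c in centrality.items() if c >= threshold)

print(superconnectors)  # ['acct_a', 'acct_f'] with the toy data above
```

In practice, of course, a network signal like this would be only one input alongside the behavioural and linguistic features the report describes, such as nonstop hyperpartisan messaging, before any account was labelled suspicious.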

The objective of this both-sides radicalisation strategy, meanwhile, is described as follows:

Russia’s highest aim in these efforts is to elicit strong partisan reactions and create a sense of disunity (although operators might have preferences for particular electoral outcomes). A nation that is in conflict domestically (or at least talks as if it were) is less able to exert influence and counter Russia’s political goals. This is a longstanding Russian strategy, but social media has made it cheaper and easier than ever to conduct such efforts. For these reasons, we infer that there is ongoing election interference over social media for the 2020 election, and (based on how our findings reflect Russian practices and goals) it is possible that the effort we detected is part of a Russian information effort to sow chaos in the United States.

Rand’s extremely neutral recommendations are also interesting:

We recommend publicising the threat broadly using multiple channels (eg, radio, print, TV) to help alert Californians (and the American public) to ongoing — and, most likely, foreign — efforts at manipulation that undermine confidence in democracy. Publicising the effort should feature details, such as the target audiences (eg, supporters of Trump and Biden), and specific tactics (eg, sharing attack memes of first ladies or normalising the words “fascist” and “socialist” to describe other Americans).

Though, of course, there is the fact that Rand calls its effort to expose what’s going on “troll hunting with machine learning”, and explains that the latest programme follows in the footsteps of a “troll hunter” machine-learning model it built for an earlier UK Ministry of Defence study.

Which does make you wonder if there isn’t the potential for all this to backfire in a similar way?

As per South Park, a powerful troll-tracing algorithm certainly feels like a step in the right direction. But it’s worth remembering that as trolls and bots get increasingly sophisticated, these algorithms will need to delve ever deeper into contextual communications or even private correspondence to render their assessments of the disinformation space. That’s OK if we trust the people making such evaluations. Not so much if we don’t.

It is a conundrum we will certainly face as platforms are increasingly asked to determine exactly who is a disinformation or amplifier account (or agent) and who is not.

The question is, are we comfortable with the likes of Facebook and Twitter performing what amounts to counter-espionage and algorithmic interrogation? And what happens if the troll-hunting tools they develop fall into the wrong hands?




