Researchers Developing An Algorithm That Can Detect Internet Trolls 279

An anonymous reader writes: Researchers at Cornell University claim to be able to identify a forum or comment-thread troll within the first ten posts after the user joins, with more than 80% accuracy, opening the way to methods that automatically ban persistently anti-social posters. The study observed 10,000 new users at cnn.com, breitbart.com and ign.com, and characterizes an FBU (Future Banned User) as someone who enters a new community with below-average literacy or communication skill, and whose already-low standard is likely to drop further shortly before a permanent ban. It also observes that higher rates of community intolerance are likely to foster the anti-social behavior and hasten the ban.
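The summary doesn't describe the researchers' actual model. As a rough, hypothetical sketch of the general approach (scoring a new account's first posts on crude text-quality features), the feature names and thresholds below are invented purely for illustration:

```python
# Hypothetical sketch, NOT the paper's actual method: flag likely Future
# Banned Users (FBUs) by scoring their first posts on crude proxies for
# "below-average literacy". Features and thresholds are invented here.

def post_features(text: str) -> dict:
    words = text.split()
    letters = [c for c in text if c.isalpha()]
    return {
        # Very short words and heavy capitalization loosely track the
        # "below-average communication skill" signal in the summary.
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "caps_ratio": sum(c.isupper() for c in letters) / max(len(letters), 1),
    }

def fbu_score(first_posts: list[str]) -> float:
    """Average a crude 'likely to be banned' score over the first posts."""
    score = 0.0
    for text in first_posts:
        f = post_features(text)
        score += (f["caps_ratio"] > 0.5) + (f["avg_word_len"] < 3.0)
    return score / max(len(first_posts), 1)

posts = ["U R ALL IDIOTS LOL", "THIS SITE SUX"]
print(fbu_score(posts))  # → 1.5 (high score: flag for moderator review)
```

A real system would learn weights from labeled ban histories rather than hand-set thresholds; this only illustrates the shape of the pipeline.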
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Disqus, and the comment section at The Atlantic.

    • What, no love for YouTube?
      • Re: (Score:3, Funny)

        by Anonymous Coward

        Yes, well, they should definitely ban people who can't point a fucking camera, and probably have them arrested

        • Re: (Score:2, Insightful)

          by Anonymous Coward
          For the love of all (if anything) that is still good and holy:

          TURN

          YOUR PHONE

          SIDEWAYS

          I've refrained from formatting this post (any more) obnoxiously vertical to emphasize.
    • by Z00L00K ( 682162 )

      It would really be interesting to see where it would take us, but I worry about false positives in high-profile issues.

  • In other words (Score:5, Insightful)

    by fustakrakich ( 1673220 ) on Monday April 13, 2015 @10:17AM (#49462655) Journal

    Automated censorship. Eh, saves us the trouble, I guess

    • Re:In other words (Score:5, Informative)

      by Kyogreex ( 2700775 ) on Monday April 13, 2015 @10:34AM (#49462813)

      The original paper doesn't seem to be about automatic banning at all; that seems to have been added to the headline and the article linked to here (and therefore the summary). The paper says this: "automatic, early identification of users who are likely to be banned in the future."

      While that identification could be used for automatic banning, I think it would be more likely to be used to flag potential problem users, which could be very useful in determining which reported posts to investigate first rather than dealing with all of the "I don't like this post so I'm reporting it" instances.
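A minimal sketch of that triage idea (an assumed workflow, not anything from the paper): use a per-author risk score to sort the reported-post queue so the likeliest problem posts surface first.

```python
# Hypothetical moderation triage: "author_risk" stands in for an
# FBU-style score attached to each reported post's author.

reports = [
    {"post_id": 101, "reporter": "alice", "author_risk": 0.12},
    {"post_id": 102, "reporter": "bob",   "author_risk": 0.91},
    {"post_id": 103, "reporter": "carol", "author_risk": 0.47},
]

# Highest-risk authors float to the top of the moderators' queue.
queue = sorted(reports, key=lambda r: r["author_risk"], reverse=True)
print([r["post_id"] for r in queue])  # → [102, 103, 101]
```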

    • Yeah, pretty sure they already made a movie [imdb.com] about the societal failure represented in our trust of an automated system to pre-recognize deviant behavior and how those systems break down.

    • TFA basically says that you can detect trolls early on, but the faster you censor them, the more antisocial they become.
    • It would be much better to have a system that HIDES users' content by default than one that deletes it. Then people scrolling all posts (including hidden ones) would be able to report mistakes in the system.

      Funny side note: I mentioned a similar system to Reddit because they have huge problems with mod abuse now. I said, "Hey, just make mod deletions pseudo-deletions, so they're hidden unless you want to see them, so people can check mods and report abuses to admins."

      They didn't even reply back. No politically-cor
      • Re: (Score:3, Insightful)

        It would be much better to have a system that HIDES users' content by default than one that deletes it. Then people scrolling all posts (including hidden ones) would be able to report mistakes in the system.

        From my experience if you delete content or ban a troll, it just encourages them to troll more using a different account, usually from a different IP address.

        The most effective way I found to deal with problem users is to make their bad comments only visible to them. That way it appears to them that they've had their say and no one responded to it. Without feedback to encourage them, trolls either quickly get bored and go elsewhere or sometimes they'll surprise you and produce better quality comments.
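The "visible only to them" approach described above is commonly called a shadowban. A minimal sketch of the visibility check (all names invented for illustration):

```python
# Shadowban sketch: a flagged user's comments render only for that user,
# so the troll believes they posted but gets no audience feedback.

shadowbanned = {"troll42"}  # hypothetical set of shadowbanned usernames

def visible_to(viewer: str, comment: dict) -> bool:
    """A shadowbanned author's comments are visible only to the author."""
    if comment["author"] in shadowbanned:
        return viewer == comment["author"]
    return True

c = {"author": "troll42", "body": "flame bait"}
print(visible_to("troll42", c))  # True: the troll still sees their post
print(visible_to("reader1", c))  # False: everyone else does not
```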

  • by Anonymous Coward on Monday April 13, 2015 @10:19AM (#49462671)

    Trolls are usually above average literacy and trying to skilfully cause a fight. It's easy to identify "illiterate" people, and humans are way too quick to judge someone who cannot spell as having nothing to contribute or (worse) as being malicious, but these are not trolls. This is just another classist meme where a person is judged positively by the overcomplexity of their language and the convolution of their sentences, since this must mean they have been educamated right.

    BTW I went to a £30k/year British boarding school, so I have no axe to grind, nor insecurity about describing things as they are.

    • Of course, I haven't read the article, but I think the summary has applied the word "troll" in a different way than this. I think the researchers are seeking to reduce the racist, homophobic, etc. trash comments frequently posted to YouTube video comments.

      As you note here, a sophisticated troll is not easily detectable via AI.
      • by Mr D from 63 ( 3395377 ) on Monday April 13, 2015 @10:30AM (#49462767)
        It seems stupid to me because it does not solve a problem. Detecting trolls is certainly not the problem; dealing with them is. They need to work on an algorithm for that.
        • It is trivial to deal with a troll, detecting them is the hard part. Once you have an automated system to detect trolls, you have many options to handle their posts. Delete the post, hide the post, ban the account, and those are just the starters. Any 1st year CS student could write the code to take action against a troll.
          • But blind deletion is not what communities want. They procrastinate, and like second chances, etc. Plus, an algorithm is never going to be foolproof enough....hence the problem.
            • I suppose you run your email servers without a spam filter, too? I mean, they're never 100% accurate.

        • It seems stupid to me because it does not solve a problem. Detecting trolls is certainly not the problem; dealing with them is. They need to work on an algorithm for that.

          How about an algorithm for developing thicker skin?

          Internet trolls only have the power you give them; many sites have an "ignore this douchebag" button anyway, so it's really a moot point.

          • by plover ( 150551 ) on Monday April 13, 2015 @01:59PM (#49464593) Homepage Journal

            While I believe that people who are less sensitive tend to thrive more than others, I don't agree that "thicker skin" is a workable solution. Too many people have fragile emotional states and simply don't have the psychological capacity required to dismiss the hate and insults that often happen online. There have been some high-profile suicides among teens who were attacked online, and who knows how many people remove themselves from public comment because of the hate they've received? For safety reasons I don't think society should completely surrender the forums to the trolls.

            Does that not mean some people are overly sensitive? Sure. But just as we shouldn't velour-line the internet to cater to absolutely every person with a psychological disorder, we also don't have to tolerate the diarrhea that spews forth from the trolls. We don't have to draw a hard-and-fast line on the ground, either, and define "these words are always 100% bad in 100% of situations". Instead, we should be welcoming humans in the loop, asking them to pass judgment when needed. That gets us to a more fluid state than full automation. It also lets the user choose. Don't like the judgment process on Slashdot? Don't hang out on Slashdot.

            I know full automated filtering is the holy grail of internet forum moderation, but as soon as you deploy a filter it becomes a pass/fail test for the trolls, who quickly learn to adapt and evade it. Human judges can adapt, too, and are about the only thing that can; there are simply too few for the volume of trolls out there. A tool like this might help them scale this effort to YouTube volumes.

          • Thickness of skin has nothing to do with it; I'm pretty much impossible to offend or seriously piss off. The real problem with trolls is that they're a huge waste of everyone's time: even if you can ban/ignore them, you still have to read their posts at least once first.

    • by RogueyWon ( 735973 ) on Monday April 13, 2015 @10:28AM (#49462755) Journal

      Your mistake is in using the "classic" definition of "troll" - somebody who sets out to deliberately cause fights on a forum. Trawl through the archives of Slashdot and you will find many instances of this kind of trolling - and yes, the people doing it are often highly literate (and, when they do it right, sometimes very funny with hindsight).

      But the term "trolling" has gone political these days and is routinely used to describe any form of online behaviour that the speaker doesn't approve of. So everything from outright criminal behaviour (eg. threats of immediate violence) at one end of the scale through to disagreeing with a forum's established groupthink (however respectfully) at the other.

      And yes, it has become a favourite term of the intellectually insecure, whenever they want to shout down an opposing point of view without engaging with it. In fact, conflating those two extremes I mention above under the same term is outright beneficial for the easily offended, as it allows them to group polite dissenters together with the mouth-foaming loons.

      • But the term "trolling" has gone political these days and is routinely used to describe any form of online behaviour that the speaker doesn't approve of. So everything from outright criminal behaviour (eg. threats of immediate violence) at one end of the scale through to disagreeing with a forum's established groupthink (however respectfully) at the other.

        Bravo sir! You have summed it up perfectly.

      • by meta-monkey ( 321000 ) on Monday April 13, 2015 @11:16AM (#49463231) Journal

        Back in my day, trolling meant something!

        Ten plus years ago I used to troll /. as "Fux the Penguin" (some [slashdot.org] of [slashdot.org] my [slashdot.org] favs [slashdot.org]) and it was great fun. The system was:

        1) Get in early on a new story. You don't want to get buried under 100 comments.
        2) Lists and quotes are good. Everybody stops to read something with HTML formatting.
        3) Start reasonable. The first paragraph should sound rational.
        4) The next paragraph should include minor errors of fact or logic, but still be mostly reasonable. Just...wrong.
        5) The minor errors of fact and logic in the middle paragraphs should lead to a completely ridiculous conclusion that /.ers would hate, like running Windows, or requiring government approval for encryption technologies.
        6) Watch the post go to +5 insightful because mods don't actually read comments.
        7) lulz at people who write 8 paragraphs dissecting all my mistakes.
        8) -1 Troll.
        9) +5 Funny.

        Today the media conflates "trolling" with "abusive asshole." I think they misunderstand the word "troll." "Trolling" meant "fishing." To dangle bait for newbs to take and work themselves into a lather, and then laugh at those who don't get the joke. It was performance art. Today they think "troll" is referring to monsters who live under bridges. But no, people who stalk others on the Internet and hurl insults at them (or worse) are not "trolls." They are abusive assholes. It's sad.

        And it requires no skill. Trolling is a art.

        • Correct. The story is using a different sense of the word troll, as others point out.

          I see this in car forums a lot; the troll will post up seemingly naive but insidiously wrong posts, usually with a username and pic of a woman. The old guys in the forum hit that bait like a largemouth bass.
        • Re: (Score:3, Interesting)

          by Megane ( 129182 )

          Back in my day, trolling meant nothing!

          Twenty plus years ago, I used to hang out on alt.religion.kibology, where trolling was invented. Someone would post bait (hence the word troll derived from "trolling for newbies") to a newsgroup, adding an "audience" group such as alt.religion.kibology to the newsgroups header line. Stuff like mentioning "Majel Barrett Shatner" on a star-trek group, or intentional misspellings of whatever the group was obsessed with. Then you just sit back and enjoy your popcorn while

          • Later, cross-group trolling was added, where a message would be posted to two or more groups plus the audience group. If you picked your groups right, they would flame each other quite nicely, and it would be time to get another bag of marshmallows.

            Which is why the gamergate thing will never end. It's too easy to troll.

            1) Pretend to be neckbeard.
            2) Say something to piss off SJWs.
            3) lulz.
            4) Goto 1.

        • Today the media conflates "trolling" with "abusive asshole." I think they misunderstand the word "troll." "Trolling" meant "fishing." To dangle bait for newbs to take and work themselves into a lather, and then laugh at those who don't get the joke. It was performance art.

          *Sigh*. For as literate and educated (generally speaking, though not as much as they believe) as Slashdot is, one basic concept seems to continually elude them - words and their meanings do change. In this case, the so-called "classical

          • by itzly ( 3699663 )

            The "new" meaning (an individual that's deliberately abusive or deliberately fans the flames)

            Actually, the new meaning is: someone who holds an unpopular opinion.

          • I completely understand that words change. But what word would you use today for traditional trolling? That is, "non-malicious posts meant to tease and entertain?"

        • The last one about centralized email is prophetic, if you change "penis enlargement" to "terrorism".

      • by msobkow ( 48369 ) on Monday April 13, 2015 @01:24PM (#49464329) Homepage Journal

        Unfortunately, many people think that if you express a different viewpoint or opinion than the masses, you're trying to start an argument or a fight. Why is society so hell-bent on crushing dissenting opinions? And not merely silencing them, but vilifying them?

        I've often been tagged as "trolling" because I don't agree with the crowd. If you knew me personally, you'd know very well that I'm not trying to start a fight, just expressing my opinion. Just because it is not the popular viewpoint doesn't mean my views aren't valid.

        Here on Slashdot, I often see people flagged as being trolls just because they don't follow the masses. You'd think a site full of outcasts and oddballs like programmers and technologists would be more accepting of alternative views, but the exact opposite seems to be the case.

    • by marcello_dl ( 667940 ) on Monday April 13, 2015 @10:38AM (#49462859) Homepage Journal

      > Trolls are usually above average literacy.
      Your right.

    • by Anonymous Coward

      humans are way too quick to judge someone who cannot spell as having nothing to contribute

      I'd sooner judge grammar than spelling. Poor grammar indicates a fundamental misunderstanding of the language. Poor spelling only indicates carelessness or memorization issues.

      For example, I've noticed a recent trend (in American English) where native speakers are confusing adverbs with adjectives -- namely, by dropping the -ly suffix that defines most adverbs in English. For example, "I responded appropriate." This is

  • by ArcadeMan ( 2766669 ) on Monday April 13, 2015 @10:22AM (#49462701)

    I don't want to talk to you no more, you empty-headed animal food trough wiper! I fart in your general direction! Your mother was a hamster and your father smelt of elderberries!

  • by pla ( 258480 ) on Monday April 13, 2015 @10:24AM (#49462715) Journal
    within the first ten posts after the user joins

    So, this algorithm only needs nine more posts than a troll will actually make per throwaway account, then?

    That's some mighty fine police work there, Lou!
  • by account_deleted ( 4530225 ) on Monday April 13, 2015 @10:28AM (#49462749)
    Comment removed based on user account deletion
  • Internet pre-crime.

  • Poof! (Score:5, Funny)

    by RogueWarrior65 ( 678876 ) on Monday April 13, 2015 @10:31AM (#49462771)

    There goes Gawker.

    • by Megane ( 129182 )
      And they still won't be able to identify the "I earned $4623.58 a month by searching for shit on Google!" spam.
  • It also observes that higher rates of community intolerance are likely to foster the anti-social behavior and speed the ban.

    If automated, an intolerant core could try to get users who express opinions they don't like banned. The fact that those users are subjected to intolerance would make the algorithm more likely to ban them.

    • Personally I'm curious how it would function on a site like foxnews or huffpo - in the case of the latter, would it flag the one person posting pro-2nd Amendment comments, or would it flag everyone else when they pile on the aforementioned poster with mountains of venomous hatred?

  • It might be useful to inform an admin to look at suspicious postings, especially if they can get the accuracy higher. BUT I hope no one uses such algorithms to automatically stop suspected trolls. This can only lead to unforeseen consequences and stifling of free speech (unless of course stifling is not an unforeseen consequence, but an intended one).

    Many Slashdotters already complain about the Lameness-Filter, this has the potential to be a hundred times worse.

    The technology will of course be developed,

    • It might be useful to inform an admin to look at suspicious postings, especially if they can get the accuracy higher. BUT I hope no one uses such algorithms to automatically stop suspected trolls. This can only lead to unforeseen consequences and stifling of free speech (unless of course stifling is not an unforeseen consequence, but an intended one).

      Moderation at a privately owned/operated site can be freely used to filter anything they don't want their users to see, even if it creates a slant. However, the odds that they will start filtering specifically subversive content are pretty low, since those kinds of posts generate hundreds of follow-ups of disagreement, bolstering even more traffic. More likely, they will filter the truly atrocious (bland death threats, etc.) that add little in terms of desirable content.

  • The article defines a troll as someone who has been banned from an online group.

    You can be banned from a website such as redstate for being an Obama supporter. People are often banned from websites solely for having minority viewpoints.

    • My question would be: How would they identify this?

      Say I sign up to Red State as ObamaForever2016 and post heavily pro-Obama links/comments. I quickly get banned. Now, I sign up to Pro Tea Party Forums as BObamaFan and post different pro-Obama links/comments. How would the algorithm determine that those two accounts were the same person (banned from one site) and not two different people with similar political views?

  • Research (Score:5, Funny)

    by Sir_Eptishous ( 873977 ) on Monday April 13, 2015 @10:38AM (#49462853)
    Researchers have detected researchers detecting an algorithm detecting researchers researching.
  • What they identify isn't people who "troll", it's people who get mobbed and ostracized by a community. There's a big difference between the two. That's not a question of "false positives", it's a question of whether people lose themselves completely in group think.

    Of course, in practice, there is little chance this will actually go anywhere. Although content creators and ideologically biased readers frequently denounce as "trolls" anybody who disagrees with them, sites actually like controversy because it i

  • Does the algorithm account for the fact that the Troll designation is applied by some specific person who (a) has mod points, (b) strongly disagrees with a given post, and (c) is in many cases part of a group looking for antagonists to some cause that group really believes in?

  • If (internet) then troll_present = true;

    Done, just that easy.

  • Oblig XKCD (Score:3, Funny)

    by Anonymous Coward on Monday April 13, 2015 @10:44AM (#49462913)

    https://xkcd.com/810/

    Seems relevant.

  • Uh Oh... (Score:4, Funny)

    by edibobb ( 113989 ) on Monday April 13, 2015 @10:48AM (#49462939) Homepage
    Looks like my days on the internet are numbered.
  • by Eravnrekaree ( 467752 ) on Monday April 13, 2015 @10:49AM (#49462953)

    The word troll is a pointless word, misused mainly by people who want to vilify those who disagree with them; it is an excuse for people who do not want anyone else to express opinions other than the ones they approve of to censor opinions they do not like. Thus the marking is abused in almost all cases and has no real purpose except censorship. Since a message board should be a place for discussion and the expression of differing views and opinions, such censorship is contrary to the purpose of message boards in the first place.

    The fact is, expressing a view someone else disagrees with is not something we should censor, and the tr*** accusation is just an excuse for censorship. As long as the poster honestly believes what they are posting, it's not a tr***; they are posting their view for the sake of the issue itself, rather than to annoy anyone. Maybe a tr*** is someone who posts things they do not agree with for the sole purpose of annoying others. However, since it is impossible for anyone to know whether someone posting a message honestly believes what they say, it is impossible to determine whether a message is a tr*** or not. It is equally impossible to know whether someone is posting a view because they are interested in a subject and have a view on it, rather than to annoy anyone.

    The fact is, if someone is annoyed by something, the responsibility lies with the person who is annoyed; it's all in the eye of the beholder. Some people will agree with something and others will disagree; you have to allow for a difference of opinions and views. Someone will always disagree with what someone else says; that does not mean the message was posted with the sole intent to annoy, though the reader may still misconstrue or assume that, even though it is impossible for them to truly know. It is okay, and important, for people to be able to post messages they know will annoy others, because anything can annoy anyone; it would be impossible to post a view or position on anything if one had to fear annoying someone.

    The tr*** label could only apply to messages written with the sole intent to annoy. But as I said, it is impossible for anyone to know that this was the sole intent; for it to be the sole intent, the person would have to not honestly believe what they say, otherwise they are posting because they believe in it and think it is important.

    That is why the marking on a message cannot be used legitimately and fairly: it is impossible for anyone to know whether a message is a tr***. That is why we should remove the marking from messaging and bulletin board systems. As I said before, in 100% of cases the marking is abused; it cannot be used in any proper, fair way, because it is a fundamentally flawed feature.

    The best policy on these matters would be for bulletin boards to have a rule against computer-generated and mass-posted advertising, but that's about it.

  • by spacefem ( 443435 ) on Monday April 13, 2015 @11:15AM (#49463225) Homepage

    and volunteer to help test. We have a steady stream of trolls available for review, a truly endless supply.

  • So, posters on message boards deemed "anti-social" or that have views that are not tolerated by the community are now the definition of troll? Wow, that's a good way to make sure opposing viewpoints never get heard. The "algorithm" will just drop any message that goes against the "party line".

    I'd imagine there are plenty of places where if you stand up for your individual rights and privacy you'd be marked a subversive and the community wouldn't tolerate your presence. How about speaking of the value of

  • I can see how this may defeat (ab)users who troll for fun and don't suspect automated detection before it hits them (though, with only 80% accuracy, I dread the thought of the methods expanding out of the virtual realm).

    But what about people "trolling" professionally — paid and/or otherwise compelled into it by a state or corporate actor [forbes.com] pretending there to exist some kind of "grass-roots" movement? How would it deal with thousands of fake accounts [dailymail.co.uk] mounting a coordinated assault, posting (while "liking" and "following" each other)?

    Sometimes you may be able to catch accounts posting identical things at the exact same time [pp.vk.me] (and ban them all in bulk), but the Russians seem to have fixed that bug in their bots now...

    This is turning into another battle like the one in which spammers have fought the best Information Technology minds to a standstill [itsecurity.com]. I doubt progress against forum trolls will be much better than that, not when mere technology, however clever, is up against the interests of a reasonably powerful state.

  • ... something like this:


    int is_troll( const char* username ) {
          if( !whitelisted(username) ) {
                return 1;
          }
          return 0;  /* whitelisted users are, by definition, not trolls */
    }

  • This won't work nearly as well as the authors expect. The moment such a system gains adoption, the rules will change and anti-detection and algorithm-poisoning techniques will be adopted. For example, the proposed approach would likely be completely defeated by first making 10 "constructive" FAQ copy-paste postings. Also, spam is much easier to detect than trolling, since spam is not unique; even so, it took years and complicated spam-detection algorithms to reduce it to manageable levels.
  • by account_deleted ( 4530225 ) on Monday April 13, 2015 @12:18PM (#49463785)
    Comment removed based on user account deletion
  • But I was really just trying to disagree with someone's point of view.
  • article = new nonsensefilledstory();
    article.addStrife();
    article.addControversy();
    article.stokeTribalism();
    article.allowAnonymousComments(true);
    stack_of_trolls *users = article.create();

    forall users as user (
          if (user.isTroll() == false && user.respondsToTrolls() == true)
                (globalBanList.addUser(user));
    )
