A team of researchers at the University of Texas at Arlington is developing algorithms to detect automated accounts — also known as "bots" — that spread misinformation online.
The project focuses on Twitter bots that spread fake news and the threat they pose to national security. But identifying the characteristics of these bots can help the everyday social media user, too.
Chengkai Li, an associate professor of computer science and engineering, is leading the project with the help of communication professor Mark Tremayne and other team members from the University of Texas at Dallas.
...On defining fake news:
Mark Tremayne: In the last year or so, the term started off describing things that people would post on Facebook or other social networks that looked like news stories but turned out to be partly or entirely fictitious. I think that's the current use of the term. Then, of course, [President Donald Trump] decided to adopt the term to apply to traditional media.
...On fake news affecting national security:
Chengkai Li: We have now learned from various authorities including the FBI that Russia was behind the Twitter bots that spread fake news during the election last year, and those Twitter bots may have had an impact on the election results. That's very significant in terms of national security.
...On distinguishing between fake news and real news:
Tremayne: One thing is to see how that news is being spread: Can you trace it back to where it starts? And that might give you a clue as to whether it's real or not. Are the accounts that are spreading this on, say, Twitter, automated accounts — what we call "bots" — or are [they] real accounts that an actual single person runs? That can give you a clue as well.
Li: It's not something that is easy to tell sometimes. There are situations when people just make outright false claims. But there are also very tricky and delicate claims. You can twist the numbers. You can try to mislead people.
Tremayne: A good conspiracy theory is one that has some facts woven into it. If you go to check out this conspiracy theory, and some of those things check out, it lends credibility to the whole thing. Some of the things that are being spread online are cleverly done, so that pieces of them do check out, and it makes the whole thing seem believable.
...On how their algorithm will work:
Li: We need to know who posted the fake news tweet originally, who has spread it, and who the followers are; they all have a track record. We can also look at the content. If someone just repeated a [false] claim that has been checked by a fact-checking organization, then we can say "Oh, this has been checked; it's not really true."
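The claim-matching idea Li describes could be sketched very roughly as follows. This is not the team's actual system; the claims, threshold, and similarity measure below are invented for illustration, using a simple word-overlap (Jaccard) comparison against claims a fact-checker has already rated false.

```python
# Hypothetical sketch: flag a tweet whose text closely matches a claim
# that a fact-checking organization has already rated false.
# The example claims and the 0.6 threshold are invented for illustration.

def tokens(text):
    """Lowercase word set, with punctuation stripped."""
    return set("".join(c if c.isalnum() else " " for c in text.lower()).split())

def matches_debunked_claim(tweet, debunked_claims, threshold=0.6):
    """Return True if the tweet's word overlap (Jaccard similarity)
    with any debunked claim meets the threshold."""
    t = tokens(tweet)
    for claim in debunked_claims:
        c = tokens(claim)
        if t and c and len(t & c) / len(t | c) >= threshold:
            return True
    return False

debunked = ["The mayor banned all cars downtown last week"]
print(matches_debunked_claim(
    "BREAKING: the mayor banned all cars downtown last week!", debunked))  # True
```

A real system would need far more robust matching (paraphrase detection, not just shared words), which is part of what makes the problem hard.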
Tremayne: Every Twitter account, whether it's a human account or a fake account, has two characteristics: One is the content that it puts out, and the other is the connections — the people that are following that account, or that the account is following. We would have to set up a system that looks at the content. If you have some accounts that we know are fake, we can learn things from [their] content that help us detect future fake accounts. The same thing is true with the connections. If we know the kinds of connections that a fake account has, [then] that can be used to help you detect a fake account in the future.
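The content-plus-connections approach Tremayne describes can be illustrated with a toy example. This is a minimal sketch, not the researchers' method: the two features (link rate as a stand-in for content, follower ratio as a stand-in for connections) and the nearest-centroid rule are invented assumptions, standing in for the richer learned models a real detector would use.

```python
# Hypothetical sketch: represent each account by simple content and
# connection features, then ask whether it sits closer to the average
# profile of known fake accounts or of known human accounts.
# Feature choices and the example numbers are invented for illustration.

def features(account):
    """Two toy features: share of tweets containing a link (content),
    and followers per account followed (connections)."""
    link_rate = sum("http" in t for t in account["tweets"]) / max(len(account["tweets"]), 1)
    ratio = account["followers"] / max(account["following"], 1)
    return (link_rate, ratio)

def centroid(accounts):
    """Average feature vector of a group of accounts."""
    feats = [features(a) for a in accounts]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def looks_like_bot(account, known_bots, known_humans):
    """Nearest-centroid rule: closer to the known-bot profile than the human one."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    f = features(account)
    return dist(f, centroid(known_bots)) < dist(f, centroid(known_humans))

known_bots = [{"tweets": ["buy now http://x", "http://y free"], "followers": 10, "following": 1000}]
known_humans = [{"tweets": ["lunch was great", "go team"], "followers": 300, "following": 150}]
suspect = {"tweets": ["click http://z", "http://w deal"], "followers": 5, "following": 800}
print(looks_like_bot(suspect, known_bots, known_humans))  # True
```

The design choice mirrors the interview: labeled examples of fake accounts supply a profile, and new accounts are judged by how closely their content and connection patterns resemble it.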
Li: We can call the project a success if we are able to tell you whether a given Twitter account is backed by a computer program focused on spreading fake news or not.
This interview has been edited for brevity and clarity.