The Consumption of Manipulated Media: An Interview with Dr. Takis Metaxas

 
 

Ben Donnelly, External Affairs

November 16, 2021


Following the 2016 American election and Brexit, it became clear that the internet and social media could have a substantial impact on swaying individual beliefs and fuelling populist movements. Compounding the problem, many of the individuals participating in these movements had preexisting biases reaffirmed by false or misleading information online. While social media was once hailed as a liberating force for good because of its role in helping organize events like the Arab Spring, that view has increasingly been overshadowed by its far more insidious role in spreading nationalist propaganda and misinformation about the COVID-19 pandemic and vaccines, and in helping organize the January 6th insurrection. Western governments should be concerned by this phenomenon and look to address it.

Dr. Takis Metaxas has been documenting the manipulation of political information on the internet since the early 2000s. A computer scientist at Wellesley College, he works at the intersection of disciplines, bridging computer science with political studies, psychology, sociology, and economics. He has also worked on technical solutions to misinformation, developing TwitterTrails, an AI-based program that investigates how a story, real or fake, spreads on social media and how users react to it, thereby evaluating how credible it appears on the platform. I sat down with Dr. Metaxas to talk about the history of manipulated information on the internet and the current state of social media misinformation.


Misinformation has recently become an international source of concern for democratic states, with recognition of it as a glaring problem growing around the 2016 American election. Your work on this topic, however, can be traced back as early as 2002. Would you be able to walk us through the early days of internet information tampering?

I started worrying about how easy it was for people to find any kind of information without even realizing whether what they found was true or false, primarily because Google had such great success in the early 2000s. I first encountered the problem in '94, looking for a keyword that had a lot to do with Greek politics. I happen to have been born in Greece, so I was familiar with the problem of Macedonia at that time. So back in '94, I searched in an earlier search engine and saw that what one got out of search results wasn't always the full picture. I got a little worried when, around 2000, I noticed that my students were using Google, which had a great reputation as, you know, the provider of ultimate knowledge. That worried me because I knew that anybody could be an author on the internet, so you can't be sure of what you see. There are typically two phases to conducting research. One phase is trying to find information, and the second phase is trying to make sense of the information and write things out. With the creation of search engines and the provision of so much information on the web, that first part became kind of trivial. I noticed that my students were thinking that the more technically astute they were in finding information, the more sure they were that the information was correct.

Around 2003 or 2004, a little joke started. George W. Bush had launched the wars in Afghanistan and Iraq, and things were not going particularly well, so a bunch of people agreed to play a joke on him. Knowing how Google's PageRank worked, they created 29 web pages linking to George W. Bush's official page with the phrase "miserable failure" as the anchor text. Whenever one searched for those keywords, they would end up at George Bush's page. This strategy was later used in 2004 to target other politicians. As you can imagine, this first group was liberals, so conservatives responded by associating the word "waffle" with John Kerry. In this context, "waffle" means that you're not stable on something, that you change positions. The power of these jokes came from the fact that the vast majority of people had no idea this was happening. Every time the search engine gives you something you think is right, you think the engine is ultimately wise.

An organized effort to manipulate the perception of Republican voters occurred in 2006. It started with a liberal group that decided to target about 50 relatively unknown Republican candidates for Congress. They tried to push negative search results for particular candidates. But of course, the conservatives picked it up right away, and they didn't want to be outdone, so they targeted 42 Democratic candidates with the same technique. Now, the results in 2006 show that the Democratic congressional candidates won big time. I was researching at that time how people can be manipulated on the internet through search engines. I found that internet manipulation techniques are similar to those of propaganda. There is this amazing book from 1939 called The Fine Art of Propaganda, and it describes a bunch of techniques in which you associate good words with something you want to promote and bad words with something you don't want to promote. I describe other techniques in my paper, Web Spam Propaganda and the Evolution of Search Engines. I had noticed that every documented propaganda technique could be easily applied to Google. Essentially, you present a picture that looks to be clear and independent, but you manipulate the search engine to show the picture that you want. This created some negative publicity for Google at the time. The first thing that Google did was to disable the techniques I was using to monitor its search results at the end of 2006, and then to make sure that this would not happen again in the 2008 elections. Suddenly we had search results that were fixed to show only the official, or mostly the official, results.

 

In the early days of information tampering in politics, Google's PageRank algorithm could be manipulated via anchor text: the clickable text of links pointing to a page, which the algorithm used to associate search phrases with that page and assign it false importance in the results. The use of anchor text to manipulate search results was eventually fixed in the following years. In the 2010 Massachusetts senatorial election, however, search results were again gamed, this time through a social media-oriented approach. Could you describe this process?
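To make the mechanism concrete, here is a minimal, hypothetical sketch of how anchor-text manipulation interacts with link-based ranking. It is not Google's actual algorithm; the page names, the scoring, and the simplified PageRank are illustrative assumptions. The point it demonstrates is that a target page can become the top result for a phrase it never contains, simply because many pages link to it using that phrase as anchor text.

```python
from collections import defaultdict

# Toy link graph: each page lists (target, anchor_text) pairs for its outgoing links.
# "bush_bio" never mentions the prank phrase itself; 29 prank pages link to it
# with "miserable failure" as anchor text (all names are hypothetical).
links = {
    "bush_bio": [],
    "news_site": [("bush_bio", "president biography")],
    **{f"prank_{i}": [("bush_bio", "miserable failure")] for i in range(29)},
}

def simplified_pagerank(links, damping=0.85, iters=50):
    """Tiny power-iteration PageRank over the toy graph (illustrative only)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, outs in links.items():
            if not outs:
                continue
            share = damping * rank[src] / len(outs)
            for target, _anchor in outs:
                new[target] += share
        rank = new
    return rank

def anchor_index(links):
    """Map each word found in anchor text to the set of pages it points at."""
    index = defaultdict(set)
    for outs in links.values():
        for target, anchor in outs:
            for word in anchor.lower().split():
                index[word].add(target)
    return index

def search(query, links):
    """Return pages whose incoming anchor text matches the query, best-ranked first."""
    rank, index = simplified_pagerank(links), anchor_index(links)
    candidates = set.intersection(*(index[word] for word in query.lower().split()))
    return sorted(candidates, key=lambda p: rank[p], reverse=True)

print(search("miserable failure", links))  # ['bush_bio'], though the page never uses the phrase
```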

The first part of the question is what I was describing in my previous answer. I was not happy that Google had disabled the technology I was using to monitor results, but at that time we were thinking, okay, Google knows what they're doing. Then other platforms like Facebook and Twitter emerged. They were still not particularly powerful, but they started allowing news to be presented inside their feeds.

In 2010, I was collecting data to check the hypothesis that information from social media could help you predict elections. That is, if you see what people are talking about, you might get an idea of who's going to get elected, because people will independently give you their opinions. However, the data that I collected showed a very different picture. We discovered three different manipulation techniques. The first was what we call a Twitter-enabled Google bomb: Google was surfacing social media posts among its top ten results. People would keep posting the same things over and over and over again on social media, and those posts would appear as top search results in Google. A few months after we published that result, Google stopped doing that.

The second was a coordinated attack on journalists. A conservative-leaning individual would create lists of prefabricated tweets directly telling particular journalists to stop lying to people and tell them the truth. Then the crowd would employ the same technique, sending those prefabricated tweets to the journalist. The journalist, on the other hand, would suddenly see a large number of people attacking them, saying that they were not doing their job well. And no matter how strong a personality you have, when you see so many people supposedly independently telling you that you are lying, you start doubting yourself. These attacks are chillingly dangerous to journalists.

The third, however, was extremely powerful and very dangerous. It was a way of exploiting emotional communities through fake accounts. The first step is finding a community of people, let's say supporters of Trump or supporters of Hillary Clinton; at that time it was Brown and Coakley in Massachusetts. The perpetrator then creates fake accounts and infiltrates that community, in the beginning just repeating what its members say. But at the same time, they're producing web pages with fake information. After a while, once they have gained a little bit of trust, suddenly they say, "oh, by the way, did you see that?", pointing to the fake information they have created. People start propagating it because they're predisposed to believe it: they hate the other side; they're particularly angry. Once that is achieved, the perpetrator can even delete the account and go into hiding, because the angry crowd will do the job of spreading the misinformation for them. We observed and published these ideas in 2010, and it worried some politicians. However, there was also an arrogance of "our people are smarter than the other side." 2016 proved that this is not the case: anybody can fall for fake news.

The Russians, who have had a lot of experience with fake information since the Cold War (the Americans as well, I should say), created a kind of KGB subdivision to promote and build whole organizations that would spread misinformation. Anybody can use this kind of tool, firstly because it's very powerful, and secondly because you cannot stop it, and that's the bummer.

 

Facebook and Twitter have been two of the most notable platforms used for recent social media misinformation campaigns.  What makes them ideal for this purpose?

What makes them ideal is that the public uses them all the time, and they come to us with a huge volume of information. The continuous-engagement algorithms they employ interfere with people's ability to think critically; we just don't have enough time to stop and think. This works particularly well on younger generations, who have the impression that they are very smart. I mean, every young person thinks that they're smarter than the old ones. But they also think that they are better at multitasking: that they can read a book, listen to a lecture, listen to music, and chat with the friend next to them, as if they have capabilities above the average person. The truth is that any human can only focus on one thing at a time. Our brain is amazing, but it has serious limitations when it comes to focusing on multiple things at once. Social media throws so much information at us that it becomes difficult to keep track of what's happening, and so we can be fooled a little more easily.

 

In addition to election misinformation, science-related misinformation about COVID-19 has recently been spreading on social media. Do you think that social media companies are responsible for the content propagated on their platforms, or is it the individual's responsibility to filter content for themselves?

The answer is that both are responsible, but it's much easier to complain about social media companies and demand that they make changes than to demand something from individuals. We have important limitations on what we can process. For example, humans are decent reasoners on average, but not particularly good at understanding mathematical logic. Some of the arguments I heard about COVID-19 went: "Listen, my friend, I can get the vaccine and still get sick, right? Yes. Not only that, but I could get the vaccine and still die (like Colin Powell did). Right? So why should I get vaccinated?" This appears to be a logical statement: the vaccine did not protect me from getting infected and did not protect me from dying, so the vaccine is not good. Well, that looks correct in this kind of binary logic. But to understand the world, we have to think in probabilities. You see, the vaccine will not make it impossible for you to get sick; it will just make it much less likely. The vast majority of people who die these days are unvaccinated. People don't understand probabilities very well; it is a limitation of our brains, and not understanding probabilities makes it hard for us to make much sense of the world. Technology has gotten us to a point where it is challenging human nature. We are just discovering where we get stuck.
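To see the probabilistic point in numbers, here is a minimal sketch with made-up figures; the population size, vaccination coverage, and risk levels below are illustrative assumptions, not real COVID-19 statistics. It shows that vaccinated people can indeed still get sick and die, while the vaccine nevertheless cuts the probability of death dramatically.

```python
# Hypothetical numbers for illustration only, not real COVID-19 data.
population = 1_000_000
vaccinated_share = 0.8            # assumed vaccination coverage

risk_death_unvaccinated = 0.004   # assumed risk of death if unvaccinated
risk_death_vaccinated = 0.0004    # assumed risk if vaccinated (10x lower)

vaccinated = int(population * vaccinated_share)
unvaccinated = population - vaccinated

deaths_vaccinated = vaccinated * risk_death_vaccinated
deaths_unvaccinated = unvaccinated * risk_death_unvaccinated

# Some vaccinated people still die (320 in this toy example), yet the
# unvaccinated group, though four times smaller, accounts for far more deaths (800).
print(f"Deaths among vaccinated:   {deaths_vaccinated:.0f}")
print(f"Deaths among unvaccinated: {deaths_unvaccinated:.0f}")
print(f"Relative risk reduction:   {1 - risk_death_vaccinated / risk_death_unvaccinated:.0%}")
```

Binary logic ("the vaccine did not make death impossible, so it is useless") misses exactly this difference in probabilities.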

This problem requires broad collaboration among the vast majority of people, which we currently do not have. We are still divided into nations, cultures, religions, and traditions, and we are brought up thinking that our view of the world is the right one because it has been in use for so many years and it works, right? For these kinds of global problems, problems that all of humanity shares, our divisions are a real obstacle to making good progress. Education is one of the best tools we have, but it is hard and it is an ongoing process. Critical thinking is tricky; a lot of effort lies ahead for your generation.

 

 Do you have any advice for improving media literacy and recognizing misinformation on a personal level?

I've written an article about that, called Technology, Propaganda and the Limits of Human Intellect, and I was thinking of trying to write more on the subject. But the best advice is this: be aware of why you believe what you believe. Try to apply critical thinking; that is, try to apply the scientific method whenever you're about to believe something that has a major impact on your life. Recognize when you are believing something because of your emotions, because you feel angry or confused or threatened. Keep this in mind. I'll go back to the ancient Greeks, because my education started from there. They had this term, gnōthi seauton, which means know yourself. If you do not have a good understanding of who you are and why you believe what you believe, your chances of understanding the world, and of not being threatened by it, are more limited.

 

How might we fight misinformation on a more systemic level?

We need to keep trying to hold social media companies accountable, to make sure that people cannot have financial incentives to promote misinformation. That's important and relatively easy to fix. We also need to educate people from a young age to understand what is fact and what is fiction, and to recognize how often we use fiction in our lives to make decisions about ourselves. So that, again, is part of education. All of these are part of the effort toward systemic change.

 

Dr. Metaxas' work demonstrates that the methods used to manipulate information on the internet have been refined over time and are extremely effective at swaying voter opinions with false or misleading data. As social media continues to play such a large role in our daily lives, we must understand how misinformation propagates on these platforms and look to address it on an individual level, by understanding our biases and thinking critically, but also by pushing for government policy that tackles the problem. Addressing it is tricky because there is no one-faceted solution, but it is something worth pursuing.
