SMU Office of Research & Tech Transfer – The internet has revolutionised communications. From its modest beginnings as a static network shuffling small amounts of data, with content published and maintained by expert coders, it has grown exponentially.
A tectonic shift occurred with the emergence of Web 2.0, which enabled anyone with basic computer skills to create and share their own content – and to respond unfettered to other people's content. It gave rise to the participatory online culture known as social media.
Now we are all connected in an information ecosystem that transcends geography, race, gender, age, economic difference and more. Any content we post, and comments we leave, can ricochet around the world with dizzying speed.
Such freedom of expression should make the online space ideal for democracy, and that is an underlying inquiry for the work of Haewoon Kwak, Associate Professor of Information Systems at Singapore Management University (SMU).
“With the internet there are many ways to empower people,” Professor Kwak says. “[Such as] with Twitter, where just a simple tweet is shared a million times and then it will reach a billion people. People's power can be easily emphasised on the internet.”
But there's also a darker side. It is just as easy to use the internet to troll, scam and bully, to spread false and misleading information, and to turn a disregard for truth into a business model, using clickbait to build an audience that can be monetised.
So, how can we trust the online public space?
Pros and cons
Professor Kwak is using computational tools to develop data-driven methodologies to better understand obstacles to trust, such as media bias, toxicity, polarisation and unfair representations.
He notes that in the online space “everything has two sides: one is good and then at the same time the other is bad”. He offers the example of anonymity. It allows some users to “become very toxic and say bad words, but at the same time [anonymity enables] citizen movements to grow”.
There's a similar duality to the economy of the internet, where an audience of followers can be leveraged to make money.
“Because of that, the fake news industry [grows],” says Professor Kwak. “They don't really care about truth, they just want to attract as much audience as they can.
“But at the same time that kind of industry helps citizen journalists. They don't really earn money by themselves, or they are not hired by the media, but they want to find the truth and then they publish or report it on their websites and people come and it generates some money.
“So there are pros and cons, like everything on the internet,” he adds.
Information disorder
Social media reflects the best and worst of human nature, which is amplified as it is shared. Just as it can be supportive and helpful, with useful sites providing free information and resources, it can also be a powerful vehicle for hate speech.
“Trolling and bullying are very serious problems these days,” Professor Kwak laments. “And it's not easy to control or guide [the perpetrators] because… there's nothing to make them act in a correct way.”
Established sites employ moderators to remove harmful content, with varying degrees of diligence and success. Facebook, which has around 2.6 billion active users a month, currently employs about 15,000 moderators, most of whom are contract workers.
“Moderation is a very tough job. They want to read all the comments, but it's impossible because of the scalability,” Professor Kwak points out.
It is also harrowing. Earlier this year, thousands of moderators joined a class action suit against Facebook claiming the job causes post-traumatic stress disorder. Facebook settled the suit.
But rather than giving moderators the task of blocking content, which casts the sites in the role of a controlling Big Brother, Professor Kwak sees an effort by Twitter as promising.
“They show the tweet, but they also add some information that this tweet is not verified, so read it with careful consideration, or warning,” he notes.
“That level of editing information could be good. And it's also [likely to be] acceptable for many people because individuals have some ability to check which information is credible, which information is trustable.”
Professor Kwak and fellow computer science researchers are developing tools that could assist moderators, “such as crowd voting systems and language-based automation tools. So there are technical efforts, and also community efforts, going together,” he says.
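The article does not describe how these language-based tools work internally. As a minimal illustrative sketch only, and not Professor Kwak's actual method, one simple approach is a keyword heuristic that flags comments for human review; the blocklist and function names below are hypothetical:

```python
# Minimal sketch of a language-based moderation aid: flag comments that
# contain terms from a blocklist, so human moderators can triage them first.
# The blocklist is a hypothetical example, not any system from the article.

BLOCKLIST = {"idiot", "stupid", "trash"}  # illustrative terms only

def flag_comment(text: str) -> bool:
    """Return True if the comment contains any blocklisted term."""
    # Normalise: strip common punctuation and lowercase each word.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

def triage(comments: list[str]) -> list[str]:
    """Return only the comments that need human review."""
    return [c for c in comments if flag_comment(c)]
```

Production systems would replace the keyword set with a trained classifier, but even this sketch shows the division of labour: automation narrows the stream, and humans make the final call.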
Transparency and awareness
“I can say it's almost impossible to regulate the online space,” Professor Kwak concedes.
“For the ultimate solution, I think that educating people [is better] – [such as from] a media literacy perspective, so that people themselves know what is good information, what is a credible source and what information we can trust.
“That kind of individual effort at grass-roots level will be more feasible, or more possible, in this area,” he adds.
The Social Dilemma, a recent Netflix documentary, revealed the manipulative practices that big social media companies employ to keep users attached to their sites. For many viewers it was an eye-opener to realise they were being stalked by algorithms, much as online advertising is targeted and entertainment sites make programme recommendations based on previous choices.
“I want to say that there are a lot of efforts going on in the academic field to audit algorithms and make something transparent to users,” Professor Kwak says.
Central to the discussion about how the internet could be improved is the line between freedom of speech and censorship.
Traditionally, the argument has been applied to the power of the press. And while there are differences between mainstream media and online posts, an observation by former U.S. President Thomas Jefferson back in 1813 still resonates.
Jefferson deplored the “malignity, vulgarity and mendacity of newspapers”. But he added that this was “an evil for which there is no remedy; our liberty depends on the freedom of the press, and this cannot be limited without being lost”.
But surely we can at least lift standards on the internet – with user education and more transparency, fostered by researchers such as Professor Kwak, whose stated aim is “understanding society better and also making society better through computational ways”.
Research@SMU Nov 2020 Issue