Social Media and Information: The Wild West of the Digital Age

There is no denying that the COVID-19 pandemic brought about significant change in the way the world works today. One major change concerns the credibility of social media as a news platform. One site in particular, Twitter, quickly became a hotbed for debate around the spread and origin of COVID-19. Because one of Twitter's main selling points is the ability to post something almost instantly, it rapidly turned into the place to get the most up-to-date news about the spread of this new, unfamiliar virus. This by-the-minute posting effectively pushed legacy news media out of the main stream of information for a considerable number of people. To make matters worse, the algorithms Twitter already had in place were good at promoting public discourse, allowing self-proclaimed “experts” to gain popularity by sharing harmful misinformation. Twitter swiftly became a double-edged sword for the spread of both real and fake news about the COVID-19 pandemic.

Recent scholars and researchers have debated whether social media companies should be responsible for the spread of information, both accurate and inaccurate. Some claim that social media companies should not be responsible for what is posted and that judging its accuracy should instead be left to the individual. For example, Emily Brahler, Erica Fuller, and Benjamin Turnbull, researchers at The Catholic University of America, argue that social media should be expected to be full of misinformation and that users should not look to it as a credible source in the first place. Other researchers, like Marco Viviani and Gabriella Pasi, concede that social media is full of misinformation, but argue that people cannot reasonably be expected to discern fake from fact, and that the companies behind social media should actively work toward eradicating misinformation even if it is a tricky problem to solve. Still others suggest that at least some responsibility lies with the programmers: the ACM Code 2018 Task Force offers a broader approach to this problem, arguing that the job of a programmer is to use computing to benefit society and improve human well-being, recognizing that everyone has a stake in its outcomes. What these scholars fail to explore is how these companies should begin fixing their problems. I argue that social media companies should regulate user content and take responsibility for the spread of information in order to protect users from misinformation, but that this is a complex issue that will require changes to algorithms and to the way information is disseminated on these platforms.

Social media companies have come under increasing scrutiny in recent years for their role in the spread of misinformation. Misinformation, also known in recent years as "fake news," refers to false or misleading information that is spread intentionally or unintentionally through social media platforms and other forms of media, including word of mouth. Another way to think about false information is as “information pollution,” which Wardle and Derakhshan further break into three separate categories and place on a Venn diagram. In the “false” circle they place mis-information, which they define as false information that is not created with the intention of causing harm. On the other side is the “harmful” circle, which encompasses mal-information, information that is based on the truth but is used to harm an individual, organization, or country. Where the two circles overlap lives dis-information, information that is both false and intentionally created to harm an individual, social group, organization, or country (Wardle 21). While misinformation might not be intentional, its impact on the information environment is not negligible. This contamination can have serious consequences for individuals and society as a whole, as it can lead to the spread of false beliefs and undermine trust in the media and other institutions.

Of the three categories, misinformation ends up being the most harmful to the information environment because it is the most subtle and largely undetectable until there is too much of it. Just as one McDonald's cheeseburger wrapper in the ditch might not seem harmful by itself, a collection of wrappers and litter makes the ditch grotesque and unappealing to those who have to pass by. While a ditch that is full of trash will gain attention and then receive support from a cleanup crew, the ditch with only one or two wrappers might go largely unnoticed by the general public while still harming the ecosystem. The same can be said about misinformation: a small amount of false information is almost imperceptible to unguarded users, and it particularly harms those who “do not have the necessary instruments and cognitive abilities” (Viviani 18) to ascertain the integrity of the information. In contrast, malinformation is built to gain lots of attention and is thus easier to moderate because of the many eyes it will inevitably fall on.

One way that misinformation and accurate information alike are spread through social media is through the use of algorithms. Algorithms are sets of instructions used to carry out specific tasks, such as sorting data or making predictions. In the context of social media, algorithms personalize the content that users see in their feeds. These algorithms take into account a variety of factors (such as the user's past interactions, the time of day, and the user's location) to determine which posts and ads to show, as the sketch below illustrates. To build these algorithms, social media companies collect vast amounts of data on their users, including their demographics, interests, and behavior on the platform. This data is then used to create mathematical models that predict what content a user is likely to engage with.
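To make this concrete, the following is a deliberately simplified sketch of engagement-based feed ranking. The features, weights, and scoring function here are illustrative assumptions on my part, not any real platform's model; the point is only that the ranking optimizes for predicted engagement, not accuracy.

```python
# A simplified, hypothetical sketch of engagement-based feed ranking.
# All features and weights are illustrative assumptions, not a real platform's model.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    recency_hours: float   # how long ago the post was made
    author_followed: bool  # does the user follow the author?


def engagement_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Predict how likely the user is to engage, using a simple weighted sum."""
    affinity = topic_affinity.get(post.topic, 0.0)    # past interactions with this topic
    freshness = 1.0 / (1.0 + post.recency_hours)      # newer posts score higher
    follow_bonus = 0.3 if post.author_followed else 0.0
    return 0.6 * affinity + 0.3 * freshness + follow_bonus


def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: engagement_score(p, topic_affinity), reverse=True)


# Example: a user who mostly engages with one topic sees that topic pushed to the top,
# regardless of whether the individual posts are accurate.
feed = rank_feed(
    [Post("a1", "vaccines", 2.0, False), Post("b2", "sports", 0.5, True)],
    topic_affinity={"vaccines": 0.9, "sports": 0.1},
)
print([p.post_id for p in feed])
```

Nothing in such a scoring function checks whether a post is true; it only asks whether the user is likely to click, which is what makes the amplification problem discussed below possible.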

The use of algorithms in social media has both benefits and drawbacks. On one hand, algorithms can help users see content that is more relevant and interesting to them, which can improve their experience on the platform. On the other hand, algorithms can create echo chambers, where users only see content that confirms their existing beliefs, and can amplify misinformation by promoting false or misleading content. An echo chamber is a situation in which a person or group of people is exposed to only a narrow range of perspectives or information, causing their beliefs and opinions to become reinforced and amplified. This can happen on social media platforms, where algorithms curate content for users based on their past engagement and preferences. The result can be "filter bubbles," in which users are only shown content that aligns with their existing beliefs and opinions, leaving them with little exposure to alternative perspectives (Cinelli 1). Social media companies, and by extension their algorithms, have come to favor this approach because it keeps users engaged and encourages them to keep coming back, generating more advertising revenue for the platforms.

Echo chambers can have both negative and positive effects on people. On the negative side, echo chambers can lead to a lack of exposure to alternative perspectives, which can make people more entrenched in their beliefs and less open to new ideas. This can lead to a lack of critical thinking and a reduced ability to evaluate information objectively (Brahler 5). Echo chambers can also create a sense of "groupthink," where people are more likely to conform to the beliefs and opinions of the group, even if those beliefs are not based on evidence or facts. This can lead to the spread of misinformation and false beliefs.

On the positive side, echo chambers can provide a sense of belonging and support for people who share similar beliefs and opinions. This can be especially important for marginalized or minority groups who may feel isolated or excluded in mainstream society. Echo chambers can also provide a space for the free exchange of ideas and the sharing of information and resources among like-minded individuals. However, it is important to recognize that the potential negative effects of echo chambers should not be overlooked, and that efforts should be made to promote a more diverse and balanced flow of information on social media platforms.

Because algorithms can work for both good and ill, there is ongoing debate about how social media companies should regulate their algorithms to prevent the spread of misinformation and promote a more diverse and accurate flow of information. Suggestions include using fact-checking algorithms to identify and flag false content, or using algorithms to promote a more diverse range of content in users' feeds (Van Dijck 9). Nevertheless, implementing these solutions is not straightforward and will require careful consideration of the ethical and practical implications.
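To give a sense of what the second suggestion could look like, here is a minimal sketch of re-ranking a feed so that no single topic dominates the top results. The interleaving rule and the per-topic limit are assumptions made for illustration, not a documented feature of any platform.

```python
# A minimal sketch of one proposed mitigation: re-ranking an already-scored feed
# so that over-represented topics are pushed down. The rule and limit are
# illustrative assumptions, not a real platform's policy.
from collections import defaultdict


def diversify(ranked_posts: list[tuple[str, str]], per_topic_limit: int = 2) -> list[tuple[str, str]]:
    """ranked_posts: (post_id, topic) pairs already sorted by engagement score.
    Demote posts once a topic has appeared `per_topic_limit` times near the top."""
    seen = defaultdict(int)
    kept, demoted = [], []
    for post_id, topic in ranked_posts:
        if seen[topic] < per_topic_limit:
            kept.append((post_id, topic))
            seen[topic] += 1
        else:
            demoted.append((post_id, topic))
    return kept + demoted  # over-represented topics sink toward the bottom of the feed
```

Even a rule this simple raises the ethical questions discussed above: someone still has to decide which topics count as "the same" and how much diversity to force on a user who did not ask for it.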

While controlling the flow of information and flagging false content might be manageable on a small scale, the scale of the real world is anything but easy. With so much data flowing through social media sites, having everything checked by a human is a lofty goal, but leaving everything to an algorithm is not straightforward either. Whether something is undoubtedly true or false is not always easy to quantify for either a machine or a human. While some claims come from sources that we trust, such as professional scientists working to provide collective wisdom and knowledge to those who want it, other topics are not as clear cut (Van Dijck 8). Politics is one of the more likely candidates for this issue: human nature makes it difficult to decide objectively what is fact or fiction, and science simply cannot give us a data-driven answer. The task quickly becomes difficult because what is considered false or misleading information can vary depending on individual beliefs and perspectives.

Another reason humans cannot simply "fact check" everything on social media is that the veracity of each sentence must be taken into account, with individual fact-checking carried out in order to tackle the immense task of proving whether something is true. This takes a great deal of time and resources, as even the fact checkers, their work, and the sources they rely on must be double-checked for accuracy. In the age of digital media, where anyone can quickly create a new account and post without any verification, the task instantly becomes overwhelming.

The next obvious step is to have machines do the work for us. With the help of machine learning and, by extension, artificial intelligence, this task might appear much easier to automate. With this technology, there is no longer a need to explicitly write specific rules for each individual case, as the machine can learn and adapt on its own. But the task is not as straightforward as leaving a machine to learn by itself: without predefined benchmarks and, more importantly, without a gold-standard dataset on which to train these models, the machines themselves are unable to determine right from wrong (Viviani 6).

For machine learning and artificial intelligence to be effective at automating this task, they require a large amount of data to train on. This data must be labeled and organized in a specific way, with clear distinctions between different classes or categories. Without this structured data, machine learning algorithms cannot learn and make accurate predictions. Additionally, the quality and diversity of the training data are crucial to the performance of the model; if the data is biased or unrepresentative, the model's predictions may be inaccurate or unfair. Essentially, the machines still struggle with the same dilemma as humans: humans are ultimately the ones in control of the machine, and they implement their own beliefs and biases into the system, with no perfect dataset that everyone can agree upon.
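A toy example makes the dependence on labels visible. In the sketch below, a standard text classifier is trained on a handful of claims that a human has labeled "true" or "false"; the claims and labels are invented for illustration, and no such agreed-upon gold-standard dataset exists, which is precisely the problem the paragraph above describes.

```python
# Toy illustration: a supervised classifier can only learn whatever notion of
# "true" vs. "false" its human-provided labels encode. The claims and labels
# below are invented for illustration; there is no agreed-upon gold standard.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_claims = [
    "Drinking water cures the virus overnight",
    "Health agencies recommend washing hands regularly",
    "The virus was invented to control the population",
    "Vaccines go through clinical trials before approval",
]
train_labels = ["false", "true", "false", "true"]  # human judgments, human biases included

# Convert text to word-frequency features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_claims, train_labels)

# The prediction simply reflects patterns in a tiny, biased training set.
print(model.predict(["This miracle drink cures the virus"]))
```

Whoever chooses the training labels effectively decides what the system will call misinformation, which is why the human dilemma does not disappear when the checking is automated.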

Whether through artificial intelligence or human intervention, the concern also turns to how much free speech should be allowed before the spread of potentially false information outweighs a person's right to free speech. The concept of shadow banning is not new, but in recent years it has become increasingly controversial. Shadow banning is a practice that involves limiting the visibility or reach of a person or organization on social media without their knowledge. This can be done by reducing the visibility of their posts in other users' news feeds, making it harder for their content to be discovered. Shadow banning is often used as a way to limit the spread of misinformation or harmful content on social media platforms.
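In principle, a shadow ban is a small change to the ranking step described earlier: flagged accounts keep posting normally, but their content is silently down-weighted in everyone else's feed. The sketch below illustrates the idea; the flag list and the multiplier are hypothetical values chosen only for illustration.

```python
# A minimal sketch of how a shadow ban could work in principle. The flagged
# account and visibility multiplier are hypothetical values for illustration.
SHADOW_BANNED = {"user_123"}   # accounts flagged by moderation (hypothetical)
VISIBILITY_MULTIPLIER = 0.05   # flagged posts keep only a small fraction of their reach


def adjusted_score(author_id: str, base_score: float) -> float:
    """Return the ranking score actually used in other users' feeds.
    The author sees no error and receives no notification; reach quietly drops."""
    if author_id in SHADOW_BANNED:
        return base_score * VISIBILITY_MULTIPLIER
    return base_score
```

The technical simplicity is exactly what makes the practice contentious: the affected user has no way to know the adjustment is happening, which is where the free-speech questions below come in.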

While the idea of shadow banning may seem appealing as a way to limit the spread of false information, it raises important questions about free speech and censorship. Many people believe that it is important to protect the freedom of speech, even if it means allowing the spread of false or harmful information. Others argue that limiting the spread of false information is necessary to protect the integrity of online discourse and prevent harm to individuals or groups. While it certainly is not a great idea to spread harmful information, at what point does a simple mistake turn into an active choice to spread false information?

Just as shadow banning might not always be the right way to suppress information, forcing the spread of accurate information is not always the correct answer either. The Association for Computing Machinery's Code of Ethics contains a section on avoiding harm, which notes that “Well-intended actions, including those that accomplish assigned duties, may lead to harm” (ACM Code 2018 Task Force 9). Even when it is not intentional, pressuring people into believing something can have unintended consequences that cause more harm than good. During the COVID-19 outbreak, Twitter decided to place COVID-19 banners written by its staff at the top of everyone's timeline. While this information was undoubtedly fact-checked by staff, the idea of centralizing information on a site built around decentralized groups of people did not sit well with a large group of users. Ultimately the banner was removed, but not before it undermined trust in the scientists and governmental agencies that had contributed to the research included in the banner.

In conclusion, the COVID-19 pandemic has brought attention to the role of social media in the spread of misinformation. While some scholars argue that users should be responsible for discerning fake from factual news, others argue that social media companies should take responsibility for regulating content and eradicating misinformation. Misinformation can have serious consequences for individuals and society as a whole, and thus it is important for social media companies to take steps to address this issue. This will likely require changes to algorithms and the way information is disseminated on these platforms. Overall, it is clear that addressing the issue of misinformation on social media is a complex task, but it is one that must be tackled in order to protect the integrity of information and maintain trust in the media.