by Antônio F. Oliveira
Special to L. Neil Smith’s The Libertarian Enterprise
In recent years, technology has had a significant impact on societies and politics. One of the areas where this is most evident is in how information is collected, stored, and used by companies and organizations. With the popularity of the internet and social media, vast amounts of data can be collected on user interactions with online content, including clicks, views, and shares. Furthermore, natural language processing and machine learning techniques enable companies to analyze the polarity of data collected on the internet, such as text, images, videos, and audio, to evaluate users’ opinions on a product, service, or idea. This collected data powers various analysis and personalization tools, such as recommendation algorithms, machine learning models, data tracking, and audience segmentation. Recommendation algorithms, for example, analyze users’ interactions with content to understand their preferences and personalize the content displayed to them. Audience segmentation, on the other hand, groups users by shared characteristics so that companies can better understand their customer base and direct their messages more effectively.
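As a concrete illustration, here is a minimal Python sketch of how an interaction log might be turned into per-user interest profiles and a simple audience segmentation. The action weights, users, and topics are invented for illustration; this is not any platform’s actual code.

```python
from collections import Counter, defaultdict

# Hypothetical interaction log: (user, topic, action)
interactions = [
    ("alice", "economy", "click"), ("alice", "economy", "share"),
    ("alice", "sports", "view"),   ("bob", "elections", "click"),
    ("bob", "elections", "share"), ("bob", "elections", "like"),
]

# Weight actions by how strongly they signal interest (assumed weights).
ACTION_WEIGHT = {"view": 1, "click": 2, "like": 3, "share": 5}

def interest_profile(user):
    """Score each topic for a user from their weighted interactions."""
    profile = Counter()
    for u, topic, action in interactions:
        if u == user:
            profile[topic] += ACTION_WEIGHT[action]
    return profile

def segment_audience(users):
    """Group users by their single strongest topic of interest."""
    segments = defaultdict(list)
    for user in users:
        top_topic, _ = interest_profile(user).most_common(1)[0]
        segments[top_topic].append(user)
    return dict(segments)

print(interest_profile("alice"))           # Counter({'economy': 7, 'sports': 1})
print(segment_audience(["alice", "bob"]))  # {'economy': ['alice'], 'elections': ['bob']}
```

Even this toy version shows the essential asymmetry: the user sees only personalized content, while the operator sees a labeled, segmented audience.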
However, these tools also have a negative side, as they can be used maliciously to influence public opinion and decision-making on political and social issues. Information manipulation, for example, can have a significant impact on public opinion, especially when information is deliberately falsified or distorted to influence the opinion of a group of people. Additionally, persuasive design is a technique that uses visual and interactive elements to influence user behavior in applications and websites. Below, the main algorithmic tools used by social media are analyzed technically, drawing on computer science’s understanding of social media algorithms and artificial intelligence. The list covers recommendation algorithms, sentiment analysis, psychological manipulation techniques, persuasion and depersonalization, shadow banning, and the marriage between State and Big Tech, as well as real examples of moments when these tools were used with the help of state powers.
The role of recommendation algorithms and sentiment analysis in the agenda-setting process
Recommendation algorithms can be used by criminal groups to manipulate public opinion, spread disinformation, influence elections, and pursue other illegal ends. To do this, such groups can exploit the nature of these algorithms, which use machine learning techniques to analyze user behavior and present content that matches each user’s preferences and interests. For example, a group can create multiple fake accounts on a social network and use those accounts to interact with the content it wants to promote. It can use bots to inflate the number of views, likes, and shares of the content, thus increasing its visibility to other users of the network. In addition, it can use social engineering techniques, such as creating fake stories and memes, to extend the content’s reach and manipulate public opinion. Recommendation algorithms remain the most widely used tool for manipulating public opinion: certain groups can use them to detect users who share their views and direct content to those users in a way that makes them more likely to share it with others. Moreover, the algorithms can be used to create information bubbles that reinforce users’ beliefs and exclude different points of view or, worse still, recommend only content, newspapers, theses, and videos that are in line with a particular political narrative.
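As a toy illustration of how these two mechanisms might compose, the sketch below combines an engagement score that bot activity can inflate with a stance filter that only surfaces items inside a user’s “bubble.” Everything here (stance values, weights, the bubble threshold) is a hypothetical simplification, not a description of any real recommender.

```python
# Hypothetical catalog: each item has a political stance in [-1, 1]
# and raw engagement counts (which bot farms can inflate).
items = [
    {"id": 1, "stance": 0.8,  "likes": 120, "shares": 40},
    {"id": 2, "stance": -0.7, "likes": 300, "shares": 90},
    {"id": 3, "stance": 0.75, "likes": 50,  "shares": 10},
]

def engagement_score(item):
    # Shares count more than likes; bots exploit exactly this by
    # mass-sharing, since the score cannot tell bots from people.
    return item["likes"] + 5 * item["shares"]

def recommend(user_stance, catalog, bubble_width=0.5):
    """Rank items by engagement, but only those whose stance falls
    inside the user's 'bubble' -- dissenting views never appear."""
    inside = [it for it in catalog
              if abs(it["stance"] - user_stance) <= bubble_width]
    return sorted(inside, key=engagement_score, reverse=True)

# A user with stance 0.9 never sees item 2 at all, no matter
# how much organic engagement it has.
print([it["id"] for it in recommend(0.9, items)])  # [1, 3]
```

The filter is the important part: the excluded item is not downranked but simply absent, which is exactly what makes the bubble invisible to the user inside it.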
An example of the use of the recommendation algorithm tool was recorded after the Brazilian elections, when reports prepared by the Brazilian Armed Forces and by the Liberal Party, the party of former President Jair M. Bolsonaro, pointed to vulnerabilities in the electronic voting machines used in the country. These reports noted that the source code of the voting machines is not accessible and that more than half of the machines throughout Brazil cannot be audited, raising significant doubts about the reliability of the election results. In this case, the recommendation algorithms played a significant role, as could be seen in Google’s recommendations, which surfaced only newspapers, theses, and websites defending the Brazilian electoral system, without presenting any facts that contradicted the official reports. In addition, after the release of these reports, Twitter suppressed the hashtag #BrazilWasStolen, which denounced the election results based on the official evidence. Despite reaching over 1 million retweets, the hashtag was quietly hidden by Twitter without any explanation. It is worth noting that the evidence presented in the official reports was never clarified by the body responsible for the elections, the Superior Electoral Court (TSE), which denied the armed forces adequate access to the source code. Instead of clarifying the vulnerabilities found, the TSE ironically fined the Liberal Party 22 million reais (22 also being Jair Bolsonaro’s ballot number) after the party’s report was presented. The then-president of the TSE, Alexandre de Moraes, who is also a Brazilian Supreme Court justice, merely argued that the report constituted “bad faith litigation,” without presenting any evidence to counter the findings cited in the reports.
Sentiment analysis plays a very important role in this manipulation. This natural language processing technique aims to identify and extract subjective information from texts, such as opinions, feelings, and emotions. It is used in various applications, including brand and product monitoring, customer feedback analysis, and social media analysis.
According to Pang and Lee, sentiment analysis can be performed at three levels: document level, sentence level, and aspect level. At the document level, the goal is to determine the overall sentiment of a text, while at the sentence level, the goal is to identify the sentiment expressed in each sentence. At the aspect level, the analysis is performed with respect to specific aspects of the text, such as product features or service characteristics (Pang and Lee, 2008).
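These levels can be illustrated with a toy, lexicon-based scorer in Python. This is a deliberately minimal sketch of the document-level versus sentence-level distinction; real systems use trained models rather than a hand-made word list, and the lexicon and example text below are invented for illustration.

```python
# Toy sentiment lexicon; production systems use trained models,
# but the level distinction (document vs. sentence) is the same.
LEXICON = {"great": 1, "love": 1, "reliable": 1,
           "bad": -1, "hate": -1, "broken": -1}

def polarity(text):
    """Sum lexicon scores of the words in a span of text."""
    words = text.lower().replace(".", "").split()
    return sum(LEXICON.get(w, 0) for w in words)

doc = "I love this phone. The battery is great. The camera is broken."

# Sentence level: one polarity score per sentence.
sentences = [s.strip() for s in doc.split(".") if s.strip()]
print([(s, polarity(s)) for s in sentences])
# [('I love this phone', 1), ('The battery is great', 1),
#  ('The camera is broken', -1)]

# Document level: one aggregate polarity for the whole text.
print(polarity(doc))  # 1  -> overall mildly positive
```

Note how the document-level score hides the negative sentence entirely, which is why finer-grained (sentence and aspect) analysis is valuable to whoever is mining opinions.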
Sentiment analysis is a technique that can be used by social networks for political purposes, allowing them to manipulate public opinion by directing content based on user sentiment. For example, if the social network identifies that the majority of users in a particular state or country have a positive opinion about a politician or topic, it can show more negative news and information about that political figure, influencing public opinion and creating a negative image around them.
Reference: Pang, B. and Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2), pp. 1–135.
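The targeting pattern described above can be sketched in a few lines. The region names, scores, and content pools below are hypothetical; the point is only to show how aggregate sentiment could mechanically select which coverage a region is shown.

```python
from statistics import mean

# Hypothetical per-user sentiment scores toward a politician,
# already extracted by some sentiment model, grouped by region.
sentiment_by_region = {
    "state_a": [0.7, 0.5, 0.9, 0.6],     # mostly positive
    "state_b": [-0.4, -0.6, 0.1, -0.3],  # mostly negative
}

def pick_feed(region, positive_pool, negative_pool):
    """Toy version of the targeting described above: where opinion of
    the figure is positive, push negative coverage, and vice versa."""
    avg = mean(sentiment_by_region[region])
    return negative_pool if avg > 0 else positive_pool

pos = ["profile piece", "policy win story"]
neg = ["scandal story", "critical op-ed"]
print(pick_feed("state_a", pos, neg))  # ['scandal story', 'critical op-ed']
```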
An example of the use of these tools appears when searching for conservative politicians or controversial topics on Google: most of Google’s recommendations in such searches come from progressive websites and newspapers that generally criticize those politicians or topics. Another alarming example was the case of the then-president of Peru, Pedro Castillo, who, after announcing a coup d’état in December 2022, was erroneously presented by the media as a nationalist conservative. In fact, the very Wikipedia page that was changed had always described former President Castillo as a socialist politician and an activist for the agenda of the São Paulo Forum (a coalition of left-wing parties and politicians in Latin America, founded in 1990 by Luiz Inácio Lula da Silva and Fidel Castro, whose goal is to build a strong and united Latin America for socialism). At the time, Wikipedia quickly changed the Peruvian president’s biography from “progressive socialist” to “conservative nationalist,” something detected in real time by internet users who had saved the previous version of the page and compared it to the current one (Cleiton, 2022).
This episode demonstrates how many sectors of the internet have a clear political bias and how Google’s recommendation algorithm can be manipulated to favor certain opinions. In a democratic society, it is essential to have access to reliable and impartial information about politicians and political issues, regardless of the ideological orientation of these politicians and the opinions of the people writing about them. It is worth noting that this type of manipulation can be carried out subtly, without users realizing that they are being influenced, creating an information bubble environment – a term used by media and technology researcher Danah Boyd to refer to public opinion manipulation.
In short, sentiment analysis and recommendation algorithms have remarkable potential to influence collective thinking on social networks: they allow content to be directed based on users’ emotions, making it possible to create a closed social environment in which a central narrative can be imposed around a specific subject. Together, they enable a sophisticated form of social engineering, unprecedented in recent centuries.
Reference: Cleiton, (2022, December 7). “Wikipedia rewriting history.” [Tweet]. Twitter. https://twitter.com/cleitonprofeta/status/1600620620491939840
Persuasion by Design, Depersonalization, and Fact-Checking
Persuasion by design is another technique used to influence users’ behavior in favor of a particular political cause. This technique involves manipulating the interface of a platform or application to encourage users to take certain actions, such as sharing content or signing up for a campaign. The use of persuasive colors, fonts, images, and phrases can be quite effective in influencing a user’s decision. An example of persuasion by design is the gamification strategy, which uses game elements to encourage user engagement and loyalty on a website or application. Gamification uses persuasive design techniques such as rewards and positive feedback to encourage the repetition of specific behaviors, such as posting content or sharing personal information. Additionally, gamification can be used to encourage the purchase of products or services by offering prizes or discounts to users who achieve certain goals.
Another example of persuasion by design is the technique of dark patterns: interface designs crafted to manipulate or deceive users. For example, a website may present an “accept all cookies” button instead of allowing users to choose which specific cookies to accept. This strategy uses persuasive design to nudge users into accepting all cookies, even when doing so may compromise their privacy. The concept of persuasion by design involves the use of psychological principles and design techniques to influence human behavior. This includes concepts such as heuristics, the mental shortcuts we use to make decisions quickly, and cognitive biases, the ways in which our perceptions and judgments can be distorted. Persuasion by design can also involve technologies such as artificial intelligence and machine learning to personalize the user experience and maximize the effectiveness of the persuasion.
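The cookie-consent dark pattern can be modeled schematically: the point is the asymmetry of effort between the privacy-invasive default and the protective path. The sketch below is hypothetical and not drawn from any real site’s code.

```python
# Schematic model of the "accept all cookies" dark pattern:
# the privacy-invasive path costs one click, the protective
# path costs one decision per cookie category.
CATEGORIES = ["essential", "analytics", "advertising", "profiling"]

def accept_all():
    """The prominent one-click button: everything enabled."""
    return {c: True for c in CATEGORIES}, 1  # (consent, clicks required)

def configure(choices):
    """The buried 'manage preferences' path: one decision each."""
    consent = {"essential": True}  # usually not optional
    consent.update(choices)
    return consent, 1 + len(choices)  # open the menu + each toggle

print(accept_all())
# ({'essential': True, 'analytics': True, 'advertising': True,
#   'profiling': True}, 1)
print(configure({"analytics": False, "advertising": False,
                 "profiling": False}))
# ({'essential': True, 'analytics': False, 'advertising': False,
#   'profiling': False}, 4)
```

One click versus four is the whole trick: the design does not forbid the protective choice, it just taxes it.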
In addition, depersonalization is a phenomenon studied in social psychology that refers to the loss of self-awareness and self-control in situations where the individual feels anonymous or unidentified. On the internet, this phenomenon can be amplified because people can interact anonymously and lose track of their actions and words. Depersonalization is used by social networks and other sites to disconnect users’ actions from their identities, allowing them to feel more comfortable doing things they would not otherwise do. In political forums, for example, anonymity can encourage users to make aggressive or prejudiced comments without fear of being identified. Users can hide behind pseudonyms and consequently feel freer to express negative and aggressive opinions. This virtual anonymity can increase aggression and violence in online interactions, as individuals feel less responsible for their actions. The same technique can be used by social networks to increase user engagement: by allowing individuals to express themselves anonymously or through pseudonyms, social networks encourage participation in virtual conversations and discussions. However, depersonalization can have negative implications, whether in online forums, social networks, or virtual games, as users can engage in aggressive or hostile behavior without suffering the real consequences of their actions. In this way, the online environment can become toxic, one where abusive language and practices are accepted or, in more serious cases, encouraged.
The fact-checking technique has become a popular tool on social media and Google for verifying the accuracy of information shared by users. However, it is important to note that the use of fact-checking can be abusive and manipulative. One of the most common strategies used by those responsible for fact-checking to manipulate public opinion is the way the tool is presented on social media. It behaves as a monopoly on truth, with irrefutable and indisputable conclusions. The warning messages imposed on publications are frequently not accompanied by concrete evidence, and users are given no way to contest the decision. The lack of open and transparent debate about the information being verified reinforces the perception that fact-checking is the sole arbiter of truth, leading people to trust information verified by the organization without questioning its accuracy.
It is important to note that fact-checking operations are not composed of autonomous agents who seek to analyze facts impartially and fairly. On the contrary, fact-checkers are generally representatives of media outlets and newspapers that tend to privilege their own political narratives. Furthermore, the selection process for these individuals is not transparent, raising concerns about ideological bias and a lack of rigor in their selection.
Indeed, the use of a monopoly on truth by authoritarian and totalitarian regimes is not a historical novelty. These regimes frequently created government agencies to control the information disseminated to the population. The main objective was to impose a single version of the truth that reinforced the government’s narrative and reduced or eliminated the possibility of criticism and questioning. Joseph Goebbels, Minister of Propaganda of Nazi Germany, is a famous example of this type of strategy: he controlled the German media and created a propaganda system aimed at shaping public opinion in favor of Adolf Hitler’s regime (Lochner, 1948).
George Orwell’s book “1984” is a classic example of how the relativization of the meaning of words and facts can be used to manipulate public opinion. The work presents a dystopian world in which a totalitarian government controls all aspects of citizens’ lives, including the information they receive. In this world, truth is flexible and can be changed according to the government’s interests. The main character, Winston Smith, works for the Ministry of Truth, which is responsible for rewriting history to fit the needs of the regime. The book shows how the manipulation of truth can be used to control the population and create an alternative reality that benefits only those who hold power (Orwell, 1949).
Reference: Orwell, George. 1984. Secker & Warburg, 1949. Goebbels, Joseph. “The Führer as a Speaker.” In The Goebbels Diaries 1942–1943, edited by Louis P. Lochner, 37–45. Doubleday & Company, Inc., 1948.
However, the US Supreme Court, in important decisions, has emphasized the importance of freedom of speech and press, and it is believed that it is the public, not a judge or government, who should decide what is true or not. In 1964, in the case New York Times Company versus Sullivan, Supreme Court Justice William J. Brennan Jr. wrote that “open and robust discussion of public issues is vital to the health of our nation” and that “the value of freedom of speech and press lies in the fact that they are essential to the discovery of truth.” This idea was reinforced in other important Supreme Court decisions, including the 1971 decision in the case New York Times Company versus United States (also known as the “Pentagon Papers”), in which the Supreme Court rejected the government’s attempt to prevent the publication of secret Pentagon documents.
The complexity of the issue lies in the fact that by delegating the power to judge the truthfulness of facts to a few agents, there is a risk that the truth will be manipulated to serve personal, political, or economic interests. The result of this can be the establishment of a central narrative, which can shape public opinion according to certain interests. This coercive power should not be underestimated, as it can be used to promote hidden agendas, benefit lobbyists or malicious groups, and even produce social engineering in society.
In democratic regimes, freedom of expression is a fundamental value, but it must be exercised responsibly. At the same time, it is important to ensure that people have the autonomy to filter the information that reaches them, without the interference of coercive bodies that may distort the facts or impose a central narrative. If someone shares clearly criminal content, it is fair that they be brought to court and go through due legal process, with the right to a defense and to contest the accusation. The individual should only be forced to retract, delete posts, or have their social network account deactivated after a fair trial, with the right to a full defense, and only if the existence of a crime is proven. It is unacceptable for a group formed by the union of state powers, the major press, and social media managers to operate without transparency in the selection of its agents and in its analysis process, without presenting evidence or offering opportunities for appeal. Coercively determining whether someone is telling the truth, blocking their social network accounts, or shadow banning them is unacceptable and more akin to fascist regimes than republican ones.
Therefore, it is crucial that no group or entity dictate what is true or not on social media, as this can lead to manipulation of public opinion, curtail freedom of expression, and inhibit constitutional rights. Users should have the right to challenge information and express their own opinions without fear of retaliation or censorship. In summary, fact-checking should be carried out by the facts themselves and by the community that participates in public debate, without state interference, where the probability of error and manipulation is very high.
Reference: New York Times Co. v. Sullivan, 376 U.S. 254 (1964); New York Times Co. v. United States, 403 U.S. 713 (1971).
Shadow Banning Techniques: How Shadow Banning Is Used as a Political Weapon on Social Media
Shadow banning is a technique used by social media platforms to silence or hide users without their realizing it. It is used to restrict the reach of content from a particular political group or to hide it from other users entirely. The stated goal is to prevent users from spreading information that may be harmful or that violates the platform’s terms of service. However, shadow banning can also be used unfairly to censor divergent opinions or to control the narrative surrounding a topic.
One of the main ways to implement shadow banning is through content-ranking algorithms, which determine what is displayed on main pages and in search results. Social media platforms can also use blacklists to restrict the reach of content from a particular political group. Blacklists identify users or content that the platform considers harmful or offensive and then limit that content’s exposure to other users, without any explanation of why it is being done.
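A minimal sketch of how a ranking pipeline could implement this, assuming a hypothetical blacklist and a silent score multiplier (no real platform’s code is shown here):

```python
# Hypothetical ranking pipeline with a silent suppression factor.
BLACKLIST = {"dissident_account"}   # opaque; the user is never notified
SUPPRESSION = 0.01                  # score multiplier, not a removal

posts = [
    {"author": "dissident_account", "text": "...", "base_score": 95.0},
    {"author": "ordinary_user",     "text": "...", "base_score": 40.0},
]

def rank_for_feed(posts, viewer):
    """Order posts for a viewer's feed. A blacklisted author's posts are
    not deleted; they are quietly scored near zero for everyone else,
    while the author still sees their own posts, so nothing looks wrong."""
    def score(p):
        if p["author"] in BLACKLIST and p["author"] != viewer:
            return p["base_score"] * SUPPRESSION
        return p["base_score"]
    return sorted(posts, key=score, reverse=True)

# The author's own view looks completely normal...
print([p["author"] for p in rank_for_feed(posts, "dissident_account")])
# ['dissident_account', 'ordinary_user']
# ...but everyone else effectively never sees the post.
print([p["author"] for p in rank_for_feed(posts, "someone_else")])
# ['ordinary_user', 'dissident_account']
```

The viewer-dependent scoring is what makes the measure invisible: the banned user has no signal that anything changed, which is precisely the complaint raised in the cases below.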
Shadow banning can have significant implications for freedom of expression and diversity of opinion on social media. Although platforms claim the tool is used to maintain the safety and integrity of the platform, it is often used to censor divergent opinions and control the narrative around “controversial topics.” This can lead to the formation of opinion bubbles and limit users’ access to information. In the political context, the technique can be used as a form of censorship, limiting the reach of specific political groups and their legitimate opinions. A recent case involves the acquisition of Twitter by billionaire Elon Musk, which generated a significant increase in followers for conservative profiles. Ben Shapiro, an American conservative writer, reported that “without algorithmic handcuffs, conservative profiles are exploding.” In countries like Brazil, a considerable number of conservatives experienced an exponential increase in their follower base, an unprecedented phenomenon. The shadow banning strategy had been used not only to restrict content produced by specific political groups but also to decrease the visibility of their profiles. The tactic was implemented suddenly, without any prior warning or possibility of recourse, resulting in intense dissatisfaction and outrage among conservative users, who found themselves at a disadvantage against the progressive political spectrum.
Although some authorities initially dismissed user concerns as conspiracy theories, months later Elon Musk admitted in an interview that “all the conspiracy theories regarding Twitter seem to have proven true with Twitter’s files.” In addition, Brazilian federal deputy Eduardo Bolsonaro, son of former president Jair Messias Bolsonaro, won a lawsuit in Brazilian court against Instagram’s shadow banning of his profile, resulting in the deputy’s removal from Instagram’s blacklist. In summary, this scenario highlights the importance of carefully evaluating the implications of policies and actions taken by social media platforms, especially in the political context, as well as the importance of transparency and open communication with affected users.
The marriage between the State and Big Tech: a danger to the fundamental rights of humanity?
Another worrying phenomenon is the interconnection between the State and social media, which can lead to the manipulation of public policy. As discussed in this article, American state agencies such as the FBI and security agencies like the Pentagon have been accused of manipulating politics not only in the United States but also in other countries. We have also addressed how the Brazilian Supreme Court, in partnership with sectors of the mainstream media and social media, is advancing its political projects in Brazil. Furthermore, we presented what happens when social media companies decide to stand up to these powerful agencies, as in the cases of Telegram, Gettr, Rumble, and the new Twitter, which have been threatened with fines, blockades, and restrictions by the Brazilian Supreme Court. It is worth highlighting one tremendously worrying fact in this article to alert other countries: an alarming case that occurred during the 2022 Brazilian presidential elections, which violated due process of law and the immutable clauses of the Brazilian Constitution, such as freedom of expression and the right to a defense, provided for in article 60, §4 (Brazilian Federal Constitution, 1988). Such clauses cannot be abolished even by constitutional amendment, as they are considered fundamental to the maintenance of the Democratic Rule of Law.
During the second round of the 2022 Brazilian presidential elections, in a plenary session of the Superior Electoral Court (TSE), the body composed of Supreme Court justices responsible for coordinating the electoral process, the justices debated whether to censor the conservative streaming platform Brasil Paralelo, similar to the American outlet The Daily Wire. The group’s documentary about the stabbing of then-reelection candidate Jair Messias Bolsonaro was scheduled to be shown six days before the second round of voting but was vetoed by 4 votes to 3. The ministers made this decision without having watched the documentary, imposing prior censorship in a democratic country and temporarily sacrificing the company’s freedom of expression based on subjective concepts such as “indications,” “informational disorder,” and “fake news,” among other generic and broad terms, without presenting factual evidence. This is worrying because it compromises due process of law and immutable clauses of the Brazilian Constitution, such as freedom of expression and the right to a defense, provided for in the article cited above.
Reference: Constitution of the Federative Republic of Brazil, 1988. Brasília, DF: Federal Senate, 1988. Available at: https://www.planalto.gov.br/ccivil_03/constituicao/constituicao.htm. Accessed on January 16, 2023.
The reasoning given by the ministers regarding the censorship imposed on the Brasil Paralelo documentary, which deals with the assassination attempt against former president Jair Messias Bolsonaro during the 2018 election campaign, perpetrated by a former member of the Socialism and Liberty Party (PSOL), was highly subjective and lacked factual support. In his vote, Minister Alexandre de Moraes invoked the existence of an “ecosystem” of individuals, about twenty in total, who have been under investigation by the Supreme Court for three years, accused of constituting a “hate cabinet.”
In summary, the concept of a “hate cabinet” refers to a group of individuals or bots that use social media to disseminate false information and promote attacks against democracy. However, such allegations should be accompanied by substantial evidence, which has not been provided. This scenario is similar to what the American media reported about “Russian bots” that allegedly spread false information in support of former President Donald Trump. As the so-called Twitter Files demonstrated, however, that narrative was false, created by members of the mainstream media and the Democratic Party colluding with executives of the former Twitter. It is important to remember that these “bots” were, both in the US and in Brazil, actually real people sharing their political preferences, not a coordinated action by foreign agents.
Furthermore, the minister made clear that, owing to this investigation, which so far lacks consistent evidence, the requirements of the Latin legal concepts of “fumus boni iuris” and “periculum in mora” had been met. These terms refer, respectively, to the “smoke of good law” and the “danger in delay”: if there are sufficient indications that one of the parties is right in its claim, urgent measures must be taken to avoid irreparable harm. On this basis, without evaluating the content of the documentary in question, the minister concluded that “if there are indications or evidence” (which were not presented) of alleged “irreparable harm,” the documentary could not be exhibited, and the platform would be temporarily demonetized and restricted from publishing documentaries. It is also important to remember that fake news, attacks on democracy, and hate speech are not classified as crimes in the Brazilian Constitution, and the minister presented no evidence that such “crimes” had been committed, leaving the accused at the mercy of his subjective interpretation.
Minister Ricardo Lewandowski surprised observers with his subjective legalism by stating, during his vote, that “we are facing an absolutely new phenomenon, the phenomenon of disinformation, which goes beyond fake news. The ordinary citizen, the ordinary voter, is not prepared to receive this type of informational disorder as I am presenting here in my vote.” In this episode, Lewandowski accused Brasil Paralelo of practicing “informational disorder,” a generic term unaccompanied by concrete evidence of any illegal conduct.
In other words, the minister based his accusation on his subjective opinion of the case, without pointing to any passage that evidenced the alleged informational disorder. By punishing the platform without granting it the right to understand why the punishment was imposed, and without legally specifying what crime had been committed, the TSE violated article 93, item IX of the Brazilian Constitution, which determines that “all trials of the Judiciary shall be public, and all decisions shall be reasoned, under penalty of nullity.” The gravity of the Court’s mistake is aggravated by the fact that its decision may have had a significant impact on the elections. With the censorship of the Brasil Paralelo documentary, which contained information about Bolsonaro’s stabbing, the people were deprived of access to relevant data in the case. It is also worth noting that Lula’s Workers’ Party (PT), besides being known as the most corrupt party in the country, with several of its main leaders imprisoned for corruption, and for its association with socialist dictatorships in Latin America, is suspected of involvement in the brutal assassination of the former mayor of Santo André (SP), Celso Daniel. In that case, testimonies link Lula’s party to the murder and also to Brazil’s largest criminal faction, the PCC. Such information could have favored Bolsonaro in the elections, but it was suppressed by the Court, favoring the candidate Lula.
From the same subjective perspective, Minister Cármen Lúcia caused surprise by indirectly acknowledging that the lawsuit filed against the Brasil Paralelo platform constitutes a form of prior censorship. However, she argued that censorship should be applied in an “exceptionalist” way, that is, in exceptional cases, until the end of the elections. The minister noted that the Federal Supreme Court has jurisprudence that prevents any form of censorship, yet argued that measures like this can act as either a remedy or a poison, and in this particular case she agreed with the rapporteur’s decision to restrict the documentary. She stated that “censorship cannot be allowed under any circumstances in Brazil,” while at the same time declaring, “this is a specific case, since we are approaching the second round of elections.” Thus, Lúcia argued that “the inhibition should remain until October 31, one day after the end of the second round, to ensure the fairness, rigidity, and security of the electoral process and the right of the voter,” and considered that “this situation is exceptionalist, and if censorship becomes more extensive, the decision must be immediately reformulated to guarantee freedom of expression, fully respecting the Constitution and the guarantees it offers.”
Source: https://www.youtube.com/watch?v=F3dStHgbJWU&t=1359s
For more in this article, see:
- Prologue and Section 1: Literature Review
- Section 2: Data and real examples
- Section 3: Abuse of Power on Social Networks
- Section 4: Elon Musk exposes Twitter files
- Section 5: Behind the scenes of manipulation: techniques used to influence public opinion
- Section 6: Exploring solutions to the challenges presented: a critical review