The internet has become the agora[1] of modern times. Where in the old days people met at specific physical locations to discuss political matters and other news, usually in a central square in the middle of the city or in amphitheatres at the periphery, today we do so online, on social media platforms.
The question is not whether we are discussing politics over the internet (we obviously are), but whether we are doing it in the right way.[2] Most official city portals rely heavily on "insecure" social media platforms like Facebook, Twitter, YouTube, and Flickr to engage with the public, and very few incorporate more complex participation tools such as forums, chat rooms, e-voting, etc., that could otherwise allow for more direct and consequential interaction.
Mass protests in Tunisia, birthplace of the Arab Spring, culminated in the exile of former President Zine El Abidine Ben Ali to Saudi Arabia. Nine out of ten Tunisians used Facebook to organize the protests.
Most of the mass protests that have erupted since the early 2010s[3],[4], from Hong Kong[5] to Algeria and Lebanon, from riots in Portland, USA, to anti-government protests over COVID-19 measures around the world[6], were all organized online with laptops and smartphones, inspired by hashtags and coordinated through social networks.
Facebook pages were created to raise awareness of crimes against humanity, such as the police brutality that led to the death of Khaled Mohamed Saeed and helped spark the Egyptian Revolution, or the death of George Floyd that galvanized the Black Lives Matter movement in the USA. The 2021 invasion of the U.S. Capitol by an angry mob is yet another example of the power of these platforms.[7] In response, public authorities have on many occasions blocked or temporarily disrupted access to social media platforms. In 2018 alone there were 128 documented mandated internet shutdowns.
In 2021, the U.S. Capitol, where the United States Congress convenes, was overrun by an angry mob that questioned the legitimacy of the presidential election results. Prior to the event, there were more than a million mentions on social media of invading the Capitol, notably on "alt-tech" platforms such as the news aggregator Patriots.win, the chat app Telegram, and the microblogging sites Gab and Parler.
The issue with most internet platforms is that anyone can claim to be whoever they want, and there are few mechanisms in place to verify the identity of the person behind the computer. We can name ourselves by entering a nickname and create as many accounts as we want, because there is usually nothing that binds a digital identity to a real person. This is both good and bad. On one hand, it allows for the free flow of information that could otherwise be more easily repressed. On the other, if there is no mechanism for accountability or traceability, i.e. if users cannot attest to the authenticity of their publications, useful tools such as online voting become unavailable and the publication of harmful content[8] is encouraged because, in general, there are no penalties.
What happens in a world without a digital identity?
The web goes dark, in a sense.[9] It becomes an environment with weak moderation. This shouldn't come as a surprise to anyone; after all, it's the internet we're talking about. What may come as a surprise is that social media networks like Facebook are classified as highly insecure.
Prostitution is rampant on the platform.[10] Bots[11] that spread fake news[12] are also out of control.[13] Twitter is no different![14] These social bots can stage human-like conversations and often go unnoticed by less attentive users.[15] Their ability to influence human behaviour should not be underestimated.
Research[16] shows that bots accounted for about 11% of all hate tweets analysed in online conversations about controversial policies related to the Israel/Palestine and Yemen conflicts. They can influence our thinking and define narratives![17],[18] At the same time, over one million users access Facebook completely anonymously.[19]
None of these points should be regarded as cheap criticism. It is, by general consensus, difficult to monitor content on the internet.[20] But awareness of these facts, and of the characteristics that define the platforms we use, is important. The claim isn't that content on the internet is, in general, impossible to moderate; it can be done.
The task, however, is immensely difficult and often requires state powers and resources to be effective,[21] at least under the current paradigm. Artificial intelligence algorithms are improving by the day and can make the task easier.[22],[23] But just as AI techniques become more advanced, bots become better at manipulating humans as well. There has to be a better way to be on the internet.
Those who are deep into Web3 have probably, at some point during their incursion, come across the so-called blockchain trilemma. Coined by the creator of Ethereum, it summarizes the fundamental problem of blockchain, the underlying architecture of Web3, as the need to reconcile three apparently mutually exclusive properties: scalability, security, and decentralization.
Parallel to this trilemma, another of equal importance may be lurking, one concerned not with the underlying software and hardware architecture but with the way we as humans use the internet and how we can take advantage of it without falling into traps and pitfalls. Essentially, to stop the world wide web from becoming the world wild west, we must reconcile another set of 'lemmas': moderation, identity, and privacy.
I think we can all agree that we need some form of content moderation online. To achieve this end, some form of identity needs to be established. But we also want to keep a certain level of pseudonymity for when we want to publish under an alias rather than a real name, and for this a special type of identity is necessary. Just as proof-of-stake and sharding were proposed as part of the solution to the blockchain trilemma, hopefully there is a solution to this other triad.
Casting light into the darkness
The middle ground, where responsibility can be attributed to users while at the same time preserving their anonymity, is this: proper identity management[24] (also known as IDM). The idea behind IDM is to clearly separate identity attestation from user account registration and later authentication (login), using intelligent cryptographic tricks and a separation of roles between the participating entities.
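To make that separation of roles concrete, here is a minimal sketch, assuming the third-party Python cryptography package and Ed25519 signatures; the function and alias names (attest_pseudonym, citizen_42) are hypothetical, not part of any real IDM standard. The authority sees a real-world document once and signs only the pseudonym, while the platform verifies that signature without ever learning who is behind the alias.

```python
# Minimal sketch of IDM-style role separation (illustrative only).
# Assumes: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- Identity authority: checks a real-world document, signs only the alias ---
authority_key = ed25519.Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

def attest_pseudonym(id_document: bytes, pseudonym: str) -> bytes:
    # Real-world verification (passport, civil registry, ...) happens off-line
    # here; the signature that leaves the authority covers ONLY the pseudonym.
    return authority_key.sign(pseudonym.encode())

# --- User: obtains a credential for a self-chosen alias ---
user_key = ed25519.Ed25519PrivateKey.generate()
pseudonym = "citizen_42"                      # hypothetical alias
credential = attest_pseudonym(b"<passport scan>", pseudonym)

# --- Platform: accepts the registration without seeing the real identity ---
def register(pseudonym: str, credential: bytes,
             user_pub: ed25519.Ed25519PublicKey) -> bool:
    authority_pub.verify(credential, pseudonym.encode())  # raises if forged
    # Store (pseudonym, user_pub); later logins are challenge-response
    # signatures made with user_key, so each alias stays accountable.
    return True

print(register(pseudonym, credential, user_key.public_key()))
```

A real deployment would go further, for example using blind signatures or selective disclosure so that even the authority cannot link alias to person after issuance, but the division of labour stays the same.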
Vulnerabilities arising from poor implementations of IDM are a well-known problem and a world unto themselves.[25] Since we don't expect the reader to be a cybersecurity expert, we'll spare the details. Suffice it to say that yes, it is possible to implement IDM securely. Done correctly, it allows users to navigate in safe environments so long as a trusted authority, in charge of managing what the cybersecurity community calls digital certificates, is not corrupted (HTTPS being the most well-known example). In fact, we use this kind of procedure in all public institutions: whether we are talking about the issuance of fiat money, the counting of votes during elections, or the issuance of travel passports and border control,[26] some level of identity control is always necessary.
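The HTTPS case can be made tangible with a few lines from the Python standard library: the operating system ships with a list of trusted certificate authorities, and the TLS stack rejects any server whose certificate does not chain back to one of them. The host name below is just an example.

```python
# Illustration of the trusted-authority model behind HTTPS (standard library only).
import socket, ssl

host = "example.org"                          # any HTTPS site works here
ctx = ssl.create_default_context()            # loads the system's trusted CAs
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()              # already validated at this point
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Issued by:", dict(item[0] for item in cert["issuer"]))
```

If the certificate were forged, or issued by an authority the system does not trust, the wrap_socket call would raise an error instead of printing anything, which is exactly the dependence on a trusted authority described above.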
It is true that this architecture depends on the good behaviour of such trusted institutions, and breaches, although uncommon, do exist. But the task entrusted to them is in fact rather simple: to attest to the authenticity of an identity by publishing a certificate, without disclosing sensitive data unnecessarily.
Of course, there are ways to mitigate flaws in the design of these mechanisms in more decentralized settings where trust is shared between multiple entities. There are also other, more exotic ideas in the works, such as those making use of pseudonymous parties, validation ceremonies if you will, coupled with special forms of social gamification. We shall discuss this question in more detail in an article of its own, as it merits a deeper look.
Putting aside issues of trust and details about IDM, in order to understand the links between cyberpolitics, online voting, and accountability, or, put differently, between moderation, identity, and privacy, it may be helpful to clarify how content is usually presented and moderated on conventional social media networks, from preventing the spread and encouragement of violence to disinformation and censorship,[27] how these phenomena arise, and what can be done to mitigate harm.
Evidently, any platform designed to host political discussions (which is to say, any platform where people can discuss freely) should not rely on fact-checkers or any other form of moderation delegated to entities outside the community itself, for this is doomed to long-term failure, for obvious reasons![28]
A better strategy might be to empower the users of the community to self-moderate.[29],[30] The reason why we should trust an authority to manage the attestation of our identities, but not to moderate online content, has to be stated clearly. First, it detaches identity from moderation, which would otherwise concentrate too much power if both were in the hands of the same authorities.
Secondly, it is easier to monitor a register of identities, credentials, and some proof of identity such as a birth certificate than it is to monitor all the content published online every day. Hence it makes sense to delegate one process and not the other. Interestingly, the collective intelligence of a community is usually more powerful than the intelligence of the individual,[31],[32] although this should be taken with a pinch of salt, as the collective behaviour of masses is, like all human action, not an exact science. Such self-moderation could range from a simple 'ignore' function to a complex ranking system where people upvote or downvote publications directly, as sketched below.
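As a toy illustration of such a ranking system, the sketch below gives each verified pseudonym exactly one vote per publication and hides, rather than deletes, anything that falls below a threshold; the class, names, and threshold are hypothetical, not taken from any existing platform.

```python
# Toy community self-moderation: one verified pseudonym, one vote per post.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    votes: dict = field(default_factory=dict)   # pseudonym -> +1 or -1

    def vote(self, pseudonym: str, value: int) -> None:
        # Re-voting overwrites the previous vote, so sock puppets gain nothing
        # once every pseudonym is backed by a single attested identity.
        self.votes[pseudonym] = 1 if value > 0 else -1

    def score(self) -> int:
        return sum(self.votes.values())

HIDE_BELOW = -2                                  # hypothetical threshold

feed = [Post("citizen_42", "Budget proposal for the new tram line"),
        Post("bot_1337", "Totally real breaking news!!!")]
feed[0].vote("alice", +1); feed[0].vote("bob", +1)
feed[1].vote("alice", -1); feed[1].vote("bob", -1); feed[1].vote("carol", -1)

# Low-scoring posts are hidden from the default feed ("ignored"), not erased.
visible = [p.text for p in sorted(feed, key=Post.score, reverse=True)
           if p.score() > HIDE_BELOW]
print(visible)   # ['Budget proposal for the new tram line']
```

The point is not the scoring formula, which each community would tune for itself, but that one attested identity per pseudonym makes one-person-one-vote moderation enforceable; without it, a bot farm could simply flood the tally.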
A recent online petition sent to UK authorities called for a safer social media environment by implementing precisely the kind of digital identity we have been discussing. It was promptly answered in the negative.[33] The response argued that providing digital identities by means of digital certificates and implementing them on social media platforms, even if coupled with pseudonymous aliases, could allegedly increase cybersecurity vulnerabilities and be counter-productive.
But is the alternative, our current system, any better as it stands? Furthermore, does this reluctance of public institutions to provide a digital identity service have any parallel with the general reluctance to accept cryptocurrencies?[34] Are our elected officials qualified and honest enough to decide on such technical matters?
Buyer's remorse?
Even more recently, a $44,000,000,000 deal was put on hold precisely over these issues.[35] Elon Musk, the world’s richest man, offered 44 billion dollars to buy Twitter on the assumption that at most 5% of its user base was fake (composed of bots or of users with multiple accounts).
However, the deal was put on hold when Musk estimated that the real number was at least 20%, or four times higher. Regardless of the exact figure, this is a hot topic in today’s world and will surely not go away until clear positions on these matters are taken. We should not expect all platforms to be absolutely cyber-secure in the sense we have been discussing; it is even desirable that some are not. But those where political discussions are taken seriously should be.
A publication by the European Commission commenting on the Ukraine-Russia conflict, as seen when accessing www.pleroma.pt, a Portuguese instance of the federated ActivityPub protocol, a Web3 tool. On this platform, no strong IDM protection is used.
What I would like the reader to take away from this article is the understanding that politics through digital tools is not merely a cyber dream of the future: it is already happening, and that is why we urgently need a safer cyberspace. Security here doesn't mean better ways to keep your password safe. It goes way beyond that. It means, among other things, asking users to be moderators as well. It means relying on third parties to issue and protect our credentials, and continuously supervising the behaviour of those authorities. The alternative to creating this safe environment is bots and fact-checkers taking over our political discourse, potentially deceiving the public with false narratives.
And that, I think, we surely don't want.
Authored by @
Links to Sources
[1] "Agora" entry on Wikipedia,
[2] "Democratic Principles for an Open Internet" by the Open Internet for Democracy initiative,
[3] "Arab Spring" entry on Wikipedia,
[4] "Facebook and Twitter key to Arab Spring uprisings: report" by Carol Huang, The National, UAE, 2011,
[5] "2019–2020 Hong Kong protests" entry on Wikipedia,
[6] "Protests against responses to the COVID-19 pandemic" entry on Wikipedia,
[7] "2021 United States Capitol attack" entry on Wikipedia,
[8] "We’re underestimating the role of social media in mass shootings, and it’s time to change" on TNW, 2019,
[9] "Dark web" entry on Wikipedia,
[10] "Prostitution on Facebook" entry on the Without Bullshit blog, 2021,
[11] "Eugene Goostman" entry on Wikipedia,
[12] "Social bots – the technology behind fake news" entry on IONOS website, 2018,
[13] "Facebook has shut down 5.4 billion fake accounts this year" by Brian Fung and Ahiza Garcia, CNN Business, 2019,
[14] "5 things to know about bots on Twitter" by Stefan Wojcik for Pew Research Center, USA, 2018,
[15] "Interview with Eugene Goostman, the Fake Kid Who Passed the Turing Test" by Doug Aamoth for Time, 2014,
[16] "Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media" by Nuha Albadi et al., University of Colorado Boulder, USA, 2019,
[17] "Bots, social networks and politics in Brazil" by Diretoria de Análise de Políticas Públicas, 2017,
[18] "How bots are influencing politics and society" on Thomson Reuters Foundation, YouTube, 2020, //www.youtube.com/watch?v=Xl0TrA8oXXo
[19] "A million people now access Facebook over Tor every month" by Kavvitaa S. Iyer on Techworm, 2016,
[20] "The Trauma Floor: The secret lives of Facebook moderators in America" by Casey Newton on The Verge, 2019,
[21] "Police make 61 arrests in global crackdown on dark web" by Warwick Ashford on TechTarget, 2019,
[22] "Porn: you know it when you see it, but can a computer?" by Bijan Stephen on The Verge, 2019,
[23] "Violence Detection Using Spatiotemporal Features with 3D Convolutional Neural Network" by Fath U Min Ullah et al., Sejong University, Seoul, Republic of Korea, 2019,
[24] "Identity management" entry on Wikipedia,
[25] "Sybil attack" entry on Wikipedia,
[26] "What Is a Certificate Authority (CA) and What Do They Do?" on Hashed Out, 2020,
[27] "Who Fact Checks the Fact Checkers? A Report on Media Censorship" by Phillip W. Magness et Ethan Yang, American Institute for Economic Research, USA, 2021,
[28] "Red tape" entry on Wikipedia,
[29] "Decentralised content moderation" by Martin Kleppmann on personal blog, 2021,
[30] "Designing decentralized moderation" by Jay Graber on Medium, 2021,
[31] "Crowdsourcing civility: A natural experiment examining the effects of distributed moderation in online forums" by Cliff Lampe et al., USA, 2014,
[32] "Why Collective Intelligence Beats Individual Intelligence" by Pawel Brodzinski on author's blog, 2018,
[33] "Make verified ID a requirement for opening a social media account" e-petition on Petitions, UK Government and Parliament, 2022,
[34] "Is bitcoin legal? Cryptocurrency regulations around the world" by Tim Falk, Finder, USA,
[35] "Elon Musk says Twitter deal 'cannot move forward' until he has clarity on fake account numbers" by Sam Shead, CNBC, 2022,