My original aim was to write a wholesome, appreciative take on contemporary R&B’s politically charged culture. However, preliminary research put me on a much different track.
A quick search for a (reliable) statistic on multimedia consumption throughout the pandemic brought me to a data report published by Nielsen Holdings PLC. As with any statistic, I thought it necessary to know the bias of the source, so, naturally, I began a bit of Google research.
Nielsen Holdings PLC, it turns out, is a highly influential information, data and market measurement firm. I sifted through the firm's Wikipedia article and quickly found all of the usual corporate capitalist markers—nothing too shocking.
The real intrigue came from Wikipedia itself rather than the content I had set out to find. To my surprise, a few clicks on related Wikipedia hyperlinks brought me to a categorical index of C-Class company articles within the collaboration area for a group of editors dedicated to WikiProject Companies.
WikiProject Companies is an editing community dedicated entirely to fact-checking contributions, particularly as they bear on the bias of ‘company’ articles within the context of systemic corporate capitalism. This community’s existence demonstrates a consistent systemic, corporate capitalist bias in a space my naivety had deemed impenetrable: a free, public internet encyclopaedia.
I know it’s not often posed this way, but research can be a truly intimate moment in our lives. I say this because genuine moments of reflection—when we confront our biases and seek answers to our questions—can occasion a change of heart on a controversial topic or, more generally, result in personal growth. Therefore, I think it is significant that “we are growing comfortable with incorporating Wikipedia into our research practices and, consequently, allowing Wikipedia to hold a more visible role with shaping public knowledge” (Cummings, 2020: 148). Regardless of the solemnity of the topic at hand, Wikipedia’s existence as an embedded element of corporate capitalism’s epistemic ‘crowd’ makes it necessary for researchers—whether they be internet-surfing novices or peer-reviewing academics—to gain a comprehensive understanding of the Wikipedia phenomenon.
It’s 2021, so most everyone’s familiar with Wikipedia: it is the home of pseudo-intellectual rabbit holes, the legitimate authority for uncovering the name of our celebrity crush’s spouse and, most certainly, a source of endless irritation for those marking (poorly researched) papers. However, it is also clear that Wikipedia holds a formidable amount of epistemic capital as a well-known ‘knower.’ By epistemic capital I mean the notion that a culturally recognised, reputable knowledge source holds societal power by shaping the areas of knowledge deemed important enough to codify and, accordingly, by determining which ideological ‘facts’ are deemed true within the published codification.
In all my years of using Wikipedia as a preliminary source, I had never questioned or thought to understand the elemental being and intellectual purpose of the encyclopaedia through its ontological and epistemological foundations. Put simply, ontology is the study of what is, while epistemology is the study of the cognitive process of how we know there is what there is. Importantly, these latent assumptions have the ability to manifest as empirical biases. Wikipedia actualises empirical bias through the establishment of ‘truth’ in specific spatial communities of knowledge for editors, colloquially referred to as ‘WikiProjects.’
WikiProjects can be found—although arguably not intuitively—through hyperlinks where the ‘membered’ editors have published collectively agreed-upon objectives. Simply put, Wikipedia is a storage facility for small sub-communities with varying objectives (scientific positivism), made up of individual editors with still different objectives (normative, moralist goals), some of whom programme ‘bots’ to continuously actualise said personal objectives.
My focus in this review is on two particular segments of constituent ‘players’ at Wikipedia: elected arbitrators and ‘bots.’ To clarify, software robots, or simply ‘bots,’ are “built to mend errors, add links to other pages, and perform other basic housekeeping tasks” (Sample, 2017). Arbitrators and bots are two distinct forms of governance within Wikipedia’s politicised culture.
Arbitrators are elected positions held by experienced Wikipedia editors on ArbCom (Wiki jargon for ‘arbitration committee’). Positions open up annually (depending on how many terms run out in a given year), and interested editors self-nominate to run in the intra-community election. Through ArbCom, Wikipedia has “created what the Enlightenment philosophers only dreamed of—its own body of common law, common sense, and common knowledge” (Orlowitz, 2020: 125).
However, alongside this impressive structural feat, Wikipedia’s intra-community elections for ArbCom raise ethical concerns. Chiefly, as a non-sovereign entity, does Wikipedia have the right to run its own elections? Wikipedia is the internet’s fifth most popular website, receiving traffic from IP addresses globally (Konieczny, 2017). Why is a website with so much epistemic capital allowed to run its own elections? Why is it accepted that ArbCom should hold zero accountability to the outside, global readers who rely on the unbiased accuracy of Wikipedia articles?
Wikipedia created ArbCom to enact an incarnation of the ‘rule of law’ within the site, which has resulted in “tens of thousands” of editors being “blocked from editing the site, or subject to other restrictions (such as bans from editing specific articles or from interacting with specific editors)” for transgressing community guidelines (Konieczny, 2017: 756).
The decision-making capital held by elected arbitrators is far from minuscule. Researchers should be concerned about the possibility of arbitrary, biased rulings carried out by ArbCom, especially as Wikipedia is a self-proclaimed and commonly accepted integral part of ‘the free and open source software and open content movement’ (Ford, 2020: 191).
ArbCom was created so that experienced members of the Wikipedia community could use their expertise to ‘rule’ on controversial issues. However, the insular nature of ArbCom is precisely what undermines its efficacy. Indeed, Wikipedia is the producer of a public good, knowledge, so it would follow that public input rather than oligarchic ruling is preferable (Konieczny, 2017). In this oligarchic space, arbitrators consistently behave in an egoistic manner, “forming ties of various strength, yet clearly retaining behavioral independence, operating as distinct, identifiable individuals” within a political architecture (Konieczny, 2017: 759).
The judicial nature of ArbCom results in a dilemma precisely because it is an organic oligarchy: each elected member carries certain epistemological and ontological biases and makes decisions accordingly. For instance, because Wikipedia is a globalised phenomenon, “two demographic variables: nationality and experience” may inform the biases of elected members and thus influence their decision-making, with materialised consequences for the encyclopaedia’s global audience (Konieczny, 2017: 759).
Now, on to the bots. Bots are incredibly influential when contextualised: human “editors restlessly point out missing references and correct poorly written phrases” and bots do so at an even more consistent and frequent rate (Jemielniak, 2020: 153-154). In fact, an Oxford study tracked one of the most contentious bot battles, in which Xqbot and Darknessbot “fought over 3,629 different articles between 2009 and 2010” and, during that period, “Xqbot undid more than 2,000 edits made by Darknessbot, with Darknessbot retaliating by undoing more than 1,700 of Xqbot’s changes” (Sample, 2017). I mean, seriously: who knew bots could be so petty?
When first considered, bot battles are certainly a bizarre occurrence. Yet once we rationalise the seeming absurdity, a greater issue emerges: why exactly are strings of 0s and 1s fighting over semantics? And why are supposedly unbiased bots disputing content? Aren’t they supposed to be objective?
The answer is common sense, but unsettling nonetheless: bots are programmed code written by editors with latent ontological and epistemological assumptions. Now, I certainly don’t mean to characterise all editors who have programmed bots as malicious actors with ideological end-goals. In fact, I believe it to be largely the opposite: creators hope their bots will promptly find inconsistencies within contentious articles, correct them, and further the epistemic mission for axiomatic encyclopaedic articles.
Instead, the problem arises out of the editors’ presuppositions, because even an editor with the utmost intellectual integrity carries latent assumptions about the nature of reality. Thus, a given bot carries with it a bias that does “not originate in these systems, [but] merely reflect[s] prevalent biases and prejudices, inequalities, and power structures” of the editor’s ontology, and thus, “once in operation they routinely amplify” the aforementioned inscriptions (Katzenbach, 2019: 7).
For example, consider the earlier-referenced bot battle between Xqbot and Darknessbot. Fundamentally, Xqbot and Darknessbot are separate entities of code programmed by editors with different ontological and epistemological assumptions, or biases. The bots actualise these different biases by battling over their understandings of ‘truth’ (i.e., preferred semantics) within contentious Wikipedia articles. It may not be immediately obvious, but this battle for ‘semantic correctness’ is politicised because it is ideological in nature. Ultimately, because of the pervasive nature of Wikipedia’s epistemic capital, the bot that gets the final edit tangibly shapes our culture’s understanding of which biases, or ‘ideology,’ constitute ‘truth.’
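The revert-war dynamic is easy to see in miniature. The toy simulation below is emphatically not Wikipedia's actual bot framework (the bot names and the spelling dispute are hypothetical stand-ins); it simply shows how two rule-following programs, each faithfully encoding its author's assumption about what is 'correct,' will revert one another indefinitely, with the last writer defining the article's 'truth.'

```python
# Toy sketch of a bot revert war: each bot "corrects" the article toward
# its own editor's encoded assumption, undoing the other's work.
# All names here are hypothetical; this is not Wikipedia's bot API.

def make_bot(name, preferred):
    """Return a bot that rewrites the article to its preferred phrasing."""
    def edit(article):
        if article["text"] != preferred:
            article["text"] = preferred       # revert to this bot's 'truth'
            article["last_editor"] = name
            article["revisions"] += 1
            return True                       # an edit (revert) was made
        return False                          # article already 'correct'
    return edit

article = {"text": "colour", "last_editor": "human", "revisions": 0}

# Two bots encode conflicting latent assumptions about the same word.
bot_a = make_bot("BotA", "color")
bot_b = make_bot("BotB", "colour")

# Each patrol pass, both bots check the article; by its own rules neither
# is ever wrong, so every pass produces two mutual reverts.
for _ in range(5):
    bot_a(article)
    bot_b(article)

print(article["revisions"])    # 10 reverts after only 5 passes
print(article["last_editor"])  # BotB: whoever edits last defines 'truth'
```

Note that neither bot contains a bug: each behaves exactly as designed, which is precisely the point about latent bias being amplified "once in operation."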
The inner structure of Wikipedia is fascinating because its pragmatic and algorithmic “governance allows to account for the multiplicity of social ordering with regard to actors, mechanisms, structures, degrees of institutionalisation and distribution of authority” (Katzenbach, 2019: 2). Thus, Wikipedia’s architecture makes sense after careful consideration. Indeed, even an internet encyclopaedia needs hierarchical structure to function systematically and produce tangible results.
Nonetheless, it is surprising to learn how politicised Wikipedia culture is: Who would have thought that Wikipedia held intra-community elections? And who would have thought that bots would battle each other over encyclopaedic contributions?
Wikipedia editors have espoused similar epiphanies upon discovering the politicised, hierarchical structure of Wikipedia. Jemielniak writes, “I noticed that what I had thought was an entirely spontaneous and disorganized conversation was, in fact, a community of many rules and norms” (2020: 152). Wikipedia’s politicised culture is concerning because the platform has failed to create a readily available space that discloses its contributors’ ontological and epistemological foundations.
In theory, Wikipedia operates ontologically within an extensional approach, meaning its semantics “normally assume an unbiased population of instance data the distribution of which can hint the intended meaning of a concept” (Hu, 2009: 446). This means that Wikipedia’s articles are written on the presumption that ‘fact-checked’ contributions are devoid of bias. Thus, community-reviewed articles are largely understood to be axiomatic sources of information by both the Wikipedia community and its audience.
In practice, the Wikipedia community has collectively constructed an identity for the encyclopaedia that implies an objectivist ontology. Therefore, Wikipedia’s chosen semantics imply a conception of the world in which an objective reality exists independent of the observer. The problem is that this ontological presupposition of the Wikipedia community is immediately nullified and contradicted by the editing wars that erupt between the encyclopaedia’s contributors, humans and bots alike.
Why is an encyclopaedia that has itself constructed sub-communities with different intellectual objectives using semantics in a way that implies ontological objectivism? Personally, I believe that regardless of whether Wikipedia’s omission is purposeful or negligent, it reads as either dirty-handed or irresponsible, respectively.
If encyclopaedic knowledge is indeed ideological (as I have argued), Wikipedia must readily admit this. Otherwise, Wikipedia is knowingly contributing to an untruthful intellectual world. While it is true that some may find this interpretation overly critical, I would argue that it is a justifiable concern for researchers interested in qualified—rather than arbitrary—axioms.
Wikipedia’s scope has grown exponentially since its inception, and the encyclopaedia commands vast resources when one considers the multitude of tasks both individual editors and (more importantly) bots take on. Therefore, while it is understandable that Wikipedia is “torn among the good-faith collaboration and pro-social behaviors and the inevitable political struggles, tensions, and reflections of social biases,” it is deeply hypocritical that the Wikipedia community prides itself on the production of unbiased ideas in an ontologically objective reality (Jemielniak, 2020: 155).
Wikipedia articles attempt to be as integrity-bound as possible, according to the biases of their respective contributors. This indicates an encyclopaedia that exists within an ontologically anti-positivist or anti-foundationalist reality informed by constructivism or interpretivism. So, for an encyclopaedic community founded on a belief in abstract truth, Wikipedia should qualify the latent ontological and epistemological assumptions of its editors and articles. Or, at the very least, it should make the above-discussed information more accessible and commonplace knowledge. So, the next time you look to Wikipedia for juicy details on a famed boyband, remember that you are happening upon a fairly important political battlefront for ideological hegemony in a seemingly unlikely place: a free, public internet encyclopaedia.
Cummings, R. E. (2020). The First Twenty Years of Teaching with Wikipedia: From Faculty Enemy to Faculty Enabler. In J. Reagle, & J. Koerner (Eds.), Wikipedia @ 20: Stories of an Incomplete Revolution (pp. 141-149). Cambridge, Massachusetts: The MIT Press.
Ford, H. (2020). Rise of the Underdog. In J. Reagle, & J. Koerner (Eds.), Wikipedia @ 20: Stories of an Incomplete Revolution (pp. 189-201). Cambridge, Massachusetts: The MIT Press.
Hu, B. (2009). WiKi’mantics: Interpreting Ontologies with WikipediA. Knowledge and Information Systems, 445-472.
Orlowitz, J. (2020). How Wikipedia Drove Professors Crazy, Made Me Sane, and Almost Saved the Internet. In J. Reagle, & J. Koerner (Eds.), Wikipedia @ 20: Stories of an Incomplete Revolution (pp. 125-139). Cambridge, Massachusetts: The MIT Press.
Jemielniak, D. (2020). Wikipedia as a Role-Playing Game, or Why Some Academics Do Not Like Wikipedia. In J. Reagle, & J. Koerner (Eds.), Wikipedia @ 20: Stories of an Incomplete Revolution (pp. 151-157). Cambridge, Massachusetts: The MIT Press.
Katzenbach, C. (2019). Algorithmic Governance. Internet Policy Review.
Konieczny, P. (2017). Decision making in the self-evolved collegiate court: Wikipedia's Arbitration Committee and its implications for self-governance and judiciary in cyberspace. International Sociology, 32(6), 755-774.
Sample, I. (2017). Study reveals bot-on-bot editing wars raging on Wikipedia's pages. The Guardian (Guardian News & Media Limited). Retrieved May 2021, from https://www.theguardian.com/technology/2017/feb/23/wikipedia-bot-editing-war-study