
Fast-Track to Extremism: How TikTok is Used as a Tool for Radicalisation



Since its launch in September 2016, TikTok has taken the world by storm. In that time, the platform has amassed 2.6 billion downloads across 155 countries, and its 689 million monthly active users watch more than 1 billion videos daily (Georgiev, 2022), predictably making the site a hunting ground for celebrities and influencers. With over 50% of users aged between 18 and 34, TikTok is a prime example of how pop culture plays a role in our daily lives: everyone's favourite artists, celebrities and social media stars are more accessible than ever when they go live or upload a brand new video.


The closer the relationship between creator and consumer, however, the more powerful TikTok's dark side becomes: extremism. Just as positive influencers build their followings by posting ever more engaging content, TikTok proves an excellent platform for extremists to share radical views in the hope of spreading their beliefs. The more extreme the content, the more interest it attracts and the more views it receives, thanks to TikTok's snowball algorithm. Each video is "its own fish hook", which, once bitten, offers the viewer more related content (O'Brien, 2022). Each new fish hook offers dozens more, aiming to ensnare the viewer in a particular feed or topic the algorithm knows they are interested in. By extension, a single extreme video can lead to a feed dominated by radical views that bombard the user simply because they liked it, opened the comments or watched it in its entirety.
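
To make the 'snowball' dynamic concrete, the toy Python sketch below models a recommendation loop in which every engagement signal (a full watch, a like, a peek at the comments) raises the weight of a topic, so the next batch of recommendations skews further towards it. This is a minimal conceptual illustration of an engagement-weighted feedback loop, not TikTok's actual system; the class, the signal names and the weights are all invented for the example.

```python
import random

# Hypothetical engagement weights: invented purely for illustration.
ENGAGEMENT_WEIGHTS = {
    "watched_fully": 3.0,
    "liked": 2.0,
    "viewed_comments": 1.5,
    "skipped": -0.5,
}

class ToyFeed:
    """A minimal model of an engagement-driven recommendation loop."""

    def __init__(self, topics):
        # Every topic starts with the same chance of being recommended.
        self.scores = {topic: 1.0 for topic in topics}

    def next_videos(self, n=5):
        # Sample topics in proportion to their accumulated scores:
        # the more a topic has been engaged with, the more often it appears.
        topics = list(self.scores)
        weights = [self.scores[t] for t in topics]
        return random.choices(topics, weights=weights, k=n)

    def record_engagement(self, topic, signal):
        # Each 'bite' on a fish hook raises that topic's weight,
        # so related content dominates subsequent batches.
        self.scores[topic] = max(0.1, self.scores[topic] + ENGAGEMENT_WEIGHTS[signal])

feed = ToyFeed(["sport", "music", "extremist"])
feed.record_engagement("extremist", "watched_fully")  # one full watch...
feed.record_engagement("extremist", "liked")          # ...and one like
print(feed.next_videos(10))  # the batch now skews heavily towards "extremist"
```

Even in this crude model, two interactions are enough to tilt the sampled feed towards a single topic, which is the essence of the fish-hook effect described above.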


This report will study how TikTok's algorithm has assisted extremists in radicalising impressionable TikTok users, and how sentiments of misogyny, racism and destructive conservatism have festered on the platform, encouraged by infamous groups and individuals. Specifically, it will consider the damage caused by the so-called 'Islamic State' and the far-right businessperson Andrew Tate; both have had their accounts banned but, through their social legacies on the platform, continue to shape the mentality of TikTok's younger and more impressionable users.



The Islamic State and Andrew Tate


The Islamic State maintains a presence across several affiliated accounts on social media. This report will examine the Islamic State of Iraq and the Levant (ISIL), also widely known as the Islamic State of Iraq and Syria (ISIS), which has had a presence on the platform in the past and has "actively used the application since 2019 as a recruitment tool" (Lopez, 2020). This scale of radicalisation is particularly concerning, considering that "users of the social media platform are more likely to be manipulated by the content they see" (Lopez, 2020) due to the engaging and persuasive nature of content creators. Combined with a user base in which over 50% are under the age of thirty, this gives the Islamic State free access to the next generation of young people, who have been scientifically identified as "impressionable", "vulnerable" and "highly responsive" (Nuwer, 2012) to exciting content: an interested, global audience to which the group can sell its utopian ambitions.


The methods used by terrorist groups to target young people also raise concerns about this demographic's vulnerability to radicalisation. In an article published by The Sun, ISIS is said to have been "caught spreading 'sing-along' propaganda about 'allegiance until death' on TikTok" (Pettit, 2019). Trivialising the gravity of their radicalisation methods by making videos fun, engaging and exciting for young viewers is what sets the fish-hook process in motion. Finding the video outrageous, one young person may share it with a friend, causing both of their feeds to show more extremist, hateful content. The fish hooks will become more common, more extreme and, consequently, more convincing. Opposing content will become less frequent, and the users will be bombarded by radical messages.


The danger of the Islamic State's attempts to radicalise young people on TikTok is not a controversial subject. Given this consensus, the question arises: why are people more concerned about TikTok as a radicalisation tool than about other social media platforms? As Pettit (2019) notes, "ISIS regularly uses propaganda spread on popular sites like Facebook, Twitter and YouTube… TikTok is the latest to fall victim to the group's social media campaigns". Moreover, because TikTok's audience skews younger than that of other sites, a greater share of its users are impressionable teenagers and, in some cases, children rather than mature adults. As a result, vulnerability to radicalisation increases, as do the success rates of those who exploit it.


Preventing the spread of radical content is of the utmost importance if TikTok is to limit the indoctrination and radicalisation of its audiences. TikTok's own policy states that "it prohibits criminal organisations from using the app, but the company says enforcement of such guidelines is a problem that all social media companies face" (Feuer, 2019), perhaps an attempt to deflect some of its culpability for its role in the radicalisation of young people. According to Feuer (2019), TikTok removed "approximately two dozen accounts responsible for posting extremist propaganda on the app" to limit the spread of hateful content, but blocking these creators has not stopped the problem. The legacy of their posts remains on the platform, as will be discussed shortly, which means the damage continues to be inflicted and the message is still spread whether or not they are actively posting.


Andrew Tate exemplifies this perfectly. Following a six-month blow-up on TikTok and Twitter, Tate was banned on 22 August 2022 for "breaking content rules" (Patterson, 2022), with many people citing his heavily misogynistic, frequently racist and dangerously conservative content. Nevertheless, despite his permanent ban from the platform, "massive fan accounts continue to repost and spread his hateful content" (Little, 2022) on a grander scale than he previously achieved. Seen from this perspective, the death of Tate's account on TikTok has turned it into a martyr for the wider pro-Tate movement. His followers, in their thousands, have amplified his message so many times that TikTok's attempts to control the spread of his content have been rendered redundant.


This inability to stem extremist content seems a recurring feature of TikTok as a platform. Even prior to the rise of Tate, the "app's recommendations algorithm [kept] pushing accounts that promote [banned far-right] groups and movements" (Binder, 2021). Regardless of TikTok's user policy and its attempts to enforce it, the developers appear unable to control content production and its spread across the platform, which is having severe and detrimental impacts on the psyche of the younger generation. Tate, who also commands 4 million followers on Instagram, is a case in point: it is no surprise that his TikTok presence has radicalised impressionable youths. His rapid spread across TikTok resulted in UK teachers receiving warnings about his influence on their students, with one teacher taking to Twitter to announce:

 "It's worth school staff being aware of the name Andrew Tate at the start of the new school year. With 11 billion views on TikTok, he is spouting dangerous misogynistic and homophobic abuse daily, and some of his views are from boys as young as 13" 

(Sharp, 2022)


Some may insist that, since the bulk of Tate's followers were on Instagram, holding TikTok accountable ignores his principal platform. However, the martyrdom of his TikTok account has spurred fans to propel his message at a greater rate than before, and warnings about Tate's social media presence have only arisen since his account ban. Producing new content is much more difficult for Tate now that he has no access to TikTok, but this seems to have had no adverse effect on the spread of his rhetoric. Instead, the recycling of his usual misogyny and homophobia, enabled by TikTok's 'fish hook' algorithm, continues to diffuse across the TikTok viewership, over half of which (as previously mentioned) is below the age of thirty, meaning children, teens and young adults are exposed to profoundly conservative sentiment before spreading it in schools, colleges and the workplace. Thanks to TikTok's account ban, Tate need not post any more content to spread his message: his fans and TikTok's algorithm are doing that for him across all 155 countries.



The Legacy Dilemma and Corporate Responsibility


How can TikTok combat an ideology that has spread beyond its control? Can a social media app be expected to tackle terrorist organisations and far-right extremists? Where does social media meet politics?


TikTok's responsibility to its community is set out in its Community Guidelines. While wishing to defend free speech, the developers emphasise that "videos that incite or provoke violence or hatred against an individual or a group of people… are not welcome on TikTok and can lead to suspensions and bans" (TikTok, 2016). By the standard of its own guidelines, TikTok has upheld its end of the bargain by banning both of the examples studied in this investigation, and its developers may well argue there is nothing more the company can do to reduce its role as a tool in the cycle of radicalisation. However, this does not account for the way its algorithm bombards users with the very content it has tried to block.


Blocking Tate or ISIS is one thing; eliminating their content is another matter entirely. The hateful content left behind by banned users creates TikTok's legacy dilemma: in trying to uphold its community guidelines, it has increased the spread of hateful sentiment and, at a basic level, its aim of reducing extremism on the platform has failed. More people are now exposed to, and falling victim to, radicalisation through TikTok's 'fish hook' algorithm than when the creators still had their own accounts. In looking for solutions, TikTok has three options:

  • Liberalise community guidelines

    • One approach TikTok's developers could take is 'if you cannot beat them, join them'. Famously, Elon Musk's buy-out of Twitter has led to the reinstatement of infamous accounts on the basis that freedom of speech would be allowed, regardless of the hateful content produced. If TikTok followed this model, extremist accounts would be allowed to continue producing content, and TikTok would be shirking all responsibility to censor hateful content from the community. However, this achieves very little. For example, the Islamic State could continue uploading graphic, violent and radicalising content, increasing its influence on the community. The same goes for Tate's rhetoric, and this option would likely breed more hate, extremism and cultural intolerance.

  • Ban fan accounts

    • By banning fan accounts, TikTok may mitigate the threat of hateful content spreading off the back of account bans and suspensions. The legacy dilemma would be contained, as the accounts circulating extremist content would be targeted with the same vigilance as the original accounts. However, this task is arduous, complex and unlikely to be entirely successful. Programs that detect explicit content are already in use across the app, and content producers and consumers easily find ways to avoid triggering them. Manually deleting such accounts is almost impossible, given their sheer number and the way an ideology outlasts any long-standing content restriction the developers can establish.

  • Change the algorithm

    • The underlying catalyst for the spread of hateful ideology is the nature of the 'fish hook' algorithm itself. While this is what makes TikTok so tailored to every user and, by extension, so enjoyable to use (which is ultimately the developers' aim), it is also what prevents TikTok's account bans from having a lasting impact. Considering how successful the algorithm is, it is clear why TikTok would be reluctant to change it and risk that efficiency. As a corporation, one would expect TikTok to prioritise its financial profits and success over securitising against extremist ideology: what difference does it make to the company, as long as the views keep coming?


TikTok's restriction of hateful content carries political implications. Some may argue that TikTok holds a corporate responsibility to reduce the radicalisation of young users, but this is a contradiction in terms: corporate responsibility is, by nature, voluntary. In a predominantly capitalist global economy, corporate autonomy from government legislation and interference is a trademark of successful international companies. TikTok is no different: if government legislation dictated that it make certain changes to the way its application functions, its efficiency and private success would plunge, and its profits would fall. If radicalisation is a governmental issue, TikTok has no obligation to help address the crisis.

Alternatively, the issue of platformed radicalisation could be viewed through the lens of security studies. The vulnerability of a population, particularly its younger generations, is a central concern of a state's security and safety. Radicalisation, extremism and the spread of hateful sentiment amongst a population destabilise the state's pursuit of total security. But how does the state manage the threat posed by social media, and by TikTok with its younger audience in particular, and ensure its population is not encountering destabilising content, all without impeding free speech?


Such a level of security would require the total cooperation, or even nationalisation, of social media outlets, followed by restrictions on civil liberties such as freedom of speech. Put simply, it would require wholesale political and democratic reform within a state. Short of risking revolt or economic collapse, the state is running out of options. It could introduce stricter legislation on permitted content, but this risks the state being criticised for limiting freedom of speech and restricting corporate independence. The state must then choose between being depicted as a Big Brother, totalitarian-leaning government or compromising national security by taking a looser stance on radicalisation and exposure to extremist content.


When considering the state's predicament, it is easy to see why it has done little to curb the spread of hateful content on TikTok, or on social media more broadly. There is not enough incentive for social media platforms, particularly multinational corporations, to compromise their success in favour of state security. Additionally, the state must try to find the balance between protection and interference, a line so fine that not addressing the situation at all seems the safer option.



Conclusion


TikTok, more than any other form of social media, has proven itself to be the perfect platform for spreading hateful content, primarily because of the efficiency of its 'fish hook' algorithm and its unparalleled access to younger audiences. Without a doubt, TikTok's contribution to the infotainment world has largely brought opportunity, enjoyment and pleasure to the public. It is so widespread that it interconnects transnational communities and has enriched many lives around the globe. However, its weaknesses are dangerous and remain largely unresolved.


Organisations like the Islamic State and individuals like Andrew Tate open their TikTok accounts with the same opportunity to express their views as every other content producer on the platform. Until they are banned, their content is largely unfiltered and accessible to all other users. From the first harmful upload, extremists' legacies will last online indefinitely, kept alive by the people who fall victim to their radicalisation. Without fundamentally changing its algorithm, TikTok is largely powerless to prevent this, which will only allow the vicious cycle of sharing hateful content to continue.


Meanwhile, the state is stuck with a dilemma of its own. It finds itself juggling public favour against securitisation from extremism, yet it is largely unable to resolve the issue. For now, the state has opted to let TikTok continue blocking sensitive content and banning explicit accounts without advocating tighter restrictions. While TikTok continues to reap the revenue this content generates, it seems the corporation and the collection of states in which it operates are all happy to look the other way, sweep the reality of active, global-scale radicalisation under the carpet, and continue allowing TikTok to be the most proficient tool for radicalising the young.



References


Binder, M., 2021. TikTok's algorithm is sending users down a far-right extremist rabbit hole. Mashable, 28 March. 

Feuer, W., 2019. TikTok removes two dozen accounts used for ISIS propaganda. CNBC.

Georgiev, D., 2022. 33+ Amazing TikTok Statistics You Should Know in 2022. [Online] Available at: https://techjury.net/blog/tiktok-statistics/#gref [Accessed 30 November 2022].

Little, O., 2022. Andrew Tate videos are widely circulating on TikTok because of fan accounts, despite the platform's promised ban. Media Matters for America, 22 August. 

Lopez, C., 2020. Rise to Peace: What Role Does TikTok Play in Radicalization?. [Online] Available at: https://www.risetopeace.org/2020/10/13/tiktok/risetopece/ [Accessed 30 November 2022].

Nuwer, R., 2012. Teenage Brains Are Like Soft, Impressionable Play-Doh. Smithsonian Magazine, 18 October.

O'Brien, J., 2022. LBC. London: Global Radio.

Pettit, H., 2019. Jihadi Jingles. The Sun, 22 October. 

Sharp, J., 2022. Andrew Tate: The social media influencer teachers are being warned about. Sky News, 28 August. 

TikTok, 2016. Community Guidelines. [Online] Available at: https://www.tiktok.com/creators/creator-portal/en-us/community-guidelines-and-safety/community-guidelines/ [Accessed 30 November 2022].
