
Bard Builds a Lie: Generative AIs and their Potential as Tools of Disinformation

DISCLAIMER: For the avoidance of any doubt, the article linked in this piece is not real. All of its methodology, findings, and citations are completely, 100%, fake. It was generated in its entirety by Bard, Google’s AI chatbot. As Bard is trained on real information, some of what it generated, such as the names of some people and organisations, may be real. However, separating the real from the fake in this case is very difficult, and so the entire article should be considered a fabrication in the absence of verified, corroborating evidence.

There has been quite a lot of debate in the media recently surrounding the development of artificial intelligence. This has become particularly noticeable in the wake of the explosion in popularity of OpenAI’s ChatGPT, a text-to-text generative chatbot that launched to the public in late 2022 (though another OpenAI development, DALL-E, made some waves in the art community beforehand). Aside from its realistic conversational skills, ChatGPT has perhaps become better known for its ability to generate large quantities of (variably) convincingly human-like writing. Users have, unsurprisingly, begun to take advantage of this in a multitude of ways, including to write appeals against fines (BBC News, 2023), emails to airline customer service (Sukheja, 2023), and even, in the case of Israeli president Isaac Herzog, to write parts of public speeches (Bunkall, 2023).

As a result, an increasing number of people are becoming concerned about the potential of AI. The Future of Life Institute, a non-profit which undertakes research on mitigating the risks associated with technological development, has released an open letter calling for a temporary pause on the development of AI more powerful than GPT-4 (OpenAI’s latest model). The letter has been signed by a number of high-profile industry professionals and outlines concerns that the development of AI has become an out-of-control race with too little regulation (Future of Life Institute, 2023). On a less existential scale, there are concerns about the potential abuses of AI capabilities in specific fields.

One such field is academia, where there are specific concerns about the use of AI in academic writing. So far, these concerns have largely been limited to its use to write student essays (for example, Sharples, 2022), but, as generative AI becomes more advanced, it seems inevitable that it will become capable of producing work of a higher academic standard. Using this to write a piece of genuine research would bring with it some difficult ethical issues; however, there also exists the potential that such a capability could be used to generate fake research, which could then be passed off as real for nefarious purposes. This would represent a novel and potentially extremely harmful form of disinformation.

‘Disinformation’, defined as deliberately inaccurate information which is disseminated to mislead (Amazeen and Bucy, 2019), has become an increasing concern around the world in recent years. Particularly, its use surrounding elections has occupied the attention of academics studying it, as it has been used to influence voters and to sow division (Allcott and Gentzkow, 2017; Mutahi, 2020). AI has now become an established tool for disinformation (Hajli et al., 2022), having been used to generate ‘fake news’, to create social media bots with fake agendas, and, more recently, to outright fake photos and footage to invent things that never happened (Taylor, 2021). If the creation of fake academic research were added to this barrage of weaponised disinformation, this could have disastrous consequences for reasoned debate, especially considering the generally high levels of trust that people place in academic sources (Lu et al., 2021).

But this is of course speculation, for the most part. Where are we now? Are generative AIs about to begin creating whole fake publications, full of academics and citations that do not exist? Can they even do a decent job of academic writing yet? To find out, I decided to see what one could do.

[Image: An excerpt from a conversation with Google's AI chatbot, Bard. The user, indicated with an 'R' icon in the top left corner next to their message, says 'Say hello, Bard!'. Bard replies with a short introductory paragraph about itself.]
Figure 1: Greetings from Bard.

The one I asked was Bard. Bard is effectively Google’s answer to ChatGPT, which Google rushed to release in early 2023 to keep pace with both ChatGPT and Microsoft’s ‘new Bing’, which is also (partly) powered by OpenAI technology. It is driven by Google’s own large language model, LaMDA, which Google first announced in 2021.

As ChatGPT is the most advanced of these three bots, I originally wanted to use it for this experiment. However, when I asked it, it gave me this response:

[Image: An excerpt from a conversation with ChatGPT. The AI begins by telling the user that it is only programmed to give factual responses and thus cannot generate deceptive information. The user asks in response if the AI cannot do this even if the user is not planning to use the generated information in a deceptive way. The AI responds that it could still not do it, as the information could still be perceived as misleading by somebody who came across it.]
Figure 2: ChatGPT would do anything for you, but it won't do that.

Fair play, OpenAI. I’d respect this even more if it weren’t for the fact that ChatGPT can and does make up information all the time (Miller et al., 2022), like when I asked it to summarise a piece of legislation for me:

[Image: An excerpt from a conversation with ChatGPT. The AI has been asked to summarise the provisions made by the Medicines and Medical Devices Act 2021 regarding the international disclosure of information about medicines and medical devices. It gives a summary, which is full of factual inaccuracies and blatantly made-up information.]
Figure 3: ChatGPT’s summary of the powers provided by the Medicines and Medical Devices Act 2021, as regards the disclosure of information to international bodies (yes, I know this is a bit random). Sections 107 and 110 do not exist in this act, as it only has 52 sections. None of what ChatGPT says here is mentioned in the act, either.

No matter, then. Bard would have to do. I wanted to ask it to write me an entirely made-up academic paper. For a topic, I (fittingly) chose the use of generative AI to create disinformation, and chose to centre my study around the German federal elections of 2021. I chose this as I felt that, if a real study on this topic were to be produced, one would expect 2023 to be the first year in which it could realistically come out.

My first attempt at doing this involved simply asking Bard to generate the whole thing at once, with a fake title, authors, citations, data, et cetera. Unfortunately, this was not particularly successful. I got back a relevant but very short summary of the topic I had suggested, which did not read much like a real study. Bard refused to go over about 600 words in length for the ‘paper’, even though it told me its responses had no word limit. I therefore decided to go bit-by-bit, to see if I could get a better result that way.

[Image: An excerpt from a conversation with Bard. The user prompts Bard to generate some fake academic-sounding titles for a study regarding the use of generative AI to create disinformation surrounding the 2021 German federal elections. Bard replies with five suggestions, which are 'The Use of Generative AIs to Create Disinformation in the 2021 German Federal Elections', 'Generative AIs and the Disinformation Landscape of the 2021 German Federal Elections', 'The Role of Generative AIs in the Spread of Disinformation during the 2021 German Federal Elections', 'Disinformation in the 2021 German Federal Elections: The Role of Generative AIs', and 'Generative AIs and the Challenge of Disinformation at the 2021 German Federal Elections'.]
Figure 4: Bard's title ideas.

Not bad. They’re all quite clearly variations on a theme, but that’s to be expected, I suppose. I liked the last one the best, so I told it to proceed with that as the title. Next, we needed a set of authors for our study. I asked Bard to generate a set of fake authors who could have written the study. It first responded with its own name and that of LaMDA, which I suppose was my fault for not specifying that I wanted human names (silly me). I then specified this, and Bard returned the very real-sounding ‘John Smith’ and ‘Jane Doe’ as suggestions. Progress, certainly, but not quite realistic enough. After some further prompting, and a request to generate a journal citation to go along with the authors, Bard eventually produced this:

[Image: Bard has generated a fake citation, which reads 'Anya Patel, Umaima Khan, and Alejandro Fernandez. 2023. ‘Generative AIs and the Challenge of Disinformation in the 2021 German Federal Elections’. The Journal of Information Technology and Politics, 20(2), 151-164.']
Figure 5: A fake citation is born.

That’s more like it. Quite frighteningly, even though I asked for fake names, all of these people are real academics. Before getting to this point, it had previously generated places of work for them, which all lined up with real life. I eventually decided not to use two of these names, as I didn’t feel comfortable attributing work (though fake) to real academics without their permission. I thus asked Bard to generate some more fake names (specifically German ones, in keeping with the theme of the piece) and institutions. I kept the name Anya Patel but changed her place of work to a fake one in America. I figured her German colleagues may have needed some assistance writing the article in English. Bard’s suggestions meant I ended up with:

Dr. Anya Patel, Department of Computer Science, Easthaven University

Dr. Viktor Keller, Department of Psychology, Universität Saphirbucht

Dr. Elena Eberhard, Department of Sociology, Universität Edelweißtal

Bard would only come up with fake place names that sounded as if they had come from a German translation of The Hobbit, so ‘Sapphire Bay’ and ‘Edelweiss Valley’ were the best I could do.

I then moved on to generating the paper itself. How did it do? Well, after some formatting by yours truly, this is the result I ended up with:

So, firstly, what did it do well? Well, it is a coherent article about the topic I wanted it to be about. Each of the sections does roughly what that kind of section should do in a research paper like this (although, of course, I specified what sections I wanted). There are a couple of particularly impressive things to note with regard to these sections. The methodology section, for one, is much better than what I had expected it to produce. The prompt I gave it for this section was this:

Okay, next, we need a methodology section for the study. It should explain the aims of the study first, which is to examine the prevalence of AI-generated content in social media posts regarding the 2021 German federal elections. It should explain the fact that the study looked at the prevalence of deepfake videos, fake articles, and AI-generated images in these posts, and whether they were used in a way that was misleading. It should explain the analytical method used to determine whether something is 'misleading' or not.

This it did, and more. You will notice that I did not specify a methodological technique to use, but Bard determined that a mixed methods analysis would be best in this case. Whether you agree with this approach or not, that it was able to do this at all is interesting at the very least. Its list of what is ‘misleading’ or not is also its own innovation and a fairly good selection. Also of note is the findings section in which, after some prompting for specificity, Bard was able to include specific examples of disinformation around this election campaign, using the names of real politicians like Friedrich Merz, who was in the running to replace Angela Merkel as CDU leader when she resigned. The recommendations at the end of the article were also sensible, and it was able to successfully summarise what it had already written.

There were, however, a number of significant limitations. Firstly, the paper is very short at just seven pages. A real paper on this topic would likely be much longer, but Bard would not generate much longer sections than the ones included. Secondly, eagle-eyed readers will notice a distinct lack of any academic citations whatsoever. Bard refused to generate these unless explicitly asked in a separate message, and generating believable citations required a lot of prompting on things like names, dates, page ranges, and the like. This lack of citations is a very conspicuous indication that this is not a real academic paper. Thirdly, although Bard did present an impressive methodology in the section pertaining to that, it did not actually follow this methodology in the findings section. This, again, indicates that this was not written by a human being.

The last thing I want to point out speaks directly to the point of this experiment. The specific examples Bard used in its findings section sound convincing, but are not real. I did not ask specifically for fake examples to be generated, but Bard made up plausible examples of AI-generated disinformation and included them in the paper. It was also able to link them to organisations which might legitimately have shared them, such as the populist Alternative for Germany (AfD) party. This last point is, I think, a small glimpse into the potential that AI holds as a tool of academic disinformation. If this were a ‘fake news’ story in its own right, and not included as part of this made-up paper, it could reasonably be shared by opponents of this party who sought to discredit them, or who simply didn’t check whether what they were sharing was accurate. This is worrying, to say the least.

Though, on balance, this paper is not a particularly convincing example of a study (though far from terrible), it is evident that AIs like Bard already have some capacity, albeit a limited one, for making up plausible and thus potentially harmful disinformation. I do not, therefore, believe that generative AIs as they currently exist are going to start hoodwinking us all tomorrow with fake research. It is not difficult to see, however, especially with the current pace of AI development, how this capacity will quickly become more advanced and thus a greater potential threat to information accuracy in the near future.

Still, there are ways to counter this. AI developers can take responsibility for the ethics of their creations and limit the extent to which they are able to produce false information. As we have seen, OpenAI has (ostensibly) already taken steps to do this. As consumers of information, we can make sure that we verify what we read and view on the Internet before we share it with others. There is also the potential to use AI as a force for good against disinformation, such as by using it to seek out false stories instead of to create them.
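As a toy illustration of what that kind of automated verification might look like, here is a minimal sketch of a citation sanity-checker. This is my own illustrative code, not a real fact-checking tool: the citation format and the red-flag rules are assumptions based on the style of citation Bard produced above, and a well-formed citation would still need to be looked up in a real bibliographic database (such as Crossref) before being trusted.

```python
import re

# A minimal sketch (illustrative only): given a citation string in the rough
# "Authors. Year. 'Title'. Journal, volume(issue), pages." shape that Bard
# produced, flag basic red flags that a human verifier should follow up on.
CITATION_PATTERN = re.compile(
    r"(?P<authors>[^.]+)\.\s+"
    r"(?P<year>\d{4})\.\s+"
    r"['‘’\"](?P<title>[^'‘’\"]+)['‘’\"]\.\s+"
    r"(?P<journal>[^,]+),\s+"
    r"(?P<volume>\d+)\((?P<issue>\d+)\),\s+"
    r"(?P<pages>\d+-\d+)\."
)

def citation_red_flags(citation: str) -> list[str]:
    """Return a list of reasons this citation needs manual verification."""
    flags = []
    match = CITATION_PATTERN.search(citation)
    if match is None:
        flags.append("citation does not match the expected format")
        return flags
    if not 1900 <= int(match.group("year")) <= 2025:
        flags.append("implausible publication year")
    start, end = (int(p) for p in match.group("pages").split("-"))
    if end <= start:
        flags.append("page range ends before it starts")
    # Being well-formed is not the same as being real: the next step would
    # be to look the title up in a bibliographic database.
    return flags
```

Note that Bard's fake citation passes every one of these surface checks, which is precisely the point: detecting fabricated references ultimately requires checking them against the real literature, not just inspecting their form.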

In other words, there is clearly still time to prepare for the future capabilities of AI. They are not as good as we think they are. Yet.

Reference list

Allcott, H. and Gentzkow, M. (2017), ‘Social Media and Fake News in the 2016 Election’, Journal of Economic Perspectives, 31(2), pp. 211-236.

Amazeen, M.A. and Bucy, E.P. (2019), ‘Conferring Resistance to Digital Disinformation: The Inoculating Influence of Procedural News Knowledge’, Journal of Broadcasting and Electronic Media, 63(3), pp. 415-432.

BBC News (2023), ‘York student uses AI chatbot to get parking fine revoked’, 1 April, available at: [accessed 04/04/2023].

Bunkall, A. (2023), ‘Israel president uses ChatGPT artificial intelligence to write part of major speech’, Sky News, 2 February, available at: [accessed 04/04/2023].

Future of Life Institute (2023), ‘Pause Giant AI Experiments: An Open Letter’, available at: [accessed 04/04/2023].

Hajli, N., Saeed, U., Tajvidi, M., and Shirazi, F. (2022), ‘Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence’, British Journal of Management, 33(3), pp. 1238-1253.

Lu, L., Liu, J., Yuan, Y.C., Burns, K.S., Lu, E., and Li, D. (2021), ‘Source Trust and COVID-19 Information Sharing: The Mediating Roles of Emotions and Beliefs About Sharing’, Health Education and Behaviour, 48(2), pp. 132-139.

Miller, C.C., Playford, A., Buchanan, L., and Krolik, A. (2022), ‘Did a Fourth Grader Write This? Or the New Chatbot?’, The New York Times [online], 26 December, available at: [accessed 04/04/2023].

Mutahi, P. (2020), ‘Fake news and the 2017 Kenyan elections’, Communicatio [online], 46(4), n.p., available at: [accessed 04/04/2023].

Sharples, M. (2022), ‘Automated Essay Writing: An AIED Opinion’, International Journal of Artificial Intelligence in Education, 32, pp. 1119-1126.

Sukheja, B. (2023), ‘Woman Asks ChatGPT To Write “Polite And Firm” Email To Airline After Flight Delay. See result’, NDTV, 19 February, available at: [accessed 04/04/2023].

Taylor, B.C. (2021), ‘Defending the state from digital deceit: the reflexive securitisation of deepfake’, Critical Studies in Media Communication, 38(1), pp. 1-17.


