Staying Alert: How ChatGPT and Other Generative AI Might Promote Science Denial and Misunderstanding
Until very recently, if you wanted to explore a controversial scientific topic like stem cell research, nuclear energy safety, or climate change, you would likely turn to Google for answers. With multiple sources at your disposal, you would choose what to read and decide which websites or authorities to trust.
However, there is now another option available: posing your question to ChatGPT or another generative artificial intelligence (AI) platform, which can provide quick and concise responses.
Unlike Google, ChatGPT does not retrieve documents from the web. Instead, it generates responses by predicting likely word sequences, based on patterns learned from a vast body of online text.
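To make the idea of "predicting likely word combinations" concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from counts in a toy corpus. The corpus and function names are invented for illustration; real systems like ChatGPT use large neural networks trained on vastly more text, but the underlying principle, choosing a statistically likely continuation rather than looking up a source, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" for our miniature language model.
corpus = (
    "the climate is changing the climate is warming "
    "the evidence is strong"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "climate" follows "the" most often here
```

Notice that the model has no notion of truth: it simply continues text in a statistically plausible way, which is why fluent-sounding output can still be wrong.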
While generative AI has the potential to enhance productivity, it also has serious flaws. It can produce misinformation, "hallucinate" by making things up, and fail to solve reasoning problems accurately. Despite these limitations, generative AI is already being used to produce news stories, academic articles, and website content, often without readers knowing the material was created by AI.
We should be concerned about how generative AI can blur the boundaries between truth and fiction for those seeking authoritative scientific information.
The negative effects of ChatGPT and other generative AI
In this new (mis)information landscape, it is crucial for every media consumer to be more vigilant in verifying the scientific accuracy of what they read. Here are some things to keep in mind about the effects and consequences of the rising use of generative AI.
1. Erosion of epistemic trust
Epistemic trust refers to the process of trusting knowledge obtained from others, including scientific and medical experts. With a growing abundance of online information, individuals with limited scientific understanding and limited access to firsthand evidence must make frequent decisions about what and whom to trust. The increased use of generative AI, and its potential for manipulation, can further erode that trust.
2. Misleading or wrong information
If the data on which AI platforms are trained contain errors or biases, these can be reflected in the results. Generating multiple answers to the same question can yield conflicting responses, and AI systems themselves acknowledge making mistakes. Identifying incorrect AI-generated content can be challenging.
3. Intentional disinformation
AI can be used to create compelling disinformation in the form of text, deepfake images, and videos. Requesting AI to generate content in the style of disinformation can produce fabricated citations and fake data. The ease with which deliberate misinformation about science can be created and spread is a significant concern.
4. Fabricated sources
ChatGPT may provide responses without citing any sources, or, when asked for sources, it may invent them outright. This poses a problem if readers assume authority based on an AI-generated list of publications without verifying their authenticity.
5. Dated knowledge
ChatGPT’s knowledge is limited to the data it was trained on, and it doesn’t know what has happened in the world after its training concluded. This limitation can result in outdated or erroneous information, especially in rapidly evolving fields. Seeking recent research or information on personal health issues requires caution.
6. Rapid advancement and poor transparency
AI systems continue to advance rapidly, but there are insufficient safeguards to ensure that generative AI becomes more accurate and reliable as time goes on. Lack of transparency about the training data and processes can hinder trust in AI-generated content.
What can you do to guard against the negative elements of ChatGPT and other generative AI?
To navigate this new information landscape effectively, consider the following steps:
1. Recognize the limitations
Understand that AI platforms, including ChatGPT, may not always provide completely accurate information. It is essential for users to discern accuracy and critically evaluate the information they receive.
2. Increase vigilance
Be aware that sharing information found through AI platforms without proper vetting can perpetuate misinformation. Take the time to be more deliberate and thoughtful when evaluating sources, especially when dealing with serious health concerns or complex scientific issues like climate change.
3. Improve fact-checking
Adopt the practice of lateral reading used by professional fact-checkers. Verify the credibility of the sources provided by AI, assess the expertise of the authors, and determine the consensus among experts. When no sources are provided or their validity is uncertain, use traditional search engines to find and evaluate reliable sources.
4. Evaluate the evidence
Examine the evidence supporting the claims made by AI-generated responses. Look for scientific consensus and consider both sides of an argument. Evaluating claims requires effort beyond relying solely on a quick query to AI.
5. Go beyond AI
While AI platforms like ChatGPT can provide initial insights, don’t rely solely on them as the ultimate authority. Follow up with a diligent search using traditional search engines to gather more information and different perspectives before drawing conclusions.
6. Assess plausibility
Judge the plausibility of a claim made by AI. If a statement seems implausible or inaccurate, such as an outrageous claim about the impact of vaccines, consider whether it makes logical sense. Make tentative judgments but remain open to revising your thinking based on evidence.
7. Promote digital literacy
Enhance your own digital literacy skills, and encourage others to do the same. Whether you're a parent, teacher, mentor, or community leader, promote digital literacy to minimize risks to health and well-being. Organizations such as the American Psychological Association and the News Literacy Project provide guidance and tools for improving digital literacy and fact-checking online information.
In the age of AI-generated content, it is crucial to equip yourself with the skills necessary to navigate this new information landscape. While it may take time and effort to find and evaluate reliable scientific information online, the investment is worthwhile.
Michael is an editor and proofreader at Bridger Jones specialising in getting genuine research published.