Understanding AI Hallucinations
Published On
August 6, 2024
Author
James Smith
Category
Content
Have you ever heard someone make a statement or argument so convincing that you assume it must be true? But then… you later find out… it’s not?
It’s not really lying, because the person genuinely believes what they are telling you. They are just wrong.
Well, AI can do the same thing — and it is called AI hallucination.
AI hallucinations occur when AI tools generate information that is factually incorrect but presented as truth.
Consider the 'Emu War'. This real event involved Australian soldiers and an overpopulation of emus in Western Australia in 1932. However, if prompted incorrectly, AI might fabricate details around this event — it might present you with a tale involving emus, soldiers and, perhaps, a medieval knight.
While imaginative, this can distort the truth. This presents obvious issues for students (and, well, anyone) relying on ChatGPT as a single source of truth.
AI hallucinations are not random occurrences; they have identifiable causes.
Understanding these causes, and making our own adjustments where possible, can help limit the impact of these hallucinations.
One cause of AI hallucinations is insufficient or low-quality training data, which can produce unreliable or biased results and degrade performance. While data curation is a concern for the OpenAIs of this world, understanding this limitation helps us explain and emphasise the importance of source verification and cross-referencing in research.
Overfitting is another cause of AI hallucination. It occurs when a model is so closely tailored to its training data that it loses the ability to generalise.
For example, if a model is trained on a dataset that predominantly contains photos of dogs in the outdoors, it may learn to associate the presence of grass with the image of a dog. As a result, when presented with a new image of a dog in a different context, such as inside a room, the model may be unable to identify the dog because of the lack of grass.
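To see what overfitting looks like in practice, here is a minimal sketch in Python (using numpy, with a toy, made-up dataset): a high-degree polynomial fits its five training points almost perfectly but typically does worse than a simple straight line on new data drawn from the same trend. The numbers are illustrative only.

```python
import numpy as np

# Toy training data: five points from a roughly linear trend, plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 5)
y_train = 2 * x_train + rng.normal(0, 0.1, size=5)

# A degree-4 polynomial has enough parameters to pass through all five
# training points: it memorises them rather than learning the trend.
overfit = np.polyfit(x_train, y_train, deg=4)
straight = np.polyfit(x_train, y_train, deg=1)

# New, unseen data from the same underlying trend.
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on the given points."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("degree 4, training error:", mse(overfit, x_train, y_train))  # near zero: memorised
print("degree 4, test error:    ", mse(overfit, x_test, y_test))    # typically larger
print("degree 1, test error:    ", mse(straight, x_test, y_test))   # small: generalises
```

The same pattern is behind the dog-and-grass example above: the model latches onto incidental details of its training data instead of the underlying concept.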
In addition to these architectural explanations, AI hallucinations can also result from poor prompting by the user. The use of slang, for example, may not be understood by the model — yielding irrelevant or hallucinated responses.
If relied upon, AI hallucinations present the very real risk of misinformation.
This can undermine a student’s grasp of a topic, and can pose a threat to academic integrity if inaccurate information is cited in assignments or exams.
The latter is more common than you would expect.
I asked ChatGPT to produce a citation justifying a two-day work week for teachers. The result:
“Smith, J., & Johnson, L. (2023). Rethinking the Educational Workweek: The Case for a Two-Day Schedule for Teachers. Journal of Progressive Education, 58(2), 145-163.” Two more, I requested. It obliged:
“Johnson, L., & Carter, H. (2024). "Balancing Teacher Workload and Learning Outcomes in a Shortened Workweek," Educational Reform Quarterly, 11(1), 88-104.”
“Martinez, S. (2023). "Exploring the Impact of Reduced Teaching Hours on Teacher Well-being and Student Achievement," Global Education Review, 19(4), 200-225.”
But when I went to verify these sources, they did not exist (sadly…!). The model had simply done what I asked: it produced citations, not actual research articles. This is a classic example of poor prompting resulting in fictitious information.
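One quick way to catch this kind of fabrication is to look a citation up in a bibliographic database before trusting it. Below is a rough Python sketch using the requests library and Crossref's public works API; it only covers DOI-registered publications and leaves the final judgement to the reader, so treat it as a first filter rather than a definitive check.

```python
import requests

def crossref_candidates(citation_text: str, rows: int = 3):
    """Return the closest records Crossref can find for a free-text citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["(no title)"])[0],
            "journal": (item.get("container-title") or [""])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

# For the fabricated reference above, none of the returned records match the
# claimed title or journal, which is a strong hint it should not be cited.
for candidate in crossref_candidates(
    "Smith, J., & Johnson, L. (2023). Rethinking the Educational Workweek: "
    "The Case for a Two-Day Schedule for Teachers. Journal of Progressive Education."
):
    print(candidate)
```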
There are strategies teachers and students can adopt to reduce the risk of AI hallucinations.
If AI is to become a fixture in the way students learn and write, it is our role as teachers to educate them on how to put it to best use.
Guiding and prompting AI is a skill in and of itself — it requires the ability to write concisely, offer context and use instructional, direct language. Honing this skill will benefit students both in their schooling and when they enter the workforce.
When working with AI tools, students should consider:
Providing all relevant information to the model, including any specific data and sources, so that the tool has the proper context through which to generate accurate results.
Looking for ways to reinforce the context of a prompt, such as through Retrieval-Augmented Generation (RAG) or including several worked examples in your prompt (see the sketch after this list).
Creating data templates for numerical tasks. Providing a structured data template (like a table) can guide the AI in making correct calculations, reducing chances of numerical hallucinations.
Assigning the AI a specific ‘role’ — such as a climate scientist explaining the Great Barrier Reef's bleaching — to limit the scope of its responses and encourage factual accuracy.
Providing clear communication of desired and undesired results. Instructing the tool what is not wanted can sometimes be effective. For example, asking for a social analysis of housing affordability in Sydney without focusing on political aspects.
Using simple, direct language. Avoiding complex or vague prompts and using clear, concise, and easy-to-understand language — no slang words! — can help minimise the risk of misinterpretation and hallucinations.
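To make several of these tips concrete, here is a minimal Python sketch of a prompt builder that assigns a role, injects retrieved context (a lightweight RAG-style step), supplies a small data table for a numerical question, and spells out what is off limits. Every name, source and figure in it is hypothetical, and the resulting text could be pasted into whichever AI tool your school uses.

```python
# All names, sources and figures below are made up for illustration.

def build_prompt(role, question, retrieved_context, data_table, exclude):
    """Assemble a prompt that grounds the model in supplied context and data."""
    return "\n\n".join([
        f"You are {role}.",
        "Use ONLY the context and data provided below. "
        "If the answer is not in them, say that you do not know.",
        "Context:\n" + "\n".join(f"- {c}" for c in retrieved_context),
        "Data table:\n" + data_table,
        f"Do not discuss: {exclude}.",
        f"Question: {question}",
    ])

prompt = build_prompt(
    role="a marine scientist explaining coral bleaching to Year 9 students",
    question=("Using the data table, by how much did average bleaching rise "
              "between 2016 and 2020, and what does the context suggest caused it?"),
    retrieved_context=[
        "Excerpt from the class textbook, Chapter 4 (hypothetical).",
        "Summary of a government reef-monitoring report (hypothetical).",
    ],
    data_table=("Year | Average bleaching (%)\n"
                "2016 | 29\n"
                "2020 | 35"),
    exclude="political debates about climate policy",
)
print(prompt)
```

Even typed by hand into a chat window, the same structure applies: the more tightly the model is anchored to material you supply, the less room it has to invent.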
The role of validation in using AI cannot be overstated.
Cross-checking information and scrutinising sources keeps a ‘human in the loop’, which is critical to responsible AI use.
In Australian classrooms, this means teaching students to ‘fact check’ AI with reliable sources — official Australian curriculum textbooks, government websites, and peer-reviewed journals are a good place to start.
This practice not only ensures accuracy but also instils a habit of seeking multiple perspectives and sources. As we enter the age of deepfakes and disinformation, having this skill set is more important than ever.