When we ask a question of ChatGPT, Gemini, or any other LLM, we may wonder about the source of the information in the answers. I recently read an article about this. LLMs rely on user-generated content, and as we know, not all of these web-based sources are accurate, so neither will the answers be.
“According to an analysis by Semrush, LLMs like ChatGPT reference Reddit and Wikipedia the most for facts. For geographical data, LLMs frequently cite Mapbox and OpenStreetMap.”
Here’s a breakdown of citation frequency:
Reddit.com 40.1%
Wikipedia 26.3%
Youtube.com 23.5%
Yelp.com 21.0%
Facebook.com 20.0%
Amazon.com 18.7%
Tripadvisor.com 12.5%
Openstreetmap.com 11.3%
Instagram.com 10.9%
Mapquest.com 9.8%
Walmart.com 9.3%
Ebay.com 7.7%
Linkedin.com 5.9%
Quora.com 4.6%
Homedepot.com 4.6%
Yahoo.com 4.4%
Etc.
Let me understand how this works. A post says AI researchers use a variety of questionably reliable sources. I questioned that, showing the sources ChatGPT says it uses, which I have verified a few times. I get 5 🔻.
A post is made claiming HD contains AI-created content. I ask for examples; I receive 5 🔻 but no examples.
And I’m the bad guy 🤗
Your post fascinated me only because, when I have looked at citations, they seem credible. I asked ChatGPT where it got its information, and here is the reply. I don’t know what is real anymore, but I have asked the same question of different AI tools and get very similar answers.
ChatGPT: “Here’s how I’d typically approach different kinds of questions, and where I’d pull information from:”
I have heard enough horror stories of people relying on AI research to be leery of its findings. It can be a useful tool, but you need to fact-check its results, which may take more time than it is worth.
Of course, people can hallucinate and get it wrong too.
I have done extensive internet research on a wide variety of topics. You can usually figure out credible internet resources for a given topic fairly quickly. AI may be quicker, but I learn more doing my own research, and I do not have to fact-check it as much.
I don’t know about other HD readers, but I’m tired of seeing AI postings. I can ask AI myself if I want to see that tripe. I’m much more interested in what other readers have to say derived from their own intellects.
PLEASE!!! Enough of this AI stuff on the HD forum.
I haven’t noticed; can you direct me to one of those posts?
I assume AI didn’t write this irony.
The fact is, AI “tripe” now permeates a significant part of what we read. Journalists, academics, scientists, and “experts” of all sorts now use AI tools to produce what they write. As boring or uncomfortable as it may be to point this out, I think it is necessary to do so. For example, Wikipedia is something to use carefully. We may be at the point where much of the so-called factual content we read is rubbish. I noted that 20% of the content gleaned via AI is derived from Facebook, which we all know is a paragon of factual accuracy? It will only get worse from here, and AI-generated images are the visual tip of the iceberg. I usually cite sources, but when the source may itself be using AI-generated content, even those sources may be unreliable. This really isn’t about AI, per se. It is about how unreliable what we read may actually be, because of AI.
Note added: Watchdogs are increasingly warning about fake citations in Wikipedia articles.
I asked AI to investigate me. It found one of my HD articles and summarized it. Big deal.
It’s sort of like meeting a friend for lunch, and having to listen to him recite, nearly word for word, something both of us just read in the newspaper.
Norman, I think the words of the day are User Beware.