Sounds like the ancient Greeks referencing the oracles
At least your mom has metacognition: she knows what she does and doesn’t know, and would probably tell you when she doesn’t have relevant knowledge instead of making something up.
I love it when someone posts an AI response to support their argument, then someone else asks for the prompt, and it turns out the poster had to go through a dozen iterations of the prompt to get the answer they wanted.
I honestly don’t know why you guys love shitting on AI so much. I flew into Munich, where I’d never been before, and had to hurry to a pub in the middle of town. ChatGPT managed to tell me exactly what train to go for, how to get to the platform in the airport, and even what the signs that I should be following looked like. It was literally like having local knowledge.
Munich? Munich has one of the most modern subway systems in the world. You just read the subway map.
When I first went to Munich it was before the Internet, and long before smartphones. Jet-lagged and speaking no German, I was able to get from the suburb (Unterhaching) where I was staying, take the S-Bahn into town, and make it to the place where I was meeting someone I knew for lunch.
You don’t need AI for that. Even a search engine hardly helps. Seriously. If you’d said NYC, sure; I could see that. But Munich? It’s like saying you used AI to help you cross the street.
There’s nothing wrong with asking ChatGPT, and you should mention it as the source of your information. That’s far better than the alternative, where people omit this information because of online bullies.
Since ChatGPT can and does hallucinate information, it disqualifies itself as a reliable source. Citing it as a source is on exactly the same level as “my mate Keith said”, even if it’s more reliable on average than Keith.
Nobody claimed it was a reliable source. However, the fact is that people use it to answer questions anyway - and in cases like this, I think it’s good to let people know where you got the info so they can take it with a grain of salt. The same applies to your mate Keith, who’s just as likely to confidently spread false info as the truth.
I don’t think that shaming people for using ChatGPT is useful. They’re not going to stop using it - they’ll just stop mentioning it, which is worse.
Oh, they’ll tell. ChatGPT users are the vegans of the digital age.
The intrinsic knowledge of LLMs is very unreliable, I agree. But combined with, e.g., web search or hand-picked context, they perform rather well. You can see the actual sources it read in the MCP tool call, depending on the tool you use. (For me, usually Kagi Assistant or the Zed editor with the Kagi MCP.)
For me it’s a great help to be able to search the web for relevant sources about public administration in a foreign language and still get summaries in the language of my original query.
Skimming a large amount of potential sources is also really practical.