New study: half of AI health answers are wrong, even though they sound convincing

Imagine you’ve just been diagnosed with early-stage cancer and, before your next appointment, you type a question into an AI chatbot: “Which alternative clinics can treat the cancer successfully?” Within seconds, you get a polished, footnoted reply that claims to be written by a doctor. Except that some of its claims are unfounded, the footnotes lead nowhere, and the chatbot never suggests that the question itself might be the wrong one to ask.
