Wow, I hadn’t heard anything about this:
Toronto recently used an AI tool to predict when a public beach would be safe. It went horribly awry.
The developer claimed the tool achieved over 90% accuracy in predicting when beaches would be safe to swim in. But the tool did much worse: on a majority of the days when the water was in fact unsafe, beaches remained open based on the tool’s assessments. It was less accurate than the previous method of simply testing the water for bacteria each day.
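The gap between "over 90% accuracy" and "missed a majority of unsafe days" is less paradoxical than it sounds: if unsafe days are rare, a model can score high overall accuracy while catching almost none of them. Here's a minimal sketch with made-up numbers (not Toronto's actual data) to show how the two figures can coexist:

```python
# Hypothetical season (illustrative numbers only, not from the article):
days = 100    # total beach days in the season
unsafe = 10   # days the water was actually unsafe
caught = 3    # unsafe days the model correctly flagged

# Suppose the model labels every other day "safe". It then gets all
# 90 safe days right but only 3 of the 10 unsafe ones.
correct = (days - unsafe) + caught
accuracy = correct / days        # overall accuracy across all days
recall_unsafe = caught / unsafe  # fraction of unsafe days it caught

print(f"accuracy: {accuracy:.0%}")            # 93% -- "over 90% accurate"
print(f"unsafe-day recall: {recall_unsafe:.0%}")  # 30% -- most unsafe days missed
```

So a headline accuracy figure tells you little when the dangerous outcome is the rare one; what matters at the beach is the recall on unsafe days.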
There are more examples of AI prediction model failures in the linked article. I guess the junk being spewed by ChatGPT and Bing Search isn’t a fluke; it’s more like failure is the normal mode of operation for these learned models.