On the original topic of poor decision-making, I saw an interesting article last week about research showing that AI chatbots were far more successful than humans at convincing conspiracy-believing people to drop their misguided beliefs. Experts demonstrating to the conspiracy thinkers how their beliefs were wrong were ineffective. Family members or friends explaining how their beliefs were wrong were ineffective. Even being shown undeniable, direct evidence proving that the conspiratorial beliefs were wrong was ineffective. But AI bots "trained" to talk to the conspiracy believers had an astounding success rate at convincing them they were wrong.
The researchers attribute the success of AI to two intertwined facets of conspiracy beliefs: the beliefs themselves, and "identity" with the beliefs (or with the groups of people who hold such beliefs). When you attempt to disprove a conspiracy, you tend to strip the believers of their identity. But people felt far less threatened by losing this "identity" to an AI bot, because they trusted it (it's a chatbot, not a person with an agenda), because it was trained to talk like them, and because it was deemed less likely to be part of a conspiracy itself. There was no loss of face in admitting you were wrong to an AI bot. That was the gist of it, at least as I understood it. The article was in the Washington Post; I don't have a link to share because A) I'm lazy, and B) it's probably behind a paywall. But, you know, Google it.