HIGHLIGHTS

OpenAI study says current GenAI evaluations encourage guessing over uncertainty

Hallucinations stem from next-word prediction, not mysterious AI glitches

Redesigning scoreboards to reward humility could reduce confident AI errors

Hallucinations in AI: OpenAI study blames how we measure models

When I wrote about AI hallucinations back in July 2024, the story was one of inevitability. GenAI was dazzling the world with its creativity while just as readily embarrassing itself with fanciful citations, biased imagery, and gymnasts bending like boneless cartoons. I argued then that hallucinations were as unavoidable as human “brainfarts”: entertaining, often problematic, and always a reminder that these AI systems weren’t perfect.
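The incentive argument at the heart of the study is simple enough to work through in a few lines. Here is a minimal sketch (the point values and probabilities are illustrative assumptions, not figures from the study) of why accuracy-only scoreboards reward guessing, and why penalizing confident errors flips the incentive toward admitting uncertainty:

```python
# Illustrative sketch, not OpenAI's benchmark code: compare a model's
# expected score when it guesses vs. when it says "I don't know",
# under two hypothetical grading schemes.

def expected_score(p_correct: float, wrong_penalty: float,
                   abstain_score: float, abstains: bool) -> float:
    """Expected score on one question.

    p_correct: assumed probability the model's guess is right
    wrong_penalty: points deducted for a confident wrong answer
    abstain_score: points awarded for answering "I don't know"
    """
    if abstains:
        return abstain_score
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # the model is unsure: only a 30% chance its guess is correct

# Accuracy-only grading: a wrong answer and "I don't know" both score 0,
# so guessing always has a higher expected score than abstaining.
print(expected_score(p, wrong_penalty=0.0, abstain_score=0.0, abstains=False))  # 0.3
print(expected_score(p, wrong_penalty=0.0, abstain_score=0.0, abstains=True))   # 0.0

# Penalized grading: a confident error costs a point, so below 50%
# confidence the rational strategy becomes abstaining.
print(expected_score(p, wrong_penalty=1.0, abstain_score=0.0, abstains=False))  # -0.4
print(expected_score(p, wrong_penalty=1.0, abstain_score=0.0, abstains=True))   # 0.0
```

Under the penalized scheme, a model that is less than 50% sure does better by abstaining, which is exactly the humility the proposed scoreboard redesign is meant to reward.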
