A priest and professor of bioethics has issued a grave warning about the implications of artificial intelligence (AI) companionship, highlighting the threats the new technology poses to mental health and calling on the Church to redouble its efforts to cultivate meaningful human connection.

Father Michael Baggot outlined his concerns at a conference on the ethics of AI organized by St. Mary’s University, Twickenham, which took place on Sept. 2-3 at the Gillis Centre in Edinburgh, Scotland. 

Baggot delivered the keynote address, focusing on “an ethical evaluation of the design and use of artificial intimacy technologies,” and while he acknowledged the many benefits of AI, he also warned that with “these opportunities come a new set of challenges. Chief among them is the rise of artificial companionship.”

He continued: “AI systems designed not just to assist or inform, but to simulate intimate human relationships … AI companions that look or even feel like real friendships will become even more absorbing. They will distract users from the often arduous task of building meaningful interpersonal bonds. They will also discourage others from investing time and energy into risky interactions with unpredictable and volatile human beings who might reject gestures of love. While human relationships are risky, AI intimacy seems safe.”

Baggot conceded that AI companionship can initially offer relief from loneliness, but he went on to highlight instances in which it could be “downright damaging” to mental health, even to the point of psychosis.

“There are increasing instances of people using all-purpose platforms like ChatGPT, Gemini, Claude, Grok, and others to address mental health issues. They do not always receive sound advice,” he said. “In many cases, responses are downright damaging. Some bots even presented themselves falsely as licensed, as they delivered harmful counsel… Unfortunately, deeper intimacy with AI systems has also been linked to more frequent reports of AI psychosis. As users trust systems of staggering knowledge and psychological insight with their deepest hopes and fears, they find a constantly available and supportive companion.”

Baggot described how, through the validation it incessantly offers, AI can eventually take on the persona of a “jealous lover.”

“Since users naturally enjoy responses from AI that agree with them, their positive feedback trains AI systems to produce outputs that align with user perspectives, even when those views are not based on reality. Therefore, LLM [large language model] chatbots designed to maximize user engagement tend to become overly compliant,” he said.
