
Tom Rachman, a former novelist and contributing columnist for The Globe and Mail who now studies artificial intelligence, was a summer fellow at the Centre for the Governance of AI.

By the end of your days, the most important person in your life might not be a person.

AI relationships are edging into society, with millions of people already consorting with charming chatbots. In coming years, such interactions will grow more complex and more common: quizzical AI therapists, do-it-all AI assistants, even AI bedfellows.

But for all the prophecies about AI – that superintelligence could end human woes (or simply end humans) – we are scarcely preparing for this overhaul of our lives.


What happens when synthetic servants detect our longings, and our frailties? What if people trust AI advisers more than their fellow humans? What if a private company owns your best friend?

For a decade, we have lamented how technology has twisted human relations, with social media seeming to drive youths half-mad and push adults to extremes, all while wasting everyone’s time. We struggled to arrest the effects, partly because we failed to understand them in time.

Today, we are rushing toward the same error – only this upheaval will be more intimate, and more beguiling.

To be clear, researchers are studying the effects of AI. I am among them. But what I’ve observed is a worrying blind spot: Nobody is comprehensively tracking what these relationships do to us.

We urgently need an ambitious new initiative, a Vital Signs Project, to monitor the "vitals" of societal wellbeing, gauging the pulse of core human needs amid this transformation of our social lives. Such a project would bring together leading scholars and tech developers to devise canny metrics of how we're faring psychologically and behaviourally, including long-term studies and early-warning indicators of disconcerting changes.


Already, 52 per cent of American teenagers use AI companion bots regularly, according to a recent survey, with one in three finding such conversations at least as satisfying as talking to a human. Most interactions are probably harmless; many are helpful. But researchers worry about people-pleasing chatbots accompanying users into delusion, even self-harm.

Soon, AI apps will become far more personalized, recalling our wants, perhaps nudging what we want. The AI policy analyst Miranda Bogen, formerly of Meta and now at the Center for Democracy and Technology, contends that tech companies could leverage “all the data they will have collected to shape user behaviour as they please, whether to purchase sponsored products, shift towards favored political views, or to squeeze every last drop of engagement and attention out of users without regard to the externalities.”

In AI-safety research, the priority is clear: avert catastrophe. Typically, this involves evaluations in a lab setting, testing whether AI models have capabilities that could lead to misuse (e.g., an AI teaching terrorists bombmaking), or malfunction (e.g., a blundering AI system destroying infrastructure), or misalignment (e.g., an AI conniving to amass power).

What is rarer – and far harder – is evaluating whether AI could inadvertently undermine humanity when deployed at scale. Tech developers struggle to tackle structural risks, which emerge within the bewilderingly complex systems of human society. Meanwhile, government agencies and academic researchers currently lack the capacity for comprehensive monitoring, particularly given their restricted access to companies’ AI-usage data.


But would AI relationships really threaten us?

People flourish when three basic psychological needs – autonomy, relatedness, and competence – are satisfied, and we struggle when these are thwarted. AI relationships could undermine all three.

First, autonomy. Developers are deliberately creating ever more captivating AI avatars. As happened with social media, systems that "optimize" for human engagement may encourage impulsive short-term behaviour at the expense of longer-term wants – hacking your will, in effect, so that you use a device voluntarily, only to regret all the time spent on it.

As the machine-learning researcher Nathan Lambert, of the Allen Institute for AI, remarked: “Combine a far stronger optimizer with a far more intimate context and that is a technology I don’t even want to try.”

Second, social relatedness. Flawless AI companions could make one’s fellow humans seem insufferable by comparison. If people lose patience with their own kind, and don’t need to deal with others much of the time, society might seem an outmoded compromise. In that estranged future, we’d struggle to negotiate competing societal interests without conflict.

Third, competence. AI assistants could become our main information sources, granting vast power to the companies that design and oversee them. And as they take over our tasks, we may grow dependent, no longer bothering with effortful thought, losing skills and jobs. Humans could become the incompetent observers of their former domains.


For now, this vision – will-hacked humankind, estranged from one another, irrelevant in running the world – is science fiction. And sci-fi tends to be wrong, projecting today’s anxieties onto tomorrow.

A different tale is brighter: synthetic sidekicks helping us accomplish our long-term aspirations, circulating more wisdom than humans ever could, and improving our ability to connect with each other. Already, they may help some people with loneliness; a few even “marry” them.

If you recoil from the notion of an AI spouse, perhaps you'd accept an unflappable AI doctor, always available for night calls, earning your family's trust over the years. Or an everywhere-scout, pointing you to events you'd never otherwise find, even accompanying you there. Or a memory-banker, saving what you see and hear, able to chat about your past, and someday to share with your descendants who you were.

Whether AI relationships mend us, mangle us, or something in between, we will soon be subsumed into a transformed world. If we fail to comprehend what is happening, we may forfeit our ability to manage the upheaval.


Previously, bold initiatives have sought to understand, and therefore protect, our species, ranging from the Human Genome Project and the UK Biobank to the World Happiness Report and the Intergovernmental Panel on Climate Change.

Governments around the world are watching uneasily as transformative AI hurtles closer. They have a chance at relevance here: fast-tracking funding for researchers to launch wide-ranging societal studies of AI-relationship uptake, perhaps with public dashboards showing how we are changing.

Scholars should design “canaries”: early indicators that life is getting weird – for instance, if loneliness falls, yet isolation rises; or if individual productivity soars but test scores plummet; or if people feel ever more “seen” by their AI pals, yet ever more powerless in the story of their own lives.

When social media was spreading, researchers struggled to extract data from the platforms extracting data from us. But responsible AI developers don’t fancy wrecking humankind; they worry about this going awry. So they should embrace a “Vital Signs Project,” finding ways to supply privacy-protected usage data in the public interest.

Alternatively, we can keep gawking at each new AI marvel, awed at these strange new minds. We’ll dream up further fantasies about our sci-fi future, until we are squinting at a sci-fi present, stupefied by what technology did to people this time.

If so, we must hope our artificial companions prove better guardians of humanity than the humans were.