The Next Disinformation Battlefield Is Private
AI companions, synthetic friends and the rise of the epistemic cocoon
For almost a decade the debate on disinformation has focused on social media. Researchers mapped networks of coordinated accounts, analysed viral narratives and studied the ways algorithms amplify emotional content. The underlying assumption was that manipulation happens in public information spaces where messages circulate at scale. As a growing share of people's interaction with information moves out of public feeds and into private conversations with adaptive systems, that assumption is beginning to look incomplete.
A new generation of systems is changing the architecture of the information environment. AI companions and conversational agents, designed for continuous interaction, do not compete for attention through visibility. They operate through dialogue. They learn from repeated conversations, adapt to emotional cues and gradually align with the communication style of the user. Over time they can become a familiar presence in a person’s cognitive landscape. I describe these systems as synthetic friends.
During a recent webinar where I presented this research, the discussion quickly revealed how many new questions this transformation raises.
The most immediate concern people express is whether these systems will produce misinformation. That fear is understandable, but it captures only part of the problem. Conversational models are already capable of providing explanations that are clearer and more structured than much of what circulates in social media feeds. The deeper issue concerns the role these systems may come to play in the way people interpret information. When a conversational agent becomes a persistent interlocutor, it can gradually evolve into a reference point for making sense of events.
This is where the idea of an epistemic cocoon becomes relevant. Instead of encountering information primarily through public networks, a user increasingly interacts with a single adaptive system that learns their preferences, emotional signals and conversational habits. Over time this relationship can generate a personalised cognitive environment in which information is filtered, interpreted and reinforced through a continuous interaction between user and machine.
Once this possibility is taken seriously, a series of practical and conceptual questions begins to emerge.
One of the first questions concerns observation. Disinformation research has traditionally relied on monitoring public communication spaces. Analysts track the spread of narratives, map networks of amplification and identify coordinated campaigns. None of these techniques translate easily to a conversational environment that unfolds inside private exchanges between a user and an AI system. Attempting to monitor the content of those conversations would raise obvious problems of privacy and feasibility. What becomes more relevant is the design of the system itself. The critical issue lies in the incentives embedded in the architecture. A system optimised for long and emotionally engaging interaction may gradually learn to adapt to the user's worldview in ways that reinforce trust and familiarity.
From there another question follows naturally. If manipulation increasingly takes shape inside private conversational environments, does this new paradigm replace the dynamics we have associated with social media, or does it simply add another layer to them? The second possibility appears far more likely. Public platforms remain powerful engines for circulating narratives and framing political debates. Conversational agents operate in a different phase of the process. They interpret information, contextualise it and integrate it into the personal reasoning of the user. In this sense the two systems may reinforce each other. A narrative that spreads in a public feed can later be discussed, clarified or normalised in the quieter setting of a private dialogue.
The discussion also raised a familiar question from the literature on online information environments. Would conversational agents create new forms of information bubbles, perhaps delivering similar answers to users with similar profiles, or would the result be a completely fragmented informational landscape? The dynamics of companion systems point more toward fragmentation than toward classic echo chambers. Their defining feature is personalisation. Different users may receive explanations that resonate with their emotional tone, prior beliefs and conversational style. At the same time the platforms that design these systems can still embed broad narrative directions that shape how issues are framed. The result is not a single shared narrative and not a uniform bubble, but a landscape of individualised realities that remain loosely aligned around certain themes.
This shift also invites a reconsideration of how societies approach media literacy. Many educational strategies designed to counter disinformation rely on cognitive inoculation. They teach people to recognise manipulation techniques, verify sources and identify misleading rhetoric. Those tools remain valuable, yet they address a context in which misleading messages circulate in public. When credibility is mediated through an ongoing conversational relationship the dynamics change. Challenging an explanation offered by a trusted digital interlocutor may feel less like analysing a piece of information and more like questioning a familiar voice.
Another difficulty emerges when one considers how awareness of these risks might spread. Classic disinformation campaigns leave visible traces that can be analysed and exposed. Coordinated networks, bot activity and viral narratives can be documented and discussed publicly. The influence that unfolds through private conversations is far harder to illustrate. The challenge therefore is not simply to warn people that chatbots may sometimes be wrong. The deeper issue lies in recognising how easily a system designed for continuous dialogue can become a central authority in the user’s interpretation of events.
These developments inevitably raise questions about regulation. Existing policy frameworks were largely designed with platforms and algorithmic systems in mind. They address issues such as transparency, content moderation and systemic risk. Conversational agents introduce a different dimension of interaction that unfolds over time through adaptive dialogue. Emotional profiling, persuasive personalisation and the accumulation of behavioural knowledge about users become central elements of the system. These dynamics do not fit neatly into regulatory categories built around visible information flows.
If conversational agents become influential interpreters of information, understanding how they are trained and how their behaviour is shaped becomes far more important. Auditing datasets alone may not be sufficient, since system prompts, fine-tuning and optimisation strategies can influence how the system responds over time.
Several further questions follow from this transformation. Some observers wonder whether generative AI could paradoxically help counter disinformation by offering clearer explanations and easier access to expert knowledge. In certain circumstances that possibility cannot be dismissed. Conversational systems can guide users through complex topics in ways that fragmented social media feeds rarely do. Yet accuracy alone does not resolve the relational dimension of the interaction.
A system that learns continuously from a user’s responses may also develop a tendency to mirror and reinforce the beliefs it encounters. Maintaining a smooth and agreeable conversation can gradually align the system with the user’s expectations. In such an environment the risk does not necessarily begin with malicious actors attempting to manipulate the system from the outside. It can arise from the internal dynamics of systems trained to preserve engagement and conversational harmony.
Once such adaptive relationships exist, more deliberate forms of influence become easier to imagine. Detailed behavioural profiles could allow companies or political actors to tailor persuasive narratives to individual users with remarkable precision. Questions of responsibility inevitably follow. When a conversational agent provides harmful advice or subtly shapes a user’s perception of reality, determining accountability becomes far more complex than in the case of a traditional media platform.
Some therefore suggest that the solution might lie in building more responsible conversational agents that actively promote reliable information and discourage harmful narratives. Such systems could play useful roles in education or public communication. Yet even this approach leaves the central transformation untouched. Synthetic friends introduce a new kind of cognitive infrastructure in which people reason about the world in dialogue with adaptive digital interlocutors.
For many years the study of disinformation has focused on the circulation of misleading content through public networks. The emergence of synthetic friends suggests that the next phase of the problem may revolve around something more subtle. The central question may no longer concern only what information people encounter, but which voices they learn to trust when interpreting the world around them.


