Think Your Online Identity Is Hidden? AI May Know Better

Anonymity is both one of the internet’s greatest strengths and one of its biggest risks. From anonymous Reddit profiles where people share niche interests to private “finsta” accounts, many users assume they can remain hidden online.
But new research suggests that anonymity may not be as secure as many people believe. A study by scientists in Switzerland found that artificial intelligence (AI) tools can identify anonymous accounts at scale by linking them to publicly known profiles.
The researchers say their findings carry “significant implications” for online privacy. They warn that without safeguards, much of the anonymity people rely on online could quickly disappear.
Those most vulnerable to being identified are individuals who reveal personal details about themselves over time. According to the researchers, this group often includes older users and people who are less aware of online privacy risks.
Speaking to The Independent, the study’s lead author Daniel Paleka of ETH Zurich said the results make it “very clear” that “if you keep posting under a pseudonym, keep quoting information about yourself,” AI tools will eventually be able to identify you quickly and cheaply.
To test the concept, researchers developed a system powered by large language models (LLMs) that searches the web and treats the process as a “matching” task. The system gathers evidence and reasoning to connect anonymous accounts containing small clues about a person with that individual’s public online profile.
Instead of analyzing real anonymous accounts, the researchers used datasets made from publicly available posts. These included content from Hacker News and LinkedIn, transcripts from AI company Anthropic’s interviews with scientists discussing how they use AI, and Reddit accounts that were intentionally divided into two anonymized halves for the experiment.
Across these scenarios, the LLM system correctly matched up to 68 percent of accounts with 90 percent precision. Researchers say this performance “substantially outperforms” traditional deanonymization methods, including manual investigations carried out by humans.
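The idea of treating deanonymization as a "matching" task, with precision traded against the number of accounts matched, can be illustrated with a toy sketch. This is purely hypothetical code: the study's actual system uses LLM agents that search the web and reason over evidence, not the simple attribute-overlap score below, and all names and fields here are invented.

```python
# Toy sketch: deanonymization as a matching task (illustrative only; the
# study's real system used LLM web search and reasoning, not this rule).

def score(anon_clues: dict, public_profile: dict) -> float:
    """Fraction of the anonymous account's clues matched by a public profile."""
    if not anon_clues:
        return 0.0
    hits = sum(1 for k, v in anon_clues.items() if public_profile.get(k) == v)
    return hits / len(anon_clues)

def match(anon_clues: dict, candidates: list, threshold: float = 0.9):
    """Return the best-scoring candidate, but only if its score clears a
    confidence threshold -- raising the threshold yields fewer matches
    but higher precision, the trade-off the researchers report."""
    best = max(candidates, key=lambda c: score(anon_clues, c))
    return best if score(anon_clues, best) >= threshold else None

# Hypothetical data: small clues scattered across an anonymous account.
clues = {"city": "Zurich", "field": "machine learning", "hobby": "climbing"}
profiles = [
    {"name": "A", "city": "Zurich", "field": "machine learning", "hobby": "climbing"},
    {"name": "B", "city": "Geneva", "field": "biology", "hobby": "chess"},
]
print(match(clues, profiles)["name"])  # -> A
```

Raising `threshold` is how such a system reaches a figure like 90 percent precision: it declines to guess on ambiguous accounts rather than risk a wrong match.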
The study’s authors argue that this capability “fundamentally changes” the landscape of online privacy.
In their paper, which has not yet been peer-reviewed, they warn that the technology could have serious consequences if misused.
“Governments could link pseudonymous accounts to real identities for surveillance of dissidents, journalists, or activists,” the researchers wrote.
They also warned that corporations might connect anonymous forum posts to customer profiles for hyper-targeted advertising. Cybercriminals could compile detailed profiles to run personalized social engineering scams, while hostile groups might identify influential employees or decision-makers and build online relationships with them for future exploitation.
“Users, platforms, and policymakers must recognise that the privacy assumptions underlying much of today’s internet no longer hold,” they wrote.
Paleka emphasized that the AI tools are not acting as “superhuman investigators.” Instead, their advantage lies in speed and cost.
For example, if someone mentions where they live in one comment and their workplace in another years later, both humans and AI could theoretically connect those details. However, LLMs can gather and link that information much faster and more cheaply than a human investigator.
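The accumulation of small details over years can be sketched in a few lines. This is a deliberately naive illustration, assuming regex keyword extraction over invented posts; the study's system would use an LLM to extract such facts, not hand-written patterns.

```python
# Hypothetical illustration: scattered details across years of posts can
# be accumulated into a single identifying profile. A real system would
# use an LLM for extraction, not these hand-written regexes.
import re

posts = [
    ("2019-03", "I moved to Zurich last year and love it here."),
    ("2023-07", "Started a new job at a small robotics startup."),
]

patterns = {
    "location": r"moved to (\w+)",
    "industry": r"job at a small (\w+) startup",
}

profile = {}
for date, text in posts:
    for field, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            profile[field] = m.group(1)

print(profile)  # {'location': 'Zurich', 'industry': 'robotics'}
```

Each post in isolation reveals little; the combined profile is what narrows the search to a handful of public identities, and automating that combination is where the speed and cost advantage lies.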
Currently, the technology cannot reliably identify people based on writing style alone. Instead, it focuses on factual details such as employment history, location, and hobbies.
Paleka also noted that replicating the study would currently require significant technical expertise with large language models. However, he warned that if safeguards are not introduced, it could become far easier for everyday users to identify anonymous accounts within the next few years.
“The fundamentals of the technology are there,” he said. “If there are no guards I fully expect someone to be able to misuse it.”
He added that AI companies may decide to restrict how their tools can be used to prevent this type of abuse. The goal of the research, he said, is to raise public awareness while the risk to most anonymous internet users is still relatively low.
“If you care about something being anonymous, if you have something to protect, if you want to post opinions about things that you would not post under your real name, be mindful of this,” he said.
Paleka also offered a simple recommendation for protecting online anonymity.
“There is a very straightforward solution which solves the vast majority of these issues,” he told The Independent. “Use a throwaway account.”
A throwaway account is created specifically for a single post, meaning it contains no long-term history or identifying information about the user.
“If you’re posting something genuinely sensitive, don’t use the account you also use to post about whatever else for years,” he added. “That would be my advice.”