The Truth About AI Personalisation in ChatGPT and Gemini
James Dooley: Personalization of large language models, and how each model, whether it is ChatGPT, Perplexity, Claude or Gemini, can give different answers. Today I am joined by Ben, who has a lot of information and evidence about this happening. He owns multiple AI tools including Get AISO, which is G E T A I S O. Ben, let’s jump straight into it. When someone runs a search in a large language model like ChatGPT or Gemini, can you explain why different people receive different answers?
Benjamin Tanenbaum: Yeah, that is a very good question. We did some research on this that we are going to publish in a leading SEO publication. We tried to look at the problem with fresh eyes because there has been a lot written about personalization already. I can explain what the industry consensus is, but we wanted to analyse it from scratch.
We built a setup where we take a realistic question based on our dataset of real conversations and run that question many times. First we run it in a baseline session with no memory and not even logged in. Then we gradually add more context and run the question again many times to see how the answers change.
Personalization is actually complicated to measure, because large language models behave differently from Google Search. If you ask the same question multiple times you can get different answers. That means if you ask Gemini or ChatGPT a question and I ask the same question, we may get different answers, but not necessarily because of personalization.
Sometimes the difference is simply variability in how the model generates responses. You can think of it like a lottery for the next token. For example, if we ask for the best pepperoni pizza restaurant in New York, one answer might recommend a fancy restaurant on Fifth Avenue while another answer might recommend a cheaper place in Brooklyn. That does not automatically mean the system personalized the answer. Both options might simply rank equally in the internal query fan out process.
So what looks like personalization might actually be variability. That is important to understand first.
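The "lottery for the next token" idea can be sketched in a few lines: the model assigns probabilities to candidate continuations and samples from that distribution, so repeated runs of the identical prompt can diverge with no personalization involved. The candidates and probabilities below are invented purely for illustration.

```python
import random

# Hypothetical next-token probabilities for the prompt
# "The best pepperoni pizza restaurant in New York is ..."
# (the options and the 50/50 split are made up for illustration)
candidates = {
    "a fancy restaurant on Fifth Avenue": 0.5,
    "a cheaper place in Brooklyn": 0.5,
}

def sample_answer(rng: random.Random) -> str:
    """Pick one continuation according to its probability (the 'token lottery')."""
    names = list(candidates)
    weights = [candidates[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Two users asking the identical question can get different answers
# purely from sampling variability.
rng = random.Random(0)
answers = {sample_answer(rng) for _ in range(200)}
print(answers)  # over 200 runs, both options appear
```

The point of the sketch: when both options weigh roughly the same internally, which one you see is a draw, not a judgment about you.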
There are, however, some real personalization signals. The first one is location. Every time you send a query, your location is attached to it. This is true for ChatGPT and mostly true for Gemini as well. Even if you try to remove location from the prompt, the system still attaches the location data in the background.
This matters a lot for local businesses. If you run a restaurant, your location will strongly affect whether you appear in answers.
The second type of personalization is deeper context based on past interactions. This is a big focus for companies like OpenAI. If the system knows from previous conversations that you have a gluten allergy, it might remove restaurants that are not safe for people with that allergy. That creates a much more tailored experience.
Large language models collect a lot more contextual information than traditional search engines because conversations contain detailed descriptions of preferences and behaviours. That is why when people ask an AI model to generate an image representing them, the result can sometimes be surprisingly accurate.
However there are also some reality checks. Around 95 percent of users use ChatGPT on the free plan. That means the level of personalization is limited because those models use smaller context windows and less processing for memory.
Also many users are not logged in or have very little conversation history. So the system may not know much about them yet.
Gemini may eventually use more of the wider Google ecosystem data, such as previous searches, but this integration is still evolving.
James Dooley: Before going deeper into personalization, I want to address something I hear in the SEO community. Some people say there is no point tracking AI visibility because the answers change every time.
If you think visibility goes up and down constantly, I understand why some people hesitate to report that data to clients. But optimization for large language models still matters. The more optimization you do, the more opportunities you create to be cited. It is like buying more raffle tickets. Your chances increase.
Coming back to personalization, I understand why answers change if I am logged in and the system knows my history. But why do answers still change when someone is not logged in?
Benjamin Tanenbaum: When you are not logged in and the system has no information about you, the degree of personalization is actually minimal. In that case the main factor is location. Apart from that, the changes you see are mostly variability in how the model generates responses.
This links back to the discussion about tracking visibility. Some people think results are random because answers change each time. But if you run the same query thousands of times you start seeing patterns.
For example if a brand clearly answers a query and appears frequently in the query fan out sources, it might appear in 60 percent of responses. Instead of thinking about ranking position like traditional SEO, it is more useful to think about share of voice.
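The share-of-voice idea can be sketched as follows: run the same prompt many times, check which brands each response mentions, and report mention rates instead of a single rank. The `ask_model` function below is a placeholder for a real API call, and the brand names and response behaviour are invented for illustration.

```python
import random
from collections import Counter

BRANDS = ["Slice Haven", "Brick Oven Bros", "Pepperoni Palace"]

def ask_model(prompt: str, rng: random.Random) -> str:
    """Placeholder for a real API call (e.g. to ChatGPT or Gemini).
    Here we fake variable answers that mention a random subset of brands."""
    mentioned = rng.sample(BRANDS, k=rng.randint(1, 2))
    return "You could try " + " or ".join(mentioned) + "."

def share_of_voice(prompt: str, runs: int = 1000) -> dict[str, float]:
    """Fraction of responses in which each brand is mentioned at least once."""
    rng = random.Random(42)
    counts = Counter()
    for _ in range(runs):
        answer = ask_model(prompt, rng)
        for brand in BRANDS:
            if brand in answer:
                counts[brand] += 1
    return {brand: counts[brand] / runs for brand in BRANDS}

print(share_of_voice("best pepperoni pizza restaurant in New York"))
```

One-off answers look random; only aggregated mention rates like these reveal the stable pattern Ben describes.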
Another interesting insight is that personalization can actually reduce variability. If a model knows your preferences, the query fan out becomes more specific.
For example if you prefer Japanese style restaurants with jazz music and whiskey, the query fan out will include those characteristics. That narrows the number of relevant results dramatically.
Instead of choosing between thirty possible restaurants, the model might only consider two or three that match those preferences. That means the answer becomes much more stable.
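The narrowing effect described above can be sketched as a filter over candidates: each remembered preference removes options, so the pool the model effectively chooses from shrinks and the answer stabilizes. The restaurants, field names, and preferences below are all invented for illustration.

```python
# Hypothetical candidate pool; in a real system this would come from
# the model's query fan-out (its internal sub-searches).
restaurants = [
    {"name": "Kappo Night", "style": "japanese", "music": "jazz", "whiskey": True},
    {"name": "Trattoria Sole", "style": "italian", "music": "pop", "whiskey": False},
    {"name": "Smoke & Oak", "style": "american", "music": "jazz", "whiskey": True},
    {"name": "Izakaya Moon", "style": "japanese", "music": "jazz", "whiskey": True},
    {"name": "Taco Verde", "style": "mexican", "music": "none", "whiskey": False},
]

def narrow(candidates: list[dict], preferences: dict) -> list[dict]:
    """Keep only candidates matching every remembered preference."""
    return [c for c in candidates
            if all(c.get(key) == value for key, value in preferences.items())]

# No memory: the model samples from the whole pool -> high variability.
print(len(restaurants))  # 5

# With remembered preferences the pool shrinks, so answers stabilize.
prefs = {"style": "japanese", "music": "jazz", "whiskey": True}
matches = narrow(restaurants, prefs)
print([c["name"] for c in matches])  # ['Kappo Night', 'Izakaya Moon']
```

With only two matching candidates left, repeated runs of the same question land on the same one or two names, which is exactly the stability Ben describes.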
James Dooley: I like the idea of share of voice in AI search. If a brand appears in 40 percent of responses, the goal should be to improve that to 45 percent or 50 percent over time. It may never reach 100 percent because systems intentionally vary answers, but the goal is to keep improving visibility.
Ben, it has been an absolute pleasure doing this series with you. People are trying many strategies such as LLM seeding or LLM optimization to gain an advantage in AI search. If someone wants to follow your research and updates, where can they find you?
Benjamin Tanenbaum: The best place is LinkedIn. My name is Benjamin Tanenbaum and my LinkedIn handle is B N T A M. I post quite often, maybe too often, but I try to share useful insights.
James Dooley: Ben, it has been an absolute pleasure. I hope everyone enjoyed this series on AI SEO and large language models. This episode focused on AI personalization and AI visibility. Ben, thanks again for joining.