UI vs API: Why are the results from the API different from the UI in Perplexity and ChatGPT? – Otterly.AI Blog


The AI responses you encounter on platforms like ChatGPT.com or Perplexity.ai—including features like web citations—may not perfectly align with the results produced by the corresponding API. This article explains the key differences and what they mean for anyone tracking AI search outputs.

API vs UI text output on ChatGPT & Perplexity

In February 2025, ChatGPT introduced web search (ChatGPT Search) for everyone worldwide, even for users who aren’t logged in. With this update, marketers and brands have started learning about GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) to ensure they remain relevant in these AI-powered search experiences.

While AI search platforms like ChatGPT and Perplexity.ai offer feature-rich interfaces for users, their APIs often provide a simpler, more basic experience. This means that developers don’t receive the same detailed output from the API as users do when interacting with the platform’s interface.

Here are the key differences.

Web Search

ChatGPT’s web search feature can be toggled on or off directly within its user interface. At the time of writing, however, this functionality is not available through the standard ChatGPT (Chat Completions) API, which means responses requested via the API do not include link citations.

In contrast, Perplexity’s API supports web search and returns link citations, similar to what’s available in its user interface. That said, the API runs with its own default configuration (model, search settings, and so on), which can lead to variations in the output. In short, while Perplexity’s API provides web citations, the results may not be identical to those shown in its interface.
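As a concrete illustration, a minimal sketch of a Perplexity API request is shown below. The endpoint, model name (`sonar`), and the `citations` field in the response follow Perplexity’s public documentation at the time of writing; treat them as assumptions and verify against the current docs before relying on them.

```python
# Sketch of a Perplexity chat-completions request; endpoint, model name,
# and response fields are based on Perplexity's public docs at the time
# of writing and may change -- verify against the current documentation.
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"


def build_request(question: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions request against the Perplexity API."""
    payload = {
        "model": "sonar",  # an online model that performs web search
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Usage (requires a real API key):
# with urllib.request.urlopen(build_request("Best CRM for startups?", KEY)) as r:
#     body = json.load(r)
#     print(body["choices"][0]["message"]["content"])
#     print(body.get("citations", []))  # list of source URLs returned by the API
```

Note that even with the same question, this request will not necessarily reproduce what a user sees at Perplexity.ai, because the UI applies its own defaults on top of the raw API.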



Tuning the API

The reason why AI text output might be different from what you see in the user interface (UI) is pretty simple: APIs give developers more options and settings to customize how the AI works compared to what typical users get in tools like ChatGPT or Perplexity.

With an API, developers can adjust certain settings – some examples are listed below – to fine-tune the AI’s responses:

  • LLM Model: This refers to the specific language model being used. Are you using the same model as the one your audience sees in the UI?
  • System message/system prompt: This is a default instruction that tells the AI how to behave. Think of it as a way to give the AI a personality or role. For example: “You are a top-notch copywriter for e-commerce. Follow David Ogilvy’s copywriting principles.”
  • Temperature: This controls how random the AI’s responses are (typically on a 0–2 scale). Lower values make the output more focused and predictable, while higher values make it more varied and creative.
  • Top P: Also known as nucleus sampling, this setting limits the model to the most probable next tokens and provides another way to control randomness.
  • Search Context Size: Perplexity, for example, lets you choose between high, medium, and low amounts of search context when answering a question. Use “high” for detailed, in-depth questions and “low” for simple, factual ones.
  • Search Domain Filter: This option lets developers decide which websites to include or exclude in search results, giving more control over the sources the AI pulls from.
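To make the settings above concrete, here is a sketch of how they might appear together in a Perplexity-style request body. The parameter names (`temperature`, `top_p`, `search_domain_filter`, `web_search_options.search_context_size`) follow Perplexity’s API documentation at the time of writing and should be treated as assumptions; check the current API reference before using them.

```python
# Sketch of the tuning parameters from the list above as a single
# request body. Parameter names are based on Perplexity's API docs at
# the time of writing and may change -- verify before relying on them.

def build_payload(question: str) -> dict:
    """Assemble a request body that exercises each tuning option."""
    return {
        "model": "sonar",  # LLM model choice
        "messages": [
            # System prompt: tells the model how to behave.
            {"role": "system", "content": "Answer concisely and cite sources."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # lower = more focused and predictable
        "top_p": 0.9,        # nucleus-sampling cutoff
        # Include example.com; a leading "-" excludes a domain.
        "search_domain_filter": ["example.com", "-pinterest.com"],
        # How much search context to gather: "low", "medium", or "high".
        "web_search_options": {"search_context_size": "low"},
    }
```

Any of these values differing from the UI’s hidden defaults is enough to produce a different answer for the same question, which is exactly why API-based monitoring can diverge from what users see.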

Perplexity.ai, for example, states clearly in its documentation that results may differ between the UI and the API.

Summary

Depending on the AI search engine you use, the answers shown on their user interface can be different from the ones provided through their API. This may not matter much if you’re creating custom apps using their APIs. However, it’s very important if you want to see how an AI search engine represents your brand or mentions your website.

At OtterlyAI, we’ve been researching and working in this field since 2024. From the beginning, our goal has been to help marketing teams get the most accurate and unbiased view of the results customers see on platforms like ChatGPT and Perplexity. This clearer understanding can help you make better optimizations. Keep in mind, though, that some monitoring tools might claim they can track relevant topics at scale, but in reality, they may show you results that your customers never actually see.

If you have any questions, feel free to reach out to us at hello@otterly.ai.

