This post is part of the AI design patterns series – I’m sharing new design patterns I’m seeing as AI enables solutions that weren’t possible before.
A few weeks ago, I was talking to Aadhi about the product he's building. It lets marketers accomplish things through prompts (they're in stealth right now, so I can't reveal more about what it does 😀).
We were discussing a common user behavior: someone starts a chat about a certain problem X, but later in the same thread drifts into a totally different problem Y.
When that happens, the model’s responses can start to degrade because it’s trying to reconcile two unrelated threads of context.
We assumed this kind of “context-switch” wouldn’t happen too often within a chat, and that most users would naturally start a new chat when working on a new problem. But we weren’t entirely sure.
Fast forward to today: I opened Warp for the first time in a few weeks. In a chat I had started two weeks ago, I asked it to check whether I had the latest version of Node.js installed. That's when I saw this message pop up:

Warp gently recognized the context shift and offered a graceful suggestion to start a new chat.
I don’t know if there are any empirical studies yet on how well large language models handle context switches like this, but I really liked how Warp approached this situation.
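Warp hasn't published how its detection works, but here's a minimal sketch of one way you could build something similar: embed the recent thread and the incoming message, and suggest a new chat when the two are semantically far apart. The sentence-transformers model, the 5-turn window, and the 0.35 similarity threshold are all my assumptions, not anything Warp has confirmed.

```python
# A toy context-shift detector: compare the new message against the
# recent thread and flag it when the two look semantically unrelated.
# Assumes the sentence-transformers package; the threshold is a guess
# that would need tuning against real conversations.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_new_chat(history: list[str], new_message: str,
                     threshold: float = 0.35, window: int = 5) -> bool:
    """Return True when new_message looks unrelated to the recent thread."""
    if not history:
        return False
    # Compare against only the last few turns, so long-resolved topics
    # from earlier in the chat don't anchor the comparison forever.
    recent = " ".join(history[-window:])
    thread_vec, msg_vec = model.encode([recent, new_message])
    return cosine(thread_vec, msg_vec) < threshold

history = ["How do I check my installed node version?",
           "nvm ls says I'm on 18.17, is that the latest LTS?"]
if suggest_new_chat(history, "Write me a haiku about marketing funnels"):
    print("This looks like a new topic. Start a fresh chat?")
```

The key design choice, and what I liked about Warp's version, is that the detector only *suggests* a fresh chat rather than forcing one: a similarity threshold will misfire sometimes, so the user should stay in control.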