Rage Against the Machine Learning
A Q&A with Todd Olson, CEO of Pendo
1. What is a rage prompt? Can you share an example?
A rage prompt is the generative AI version of a rage click. Just as rage clicks happen when users repeatedly click an unresponsive or misleading button, rage prompts occur when users do not get the response they expect from an AI agent and keep prompting out of frustration.
For example, a customer support agent might type, “Show all open tickets for this customer.” If the result is off, they might try again with, “Give me recent issues for this client,” or, “What support cases are active?” Each variation reflects increasing frustration. These moments are more than failed interactions. They are behavioral signals that can guide agent builders to make improvements.
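To make the idea concrete, here is a minimal sketch (not Pendo's implementation) of one way a team might flag candidate rage prompts: treat consecutive prompts in the same session that are highly similar and close together in time as a frustration signal. The `PromptEvent` shape, similarity measure, threshold, and time window are all illustrative assumptions.

```typescript
// Hypothetical prompt event captured by session instrumentation.
interface PromptEvent {
  sessionId: string;
  text: string;
  timestamp: number; // milliseconds since epoch
}

// Jaccard similarity over lowercased word sets: a crude proxy for
// "the user is rephrasing the same request".
function similarity(a: string, b: string): number {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  const intersection = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Flag prompts that closely repeat the previous prompt within a short window.
function flagRagePrompts(
  events: PromptEvent[],
  minSimilarity = 0.5,      // assumed threshold
  maxGapMs = 2 * 60 * 1000  // assumed 2-minute window
): PromptEvent[] {
  const flagged: PromptEvent[] = [];
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);
  for (let i = 1; i < sorted.length; i++) {
    const prev = sorted[i - 1];
    const curr = sorted[i];
    const sameSession = prev.sessionId === curr.sessionId;
    const closeInTime = curr.timestamp - prev.timestamp <= maxGapMs;
    if (sameSession && closeInTime && similarity(prev.text, curr.text) >= minSimilarity) {
      flagged.push(curr);
    }
  }
  return flagged;
}
```

In practice, simple word overlap would miss rephrasings like the ticket example above that share few words; a production system would more likely use semantic similarity (for example, embeddings) for the same comparison.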
2. What drives users to create rage prompts? Do they arise from deficiencies in AI agents?
Often they do, but not always because of technical flaws in the AI itself. Rage prompts typically result from a lack of clarity, context, or guidance. The AI might not understand the user’s intent, or the user may not know what the AI is capable of doing.
Sometimes this is due to limitations in training data. Other times it is caused by poor interface design or missing context. Either way, rage prompts are a clear signal that something in the experience is not working as expected.
3. What could make AI agents better and less prone to rage prompts?
Improving agent performance starts with observing how users interact with agents and identifying where friction occurs. The best way to reduce rage prompts is by giving agents context, clear instructions, and constant feedback.
At Pendo, we developed a feature called Agent Analytics, the first solution to measure AI agent performance. It gives product and IT leaders visibility into patterns like repeated prompts or abandoned sessions, and it supports data-informed decisions about where and how to improve the experience.
This is less about upgrading the AI and more about improving the system around it.
4. Do rage prompts suggest that a company has misused or overused agentic AI to the detriment of customer or user experience?
Not necessarily. Rage prompts are a normal part of learning how people interact with AI. They are useful indicators of friction and can help improve product quality over time. One customer noted that observing the questions users ask the agent finally showed him what his users actually want.
The concern arises when companies fail to track these signals or treat them as noise. That can lead to compounding frustration and missed opportunities. When teams pay attention to where rage prompts occur, they gain insight into both agent performance and user expectations.
5. What is the solution?
Teams need a structured way to capture and respond to these signals. That means instrumenting the experience to observe user behavior, analyzing where issues occur, and making targeted improvements based on those insights.
This is not just about tuning the AI model. It also involves improving user onboarding for the agent, providing clear guidelines on what the agent can and can’t help with, and collecting feedback after each interaction. Organizations that can connect behavior with action will be best equipped to improve their AI experiences over time.
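As a minimal sketch of what that instrumentation might look like (the event names, fields, and endpoint below are assumptions, not any specific product's schema), a team could emit a structured event for every agent interaction and attach the user's post-interaction feedback to it:

```typescript
// Hypothetical structured event describing one agent interaction.
interface AgentInteractionEvent {
  sessionId: string;
  promptText: string;
  responseLatencyMs: number;
  resolved: boolean;                  // did the user act on the response?
  feedback?: "helpful" | "unhelpful"; // optional post-interaction rating
  timestamp: string;                  // ISO 8601
}

// Send the event to whatever analytics pipeline the team uses.
// The endpoint is a placeholder, not a real API.
async function trackAgentInteraction(event: AgentInteractionEvent): Promise<void> {
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```

Once interactions are captured in this form, questions like "where do unresolved prompts cluster?" or "which workflows generate unhelpful ratings?" become straightforward queries rather than guesswork.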
6. Does agentic AI need to incorporate elements of artificial emotion (AE)?
Not in the way humans experience emotion, but yes, AI should be able to recognize and respond to emotional cues. That includes detecting patterns like repeated inputs, extended pauses, or erratic interactions that indicate frustration.
These cues can inform adaptive responses. For example, an agent might offer clarification, simplify its reply, or suggest alternate actions. The goal is not to replicate emotion, but to create a more responsive experience based on behavioral context.
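As an illustration only, not a description of any specific product, a simple policy layer might map detected frustration cues to adaptive responses such as offering clarification after repeated inputs or simplifying a reply after a long pause. The cue names and thresholds below are assumptions.

```typescript
// Hypothetical behavioral cues detected from the conversation so far.
interface FrustrationCues {
  repeatedInputs: number;   // consecutive near-duplicate prompts
  pauseSeconds: number;     // time since the agent's last response
  abandonedLastTask: boolean;
}

type AgentAction =
  | { kind: "answer" }                           // proceed normally
  | { kind: "clarify"; question: string }        // ask what the user meant
  | { kind: "simplify" }                         // shorten and restate the reply
  | { kind: "suggest"; alternatives: string[] }; // offer different next steps

// Map cues to an adaptive response. Thresholds are illustrative.
function chooseAction(cues: FrustrationCues): AgentAction {
  if (cues.repeatedInputs >= 2) {
    return {
      kind: "clarify",
      question: "I may have misunderstood. Could you tell me what result you expected?",
    };
  }
  if (cues.abandonedLastTask) {
    return { kind: "suggest", alternatives: ["Start over", "See what I can help with"] };
  }
  if (cues.pauseSeconds > 120) {
    return { kind: "simplify" };
  }
  return { kind: "answer" };
}
```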
7. What is Pendo’s solution?
Pendo helps companies understand how users interact with software, including AI agents, and take action to improve the experience. We capture every user interaction, from clicks and swipes to prompts, conversations, and survey and poll responses, and synthesize that data to help teams understand where users get stuck in a task or workflow and take action to improve it.
Agent Analytics supports this by allowing teams to view those signals in context and take informed action. Whether the issue is with an AI prompt, a feature rollout, or a complex workflow, the goal is the same: to improve the overall software experience based on real user behavior.
As AI becomes more embedded in software, this type of insight will be critical. It is not enough to build powerful tools. They have to be intuitive, effective, and aligned with what users actually need.
