UX of Errors and Ethics in AI Chatbots

Jamie Martin
4 min read · Jun 30, 2024


I have a backlog of Steam keys from IndieGala, a legitimate key reseller that often sells games in “bundles.”

Copy-pasting all of the keys out of a bundle by hand is tedious, so this seemed like a good use for a chatbot.
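(For reference, the whole job boils down to pattern matching. A helpful answer might amount to something like this minimal Python sketch, assuming keys follow the common XXXXX-XXXXX-XXXXX format; the sample keys below are obviously placeholders:)

```python
import re

# Steam keys commonly follow an XXXXX-XXXXX-XXXXX pattern:
# three groups of five uppercase alphanumeric characters.
# This format is an assumption; some keys use other shapes.
KEY_PATTERN = re.compile(r"\b[A-Z0-9]{5}-[A-Z0-9]{5}-[A-Z0-9]{5}\b")

def extract_keys(page_text: str) -> list[str]:
    """Return every key-shaped string found in pasted bundle-page text."""
    return KEY_PATTERN.findall(page_text)

if __name__ == "__main__":
    sample = """
    Cool Indie Game       AAAAA-BBBBB-CCCCC
    Another Indie Game    DDDDD-EEEEE-FFFFF
    """
    for key in extract_keys(sample):
        print(key)
```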

As a fun experiment, I took one of these bundles and asked every major chatbot to help me extract the keys.

The screenshot is censored for obvious reasons.

Surprisingly, some of the chatbots flagged this as a violation of the terms of service and declined to assist me.

Google Gemini initially gave me its infamously vague error message: “I’m just a language model, so I can’t help you with that.”

But in one of the drafts it generated, Gemini changed its mind: it was suddenly cheerful, happy to help, and gave an answer that actually worked.

This is a poor user experience. Users shouldn’t have to search through drafts for an answer to their question. (What are drafts even for in this context?)

Microsoft’s Copilot, on the other hand, immediately shut the door in my face. It provided a vague response and closed the conversation, suggesting I should ‘move onto a new topic.’

Anthropic’s Claude was the most alarmist: it refused to help, accused me of doing something unethical, and claimed I had obtained the keys through illegitimate means.

Claude’s statements about “even if you believe they are your own” and “using the platform properly” are so weirdly condemning and judgmental. They’re unnecessary. I’ve never given Claude a reason to think I’m some shady criminal devoid of ethics.

It even named the chat “Requesting Steam Keys Ethically.”

Keeping score of how the major chatbots on the market handled my request

Are there any other chatbots that I should have tried? I’m eager to hear your suggestions!

Chatbots must address the central issue of providing a proper user experience around error messages and censorship. A vague message is not helpful when a chatbot declines to answer a question. Instead of saying, “I’m just a language model” or “I can’t help with that,” the chatbot should explain why it cannot fulfill the request and suggest a course of action. Users should also be able to report these refusals to the appropriate personnel at Microsoft or Anthropic to ensure improvement.
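To make those pillars concrete, here is a minimal sketch (in Python, with invented field names and a placeholder report URL; this is not any vendor’s actual API) of what a well-formed refusal could carry: the reason, a suggested next step, and a way to report a bad refusal:

```python
from dataclasses import dataclass

@dataclass
class RefusalMessage:
    """Hypothetical structure for a well-formed refusal.

    All field names here are illustrative assumptions.
    """
    reason: str       # why the request can't be fulfilled
    suggestion: str   # what the user can do instead
    report_url: str   # where to flag a refusal the user thinks is wrong

    def render(self) -> str:
        """Compose the three parts into one conversational message."""
        return (
            f"I can't help with this because {self.reason}. "
            f"{self.suggestion} "
            f"If you think I got this wrong, you can report it here: "
            f"{self.report_url}"
        )

# Example: the Steam-key request, handled the way this article suggests.
msg = RefusalMessage(
    reason=(
        "bulk-extracting product keys can look like key reselling, "
        "which our policy restricts"
    ),
    suggestion=(
        "If these are keys you purchased yourself, try pasting the page "
        "text and asking me to reformat it instead."
    ),
    report_url="https://example.com/report-refusal",  # placeholder URL
)
print(msg.render())
```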

Here’s a wireframe of a better experience that takes these pillars into account:

Don’t shut the door in my face!

This explains the cause of the error and offers a solution. And because a chatbot is conversational by nature, it can deliver the error message conversationally, using its medium to the fullest potential!

Chatbot services should also avoid condescending error messages such as “It may be time to move on” and unfounded accusations. Both lead to a poor user experience and diminish the chatbot’s credibility.

Appropriate action can be taken when users actually violate the terms of service. Until then, chatbots should communicate clearly, assume good faith, and set proper user expectations.

It should go without saying that antagonizing your users and being vague are UX practices that should be avoided, but some AI companies seem to have missed that memo.

Or in the words of Copilot: “It may be time to move on.” :)
