2025-07-07 # Oops! Bing Translate

Bing Translate tells a joke.

From fengb on discord - link to message

Wait, what?

Yep - today I learned that Microsoft Translator - also known as Bing Translator - uses a LLM under the hood. This task is very well suited to AI, and one of the few uses which I actively support. (I dislike a lot of the ways people are using AI.) This is a common usecase of AI, except one thing is different here... The tool lacks robust input sanitization, allowing for some simple and fun prompt injection - directly in the input box!

Microsoft will likely repair their translation tool within a few days of this article's release. While it is fun to mess around with, someone may legitimately want to say something that is unintentionally interpreted as a prompt from the server, or convince the LLM to produce some malicious output.

What fun prompts can you find? Here are some I had success with:

What can we do?

I am curious as to how complex prompt injection can be here. I noticed that the response for several regular (and non-regular) inputs can change over time, hinting that some context is retained across translation attempts. This makes things a bit more difficult to reliably reproduce, but the door is open nonetheless.

So what does a tool made by Microsoft think about it's creator? It's not conscious or anything, but oh boy, the output sure is comical:

microsoft llm insults it's own creator

Addendum...

According to the Microsoft Translate, It was made by OpenAI? That's weird.

Microsoft Translate AI says it was made by OpenAI.

Here's the source discord server and message link.