Hackers can read your encrypted AI-assistant chats

Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants such as ChatGPT. According to the researchers, the flaw allows hackers to intercept encrypted conversations between users and these AI assistants and recover their content.


The researchers found that chatbots such as ChatGPT send their responses as a stream of small tokens, transmitted piece by piece so replies appear in real time. Because each token travels in its own encrypted packet, hackers who intercept the traffic can analyze the length, size, and sequence of those packets to reconstruct the responses.
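To make the side channel concrete, here is a minimal Python sketch of the leak under assumed conditions: an AEAD cipher (such as the AES-GCM suites commonly used in TLS) whose ciphertext length is the plaintext length plus a fixed overhead, and one token per encrypted record. The constant `AEAD_OVERHEAD` and the sample token list are illustrative assumptions, not values from the research.

```python
# Hypothetical sketch: how per-token streaming leaks token lengths
# through encrypted traffic that preserves plaintext length.

AEAD_OVERHEAD = 16  # assumed fixed bytes of auth tag added per record


def observed_size(plaintext: bytes) -> int:
    """Ciphertext length a passive eavesdropper would see on the wire."""
    return len(plaintext) + AEAD_OVERHEAD


# The assistant streams its reply one token per encrypted record.
response_tokens = ["The", " capital", " of", " France", " is", " Paris", "."]
packet_sizes = [observed_size(tok.encode()) for tok in response_tokens]

# The observer cannot decrypt the tokens, but can recover their lengths...
inferred_lengths = [size - AEAD_OVERHEAD for size in packet_sizes]
print(inferred_lengths)  # [3, 8, 3, 7, 3, 6, 1]
# ...and the length sequence is what the researchers' inference attack
# uses to guess the most likely response with a language model.
```

The point of the sketch is only that encryption hides the bytes, not their size: as long as each token gets its own packet, the sequence of packet lengths mirrors the sequence of token lengths.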


“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica in an email.


“This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or the client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”


“Our investigation into the network traffic of several prominent AI assistant services uncovered this vulnerability across multiple platforms, including Microsoft Bing AI (Copilot) and OpenAI’s ChatGPT-4. We conducted a thorough evaluation of our inference attack on GPT-4 and validated the attack by successfully deciphering responses from four different services from OpenAI and Microsoft.”


According to the researchers, there are two main mitigations: either stop sending tokens one at a time, or “pad” each token to the length of the largest possible packet, which reportedly makes the token lengths much harder to analyze.
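Below is a minimal sketch of the padding idea in Python. The record size and pad byte are assumptions chosen for illustration; the actual services would decide their own padded packet size and padding scheme.

```python
# Hypothetical sketch of the padding mitigation: pad every token to a fixed
# size before encryption so all records look identical on the wire.

PADDED_SIZE = 64  # assumed "largest possible packet" size in bytes
PAD_BYTE = b"\x00"


def pad_token(token: str) -> bytes:
    data = token.encode()
    if len(data) > PADDED_SIZE:
        raise ValueError("token longer than the padded record size")
    return data + PAD_BYTE * (PADDED_SIZE - len(data))


tokens = ["The", " capital", " of", " France"]
padded_records = [pad_token(t) for t in tokens]

# Every padded record now has the same length, so the observed ciphertext
# sizes no longer reveal individual token lengths.
assert len({len(r) for r in padded_records}) == 1
```

Either approach trades a little bandwidth or responsiveness for removing the length signal the attack depends on.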


Featured image: Image generated by Ideogram