LLM-jacking is a term describing unauthorized access to AI models

19 September 2024 · 1 minute read · Author: Newsman

Hackers are actively stealing user credentials to access large language models (LLMs) such as ChatGPT and Claude, and even to activate models that have not yet reached the market.

This activity, called **LLM-jacking**, is unauthorized access to artificial intelligence models through stolen accounts. It can cause serious financial losses for affected companies, sometimes exceeding $100,000 per day. Attackers also use the hijacked models to improve their attack tools and to bypass content controls, including sanctions and censorship.

Criminals also use LLMs to analyze images, solve puzzles, and carry out other fraudulent activities. For example, researchers observed a Russian student using stolen credentials to access a Claude model through Amazon Web Services (AWS). Such activity has recently become more common as users seek to bypass the sanctions imposed on Russia and continue using restricted technologies.
