As businesses strive to enhance productivity and streamline their operations, the integration of generative artificial intelligence (AI) solutions like ChatGPT has gained significant traction. These intelligent AI bots have proven instrumental in assisting employees with various tasks and inquiries.
However, an alarming trend has emerged: a significant number of workers regularly enter sensitive company data into these AI bots.
Notably, over the past year, the number of compromised ChatGPT credentials offered for sale on the dark web has exceeded 100,000, according to a research report published by cybersecurity firm Group-IB on June 20.
26,802 compromised accounts appear on dark web in May
In May 2023 alone, the number of available logs containing compromised ChatGPT accounts hit a record high of 26,802, up from 11,909 in January 2023. The one-year trend is especially stark: only 74 stealer logs were recorded in June 2022.
Per Group-IB, these accounts are typically compromised through phishing campaigns designed to steal sensitive user information: credentials saved in browsers, bank card details, cookies, browsing history, crypto wallet data, and so on.
“Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials,” said Dmitry Shestakov, Head of Threat Intelligence at Group-IB.
On a regional basis, Asia-Pacific had “the highest concentration of ChatGPT credentials being offered for sale over the past year,” the report says.
Other information offered for sale on such marketplaces includes lists of domains found in the logs and the IP addresses of compromised users.