In a recent security report, Facebook’s parent company Meta says its security team has been monitoring new malware threats, particularly those that weaponize the current wave of interest in AI.
According to the report, “we’ve looked into and taken action against malware strains over the past few months that exploit people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality.”
According to Meta, as many as ten new malware families have been found posing as ChatGPT or other AI chatbot tools in order to break into users’ accounts.
Meta cites the spread of fraudulent web browser extensions that claim to offer ChatGPT functionality as one of the more urgent schemes. Users install these extensions in browsers such as Chrome or Firefox, hoping to gain AI chatbot capabilities. Some of the extensions actually work and provide the advertised chatbot features. However, they also bundle malware that can access the user’s device.
Meta says it has found more than 1,000 distinct URLs advertising malware disguised as ChatGPT or other AI-related tools, and has blocked them from being shared on Facebook, Instagram, and WhatsApp.
“Our research, and that of other security researchers, has repeatedly demonstrated that malware operators, like spammers, attempt to capitalise on emotive issues and trending subjects to attract people’s attention. The latest wave of malware campaigns has taken notice of generative AI tools becoming more and more popular, with the ultimate goal of tricking people into clicking on dangerous links or downloading malicious software,” the report notes.
Meta says that once a user installs the malware, bad actors can launch their attack immediately, and they continue refining their tactics to get around security measures. For example, criminals can automate the process of taking over business accounts that have access to advertising.
Meta says it has also reported the malicious links to the many domain registrars and hosting providers that these illicit actors rely on.
Security researchers at Meta also cover the more technical features of contemporary malware strains such as Ducktail and NodeStealer in detail in the report.
As with Ducktail, Meta has observed that bans and widespread reporting of these malicious strains have forced their operators to adapt quickly in an effort to survive. They have been seen using cloaking to get around automated ad-review systems and abusing popular marketing tools such as link shorteners to hide the final destination of their links. Many also switched their lures to other popular themes, such as Google’s Bard and TikTok marketing support. After Meta blocked malicious links on file-sharing and site-hosting platforms, some of these campaigns began targeting smaller services, such as Buy Me a Coffee, a platform creators use to accept support from their audiences, to host and deliver malware.
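A link shortener hides a payload URL behind one or more HTTP redirects, so defenders typically expand the link before deciding whether its destination is trustworthy. The following is a minimal, hypothetical sketch of that expansion step (the `fetch_location` callback and the example URLs are illustrative, not taken from Meta’s report); injecting the fetcher keeps the redirect-following logic testable without network access:

```python
from typing import Callable, Optional


def resolve_final_url(
    url: str,
    fetch_location: Callable[[str], Optional[str]],
    max_redirects: int = 10,
) -> str:
    """Follow a chain of redirects to reveal a shortened link's final destination.

    `fetch_location(url)` should return the Location header value if the URL
    redirects (e.g. HTTP 301/302), or None if it is the final destination.
    """
    for _ in range(max_redirects):
        next_url = fetch_location(url)
        if next_url is None:
            return url  # no further redirect: this is the real destination
        url = next_url
    raise RuntimeError("redirect chain too long; possible redirect loop")
```

In practice the callback would issue a HEAD request and read the `Location` response header; the expanded URL can then be checked against a blocklist before the original link is allowed through.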
To fool people into opening malicious files, NodeStealer samples are often disguised as PDF and XLSX files, complete with matching icons and filenames. As a result, users may not realise they are launching a potentially harmful executable rather than opening a benign document.
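A file’s real type is determined by its leading bytes, not its name or icon: a genuine PDF begins with `%PDF`, an XLSX file is ZIP-based and begins with `PK\x03\x04`, and a Windows executable begins with `MZ`. A scanner can therefore flag the mismatch this disguise relies on. A minimal defensive sketch (the function names are illustrative, not part of any particular tool):

```python
import os

# Leading-byte signatures ("magic bytes") for a few common formats.
MAGIC = {
    b"%PDF": "pdf",
    b"PK\x03\x04": "zip-based (xlsx/docx/jar)",
    b"MZ": "exe",  # Windows PE executable
}


def sniff_type(data: bytes) -> str:
    """Identify a file's real format from its first bytes."""
    for signature, name in MAGIC.items():
        if data.startswith(signature):
            return name
    return "unknown"


def looks_disguised(filename: str, data: bytes) -> bool:
    """Flag files whose extension claims a document but whose bytes say executable."""
    extension = os.path.splitext(filename)[1].lower()
    return extension in {".pdf", ".xlsx"} and sniff_type(data) == "exe"
```

This mirrors what mail gateways and endpoint scanners commonly do: trust the content, not the filename, before letting a user double-click.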
The malware leverages Facebook credentials harvested from the target’s browser data to send multiple unauthorised requests to Facebook URLs and gather advertising-related account information. By sending requests from the targeted user’s own computer to the APIs behind the Facebook web and mobile apps, the malware can access this data while hiding behind the user’s real IP address, cookie values, and system configuration, appearing as a legitimate user and session. This makes the activity far more difficult to detect. The stolen information then allows the threat actor to assess and exploit the user’s advertising accounts to run unauthorised ads.