At least 100 malicious AI/ML models were found on the Hugging Face platform, some of which can execute code on the victim's machine, giving attackers a persistent backdoor.
Hugging Face is a tech firm engaged in artificial intelligence (AI), natural language processing (NLP), and machine learning (ML), providing a platform where communities can collaborate and share models, datasets, and complete applications.
JFrog's security team found that roughly a hundred models hosted on the platform feature malicious functionality, posing a significant risk of data breaches and espionage attacks.
These models slipped through despite Hugging Face's security measures, which include malware, pickle, and secrets scanning, as well as scrutiny of model functionality to flag behaviors such as unsafe deserialization.
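To illustrate why pickle scanning matters: Python's pickle format lets an object define `__reduce__`, which names an arbitrary callable that runs the moment the file is deserialized. A poisoned model checkpoint can abuse this so that merely loading it executes attacker code. The sketch below uses a harmless `print` as a stand-in payload; the class name is illustrative, not from the report.

```python
import pickle

class MaliciousModel:
    """Mimics how a poisoned checkpoint abuses pickle deserialization."""
    def __reduce__(self):
        # pickle will call this callable with these args at load time.
        # A real payload would invoke something like os.system(...) to
        # open a reverse shell; print() is a benign stand-in.
        return (print, ("arbitrary code ran during pickle.loads()",))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # the payload fires here, before any method is called
```

This is why the ecosystem is moving toward formats like safetensors, which store only tensor data and cannot embed executable code.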
[...] The analysts deployed a honeypot to attract and analyze the activity and determine the operators' real intentions, but were unable to capture any commands during the single day the connection remained active.
(Score: 3, Insightful) by The Vocal Minority on Sunday March 03 2024, @04:49AM
I'm out of mod points, but thanks for this. I do tend to customise the models I get from Hugging Face and fine-tune them, so I'm not sure how useful this is for me, but it's worth knowing about all the same.