Half of the data scientists surveyed

Posted: Thu Feb 06, 2025 3:47 am
by rakhirhif8963
There are serious privacy and data security concerns around generative AI (GenAI) applications, but companies are rushing to adopt them anyway, according to Immuta’s new AI Security & Governance Report, which surveyed nearly 700 data scientists. The report also highlights some of the security and privacy benefits GenAI tools can offer, Datanami reports.

The report paints a grim picture of looming data security and privacy issues as companies rush to take advantage of GenAI capabilities provided by large language models (LLMs) such as GPT-4, Llama 3, and others.

“In their rush to take advantage of LLM capabilities and keep up with the rapid pace of adoption, employees at all levels are feeding massive amounts of data into unknown and untested AI models,” the report says. “The potentially catastrophic security costs associated with this are not yet clear.”

Half of the data scientists surveyed by Immuta say their organizations have four or more AI systems or applications installed. However, implementing GenAI comes with significant privacy and security concerns.

According to Immuta, 55% of respondents consider the inadvertent exposure of sensitive information by an LLM to be one of the most serious threats. Slightly fewer (52%) are concerned that users will expose sensitive data to an LLM through their prompts.

When it comes to security, 52% of respondents said they were concerned about malicious attacks using AI models, and slightly more (57%) said they had seen a “significant increase in AI-based attacks” over the past year.

Overall, 80% of respondents believe GenAI makes it harder to maintain security, according to Immuta’s report. The problem is compounded by the nature of public LLMs such as ChatGPT, which can use submitted data as training input for subsequent model versions. This creates “a higher risk of attacks and other cascading security threats,” the report notes.