Are businesses glossing over GenAI security risks?


Zscaler is a Business Reporter client.

ChatGPT, Bard, Midjourney… unless you’ve been living in an information blackout for the past year, these names will be familiar. Indeed, almost every industry, function and role has spent much of 2023 speculating about how generative AI (GenAI) tools such as these could be turned to their advantage.

But beyond the hype, how have businesses actually responded to the rising popularity of GenAI? And have they been able to balance the threats with the benefits it brings?

To gain insight, Zscaler recently conducted a survey, All Eyes on Securing GenAI, of more than 900 IT leaders across 10 global markets. Our aim was to quantify how many organisations are using GenAI tools, how IT departments are monitoring this use, and whether they have adapted their security models accordingly.

GenAI integration: rapid uptake

Business investment in GenAI solutions around the world is on the fast track. IDC forecasts that investment will grow by $127 billion between 2023 and 2027, and our findings certainly support this projection, showing a strong uptake of GenAI tools by global organisations.

Almost all (95 per cent) of the surveyed IT leaders said their organisations are already using GenAI tools in some way. Nearly four in five (78 per cent) are employing them for data analysis, the main business use case. Just over half are turning to GenAI for R&D services development (55 per cent) and marketing (53 per cent), while a little over two in five have tapped into these tools to streamline end user tasks (44 per cent) and logistics (41 per cent).

Staying on the cutting edge of technology is necessary to remain competitive in today’s digital world, but balance is critical, and this demands security be factored in. The question is: are organisations rushing into GenAI usage too soon in their efforts to keep up, or is early adoption a calculated risk?

Despite such high usage figures, a significant 89 per cent of surveyed IT leaders admit their organisation considers GenAI tools to be a potential security risk, with nearly half (48 per cent) agreeing that the threat may currently outweigh the opportunities these tools could unlock.

This points to a concerning divide between belief and action, considering only 5 per cent say their organisation is either holding back on usage to see where the technology goes, or has blocked it entirely. Early GenAI adoption appears to be less of a calculated risk than we might like to believe.



Concern versus action: the great divide

The top concerns listed by organisations not using GenAI were the potential loss of sensitive data, a lack of resources to monitor its use, and a lack of understanding of its dangers and benefits. With 23 per cent of the organisations already using GenAI tools not monitoring that usage at all, it’s easy to see why a lack of resources to track usage was raised as a threat.

When implementing any new technology, it’s crucial to understand the unique security challenges it raises so that these don’t overshadow its potential. Failing to implement any additional GenAI-related security measures – which a third of the organisations using it admit to – is an incredibly risky move that could leave organisations vulnerable.

And while 31 per cent of that same group recognise that security must become more of a priority and have included GenAI-specific solutions in their roadmaps, intent is far less effective than action. As the saying goes, the temporary tends to become the permanent.

Slow(er) and steady

When we looked at who was advocating for early GenAI adoption, the results were surprising. Interestingly, 59 per cent of IT leaders said they were driving it, with only 21 per cent responding to requests to do so from business leads and even less interest stemming from general employees (5 per cent). The situation, it appears, is less about “pressure” to introduce new technology and more about IT teams’ “desire” to keep up with digital innovation.

The fact that IT teams are behind early adoption should offer reassurance for both IT and business leaders. It means there is room to strategically temper the pace of GenAI adoption, giving IT enough time – a window of opportunity – to establish a firm hold on its security measures before vulnerabilities turn into crises. But with 51 per cent of respondents expecting interest in using GenAI tools to significantly increase between now and the end of the year, the window is closing.

A complete ban on GenAI’s use is not the solution, as this would put organisations at a substantial competitive disadvantage. The solution is slightly slower, more strategic, systematic implementation. Another old adage, “more haste, less speed”, has bearing here.



Flipping the script: from threat to opportunity

Rules for GenAI governance are important because how, where and why these tools are used will be unique to your organisation. If you’ve yet to establish governance guidelines or need to strengthen them, a great way to tackle this is to gather a group of cross-functional experts from within the organisation (not only IT) to form a “tiger team”. This team can set rules for required security and privacy risk assessments for new GenAI deployments, make decisions on solution implementations, and also take charge of closing any existing knowledge gaps about the technology.

Ultimately, GenAI governance is a data protection story, which makes classifying data a vital first step. As it stands, only 46 per cent of respondents are confident that all of their data has been classified. Data that is properly classified by category and level of confidentiality is easier to protect, enabling IT to securely authorise which people, applications and devices can have access to it. Such a Zero Trust approach will be essential for securing GenAI usage.
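To make the idea concrete, here is a minimal, purely illustrative sketch of a classification-driven access check in Python. The labels, roles and policy table are hypothetical assumptions for the example, not part of any survey finding or product; the point is simply that once data carries a classification, a deny-by-default rule can decide whether it may be sent to a GenAI tool.

```python
# Illustrative sketch of a Zero Trust-style gate: data may only reach a
# GenAI tool if its classification explicitly permits the requester's role.
# Classification labels and roles below are hypothetical examples.

ALLOWED_ROLES = {
    "public": {"employee", "contractor"},
    "internal": {"employee"},
    "confidential": set(),  # never shared with external GenAI tools
}

def may_send_to_genai(classification: str, role: str) -> bool:
    """Deny by default; allow only explicitly permitted combinations."""
    return role in ALLOWED_ROLES.get(classification, set())
```

Unclassified data falls through to the empty set and is blocked, which mirrors the argument above: until data is classified, the safe default is to treat it as too sensitive to share.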

Solutions already exist to enable IT teams to keep full logs of tool usage as well as create and enforce policies governing which GenAI sites employees can visit and how they interact with them. Meanwhile, integrated products such as Zscaler Data Loss Prevention (DLP) put Zero Trust configuration into practice, protecting three main avenues of data loss or leakage: unsanctioned apps, sanctioned apps, and devices.

When a new technology emerges, it brings both positive and negative use cases. But with the right security measures in place, organisations can be empowered to tap into the potential of GenAI safely and responsibly, flipping the use of these tools from threat to opportunity.

