The Two Faces Of Generative AI: Keeping Your People And Data Safe

by Sameh Jarour — 12 months ago in Artificial Intelligence · 3 min. read

There is only one way to describe the recent explosion of ChatGPT and similar generative tools into the overall technosphere and into business lingo and usage: virally exponential. While it took Facebook 10 months to reach one million users, and Instagram 75 days, ChatGPT achieved this feat in 5 days, with the website registering 1.6 billion visits in June 2023, a mere few months after its launch.

This growth has been propelled by businesses for obvious reasons. The benefits for organizations are reflected in its ability to streamline manual operations, minimize time spent on repetitive tasks, and automate mundane, time-consuming work. A recent case study showed that ChatGPT led to a 40% reduction in time spent writing product descriptions, while support teams using conversational agents like ChatGPT experienced a 30% decrease in customer support response times.

While this growth has been driven by endless promises of a new era of business productivity and potential, it has also opened a Pandora's box of risks, a concern reflected in a recent BI survey.

About 40% of respondents in the BI survey expressed concern about generative AI using information in ways that could breach intellectual property rights. This stems primarily from the nature of deep learning models like ChatGPT, which can learn from their conversations with users.

Real-world instances, like employees at Samsung unintentionally feeding sensitive information to ChatGPT while using it to check source code, underscore the importance of safeguarding confidential data.

The risks of ChatGPT extend beyond the sharing of proprietary information, like software code, or personally identifiable information (PII), like credit card numbers or addresses.

There is also an exfiltration risk from external hackers: using the app over an unsecured or public Wi-Fi network to converse with ChatGPT opens a window for someone with ill intent to intercept the session and see what data is being shared.

Turning the technology against its users, there is increasing evidence that hackers can exploit generative AI itself, using it to find vulnerabilities in data security systems, write convincing phishing emails, and even help create ransomware and custom malware designed to evade security systems.

For example, a malicious insider could use AI to identify sensitive data and then use ChatGPT to generate well-written phishing emails to other employees or business partners. And even when no bad intentions are present, damage can occur through mere negligence or lack of awareness.

Over time, the use of generative AI can also change the security culture inside an organization. Overreliance on generative AI and ChatGPT can lead to the neglect of important aspects of data security, such as manual review and verification.

Ramifications and Mitigations

In addition to the potential damage to a business’s reputation and finances, data breaches, including those caused or assisted by ChatGPT, can lead to legal consequences and regulatory compliance failures. For example, businesses may be subject to fines and other penalties if they are found to be in violation of data protection laws such as GDPR, CCPA, or HIPAA.

The jury is still out on the possible scope of PII misuse in these apps, but there are already jurisdictional precedents: Italy, for example, temporarily banned ChatGPT over privacy concerns.

Long story short, ChatGPT's promise of being quick and easy is not a free-for-all. Adopting a security-first mindset and educating employees, not only about your company's data security policies but also about AI and its potential threats, is essential for all businesses. Remember, remote workers call for extra consideration.

As in the example above, the danger lies not only in internal usage but also in reliance on unsafe network environments such as unsecured or public Wi-Fi.

Businesses should also implement appropriate data loss prevention (DLP) security measures, such as encryption, access controls, and regular security audits.
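To make this concrete, here is a minimal sketch of the kind of pre-send screening a DLP control might apply to outbound prompts. This is an illustration, not any vendor's actual implementation: the check_prompt and send_to_chatbot functions and the regex patterns are hypothetical, and real DLP products rely on far richer classification (content tagging, data fingerprinting, contextual rules).

```python
import re

# Hypothetical illustration: simple regex patterns for common sensitive data.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def send_to_chatbot(prompt: str) -> None:
    """Block the request if the prompt appears to contain sensitive data."""
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return
    # ... here the prompt would be forwarded to the generative AI API ...
    print("Prompt passed the DLP check and was sent.")

if __name__ == "__main__":
    send_to_chatbot("Summarize the ticket for card 4111 1111 1111 1111.")
    send_to_chatbot("Draft a polite reply thanking the customer for feedback.")
```

In practice, a check like this would live in a browser extension, network proxy, or endpoint agent, so the screening happens before any data leaves the employee's machine.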

As security threats grow more sophisticated, so should your DLP measures. Dedicated data loss prevention software such as Safetica can help, for instance by blocking data tagged as classified or sensitive from being shared on the web with ChatGPT. It's a small step in the data security arsenal, but an important one.

In conclusion, generative AI tools like ChatGPT promise a new era of productivity for organizations, but it's important to set up the tools and culture to ensure that this new era is sustainable and safe for their most important assets: people and the data they use.

Sameh Jarour

Sameh Jarour is the CMO of Safetica, a global software company that delivers data protection solutions for businesses of all types and sizes. He leads the marketing team at Safetica, driving go-to-market and growth-hacking initiatives in the US and Western Europe to empower SMBs and mid-tier companies to adopt data loss prevention with minimal resources through Safetica's SaaS offering, the easiest DLP solution to implement and integrate.
