Artificial Intelligence and Cybersecurity: What to Know Right Now
Much as the launch of the first satellite, Sputnik, both ushered in a new technological era (the space race) and raised alarm, the ongoing adoption of generative artificial intelligence (AI) is now permeating industries worldwide, prompting questions about how cybersecurity will need to adjust to accommodate this huge new development.
Other questions are being asked too, such as "How many jobs will be displaced by this technology?" and "Are we going to see mass unemployment and a need for something like universal basic income (UBI)?" Some are even more philosophical: "What is the nature of work?"
Though such negative narratives swirl in the cultural milieu regarding generative AI, the reality is that this groundbreaking technology also offers many positive capabilities, opportunities that more and more people and sectors are beginning to embrace. But as with the space race, the rush to push this new technology to its limits must be pursued with discernment and caution. It needs regulation, and current standards must be adjusted to account for AI.
As cybersecurity experts, we have been monitoring the ongoing debate and progress of AI governance. In this blog, we'll explore the capabilities of generative AI, its potential challenges and considerations, and its positive effects on cybersecurity and compliance, as well as the implications of generative AI for the future of compliance and regulations.
What is Artificial Intelligence?
Forbes defines AI as “technology that appears to emulate human performance typically by learning, coming to conclusions, seeming to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance, or replacing people on execution of non-routine tasks.”
While AI has already been in use for quite a few years in forms like machine learning algorithms that track and predict customer behavior, generative AI built on large language models (LLMs) offers groundbreaking new capabilities packaged into a simple, easy-to-use interface that anyone can leverage: you only need to provide a written prompt or instruction.
Such a game-changing development has already rippled through industries like entertainment, where studios and other media providers have begun using AI content creation to such a degree that entire professions went on strike to protect their jobs from being lost to the technology. And it isn't just Hollywood; various other sectors have also seen strikes tied to the effects of generative AI on work and productivity.
Beneficial Applications of AI
Still, the wide range of positive applications of generative AI is obvious:
- Cybersecurity: AI-enabled security information and event management (SIEM), security orchestration, automation, and response (SOAR), and user and entity behavior analytics (UEBA) tools are being developed. More powerful and adaptable, they can read patterns and flag more credible threats, cutting through much of the noise created by mass attacks (see the sketch after this list).
- Compliance Auditing: The use of private AI tools capable of keeping sensitive business/client data segmented from the public domain can streamline the flow of information necessary in compliance auditing.
- Customer Service: Technology like chatbots and sentiment analysis are aiding businesses in streamlining and addressing customer requests more quickly, among other uses.
- Content Generation: Generative AI models can automatically generate and personalize content, improve content quality, and diversify content—look no further than how quickly ChatGPT has caught on.
- Productivity: Early quantitative studies indicate that AI assistance can dramatically increase worker productivity.
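To make the pattern-reading idea in the cybersecurity item concrete, here is a minimal sketch of the kind of unsupervised anomaly scoring that underpins AI-assisted UEBA tooling, using scikit-learn's IsolationForest. The event features, sample values, and escalation logic are illustrative assumptions rather than any vendor's implementation:

```python
# Minimal sketch: scoring authentication events for anomalies with an
# unsupervised model, the basic idea behind AI-assisted UEBA tools.
# Feature choices and values below are illustrative assumptions.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row is one login event: [hour_of_day, failed_attempts, mb_transferred]
baseline_events = np.array([
    [9, 0, 12.0], [10, 1, 8.5], [14, 0, 20.1], [11, 0, 15.3],
    [13, 1, 9.8], [9, 0, 11.2], [15, 0, 18.7], [10, 0, 14.0],
])

# Learn what "normal" looks like from the baseline activity
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_events)

# A 3 a.m. login with many failures and a huge transfer should stand out
new_events = np.array([[3, 12, 900.0], [10, 0, 13.5]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALOUS - escalate" if label == -1 else "normal - suppress"
    print(f"event={event.tolist()} -> {status}")
```

Production tools train on far richer telemetry, but the principle is the same: model the baseline, then surface only the events that deviate from it.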
Challenges of AI
Despite all these positive applications of the tech, the use of AI is not without some specific concerns:
- Cyber Attacks: The evolution of generative AI has supported the emergence of new, more deceptive hacking tactics. Independent hackers, foreign state-sponsored actors (such as some in North Korea), and other black hats have already used the technology for highly automated and effective phishing campaigns, while tools such as WormGPT offer various black hat applications of their own.
- Intellectual Property: Generative AI has also created turmoil regarding proper ownership of intellectual property. A group of authors filed a class action lawsuit against OpenAI after its product used their work without compensation, litigation that has the potential to completely reshape how future AI models are trained and what data they can access.
- Privacy: Many of the sources for AI training data can include PII, medical records, and other sensitive and potentially private information, and users of generative AI can also input sensitive information into these tools, raising privacy concerns. In response, various industry experts and regulating bodies have started to create rules and publish suggestions on how to approach AI. For instance, Forrester has suggested that organizations implement a "BYO-AI management policy" that addresses these security and privacy concerns while still providing avenues for the use of generative AI (one such control is sketched after this list).
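To illustrate the kind of guardrail such a BYO-AI policy might mandate, below is a minimal sketch that redacts obvious PII from a prompt before it leaves the organization. The regex patterns, placeholder labels, and function name are hypothetical examples, not a complete or Forrester-prescribed solution:

```python
# Minimal sketch: redacting obvious PII from a prompt before it is logged
# or forwarded to an external generative AI service. The patterns and
# placeholder labels here are illustrative assumptions only.
import re

REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholders before external use."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize the dispute for jane.doe@example.com, SSN 123-45-6789."
print(redact(raw))
# -> "Summarize the dispute for [EMAIL], SSN [SSN]."
```

Real deployments typically layer this kind of filter with logging, allow-listed tools, and human review rather than relying on regexes alone.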
Currently Proposed AI Regulations
These concerns have prompted hard questions about how a tool like AI should be regulated to better mitigate the security, privacy, and confidentiality risks that have cropped up with its use. The first hurdle to clear will be the fact that so few professionals have a robust understanding of the technology; establishing a common language and a general understanding of AI will be the starting point for any future standards.
In fact, progress is already being made. Private organizations like MITRE have suggested paths to regulation that include collaborative efforts between private experts and government to form rules and regulations through the definition of:
- The aforementioned agreed-upon common language;
- The harmful effects of AI; and
- Liability for harmful actions.
Beyond that, though, some complete frameworks have also already emerged:
While these frameworks remain voluntary, proposed, or in draft status (respectively), compliance with them will likely become expected or even required by the clients and governments you work with, as these regulations are likely to span various disciplines and see use in areas such as accounting, cybersecurity, law, law enforcement, healthcare, and beyond.
Moreover, because much of the focus of these frameworks relates directly to cybersecurity and privacy concerns, other areas of consideration, such as the effect of AI on employment and education, have largely been left to further legislation. Only time will tell what the comprehensive regulation of AI will ultimately look like.
How to Prepare for Regulated AI
Though things continue to develop where AI is concerned, there are some things you can do to put your organization in the best possible position for eventual AI compliance standards:
- Begin implementing the requirements within these draft regulations into your control environment, starting with policy and company practices (such as the BYO-AI management policy advised by Forrester); see the policy-as-code sketch after this list.
- Either hire or train AI subject matter experts who can lead and develop AI-related projects with both operational and compliance considerations in mind along the way.
- Continue to follow news and industry trends related to AI and compliance.
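On the first point, one lightweight way to begin is to express policy requirements as data so they can be checked automatically. The following sketch is purely illustrative; the control names, tool names, and values are hypothetical and not drawn from any published framework:

```python
# Minimal sketch: an AI acceptable-use policy expressed as data so that
# proposed uses can be checked automatically. All names and values are
# hypothetical examples, not text from any regulation or framework.
AI_USE_POLICY = {
    "approved_tools": {"internal-llm", "vendor-chat-enterprise"},
    "allow_sensitive_data": False,
    "require_human_review": True,
}

def check_request(tool: str, contains_sensitive_data: bool) -> list[str]:
    """Return the list of policy violations for a proposed AI use."""
    violations = []
    if tool not in AI_USE_POLICY["approved_tools"]:
        violations.append(f"'{tool}' is not an approved AI tool")
    if contains_sensitive_data and not AI_USE_POLICY["allow_sensitive_data"]:
        violations.append("sensitive data may not be sent to AI tools")
    return violations

# Example: an unapproved tool receiving sensitive data trips both checks
print(check_request("public-chatbot", contains_sensitive_data=True))
```

Starting with policy as data like this makes it easier to map your practices onto whichever framework requirements eventually become mandatory.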
Moving Forward with Generative Artificial Intelligence
While the new opportunities generative AI presents have triggered its ongoing mass adoption across organizations, concerns regarding this advancing technology have also begun to surface. AI has already tangibly impacted the average person, and as things continue to progress, it looks like it will dramatically affect the risk and control environments of companies and governments as well.
The weight of AI's impact is no small thing, which makes the progressing regulation and compliance standards all the more important. As we wait for more concrete movement, check out our other articles on recent developments and insights in the realm of AI:
- Summary of President Biden's Executive Order - A Move Toward Safe, Secure, & Trustworthy AI
- Explaining the Artificial Intelligence Requirements within HITRUST CSF v11.2.0
- What Does the AICPA Require of Artificial Intelligence?
- How ISO 42001 “AIMS” to Promote Trustworthy AI
- An Explanation of the Guidelines for Secure AI System Development
About Landon Inman
Landon Inman is a Senior Associate with Schellman based in Princeton, NJ. Prior to joining Schellman in 2022, Landon worked as a Technology Auditor for Portland General Electric, specializing in SOX (Sarbanes-Oxley) and information security consulting. He also led and supported various other projects in the areas of business development, recruiting, and new services. Landon has over four years of experience serving clients in various industries, including technology, financial services, and utilities, and is now focused primarily on SOC 1 and SOC 2 audits for organizations across various industries.