
ChatGPT: How AI and Machine Learning Are Revolutionizing Cybersecurity

Advances in technology and artificial intelligence (AI) are changing the way people work by eliminating manual tasks and speeding up the digestion and analysis of data. When used in conjunction with the knowledge of skilled cybersecurity professionals, AI and machine learning have transformed the ability to detect and respond to cybersecurity threats. The recent introduction of ChatGPT brings about more possibilities for improving and streamlining cybersecurity practices. However, like all technological advances, it has the potential to introduce new threats into network environments. 

ChatGPT (Generative Pre-trained Transformer) is an AI-powered conversational system created by OpenAI. The system interacts in a conversational way, making it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. These capabilities make it suitable for chatbots, AI system conversations, and virtual assistants. However, ChatGPT is also capable of additional tasks like writing and debugging code, translating, and drafting articles or blog posts. ChatGPT was released for public use in November 2022. Although the system is currently in beta testing, it is extremely popular for a variety of purposes.

Thanks to its deep learning capabilities and natural language processing, ChatGPT is particularly advanced. So, how is the system poised to change the cybersecurity industry? It has the potential to advance cybersecurity products that improve detection and response to cyberattacks while enhancing communication during incident response. However, it also has the potential to be used in malicious ways to breach and infect company networks. This article explores the future possibilities of ChatGPT in cybersecurity. 


ChatGPT and Cybersecurity

There's no doubt ChatGPT has taken the world by storm. The chatbot gained 1 million users in the first five days after its release in late November, and by January 31, the website reported up to 28 million daily visits. In the cybersecurity industry, ChatGPT's potential has already been recognized by cybersecurity professionals and cyberattackers. Let's take a look at how ChatGPT can improve communication and speed in detection and response, then explore the potential for increased cybercrime using the tool.

ChatGPT for Threat Intelligence

AI and machine learning (ML) are already essential components of an effective cybersecurity solution. Cybersecurity tools use AI and ML to detect threats, reduce alert fatigue, identify zero-day exploits, respond to alerts and attacks, and eliminate manual tasks. ChatGPT doesn't work fundamentally differently from earlier AI systems, but it is more effective. Its ability to analyze large volumes of data to identify anomalies is only the beginning of its threat detection capabilities. The system's conversational interface and its ability to understand and write code are where it really shines.
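
To picture how this kind of ML-driven detection works under the hood, here is a minimal sketch using scikit-learn's IsolationForest to flag an anomalous login event. The features, sample values, and threshold are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of ML-based anomaly detection on login events.
# Feature choices and data are illustrative assumptions, not a vendor implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, failed_attempts, megabytes_transferred]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 2, 14], [10, 0, 11], [12, 1, 16],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

# A 3 a.m. login with many failures and a huge transfer should stand out.
suspicious = np.array([[3, 25, 900]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```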

To use cybersecurity tools like a SIEM, analysts have to write queries and commands, which requires specific knowledge and a considerable amount of time. ChatGPT's AI language capabilities mean cybersecurity and IT professionals can simply ask the system to write these expressions. Furthermore, ChatGPT's ability to understand natural language means it can analyze text-based communications like emails and chat logs to detect potential threats. By automating these tasks, cybersecurity professionals can streamline workflows and focus on high-value tasks that require human intervention. 
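As a rough illustration of that workflow, the sketch below asks a ChatGPT-style model to draft a SIEM search from a plain-English request. It assumes the openai Python package (v1 or later), an API key in the OPENAI_API_KEY environment variable, and a gpt-3.5-turbo-class model; the prompt wording and the query target (Splunk SPL) are examples, not part of any specific product integration.

```python
# Sketch: asking a ChatGPT-style model to draft a SIEM search from plain English.
# Assumes the openai Python package (v1+) and an API key in OPENAI_API_KEY;
# the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

request = (
    "Write a Splunk SPL search that finds more than 10 failed SSH logins "
    "from a single source IP within a 5 minute window."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a SOC analyst assistant. Return only the query."},
        {"role": "user", "content": request},
    ],
)

print(response.choices[0].message.content)  # Review before running against production data
```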

Vulnerability management is another critical aspect of cybersecurity. Proactive protection is the best way to avoid breaches that can lead to devastating and expensive attacks. Central Threat Intelligence (CTI) feeds are commonly used to identify known vulnerabilities. While these feeds help protect networks that haven't yet been exposed to a specific threat, they don't address vulnerabilities that haven't yet been discovered. ChatGPT's ability to understand code could make it useful for the identification of unknown vulnerabilities. As a result, it has the potential to help businesses avoid falling victim to zero-day attacks. For example, when Hackersploit asked ChatGPT to find problems in PHP code with a known vulnerability, the system not only identified the security weakness but also provided the code to fix it.
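
The Hackersploit demonstration used PHP, but the class of flaw involved (SQL injection) is easy to show in any language. The hypothetical Python sketch below contains the vulnerable pattern an LLM code review would typically flag, alongside the parameterized fix it would usually suggest; the table, columns, and sample data are invented for illustration.

```python
# Illustration of the class of flaw in the Hackersploit PHP demo, shown here in Python.
# Table, columns, and sample data are hypothetical.
import sqlite3

def get_user_vulnerable(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is concatenated directly into the SQL statement,
    # so a value like "admin' OR '1'='1" returns every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def get_user_fixed(conn: sqlite3.Connection, username: str):
    # FIXED: a parameterized query keeps the input as data, not executable SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'admin')")

print(get_user_vulnerable(conn, "admin' OR '1'='1"))  # leaks every row
print(get_user_fixed(conn, "admin' OR '1'='1"))       # returns nothing
```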

ChatGPT for Automated Security Controls

An automation script is the code used to direct automated activities, and writing one takes a significant amount of time. ChatGPT's advanced capabilities mean it can quickly generate code based on specific requirements, reducing the amount of time analysts spend writing scripts. In other words, cybersecurity professionals can simply ask ChatGPT to generate a script to complete a designated automated task. 
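
For a sense of what such a request might produce, here is a sketch of the kind of automation script an analyst could ask ChatGPT to generate: scan an SSH authentication log and report source IPs with repeated failed logins. The log path, line format, and failure threshold are assumptions for illustration, not output actually produced by ChatGPT.

```python
# Sketch of an automation script an analyst might request:
# scan an SSH auth log and report source IPs with repeated failed logins.
# The log path, line format, and threshold are illustrative assumptions.
import re
from collections import Counter

LOG_PATH = "auth.log"   # hypothetical log file
THRESHOLD = 10          # failures before an IP is reported
FAILED_LOGIN = re.compile(r"Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})")

def find_suspicious_ips(log_path: str, threshold: int) -> dict:
    failures = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    for ip, count in find_suspicious_ips(LOG_PATH, THRESHOLD).items():
        print(f"{ip}: {count} failed logins - consider blocking or investigating")
```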

ChatGPT can also assist incident response with suggestions based on the information the system has been trained on. The ability to quickly and effectively respond to security incidents can greatly improve security posture. ChatGPT can suggest how to handle security investigations and threat hunting, summarize security issues, and draft security policies. These actions reduce the workload on cybersecurity teams that are already stretched thin.

Adversarial Attacks with ChatGPT

ChatGPT is designed to refuse inappropriate requests. However, there are ways to trick the system into developing malicious code and text. When you consider ChatGPT's ability to write natural, conversational language, it becomes easy to see its potential for use in social engineering attacks. The ability to understand and write code also presents significant concerns. ChatGPT's current abilities make it clear that it has the potential to create the following security risks:

  • Phishing Emails: In its capacity as a personal assistant, ChatGPT can be used to quickly craft emails. For cybercriminals, the system could make developing phishing emails faster and the results more convincing than ever before. One of the easiest ways to spot a phishing email is to look for spelling and grammatical mistakes, and ChatGPT's ability to translate and write flawless text makes it an obvious tool for making phishing emails harder to detect.
  • Impersonation: Within seconds, ChatGPT can learn to write text in a real person's voice and style. CEOs are already using it to craft emails and speeches. With the right approach, threat actors could just as easily use the tool to impersonate public figures, develop emails and texts to carry out BEC attacks, and slip past security filters with text that avoids suspicious patterns. 
  • Spam: Hackers use spam to flood inboxes with useless information, gather personal information, and spread malware. Creating spam requires time to develop specific text targeting victims. ChatGPT can generate spam text instantly, improving the attacker's workflow.
  • Ransomware: Because of its built-in safeguards, ChatGPT won't directly write ransomware code. However, with the right types of requests, the system can be tricked into writing malicious code. By breaking the request into a step-by-step procedure that describes the intended goal without labeling it as malware, an attacker can trick the system into generating the code. 

While it's impossible to eliminate the potential for AI to be used for malicious purposes, there is some hope that continued beta testing will help put stronger security rules in place. At the moment, however, ChatGPT gives attackers the ability to generate phishing emails and spam, and to carry out BEC attacks, with greater speed and accuracy than ever before.


Is ChatGPT Accurate?

OpenAI makes it clear that ChatGPT is imperfect in its current state. A list of limitations states that the system sometimes writes plausible-sounding but incorrect or nonsensical answers. The list also notes that ChatGPT is sensitive to input phrasing, which is possibly part of the reason it's been successfully used to generate malware during testing. 

While the use of AI with the ability to understand and generate code has exciting potential for the creation of advanced cybersecurity software, its current inaccuracies are a severe drawback. For example, the coding Q&A site Stack Overflow has banned AI-generated answers due to a high volume of incorrect but plausible responses. While the tool is being refined based on user feedback to address inaccuracies, the potential for mistakes means it can't be depended on for definitive accuracy in cybersecurity tasks. 

AI, no matter how advanced, doesn't have the capability to act on its own. The information that enables all types of machine learning is fed into machines by humans. As such, it is always subject to error. However, the system's capabilities to ingest large amounts of data and recognize its own mistakes give it the potential to offer new knowledge to individuals attempting to breach networks and those striving to protect them.

Lowering Barriers to Entry with AI and Machine Learning

It's common to mistake artificial intelligence for an out-of-the-box solution that completes tasks on its own. Such ideas have fueled blockbusters about machine takeover and the Earth's eventual destruction due to our reliance on machines. In reality, AI depends on humans to dictate specific tasks. Consider the importance of SIEM optimization for successful results and the elimination of false alerts; ChatGPT has similar requirements to provide accurate results. Yet ChatGPT's natural language abilities do give it an edge over many typical AI offerings. 

The ability to use conversational language for requests means users need less technical knowledge to produce effective results. Cybersecurity tools tuned by industry experts are typically more effective at accurately detecting and responding to threats than those configured by users with limited security knowledge. Similarly, the sophistication of a threat is generally tied to the knowledge level of the threat actor.

ChatGPT's ability to understand conversational language and complete complex tasks means it could dramatically lower the technical knowledge required of users. As a result, amateur threat actors are more likely to be able to successfully carry out attacks beyond their technical capabilities.

On the flip side of this concern, the system offers the same potential for beginners in the cybersecurity profession. ChatGPT's ability to write queries, commands, and scripts means cybersecurity and IT professionals in the early stages of their training can carry out complex tasks accurately. Such advances could streamline cybersecurity education and help address the shortage of skilled professionals in the industry.

The Future of ChatGPT in Cybersecurity

At the end of the day, ChatGPT has the capability to make life easier for both cybercriminals and cybersecurity professionals. Although it's still in beta testing, ChatGPT illustrates new technology that is likely to have a positive impact on the cybersecurity industry. Yet there is little doubt that the tool will also be used in malicious ways by cyberattackers. For cybersecurity experts, ChatGPT could lead to cybersecurity tools that improve learning and communication around cyberattacks. However, it also means developing procedures to detect the new and increased threats brought about by ChatGPT.

Like all other AI and machine learning tools, ChatGPT depends on humans to supply relevant information for effective use. In its current form, the information provided by ChatGPT is not guaranteed to be accurate, and even as the system evolves, it will require human input. ChatGPT's ability to rapidly digest large amounts of data and understand code will make it a valuable addition to the toolboxes of cybersecurity teams, but it will not have the capability to replace trained and experienced experts. Cybersecurity threats from AI are not a new problem. As ChatGPT use evolves, cybersecurity professionals will be tasked with the challenges of fielding an increase in attacks and changes to the threat landscape. 

At its core, cybersecurity will always depend on human analysts and threat hunters to stay ahead of sophisticated threats to organizational networks. Overdependence on AI tools will render businesses more vulnerable due to the constantly evolving threat landscape. For effective cybersecurity protection, all businesses need a layered solution that provides 24/7 visibility, threat detection, and incident response. Managed detection and response is a comprehensive solution that combines the power of cutting-edge technology and sophisticated tools with the education and expertise of skilled cybersecurity professionals. 

BitLyft AIR® provides businesses with 24/7/365 monitoring, threat detection, incident response, and remediation capabilities to protect devices and endpoints across your entire network. Our unique approach to central threat intelligence provides your organization with herd immunity to threats you haven't been exposed to, and rapid remediation stops threats in seconds with automated actions to quarantine threats and continuously protect your network. We provide businesses of all sizes with all the benefits of a fully-operational security operations center with minimal investment for unparalleled protection against the sophisticated cybersecurity threats that target all industries. If you're unsure of your cybersecurity posture, don't wait to become the victim of an attack. Get in touch with our cybersecurity experts to learn more about a custom cybersecurity plan developed specifically for your unique business. 



Emily Miller

Emily Miller, BitLyft's dynamic Content Marketing Manager, brings a vibrant blend of creativity and clarity to the cybersecurity industry. Joining BitLyft over a year ago, Emily quickly became a key team member, using her Advertising and Public Relations degree from the University of Tampa and over 10 years of experience in graphic design, content management, writing, and digital marketing to make cybersecurity content accessible and engaging. Outside of BitLyft, Emily expresses her creativity through photography, painting, music, and reading. Currently, she's nurturing a cutting flower garden, reflecting her belief that both her work and gardening require patience, care, and creativity.
