Keely Wilkins is an Evangelist with the Office of the CTO as well as a Pre-Sales Security Engineer at Check Point. She has been in the technology and security industry for over 25 years. On behalf of Check Point, Keely participates in the “Partnership against Cybercrime” working group with the World Economic Forum. She earned her MS in Cybersecurity from Florida Institute of Technology and is expected to complete her MLS in Cybersecurity Law and Policy at Texas A&M University School of Law in late 2024.

How can hackers leverage AI to augment social engineering capabilities? How can you block AI-based social engineering threats? In this jam-packed expert interview, Keely Wilkins shares insights about these topics and so much more!

If your organization is concerned about social engineering, this is must-read material! Discover powerful examples, actionable steps to follow, and this security engineering expert’s unique take on the growth of AI – it might surprise you.

In your view, how has AI influenced the effectiveness and sophistication of social engineering attacks?

Artificial Intelligence (AI) has made the job of social engineering easier for some. Social engineering is the art of deception, trickery, and manipulation. AI is the newest tool available to the people or groups who aim to entice the viewer, listener, or user to act in a specific way, often contrary to how they’ve been trained.

The qualifier “easier for some” suggests that AI doesn’t help all malicious actors. Indeed, if the malicious actor does not know how to persuade their audience to do or not do specific things, AI (at its current maturity level) will not help bridge that gap. Not intentionally.

AI influences the effectiveness and sophistication of social engineering attacks by automating tasks that the malicious actor would normally have to spend time doing manually. One such task is learning about the target. Instead of manually looking for the target across multiple social media sites, the AI can be instructed to perform that search. AI shortens the timeline for researching a target and crafting a personalized message for that target.

Deepfake technology is a prime example of how the timeline for a sophisticated social engineering campaign can be shortened. Most of us have pictures and videos of ourselves posted on social media. Maybe we’re tagged in that media; maybe the video includes a good sampling of our voice. Through machine learning and deep learning (ML/DL), those posts can be used to build a simulation of our likeness and/or voice to trick someone into believing we’ve done or said something we never did. This may be benign, or it may trick the target into believing you’re in danger. This ruse has already been used to trick a mother into believing that her daughter had been kidnapped. It was a hoax; her daughter was safe. The details of the case can be seen here.

Are you familiar with specific incidents where AI tools have been used to hack into or gain illicit access to a company’s assets or resources?

Two recent incidents come to mind. The first was in the news last year and involved remote job interviews. This attack grew in popularity toward the end of lockdown. Remote positions that carried elevated network or access privileges, like Network Engineer or System Administrator roles, attracted some candidates who weren’t actual people. The resumes were fake or stolen, remote interviews were scheduled, and it was later learned that the candidate wasn’t real. Malicious actors had created deepfakes of the person whose resume they stole for the purpose of getting a job with admin-level privileges in the environment. Details of this type of scam can be found here.

Many employers spotted the deepfakes during the interview process and reported the fraud to law enforcement. Investigating and prosecuting this type of crime is difficult.

The second incident is from two months ago. At face value, this may not be categorized as social engineering because it doesn’t appear to have been a targeted attack. Despite that, data was fabricated by AI, not independently verified, and accepted as truth. This situation put two attorneys in jeopardy because they cited AI-fabricated legal precedent during a court proceeding. “Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.” (Neumeister, 2023). The details of this case can be found here.

This scenario is concerning because there is no con artist (a.k.a. attacker) at the helm. It demonstrates that AI, if left unchecked, can fabricate data that prompts humans to act.

What are the challenges that security teams face when it comes to detecting social engineering attempts that leverage AI-based technologies? How can those challenges be addressed?

AI has spawned new opportunities for attack efficiency, but not new attacks. The same is true of social engineering. Manipulating people is not new. AI just makes it more efficient and convincing.

The challenge faced by security teams is human nature. Humans socialize for survival. Humans also enjoy being helpful. As a result, we are vulnerable to any scenario that plays on our innate need to help others. Falling victim to manipulation can make the target feel embarrassed or foolish, especially in a professional setting.

Addressing the challenge of human nature starts with awareness of the risk, then education to support better decision making, and finally a process to report suspect activity.

From a programmatic perspective, if the target (user) can provide the security team with details on how and when they were first contacted, along with any other interactions that followed, that may help identify the flow of the attack. In turn, this may yield actionable data that can be blocked within the security solutions.
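As an illustration, the minimal sketch below shows how a user’s report of a suspicious contact could be distilled into indicators a security team might feed into its blocking tools. The field names and extraction logic are illustrative assumptions, not any specific product’s schema.

```python
# Hypothetical sketch: turn a user's report of a suspicious contact into
# blockable indicators. Field names and logic are illustrative assumptions.
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class SuspiciousContactReport:
    reported_by: str          # who filed the report
    first_contact: str        # when the user was first contacted (ISO timestamp)
    channel: str              # e.g., "email", "sms", "voice"
    sender: str               # address or number that initiated contact
    urls: list[str] = field(default_factory=list)   # links the user was asked to click
    notes: str = ""           # free-form description of later interactions

def extract_indicators(report: SuspiciousContactReport) -> dict[str, set[str]]:
    """Distill a report into indicators that could be blocked."""
    indicators: dict[str, set[str]] = {"senders": set(), "domains": set()}
    indicators["senders"].add(report.sender.lower())
    for url in report.urls:
        host = urlparse(url).hostname
        if host:
            indicators["domains"].add(host.lower())
    return indicators

# Example usage with a fabricated report:
report = SuspiciousContactReport(
    reported_by="jdoe",
    first_contact="2023-08-14T09:12:00Z",
    channel="email",
    sender="it-helpdesk@example-lookalike.com",
    urls=["https://portal.example-lookalike.com/reset"],
)
print(extract_indicators(report))
```

The point is not the code itself; it is that a consistent intake format makes the flow of the attack easier to reconstruct and block.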

How can AI help analyze large volumes of data to identify patterns and detect social engineering? What are the limitations of these tools?

AI uses Machine Learning (ML) and Deep Learning (DL) to ingest and analyze data. The AI continuously learns with every interaction, every prompt. Generative AI can create new data objects based on previous input/interactions.

This process is very different from working with a flat dataset, a relational database, or even big data analytics. New opportunities for AI continue to evolve.

AI can help analyze large volumes of data to identify patterns and, to an extent, detect social engineering.

Biased attribution, limited oversight, and unregulated processes limit the effectiveness of AI in this scenario. Consider the difference between instructing the AI to look for a specific pattern in curated data versus directing it to find patterns in raw data. The intent of the human may be the same, but the result of the data analysis will be vastly different.

Specific to social engineering, the human at the helm may instruct the AI to look for all behavior that matches XYZ (a known social engineering model), and the AI will respond accordingly. That limits the AI to spotting only known behavior that is associated with social engineering. What about social engineering techniques that are not yet known or categorized?
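To make that limitation concrete, here is a minimal sketch (illustrative only; it uses scikit-learn’s IsolationForest with fabricated features, keywords, and thresholds, not any production detection model). It contrasts a rule that only flags messages matching a known pattern with an anomaly detector that can surface unfamiliar behavior, which a human still has to interpret.

```python
# Illustrative sketch only: contrasts matching a known social engineering
# pattern with flagging unfamiliar behavior. Features, keywords, and
# thresholds are fabricated assumptions, not a real detection model.
import re
from sklearn.ensemble import IsolationForest

KNOWN_PATTERN = re.compile(r"(urgent|wire transfer|gift card)", re.IGNORECASE)

def matches_known_model(message: str) -> bool:
    """Only catches behavior someone has already categorized."""
    return bool(KNOWN_PATTERN.search(message))

# Fabricated per-message features: [link count, urgency words, new-sender flag]
baseline = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]] * 20
detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

def looks_anomalous(features: list[int]) -> bool:
    """May surface unfamiliar behavior, but a human still has to decide
    whether it is actually social engineering."""
    return detector.predict([features])[0] == -1

print(matches_known_model("Quick favor: buy gift cards for the CEO"))   # True
print(matches_known_model("Please review the attached vendor portal"))  # False
print(looks_anomalous([5, 3, 1]))  # likely True: far from the baseline
```

The rule-based check will never flag a technique it has not been told about; the anomaly detector might, but it also produces noise that requires human judgment.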

How can cyber security staff stay up to date on the latest advancements in AI and social engineering techniques, effectively protecting their organizations?

Stay curious and think creatively. AI and Generative AI are in a constant state of growth. There is no time to get comfortable with what it is capable of today. Once it was released to the public and integrated into countless tools, all predictability went out the window. It’s not been lab-grown in accordance with best practices, ethics or legal requirements. It is feral, but it can be tamed.

Security staff should consider what is possible with the technology and what is probable given human nature.

Many vendors offer the ability to block ChatGPT, but that doesn’t account for every integration or variant available to the public. It’s not practical to attempt to programmatically block every iteration of ChatGPT because there are so many SaaS applications that use it. As a result, the organization must rely on policy to direct employee use of AI and Generative AI tools. From that Acceptable Use Policy, the programmatic policy will evolve.

In contrast, security vendors have been using AI for years to identify and prevent attacks. The fluid nature of the AI tools available to the general public should not be confused with AI tools that have been purpose-built for cyber security.

How can software engineers collaborate with cyber security professionals to integrate AI-driven solutions into software systems, thus better protecting people from social engineering threats (if at all)?

Secure by design is the goal. I’m not a software engineer; I cannot speak from their perspective. Having been in technology and security for over 25 years, I do have a wish list of what I’d like to see across the board.

A) Evaluate the whole system. What is the function of the software?  Where will it be installed?  Who will use it?  How will it be accessed?  How will it be monitored?  What data will be stored?  Are there compliance requirements?

B) Test everything. Feature tests.  Vulnerability tests.  Security tests.

C) Learn from the past. Do not reuse vulnerable code (a quick automated check is sketched below).
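As one small example of “learn from the past,” the sketch below checks pinned dependencies against an internally maintained list of versions that have already caused problems, so known-bad code doesn’t quietly return. It is a hypothetical illustration; the vulnerable-version list and file format are assumptions, not a real advisory feed.

```python
# Hypothetical sketch: fail a build if a pinned dependency matches a version
# the team has already flagged as vulnerable. KNOWN_VULNERABLE is an
# illustrative assumption, not a real advisory database.
import sys

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),   # placeholder entry for a previously exploited release
    ("oldparser", "0.9.1"),
}

def check_requirements(path: str) -> list[str]:
    """Return any pinned 'name==version' lines that match the known-bad list."""
    findings = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if "==" not in line or line.startswith("#"):
                continue
            name, version = line.split("==", 1)
            if (name.lower(), version) in KNOWN_VULNERABLE:
                findings.append(line)
    return findings

if __name__ == "__main__":
    bad = check_requirements("requirements.txt")
    if bad:
        print("Known-vulnerable pins found:", ", ".join(bad))
        sys.exit(1)
```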

In your view, what are the most critical skills and knowledge areas that cyber security professionals should pursue in order to address the AI-driven social engineering threat landscape?

Awareness of how AI and Generative AI tools and synthetic content are designed and used is key. Since many AI-based tools are now available to the general public, the use cases for the technology are seemingly endless.

The origin and purpose of the tool are the biggest things for security professionals to keep in mind. AI technology is used in most software today; it’s been applied for specific purposes and within specific controls. Is the tool at hand built to solve a problem, or was it built for nefarious purposes? That’s not always easy to decipher.

Another aspect to consider is human nature. Can the tool be used or misused to effect a criminal or harmful end? How can that be prevented?

What are the best ways to raise awareness among employees around social engineering attacks that exploit AI (from phishing emails to deepfakes)?

Education goes a long way. People are capable of great things when given the information needed to succeed.

  1. Share case studies and examples of deepfakes to demonstrate what is possible.
  2. Reiterate their value to the security of the organization as a reason to be cautious.
  3. Deconstruct social engineering campaigns to raise awareness of situations to avoid.
  4. Provide clear instructions on what to do and how to report suspect behavior internally.
  5. Run attack simulations for employees to practice what they’ve learned (a simple way to tally the results is sketched below).
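For the last two items, a tally as simple as the hypothetical sketch below can show whether the training is working: the click rate should fall over time while the reporting rate rises. The field names and sample data are illustrative assumptions.

```python
# Hypothetical sketch: summarize the results of a phishing simulation.
# Field names and the sample data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SimulationResult:
    employee: str
    clicked_link: bool      # did they fall for the simulated lure?
    reported_it: bool       # did they use the internal reporting process?

def summarize(results: list[SimulationResult]) -> dict[str, float]:
    total = len(results)
    clicked = sum(r.clicked_link for r in results)
    reported = sum(r.reported_it for r in results)
    return {
        "click_rate": clicked / total,      # want this trending down
        "report_rate": reported / total,    # want this trending up
    }

results = [
    SimulationResult("amy", clicked_link=False, reported_it=True),
    SimulationResult("raj", clicked_link=True, reported_it=False),
    SimulationResult("lee", clicked_link=False, reported_it=False),
    SimulationResult("kim", clicked_link=False, reported_it=True),
]
print(summarize(results))  # {'click_rate': 0.25, 'report_rate': 0.5}
```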

What research and development efforts are in-progress in relation to the fields of AI and social engineering? What future trends do you expect to see?

There is excitement and caution around anything AI and Generative AI related. There is also a lot of hype, along with doomsday warnings about ceding control to machines.

Research and development of AI is being performed by countless organizations across the globe. The trends I see include how to use the technology responsibly, how to prevent malicious usage of the technology, how to avoid those doomsday predictions, how to use it to help people, and how to make it sustainable.

Future trends are difficult to predict. Usage of ChatGPT and similar tools has decreased in recent months, in part because the luster has worn off and in part because there’s only so much the casual user can achieve. At the same time, creative minds have latched on to it in unexpected ways, and that’s exciting.

Is there anything else that you would like to share with the CyberTalk.org audience?

The compute power necessary to build and use AI and Generative AI tools is significant. There will be a tipping point where the cost must be balanced against the value.

“Another relative measure comes from Google, where researchers found that artificial intelligence made up 10 to 15% of the company’s total electricity consumption, which was 18.3 terawatt hours in 2021. That would mean that Google’s AI burns around 2.3 terawatt hours annually, about as much electricity each year as all the homes in a city the size of Atlanta.” (Saul & Bass, 2023)

“Researchers…say we need transparency on the power usage and emissions for AI models. Armed with that information, governments and companies may decide that using GPT-3 or other large models for researching cancer cures or preserving indigenous languages is worth the electricity and emissions, but writing rejected Seinfeld scripts or finding Waldo is not.” (Saul & Bass, 2023)
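As a quick back-of-the-envelope check on the quoted figure, taking roughly the midpoint of the 10 to 15% range:

```latex
% midpoint of the quoted 10--15% share, applied to Google's 2021 consumption
0.125 \times 18.3\ \text{TWh} \approx 2.3\ \text{TWh of electricity per year}
```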

The cost I refer to is both financial and environmental. AI use cases are growing by the day, and there is optimism about how it can make our lives better. The doomsday predictions about AI focus on what the technology can take from humans in terms of jobs, creativity, and, of course, our freedom to choose what is best for us. It’s often entangled in dystopian science fiction. My view is focused on sustainability. What are we willing to give up while reaping the benefits of AI? We can tend to it like a bonsai or let it grow like weeds. In time, we will find balance.

Works Cited

Neumeister, L. (2023, June 8). Lawyers blame ChatGPT for tricking them into citing bogus case law. AP News. Retrieved from https://apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b

Saul, J., & Bass, D. (2023, March 9). Artificial Intelligence is Booming – So Is Its Carbon Footprint. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2023-03-09/how-much-energy-do-ai-and-chatgpt-use-no-one-knows-for-sure#xj4y7vzkg