Professional bio: Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years of experience in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this exclusive interview with subject matter expert Micki Boland, CyberTalk.org uncovers why AI is having a moment right now, the business implications, and how organizations should address threats posed by new technologies.

AI has been around for decades. Why and how has it recently become more powerful?

It is true that Artificial Intelligence (AI) has been around for decades, with roots going back to World War II. Alan Turing is considered a foundational contributor to AI, and the “Turing Test” still stands as a benchmark for whether a machine can exhibit human-like intelligence.

Although we have had AI with machine learning algorithms in decision science for quite some time, two things are enabling big leaps in the advancement of AI right now: first, neural networks are advancing generative AI, and second, compute power has taken a quantum leap.

Massive amounts of compute and processing power are needed for generative AI. Case in point: an Interesting Engineering article from February 27th of this year noted that many of the large language models run on NVIDIA’s A100 chip, which provides a 20X performance improvement over the previous generation and offers “the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s)”. NVIDIA is also offering AI-as-a-service to public cloud service providers as well as enterprises.

As the trendiest topic in tech, is the excitement warranted, in your view?

The announcement of ChatGPT, built on the large language model GPT-3.5, by OpenAI (openai.com) on November 30th of last year hit the news in a big way and caught the attention of the mainstream media, not just the technology industry. I remember, we were at an evening meeting with a financial customer when we got the announcement on Tech Wire, and everyone was speculating about what ChatGPT would bring. Since this announcement, we have seen huge adoption of ChatGPT.

A Tooltester.com report from March 20, 2023 gave some OpenAI statistics: ChatGPT has 1B monthly website visitors and an estimated 100M active users. More users will adopt it with the release of the new large language model GPT-4, which is rumored to be trained on as many as 100 trillion parameters (a figure OpenAI has not confirmed) and which can incorporate both text and images.

Regular people are adopting ChatGPT to play, write stories for their kids, rewrite documents, create new communications, generate new content, answer questions, and gather details for research. OpenAI’s DALL·E 2 has been around for a while and is fun for creating new and interesting digital content and images. For my part, I think the generative AI tools for code development are the most interesting in terms of reward and payoff, though potentially the most risky of all: OpenAI Codex, which can write code in a dozen languages; PolyCoder, an open source alternative; and Tabnine, an IDE code auto-completion tool.
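
To make the code-generation workflow concrete, here is a minimal sketch using the legacy (pre-1.0) OpenAI Python library and the Codex model code-davinci-002 as they existed at the time of writing. The API key placeholder and the prompt are illustrative assumptions, and model names and availability change over time.

```python
import openai  # pip install openai (pre-1.0 interface assumed)

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

# Ask Codex to complete a function from a comment and a signature
response = openai.Completion.create(
    model="code-davinci-002",  # Codex code-generation model
    prompt=(
        "# Python function that validates an email address\n"
        "def is_valid_email(email):"
    ),
    max_tokens=150,
    temperature=0,  # deterministic output for code
)

print(response.choices[0].text)
```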

Can you speak to the most impactful ways in which AI is helping transform businesses?

Enterprise organizations have long taken advantage of AI through machine learning, neural networks, and data analytics. Many have their own structures around AI, with data scientists and governance around specific use cases related to using AI for business intelligence, analytics, telemetry, cloud AI, etc. However, with OpenAI’s ChatGPT, which can be easily adopted by teams and individuals for their own purposes, there is often no specific governance or defined business use case. Free of constraints around using ChatGPT in the enterprise setting, teams and individuals can find many interesting uses and time-saving techniques.

One thing that’s super compelling is using ChatGPT to obtain value and meaning from the human-generated knowledge that exists within corporations: internal websites and wikis, knowledge bases, repositories, Q&A, as well as unstructured data sources. Using ChatGPT for corporate training could also be interesting, and individuals learning to code software can use the AI to help them troubleshoot their code. The Forbes article “Top 10 Uses for ChatGPT in The Banking Industry” identifies further compelling use cases: customer service, fraud detection, virtual assistance, compliance, wealth management, and customer onboarding. As a cautionary note, enterprise organizations do not always understand the risk of adopting generative AI, and few organizations have GRC, legal, or corporate policies around the acceptable use of generative AI platforms.
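
As an illustration of the knowledge-mining idea above, below is a minimal retrieval sketch: embed internal snippets, pick the most relevant one for a question, and pass it to the chat model as context. It assumes the pre-1.0 openai Python library; INTERNAL_DOCS, the API key placeholder, and the sample question are hypothetical stand-ins for a real wiki or knowledge-base export.

```python
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

# Hypothetical internal knowledge snippets (stand-in for a wiki export)
INTERNAL_DOCS = [
    "VPN access requires an approved change ticket and MFA enrollment.",
    "Quarterly access reviews are owned by the IAM team.",
]

def embed(text):
    # text-embedding-ada-002 was OpenAI's embedding model at time of writing
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def answer(question):
    doc_vecs = [embed(d) for d in INTERNAL_DOCS]
    q_vec = embed(question)
    # Cosine similarity selects the most relevant internal snippet
    best = max(
        range(len(INTERNAL_DOCS)),
        key=lambda i: doc_vecs[i] @ q_vec
        / (np.linalg.norm(doc_vecs[i]) * np.linalg.norm(q_vec)),
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context: {INTERNAL_DOCS[best]}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(answer("Who owns quarterly access reviews?"))
```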

What are the most exciting initiatives, effects, stories, etc. that you’ve heard about or seen in relation to ChatGPT?

The craziest thing I have read is that ChatGPT GPT-4 was put to the test on academic and professional exams and passed, though with variance in the resulting scores: the Uniform Bar Exam, the SAT, and the GRE (funny enough, it performed poorly on the writing section).

What are the expected returns associated with applying AI-based technologies?

I will use Check Point Software Technologies’ four real-world, cyber security focused use cases for ML/DL, which are providing huge positive returns for the cyber security industry. AI/ML is delivering innovation in efficacy and efficiency: reduction in false positives; improved detection and prevention, including detection of first-seen, zero-day attacks; anomaly detection and threat intelligence that provide context-aware threat detection and mitigation; rapid malware family identification and attribution (Malware DNA), including first-seen malware variants; and rapid classification of documents and images.
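
As a simplified illustration of the anomaly-detection use case, the sketch below trains a scikit-learn IsolationForest on simulated per-connection telemetry and flags outliers. The feature set and data are invented for illustration; this is not Check Point’s actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-connection features: [bytes_sent, duration_sec, dst_port]
normal = rng.normal(loc=[5_000, 30, 443], scale=[1_000, 10, 1], size=(500, 3))
suspicious = np.array([
    [900_000, 2, 4444],    # large exfiltration-like burst to an odd port
    [50, 0.1, 31337],      # tiny flow to a classic backdoor port
])

# Fit on "known good" traffic, then score new connections
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```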

Have you seen unintended consequences of applying AI-based tools in business and if so, can you speak to that a bit?

Ahh, I think there are already documented unintended consequences of using ChatGPT. One case, published in the Vanderbilt Hustler, the official newspaper of Vanderbilt University, alleged that two Vanderbilt professors used ChatGPT to create a letter to students sharing information about the campus shooting at Michigan State University. Students discovered that the letter distributed by the professors was written by ChatGPT and were allegedly disturbed by the lack of personal connection and empathy. The professors had to step aside while under review for not following the typical process for the review of communications.

Other unintended consequences can occur when a user enters restricted, protected, or otherwise sensitive customer, business, or reference data into ChatGPT or any generative AI platform. Open source media reports indicate that Walmart, Microsoft, Amazon, and other corporations have warned their employees in written memos and have created corporate guidelines regarding potential breaches of confidentiality while using generative AI.
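
One practical guardrail is to screen prompts before they leave the corporate boundary. The sketch below redacts a few obviously sensitive patterns with regular expressions; the patterns and labels are illustrative assumptions, and a production deployment would rely on a proper DLP engine rather than a handful of regexes.

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches before the prompt is sent to any AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Customer SSN 123-45-6789 emailed jane.doe@example.com"))
# -> Customer SSN [REDACTED-SSN] emailed [REDACTED-EMAIL]
```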

How can technology developers and business leaders address the issue of bias that has been found in some AI-based tools?

Big drawbacks of AI are dataset bias, algorithmic bias, and the fact that ML decisions inherently lack contextual awareness. OpenAI has publicly acknowledged that ChatGPT is biased and has publicly apologized. The MIT Technology Review assessed ChatGPT GPT-4 and concluded that “Though OpenAI has improved this technology, it has not fixed it by a long shot”.

What else should CEOs and executive teams know about success vs. failure in applying AI-based technologies?

CEOs and executive teams need to know the limits and constraints of AI, particularly its lack of abstract reasoning and real-world common sense. Algorithms require huge training data sets to learn, and once created, they are brittle and will fail when a scenario differs even slightly from the training set. They are rigid, cannot adapt after initial training, and their decisions are often challenging to interpret, which makes them opaque. Algorithmic opacity can be particularly harmful with ML models in areas of maximum public interest, such as health care.

This harm can manifest in several ways. An erroneous treatment recommendation based on an ML model can lead to potentially disastrous consequences for a patient or a group, and the opacity of ML models means the reason for the mistake can be untraceable. AI is not “the oracle”: it cannot answer every question and should not be used to make decisions for humans, especially decisions that have the potential to impact human lives. Technology ethics leaders must always warn of bias in AI: dataset bias, algorithmic bias, and the contextual blindness inherent in ML decisions.

What can AI teach us about what it means to be human in your view?

The advances in generative AI demand an improved focus on the ethical use of AI and human-centered AI. Most importantly, humans should be at the center of AI. Artificial intelligence exists to help humans, not the other way around. Humans must always have control of when and how they want to use AI, and humans must be made aware of all AI “under the hood.” I recommend the book Human-Centered AI by Ben Shneiderman.

Your thoughts on the future of ChatGPT and similar technologies?

Easy, I asked ChatGPT! No really, my own prediction is that we will continue to see an explosion in emerging generative AI platforms and that we will see them used in every service and application, everywhere. Europol warns that, by 2026, as much as 90% of online content may be synthetically generated. And not all for good: generative AI is behind deepfakes and voice fakes, and it is already being used by cyber criminal gangs for CEO impersonation and fraud attacks, for creating non-consensual pornography, and more.

Is there anything else that you would like to share with the CyberTalk.org audience?

As a cyber security warrior, I have a call to action for all organizations: form a council, or work under the auspices of risk management leadership, GRC, legal general counsel, the board of directors, and executives, to create a risk management framework for the use of generative AI. NIST’s AI Risk Management Framework is a solid starting point for enterprise organizations: https://www.nist.gov/itl/ai-risk-management-framework

If your organization is allowing employees to use ChatGPT and other generative AI platforms on corporate devices and networks, immediately inform all employees of the risk of breaching corporate confidentiality, and create acceptable use policies and guardrails for these platforms. Make this part of corporate ethics and cyber security training ASAP, and review and assess continuously.

For more insights from expert Micki Boland, please see CyberTalk.org’s past coverage here.

Want to stay up-to-date with trends in technology? Check out the CyberTalk.org newsletter! Sign up today to receive top-notch news articles, best practices, and expert analyses, delivered straight to your inbox.