
Artificial intelligence: A double-edged sword for technology & ethics

Paal Aaserudseter, Security Engineer, Check Point, featured on CyberTalk.org

Pål (Paul) has more than 30 years of experience in the IT industry and has worked with both domestic and international clients. He has a broad competence base that covers everything from general and data center security to cloud security services and development. For the past 10 years, he has worked primarily in the private sector, focusing on large and medium-sized companies across most verticals.

In this interview, Pål Aaserudseter, a Security Engineer for Check Point, discusses artificial intelligence, cyber security and how to keep your organization safe in an era of eerie and daunting digital innovation. Read on to learn more!

Is Silicon Valley’s AI-frenzy just another crypto craze or is the advancement more comparable to electricity, or sliced bread?

Great question! It’s difficult to make a direct comparison between Silicon Valley’s AI-frenzy and other technological advancements like electricity or sliced bread, as they are all unique and have had different impacts on society.

In my opinion, it’s more like the California Gold Rush of the 1800s than a crypto craze.

However, AI has the potential to be a transformative technology that can greatly impact various industries and aspects of everyday life.

AI has already shown its potential to revolutionize fields like healthcare, finance, transportation, and more. It can automate tedious tasks, improve efficiency, and provide insights that were previously impossible to obtain. AI can also help us solve complex problems and make better decisions.

It’s important to note that there is also hype surrounding AI, and some companies may be over-promising or exaggerating its capabilities. We should have realistic expectations and not view AI as a panacea for all problems.

Overall, the advancement of AI is not just another passing craze like crypto, but it remains to be seen how much of an impact it will have on society and how quickly it will be adopted.

Although AI has been used for a long time, I believe 2023 is the year that the public will remember as the “Year of AI,” when it had its breakthrough and became generally available through tools like ChatGPT.

Is it possible that AI will zoom past human capabilities and act of its own accord?

Predicting just how advanced AI will become is tough, but there are already categories describing exactly that.

Right now, the AI we use is called Narrow, or “weak,” AI (ANI – Artificial Narrow Intelligence). The category above, General AI (AGI – Artificial General Intelligence), is described as operating like the human brain, i.e., thinking, learning, and solving tasks like a human being.

The last category is Super Intelligence (ASI – Artificial Super Intelligence), which basically means machines that are smarter than we are.

Could AI eventually surpass human capabilities in some areas? Sure. If AI does reach AGI, there is a risk that it could act of its own accord and potentially even become a threat to humanity. This is known as the “AI alignment problem,” which is the challenge of aligning an AI’s goals and values with those of humans to ensure that it acts in a safe and beneficial manner.

While the possibility of AI becoming a threat is a concern, it’s important to note that there are also many benefits to developing advanced AI. For example, it could help us solve some of the world’s most pressing problems, such as climate change, disease, and poverty.

To mitigate the risks associated with advanced AI, it’s important that researchers and policymakers work together to ensure that AI is developed in a safe and beneficial manner. This includes developing robust safety mechanisms, establishing ethical guidelines, and promoting transparency and accountability in AI development.

How would you like to see governments, companies and regulatory bodies better govern AI development and release?

To better govern AI development and release, governments, companies, and regulatory bodies should consider the following:

1. Establishing ethical guidelines: There should be clear ethical guidelines for AI development and use that are aligned with societal values and principles, such as transparency, accountability, privacy, and fairness.

2. Encouraging transparency: Companies and organizations should be transparent about their AI systems, including how they are designed, trained, and tested. This will help build trust with the public and facilitate better oversight.

3. Promoting collaboration: Governments, companies, and other stakeholders should work together to develop shared standards and best practices for AI development and use. This will help ensure that AI is developed in a safe and responsible manner.

4. Prioritizing safety: Safety should be a top priority in AI development, and mechanisms should be put in place to prevent harm caused by AI systems. This includes developing robust testing protocols and implementing fail-safe mechanisms.

5. Fostering innovation: Governments should provide funding and resources to support research and development in AI, while also ensuring that innovation is balanced with responsible governance.

6. Encouraging public engagement: There should be opportunities for public engagement and input in AI development and regulation, to ensure that the needs and concerns of the public are considered.

Overall, governing AI development and release will require a collaborative effort on the part of governments, companies, and other stakeholders. By working together, we can help ensure that AI is developed and used in a safe, ethical, and beneficial manner.

Right now, there are minimal rules and regulations in place. Proposals like the E.U.’s AI Act exist, but nothing has been approved yet, so for now the ethical compasses of users and developers are what guide AI use.

What specific standards should companies that release certain types of AI be held to?

The specific standards that companies releasing certain types of AI should be held to will depend on the type of AI and its intended use. However, there are some general standards that could apply to a wide range of AI systems, including:

1. Transparency: Companies should be transparent about the AI systems they develop and release, including their design, data inputs, and outputs. This will help ensure that the systems are accountable and can be audited for fairness and accuracy.

2. Privacy: Companies should prioritize the protection of user privacy, particularly when it comes to sensitive data. This includes implementing strong data protection protocols and providing clear explanations of how user data is collected, stored, and used.

Note: Rumor has it that Bard, Google’s chat AI (based on LaMDA), has been trained using all content from all Gmail users. Where’s the privacy in that? (If true.)

3. Fairness: AI systems should be designed and tested to ensure that they do not perpetuate bias or discrimination against certain groups. This includes designing algorithms that are free from biases and regularly auditing systems for fairness.

4. Safety: Companies should prioritize the safety of their AI systems, particularly if they are used in high-risk applications like autonomous vehicles or medical diagnosis. This includes designing robust testing protocols and implementing fail-safe mechanisms to prevent accidents and injuries.

5. Explainability: Companies should be able to explain how their AI systems make decisions, particularly when the decisions have significant impacts on individuals or society as a whole. This will help ensure that the systems are transparent and accountable.

Note: This is what’s called the “black box” problem. Most of the time, developers have no idea how an AI system has arrived at its conclusion, and when debugging (looking inside the black box), the answers are often not what’s expected. This raises the issue of trust in its verdicts.

6. Accessibility: Companies should design their AI systems to be accessible to all users, regardless of their physical abilities or technical expertise. This includes providing clear instructions and user interfaces that are easy to understand and use.

These are just some of the general standards that could apply to companies that develop and release AI systems. Depending on the specific type of AI and its intended use, additional standards or regulations may be necessary to ensure that the systems are safe, fair, and beneficial for society.

Is AI now a cyber attacker’s best friend?

AI can be both a tool for cyber attackers and defenders. As AI technologies become more advanced, they can be used by attackers to create more sophisticated attacks that are harder to detect and defend against. For example, attackers can use AI to automate the process of identifying vulnerabilities in systems, creating targeted phishing campaigns, or even launching automated attacks.

However, AI can also be used by defenders to enhance their security measures and better detect and respond to attacks. For example, AI can be used to analyze large amounts of data and identify patterns that may indicate that an attack is underway. AI can also be used to automate certain security tasks, such as patching vulnerabilities, detecting and mitigating suspicious activity, sifting through logs, and so on.
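To make that defensive side a bit more concrete, here is a minimal, hypothetical sketch of the “sifting through logs” idea, assuming scikit-learn is available: an unsupervised model is fit on features extracted from normal sessions and then flags sessions that look unusual. The feature names and numbers are invented for illustration and are not tied to any specific product.

```python
# Minimal, hypothetical sketch (assuming scikit-learn) of using an unsupervised
# model to sift through log-derived session features and surface anomalies.
# Feature choices and numbers are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-session features: [failed_logins, MB_uploaded, distinct_hosts, off_hours_flag]
normal_sessions = np.array([
    [0, 1.2, 3, 0],
    [1, 0.8, 2, 0],
    [0, 2.5, 4, 1],
    [0, 0.3, 1, 0],
] * 25)  # repeated rows stand in for a real baseline of benign activity

new_sessions = np.array([
    [14, 220.0, 45, 1],  # many failures, huge upload, many hosts, off hours
    [0, 1.0, 2, 0],      # looks like an ordinary session
])

model = IsolationForest(contamination=0.05, random_state=0).fit(normal_sessions)

# predict() returns -1 for anomalies and 1 for inliers
for features, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "anomaly - escalate to an analyst" if label == -1 else "looks normal"
    print(features, "->", verdict)
```

In a real deployment, the features, models, and thresholds would be far richer, and the output would feed correlation and automated response rather than a print statement.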

In the end, whether AI is a cyber attacker’s best friend or a defender’s best friend depends on how it is used. (Sort of like looking in a mirror.)

AI can be a powerful tool for both attackers and defenders, and it is up to organizations and security professionals to ensure that AI is used in a safe and responsible manner to protect against cyber threats. This includes implementing strong security measures and staying up-to-date on the latest AI-powered attack techniques and defense strategies.

How have cyber criminals started to use AI-based technologies for criminal gain?

Cyber criminals have started to use AI-based technologies for criminal gain in several ways, including:

1. Automated attacks: Attackers can use AI to automate the process of scanning for vulnerabilities in systems or networks. This can help them identify targets and launch attacks more quickly and efficiently.

2. Spear-phishing campaigns: Attackers can use AI to create sophisticated spear-phishing campaigns that are tailored to specific individuals or organizations. For example, AI can be used to generate convincing fake emails that appear to be from a trusted source, increasing the likelihood that the target will click on a malicious link or provide sensitive information.

3. Social engineering: AI can be used to generate convincing fake profiles on social media or other online platforms, which can be used to manipulate people into divulging sensitive information or clicking on malicious links.

4. Malware creation: Attackers can use AI to generate more advanced malware that is better at evading detection by security systems. For example, AI can be used to create polymorphic malware that changes its code structure to avoid detection by signature-based antivirus software.

5. Credential stuffing: Attackers can use AI to automate the process of credential stuffing, which involves using stolen usernames and passwords to gain access to user accounts. AI can be used to generate large numbers of login attempts and to quickly and efficiently identify which credentials are valid. (A simple defensive counter-sketch follows this list.)

6. Deepfakes: AI can be trained on large datasets of video, voice, and images and can then generate new content that appears to show a person doing or saying something that they did not actually do or say. Deepfakes have the potential to be used maliciously for things like political manipulation, false evidence in criminal cases, or can even include your (fake) manager telling you what to do via a video call.
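As a defensive counterpoint to item 5 above, here is a small, self-contained sketch of one classic countermeasure: counting failed logins per source IP in a sliding time window and flagging bursts that look like credential stuffing. The window length and threshold are illustrative assumptions, not recommendations.

```python
# Hypothetical sketch: flag credential-stuffing behaviour by counting failed
# logins per source IP inside a sliding time window. The threshold and window
# size are illustrative; real deployments tune them to their own traffic.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 20

failed_logins = defaultdict(deque)  # source IP -> timestamps of recent failures

def record_failed_login(source_ip, timestamp):
    """Record a failed login; return True if the source should be throttled."""
    window = failed_logins[source_ip]
    window.append(timestamp)
    # drop failures that have aged out of the window
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# Example: a rapid burst of failures from one address trips the detector
for second in range(25):
    flagged = record_failed_login("203.0.113.10", float(second))
print("throttle 203.0.113.10:", flagged)  # True
```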

Overall, AI can help cyber criminals create more sophisticated and effective attacks, which can be difficult to detect and defend against. As AI technologies continue to advance, it is likely that cyber criminals will increasingly use them for criminal gain. It is therefore important for organizations and individuals to stay up-to-date on the latest AI-powered attack techniques and defense strategies.

Can you explain how companies like Check Point are using AI to prevent cyber attacks?

Check Point provides a wide range of security solutions to help organizations prevent cyber attacks. One of the ways that Check Point is using AI to prevent cyber attacks is through its advanced threat prevention (ATP) solutions, which use machine learning and other AI technologies to analyze network traffic and identify potential threats.

With more than 30 years of (big data) threat intelligence, we leverage over 70 different engines to stop attacks, more than 40 of which are AI-based. This is referred to as ThreatCloud, the brain behind all Check Point solutions. We use AI to stop things like phishing, bots, malware, bad URLs, bad DNS and so forth.

Here are some examples of how Check Point is using AI in its solutions:

1. Behavioral analysis: Check Point’s solutions use machine learning algorithms to analyze network traffic and to identify anomalous behavior that may indicate that an attack is underway. For example, the system can detect when a user’s behavior deviates from their normal patterns, which may indicate that their account has been compromised. (A simplified illustration of this idea follows this list.)

2. Threat intelligence: Check Point’s solutions use AI to analyze threat intelligence from a wide range of sources, including dark web forums, malware analysis platforms, and other security vendors. This allows the system to identify new threats and to quickly and even automatically develop countermeasures to prevent them.

3. Zero-day detection: Check Point’s solutions use AI to identify and stop zero-day vulnerabilities, which are unknown vulnerabilities that have not yet been patched by the software vendor. The system uses machine learning to identify patterns in network traffic that may indicate the presence of a zero-day vulnerability and it automatically blocks the attack.

4. Automation: Check Point’s solutions use AI to automate certain security tasks, such as identifying and blocking malicious traffic, patching vulnerabilities, and responding to security incidents. This helps organizations respond more quickly and efficiently to potential threats.

5. Autonomous: Check Point’s solutions use AI to automatically discover and build best-practice prevention policies based on device types, like enterprise IoT devices. Based on these criteria, zero-trust rules are built and distributed within the organization.
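To illustrate the baseline-deviation idea in point 1, here is a deliberately simplified, generic sketch (not Check Point’s actual implementation): a user’s activity today is compared against their own historical baseline, and large deviations are flagged. The numbers and the 3-sigma rule are illustrative assumptions.

```python
# Generic, simplified illustration of baseline-deviation detection
# (not Check Point's implementation); numbers and the 3-sigma rule are illustrative.
import statistics

def is_anomalous(history_mb, today_mb, sigmas=3.0):
    """Flag today's data volume if it sits far outside the user's own baseline."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1e-9  # guard against zero variance
    return abs(today_mb - mean) / stdev > sigmas

# A month of a user's typical daily upload volume (MB), then two test days
baseline = [40, 55, 48, 60, 52, 47, 58, 50, 44, 53] * 3
print(is_anomalous(baseline, today_mb=51))   # False: within the normal range
print(is_anomalous(baseline, today_mb=900))  # True: possible compromise or exfiltration
```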

Overall, Check Point’s use of AI in its solutions is designed to provide organizations with more advanced and effective protection against cyber attacks. By using machine learning and other AI technologies to analyze network traffic and identify potential threats, Check Point can provide organizations with a higher level of security than traditional security solutions allow for.

Would you be able to speak to the technical dimensions of that a bit?

Sure, I can speak to some of the technical dimensions of how we are using AI to prevent cyber attacks.

1. Machine learning: Machine learning is a type of AI that allows systems to automatically learn and improve from experience without being explicitly programmed. In the context of cyber security, machine learning algorithms can be trained to identify patterns in network traffic and behavior that may indicate a potential threat.

2. Neural networks: Neural networks are a type of machine learning algorithm designed to mimic the structure and function of the human brain. In cyber security, neural networks can be used to analyze large amounts of data and identify patterns that may be too complex for humans to detect.

3. Natural language processing (NLP): NLP is a type of AI that allows systems to understand and analyze human language. In the context of cyber security, NLP can be used to analyze and categorize the content of emails, chat messages, and other communication channels to identify potential threats. (A toy example follows this list.)

4. Deep learning: Deep learning is a subset of machine learning that uses neural networks with many layers to analyze complex data. In cyber security, deep learning algorithms can be used to analyze network traffic and identify patterns that may be indicative of a cyber attack.

5. Automation: AI can also be used to automate certain security tasks, such as identifying and blocking malicious traffic, patching vulnerabilities, and responding to security incidents. This can help organizations respond more quickly and efficiently to potential threats.
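To make the NLP point tangible, here is a toy sketch, assuming scikit-learn: a text classifier trained on a handful of invented messages scores how phishing-like a new message looks. A real system would train on large labelled corpora and combine this signal with many others; nothing here reflects any particular product.

```python
# Toy sketch (assuming scikit-learn): score messages as phishing-like using
# TF-IDF features and logistic regression. The tiny dataset is invented purely
# for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, click here to confirm payment details",
    "Reset your password immediately using this secure link",
    "Meeting moved to 14:00, agenda attached",
    "Lunch on Friday? The new place downtown looks good",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = benign

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

test = ["Please confirm your password via this secure link immediately"]
print("phishing probability:", classifier.predict_proba(test)[0][1])
```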

Overall, the technical dimensions of using AI in our ThreatCloud involve a combination of machine learning, neural networks, NLP, deep learning, and automation to analyze network traffic and behavior, identify potential threats, and respond to security incidents. By using these advanced technologies, we are able to provide organizations with a higher level of protection against cyber attacks, stopping these attacks before they can do any damage.

Is there anything else that CISOs should know about using AI to fight cyber attacks?

Yes, here are a few additional things that CISOs should keep in mind when considering the use of AI to fight cyber attacks:

1. Understand the limitations of AI: While AI can be a powerful tool for identifying and preventing cyber attacks, it’s important to understand that AI is not a silver bullet. AI algorithms can be vulnerable to attacks themselves, and they can also produce false positives and false negatives. It’s important to use AI in conjunction with other security measures to provide comprehensive protection against cyber threats. (A back-of-the-envelope example follows this list.)

2. Choose the right tools: There are many different AI-based security tools available, each with its own strengths and weaknesses. When choosing AI-based security tools, it’s important to consider factors such as the specific types of threats that the tool is designed to detect, the accuracy of the tool’s algorithms, the tool’s integration with other security systems, and whether or not it is capable of preventing attacks.

3. Invest in training and education: Using AI to fight cyber attacks requires a certain level of technical expertise. CISOs should invest in training and education to ensure that their security teams have the skills and knowledge they need to effectively use AI-based security tools.

4. Consider the ethical implications: AI can raise ethical concerns related to privacy, bias, and accountability. CISOs should work with their legal and compliance teams to ensure that the use of AI-based security tools follows applicable laws and regulations, and that the tools are designed and implemented in an ethical and responsible manner.
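A quick back-of-the-envelope calculation illustrates the point about false positives in item 1 above: even a very accurate detector, applied to enough events, raises far more alerts than there are real attacks. All figures below are invented assumptions, not measurements of any product.

```python
# Illustrative base-rate arithmetic: all figures are invented assumptions.
events_per_day = 1_000_000     # events inspected per day
attack_rate = 0.0001           # 0.01% of events are actually malicious
detection_rate = 0.99          # true-positive rate of the detector
false_positive_rate = 0.005    # 0.5% of benign events get flagged anyway

attacks = events_per_day * attack_rate            # 100 real attacks
benign = events_per_day - attacks                 # 999,900 benign events

true_positives = attacks * detection_rate         # ~99 caught
false_positives = benign * false_positive_rate    # ~5,000 noise alerts
precision = true_positives / (true_positives + false_positives)

print(f"alerts per day: {true_positives + false_positives:,.0f}")
print(f"real attacks among them: {true_positives:,.0f} (precision {precision:.1%})")
```

This is why AI verdicts work best when correlated with other controls and, where needed, reviewed by humans.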

Overall, AI can be a powerful tool for fighting cyber attacks, but it’s important to use it in conjunction with other security measures, and to consider its limitations, choose the right tools, invest in training and education, and weigh the ethical implications when implementing AI-based security solutions.

How do you believe that AI will affect our society in the long-term?

AI has the potential to have a significant impact on our society in the long-term. Here are a few potential ways that AI could affect our society:

1. Automation of jobs: AI has the potential to automate many jobs that are currently performed by humans, which could lead to significant changes in the job market. While automation could increase productivity and efficiency, it could also lead to job loss and economic inequality.

Note: But as with the Industrial Revolution in the 1700s, we will likely see more operators in the future. Maybe a future profession will be the “prompt engineer,” a specialist in querying AI for optimal results.

2. Improved healthcare: AI has the potential to improve healthcare by enabling more accurate diagnoses, predicting disease outbreaks, and developing personalized treatment plans. This could lead to better health outcomes for individuals and lower healthcare costs for society.

3. Enhanced transportation: AI could enable the development of self-driving cars, which could reduce accidents and traffic congestion. This could also have an impact on urban planning and land use, as cities may need to redesign their transportation infrastructure to accommodate self-driving vehicles.

4. Increased personalization: AI has the potential to personalize many aspects of our lives, from advertising to healthcare to education. This could lead to more personalized experiences and better outcomes, but it could also raise concerns about privacy and the use of personal data.

5. Ethical and legal implications: AI raises several ethical and legal questions, such as how to ensure that AI is used in a responsible and ethical manner, how to address issues of bias and discrimination, and how to determine liability in the event of an AI-related accident or incident.

Note: If a self-driving car crashes and lives are lost, who is responsible? The driver? The car manufacturer? The company that delivered the AI algorithms? The developers?

Overall, the impact of AI on our society in the long-term will depend on how we choose to develop and use the technology. It will be important to consider the potential benefits and risks of AI and to work towards developing AI in a way that is responsible, ethical, and beneficial to society.

Is there anything else that you would like to share with the CyberTalk.org audience?

Yes, here are a few additional thoughts that I would like to share with the CyberTalk.org audience:

1. Stay informed: The cyber security landscape is constantly evolving, and it’s important to stay informed about the latest threats and trends. Keep up with the latest news and research in cyber security, and consider attending industry events and conferences to stay up-to-date. CyberTalk.org is a great site for the latest in cyber.

2. Take a layered approach to security: No single security measure can provide complete protection against all cyber threats. It’s important to take a layered approach to security, using a combination of technologies and processes to mitigate risks. Also, make sure to consolidate in a manner where the layered measures are aware of each other and can correlate events and security measures.

3. Invest in training and education: Cyber security is not just a technical issue; it’s also a human issue. Invest in training and education for employees to ensure that they understand the risks and best practices for cyber security.

4. Prioritize risk management: Cyber security risks cannot be eliminated, but they can be managed. Prioritize risk management by identifying critical assets and vulnerabilities and developing a plan to mitigate and respond to cyber threats.

Note: People, process, and technology need to be a security net for each other. When people fail, tech saves you. When tech fails, process mitigates. When process fails, people remediate. And so it continues, forever…

5. Foster a culture of cyber security: Cyber security is everyone’s responsibility, from senior executives to frontline employees. Foster a culture of cyber security by promoting awareness and accountability throughout the organization.

Overall, cyber security is a complex and constantly evolving field, but by staying informed, taking a layered approach to security, investing in training and education, prioritizing risk management, and fostering a culture of cyber security, organizations can help to mitigate cyber risks and protect their critical assets.

