Jeff Schwartz, CISSP, is the VP of Engineering, North America, for the global cyber security company Check Point Software. He manages a team of roughly 200 engineers across multiple disciplines, and he’s responsible for all security engineering resources across a $700 million portion of the business in North America.

Over his 20-year career in cyber security, Jeff has consulted on, designed, and overseen the implementation of some of the largest network security deployments across all industries, throughout both the Fortune 500 and major government agencies.

In this interview, Jeff Schwartz discusses everything from how software supply chains operate to best practices in supply chain management and the solutions you should know about. This interview provides premium cyber security insights.

What is supply chain security?

So, supply chain security centers on protecting the life cycle between product development and product delivery.

So for example, if you’ve ordered something from Amazon, you want to know that the product will not be tampered with in some way between the time it is manufactured and delivered to you.

There are obvious things that can be done to secure a product from a physical perspective. The same holds true when it comes to software delivery.

We want the end user of the asset, device, or piece of software to know that it hasn’t been modified and that its integrity is undisturbed.
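As a minimal illustration of that kind of integrity check, the sketch below compares a downloaded artifact’s SHA-256 digest against a checksum the vendor publishes out of band. The file path and expected digest are placeholder arguments; in practice, signed packages and signature verification provide stronger guarantees than a standalone hash check.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify.py <downloaded-file> <published-sha256>
    artifact, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(artifact)
    if actual == expected:
        print("OK: digest matches the published checksum")
    else:
        print(f"MISMATCH: expected {expected}, got {actual}")
        sys.exit(1)
```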

Why is supply chain security important?

The reason supply chain security is important is that if something is compromised during the development or delivery process at a more elemental level, say, outside the traditional control sets, then the exposure to the organization and the overarching impact are very high, given the potential for lateral movement. As we saw in the case of the SolarWinds breach, undetected malicious code can persist for a very long period of time, and the impact can be very severe.

What might a supply chain security breach look like?

In terms of hardware, if a chip manufacturer or a component supplier, someone making individual componentry like memory, CPUs, or motherboards for hardware and network devices, is compromised, or the components they embed in that hardware are compromised, it puts the entire infrastructure at risk. The same premise holds true for software: if one component becomes compromised, it could have a very wide impact.

In terms of hardware, if a supplier becomes compromised and its products become compromised as well, then even when those products get shipped to the destination and go through QA, burn-in, and all of the other quality assurance processes a hardware manufacturer may invoke, ultimately, the product is dead on arrival.

In software, what we saw with SolarWinds is emblematic of what can happen across many organizations. There was a compromise in the development process: a malicious function was injected into the signed code from that software supplier. As a result, customers that deployed this software were potentially compromised, and because the code was signed, customers had no awareness of or visibility into the presence of this malicious function. Due to the nature of the compromise, the threat actor was then able to move laterally, cascading to other infrastructure, from Active Directory to public cloud environments, elements that were not necessarily part of the initial attack vector.

So, while in this case the initial attack vector came via the supply chain, the downstream elements of the attack were against more traditional targets: compromising and misusing service accounts in Active Directory and leveraging a lack of segmentation.

Where do organizations often need more visibility into their supply chains?

So, there are a couple of different areas to consider here:

Obviously, doing business with trusted suppliers is critical, and it’s important to understand how those suppliers validate their product lifecycle, along with the integrity and confidentiality of their development and delivery processes, whether hardware or software.

However, many organizations work with dozens or hundreds of suppliers, so understanding this at scale is problematic. An organization would need to hire a whole team of people to investigate these things on a per-vendor basis, so in practice the work falls to each organization individually, and it clearly doesn’t scale well. At the end of that process, the result would be more of a “paper study” that doesn’t necessarily improve the security outcomes the organization can achieve.

More practically speaking, most organizations will need to evaluate their own internal software development and application delivery processes. As organizations build services and applications for end users, security enforcement needs to be scrutinized more closely across that development and delivery lifecycle.

It’s less about the security vulnerabilities that vendors are leaving exposed, since many security researchers already focus on those (scoring them via CVSS), and more about how organizations iterate on and improve the security of their own applications.

Given the trends in cloud and agile computing, and the overall shift toward a Continuous Integration/Continuous Delivery (CI/CD) lifecycle, these changes are happening in real time, in a very dynamic way, and at tremendous scale.

So, how are these changes evaluated for security in terms of permissions, access, CVE exposure, and so on? Are the changes introducing risk to the environment? How is that risk evaluated? Is that risk mitigated? Or are the changes accepted very quickly, at scale, within very dynamic environments, without adequate consideration?
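As one concrete illustration of what such an evaluation gate might look like, here is a minimal sketch that fails a pipeline when a dependency scan reports findings above a severity threshold. The JSON report format and the 7.0 CVSS cutoff are assumptions for the example, not any specific scanner’s schema.

```python
"""CI gate sketch: fail the build if a dependency scan reports
findings at or above a CVSS severity threshold. The report format
(a JSON list of {"package", "cve", "cvss"} objects) is a hypothetical
example, not a real scanner's output schema."""
import json
import sys

CVSS_THRESHOLD = 7.0  # block on "high" and "critical" findings

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)
    blockers = [x for x in findings if x.get("cvss", 0.0) >= CVSS_THRESHOLD]
    for b in blockers:
        print(f"BLOCK: {b['package']} - {b['cve']} (CVSS {b['cvss']})")
    return 1 if blockers else 0

if __name__ == "__main__":
    # Exit nonzero so the CI system marks the stage as failed.
    sys.exit(gate(sys.argv[1]))
```

Wired in as a build step after whatever scanner the pipeline runs, a gate like this stops risky changes from being accepted quickly and silently at scale.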

I think that organizations need to evaluate how their application security can adopt the concept of “shift left” models, where security is evaluated and enforced earlier in the development process and earlier in that CI/CD lifecycle. Doing so is valuable, first, because of the sheer volume of applications and modifications that organizations make to their applications, and second, because it’s more tangible and relevant for organizations to invoke, since it’s their own teams that are building these applications.

Organizations would derive less value from spot-checking Cisco and Microsoft and Check Point and the cast of dozens or hundreds of vendors that they do business with. That’s a tedious and cumbersome process, and it may not actually improve security outcomes. Introducing this shift-left methodology, to both evaluate and enforce security earlier in the development process of owned applications, will produce much more meaningful improvements in security outcomes.
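To make the shift-left idea concrete, here is a minimal sketch of a pre-commit check that rejects commits containing likely hardcoded credentials, catching the problem before the code ever reaches the CI/CD pipeline. The regex patterns are illustrative assumptions, not an exhaustive ruleset.

```python
"""Shift-left sketch: a pre-commit hook that scans staged files for
likely hardcoded secrets and blocks the commit if any are found.
The patterns below are illustrative, not exhaustive."""
import re
import subprocess
import sys

SUSPECT_PATTERNS = [
    # key = "value" style credential assignments
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    # embedded private key material
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def staged_files() -> list:
    """List files added, copied, or modified in the staged commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p]

def main() -> int:
    failed = False
    for path in staged_files():
        try:
            text = open(path, errors="ignore").read()
        except OSError:
            continue  # unreadable paths are skipped
        for pattern in SUSPECT_PATTERNS:
            if pattern.search(text):
                print(f"REJECT {path}: matches {pattern.pattern!r}")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

Installed as a Git pre-commit hook, a check like this enforces a baseline before a change is even proposed, which is where the volume argument above makes enforcement cheapest.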

What are best practices in cyber supply chain risk management?

Organizations drive better outcomes by looking at their own applications than by evaluating external supply chain risk. Think about trying to evaluate an external vendor’s supply chain risk; take Microsoft, for example. Yes, Microsoft will answer with whatever its PR department has certified as the appropriate response, but at the end of the day, that response is going to be a paper response.

The organization evaluating the risk can’t really improve the result. The organization is going to choose to do business with Microsoft or not. Introducing internal processes to better secure your own organization’s application development and delivery will produce much better outcomes than trying to evaluate and regulate someone else’s.

What kinds of solutions should organizations look for in securing their supply chain?

There are organizations that audit and report on the security hygiene of different vendors, but doing so produces more of a paper outcome than reliable, steadfast results. I think the solutions that organizations should be looking at are things like CloudGuard AppSec, which is very much in alignment with this idea of evaluating security earlier in the development lifecycle, in a dynamic, automated fashion, as application owners push changes to application environments.