Navigating OT Networking and Security in the Cloud Era

 

An Adaptive Purdue Model Perspective

In the realm of operational technology (OT) security, we are faced with challenges that evolve seemingly faster than organizations can accommodate. These challenges have become particularly evident as manufacturers with traditionally on-premises environments increasingly move some of their operations to the cloud.

 

The integration of OT networks with a variety of cloud services promises unparalleled efficiency and flexibility, but with it come complex security challenges. It is important to consider these challenges when implementing a cloud-based solution for an OT environment. Is the Purdue model up to the challenge, and can it continue to safeguard OT assets? Can it be adapted to keep up with the business challenges and requirements of manufacturing today?

 

In this blog, I delve into the intersection of OT networking, security and the cloud by examining the Purdue model – a cornerstone of OT security architecture – and how security professionals might need to adapt it to meet present-day demands. I will explore factors to consider when connecting cloud-based systems to an OT environment, what vulnerabilities may exist and ways to mitigate risks without crippling performance.

 

 

Overview of the Purdue Model

The Purdue model of industrial control systems (ICS) or OT originated from research completed in the early 1990s at Purdue University. The developers created this standardized framework to help organizations meet the growing demands for protecting the manufacturing, energy and utilities sectors. Nearly 30 years later, it has been adopted across industries as a best practice for OT networking and security. Some may argue that it is not a security architecture or framework because it was not originally intended to be. However, it does serve as a foundation to layer security controls on top of. Does it have what it needs to stand up to the demands placed on it today?

 

 

Summary of Purdue Model Layers

 

As shown in the image below, here is a summary of the different layers in the Purdue model.

 

  • Level 4/5 (enterprise zone): Can represent the network and associated workloads that interact with the internet, e.g., hosting a web server (Level 5), and perhaps an intranet not intended to be accessed from outside the organization (Level 4)
  • Level 3.5 (DMZ): Hosts remote access servers, patch management and applications, as well as mirrors to prevent applications or hosts from having direct access to levels that exist on the other side of the DMZ
  • Level 3 (industrial security zone): Where most lower-level devices communicate with network services, historians, and SCADA or optimization applications
  • Level 2 (top of the individual cell/area zone): May be a physical area in a facility dedicated to a specific function or a group of equipment dedicated to a specific process or product
  • Level 1/0 (bottom of cell/area zone): Where you have basic control systems and their associated devices and processes

 

 



Figure 1: Purdue model review

 

The point here is that the DMZ, commonly referred to as level 3.5, offers an offloading zone that breaks communications at mirrors, proxies and terminal services, among others, to provide an additional layer of security before data gets relayed out to the enterprise or before remote access and services, like AV updates and patches, are allowed to pass into the OT environment. These DMZ systems are not commonly mission critical down to the minute or hour. Applying more regular security patches to them, along with having firewalls maintain the boundary, allows the OT network and assets to stay protected behind patched systems, separated from the enterprise and from systems that may also have direct access to the internet.

 

 

Evolution of OT Applications into the Cloud

There is an increasing shift toward third-party integrators offering SaaS and companies offering cloud-based smart manufacturing solutions aimed at collecting and interpreting data. Designing, deploying and operating these solutions across multiple locations can present challenges. Not having the right people available to operate and maintain the equipment can lead to shortcuts, for example. Establishing a direct connection from the manufacturing zone to the cloud would go against Purdue model principles: allowing a remote, cloud-based collector to receive data from multiple sources at level 3 or below unnecessarily increases the attack surface. In most cases, it is regarded as too risky and should not be done.

 

There are, however, some advantages to leveraging the cloud in a hybrid industrial demilitarized zone (iDMZ) to take advantage of its power and capabilities. Here are a few:

 

  • Scalability: Instead of deploying applications at each site, drop a collector at each location and develop a multi-site application built to run in the cloud. This lessens each site's responsibility to manage and operate the infrastructure for collecting the data and deciding what to do with it. The collectors broker the data to the cloud, and when a new site comes on, either through expansion or M&A, it gets integrated into the shared services zone. 
  • Cost Effectiveness: Shared cloud infrastructure is less expensive than bulky, hard-to-manage servers that site teams cannot maintain and operate at a high, consistent standard. 
  • Accessibility: Accessing data or systems via the cloud can simplify how users access and leverage data across an organization. Instead of managing multiple points of access at each site, organizations can minimize the attack surface by collecting data from multiple sites to a central cloud location and making it accessible there while isolating the operational components down at the site level. 

 

 

Impacts on Network Architecture

Some commonly available data collection, historian or overall equipment effectiveness (OEE) visualization applications are managed by a third party. In such cases, the installation of a collector at level 3 is common, and the outbound traffic up through the DMZ may offer some encryption or session security leveraging MQTT, OPC UA or HTTPS. Because the collector sits right below the DMZ, it is good practice to gather many streams of data from various locations in the lower-level zones and send a single stream through one outbound connection across the DMZ at level 3.5. This minimizes the attack surface by eliminating multiple streams of data that would each require rules and policies to be defined and implemented. But the traffic commonly relies on open port calls to the internet to connect to a server hosted by the SaaS provider or another third party. Often, the installation of these products also involves sacrifices to security controls. A vendor may place a collector or server at level 3 and have it connect back to their corporate cloud before making the data available to its users. This creates a link with too few layers of protection between the manufacturing zone and an unknown cloud services zone. Sometimes these connections are not encrypted or authenticated very well, and data may be unintentionally compromised or potentially manipulated before it is used. 
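To make the "single outbound stream" idea concrete, below is a minimal sketch of a collector forwarding data to a cloud ingestion endpoint over HTTPS with server verification and token-based authentication. The endpoint URL, token handling and payload fields are illustrative assumptions rather than any particular vendor's API; the point is that one encrypted, authenticated connection leaves level 3 instead of many ad hoc open-port calls.

# Minimal sketch: a level 3 collector forwarding a single outbound, TLS-protected
# HTTPS stream to a cloud ingestion endpoint. The URL, token and payload fields
# are illustrative assumptions, not a specific vendor's API.
import json
import ssl
import urllib.request

INGEST_URL = "https://ingest.example-cloud.com/v1/telemetry"  # hypothetical endpoint
API_TOKEN = "<from-secret-store>"  # never hard-code real credentials

# Verify the cloud endpoint against the system (or a pinned) CA bundle.
context = ssl.create_default_context()

payload = json.dumps({
    "site": "site01",
    "asset": "packer-7",
    "oee": 0.83,
    "ts": "2024-01-01T00:00:00Z",
}).encode("utf-8")

request = urllib.request.Request(
    INGEST_URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)

with urllib.request.urlopen(request, context=context, timeout=10) as response:
    print("cloud ingest responded:", response.status)

An equivalent pattern applies to MQTT or OPC UA: the broker or server connection should be configured with TLS and credentials rather than an open, unauthenticated port.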

 

It is at this point that each end user needs to evaluate their risk tolerance. A small startup making widgets may not see much risk in connecting their collector to a third-party SaaS provider offering business or production analytics. In this case, creating a rule for outbound traffic through the DMZ, without a proxy in the DMZ but relying on application-layer encryption and authentication, may be enough. However, this would set off any number of red flags in other industries or verticals that more closely follow the Purdue model and other security standards and frameworks. Critical infrastructure or life sciences organizations might be hesitant to connect site OT networks to the cloud, but having additional layers of protection and security controls in place can make it possible.

 


Figure 2: Poorly integrated data collector to cloud services

 


Figure 3: Properly connected cloud services with encryption and express routes to balance security with availability

 

 

Security Strategies for OT-Based Cloud Deployments

While the Purdue model still exists at the local or site-level network, and the cloud zone may service multiple sites, local site services are not shared with other facilities and still offer traditional access for common use cases at the local enterprise level. In this case, the connection to the cloud is made through a series of VPN or IPsec tunnels, leveraging express routes to ensure data is transferred securely and efficiently. The policies and rules developed at the local end of the tunnel and the cloud end of the tunnel should be crafted in a way that limits site-to-site communication, much like a traditional DMZ within the Purdue model. Below are some strategies to consider.
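Before getting into those strategies, here is a rough sketch of what "crafting the rules at both ends of the tunnel" can look like when the allowed flows are written down explicitly. The zone names, subnets and ports are hypothetical; in practice these rules live in the site firewall and the cloud provider's network security groups rather than in application code, but expressing them as data makes them easy to review and keep consistent at both ends.

# Minimal sketch: express the allowed flows across the site-to-cloud tunnel as
# data and check a proposed connection against them. Zone names, subnets and
# ports are hypothetical examples; real enforcement belongs in the site firewall
# and the cloud network security groups.
from ipaddress import ip_address, ip_network

# Only the site collector subnet may reach the cloud ingestion subnet, and only
# on the ports the collector actually uses. Nothing else crosses the tunnel,
# and sites never talk to each other directly.
ALLOWED_FLOWS = [
    {"src": ip_network("10.10.3.0/24"),   # site01 level 3 collector subnet
     "dst": ip_network("172.16.0.0/24"),  # cloud shared-services ingestion subnet
     "ports": {443, 8883}},               # HTTPS and MQTT over TLS
]

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Return True only if a matching allow rule exists (default deny)."""
    s, d = ip_address(src), ip_address(dst)
    return any(s in rule["src"] and d in rule["dst"] and port in rule["ports"]
               for rule in ALLOWED_FLOWS)

print(flow_permitted("10.10.3.15", "172.16.0.20", 8883))  # True: collector -> cloud
print(flow_permitted("10.10.3.15", "10.20.3.15", 445))    # False: site-to-site blocked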

 

 

Network Segmentation Strategies

The Purdue model has been around a while and has been gaining popularity almost three decades after its development, but most organizations still struggle with the basics and with interpreting what the model means for their network. Following the six-step approach to security segmentation laid out in "NIST CSWP 28 – Security Segmentation for Small Manufacturing" is a good step for small-to-medium or multi-site manufacturers if segmentation is found to be lacking in general.

 

When looking to segment your network, use secure communication protocols. TLS and IPsec can be layered over basic protocols that already offer some level of encryption and, more importantly, provide additional protection for those that don't. Furthermore, divide the cloud environment into zones just as you would if it were on premises. There is no need for OT apps to talk directly with IT apps in the cloud without additional security controls and policy enforcement boundaries. You could bring in ISA/IEC 62443 zones and conduits. You could also harden and isolate critical on-premises assets to further insulate them from any potential fallout from a breach in the cloud. And, above all, you can enhance security by limiting access. 
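As a small example of layering TLS over a protocol that has no encryption of its own, the sketch below wraps a plain TCP connection to a gateway in a verified TLS session using Python's standard library. The gateway hostname, port and request format are assumptions for illustration; in practice this wrapping is usually handled by a gateway, proxy or VPN appliance rather than by the end device itself.

# Minimal sketch: layering TLS over a plain TCP connection to a gateway that
# fronts a legacy, unencrypted protocol. The hostname, port and request format
# are illustrative; in practice a gateway, proxy or VPN appliance does this.
import socket
import ssl

GATEWAY_HOST = "hist-gw.site01.example.com"  # hypothetical TLS-terminating gateway
GATEWAY_PORT = 5443                           # hypothetical TLS listener port

context = ssl.create_default_context()        # verifies the gateway's certificate

with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=GATEWAY_HOST) as tls_sock:
        # Everything written here is encrypted in transit, even though the
        # underlying protocol carried inside has no security features of its own.
        tls_sock.sendall(b"READ TAG line03.packer7.oee\n")
        reply = tls_sock.recv(1024)
        print("gateway replied:", reply.decode(errors="replace"))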

 

Below is a more detailed, step-by-step guide to implementing network segmentation:

 

  • Identify Assets: This is something that gets overlooked, and I see it time and time again. At the risk of sounding cliché, you can't protect what you don't know you have. If you want to move on to the next steps, identifying your current subnet space, VLAN IDs and the devices on them is a good start. There are a lot of expensive tools out there that can do this for you, but even a basic tool like Nmap could be used to learn some basics about what is on your OT network (see the sketch after this list).
  • Assess Risk and Create Zones: This is where the Purdue model comes in as a base model. Define your levels. Assign assets to those levels and start to reorganize them based on perceived risk. Nothing too fancy here, but if something is mission critical, it probably doesn’t need to be in the same zone as your PPE vending machines.
  • Determine the Risk Level: Now that things are grouped generally, the risk level can be assigned. Zones requiring more security may have more layers of defense. This is where I think the Purdue model and most security architectures start having some friction. The Purdue model is about as simple as you can get. Just because it defines a level and the possible assets in that level does not mean that level itself cannot be divided into multiple zones. For example, a common one I see is that domain controllers or other general network services at level 3 may not belong in the same security zone as an EWS or HMI/application servers.
  • Map Communication Between Zones: This step is commonly referred to as mapping conversations. Simply identify what each asset is talking to. PLCs and HMIs might talk to a historian; that historian then has a link to a mirror in the DMZ before integrating with an MES system at level 4. A PLC also talks to everything on its machine-level network. This is the number one piece of missing information I see when it comes to evaluating network security architectures.
  • Determine Security Controls: A range of solutions could apply here. The CSF Manufacturing Profile (NIST IR-8183) does a good job outlining manufacturing profiles for various categories and what should be implemented for a general low, medium or high profile. It also includes links and references to the associated standards and can be looked up using the NIST Cybersecurity and Privacy Reference Tool (CPRT).
  • Create Logical Security Architecture Diagram: Documentation, documentation, documentation. This cannot be stated enough. Now that you have theoretically completed the first five steps, this documentation not only helps visualize what you have, but also helps you track changes over time and discuss possible improvements with third parties and contractors as projects get implemented. As you look to move toward the cloud, you have the strong foundation and segmentation basics mentioned above to extend these practices into the cloud environment.
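For the asset identification step above, here is a minimal sketch of driving Nmap from Python to build a first-pass inventory of a single subnet. It assumes the nmap binary is installed and that the subnet shown is illustrative; on a live OT network, scan cautiously (or passively) and during a maintenance window, since some fragile devices react badly to active probing.

# Minimal sketch: first-pass asset discovery with Nmap (ping sweep only).
# Assumes the nmap binary is installed; the subnet below is illustrative.
import subprocess

OT_SUBNET = "192.168.10.0/24"  # hypothetical level 2/3 subnet

# -sn = host discovery only (no port scan); -oG - = grepable output to stdout.
result = subprocess.run(
    ["nmap", "-sn", "-oG", "-", OT_SUBNET],
    capture_output=True, text=True, check=True,
)

hosts = []
for line in result.stdout.splitlines():
    if line.startswith("Host:") and "Status: Up" in line:
        # Example line: "Host: 192.168.10.21 (plc-line3)  Status: Up"
        parts = line.split()
        hosts.append({"ip": parts[1], "name": parts[2].strip("()")})

print(f"{len(hosts)} responsive hosts found on {OT_SUBNET}")
for host in hosts:
    print(host["ip"], host["name"] or "(no reverse DNS)")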

 

 

Adapting Common Security Best Practices for OT Security in the Cloud

Beyond segmentation, here are some other best practices to consider when it comes to adapting the Purdue model to navigate OT networking and security in the cloud:

 

  • Encryption: The Purdue model does not outline ways to encrypt data in transit. Review the protocol being used to connect to the cloud and assess whether the native encryption and session security features are enough for the use case. Consider whether additional encryption is required in transit to the cloud and at rest in the cloud now that the data is being stored off premises. This additional level of encryption, and the thought process behind it, is not often considered in on-premises OT environments.
  • Access and Authentication: Require MFA and have a robust remote access policy, along with rotating passwords. Leverage device authentication as well when connecting onsite devices to the cloud (see the mutual TLS sketch after this list). This will enable secure communication on top of the integration of the local Purdue model levels and the cloud. Use a second means of authentication (token, app, etc.) to make sure a compromised password is not enough to cause an issue.
  • Privacy Policies: Define and enforce policies that work in alignment with any governance programs. Consider regulations that might apply to the business. Is the data on servers owned or leased by the organization or run by a third party? What are the risks associated with them storing this data for you?
  • Operational Controls: Limit control of on-prem servers, as well as centralize and scrutinize third-party integrations. Develop and deploy a strong and secure local network to reduce the likelihood of localized on-prem outages. This can be anything from an ISP outage to a repairperson stepping on a power cord in your on-prem industrial data center.
  • Zero Trust: This is not a new concept for securing information. It applies not only to users, but also to devices. Look for ways to limit communications between devices not only across distant network segments but also at various layers of the OSI model, from layers 3 and 4 up to the application layer. Put pressure on any vendors and SaaS or cloud providers to provide additional layers of protection.
  • Regular Security Audits and Updates: The security landscape is always changing. New CVEs come out all the time and threat actors’ TTPs are constantly evolving. In light of all of this, make sure your architecture and security controls stand up to what is actually happening out in the wild.
  • Threat Detection and Response: Regardless of the security controls put in place at the network, application and user layers, having applications monitoring and responding based on near-real-time updates to threats, indicators of compromise (IoCs) and signatures is like having a virtual security guard watch the bank even though the doors are locked and the valuables are in a vault. When it comes to monitoring, an intrusion detection system (IDS) would help provide a 24/7/365 set of eyes on your environment looking for behavior or connections that are out of the ordinary.
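Tying back to the access and authentication item above, here is a minimal sketch of device authentication using mutual TLS: the onsite device verifies the cloud endpoint and presents its own certificate in return. The hostnames and certificate paths are illustrative assumptions, and the cloud side must be configured to require and validate client certificates for this to add any value.

# Minimal sketch: mutual TLS for device authentication when an onsite device
# connects to a cloud endpoint. Hostnames and certificate paths are illustrative;
# the cloud side must be configured to require and validate client certificates.
import socket
import ssl

CLOUD_HOST = "ingest.example-cloud.com"   # hypothetical cloud ingestion endpoint
CLOUD_PORT = 443

# Verify the cloud endpoint against a trusted CA...
context = ssl.create_default_context(cafile="/etc/collector/cloud-ca.pem")
# ...and present this device's own certificate so the cloud can verify the device.
context.load_cert_chain(certfile="/etc/collector/device-site01.pem",
                        keyfile="/etc/collector/device-site01.key")

with socket.create_connection((CLOUD_HOST, CLOUD_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=CLOUD_HOST) as tls_sock:
        print("negotiated", tls_sock.version(), "with", CLOUD_HOST)
        # A compromised password alone is no longer enough: without the device's
        # private key, this handshake fails before any application data is sent.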

 

 

Conclusion

As we conclude our exploration of navigating OT networking and security in the cloud era, we find that leveraging VPN tunnels offers a robust solution for providing secure OT access from level 3.5 to the cloud. I discussed considerations to think about if the cloud is private or managed by a third party, and I provided some methodologies for adapting the Purdue model to make it cloud ready. The integration of VPN tunnels and segmentation aligns with the adaptive nature of the Purdue model, while also allowing the adoption of strategies from various other security standards and industry best practices. Organizations must stay vigilant against emerging threats and continuously assess their security posture.

 

At the end of the day, the risks and recommendations that I have outlined are not all that different from recommendations commonly offered through a security review or assessment. Take the same methodologies and security principles and apply them to the cloud. I encourage you to take an active approach to maintaining your OT networking and security when considering cloud-based OT deployments. Don't rely on the vendors of the cloud applications to verify that the solution is secure.

 

The combination of the Purdue model's principles and modern security standards presents a formidable approach to securing OT environments in the cloud era. By embracing innovative technologies and strategies, we can uphold established security frameworks, and organizations can navigate complex OT security challenges with confidence as they look to the cloud to help solve any number of other business challenges.

 

 


Michael Dutko
Senior Consultant, PRODUCT SECURITY - ICS & IOT | OPTIV
Michael Dutko has over 10 years' experience in both industrial automation and Industrial Control System (ICS)/OT networking and security. He is currently a Senior Consultant for Optiv's ICS/OT security practice, leveraging his skills to complete security reviews, risk assessments, validations and technical deployments. He earned an Electronic Engineering Technology (EET) bachelor's degree from Bloomsburg University in Pennsylvania.

Optiv Security: Secure greatness.®

Optiv is the cyber advisory and solutions leader, delivering strategic and technical expertise to nearly 6,000 companies across every major industry. We partner with organizations to advise, deploy and operate complete cybersecurity programs from strategy and managed security services to risk, integration and technology solutions. With clients at the center of our unmatched ecosystem of people, products, partners and programs, we accelerate business progress like no other company can. At Optiv, we manage cyber risk so you can secure your full potential. For more information, visit www.optiv.com.