The New Business Continuity in the Age of Pandemics

April 22, 2020
Business Continuity

Traditional Business Continuity

Business Continuity (BC) ensures you understand how your organization normally functions. It includes a plan for managing and successfully getting through planned and unplanned adverse conditions, and for transitioning between normal and non-normal operations.

Traditionally BC has focused on Disaster Recovery and Resiliency scenarios that address outages, loss of facilities, planned changes, recovery objectives, or reduced service levels. Many organizations and resources provide BC capabilities with extensive processes and systems to handle many possible adverse scenarios.

Pandemics are Not Traditional

Most recently, BC has taken on primary importance due to the COVID-19 pandemic. Traditional BC processes, assumptions, and best practices have proven largely unhelpful, because a pandemic involves no loss of facilities, physical infrastructure, or processing capability. Indeed, as of this writing, most existing BC businesses barely mention pandemics, if at all (Wikipedia had only a footnote until 3/19/20). Yet operations at many organizations around the world have ground to a halt or been severely impacted.

It is no longer sufficient to assume that organizations can continue operations based on traditional BC practices and simply relocate employees to work from their homes in isolation. There are many unanticipated challenges, including:

  • Continued operations/services with smaller workforce or fewer customers
  • Ability to provide timely and accurate communications
  • Establishing remote working logistics
  • Accessing resources remotely
  • Infrastructure limitations
  • Maintaining — and even ramping up — security
  • Policy assumptions
  • Employee/Family privacy and confidentiality

It remains to be seen how businesses will return to “normal operations”, and whether such a thing is even possible after an event like this.

The New Business Continuity

To be effective, the New BC must:

  • Identify any scenario that can adversely impact an organization, not just the traditional ones
  • Define how to respond to these scenarios
  • Determine how and when ‘normal operations’ will resume if/after the adverse scenario has abated

BC may not always be able to handle all risks and issues. But, with advance planning, an organization is in a better position to address them. For each threat scenario, the New BC must also focus on:

  • Identification of threats, risks, and impact/changes to workforce and customers
  • Definition of target operational models, response policies, protocols, and communication plans
  • Organizational agility to respond/adapt to rapidly changing conditions
  • Ongoing management during/after each scenario
  • Measurement of response effectiveness
  • Methods for returning to normal-mode
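
The focus areas above lend themselves to a per-scenario planning structure. A minimal sketch in Python (the class and field names are illustrative, not any standard):

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioPlan:
    """One plan per adverse scenario, covering the New BC focus areas."""
    name: str
    threats: list = field(default_factory=list)           # identified threats/risks
    target_model: str = ""                                # target operational model
    communication_plan: str = ""                          # who is told what, and when
    response_steps: list = field(default_factory=list)    # ongoing management actions
    effectiveness_metrics: list = field(default_factory=list)
    return_to_normal: list = field(default_factory=list)  # steps to resume normal mode

# Example: a pandemic scenario captured in this structure.
plan = ScenarioPlan(
    name="pandemic",
    threats=["reduced workforce", "remote-access limits"],
    target_model="fully remote operations",
)
plan.return_to_normal.append("phased office reopening")
```

Keeping one such record per scenario makes gaps obvious: an empty communication plan or an empty return-to-normal list is a planning item, not a surprise during the event.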

Why Worry After Getting Through COVID-19?

Many organizations may believe that, having recently adapted to the COVID-19 pandemic, they need not do anything else. In fact, it is now more important than ever to understand, reevaluate, and build business continuity in your organization:

  1. How do you return to normal? What is the new normal state? When can that happen? How do we bring our remote workforce back to their normal workplaces? Should we consider the implications of work done at home? What may be different during or after the return to normal?
  2. What recent changes need to be fixed or removed? Tactical changes for COVID-19 were largely made in haste, without adequate preparation; some may have to be undone, and others could be improved with better strategic planning.
  3. Were our communications clear, accurate, and helpful? Many organizations lacked any kind of communication plan and were unable to provide timely, helpful, or accurate information to their users or customers, resulting in misinformation about response activities and expectations, and, in some cases, harm to physical well-being.
  4. Can this happen again? Looking ahead, there could be other catastrophic situations — new pandemics, COVID-19 mutations/resurgences, bioterrorism, germ warfare — any of which could occur with short notice, rapid transmission, unanticipated side effects, or higher mortality rates. Organizations that adapt to the New BC will be better situated to achieve true continuity before, during, and after these scenarios. 

Systems Flow Can Help

We help organizations plan for the New BC. We identify needs, strengths and gaps, define solutions, and reduce risk. We can help you improve competitive advantage through practical, effective application of best practices in enterprise architecture, vision and strategy.

Hybrid Identity Options

March 11, 2019

Today, many corporations are using SaaS applications. Leveraging the cloud can be a better option from both a cost and maintenance standpoint. This post is not about the pros and cons of SaaS, but rather about hybrid identity. Maintaining a common user identity for on-premise and cloud-based application access is known as hybrid identity.

Using a hybrid identity is beneficial in multiple ways:

  • Allows access — with the same credentials — to both on-premise and cloud-based applications
  • Syncs up joiner/leaver/mover changes between on-premise applications and cloud applications
  • Simplifies use of personal devices to access cloud applications outside the office (if approved and required)

A Common Hybrid Identity Scenario

An organization wants its employees to use the same credentials when accessing both on-premise and cloud-based applications. With Active Directory (AD) as the standard authentication method for those applications, the IT team chooses Microsoft Azure as its cloud vendor. To use the same credentials across applications, the on-premise AD must sync with Azure AD. This is done with Azure AD Connect.

With an Azure subscription and Azure AD configured, Azure AD Connect is installed on-premise and connects to both the on-premise AD and the Azure AD. It is configured to sync one way (on-premise to Azure) or two-way.

Here is a logical representation:

Hybrid Identity logical diagram

The AD Sync Service in AD Connect will keep Azure AD in sync with the on-premise AD. A built-in scheduler controls the frequency of these syncs.
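
The sync can be pictured as a periodic one-way push: read objects from the on-premise directory and write any changes to the cloud copy. A minimal sketch in Python (the directories are plain dicts here; the real sync engine handles schema mapping, deletions, and scheduling):

```python
def sync_one_way(on_prem: dict, cloud: dict) -> list:
    """Push adds and updates from the on-premise directory to the cloud copy.

    Returns the list of user names that changed. Deletions would be
    handled the same way in reverse (omitted for brevity).
    """
    changed = []
    for user, attrs in on_prem.items():
        if cloud.get(user) != attrs:
            cloud[user] = dict(attrs)   # copy so later edits don't leak across
            changed.append(user)
    return changed

on_prem = {"alice": {"dept": "IT"}, "bob": {"dept": "HR"}}
cloud = {"alice": {"dept": "Sales"}}    # stale cloud entry
print(sync_one_way(on_prem, cloud))     # alice updated, bob added
```

In the real product, the built-in scheduler invokes a cycle like this on a fixed interval, so the cloud directory converges on the on-premise state within one sync window.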

Two Simple Hybrid Identity Implementation Options

Azure AD Password Hash Synchronization

In this option, AD domain data and a hash derived from each on-premise password hash are uploaded to Azure AD. Cloud-based applications can then authenticate with Azure AD, and on-premise applications can continue to be authenticated using the local AD.
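
Conceptually, the value synced to the cloud is itself derived from the stored hash, so neither the password nor the original hash is recoverable from Azure AD. A simplified sketch (the algorithm, salt handling, and iteration count here are illustrative, not Microsoft's exact scheme):

```python
import hashlib
import os

def hash_for_cloud_sync(on_prem_hash: bytes, salt: bytes) -> bytes:
    """Derive the value uploaded to the cloud from the stored on-premise
    password hash. The password and original hash never leave the premises
    in recoverable form."""
    # Re-hash the stored hash with a per-user salt; parameters illustrative.
    return hashlib.pbkdf2_hmac("sha256", on_prem_hash, salt, 1000)

on_prem_hash = hashlib.md5(b"P@ssw0rd").digest()   # stand-in for the AD hash
salt = os.urandom(16)
cloud_value = hash_for_cloud_sync(on_prem_hash, salt)

# Cloud-side verification repeats the same derivation and compares.
assert hash_for_cloud_sync(on_prem_hash, salt) == cloud_value
```

The key property is one-way derivation: the cloud can verify a sign-in by repeating the derivation, but cannot reverse it to obtain the on-premise hash.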

Azure Active Directory Passthrough Authentication

In this option, passwords are not synced to Azure. When a user attempts to sign into a cloud-based application, Azure encrypts the entered password with a public key, and places the username and encrypted password in an Azure queue. The on-premise Authentication Agent listens to the queue and receives the queued credentials. It decrypts the password with a private key and validates the credentials against the on-premise AD. It then responds to Azure AD with the results. This option is best when there are security rules or concerns about storing the password off-premise.
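
The queue-based handshake above can be simulated end to end. In this sketch the public/private-key step is replaced by placeholder functions and the on-premise AD is a plain dict; only the message flow mirrors the real protocol:

```python
import queue

# Placeholders standing in for real asymmetric encryption (illustrative only).
def encrypt_with_public_key(password: str) -> str:
    return password[::-1]    # Azure uses genuine public-key crypto, not this

def decrypt_with_private_key(blob: str) -> str:
    return blob[::-1]

ON_PREM_AD = {"alice": "S3cret!"}    # toy stand-in for the on-premise directory
azure_queue = queue.Queue()

def azure_sign_in(username: str, password: str) -> None:
    """Cloud side: encrypt the entered password and queue the credentials."""
    azure_queue.put((username, encrypt_with_public_key(password)))

def on_prem_agent() -> bool:
    """On-premise agent: pull credentials, decrypt, validate against local AD."""
    username, blob = azure_queue.get()
    password = decrypt_with_private_key(blob)
    return ON_PREM_AD.get(username) == password   # result returned to Azure AD

azure_sign_in("alice", "S3cret!")
print(on_prem_agent())   # True: credentials validated on-premise
```

Note that validation happens entirely on-premise; the cloud only ever holds an encrypted copy of the password in transit, which is why this option suits stricter security rules.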

There are several advantages to these two methods:

  • They have a small on-premise footprint
  • No new servers are needed
  • The only required components are the Azure AD Connect application and the passthrough agent that connects to the queue
  • All connections are outbound to the Azure subscription, so the connection is less of a security concern

Upcoming: I’ll discuss more complex methods to implement hybrid identity — federation and seamless single-sign-on using Azure AD and on-premise AD.

Virtual Machines vs Containers

March 4, 2019

Virtual machines (VM) and containers are similar in concept, but very different in their implementation and use. Conceptually, both allow the running of multiple services on a single platform. Both are quicker and less expensive to deploy than physical servers. And, horizontal scaling is easier since there is no need to spin up and configure new physical infrastructure. That’s where the similarities end.

Virtual Machine Implementation Overview

Virtual machines are software implementations of physical servers; they virtualize the underlying hardware. Many virtual machine instances can run on one physical machine, which is accomplished by a hypervisor.

A hypervisor application creates and maintains the virtual machines. Each VM is allocated CPU, memory, and storage. Each VM also has its own operating system, the Guest OS. VMs with different operating systems can run side by side. Applications that run on a virtual machine don’t see a difference between a VM and a physical server.

Here is a logical view of a virtual machine implementation:

Container Implementation Overview

Containers are run in native OS processes that share the same kernel. These containerized applications package all the required components together, which allows them to be deployed quickly in different environments. My recent Containers blog post goes into more detail on what these are.

Here is a logical view of containers on a server:

Notice that the container instances all share the underlying Host OS.

Here is a quick comparison of containers and virtual machines:

  Trait           Containers           Virtual Machines
  Startup Time    Seconds              Minutes
  OS              Shares the host OS   Has its own OS
  Virtualization  OS-level             Hardware
  Memory          Less required        More required
  Cost            Less                 More

Let’s examine the above comparison traits.

Startup time is much quicker with containers, mostly because a virtual machine must boot its own operating system.

The OS is virtualized for containers. Virtual machines have their own allocated OS and virtualize the underlying hardware. Thus, VMs with different guest OS types can run on the same server.

Virtualization for a container is a type of operating-system-level virtualization: a program running inside a container does not have access to the underlying operating system, only to what has been allocated to the container. A VM is considered hardware virtualization since, looking in from the outside, it appears to be a physical server with its own CPU, memory, and disk.

The memory footprint is smaller for containers. This means a physical server can run more containers than virtual machines.

Cost would normally be less when using containers. One reason is that a container does not require its own Guest OS, so many more containers than VMs can run on a host server.

In the end, there are good reasons to use containers and VMs. Both can be supported on-premise or through a cloud-hosting option, such as Amazon, Google, Microsoft, IBM, and others. Both have their uses and related costs. And architects can help inform the appropriate choice.

Containing Your Container Excitement

February 22, 2019

Containers are becoming mainstream as companies aim to cut costs and increase performance. And, with the increased use of PaaS (Platform as a Service) offerings from vendors such as Microsoft, Amazon, and Google, organizations can take advantage of built-in support for containers.

So, what are containers?

Containers are a method of operating system virtualization that provides the ability to run an application in resource-isolated processes. Containers can help package an application’s code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control.

Why use them?

  • Containers help ensure that applications deploy quickly, reliably, and consistently regardless of deployment environment.
  • Containers provide more granular control over resources, and thus improve infrastructure efficiency.

How do they work?

A container engine must be running on a server. This engine manages the containers, and sits on top of the host server OS. It starts and stops the container instances running on the host server. Here is a logical view:

The container engine manages the container instances (e.g., starting and stopping them).

Container images are read-only template files that describe what the container’s running application will look and act like, along with dependencies such as required libraries and binaries. The container engine uses these image files to create container instances.
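
The image/instance relationship can be sketched as a read-only template plus an engine that creates and tracks running instances from it (class names are illustrative; real engines add layers, networking, and isolation):

```python
class ContainerImage:
    """Read-only template: the application plus its declared dependencies."""
    def __init__(self, name, app, dependencies):
        self.name = name
        self.app = app
        self.dependencies = tuple(dependencies)  # tuple: immutable by convention

class ContainerEngine:
    """Creates, tracks, starts, and stops container instances on a host."""
    def __init__(self):
        self.instances = []

    def run(self, image: ContainerImage) -> dict:
        """Create a running instance from a read-only image."""
        instance = {"image": image.name, "app": image.app, "status": "running"}
        self.instances.append(instance)
        return instance

    def stop(self, instance: dict) -> None:
        instance["status"] = "stopped"

engine = ContainerEngine()
web_image = ContainerImage("web:1.0", "web-server", ["libssl", "python3"])
inst = engine.run(web_image)    # first instance from the image
engine.run(web_image)           # a second instance from the same image
engine.stop(inst)               # the image itself is unchanged
```

The point of the sketch: one image can back many instances, and stopping an instance never modifies the image, which is what makes deployments repeatable across environments.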

The container instance is the running implementation of the container image. It can be any number of things, such as:

  • Application
  • Service
  • Database

Pros to using Containers

  • Allows easier movement of an application across environments
  • Runs within a process, thus allowing more instances of an application
  • Starts up and scales quickly

Cons to using Containers

  • Security can be a concern, as containers share the OS and kernel
  • Networking access within containers can be more complex

Now what?

Migrating to containers is not for everyone, nor for every scenario. Some legacy applications are not easily migrated, and it might make sense to move only new applications to containers.

Docker and Kubernetes are currently two popular container management products. And, if you don’t want to do an on-premise install of the container engine and associated applications, excellent cloud-based container options are available with Microsoft Azure and Amazon AWS.

I’ve only scratched the surface on what containers are. Many tutorials are available; and as with many technologies, you can experiment on your own by either downloading container software or getting a free account on Azure or AWS.

In my next blog post, I’ll describe the differences between containers and virtual machines — it can get confusing since there are similarities.
