IT Process Orchestration


Orchestration involves balancing and coordinating the multiple layers of overlapping IT processes critical to system and network administration. These include:

  • Application development
  • Configuration management
  • Disaster recovery
  • Server monitoring
  • Security

In this article, Secur runs through the IT orchestration processes you need to know for your first day on the job as a Linux system administrator.

Understanding Orchestration Concepts

While you may not have heard of the term “IT orchestration process”, you might be familiar with the term “DevOps”, a method for improving software delivery operations that includes:

  • Continuous integration, testing and delivery
  • Infrastructure as code
  • Infrastructure automation
  • Monitoring/Logging

Probing Procedures

The idea of DevOps is to quickly deploy small, continual changes (new software features, bug fixes, and desired modifications) to a production environment.

Continual App Processing: A portion of DevOps involves continuous delivery, leveraging software revision control to continuously test, merge, and quickly integrate code changes into the main software branch.

Controlling the App Environment: To support continuous delivery, it is critical in DevOps that the development and production environments be exact copies of one another, to avoid bugs and complications due to mismatched environments.

Environment changes must also go through a continual integration process. Tested new environments are added to a registry where older environments are also maintained in case a rollback to an earlier environment is needed.

Defining the App Environment: In DevOps, development and production environment specifications define what hardware to employ, the essential operating system, software packages, and specific configuration, including:

  • Security measures
  • Authentication policies

Defining infrastructure as code means that the environments are repeatable and versionable, so you can implement revision control in the app environments for both policies and configuration.

Deploying the App Environment: The app, as well as its environment, is continuously deployed to production on whatever basis meets the app’s business requirements. The benefit of infrastructure as code is that deploying the app and its environment can be easily automated, a practice often referred to as infrastructure automation.
Monitoring the App Environment: You need to monitor and log the app in its production environment. Items tracked include software metrics, infrastructure resource usage, and performance statistics, with the goal of ensuring the environment’s health meets predetermined conditions. Monitoring and logging provide:

  • Threshold and performance measurements, making it easier to make decisions about app or environment infrastructure modifications.
  • Alerts to potential failures or resource depletion events. If a particular preset limit is crossed, the monitoring software can issue an alert or even handle the event itself using predefined event rules.
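The alerting behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any specific monitoring tool’s API; the metric names and limits are invented for the example.

```python
# Minimal sketch of threshold-based monitoring: compare each metric
# reading against a preset limit and collect alerts for any breach.
# Metric names and limit values are illustrative only.

def check_thresholds(metrics, limits):
    """Return a list of alert messages for any metric over its preset limit."""
    alerts = []
    for name, value in metrics.items():
        limit = limits.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds limit {limit}")
    return alerts

# Example readings from a production app container.
metrics = {"cpu_percent": 92, "memory_percent": 70, "disk_percent": 40}
limits = {"cpu_percent": 85, "memory_percent": 90, "disk_percent": 80}

for alert in check_thresholds(metrics, limits):
    print(alert)
```

In a real orchestration tool, crossing a limit would trigger an alert or a predefined event rule rather than a simple print.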

Analyzing Attributes

Virtualized containers are quite useful in DevOps for a number of reasons:

Static Environment: Containers provide a predetermined environment, sometimes referred to as a container image, that does not change over time (it is immutable). Created with preset library, operating system, and security settings, all container settings are recorded, and no software updates are issued within the image.

Version Control: Prior to moving a modified app container image into production, the container and its recorded configuration are registered with the version control system, which can contain previous app container images, including the one currently in production. Some companies use a manual version control system implemented via backups.

Replace Not Update: After registration, when the app container is ready to move into production, the environments are switched: instead of the production app container image being updated, the development app container image replaces the production container.
High Availability: Replication is the process of creating multiple copies of the production app container image and running them. With containers and replication, unused production containers can be stopped and replaced with new production app containers. As opposed to shutting down an application to update it, this methodology provides continual uptime for your app users.
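The replace-not-update idea with replication can be sketched as a simple rolling replacement: each old replica is swapped for one based on the new image, one at a time, so some replicas are always serving. The image tags and replica list are invented for illustration.

```python
# Hedged sketch of "replace, not update" with replication: rather than
# patching running replicas, each one is stopped and replaced by a
# replica built from the new immutable image, one at a time.

def rolling_replace(running, new_image):
    """Replace each running replica with one based on new_image, in place."""
    for i in range(len(running)):
        old = running[i]
        running[i] = new_image       # start a replica of the new image
        print(f"replaced {old} with {new_image}")
    return running

replicas = ["app:v1", "app:v1", "app:v1"]
rolling_replace(replicas, "app:v2")
print(replicas)
```

Because the remaining replicas keep serving while each one is swapped, users see continual uptime during the rollout.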

Data Center Provisioning

While container orchestration increases the speed and agility of application deployment, you can also use orchestration concepts to quickly set up your app’s required infrastructure. Doing so involves understanding the following concepts.

Coding the Container Infrastructure

Container infrastructure can be managed and controlled in a manner similar to how software revisions are treated.
Determine the Infrastructure: The environment in which a containerized app runs is preplanned by software developers and tech ops. This involves choosing the container’s operating system, libraries, services, security configuration, and any other supporting software or networking utilities. These settings are frozen to provide immutable development and production environments. Ideally, remote access services such as OpenSSH are disabled in order to protect the immutable environment.

Document the Infrastructure: The container infrastructure is typically documented through an orchestration tool. The configuration management and policy as code settings are loaded into the utility’s infrastructure as code portal, in a process called automated configuration management. The data is later used to deploy and replicate the app containers through build automation.

Provide Revision Control: The infrastructure as code information is also inserted into an orchestration tool registry to allow for version control. Every change to the container image infrastructure is tracked.

Troubleshoot the Infrastructure: If an app container is deployed into production and problems occur, one item to check is the production container’s documented configuration and revisions to determine if any infrastructure items may be to blame. Various orchestration tools allow a quick determination of modified infrastructure components and quicker problem resolution.
Handling the application’s infrastructure in this manner increases the agility of your app container deployment and improves your time to production.
Notice that at the end of a container’s life cycle (typically when the new app container image is ready), the container image is removed. However, the old app container image’s configuration should be stored within a version control system (or backup storage) so that it can be redeployed if any problems occur with the new app container image.
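The rollback path described above can be sketched as a tiny configuration registry. A plain dictionary stands in for the version control system here; the image tags and configuration fields are made-up examples.

```python
# Sketch of keeping each image's infrastructure-as-code record in a
# registry so an earlier configuration can be redeployed on rollback.
# Real orchestration tools persist this in an external registry; a
# dict stands in for it here.

registry = {}

def register(image_tag, config):
    """Record a container image's frozen configuration snapshot."""
    registry[image_tag] = dict(config)

def rollback(image_tag):
    """Return a stored configuration so the old image can be redeployed."""
    return registry[image_tag]

register("app:v1", {"os": "alpine:3.19", "port": 8080})
register("app:v2", {"os": "alpine:3.20", "port": 8080})

# The new image misbehaves in production: fetch the v1 record to redeploy.
print(rollback("app:v1"))
```

Because every revision is kept, troubleshooting can also diff the current record against earlier ones to spot which infrastructure item changed.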

Automating the Infrastructure

Your environment may require hundreds of running production app containers, so automated configuration management allows you to troubleshoot the infrastructure more easily, roll the environment back to an earlier version, and automate deployment.

Manually configuring infrastructure is tedious and neither fast nor cost effective. By using orchestration tools (such as Chef and Puppet) in conjunction with automated configuration management, you can replicate the production app container with build automation tools: simply let your orchestration tool know how many production app container images you need running at any one time.

Comparing Agent and Agentless

While orchestration monitoring, logging, and reporting tools track app containers’ health and performance, there can be a performance hit from using these tools, which gave rise to the agent versus agentless dispute. Some people feel that an agent is detrimental to an app container’s performance, while others see only minor effects. Some tech ops insist that agentless tools are inflexible, while others believe installing and maintaining an agent in their containers is an unnecessary hassle.

Agent monitoring tools:  These are orchestration utilities that require something called a software agent to be installed in the app container being monitored.

These agents collect the data and transmit it to another location, such as a monitor server. The monitor server manages the information, provides analysis reporting, and also sends alerts for events, such as a container crashing.

Agentless monitoring tools: Orchestration utilities that operate without an agent being installed in the app container being monitored. Instead, they use software already embedded in the container or in the container’s external environment to conduct their monitoring activity.

Most companies use a combination of agent and agentless orchestration tools.
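The two styles can be contrasted in a toy sketch: an agent runs inside the container and pushes readings out, while an agentless monitor polls the container from the outside. The class and metric names are invented; no real monitoring product is modeled here.

```python
# Toy contrast of agent vs. agentless monitoring. The "monitor server"
# is just a list collecting whatever data arrives, and the direction of
# each call shows the difference: the agent pushes, the monitor pulls.

class AgentInContainer:
    """Software agent installed inside the monitored container."""
    def __init__(self, server):
        self.server = server
    def push_metrics(self, cpu):
        self.server.append(("push", cpu))   # agent sends data outward

class AgentlessMonitor:
    """External utility that reads the container from outside."""
    def __init__(self, server):
        self.server = server
    def poll(self, container_cpu):
        self.server.append(("poll", container_cpu))  # monitor pulls data in

monitor_server = []
AgentInContainer(monitor_server).push_metrics(42)
AgentlessMonitor(monitor_server).poll(37)
print(monitor_server)
```

The trade-off in the text maps directly onto this shape: the agent costs resources inside the container, while the agentless monitor is limited to whatever it can observe from outside.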

Investigating the Inventory

Orchestration monitoring utilities can automatically deal with an app container’s untimely demise. When an app container shuts down, this triggers an event, and the desired state is no longer met. A desired state is a predetermined setting that declares how many containers should be deployed and running.

For example, imagine that your software application needs to have 10 production app containers running to efficiently handle the workload. If one of those containers crashes, the container inventory now switches to 9. This triggers an event in the monitoring utility that the desired state is no longer being met.

Many orchestration utilities employ self-healing. With self-healing, if the desired state is not currently being achieved, the orchestration tool can automatically deploy additional production app containers.   This means that the orchestration tool would immediately start up an additional production app container using the container’s stored configuration settings (build automation). 

When a new production app container is first deployed, the self-healing orchestration property will cause containers to be deployed automatically until the desired state is met. 
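The self-healing loop above can be sketched in Python. The desired state of 10 containers matches the example in the text; the container names and the reconcile function are illustrative, not any particular orchestration tool’s API.

```python
# Minimal self-healing sketch: compare the running container inventory
# with the desired state and start replacements (build automation)
# until the desired state is met again.

DESIRED_STATE = 10  # production containers that should be running

def reconcile(running):
    """Deploy new containers until the desired state is met; return how many started."""
    started = 0
    while len(running) < DESIRED_STATE:
        running.append(f"app-container-{len(running) + 1}")
        started += 1
    return started

containers = [f"app-container-{i}" for i in range(1, 11)]
containers.pop()              # one container crashes: inventory drops to 9
print(reconcile(containers))  # prints 1: one replacement was started
print(len(containers))        # prints 10: desired state restored
```

Real orchestration tools run this comparison continuously, so a crash triggers the replacement within moments rather than on demand.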

Container Orchestration Engines

Orchestration of containers, whether the containers are on your local servers or in the cloud, requires various orchestration engines (also called orchestration systems). No one system can do it all. The best combination is a set of general and specialized orchestration tools.

Embracing Kubernetes

Originally designed and used by Google, Kubernetes is an open-source orchestration system that is considered by many to be the de facto standard. Not only is Kubernetes very popular and free, it is also highly scalable, fault tolerant, and easy to learn. This system contains years of Google’s orchestration experience, and because it is open source, additional community-desired features have been added. This is one reason so many companies have adopted its use for container orchestration.

Each Kubernetes managed service or application has the following primary components:

  • Cluster service: Uses a YAML file to deploy and manage app pods
  • Pod: Contains one or more running app containers
  • Worker: Pod host system that uses a kubelet (agent) to communicate with cluster services
  • YAML file: Contains a particular app container’s automated configuration management and desired state settings

This distributed component configuration allows high scalability and great flexibility. It also works very well for continuous software delivery desired by companies employing the DevOps model.
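The YAML file component described above can be sketched as a Python dict with the same shape as a Kubernetes apps/v1 Deployment manifest. The field names follow the real Deployment API, but the app name, image, and replica count are made-up examples, and this is only a fragment, not a complete manifest.

```python
# Sketch of the desired-state settings a Kubernetes cluster service
# reads from a YAML file, expressed as a Python dict. Field names
# match the apps/v1 Deployment schema; values are illustrative.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 3,  # desired state: three running pods
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [
                    {"name": "example-app", "image": "example/app:1.0"}
                ]
            },
        },
    },
}

# The cluster service reads the desired replica count from the spec
# and keeps that many pods running on the workers.
print(deployment["spec"]["replicas"])
```

Each worker’s kubelet reports pod status back to the cluster service, which compares the inventory against `spec.replicas` and schedules replacements when they diverge.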

Inspecting Docker Swarm

Docker, the popular app container management utility, created its own orchestration system, called Docker Swarm (also called Swarm). A group of Docker containers is referred to as a cluster, which appears to a user as a single container. To orchestrate a Docker cluster, you can employ Swarm.
With the Swarm system, you can monitor the cluster’s health and return the cluster to the desired state should a container within the cluster fail. You can also deploy additional Docker containers if the desired app performance is not currently being met. Swarm is typically faster than Kubernetes when it comes to deploying additional containers.
While not as popular as the Kubernetes orchestration system, Docker Swarm has its place. It is often used by those who are new to orchestration and already familiar with Docker tools.

Surveying Mesos

Mesos (also called Apache Mesos) is not a container orchestration system. Instead, Apache Mesos, created at the University of California, Berkeley, is a distributed systems kernel. It is similar to the Linux kernel, except it operates at a higher construct level. One of its features is the ability to create containers. The bottom line is that Apache Mesos combined with another product, Marathon, does provide a type of container orchestration system framework. You could loosely compare Mesos with Marathon to Docker with Swarm.

Mesos with Marathon provides high availability and health monitoring integration and can support both Mesos and Docker containers. This orchestration framework has a solid history for large container deployment environments.
If you desire to find out more about Mesos with Marathon, don’t use search engine terms like Mesos orchestration. Instead, go straight to the source.
