Containers meet Microservices, DevOps, and the Internet-of-Things
Published Jun 16, 2016
To cope with an increasingly networked and interconnected world, industrial automation is evolving to incorporate many of the new or repurposed technologies underpinning general-purpose computing. For example, the Internet-of-Things (IoT) is especially interesting today because a number of converging technology trends are making useful solutions more practical: low-power and inexpensive processors for pervasive sensors, wireless networks, and the ability to store and analyze large amounts of data, both at the edge and in centralized data centers. IoT opens vast possibilities for information gathering and automation. This in turn gives rise to new opportunities to innovate, increase revenues, and gain efficiencies.
One of the key technologies being adopted to enable the easy deployment and isolation of applications running in both gateway devices and back-end servers is Linux containers. Containers provide lightweight and efficient application isolation and package applications together with any components they require to run; this avoids conflicts between apps that otherwise rely on key components of the underlying host operating system.
According to a Forrester Consulting Thought Leadership Paper commissioned by Red Hat, container benefits cover a wide range, with higher-quality releases (31 percent), better application scalability (29 percent), and easier management (28 percent) cited as the top three reasons to adopt containers. Forrester notes: “That the top benefits cited are so spread out is a testament to the broad appeal of containers to businesses with various objectives.”
Containers are part and parcel of the set of technologies and practices through which new applications are being developed in industrial automation and elsewhere. The lightweight isolation provided by containers allows them to be used to package up loosely-coupled services that may perform only a single, simple function such as reading a sensor, aggregating some data, or sending a message. Such small services, which can be developed and operated independently of one another, are often called “microservices.”
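To make that concrete, here is a minimal sketch (in Go, with hypothetical names) of what such a single-purpose service might look like: one HTTP endpoint that returns a simulated sensor reading as JSON. The endpoint path, sensor ID, and values are illustrative assumptions, not part of any particular product.

```go
// sensor.go - a minimal, hypothetical single-purpose microservice.
// It does one thing: report a (simulated) temperature reading as JSON
// over HTTP. Everything it needs ships inside its container image.
package main

import (
	"encoding/json"
	"log"
	"math/rand"
	"net/http"
)

type reading struct {
	SensorID string  `json:"sensor_id"`
	Celsius  float64 `json:"celsius"`
}

func main() {
	http.HandleFunc("/reading", func(w http.ResponseWriter, r *http.Request) {
		// A real deployment would query actual hardware; here we fake
		// a value to keep the example self-contained.
		rd := reading{SensorID: "line-3-temp", Celsius: 20 + rand.Float64()*5}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(rd)
	})
	log.Println("sensor service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Other services only ever see the HTTP interface; the service itself can be rewritten, rescheduled, or scaled without touching its neighbors.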
Microservices can avoid many of the pitfalls of more monolithic and complex applications in that the interfaces between the different functions are cleaner and services can be changed independently of each other. Services are, in effect, black boxes from the perspective of other services. So long as their public interfaces don’t change and they perform the requested task, they can be changed in any way the developer sees fit. Other services don’t know--and should not know--anything about the inner workings of the service.
These clean interactions make it easier for small teams to work on individual services, test them, and do rapid and iterative releases. This, in turn, makes it easier to implement DevOps, which is an approach to culture, automation, and system design for delivering increased business value and responsiveness through rapid, iterative, and high-quality service delivery. Thus containers, microservices, and DevOps--while, in principle, independent--mutually support and enable each other, creating a more flexible and efficient infrastructure, applications that make the best use of that infrastructure, and a process and culture that develops and deploys those applications quickly and with high quality.
For example, the aforementioned Forrester Consulting study also found that containers provide an “easier path to implementing DevOps,” especially in concert with additional tools. Forrester wrote that “organizations with configuration and cluster management tools have a leg up on breaking down silos within the software development life cycle.” Almost three times as many organizations using such tools (42 percent vs. 15 percent) identified themselves as aligned with DevOps, compared with organizations using containers alone.
From a technical perspective, services running in Linux containers are isolated within a single copy of the operating system running on a physical server (or, potentially, within a virtual machine). This approach stands in contrast to hypervisor-based virtualization in which each isolated service is bound to a complete copy of a guest operating system, such as Linux. The practical result is that containers consume very few system resources such as memory and impose essentially no performance overhead on the application.
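As a rough, Linux-only sketch of that distinction, the Go fragment below starts a shell in its own UTS and PID namespaces: the child still shares the host kernel but gets its own hostname and process-ID view. This is a deliberate simplification of what real container runtimes do (no mount namespace, cgroups, or root filesystem), and it typically needs root privileges to run.

```go
// ns.go - a deliberately minimal sketch of namespace-based isolation
// (Linux only, typically run as root). Real container runtimes also set
// up mount namespaces, cgroups, and a root filesystem; this shows only
// the core idea: the child shares the host kernel but gets its own UTS
// (hostname) and PID view.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	// Inside the shell, `echo $$` prints 1 and changing the hostname
	// does not affect the host -- yet no guest OS or hypervisor exists.
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```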
One of the implications of using containers is that the operating system copies running in a given environment are essentially acting as a sort of common shared platform for all the applications running above. The operating system kernel is shared among the containers running on a system while the application dependencies are packaged into the container.
The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it is no less important for that change. In fact, because the operating system provides the framework and support for all the containers sitting above it, it plays an even greater role than it did under hardware server virtualization, where the host was a hypervisor.
All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation than in the case where a hypervisor is handling some of those tasks. This means, for example, that you should use available Linux operating system capabilities, such as SELinux, to follow best practices for running containerized services just as if they were running on a conventional bare metal host. In practice, that means doing things like dropping privileges as quickly as possible and running services as non-root whenever possible.
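As a rough illustration of the “drop privileges as quickly as possible” point, the sketch below (assuming a recent Go toolchain on Linux) binds a privileged port while still running as root and then switches to an unprivileged UID/GID before serving any traffic. The UID/GID value of 65534 (“nobody” on many distributions) is an assumption made for the example.

```go
// dropprivs.go - a sketch of "drop privileges as quickly as possible"
// for a containerized service: bind the privileged port first, then
// switch to an unprivileged user before touching untrusted input.
package main

import (
	"log"
	"net"
	"net/http"
	"syscall"
)

func main() {
	// Privileged step: bind port 80 while we still have permission.
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}

	// Drop privileges: group first, then user (order matters).
	// 65534 is "nobody" on many distributions -- an illustrative choice.
	if err := syscall.Setgid(65534); err != nil {
		log.Fatal(err)
	}
	if err := syscall.Setuid(65534); err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("running unprivileged\n"))
	})
	log.Fatal(http.Serve(ln, nil))
}
```

Better still is to run the whole container as a non-root user and grant only the capabilities it needs, leaving SELinux and the host kernel to enforce the boundary.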
We’re also moving toward a future in which the operating system explicitly deals with multi-host applications, serving as an orchestrator and scheduler for them. This includes modeling the app across multiple hosts and containers and providing the services and interfaces to place the apps onto the appropriate resources. In other words, Linux is evolving to support an environment in which the “computer” is increasingly a complex of connected systems rather than a single discrete server.
This represents an ongoing abstraction of the operating system; we’re moving away from the handcrafted and hardcoded operating system instances that accompanied each application instance--just as we previously moved away from operating system instances lovingly crafted for each individual server. Applications that depend on this sort of extensive operating system customization are not a good match for a containerized environment. One of the trends that makes containers so interesting today, in a way that they were not (beyond a niche) a decade ago, is the wholesale shift toward more portable and less stateful application instances. The operating system’s role remains central; it’s just that you’re using a standard base image across all of your applications rather than taking that standard base image and tweaking it for each individual one.
In addition to the operating system’s role in securing and orchestrating containerized applications in an automated way, it is also important for providing consistency (and therefore portability). For example, true container portability requires being able to deploy across physical hardware, hypervisors, private clouds, and public clouds. It requires safe access to digitally signed container images that are certified to run on certified container hosts. It requires an integrated application delivery platform built on open standards, from application container to deployment target.
Add it all together and applications are becoming much more adaptable, much more mobile, much more distributed, and much more lightweight. Their placement and provisioning are becoming more automated. They’re better able to adapt to changes in infrastructure and process flow driven by business requirements.
And this requires the operating system to adapt as well, while building on and making use of existing security, performance, and reliability capabilities. Linux is doing so in concert with other open source communities and projects, not only to run containers but to run them in a portable, managed, and secure way.
Posted by
VMD - [Virtual Marketing Department]