How to Tie Hardware Together With Containers and the Hybrid Cloud
Executive Summary
- The different hardware modalities can, in theory, be tied together.
- One of the answers is Kubernetes, but this article explains why this is an oversimplification.
Introduction
Up to this point, we have covered the four primary hardware modalities. The natural question, after analyzing each environment, is how they can be integrated. We currently have different hardware modalities: some relatively stable (the mainframe), some in decline (on-premises proprietary servers), and some rising (the cloud). Increasingly, all of these environments run Linux and open-source operating systems. A few proprietary server operating systems persist, Solaris for instance, but even there the writing is on the wall. And this is not because of monopolistic tricks or lock-in, which is the reason Windows is dominant on the desktop, but because of the merits of Linux. Hardware that does not run Linux will increasingly pay a penalty in the future. Linux creates the potential for portability of applications and databases between hardware platforms.
The second development in the portability of software between hardware modalities (and within a modality, for instance, moving code between different cloud providers) is containers and Kubernetes. The ultimate desirable state is applications that can span hardware platforms and locations: some containers run on AWS and connect to on-premises containers; containers are brought up on Google Cloud and then, due to price constraints (for example), moved out to DigitalOcean.
It is a great vision, but there are all kinds of complications and pitfalls before it will happen. Kubernetes seems very attractive to on-premises IT departments because it allows them to justify their present IT investments (something all IT departments love).
The Applicability of Kubernetes
Currently, there is rapidly rising interest in Kubernetes. Kubernetes is a way of managing containers. A container is a discrete, encapsulated user-space environment that packages an application's code and dependencies while sharing the host operating system's kernel, allowing that code and its dependencies to move between physical or virtual servers.
Containers also benefit from the automated management that container management systems like Docker provide. Docker is widely credited with significantly simplifying the use of containers and increasing their general adoption. The management capabilities that come with Docker provide a further incentive to use containers.
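To make the kind of lifecycle management Docker automates concrete, the following is a minimal sketch using the Docker SDK for Python (the "docker" package). The SDK, the image tag, and the workflow shown are our illustrative assumptions, not something drawn from the sources cited in this article, and they presume a local Docker daemon is running.

```python
# Minimal sketch: pulling an image, running a container, and listing
# managed containers through the Docker SDK for Python.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a container from an image; Docker handles image layers, filesystem
# isolation, and process supervision, and returns the container's output.
output = client.containers.run("alpine:3.19", "echo hello from a container")
print(output.decode().strip())

# List the containers the daemon is currently managing.
for container in client.containers.list(all=True):
    print(container.short_id, container.image.tags, container.status)
```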
The rise of microservices and containers is evidenced by the fact that Docker is one of the fastest-growing products in programming circles. The overall use of containers, whether with Docker, Kubernetes, or the AWS container service, is seeing enormous growth.
“Overall Docker adoption increases to 49 percent from 35 percent in 2017 (a growth rate of 40 percent). Kubernetes sees the fastest growth, almost doubling to reach 27 percent adoption. The AWS container service (ECS/EKS) leads among the cloud provider’s container-as-a-service offerings at 44 percent adoption.”[1]
Kubernetes is an open-source orchestration project, begun by Google, for managing containers.[1]
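As a sketch of what "orchestration" means in practice, the snippet below uses the official Kubernetes Python client (the "kubernetes" package) to ask a cluster what it is managing. The client library and the assumption that a kubeconfig already points at a working cluster are ours, added for illustration; they are not part of the article's referenced material.

```python
# Minimal sketch: querying a Kubernetes cluster for the pods and nodes it
# orchestrates, using the official Python client.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

v1 = client.CoreV1Api()

# Kubernetes' job is orchestration: it tracks which pods (groups of
# containers) run on which nodes and reconciles the desired state.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```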
Cisco, one of the top on-premises hardware providers, now offers Kubernetes support to enable hybrid clouds.[1] [2]
Cisco declares the logic for introducing its Hybrid Solution for Kubernetes on AWS as follows:
“Containers and Kubernetes have emerged as key technologies to give developers more agility, portability, and speed — both in how applications are developed and in how they are deployed. But, enterprises have been struggling to realize the full potential of these technologies because of the complexity of managing containerized applications in a hybrid environment. Organizations need to work across siloed environments, technologies, teams, and vendors to glue all the parts of a hybrid infrastructure together themselves. Properly configuring Kubernetes to deploy applications on-premises and in the public cloud requires custom integrations that can be an operational challenge.”[3]
Cisco makes the integration between on-premises environments and the cloud (in this case, AWS) look extremely straightforward.[1]
Google's documentation includes a diagram showing how to use Kubernetes across a combination of Google Cloud, on-premises infrastructure, and another public cloud.[1]
The Kubernetes documentation is unambiguous that it is not a platform as a service (PaaS). This is because Kubernetes operates at the container level, not the hardware level.
Like any technology, Kubernetes has positives and negatives. Several things that Kubernetes has going for it are the following:
- Google Tested: Google already tests Kubernetes by using it to help manage its internal environments. Although, as we will see, just because Google can pull something off does not mean everyone else can.
- Big Supporter: Kubernetes was both originated by and is pushed by Google.
- Open Source: While created chiefly by Google, Kubernetes is open source. This means it is both free and able to run on any cloud or other hardware modality without issue.
But there is also a major issue with using Kubernetes to tie applications together across the various hardware modalities. It has to do with the fact that the vast majority of applications were developed before Kubernetes and containers existed.
The Issues That Will Place an Upper Limit on the Adoption of Kubernetes
There are two primary problems with Kubernetes. The first is the same problem that restricts the adoption of containers: the undeniable fact that most applications are still monolithic. Being monolithic means that the application has not been designed around a microservices architecture. The traditional design, a monolithic application supported by a single database (usually an RDBMS), is called the monolithic approach or architecture, and it is a particular way of developing software.
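The following toy sketch illustrates the monolithic pattern described above: every business function lives in one codebase and talks to one shared database. The module names, schema, and use of SQLite are hypothetical, chosen only to keep the illustration self-contained.

```python
# Toy sketch of a monolith: all business functions in one codebase,
# all coupled to a single shared relational database.
import sqlite3

db = sqlite3.connect("erp.db")  # the one shared database
db.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")

def create_user(name):            # user management ...
    db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    db.commit()

def place_order(user_id, total):  # ... order handling ...
    db.execute("INSERT INTO orders (user_id, total) VALUES (?, ?)", (user_id, total))
    db.commit()

def monthly_report():             # ... and reporting, all sharing one schema
    return db.execute("SELECT user_id, SUM(total) FROM orders GROUP BY user_id").fetchall()
```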
A microservice design breaks up the codebase into microservices, each with its own database. This is how multi-base development works in many companies.[1]
Microservices are a way of breaking up the functionality generally contained within a monolithic application and monolithic database into smaller pieces, increasing the variety of coding languages, databases, and associated tools.
This is explained nicely in the following quotation.
“First, to be precise, when talking about microservices, we are actually referring to a microservice architecture. This architecture type is a particular way of developing software, web or mobile applications as suites of independent services – a.k.a microservices. These services are created to serve only one specific business function, such as: user management, user roles, e-commerce cart, search engine, social media logins, etc. Furthermore, they are independent of each other, meaning they can be written in different programming languages and use different data storages. The centralized management is almost non-existent and the microservices use lightweight HTTP, REST or Thrift APIs for communicating between themselves.”[2]
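To contrast with the monolith sketch above, here is a minimal sketch of a single microservice of the kind the quotation describes: it owns exactly one business function (a shopping cart) and its own datastore, and exposes it over a small HTTP API. The use of Flask, SQLite, and all names and routes are our illustrative assumptions, not taken from the quoted source.

```python
# Minimal sketch of one microservice: one business function, its own
# datastore, and a small HTTP API for other services to call.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
db = sqlite3.connect("cart.db", check_same_thread=False)  # this service's own database
db.execute("CREATE TABLE IF NOT EXISTS cart_items (user_id TEXT, sku TEXT, qty INTEGER)")

@app.route("/cart/<user_id>", methods=["GET"])
def get_cart(user_id):
    rows = db.execute("SELECT sku, qty FROM cart_items WHERE user_id = ?", (user_id,)).fetchall()
    return jsonify([{"sku": sku, "qty": qty} for sku, qty in rows])

@app.route("/cart/<user_id>", methods=["POST"])
def add_item(user_id):
    item = request.get_json()
    db.execute("INSERT INTO cart_items VALUES (?, ?, ?)", (user_id, item["sku"], item["qty"]))
    db.commit()
    return jsonify({"status": "added"}), 201

if __name__ == "__main__":
    app.run(port=5001)  # user management, search, orders, etc. run as separate services
```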
With microservices, modularity is increased, as is developer specialization: each microservice has its own development effort, concentrated on the technologies and even the business processes of that microservice. New applications and new development will increasingly be based upon microservices. Old applications are also being refactored into microservices to receive their benefits. Microservices are the opposite of the monolithic design approach. Yet the vast majority of applications currently in service are built on the monolithic design, and they cannot use containers, Kubernetes, or OpenShift for the hybrid cloud until they are refactored (that is, broken apart and rewritten) into a microservices architecture. How much work is that?[3] Let us see the following quotation.
“It is not an easy transition and it takes time to change from monolithic to microservice architecture. It requires a lot of upfront work (refactoring, thinking, building, automating, deploy) but the complexity of your application does not disappear, it just moves. Microservices are not a good fit for everything, so please think twice before making the transition in order not to waste time.
Most importantly, you will need to change your organization’s structure if you want to succeed. You need to think agile and need to make small steps to deliver the changes to your customers as fast as possible.” [4]
That means Kubernetes cannot take over the world until applications are refactored, which will take quite a bit of time. Most articles on Kubernetes seem to circumvent the whole microservices question and jump right into the hybrid cloud discussion. This is what IBM is trying to do: make the overall enterprise software space about containers when only a small percentage of applications follow this pattern. IBM will publish article after article about how companies must move to a microservices architecture.
Why? Because it is true? No.
Because IBM now owns OpenShift through its Red Hat acquisition, and the more companies make this move and invest in refactoring, the better it is for IBM.
The following quotation from Cisco is a perfect example of this type of overestimation of Kubernetes.
“Containers solved the application portability problem of packaging all the necessary dependencies into discrete images, and Kubernetes has emerged as the defacto standard for how those containers are orchestrated and deployed.”[5]
There is a big “IF” there. The quotation is true IF the application is developed as microservices. If it is not, Kubernetes will not be useful in running the application in the hybrid cloud.
Cisco continues…
“By adopting containers and Kubernetes, IT and Line of Business users can focus their efforts on developing applications, rather than infrastructure and ‘plumbing’. Because Kubernetes is available everywhere, one can choose the best place to run an application based on business needs. For some applications, the scale and reach of the public cloud, along with its huge number of services available, will be the determining factor. For others, data locality, security or other concerns dictate an on-premises deployment.”[6]
Sounds great, but let us not get ahead of ourselves and trick ourselves into thinking that all applications will be instantly refactored to run in containers.
There is a second problem with Kubernetes: it is quite complicated. Google developed it for its internal operations, where a single request may be satisfied by 10,000 or more servers. Google's research in this area is extremely complex, as the paper Large-scale Cluster Management at Google with Borg explains.[7]
“Google’s Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.”[8]
The reality of Kubernetes is nothing like how it is frequently explained in IT media, where it is often described as some big Lego set that will speed and simplify the move to the hybrid cloud!
There is a cottage industry around declaring what Google, Amazon, and Facebook are doing in technology (much like the manufacturing consulting cottage industry built up around Toyota, which we covered in the article How the Toyota Production System Was Misrepresented to US Audiences).[9] [10] This type of Google worship has been very prominent in the area of artificial intelligence. However, these companies can attract the best talent, they have enormous resources, and, outside of Amazon, they have no business to manage other than technology. Software is their business. Most companies, by contrast, use software to get things done for their real business. They also have nothing like the interest in, or capacity for, research and testing that Google, Amazon, and Facebook have.
A potential book title we had considered was “Just Because Google Can Do Something, Does Not Mean You Can.” Not very inspirational, but quite true.
The issue with Kubernetes is that there are not many people who understand how it works at the level required to take advantage of it. The argument is that AWS, Google Cloud, and Azure will most likely be the most advanced users of Kubernetes for some time into the future.
However, these realities will not stop people from selling, or overselling, Kubernetes, which will lead to what we predict will be many failed Kubernetes projects. This is the pattern in IT. Whatever is new in technology is latched onto by consulting firms that have only a very hazy understanding of it, particularly at the senior levels. These, mostly sales types, then sell something new that they barely understand to corporate buyers who also have little knowledge of the original item. Trying to provide technical advice to consulting firms on technology is a sobering experience: any information that does not support the sales initiative is batted away.
The Reality of the Four Hardware Modalities
Kubernetes is an excellent tool for increasing hardware portability. However, while the marketing of many consulting firms and vendors will paper over the differences, not all of the hardware modalities are inherently compatible. The most compatible are cloud commodity servers and proprietary on-premises servers. As we covered earlier, mainframes are optimized for specialized processing and are better at that processing than any other hardware modality. Moreover, mainframes already run at high utilization levels (of course, more mainframes can be purchased). The second hardware modality that stands apart is appliances. Appliances like Exadata are designed to run specific products from one vendor. Therefore, the question of which modalities can run hybrid workloads is more complicated than the marketing suggests. Secondly, not all services run, or are equally available, on all of the hardware modalities. Cloud services offer by far the highest degree of choice. Just because a service is available in the cloud does not mean it can be spun up on private on-premises proprietary servers.
References
[1] This is one of the best graphics we have seen that explains microservices, and it is from Microtica. https://www.microtica.com/2016/10/discovering-microservices/
[2] https://dzone.com/articles/what-are-microservices-actually
[3] https://us.arvato.com/blog/refactoring-monolith-to-microservices/
[4] https://bitninja.io/blog/2017/02/06/from-monolith-to-microservices-in-10-steps
[5] https://blogs.cisco.com/cloud/simplifying-container-orchestration
[6] https://blogs.cisco.com/cloud/simplifying-container-orchestration
[7] https://storage.googleapis.com/pub-tools-public-publication-data/pdf/43438.pdf
[8] https://storage.googleapis.com/pub-tools-public-publication-data/pdf/43438.pdf
[9] https://www.brightworkresearch.com/productionplanningandscheduling/2016/09/02/is-toyota-really-a-flexible-manufacturer/
[10] Because of the financial incentives, a great deal of false information has been communicated about the Toyota Production System, and parts of the TPS that were considered unappealing to US executives were censored from books and explanations.
[1] https://cloud.google.com/solutions/heterogeneous-deployment-patterns-with-kubernetes
[1] https://www.cisco.com/c/dam/en/us/products/collateral/cloud-systems-management/hybrid-solution-kubernetes-on-aws/at-a-glance-c45-741541.pdf
[1] https://www.cisco.com/c/en/us/products/cloud-systems-management/hybrid-solution-kubernetes-on-aws/index.html
[2] (The Cisco Hybrid Solution for Kubernetes on AWS will be available next month. Cisco is offering the solution starting at $65,000 a year.) – https://www.fiercetelecom.com/telecom/cisco-pushes-kubernetes-to-hybrid-cloud-amazon-web-services
[3] https://www.cisco.com/c/en/us/products/cloud-systems-management/hybrid-solution-kubernetes-on-aws/index.html
[1] https://www.rightscale.com/blog/cloud-industry-insights/cloud-computing-trends-2018-state-cloud-survey
[1] https://resources.codeship.com/hubfs/Codeship_Continuous_Deployment_for_Docker_Apps_to_Kubernetes.pdf