Containerizing and Orchestrating Apps with GKE
In this final topic you'll learn how to leverage Google Kubernetes Engine (GKE). You've already discovered the spectrum between infrastructure as a service and platform as a service, and you've learnt about Compute Engine, the infrastructure-as-a-service offering of GCP, with access to servers, file systems and networking. Now you'll see an introduction to containers and GKE, a hybrid that conceptually sits between the two: it offers the managed infrastructure of infrastructure as a service with the developer orientation of platform as a service. GKE is ideal for those who have been challenged when deploying or maintaining a fleet of VMs and have determined that containers are the solution.
It's also ideal when organisations have containerised their workloads, need a system on which to run and manage them, and don't have dependencies on kernel changes or on a specific non-Linux operating system. With GKE there's no need to ever touch a server or infrastructure. So how does containerisation work? Infrastructure as a service allows you to share compute resources with other developers by virtualising the hardware using virtual machines. Each developer can deploy their own operating system, access the hardware and build their applications in a self-contained environment, with access to their own runtimes and libraries as well as their own partitions of RAM, file systems, networking interfaces and so on.
You have your tools of choice on your own configurable system, so you can install your favourite runtime, web server, database and middleware, configure the underlying system resources such as disk space, disk I/O and networking, and build as you like. But flexibility comes with a cost: the smallest unit of compute is an app with its VM. The guest OS may be large, even gigabytes in size, and takes minutes to boot. As demand for your application increases, you have to copy an entire VM and boot the guest OS for each instance of your app, which can be slow and costly.
A platform as a service provides hosted services and an environment that can scale workloads independently. All you do is write your code in self-contained workloads that use these services and include any dependent libraries. Workloads do not need to represent entire applications; they are easier to decouple because they're not tied to the underlying hardware, operating system or much of the software stack you would otherwise have to manage.
As demand for your app increases, the platform scales your apps seamlessly and independently by workload and infrastructure. This scales rapidly and encourages you to build your applications as decoupled microservices that run more efficiently, but you won't be able to fine-tune the underlying architecture to save cost. That's where containers come in. The idea of a container is to give you the independent scalability of workloads of a platform as a service and the abstraction layer of the operating system and hardware of an infrastructure as a service.
A container only requires a few system calls to create and it starts as quickly as a process. All you need on each host is an OS kernel that supports containers and a container runtime. In a sense you are virtualising the operating system: it scales like platform as a service but gives you nearly the same flexibility as infrastructure as a service. Containers provide an abstraction layer over the hardware and operating system, an invisible box with configurable access to isolated partitions of the file system, RAM and networking, as well as a fast start-up with only a few system calls. Using a common host configuration, you can deploy hundreds of containers on a group of servers. If you want to scale, for example, a web server, you can do so in seconds and deploy any number of containers, depending on the size of your workload, on a single host or a group of hosts. You'll likely want to build your applications using lots of containers, each performing its own function like microservices. If you build them this way and connect them with network connections, you can make them modular, deploy them easily and scale them independently across a group of hosts, and the hosts can scale up and down and start and stop containers as demand for your app changes or as hosts fail.
With a cluster you can connect containers using network connections, build code modularly, deploy it easily and scale containers independently for maximum efficiency and savings. Kubernetes is an open-source container orchestration tool you can use to simplify the management of containerised applications. You can install Kubernetes on a group of your own managed servers, or run it as a hosted service in GCP on a cluster of managed Compute Engine instances called Google Kubernetes Engine. Kubernetes makes it easy to orchestrate many containers on many hosts, scale them as microservices, and deploy rollouts and rollbacks. Kubernetes was built by Google to run applications at scale. It lets you install the system on local servers or in the cloud, manage container networking and storage, deploy rollouts and rollbacks, and monitor and manage container and host health.
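To make "orchestrating many containers on many hosts" concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The names (web-deployment, web) and the nginx image are illustrative placeholders, not part of the course:

```yaml
# Minimal illustrative Deployment; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # Kubernetes keeps three copies of the container running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image works here
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three replicas running, restarting or rescheduling containers when a host fails; a rolling update or rollback is essentially a change to the image field, which Kubernetes rolls out across the cluster for you.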
Just like shipping containers, a software container makes it easier for teams to package, manage and ship their code. Applications that run in a container are portable: the container provides the operating system needed to run the application, and the container will run on any container platform. This can save a lot of time and cost compared to running servers or virtual machines. Like a virtual machine imitates a computer, a container imitates an operating system. Everything at Google runs on containers: Gmail, web search, Maps, MapReduce, batch processes, the Google File System, Colossus, even Cloud Functions, which are VMs in containers. Google launches over 2 billion containers per week. Docker is the tool that puts the application and everything it needs in the container.
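As a rough illustration of how Docker packages "the application and everything it needs", here is a minimal hypothetical Dockerfile for a small Python web app. The base image, file names and port are assumptions for illustration only:

```dockerfile
# Illustrative Dockerfile; base image, file names and port are assumptions.
FROM python:3.12-slim        # start from a small base image that provides the runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bake dependencies into the image
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]     # the command run when a container starts
```

Building this with `docker build -t my-app .` produces a single image bundling the code, runtime and libraries, and `docker run -p 8080:8080 my-app` starts it in seconds on any host with a container runtime.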
Once the application is in a container, it can be moved anywhere that will run Docker containers: a laptop, a server or a cloud provider. This portability makes code easier to produce, manage, troubleshoot and update. For service providers, containers make it easy to develop code that can be ported to the customer and back. Kubernetes is an open-source container orchestration tool for managing a cluster of Docker Linux containers as a single system. It can be run in cloud and on-premises environments, and it's inspired and informed by Google's experience and internal systems. GKE is a managed environment for deploying containerised apps. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations and open-source flexibility to accelerate time to market. GKE is a powerful cluster manager and orchestration system for running Docker containers in Google Cloud. GKE manages containers automatically based on specifications such as CPU and memory. It's built on the open-source Kubernetes system, making it easy for users to orchestrate container clusters or groups of containers, and because it's built on open-source Kubernetes, it gives customers the flexibility to take advantage of on-premises, hybrid or public cloud infrastructure.
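For orientation, a basic GKE workflow with the gcloud and kubectl command-line tools looks roughly like the sketch below. The cluster name, zone, node count and image are placeholders, and the commands assume a configured GCP project, so treat this as a sketch rather than a definitive recipe:

```shell
# Create a managed Kubernetes cluster of Compute Engine instances
# (cluster name, zone and node count are placeholders).
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Run a containerised workload and scale it; GKE schedules the
# containers across the cluster's nodes automatically.
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=5
kubectl expose deployment web --type=LoadBalancer --port=80
```

Because GKE manages the control plane and nodes for you, scaling, health monitoring and rolling updates are handled by the platform rather than by hand-managed VMs.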