‘Container as a Service’ (CaaS) is a Beta service that delivers all the necessary material to run dockerized services inside an OpenStack KVM environment: flexibility thanks to the CloudWatt IaaS, security thanks to the OpenStack virtualized network, and simplicity thanks to the content of this bundle. Please watch the why-devops-with-dockerized-micro-services video to understand the DevOps move!
The promises are:
- One click & 5 minutes: you have your Docker-based CaaS infrastructure.
- One click & 3 minutes: you have your first container cluster (Magnum Bay) fitted with Docker Swarm or Google Kubernetes as COE (Container Orchestration Engine).
- A few clicks: you integrate CaaS into your existing Jenkins chain in order to run your DevOps workflow.
CaaS is an easy-to-deploy bundle, available from the CloudWatt Application store. Three major components are delivered:
- A KVM instance that provides a ‘Private Docker registry’ to store your future Docker images
- A KVM instance that provides a preconfigured ‘Build server’ to build your future Docker images
- A KVM instance that provides the CaaS backend (available via API) and the user interface, available via HTTP.
Thanks to this CaaS environment, you will benefit from the OpenStack/Magnum project, which encapsulates the management of Docker container clusters for Kubernetes or Docker Swarm^tm^. A ‘Cluster’ (‘Magnum Bay’) is a set of one ‘Master’ KVM instance, which delivers the API and UI for the selected Docker COE (Container Orchestration Engine), plus one or several ‘Node’ KVM instance(s) that host the customer’s containers.
If you select a Kubernetes cluster as COE when creating a ‘MagnumBay Model’, some complementary tools will be available:
- an embedded Kubernetes Dashboard to track the running containers
- an embedded ELK (ElasticSearch / Kibana) logging system
- a collectd / Grafana analytics service
As ‘Customer Admin’, after the ‘One-Click’ deployment of the CaaS bundle in your existing OpenStack tenant, you will:
- Allocate your first ‘BayModel’ and your ‘Bay’: see ‘1’.
- Then configure your DevOps tooling (see Jenkins, with an example of an Ansible ‘cookbook’ provided by CaaS): see ‘2’.
Then, for every dockerized application, the customer DevOps team will develop and deploy the application with CaaS according to various roles: see ‘2’.
- As customer senior developer, you will define the descriptors of the dockerized application:
  - The classical POM.XML file, used by Jenkins to build your application from the source code version-controlled in Git.
  - The ‘Dockerfile’ that lists the content of your future Docker images
  - The YML COE descriptor, so that the selected Kubernetes or Swarm orchestrator will understand how to deploy or update your application inside a Bay
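As a purely illustrative sketch of the second descriptor (the base image tag, the WAR name and the paths are assumptions, not taken from the bundle), such a ‘Dockerfile’ for the Tomcat-based case could look like:

```shell
# Write a hypothetical Dockerfile for a Tomcat-based image.
# The image tag and the WAR file name are illustrative placeholders.
cat > Dockerfile.example <<'EOF'
FROM tomcat:latest
# Deploy the WAR produced by the Jenkins/Maven build into Tomcat's webapps folder
COPY target/petclinic.war /usr/local/tomcat/webapps/petclinic.war
EXPOSE 8080
EOF
```

The real descriptors shipped in PetClinic_sample.zip are the reference; this only shows the general shape.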
As customer integrator, you will optionally adapt the Jenkins job in order to benefit from a partial job that only deploys an application, using either the latest generated ‘PetClinic’ image or a previous one, to test non-regressions and new features.
As customer DevOps team member (and especially as one of the OPS members), you will benefit from the Kubernetes add-ons (logging, metering and dashboarding) and from CaaS auto-monitoring (with Zabbix-based monitoring for the CaaS elements and, optionally, the customer’s containers): see ‘3’.
As customer team member, you will benefit from the automatic Kubernetes features for auto-scaling your containers up and down (if detailed inside your application’s Kubernetes descriptor), as well as auto-repair (‘Self-healing’): see ‘4’.
These should be routine by now:
- Internet access
- CloudWatt credentials and a valid KeyPair for the future KVM instances
- The knowledge of how to use the CW AppStore: let’s one-click on ‘deploy’!
The ‘One-Click’ bundle is packaged as an OpenStack Heat stack. By default, the three CaaS instances will be allocated with the ‘m1.small’ flavor. We recommend that you do not choose a smaller flavor. A Cinder volume is attached to each instance in order to keep the persistent data.
Once you have cloned the GitHub repository from /cloudwatt/applications/application-caas, you will find:
- application-caas_beta1.0.heat.yml: the HEAT orchestration template. It will be used to deploy the necessary infrastructure.
- PetClinic_sample.zip: a Java Spring example, to build and deploy a Tomcat-based Docker image and use an existing MySQL Docker image
- Tweet_sample.zip: a Cloud Native application as a composite of existing Docker images
- README.md and README-EN.md (this current document)
- CaaS_howToTroubleshoot.pdf: future document.
==>See one-click-caas-deployment video
CaaS starts with the CloudWatt 1-click, via the Apps page on the CloudWatt website. Choose the CaaS app and press DEPLOY. After entering the login / password of your account, the wizard appears:
As you may have noticed, the 1-Click wizard asked you to re-enter your OpenStack password.\ By default, the wizard selects the ‘m1.small’ flavor. A variety of other instance types exist to suit your various needs, allowing you to pay only for the services you need.\ Instances are charged by the minute and capped at their monthly price (you can find more details on the Pricing page of the CloudWatt website). Please remember that you are providing your KeyPair, which will be used to post-configure the three future KVM instances: this will be your way to SSH into those instances for troubleshooting, or to grant your colleagues access to them if required.\ /!\ On the CloudWatt IaaS, please leave the proxy attributes empty if you use the Internet exposure.
Press DEPLOY. The 1-click handles the launch of a Heat stack that triggers the allocation of the three instances and the related OpenStack elements (Cinder volumes, Neutron internal private network…).
You can follow its progression by clicking on its name, which will take you to the Horizon console. When all modules become “green”, the creation is finished.
You can then find three URLs that are accessible via 3 floating IPs allocated by the 1-click:
- ‘Magnum_public_ip’, which provides access to the CaaS portal, based on Magnum_UI.
- ‘PrivateRegistry UI’, to be used with the appropriate certificate and the related login / password, see below.
- ‘Zabbix UI’, to be used to access the embedded self-monitoring tool with the related login / password, see below.
In the standard Horizon/Orchestration console, you can dive into the stack outputs (you will retrieve the previous 3 URLs, as well as the generated password that will be used during the end of the CaaS setup).\ … Keep in mind this stack’s auto-generated password!!!
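If you prefer the command line, the same outputs can in principle be read with the Heat client; a dry-run sketch (the stack name is a placeholder, and the commands are only printed here, not executed):

```shell
show() { echo "+ $*"; }               # dry-run helper: print the command instead of executing it
STACK=caas-stack                      # placeholder: use your actual stack name from Horizon
show heat output-list "$STACK"        # lists the stack outputs (the 3 URLs and the password)
show heat output-show "$STACK" password
```

Remove the `show` prefix to run the real commands against your tenant (the heat CLI must be installed and your OpenStack credentials sourced).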
==>Second part of the one-click-caas-deployment video
As detailed in this video, please follow the ‘Getting started’ procedure after authenticating on the dedicated CaaS portal (trick: these are your standard CloudWatt credentials, as CaaS is federated with the OpenStack/Keystone authentication system).
Finishing the setup of the CaaS infrastructure is simple: 1) Access the Magnum URL in order to log in. Use your standard CW credentials, as this new ‘over the IaaS’ service is connected to the CloudWatt ‘Keystone’ authentication service. 2) On the ‘CaaS/Getting started’ page, please click on the ‘PrivateRegistry’s self-signed certificate’ link in order to accept the self-signed HTTPS certificate, then log in on the related page with the following credentials: login=’oocaas_read’, pwd=<StackAutoGeneratedPassword>.
=>You have your CaaS infrastructure!\ /!\ Please note that the SSH user for the three CaaS_infra instances is ‘cloud’, with your private key!
==>See the managing-baymodels video.\ Once set up, this CaaS infrastructure allows you to allocate your first ‘BayModel’ to identify your Docker cluster template(s) with default parameters: just give a name and select either ‘Kubernetes’ or ‘Swarm’ as COE.
As a result, one cluster of each COE will be similar to the following screenshot.\ /!\ Please note that the SSH user for both master and node instances is ‘minion’, with your private key!
=>You now have your context in order to deploy your dockerized micro services
The Docker Swarm technology, in this v1.1.3 version, is simple and easy to use (see the following chapter on how to deploy the PetClinic sample with it, or the related video).
This COE accepts a ‘Docker Compose’-compatible YML file to describe the dockerized application. Swarm drives the deployment of the containers: that’s (only) it!
In the CaaS user interface, the Swarm-based bay is displayed with little information:
- You have the API address of the Swarm API service (hosted on the ‘Master’ instance).
- You must dig into the nodes in order to discover which one is running which deployed container
- You must expose your containers to the external world via the OpenStack/Neutron network features (load balancer and/or floating IP)
- /!\ You must adjust the security group on the ‘Swarm Nodes’ in order to open the external flows
- You can allocate an OpenStack/Cinder volume and give it to one container (for example, the ‘mySql’ database container, in order to persist the data)\ => In one word: simple: YOU DO the job!
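As a hedged illustration of such a Compose-style YML (the service names, ports and registry address are assumptions, not the sample’s actual descriptor), the PetClinic + mySql pair could be described as:

```shell
# Write a hypothetical Compose-style descriptor for a Swarm bay.
# <private-registry> is a placeholder for your private registry address.
cat > petclinic-compose.yml <<'EOF'
version: "2"
services:
  petclinic:
    image: <private-registry>/petclinic:latest
    ports:
      - "8080:8080"                   # also open this port in the Swarm Nodes' security group
  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # illustrative only
EOF
```

The external exposure (floating IP or load balancer) still has to be configured by hand in Neutron, as listed above.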
The Google Kubernetes technology, in this v1.2.2 version, is feature-rich and easy to use (see the following chapter on how to deploy the PetClinic sample with it, or the related video). This COE accepts a K8S-specific YML file to describe the dockerized application. Kubernetes drives the deployment of the containers PLUS the configuration of the OpenStack external context: that’s (plenty of) it!
As for Swarm, the Kubernetes panel shows the API URL. But it provides plenty of additional features as soon as you open the flow onto it:
- Kube UI, as a dashboard to manage the deployed container K8S PODs in the cluster
- Kube DNS, as a container service locator to identify the K8S services
- Kube ELK technology, in order to keep track of the logs of the running containers
- Kube collectd & Grafana, for stats on the containers
- Kube configuration inside the Magnum-based CaaS offer: K8S encapsulates the use of the OpenStack APIs for storage (see Cinder) and network (see Neutron/LB and Neutron/FloatingIPs)
- /!\ You must adjust the security group on the ‘Minion Nodes’ in order to open the external flows. In the current OpenStack implementation, there is no means to use SecurityGroups on the LoadBalancer to fine-tune the exposure if your micro-service is exposed via LB.
=> In one word: feature-rich: K8S DOES the job for you!
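To illustrate how the K8S descriptor lets Kubernetes drive the OpenStack context (the names, ports and registry address are assumptions, not the bundle’s actual descriptor), a minimal service + replication controller could look like:

```shell
# Write a hypothetical K8S descriptor; <private-registry> is a placeholder.
cat > petclinic-k8s.yml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: petclinic
spec:
  type: LoadBalancer          # asks Kubernetes to configure a Neutron load balancer
  ports:
    - port: 8080
  selector:
    app: petclinic
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: petclinic
spec:
  replicas: 2                 # Kubernetes keeps 2 PODs running (self-healing)
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          image: <private-registry>/petclinic:latest
          ports:
            - containerPort: 8080
EOF
```

The `type: LoadBalancer` line is the difference with Swarm: the external exposure is requested in the descriptor rather than configured by hand.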
==>See devops-and-caas-integration video.
Every bay provides a ‘simple but magic’ dialog box:
Please click on ‘DevOps integration’: the content of the dialog box corresponds to the parameters for your future Ansible playbook!
/!\ As the creator of the CaaS infrastructure, your KeyPair was used to configure the CaaS_BuildServer machine: access via SSH is OK with the ‘cloud’ user and your private key. \ But if you want to allow your colleagues to use their own KeyPair, you must grant them access by editing the /home/cloud/.ssh/authorized_keys file and adding the relevant ssh-rsa key(s), like\ *ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEA0t°°°UqQ== rsa-key-20160218 *.
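The grant can be sketched as follows (here against a local demo file so the snippet is harmless to run anywhere; on the BuildServer the real file is /home/cloud/.ssh/authorized_keys, and the key below is a placeholder, not a real key):

```shell
KEYS_FILE=authorized_keys.demo        # on the BuildServer: /home/cloud/.ssh/authorized_keys
# Append the colleague's public key (placeholder value) and keep strict permissions
echo 'ssh-rsa AAAA...placeholder...== rsa-key-colleague' >> "$KEYS_FILE"
chmod 600 "$KEYS_FILE"
```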
See a subset of the Jenkins setup in Annex 1 at the end of the document. … Create your new Jenkins job and configure it with a few technical parameters; the developers are used to this!
/!\ Take care with the credentials of the Jenkins job, in link with the authorized_keys file in the CaaS_BuildServer, see above.
Once you have set up the Jenkins job, launch it and follow the logs’ progress.
As an example, CaaS provides a PetClinic ‘AllInOne’ Jenkins job and the related Ansible playbook, which drive:
- The build of the standard Java Spring application, producing a Java WAR file to be onboarded inside a Tomcat web server.
- The build of the ‘PetClinic’ Docker image, which is based on a ‘latest’ Tomcat Docker image from the Docker Hub on the Internet, plus some glue to deploy and configure the PetClinic WAR
- The tag, then push, of this image into the CaaS private registry
- And then the [re]deployment of the application in a Swarm or Kubernetes bay
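The four stages above can be sketched as a dry-run shell sequence (the registry address, image name and descriptor file are placeholders, not the playbook’s actual variables; the commands are printed, not executed):

```shell
run() { echo "+ $*"; }                       # dry-run helper: print each stage instead of executing it
REGISTRY="<private-registry>"                # placeholder: see the stack outputs
IMAGE="$REGISTRY/petclinic:${BUILD_NUMBER:-1}"
run mvn package                              # 1) build the Java Spring WAR
run docker build -t petclinic .              # 2) build the 'PetClinic' Docker image
run docker tag petclinic "$IMAGE"            # 3) tag it with the Jenkins build number...
run docker push "$IMAGE"                     #    ...and push it to the private registry
run kubectl create -f petclinic-k8s.yml     # 4) [re]deploy in the bay (file name is illustrative)
```

In CaaS, the real sequence is driven for you by the Jenkins job and the Ansible playbook.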
In the following screenshot, the Kubernetes COE is used:
- The first line corresponds to the BuildServer’s request to deploy the PetClinic application in the bay: a kubectl command.
- Then, later, when the build is successful, the Kubernetes bay gives back plenty of info on the feature-rich content of the bay (already shown in the CaaS ‘Bay’ panel)
- In blue: as a result, the URL of the PetClinic service is given back (developers know the URI of /petclinic)
In the following screenshot, the Swarm COE is used:
- The *docker_ps.stdout_lines describe, for each of the containers, which ‘node’ is hosting it (cf. *CONTAINER_ID corresponding to the ID of the KVM instance: ‘72cae6f6ffae’ for the unique container using the ‘PetClinic’ image stored on the private registry, exposed on tcp:8080 on the internal network: ‘10.0.9.14’)
- When diving into the OpenStack/Compute/Instances tab, we can find the external IP address of the ‘node’ (cf. ‘10.194.146.94’). For the ‘mySql’ container running on the node with ID=’b93d5a8c8716’, it is accessed by the Tomcat PetClinic container on the internal network on TCP 3306…
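The same inspection can be done by hand against the Swarm API; a dry-run sketch (the API address is a placeholder taken from the bay panel, and the commands are only printed here):

```shell
show() { echo "+ $*"; }                           # dry-run helper: print the command only
SWARM_API="tcp://<swarm-master-api>:2376"         # placeholder: the API address shown in the CaaS 'Bay' panel
show docker -H "$SWARM_API" ps                    # lists every container and the node hosting it
show docker -H "$SWARM_API" logs '<container_id>' # placeholder id, e.g. the PetClinic container
```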
=> Watch the videos: many more details inside them!!!
The CaaS UI provides a ‘Docker dashboard’ integrated as an iFrame. In the Beta Release 1, only read-only features are available. The next release will provide the Docker ‘Portus’ UI with read-write functions.\
Please browse the registry: as an example, see the PetClinic Docker image with multiple versions (because of the many Jenkins jobs, each of which tags the image with the job’s tag). Each version provides some information.
In the DevOps integration dialog box, two attributes configure the use (or not) of the embedded Orange image factory framework.
If the DockerImageFactory parameter is set to ‘0’, it is not used… See a subset of the Docker image factory setup in Annex 2 at the end of the document.
The CaaS innovation contains an auto-monitoring feature, see the ‘service monitoring’ tab.
By design, every KVM instance deployed inside the customer’s tenant is discovered by the CaaS embedded Zabbix system and assigned to a monitoring template according to its role.\ The service monitoring page classifies the instances into two groups: the ‘CaaS infra’ for the PrivateRegistry, BuildServer and Magnum… and the bays, with one entry per bay with their master and nodes…\ When clicking on the related ‘Zabbix link’ on one element, a new tab opens and displays the filtered monitoring element. Please log in with ‘admin’ and pwd=<StackAutoGeneratedPassword>.
The next document will detail some more info on:
- Understanding the CaaS infrastructure: see the understanding-caas-infrastructure video
- How to manage namespaces and repositories inside the Docker PrivateRegistry?
- How to fine-tune the blacklists in the Docker image factory?
- Operating the dockerized application, with a Kubernetes focus:
  - How to use Kube UI?
  - How to use Grafana?
  - How to use Kibana?
- Operating the OOCaaS deliverables:
  - How to troubleshoot the service?
  - How to use the embedded Zabbix to monitor your application?
This article will allow you to dive into the dockerized world on CloudWatt. This Beta service is currently free of charge for the Docker layer. Please do not hesitate to provide us with your feedback on the current services, as well as your ideas for bug fixes, feature enhancements or a ‘managed service’ by Orange Business Services^tm^.
==>See devops-and-caas-integration video
Add the custom tool: Ansible 1.9 (see ‘How to add a custom tool’).\
Create a new Job.
Configure the job to pull the source code of your application from the Git of your source control.\
Check the box ‘Install custom tools’ and select ‘Ansible 1.9’ as the tool selection.\
Point to your Ansible playbook.
Add a post-build step: ‘Invoke Ansible Playbook’ and use the exported parameters from your bay.\
Add the KeyPair of the Build Server instance, so that Ansible can connect to it.\
Two attributes of the Ansible playbook configure the use (or not) of the embedded Orange image factory framework.
On the CaaS_BuildServer KVM instance, two files are provided with default Orange values for the blacklists:
- ‘Insecure level == nonProduction’: the file can be modified in /home/cloud/imagefactory/run/filch-insecure.json
- ‘Production level’: the file can be modified in /home/cloud/imagefactory/run/filch.json
If the validation runs well\
If the validation is KO\