WEBVTT 00:00.000 --> 00:10.000 Okay, hi everyone, thank you so much for being here. 00:10.000 --> 00:18.000 In today's session, we are going to discuss an open source approach to the hybrid cloud. 00:18.000 --> 00:20.000 First, let me introduce myself. 00:20.000 --> 00:24.000 My name is Victor Palma, Cloud Engineer at OpenNebula. 00:24.000 --> 00:28.000 It's a pleasure to be with you here in this session. 00:28.000 --> 00:37.000 First, I would like to do a quick overview of the current state of the hybrid cloud scenario. 00:37.000 --> 00:43.000 Let me start with a question: what is the hybrid cloud? 00:43.000 --> 00:47.000 Surely most of you already know what the hybrid cloud is, 00:47.000 --> 00:53.000 but for those who don't, the hybrid cloud is the union of the public cloud and the private cloud. 00:53.000 --> 00:57.000 So you can have your own on-premises cloud infrastructure. 00:57.000 --> 01:05.000 And in certain circumstances, maybe you need to expand your cloud in order to cover on-demand resources. 01:05.000 --> 01:10.000 So when you do that using public cloud resources, 01:10.000 --> 01:15.000 we can say that you are using a hybrid cloud scenario. 01:15.000 --> 01:23.000 The hybrid cloud has a lot of benefits, like flexibility, disaster recovery, 01:23.000 --> 01:24.000 and cost optimization. 01:24.000 --> 01:31.000 But I think the most relevant ones are the two related to security and edge computing. 01:31.000 --> 01:38.000 Security and compliance: I think it is very, very important to know where your data is. 01:38.000 --> 01:46.000 For example, in Europe we have a lot of regulations related to user privacy. 01:46.000 --> 01:54.000 So we need to find a way to comply with and meet those regulations.
01:54.000 --> 02:00.000 Thanks to the hybrid cloud, we can deploy a cluster with our public cloud provider in Europe, 02:00.000 --> 02:05.000 in order to keep all the user information in Europe and meet those regulations. 02:05.000 --> 02:09.000 And the other relevant point is edge computing. 02:09.000 --> 02:23.000 In order to offer all those applications that work in real time, like AI or 5G connectivity or even video games, 02:23.000 --> 02:29.000 video game streaming, we need to run this kind of service at the edge, 02:29.000 --> 02:33.000 near to the user, in order to reduce the latency. 02:33.000 --> 02:46.000 So we have some challenges here, most of them related to the complexity and the integration of this kind of architecture in our clouds. 02:46.000 --> 02:53.000 It's very difficult to handle all the different providers that we can find in the market, 02:53.000 --> 03:03.000 like, for example, AWS, Equinix, Scaleway, or even other new cloud providers; we have a lot of them. 03:03.000 --> 03:15.000 Each of them has different nomenclatures, different names, different workflows, and you need to adapt your cloud to them. 03:15.000 --> 03:19.000 It's a very big challenge that the hybrid cloud needs to cover. 03:19.000 --> 03:25.000 And the other one is related to the cost and resource management of hybrid clouds. 03:25.000 --> 03:30.000 Of course, we can use the hybrid cloud to optimize the cost of our cloud. 03:30.000 --> 03:39.000 But uncontrolled resources can in certain situations lead to unexpected costs, 03:39.000 --> 03:47.000 and we are going to find, at the end of the month, a very big bill to pay, for example, to the public 03:47.000 --> 03:49.000 cloud provider that we are using.
03:49.000 --> 03:57.000 So in order to face all these challenges, we are going to see a technology stack, 03:57.000 --> 04:05.000 an open source technology stack, of course, to cover all these hybrid cloud scenarios. 04:05.000 --> 04:10.000 First of all, we are going to use KVM as the hypervisor. 04:10.000 --> 04:21.000 So we are going to use the very foundation that we have in any Linux system in order to create and define all the virtual machines. 04:21.000 --> 04:33.000 We can of course use another hypervisor if you need it, but I think that KVM is the most general-purpose hypervisor that we can currently find in the Linux ecosystem. 04:34.000 --> 04:45.000 On top of that, we are going to use OpenNebula as the cloud orchestrator. OpenNebula has a lot of capabilities, like the orchestration of virtual machines 04:45.000 --> 04:50.000 based on KVM and on other hypervisors as well. 04:50.000 --> 04:55.000 You can manage application containers and even Kubernetes clusters. 04:55.000 --> 05:02.000 You can do that in your own on-premises data center, in the public cloud, or even at the edge. 05:02.000 --> 05:11.000 And also, since it's a main part of OpenNebula's philosophy, OpenNebula tries to avoid vendor lock-in. 05:11.000 --> 05:16.000 We want to create a solution that is agnostic to the providers, 05:16.000 --> 05:22.000 as my colleague explained in the previous session. 05:22.000 --> 05:27.000 So we are going to use OpenNebula as a multi-cloud orchestrator. 05:27.000 --> 05:39.000 With OpenNebula we can deploy any application, like VMs, multi-VM applications, Kubernetes clusters, everything in a shared environment. 05:39.000 --> 05:48.000 Then we have the ability to handle all of these resources in a uniform management layer.
05:49.000 --> 05:59.000 So we can use a homogeneous layer to handle all the resources that we have in AWS, in Equinix, or even in our own clouds, of course. 05:59.000 --> 06:09.000 That gives us the opportunity to use any infrastructure, not only our own on-premises infrastructure. 06:09.000 --> 06:15.000 We can migrate, move, or even live-migrate applications, 06:15.000 --> 06:23.000 all with no downtime, as we are going to see later. 06:23.000 --> 06:37.000 If you want to get more information about everything I'm talking about right now, you can go to opennebula.io/multi-cloud, where we have more information and some use cases related to these features. 06:37.000 --> 06:53.000 So now I'm going to introduce the technology that we are going to use to face all the challenges that we meet during the deployment of our hybrid cloud: that is OneForm. 06:53.000 --> 06:59.000 OneForm is a tool that we are currently developing here at OpenNebula, an open source tool as well. 07:00.000 --> 07:08.000 It's the foundation of automatic provisioning in the public cloud and even in the edge cloud. 07:08.000 --> 07:21.000 OneForm is an extension of OpenNebula that uses Terraform for provisioning resources and Ansible for configuring all the software on the nodes. 07:21.000 --> 07:39.000 Thanks to OneForm, we are able to deploy a cluster, sorry, a multi-cloud infrastructure in just 15 minutes, and in some cases even less. 07:39.000 --> 07:46.000 It's a very quick deployment, a very quick provisioning that we can do thanks to this platform. 07:47.000 --> 07:54.000 So we are going to take an in-depth look at OneForm as a key tool for hybrid cloud management.
07:54.000 --> 08:08.000 So first of all, you can see here a dictionary definition of what OneForm is. As I already said, OneForm is a new tool that allows you to automatically deploy and configure new clusters in the public cloud into your OpenNebula cloud. 08:08.000 --> 08:28.000 That is just a formal definition, but thanks to OneForm you can deploy and provision not only clusters, but hosts, of course, and networks, datastores, virtual routers, VMs, and multi-VM applications. 08:28.000 --> 08:36.000 And all of these resources are supported by the OpenNebula multi-tenancy capabilities from a single portal. 08:36.000 --> 08:46.000 So you can handle them with the same management layer, using the same workflow for the different kinds of resources. 08:46.000 --> 08:55.000 Here are two concepts I want to clarify first. First, we have the provider. 08:55.000 --> 09:15.000 The provider is where we are going to deploy, where we are going to create the resources, like the public cloud provider that we are going to use. And then we have the provision, which is what we want to deploy, what we want to provision in the public cloud. 09:15.000 --> 09:28.000 So for OneForm, the provider is just a bunch of credentials used to connect to the public cloud provider and request all the resources that we want to create later. 09:28.000 --> 09:40.000 And a provision is the template, the document, that defines all the resources that we want to create in our public cloud provider. 09:40.000 --> 09:52.000 I would like to say that we can use the same provision with different providers. So the provision, the document that defines a provision, is agnostic to the provider. 09:52.000 --> 10:00.000 So we can reuse the same infrastructure that we define in a provision file with different providers.
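The provider/provision split described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not OneForm's actual API: a provider is essentially a named bundle of credentials for one cloud, while a provision is a provider-agnostic description that can be bound to any of them.

```python
# Hypothetical sketch of OneForm's provider/provision split (not the real API).
# A provider bundles credentials for one public cloud; a provision describes
# the desired infrastructure and stays agnostic to any particular provider.

def make_provider(name, credentials):
    """A provider is essentially a set of credentials plus a target cloud."""
    return {"name": name, "credentials": credentials}

def render_provision(provision, provider):
    """Bind a provider-agnostic provision document to a concrete provider."""
    return {
        "provider": provider["name"],
        "hosts": provision["hosts"],
        "networks": provision["networks"],
        "datastores": provision["datastores"],
    }

# Illustrative providers; names and credential fields are made up.
aws = make_provider("aws-frankfurt", {"access_key": "...", "secret_key": "..."})
scaleway = make_provider("scaleway-paris", {"token": "..."})

# The same provision document can be reused with different providers.
edge_cluster = {"hosts": 2, "networks": ["private"], "datastores": ["image", "system"]}

for p in (aws, scaleway):
    plan = render_provision(edge_cluster, p)
    print(plan["provider"], "->", plan["hosts"], "hosts")
```

The point of the split is exactly what the talk states: the provision file never names a provider, so the same infrastructure definition can be deployed to AWS today and to another cloud tomorrow.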
10:00.000 --> 10:14.000 Some of the features are the template system that I already mentioned and the multi-provider support; of course you can create different providers in the same platform. 10:14.000 --> 10:22.000 And of course the automatic cluster deployment is the core of all of these features. 10:22.000 --> 10:34.000 But we also have some interesting features like lifecycle management. So you can deploy the provision, for example, in AWS, 10:34.000 --> 10:43.000 and you can of course stop and delete all these resources, but you can also update this provision in real time, 10:43.000 --> 10:58.000 scale this provision based on your needs, on rules predefined by the cloud administrator, and you can even halt the provision and power off the provision in order to save resources. 10:59.000 --> 11:09.000 OneForm is also developed with extensibility and customizability in mind; it is an open source tool. 11:09.000 --> 11:15.000 We want anyone to be able to add their own cloud provider to the platform. 11:15.000 --> 11:20.000 So it's very easy to extend it with more configuration files. 11:20.000 --> 11:30.000 We are adding a lot of guides to our documentation in order to make the integration of new cloud providers into OneForm user-friendly. 11:30.000 --> 11:44.000 Of course it also provides an API, so you can use it to create your own tools or to integrate this tool with your third-party tools in order to create automations and other workflows. 11:44.000 --> 11:53.000 So as I already said, OneForm is based on the integration of OpenNebula with Terraform and Ansible. 11:53.000 --> 12:09.000 Starting from the Terraform side, here you can see a comparison between the infrastructure that OneForm provisions using Terraform in AWS, this is an example for AWS, 12:09.000 --> 12:15.000 and the configuration that OneForm creates in OpenNebula.
12:15.000 --> 12:34.000 From the provider point of view, Terraform creates in this case a VPC, a subnet to connect the instances, a public gateway, and all the IP routes and configuration needed in order to enable the connection to the internet. 12:34.000 --> 12:42.000 From the OpenNebula point of view, OneForm creates, of course, the same number of hosts, 12:42.000 --> 12:59.000 the VXLAN in order to provide communication between the virtual machines, and all the configuration related to the datastores where we are going to store all the images of our virtual machines. 12:59.000 --> 13:23.000 So this is completely up to the provider. It's OneForm that decides what resources to create depending on the provider, and it's going to automatically apply that configuration in OpenNebula in order to handle all the resources seamlessly. 13:23.000 --> 13:35.000 And for the Ansible part, OneForm relies on one-deploy. One-deploy is another OpenNebula tool, of course open source as well, 13:35.000 --> 13:57.000 that uses a lot of Ansible playbooks in order to configure OpenNebula nodes and frontends: the OpenNebula frontend, the machine, the server where you install the OpenNebula daemon that is in charge of handling all the OpenNebula services, and the nodes, the virtualization nodes. 13:57.000 --> 14:26.000 So in this case, OneForm uses one-deploy in order to configure these nodes. This tool is very powerful, so in case you are thinking about trying OpenNebula, I recommend it, because if you know how to launch an Ansible playbook, you are going to know how to use the one-deploy tool, since it's based on Ansible as well. 14:26.000 --> 14:55.000 So we have some use cases for OneForm. Of course, the first one is automatic edge cluster provisioning. So you can deploy a cluster in less than 15 minutes. It's very awesome that you can do that fully automated. Then we have the option,
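The two views compared in the talk (what Terraform creates on the provider side versus what gets registered in OpenNebula) can be sketched like this. The function names and resource counts are illustrative assumptions, not OneForm internals:

```python
# Illustrative sketch (assumed names, not OneForm internals) of the mapping
# the talk describes: provider-side resources created by Terraform on AWS
# versus the matching configuration registered in OpenNebula.

def provider_plan(provider, host_count):
    """Provider-side resources, e.g. what Terraform would create on AWS."""
    if provider == "aws":
        return {
            "instances": host_count,
            "vpc": 1,                  # VPC + subnet to connect the instances
            "subnet": 1,
            "internet_gateway": 1,     # gateway + IP routes for internet access
        }
    # Other providers would need their own resource mapping.
    raise NotImplementedError(provider)

def opennebula_config(host_count):
    """OpenNebula-side view of the same cluster."""
    return {
        "hosts": host_count,               # one OpenNebula host per instance
        "vxlan_networks": 1,               # VXLAN for VM-to-VM communication
        "datastores": ["image", "system"], # where the VM images are stored
    }

aws_side = provider_plan("aws", 2)
one_side = opennebula_config(2)

# The two views must agree on cluster size, even though the resource
# vocabulary differs between the provider and OpenNebula.
assert aws_side["instances"] == one_side["hosts"]
```

This is the "seamless" part the talk emphasizes: the per-provider vocabulary (VPC, subnet, gateway) is decided by the tool, while the OpenNebula side always looks the same to the administrator.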
the use case of multi-cloud distributed applications in our OpenNebula cloud. So you can deploy multi-VM 14:55.000 --> 15:01.000 applications, sorry, a multi-cloud application, for example, have a cluster distributed in different locations, 15:01.000 --> 15:11.000 an application deployed at the same time and connected across the different clusters of different providers and different locations. 15:11.000 --> 15:31.000 And finally, of course, it provides hybrid cloud expansion for your cloud. You can expand your cloud to the public cloud any time you need, automatically, based on the demand for the resources of your current cloud. 15:32.000 --> 15:45.000 So now it's showtime. I don't know, today the network connection is a mess, it's pretty unstable, so I'm going to share some screenshots of the process. 15:45.000 --> 15:59.000 I'm going to show you how the process goes, but in case you want a live demo of all this process, we have a screencast that we recently uploaded to our YouTube channel. 15:59.000 --> 16:06.000 So I really recommend taking a look at that screencast, but I'm just going to explain how this works. 16:06.000 --> 16:23.000 Here in this slide, well, I don't know, the Wi-Fi today is terrible. So here you can see a definition, from the infrastructure point of view, of what we can deploy using OneForm. 16:23.000 --> 16:42.000 We can deploy from the OpenNebula Sunstone portal using the OneForm web UI, which is a separate interface from OpenNebula. You can define the providers, in this example three different providers, 16:42.000 --> 17:01.000 so you can create different clusters in the different providers. In this case, we have one cluster in Germany, another one in Amsterdam, and the last two in Paris and Poland.
17:01.000 --> 17:15.000 So you can deploy that from the infrastructure point of view, and from the application point of view, you can create a multi-VM application that is going to be automatically deployed at the same time in these four clusters. 17:15.000 --> 17:40.000 In this case we have an example with MinIO, which is a very, very powerful tool for object storage and all those things. So from here you can, for example, create a cluster with three VMs, with HA configuration, automatically and distributed in the cloud. 17:40.000 --> 17:50.000 So you are going to use this to store objects and get this distributed ecosystem. 17:50.000 --> 18:04.000 Here you can see a screenshot of the OneForm providers table. From here we can add any provider that we want and that is supported in the platform. 18:04.000 --> 18:19.000 For example, we can use Scaleway, AWS, or Equinix, but we can add the credentials of any cloud provider that we want in order to use it, as I explained before. 18:19.000 --> 18:34.000 And if we deploy a provision using the provider, we are going to get this. For example, this is the Frankfurt cluster that is located in AWS. 18:34.000 --> 18:51.000 AWS is the provider for this cluster, and we define a couple of hosts, in this case with their own IPs and their own resources, all of them provisioned automatically and integrated automatically in OpenNebula. 18:51.000 --> 19:08.000 For an OpenNebula cloud or cluster administrator, the workflow with these hosts is going to be the same as with the hosts that you can have in your on-premises infrastructure. 19:08.000 --> 19:19.000 So then, once we have the infrastructure deployed, we can go to the OpenNebula public marketplace in order to download the MinIO multi-node service appliance, 19:19.000 --> 19:29.000 which is automatically configured in order to create the MinIO service, selecting the nodes where we want to deploy it.
19:29.000 --> 19:39.000 If we go, well, to the service template, and if you want to learn more about how OpenNebula works, we have a lot of tutorials on our YouTube channel. 19:39.000 --> 19:55.000 But if we go to the template section and instantiate this service, we are going to get, in a few minutes, this dashboard, the MinIO dashboard, deployed and distributed along a multi-cloud setup. 19:55.000 --> 20:04.000 So as closing thoughts, and next steps that we are working on for this tool, I would like to say that OneForm is currently in development. 20:04.000 --> 20:18.000 We will launch this tool along with OpenNebula 7.0. OpenNebula 7.0 is the start of what we call the next cloud generation. 20:18.000 --> 20:29.000 It's going to have a lot of new features related to provisioning, hybrid cloud, and AI that we are working on as part of the IPCEI project. 20:29.000 --> 20:47.000 This tool is going to replace OneProvision. If some of you know OpenNebula, maybe you heard this name in the past; it is the tech preview that we had before, but we are redefining OneForm based on what we learned from OneProvision. 20:47.000 --> 20:57.000 So the focus that we have with this tool is mainly twofold. First, we want to integrate as many providers as we can. 20:57.000 --> 21:09.000 We are listening to the community in order to see what providers they are using and what providers are the game changers of this hybrid cloud scenario. 21:09.000 --> 21:28.000 And then we are also trying to optimize the provisioning time even more, so that we can improve the time that we dedicate to the provisioning, making more optimizations in the Ansible part, and maybe in the Terraform part as well. 21:28.000 --> 21:40.000 I would also like to say, as I said before, that this project is funded by the IPCEI-CIS project, the Next Generation European platform for the Data Center-Cloud-Edge continuum.
21:40.000 --> 21:47.000 So if you want to learn more about this project, you can go to the URL that you can see there at the bottom. 21:47.000 --> 21:59.000 And I also encourage you to participate in our forum. As I said before, we are developing the next-generation open source cloud platform. 21:59.000 --> 22:14.000 We are listening to the community. We are seeing what the community needs and what the users demand. So please participate in the forum and let us know what you expect from the next OpenNebula cloud generation. 22:14.000 --> 22:24.000 So thank you so much. If you have any questions, it's now the moment. 22:24.000 --> 22:42.000 Thank you. Questions? Can you speak louder, please? 22:42.000 --> 22:55.000 Can you repeat the question? Can you repeat the question? 22:55.000 --> 23:07.000 Sorry. He asked about whether it's expensive to deploy, of course, a bare-metal instance in AWS, 23:07.000 --> 23:28.000 and for what reason we would want to do that. Well, it depends on the use case, as always, but it's more oriented to the scenario where you have your own on-premises infrastructure and you want to be able to attend to peaks of demand for resources. 23:28.000 --> 23:55.000 In that case, you don't have the time to go to your data center and put in more servers. For that kind of scenario, or even for the cloud edge, it will be interesting. But of course, as I said during the first slides, one of the challenges that we face with the hybrid cloud is precisely the cost that the bare-metal instance has. 23:58.000 --> 24:08.000 So that's right. Any other questions? 24:08.000 --> 24:19.000 Thank you so much.