So, hi everyone, can you hear me at the back? Oh, nice. It is my first time in Brussels, and today we will present how you can accelerate your CI pipelines with the help of a tool we have built and open sourced called vCluster. A bit about me: I am Hrittik Roy, I work at a company called Loft Labs as a platform advocate, and I am a CNCF ambassador as well.

Hi everyone, my name is Bartek, I work as a principal developer advocate at Loft Labs, I am the founder of Kubesimplify, I have written a couple of books on certifications, and I am happy to be here.

Whenever we talk about environments in an organization, there are, if we simplify it, three types. One is the developer environment, where your developers go and build things; it can be remote, with a lot of CDEs these days, or it can be their local machine. The next is the testing environment, where all the testing happens and where you provision new machines for a specific test case or a specific PR. The last is the production environment, where you push the binaries and artifacts so that your consumers can consume them.

What we have seen is that Kubernetes is now kind of the de facto standard. As a large-scale distributed system, we see the buzz about Kubernetes wherever we go, and the reason behind its widespread adoption is the community behind it as well as the number of use cases it supports. You get an abstraction over nodes, and it is also very easy to spin up environments inside it: creating environments inside Kubernetes is as simple as that. But when we talk about testing environments, the traditional setup for Kubernetes-native testing is different namespaces for different testing environments. Most of the time that is fine while your developers are building something, but the problem arises when they are building a new CRD or anything else that is cluster-scoped. And that is not the only type of problem that arises. Sometimes there is an inconsistent API server: when your test is creating thousands of secrets, there is a heavy load on the API server. Sometimes there is resource contention as well, with pod evictions and noisy-neighbour problems. Then there are problems like delayed provisioning and slow test cycles. And the last type of problem is networking differences between your different types of environments. Those are a couple of the traditional challenges we face.
To get away from these challenges, we have seen a lot of organizations create clusters: a cluster per PR, a cluster for each of your developers, a cluster for each of your environments. You keep creating and deleting them, and you end up in a state where there are a lot of unnecessary clusters. With the CI bottleneck, the problem is that CI is time-consuming: whenever you create a full cluster in your CI pipeline, it takes time to be created end to end, and creating it takes resources and CI minutes as well. It is also resource intensive: understanding how many resources your cluster needs, depending on the test cases, is complicated. You can define a couple of sizes of common clusters, but suppose your test needs a specific GPU to exercise a new PR: how do you do that smoothly? And then there is action time: you do not want your CI jobs sitting idle while the cluster is provisioned. You can use a namespace instead, but again there are problems associated with that.

Now to the solution. When GitHub Actions and similar systems became popular, the main insight was that containers are fast, and that helps you create quick pipelines; it is very fast compared to spinning up EKS or creating a whole new cluster. So we learned from those concepts. Then there are problems like syncing in CI use cases, which is also important: for example, you have a specific CRD for a specific namespace or a specific PR, so how do you make that simple? Those were the kinds of problems that needed a solution, and we learned from them.

Focusing on the types of environments you can have: there is the strongly isolated but slow and expensive option, separate clusters, but the problem there is that engineers need to wait and you duplicate tooling everywhere. And if you go back to the simple approach, namespace isolation, the problem is that you lack permissions for cluster-scoped things in your testing environments, resources and CRDs, as well as the weak isolation between tenants, which is sometimes problematic. With all of that, there was a requirement for a middle ground, and that is where we come in with vCluster. We launched a couple of years back, and right now we are used by a lot of companies, with growing open-source adoption. What we do, basically, is this: you bring your host cluster, a specific host cluster, and we create virtual clusters on top of it. Creating a virtual cluster is fast because we create it as containers, with your API server running in a specific dedicated namespace, and it is fully Kubernetes compatible. To make the CI angle concrete, a minimal sketch of an ephemeral virtual cluster in a pipeline follows below.
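This is only a sketch, not something from the talk: it assumes the runner already has a kubeconfig for the shared host cluster and the vcluster CLI available, and the exact flag names can differ between CLI versions, so treat them as assumptions.

```yaml
# Sketch: one throwaway virtual cluster per CI run on a shared host cluster.
# Assumes the runner's kubeconfig points at the host cluster and the vcluster
# CLI is installed; flag names may vary between vcluster CLI versions.
name: ephemeral-vcluster-sketch
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Create a virtual cluster for this PR
        run: vcluster create "pr-${{ github.event.number }}" --connect=false
      - name: Run tests against the virtual cluster
        run: vcluster connect "pr-${{ github.event.number }}" -- kubectl get namespaces
      - name: Clean up
        if: always()
        run: vcluster delete "pr-${{ github.event.number }}"
```

Because the control plane is just a pod in a namespace on the host cluster, creating and deleting it costs seconds and adds no extra managed control-plane fee.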
It is a certified Kubernetes distro, so anything you run on a normal Kubernetes distro you can run there. The other part is that you can have specific CRDs from the host cluster available in there. So when you run a simple CI pipeline, it can take a few seconds to create the virtual cluster, and with virtual clusters you can sync things from your host cluster, so your CI pipeline has everything it needs to run, and you can still isolate whatever you want to isolate. And in a way it is super cheap and super green: if you create a new EKS cluster or something like that, you pay the control-plane cost for each of those clusters, but if you have one dedicated host cluster that you use for CI only when required, with the cluster autoscaler and so on enabled, it is much simpler.

So, that is the mic switching. All right, you know about vCluster now: vCluster lets you create virtual Kubernetes clusters on your host cluster. You bring a base cluster, as mentioned, and then you are able to virtualize it. Each vCluster comes with its own isolated control plane, so when you do kubectl get nodes, the request no longer goes to the API server of the host cluster; it goes to the API server of the virtual cluster that was created. It sits somewhere between namespaces, which are on the left side of the spectrum and are obviously quite restricted but cheap, and the cluster-per-team side, while being much lighter than a full cluster.

This is the architecture and how it works. You have the host Kubernetes cluster: a physical cluster, any managed Kubernetes cluster like EKS or AKS, or even your local kind cluster. Inside the virtual cluster there is a component called the syncer. Whenever you create a pod, let us say with kubectl run nginx --image=nginx, the request goes to the virtual cluster and the pod is stored in its embedded etcd. Then the syncer copies it onto the host cluster, and as soon as that happens, regular Kubernetes behavior kicks in: the scheduler has a pod to run, so it places the pod on the best-fit node and runs it, and the syncer then reflects back into the virtual cluster that the pod is in a running state. The syncer is the component that talks to the host API server; a small illustration of what it does is shown below.

So, let us try to understand that with the demo. Hopefully it works; I mean, it was working. Let us say there is a developer. It is a pretty standard scenario: you write some code and your application is deployed.
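For illustration only, not from the demo: roughly how the same pod looks from inside the virtual cluster and on the host after the syncer has copied it. The translated name and the namespace are assumptions about the naming scheme, which depends on the vCluster version.

```yaml
# Inside the virtual cluster (talking to the vCluster API server):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
---
# On the host cluster, the syncer creates a copy inside the namespace the
# vCluster runs in, with a rewritten name so different virtual clusters and
# virtual namespaces cannot collide (assumed naming scheme):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-x-default-x-my-vcluster
  namespace: vcluster-my-vcluster
spec:
  containers:
    - name: nginx
      image: nginx
```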
What you want, ideally, as a developer is: I have implemented some feature, there is a PR for it, and for that particular PR I want an environment. There should be a way that, as soon as I add a particular label or whatever, I get a whole Kubernetes cluster, my application is rebuilt and packaged, my services, deployments and ingresses are created and deployed onto that particular cluster, and I get an ingress I can hit to test whether my new feature is working or not. That is what this demo is about, and how vCluster helps to do that.

This is the host cluster I have: kubectl get nodes shows the host cluster is there, and right now there is no virtual cluster in place. This is the repository. Let us say a developer comes in and creates a new branch from main. They go to the app, edit the hello message, commit the change to their branch, and open a pull request from their feature branch. Now what they can do is add a label, "test". As soon as the "test" label is added to the pull request in this GitHub repository, an action gets triggered. What that action is and what it does: it runs on pull requests that are labeled, with the label name "test", so as soon as that label is added to the pull request, this particular workflow gets triggered. It checks out the code, sets up Go, and sets up ko; ko is a tool to build Go applications into container images, so you do not need to do that yourself. Then it logs in to Docker; I have already put my secrets into Actions, so the Docker username and Docker password are already there. Then it builds and pushes the image: it builds the new image and adds a tag to it. I hope it is readable; if you want me to zoom in, I can. It adds the tag and does the ko build for that particular image.

It also generates the deployment manifest. How does it do that? I have a Jinja template; let me show you. Here I have a deployment, a simple deployment, and the image is a variable: that is the Jinja template. In the ingress as well I have a variable, the ingress host, because for every developer and every PR I want a separate ingress address to be used. It should be dynamic; everything should be automated. The next step replaces the variable values: it fills in the image tag for the deployment, and for the ingress host it generates the current pull request number followed by .vcluster.tech, and vcluster.tech points at the ingress controller that is running on the host cluster. A sketch of such templates is shown below.
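These are not the demo's actual files, just a sketch of what such Jinja templates could look like; the variable names image_tag and ingress_host, the app name, the port, and the ingress class are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: "{{ image_tag }}"   # filled in by the workflow with the freshly built tag
          ports:
            - containerPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx            # assumes the host cluster's nginx ingress class is reused
  rules:
    - host: "{{ ingress_host }}"     # e.g. rendered to <PR number>.vcluster.tech
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 8080
```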
So what it will do internally is this: when it creates the ingress for the PR, let us say this is PR 12 or 13, then PR13.vcluster.tech is the address, and that particular ingress serves the deployment of the latest code committed by this particular developer. Then it pushes the deployment manifest back to the feature-test repository, installs the vCluster CLI, and logs in to the platform; I have already provided the platform URL and the access key, and I will quickly show you where. Then it creates a vCluster: the command is vcluster platform create with the name, the project, and a template; I will show you the template as well. We also add a preview link so that a person can immediately go and click on it. Then we deploy the application: the virtual cluster is created, the deployment manifests are updated with the latest image tag, that deployment is applied to the virtual cluster together with the ingress, and then there is a simple test that tries to call the latest PR number at vcluster.tech, basically calling the ingress to check that it works properly.

It has already completed, so if we look at the Actions, you can see all the steps have run, including the creation of the vCluster. This is the vCluster, pr-13, that was created. You can also check with vcluster list using the CLI, and you can see the virtual cluster there. With kubectl get pods -n for that namespace, this is how it looks on the host cluster: whenever a virtual cluster is created, as an admin of the host cluster I can see that there is a CoreDNS pod, there is the hello-world application pod, and there is a StatefulSet for the virtual cluster itself. But when I give the kubeconfig file for this particular virtual cluster to you and you do kubectl get pods, you only see two pods, the CoreDNS one and the hello-world one. You are isolated from the host cluster, you cannot see anything else, and yet with that kubeconfig it feels like a plain Kubernetes cluster that you own. It is just like ordering a virtual machine from EC2 or wherever: you feel like the owner, but in the end it is virtualized and running on bare metal that you never get access to; you get access to the virtualized machine.

So now we can actually check whether the link is working: "Hello, fourth time". The new application is built and the manifests are deployed, and as a developer I do not have to care about Kubernetes manifests. A hedged end-to-end sketch of this kind of workflow follows below.
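This is a sketch of the label-triggered preview workflow just described, not the exact demo file. The secret names, template file names, the vcluster.tech domain, and the rendering tool are assumptions, the step that commits manifests back to the repository is omitted, and the open-source vcluster create command is shown in place of the platform variant; it also assumes the runner already has a kubeconfig for the host cluster.

```yaml
name: pr-preview
on:
  pull_request:
    types: [labeled]
jobs:
  preview:
    if: github.event.label.name == 'test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Build and push the image with ko
        run: |
          go install github.com/google/ko@latest
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          KO_DOCKER_REPO="docker.io/${{ secrets.DOCKER_USERNAME }}" \
            ko build ./ --bare --tags "pr-${{ github.event.number }}" > image.txt
      - name: Render the Jinja templates
        run: |
          pip install jinja2-cli
          jinja2 manifests/deployment.yaml.j2 -D image_tag="$(cat image.txt)" > deployment.yaml
          jinja2 manifests/ingress.yaml.j2 -D ingress_host="${{ github.event.number }}.vcluster.tech" > ingress.yaml
      - name: Create the virtual cluster and deploy
        run: |
          # assumes a kubeconfig for the host cluster was placed on the runner earlier
          curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
          chmod +x vcluster && sudo mv vcluster /usr/local/bin/
          vcluster create "pr-${{ github.event.number }}" --connect=false
          vcluster connect "pr-${{ github.event.number }}" -- kubectl apply -f deployment.yaml -f ingress.yaml
      - name: Smoke test the preview URL
        run: curl --retry 10 --retry-delay 5 --fail "http://${{ github.event.number }}.vcluster.tech/"
```

A second workflow triggered when the label is removed can simply run vcluster delete for the same name, which is the cleanup path described next.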
I also do not have to care about creating Kubernetes clusters. I just push my code, and I am testing whether the code I have written works fine or not. Now the app is tested, so either you can merge it or you can simply remove the label, and that is it. Once you remove the label, it again runs a GitHub Action which just does vcluster delete, so it automatically deletes this particular virtual cluster.

In that flow there was one thing left, which was the template. In vCluster you can define templates for how you want your virtual clusters to be, as sketched below. In this particular one we have said we want to sync the ingress classes, because again vCluster is about reusing the host cluster's resources: I do not want to install cert-manager and ingress controllers into every virtual cluster when they are already there on my host cluster, so I reuse them, and here that means reusing the ingress classes. There is also one thing that is external, the platform putting the vCluster to auto-sleep after one hour; even without it, everything works as-is. So hopefully the cluster has gone away by now. The preview URL no longer answers because the cluster is gone; yes, the cluster is already gone, you cannot see it here any more. That is pretty much how it would look in a workflow.

Just to recap: a developer raised a pull request and added a label. A GitHub Action was triggered which had the configuration for the virtual cluster and Docker Hub as part of its secrets. It did the build, updated a deployment file with the latest image it had built, pushed that back to the repository, created a virtual cluster and used the manifests to deploy to it, and created the ingress with the latest pull request number. Then the developer tested it; they can run manual or automated tests, as many as they like. Then the user removes the label and everything gets deleted.

Some of the other use cases this can serve: this was one. You can have everything done in a GitOps way, with Argo CD plugged in and watching a particular repository or folder, so that whenever there is a change, for example a merge into the main branch, Argo CD sees that the change has been merged and the application is automatically deployed to production as well, giving you the whole CI/CD flow. You can also do CI/CD where you use Flux, connecting the virtual clusters back to Flux so that they have their own independent CI/CD cycles.
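A minimal sketch of a vcluster.yaml along the lines of the template described above; the keys follow my reading of the v0.20-style configuration schema and should be verified against the documentation for your version, and the auto-sleep setting is a platform feature left out here.

```yaml
sync:
  toHost:
    ingresses:
      enabled: true        # push ingresses created inside the vCluster to the host,
                           # so the host's ingress controller serves them
  fromHost:
    ingressClasses:
      enabled: true        # reuse the ingress classes already installed on the host
```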
And then there is specifically the one that Hrittik was mentioning, the CRDs. If you are developing CRDs, you cannot deploy different versions of the same controller in one Kubernetes cluster, because a CRD is a cluster-wide resource. But you can do that with virtual clusters: let us say I have a host cluster and one developer wants Argo CD version X while a second developer wants Argo CD version Y; they can each have their version inside their own virtual cluster. That is possible, and that is something that really helps. Then there is resource sharing: you can have Prometheus, Argo, cert-manager, KubeVirt and so on on your host cluster and reuse all of those custom resources inside the virtual clusters, so that you do not have to reinstall them.

One last thing to add is that policies are respected cluster-wide. You can have Falco, the open-source runtime threat detection project and a CNCF graduated project, running as DaemonSets, and everything will be respected, so you get runtime detection for workloads coming from the virtual cluster as well. If you have Kyverno, another CNCF project, and you apply cluster-wide policies, those will also be respected by the virtual clusters that get created. So, that is pretty much it; I do not know if you have anything to add. If you want to conclude: improved developer experience, reduced build times, accelerated test cycles, all the stuff he mentioned. Cluster creation is fast because, as I showed you, in the end it is a StatefulSet that gets created for every virtual cluster, so it is just a container spinning up, which obviously takes much less time. And you save money on control-plane costs: with multiple Kubernetes clusters you have multiple control-plane costs, and you end up with multiple ingress controllers, multiple cert-managers, multiple Argo CDs, which add to resource consumption as well. That is pretty much what we had to show; I hope it was helpful. This is on GitHub, vcluster, the project you can try out for creating virtual clusters. Any questions? I do not know how much time we have. Four minutes, okay. Yes, please.

(Audience question, partly inaudible.) Yeah, good question. The performance bottleneck will not be there. So, the question is whether there is a performance bottleneck when you are using virtual clusters versus when you have multiple Kubernetes clusters, and the answer is no, there is no performance bottleneck.
The reason for that is that every virtual cluster comes with its own API server. So again, when this demo ran, the kubectl apply, or whenever I connect to the virtual cluster and do kubectl operations, the request goes to the API server of the virtual cluster and not to the API server of the host cluster. The host cluster's API server is not loaded; the load goes to the virtual cluster's API server, which is just a regular pod you are interacting with. So there is no performance bottleneck in that way.

(Audience: can you sync between the virtual clusters?) There are some mechanisms where you can sync, but ideally we want these clusters to be separate, because we want separate teams to have separate virtual clusters, so everything stays separate. There are some mechanisms, and maybe some use cases, where you would want that communication, and then you have to enable certain flags and things like that. Yes.

(Audience: how do different teams get different behavior?) Every virtual cluster has a single spec: you write a vcluster.yaml file. Let us say there is a team A and a team B. Team A wants their ingress resources and cert-manager resources to be synced, so they turn on those integrations; team B does not want those integrations and wants to deploy their own, so they turn them off, or simply never add them to their vcluster.yaml file. So one team will be reusing the resources from the host cluster and the other will not be reusing any of them. And the regular security practices, from a general Kubernetes perspective, still have to be implemented for everything. I do not know who raised their hand first, but anyone can go.

(Audience: can you configure the syncer to apply a node selector based on the instance type?) Yes, you can do that in the vcluster.yaml file itself: you can specify node selectors, for example whether this particular virtual cluster has to be spun up on a GPU node or a high-CPU node, and things like that. Yeah, please go ahead.

(Audience: what about the control planes?) All the control planes are different; each has its embedded etcd, and if you go for the pro features there is also an external etcd and so on. I do not want to talk about that here, since it is part of the pro offering. Yes.

(Audience: would it be possible to limit the resources in general that a single virtual cluster can use?) Yes, you can; a sketch of such settings follows below.
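As a sketch only: what those per-team settings might look like in a vcluster.yaml. All of the keys below are assumptions based on my reading of recent vCluster configuration schemas, so verify them against the documentation for the version you run.

```yaml
controlPlane:
  distro:
    k8s:
      version: v1.31.0            # assumed key: pin this team's Kubernetes version
  statefulSet:
    scheduling:
      nodeSelector:
        node-pool: gpu            # assumed key: run this vCluster's control plane on a GPU node pool
    resources:
      limits:
        cpu: "2"                  # assumed key: cap the virtual control plane itself
        memory: 4Gi
policies:
  resourceQuota:
    enabled: true                 # assumed key: cap everything the vCluster schedules on the host
    quota:
      requests.cpu: "8"
      requests.memory: 16Gi
sync:
  fromHost:
    ingressClasses:
      enabled: true               # team A reuses host ingress classes; team B would leave this off
```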
You specify that as well in the vcluster.yaml file; it is the same single spec, and you specify everything over there. Yes.

(Audience: can different teams run different Kubernetes versions?) Yes. If a particular team wants 1.30 and the next team, team B, wants 1.31, they can have that. If one team wants to upgrade and another refuses to upgrade, they can do that too. And the host can be on 1.29 as well. I think one last question? Yeah, okay, last one. Yes.

(Audience: you showed a comparison with namespaces; can you expand a bit on the overhead of a virtual cluster compared to a namespace?) For a namespace the overhead is very low, because it just gives you a namespace and nothing else has to be provisioned. The overhead for the creation of a virtual cluster is the time it takes for the StatefulSet and the persistent volume to be created and attached to the pod; that is the kind of overhead it adds in terms of time, since it is spinning up a pod, whereas the creation of a namespace is instant. So it is low compared to provisioning a full cluster, but higher than a plain namespace; that is how it is. Yes.

So, we have some stickers over here as well, if you want to take some; the vCluster stickers are here. Thank you so much, and thank you for attending the talk. Thank you.