WEBVTT 00:00.000 --> 00:09.520 So, let us get started with this discussion. I will be discussing package 00:09.520 --> 00:14.820 testing and how Molecule can be used to streamline your process along with Jenkins. So, 00:14.820 --> 00:21.380 let me introduce myself. My name is Yosh. I work at Percona in the QA team. So, we 00:21.380 --> 00:28.980 basically test database packages and distributions before we, you know, release them. So, 00:29.020 --> 00:33.020 the agenda for this talk will be: I will be introducing what packages are, what we 00:33.020 --> 00:40.220 mean by package testing, and the release process, where exactly we fit; then problems encountered 00:40.220 --> 00:45.720 by QA engineers, like which are the major problems and issues that they encounter, and what 00:45.720 --> 00:51.420 Molecule is and how it can solve your issues. Then, we will go into the basics of Molecule, 00:51.420 --> 00:56.260 what it is and how to install it, and some key concepts regarding it. Then, we will give you some 00:56.340 --> 01:03.340 examples, and also in the end there will be a demo regarding a certain scenario that 01:03.340 --> 01:08.620 might be helpful. So, let us get started with packages: what do we mean by a package? 01:08.620 --> 01:14.740 So, a package is a collection of software that can be distributed. So, it can be an 01:14.740 --> 01:20.740 RPM file or a deb file, it can be tarballs, it can be a Docker image, and so on. So, anything 01:20.820 --> 01:26.820 that can help in the distribution of your software is a package, and depending on the support 01:26.820 --> 01:32.260 of the product that you have, it differs based on the OS and architecture. So, you can 01:32.260 --> 01:37.300 have one product, but it can have multiple packages based on the architecture and OS. So, 01:37.300 --> 01:43.300 one such example is Percona Server: we basically have that product released 01:43.300 --> 01:49.620 for different architectures and operating systems.
So, that is what we will be dealing with: 01:50.580 --> 01:56.660 Debian and RPM packages in this example. Now, what do we mean by package testing? So, 01:56.660 --> 02:01.380 package testing usually involves installing the packages, then verifying the packages that 02:01.380 --> 02:07.940 are installed. So, it is like version tests and all, then feature execution. So, you test those 02:07.940 --> 02:15.780 features, and then there are upgrade and downgrade tests depending on your product. Then, if you 02:15.860 --> 02:20.260 have a clustering setup, you need to check whether or not the clusters work properly 02:20.260 --> 02:26.260 and execute tests accordingly. So, each package needs to be tested individually on the supported 02:26.260 --> 02:30.900 infrastructure. So, that need for having infrastructure is also one key issue when you 02:30.900 --> 02:37.700 are doing things manually. So, lack of automation is one problem. Another problem is delays in 02:37.700 --> 02:42.420 infrastructure provisioning due to reliance on other teams. So, let us say you have a 02:42.420 --> 02:47.220 QA team and you need support for a particular new product, or you want to test your product 02:47.220 --> 02:52.740 for a particular OS. If you have a central Jenkins and you need to rely on other teams for 02:52.740 --> 02:57.860 adding your node, that creates delays. So, the back-and-forth communication is the key 02:57.860 --> 03:05.300 issue in this case. So, having a lack of control causes these kinds of problems. So, now we will 03:05.300 --> 03:10.820 see what Molecule is. So, Molecule is primarily designed to test Ansible roles, playbooks and 03:11.780 --> 03:17.700 collections. So, the core idea behind Molecule is to create and provision your 03:17.700 --> 03:22.900 infrastructure, execute your roles and collections on it, and then destroy it once you are done with it.
03:23.620 --> 03:29.300 So, it is an ideal tool for configuration validation. Now, there might be a question like, why not 03:29.300 --> 03:34.740 use Terraform and all? But the thing is, Terraform is good for infrastructure validation rather than 03:34.740 --> 03:39.860 configuration validation, which Molecule provides out of the box due to its Ansible support. 03:41.220 --> 03:45.380 Now, it has support for a different set of environments. So, let us say you have AWS and 03:45.860 --> 03:52.020 you want to test your products on AWS, then Hetzner or any other custom provider, 03:52.020 --> 03:59.300 you can utilize this setup easily, and it is easy to learn. So, why Molecule in broader 03:59.300 --> 04:04.500 terms? So, it gives you flexibility based on your requirements. So, there is no strict rule 04:04.500 --> 04:09.940 that you should follow a particular process when you are using Molecule. So, it gives you that 04:10.660 --> 04:16.020 flexibility when you are building things. Then, there is speed. You can build things in such 04:16.020 --> 04:21.620 a way that you can execute tests in parallel for different environments. So, that saves time, and 04:21.620 --> 04:26.020 resources are optimally utilized, because once you create environments and they have tested things, 04:26.980 --> 04:32.100 the destruction of these environments is also ensured. So, you can be sure that you are not 04:32.100 --> 04:38.420 wasting resources in this case. And then the key thing is focus. So, the QA engineers 04:39.380 --> 04:44.900 need to deal with QA stuff. So, they need to check the product efficiently rather 04:44.900 --> 04:49.060 than learning new tools and, you know, relearning those things and wasting time on that. 04:49.620 --> 04:54.500 Better to focus on the issues rather than, you know, learning the infrastructure-related stuff. 04:54.500 --> 05:00.820 So, having a config-centric tool is a good thing in this case, and it has wide support.
Like, 05:00.820 --> 05:05.540 you know, you have testing frameworks like Testinfra and Bats that you can utilize. 05:06.180 --> 05:12.500 Also, cloud providers are there. So, you have AWS, GCE, Azure and Hetzner providers available 05:12.500 --> 05:19.940 which you can utilize. So, that makes it a good candidate to use. Now, let us see how 05:19.940 --> 05:25.380 we can install Molecule. So, the primary way to install Molecule is using the capability of Python. 05:25.380 --> 05:30.820 So, you can do pip install molecule. But there are certain dependencies that you need to 05:30.820 --> 05:36.660 ensure for your Python environment. You can also install it from source using the 05:36.660 --> 05:41.620 pip install command along with the repo and the branch that you need to specify. 05:43.060 --> 05:47.700 So, now let us see the concepts: what are the concepts that you need to be aware of when you are 05:47.700 --> 05:54.020 using Molecule? So, there are six key concepts. The first is stages. Then there are drivers, 05:54.020 --> 05:59.060 then there are platforms. Then there are provisioners, verifiers and scenarios. So, in 05:59.060 --> 06:05.380 broader terms, stages are the flow of the test. So, each step that you are executing is a stage. 06:05.940 --> 06:10.500 Then the driver is basically where you are executing your stuff. So, 06:10.500 --> 06:19.940 is it on AWS, is it on Vagrant or Hetzner? So, drivers basically do the back-end stuff that you use. 06:20.580 --> 06:25.460 And then there are platforms. So, platforms basically are the properties of the driver that you selected. 06:25.540 --> 06:31.620 So, let us say you selected EC2: it has its own set of configurations that are supported. 06:31.620 --> 06:36.740 So, you follow that particular syntax. And then there are provisioners.
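To make these six concepts concrete, a minimal molecule.yml might look like the following sketch. All names and values here are illustrative, not the speaker's actual configuration:

```yaml
# Illustrative molecule.yml tying the six concepts together.
dependency:
  name: galaxy          # pull roles/collections from Ansible Galaxy
driver:
  name: docker          # where instances run (docker, vagrant, ec2, ...)
platforms:              # driver-specific instance properties
  - name: ubuntu-jammy
    image: ubuntu:22.04
provisioner:
  name: ansible         # runs your playbooks against the instances
verifier:
  name: testinfra       # or "ansible" to verify with a playbook
scenario:
  name: default         # the unit you invoke with `molecule test`
```

The stages are not listed here because they are the life-cycle steps Molecule runs through when executing this scenario.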
So, provisioners basically 06:36.740 --> 06:41.860 are responsible for setting up your environments and configuring them using your playbooks. 06:42.660 --> 06:47.700 And there are verifiers. Verifiers basically mean that you are verifying your existing setup 06:48.100 --> 06:54.100 with the testing setup that you want it to be verified with. It can be Testinfra or it can be an 06:54.420 --> 07:01.860 Ansible playbook. And a scenario is the entirety of this Molecule setup. So, you run molecule test using 07:01.860 --> 07:08.740 scenarios. So, yeah. So, this is a sample molecule.yml file. Now, we will go into the depths 07:08.740 --> 07:17.300 of each and every stage. So, first of all, what do we mean by stage? It is the execution of each 07:17.300 --> 07:23.860 and every thing that we are executing with Molecule. So, Molecule has a life cycle. So, an ideal 07:23.940 --> 07:30.100 life cycle involves these steps. So, first of all, you install role dependencies using Molecule. 07:30.100 --> 07:34.820 Like, let us say you have a particular role that you want to use from Ansible Galaxy; 07:34.820 --> 07:40.820 you utilize the dependency module to do that. Then there are linting checks, if you want your 07:40.820 --> 07:46.100 Ansible playbooks to be checked with linting and all. So, you can do that. Then there is this 07:46.100 --> 07:51.380 destroy stage where you actually destroy the infrastructure. So, ideally what folks do is, before 07:51.460 --> 07:58.020 creating their infra, they run the destroy stage to ensure that nothing is left over from a previous run. 07:58.020 --> 08:04.500 So, for example, in some cases you have some old instances running, and it is not ideal to run things 08:04.500 --> 08:10.820 on an instance which was old. So, just for the sake of assurance, people run it before 08:10.820 --> 08:16.100 creation. Then there is the prepare stage.
So, it is kind of like provisioning your infrastructure 08:16.180 --> 08:23.620 with a set of tasks or playbooks that you have. Then converge. Converge is a key thing. So, 08:23.620 --> 08:29.060 basically you write all your main tasks in the converge stage. The converge stage is basically an 08:29.620 --> 08:37.060 Ansible playbook. It can be a role or a collection. And then there is idempotence. So, 08:37.060 --> 08:42.420 this stage is like re-running the converge just to ensure that things are running properly 08:42.740 --> 08:47.780 or not. It verifies the converge again to see if there is any difference in the execution. 08:48.500 --> 08:52.740 And then there is side effect. So, side effect is like running converge again, 08:52.740 --> 08:58.980 but with your own set of scripts. So, one use case might be, let us say you set up a cluster using the 08:58.980 --> 09:04.100 converge stage. Side effect can help you out once the cluster is set up: you can execute 09:04.100 --> 09:11.940 tests using those side-effect steps. Then there is verify. So, the verify stage runs your verification 09:12.820 --> 09:18.020 tests. So, for example, let us say you want to verify that a configuration file exists or a 09:18.020 --> 09:25.620 service file exists; you can do those checks using the verify stage. It can be an Ansible playbook, 09:25.620 --> 09:32.340 but usually it is a testing tool like Testinfra, where you define the Python scripts to check 09:32.340 --> 09:38.020 the service's existence. And then the cleanup stage is like destroying your infrastructure 09:38.820 --> 09:45.780 once you are done with the tests. So, now let us see what drivers are. So, drivers are the 09:45.780 --> 09:52.180 key thing that actually does the heavy lifting. So, these drivers are like plugins. They are not 09:52.180 --> 09:58.580 available by default, except for a default driver that Molecule has.
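To recap the life cycle just described, a scenario can pin the stage order explicitly in its test_sequence. This is an illustrative sketch using Molecule's stage names, ordered to mirror the talk, not any particular default:

```yaml
# Illustrative scenario.test_sequence mirroring the stages described above
scenario:
  test_sequence:
    - dependency   # install role/collection dependencies (e.g. from Galaxy)
    - lint         # optional linting of the playbooks
    - destroy      # safety destroy before creating anything
    - create       # provision the instances via the driver
    - prepare      # pre-converge setup tasks
    - converge     # run the role/playbook under test
    - idempotence  # re-run converge; fail if anything changed
    - side_effect  # extra steps, e.g. exercising a cluster
    - verify       # run the verifier (Testinfra or a playbook)
    - cleanup      # tear-down tasks
    - destroy      # destroy the instances
```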
So, you install 09:58.580 --> 10:04.420 drivers using the pip command. So, let us say you want to install the Docker driver; you install it 10:04.500 --> 10:10.500 using pip install molecule-docker. So, you run your tests on a Docker container, if you 10:10.500 --> 10:16.900 want, using the Docker driver. The same thing goes for Vagrant, EC2, Hetzner Cloud, GCE. So, 10:16.900 --> 10:22.580 these are the providers that you can utilize. So, it satisfies your three needs. The first one is 10:22.580 --> 10:27.140 local testing. So, let us say you want to test things locally; you can do so using Docker 10:27.140 --> 10:33.220 and Vagrant. If you want to run on cloud, you can utilize EC2, Hetzner or GCE. 10:33.300 --> 10:39.380 And then the delegated driver is something which is installed by default with Molecule. It has its own 10:39.380 --> 10:46.660 set of configurations. So, usually it is for custom tests that you want to execute on 10:46.660 --> 10:54.020 custom infra. So, it has a bit more complexity to it. So, let us see what we mean by 10:54.020 --> 11:02.260 platforms. So, we saw drivers. Now, platforms are the configurations of those drivers. So, 11:02.340 --> 11:10.500 you can see, for example, in this example we are using AWS EC2 as a driver. It has its own 11:10.500 --> 11:17.300 set of syntax, like, you know, instance tags, SSH user, instance type. So, the platform configures the driver. 11:17.860 --> 11:25.380 So, that is the key concept behind platforms. And then there are provisioners. So, the provisioner 11:25.380 --> 11:30.980 is a step where you define the playbooks. So, by default they are stored in the molecule directory. 11:30.980 --> 11:35.540 But let us say you have a custom requirement, like you need to execute a specific playbook 11:35.540 --> 11:40.660 for each and every step; you can define them in the provisioner.
It can also have inventory details: 11:40.660 --> 11:45.860 if you have a particular path where the inventories are configured, you can use them in your 11:45.860 --> 11:54.420 setup. And yeah, let us see verifiers. So, verifiers are responsible for the verification of your 11:54.580 --> 12:01.620 configuration. By default it is an Ansible playbook, but you can change that and utilize 12:01.620 --> 12:10.100 Testinfra, etc. In this case, the default verifier via a yml playbook is used. 12:11.140 --> 12:15.620 But let us say you use Testinfra. In this case, the configuration is a bit different. 12:15.620 --> 12:20.340 You need to ensure that you have a tests folder present within the molecule directory. So, 12:20.660 --> 12:29.860 this is dependent on the verifier that you are selecting. So, yeah. And then there is this concept 12:29.860 --> 12:37.060 of scenarios. The scenario is where things actually start. So, you run molecule test using 12:37.060 --> 12:45.060 scenarios. So, you can have a scenario that has a single molecule file, and many such scenarios 12:45.140 --> 12:50.420 can exist for a particular set of tests that you have. So, in this case, you can see a 12:51.460 --> 12:56.900 molecule.yml file exists for Ubuntu Jammy and Ubuntu Noble. So, there are different 12:56.900 --> 13:01.300 scenarios based on OS in this example, and they are running for a single task. 13:02.580 --> 13:07.940 You execute your scenarios using the molecule test -s command and the scenario name. 13:08.740 --> 13:15.740 Now, based on your requirements, scenarios can be a bit different. So, let us say, in this example 13:15.740 --> 13:22.740 that we were seeing, you have scenarios for each OS executing a single and simple playbook. 13:22.740 --> 13:28.340 In this case, it is an Ansible role. So, yeah. So, based on your requirements, things can be 13:28.340 --> 13:36.340 a bit different, which can help you in having more flexibility in selection.
So, it can be like 13:36.420 --> 13:42.020 a single-scenario-based setup for a single role, where you define all the supported operating 13:42.020 --> 13:47.700 systems in this particular setup. So, the role is configured in such a way that all operating 13:47.700 --> 13:52.580 systems are covered in this scenario. You can have multiple scenarios, one per OS, for a role or 13:52.580 --> 13:59.540 a playbook, as seen before. You can also have multiple scenarios per OS for a different set of 13:59.540 --> 14:03.300 playbooks. Let us say you have a custom requirement where you want to ensure that certain 14:03.380 --> 14:12.820 scenarios have a different set of runs; you can do so using the third setup. So, now we will see 14:12.820 --> 14:18.740 a demo where there are multiple scenarios for each OS within the same role. So, we will be testing 14:18.740 --> 14:26.180 a role called PS. So, under the PS folder, you have three key folders. The first one is molecule, 14:26.420 --> 14:32.820 the second is playbooks and the last one is tasks. The molecule folder has three scenarios: 14:34.180 --> 14:39.540 Debian 12, Ubuntu Jammy, Ubuntu Noble, each having its own molecule.yml file. 14:40.260 --> 14:44.820 And there is a set of tests. For the sake of example, I have kept all the tests the same, 14:45.380 --> 14:52.580 and they are located in the tests folder. Then you have playbooks. Playbooks are well-defined things: 14:52.660 --> 15:00.180 the cleanup.yml, create, destroy and prepare playbooks. And tasks is basically what 15:00.180 --> 15:08.340 is being executed. So, you can see a molecule.yml file for a particular scenario, Debian 12. 15:09.460 --> 15:15.860 You can see, first of all, the dependency name galaxy is being used. So, we are using Ansible Galaxy. 15:16.420 --> 15:22.660 If you want to run a particular role in your setup, pre-existing in the Galaxy library, you can 15:22.660 --> 15:28.660 do so. Then you have a driver.
It is an EC2 instance that we are using. Then we have a platform 15:28.660 --> 15:34.500 that defines our EC2 instance: what are the things included in that instance, including tags, 15:35.140 --> 15:41.460 devices and all. Then there is a provisioner; we are using the Ansible provisioner because we have 15:41.540 --> 15:46.420 Ansible playbooks that we want to execute things with. So, the create stage has its own 15:46.820 --> 15:53.220 yml file which we are referencing in this example. Then verify: we are using Testinfra 15:53.220 --> 16:02.100 to verify, and scenarios are sequences where we want to execute things one by one. So, 16:02.820 --> 16:09.220 in the playbook.yml we are executing a role. We have defined it under roles, and this will be 16:09.300 --> 16:16.660 executed. And for the test script, we are checking two tests. First, does the service named 16:16.660 --> 16:23.940 mysql exist? If this test comes out as true, this means that a service named mysql exists. And then 16:23.940 --> 16:28.660 we are checking, is the service running? So, these are the two checks that we are doing. 16:30.420 --> 16:37.460 So, now I will show you the code for the Jenkins setup that we have. So, we are running Jenkins using 16:37.460 --> 16:46.100 code. So, the job is defined using this yml file with a Jenkins job plugin for Python. So, 16:46.100 --> 16:52.500 we define a job, we give it a name, we give a project type; it is a pipeline project that we have. 16:52.500 --> 16:59.860 We are also referencing the URL of the repository where you can find the Groovy file. We are also 16:59.860 --> 17:05.940 giving it the explicit script path of the Groovy file. So, this is the same repository that we are using. 17:06.020 --> 17:11.220 So, here is the Groovy file that will be used. So, first of all, we will be using an agent 17:11.220 --> 17:17.220 on a Jenkins worker node; it is a bookworm agent. Then it has its options for credentials purposes.
17:17.220 --> 17:23.460 So, let us say we are accessing AWS on the backend; Molecule needs to have access to AWS. 17:23.460 --> 17:29.620 So, that is what we are defining over here using withCredentials. It is a function which is defined 17:29.620 --> 17:35.380 at the bottom. So, you can see, right, these are the set of things that you need: you need a secret key 17:35.460 --> 17:44.180 and an access key ID. Then we have stages. So, we have four key stages in this example. The first one 17:44.180 --> 17:50.020 is the name setup: what name do we want this job to have? Then there is this checkout phase, where 17:50.020 --> 17:55.380 we are checking out this particular repository to execute our Molecule tests, because the Molecule 17:55.380 --> 18:01.060 tests are present within this particular repository on GitHub. Then there is this prepare stage, 18:01.060 --> 18:05.540 where we are actually installing Molecule before we run things. By default the worker is 18:05.540 --> 18:12.260 plain bookworm; we need to ensure that we have Molecule running on the worker. So, yeah, we install 18:12.260 --> 18:18.260 Molecule and then we run things in parallel. Now, how are we running things in parallel? So, we have 18:18.260 --> 18:23.860 a function defined that has a list of operating systems that we want to run tests on, and then we 18:23.860 --> 18:30.420 have a folder defined where our Molecule scenarios are located. So, this is a function named 18:30.740 --> 18:36.340 molecule parallel test, which has two arguments: the operating systems and the path where our scenarios 18:36.340 --> 18:43.940 are located. Let us see the function. So, the operating systems function has three operating 18:43.940 --> 18:50.660 systems: Ubuntu Noble, Jammy and Debian 12. Then install molecule has a set of 18:51.620 --> 18:58.100 bash commands that we use to install Molecule on this particular bookworm worker.
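A parallel fan-out like the one described can be sketched in scripted-pipeline Groovy roughly as follows. The function name, scenario names, and directory are illustrative, not the talk's actual code:

```groovy
// Illustrative sketch: run one Molecule scenario per OS in parallel on Jenkins.
def moleculeParallelTest(List<String> operatingSystems, String scenarioDir) {
    def branches = [:]
    operatingSystems.each { os ->
        branches[os] = {
            stage("molecule ${os}") {
                dir(scenarioDir) {
                    // Each branch runs the scenario matching its OS name
                    sh "molecule test -s ${os}"
                }
            }
        }
    }
    parallel branches
}

moleculeParallelTest(['ubuntu-noble', 'ubuntu-jammy', 'debian-12'], 'roles/ps')
```

Each map entry becomes a Jenkins parallel branch, so all scenarios run at the same time on the worker.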
18:59.300 --> 19:04.980 And yeah, the molecule parallel test is defined in this way. So, we are mapping the inputs that 19:04.980 --> 19:11.140 we are getting, and then we are executing them in parallel as stages in Jenkins. So, each scenario 19:11.220 --> 19:24.980 is executed in parallel. So, this is how we execute the tests. Now, let us see the logs. So, 19:24.980 --> 19:32.100 you can see this job executed, and all the stages are running in parallel, based on the 19:32.100 --> 19:40.100 operating system name. Let us open one particular example. So, we are using the example of 19:40.500 --> 19:47.380 Noble. So, initially it destroyed things if they existed, then it created the instances. 19:52.580 --> 19:59.220 Once the creation stage completed, we prepare by installing certain things that are needed for 19:59.220 --> 20:05.300 Percona Server to install. Once the prepare stage was completed, we went to the 20:05.300 --> 20:12.980 converge stage, where we are actually installing the repos and enabling them. So, all packages are 20:12.980 --> 20:20.020 installed during this particular stage. And then we are executing a verify test. So, we had a 20:20.020 --> 20:27.060 set of tests that we are executing. That is this one: if the server service exists and if the service 20:27.060 --> 20:34.900 is running. You can see those tests passed because it was installed properly. And then we destroy 20:34.980 --> 20:43.460 the instance. So, yeah, let us take a look at what is present in the particular playbook. So, 20:43.460 --> 20:50.340 we are using an EC2 instance. So, it has its own set of definitions that you need to follow, but 20:50.340 --> 20:56.500 they can be changed based on your requirements. So, in our use case, with the permissions that we had, 20:56.500 --> 21:02.980 we created this particular setup because we wanted to use this for other things. So, we are 21:03.140 --> 21:11.140 following a particular structure in the create stage.
The infrastructure that you created 21:11.140 --> 21:15.620 needs to be destroyed in a similar way. So, we are using the information that we got from the 21:15.620 --> 21:22.580 create stage, and we are using it to destroy everything via this destroy.yml file. Then there is this 21:22.580 --> 21:28.980 playbook where we are actually executing the role. So, the role is defined in the main.yml file. 21:29.860 --> 21:34.660 As you can see, we are first of all installing the percona-release package. So, we want to enable 21:34.660 --> 21:40.660 a particular repo, and we have this tool called percona-release that lets us do that. So, once 21:40.660 --> 21:46.420 it is installed, we are basically enabling the repos that we need. So, for example, let us 21:46.420 --> 21:53.380 say if we want to test Percona Server 8.4, we enable this particular repo. And then we 21:53.460 --> 21:59.940 enable the XtraDB Cluster repo. Then we install the packages. And once the packages are done, 22:01.220 --> 22:07.780 we start those services. And then there is this utility which is used during testing. So, we install 22:07.860 --> 22:21.300 that utility. So, this is how the setup is. Yeah. So, that is it. 22:29.220 --> 22:31.460 Does anyone have any questions or queries? Yeah. 22:37.780 --> 22:51.140 Can you speak louder? I didn't get the last part. 23:07.780 --> 23:27.220 Can you speak louder? Actually, I don't know the Chef part that you are referring to. So, 23:27.220 --> 23:33.620 I do know about cookbooks, but if Chef has an infrastructure provisioning setup, then this is similar 23:33.700 --> 23:39.140 to Chef in a way. But if it does not have, you know, particular support for creating 23:39.140 --> 23:44.740 infra and destroying it, then I believe it is not similar to the thing that you are referring to. 23:44.740 --> 23:50.500 So, Molecule helps you in creating and destroying after your test. So, that is the thing.
24:04.580 --> 24:10.820 There are folks that are doing that using their custom providers. So, it does have that support, 24:10.820 --> 24:15.860 but you will have to look into the repos for those particular providers and install them using 24:15.860 --> 24:20.660 the setup that they provide. So, it is a community-driven project. It might have support for 24:20.660 --> 24:25.140 the needs that you have. Yeah. 24:25.220 --> 24:36.180 Yeah. There are multiple ways in which we do that. So, let us say you have a package in 24:36.180 --> 24:42.100 the testing repo and you want to test that particular repo after upgrading from the main repo. 24:42.100 --> 24:47.140 So, you install things from the main repo, then you reinstall your package from the testing 24:47.140 --> 24:51.300 repo. But you need to check whether that upgrade worked properly or not. So, you run the tests, 24:52.260 --> 24:57.060 but you cross-reference them using the versions file. So, there are parameters involved, 24:57.060 --> 25:02.820 but you use a particular config file with which you can double-check your inputs. So, you have 25:02.820 --> 25:07.860 parameters for installing the particular testing repo in Jenkins. But if you want to test 25:07.860 --> 25:13.140 and validate the versions, you need an external file. It can be a file in the repo named versions, 25:13.140 --> 25:20.420 where you define the test versions. So, it is kind of manual work, but you can automate 25:20.500 --> 25:24.900 that when the packages are released in the testing repo. So, once packages are released, you can 25:24.900 --> 25:30.980 rename and modify that particular thing in an automated commit. So, that can solve that issue. 25:31.540 --> 25:38.020 We are doing this using a versions file in the case of Percona. Thank you.