Conducting Cross Browser Testing With Cypress in CI/CD using Docker

by Alex Sanzhanov, June 12th, 2023


Greetings to all Cypress enthusiasts!


In this article, we’ll explore one interesting way of running Cypress tests across multiple browsers in CI/CD using Docker. The cool thing here is that, with the help of Docker Compose, we will launch several Docker containers simultaneously, and in each of them Cypress tests will be executed in parallel in a different browser. The entire workflow will run automatically on the GitHub Actions platform, and as a result, we will get artifacts from the test runs in each of the browsers. Get ready, it’s going to be very exciting!

Why Cross Browser Testing?

Cross-browser compatibility is one of the most important characteristics of a web application: it should display and function equally correctly across different browsers and browser versions.


The variety of modern browsers stems, among other things, from differences in how they render web application content: different browser engines (Blink, WebKit, Gecko, EdgeHTML) parse and process HTML tags and CSS styles differently, which naturally affects an application’s appearance and behavior.


In this regard, the importance of cross-browser testing is obvious. Its purpose is to ensure that when a user opens a web application in different browsers and browser versions, the content is displayed correctly, the structure remains intact, and there are no functional errors, performance inconsistencies, broken layouts, elements overlapping each other, and so on.

Cypress Features in Cross Browser Testing

If you are not yet familiar with Cypress:


Cypress is a JavaScript-based end-to-end testing tool designed for modern web test automation. It supports both full-fledged end-to-end testing, which runs user scenarios against a real product, and integration testing of individual front-end components. Cypress has become a popular end-to-end testing tool for web applications thanks to its powerful features, user-friendly interface, fast test execution, and easy installation and debugging.


To my mind, Cypress is a real game changer in end-to-end and component testing, and it is growing at a rapid pace. Among the many benefits of Cypress, I would like to highlight the high quality of its documentation, the master classes that the Cypress team conducts and publishes in the public domain, and its very friendly and responsive community. Honestly, as you may have noticed from my previous articles, I’m a big fan of this wonderful tool!


Cypress has the capability to run tests across multiple browsers. According to the official documentation, Cypress currently supports Chrome-family browsers (including Electron and Chromium-based Microsoft Edge), WebKit (Safari’s browser engine), and Firefox. Except for Electron, any browser you want to run Cypress tests in needs to be installed on your local system or CI environment.
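
For illustration, here is a minimal CLI sketch of how a run can be pointed at a specific browser and, if needed, at a subset of specs; the spec path is a placeholder, not taken from a real project:

```bash
# Run the whole suite headlessly in a specific browser
npx cypress run --browser firefox

# Run only a subset of "smoke" specs in Chrome (placeholder path)
npx cypress run --browser chrome --spec "cypress/e2e/smoke/**/*.cy.ts"
```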


Obviously, it is often not necessary to run all available test suites in different browsers given the increase in test execution time and the associated cost of the required CI infrastructure. Therefore, Cypress provides different continuous integration strategies for deploying cross-browser testing in CI pipelines depending on the needs of a particular project.

Benefits of Using Cypress to Optimize Cross Browser Testing in CI

In order to balance costs and available CI resources, as well as the optimal level of confidence in testing, Cypress provides the following options for effectively organizing cross-browser testing:


  1. Selecting a specific test suite for a given browser. For example, sometimes it makes sense to run all available tests in Chrome, but in Firefox execute only the happy-path or critical-path test files, or a directory of specific “smoke” test files, using the --spec flag (as in the CLI sketch above). In some cases, the priority areas for cross-browser testing may be critical application features or workflows, as well as the most likely user scenarios.


  2. Flexibly scheduling how often individual browsers are run. For example, tests in Chrome can be triggered by events in the repository, while tests in Firefox run on a schedule tied to the release cadence. Modern CI pipelines let you set the required time and frequency for running workflows.


  3. Parallel execution of test files for each group, where the groups are based on the browsers being tested. Parallelization lets you run each browser at a different level of parallelism, distributing the allocated CI resources between browsers depending on how important each browser is to the testing strategy. For example, tests in Chrome could run in parallel on up to four machines, while Firefox runs on two, minimizing CI costs.


  4. Configuring which browsers run or skip a specific test or test suite. Sometimes it makes sense to run or ignore one or more tests in certain browsers to shorten the test run. To do this, Cypress lets you specify a browser to run in or exclude directly in the configuration of a test or test suite, for example: { browser: 'firefox' } or { browser: '!chrome' } (see the sketch after this list).


  5. Taking the deployment environment into account. In cases where the project behaves consistently and stably across browsers, it is reasonable to set up cross-browser testing only before deploying changes to the production environment.
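
To make item 4 concrete, here is a minimal TypeScript sketch of per-test and per-suite browser configuration; the test names and bodies are placeholders:

```ts
// Run this suite only in Firefox
describe('checkout flow', { browser: 'firefox' }, () => {
  it('completes an order', () => {
    // ...
  });
});

// Skip this test whenever Cypress runs in Chrome
it('renders the fallback widget', { browser: '!chrome' }, () => {
  // ...
});
```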


A thoughtful combination of these Cypress advantages will help you build an optimal cross-browser testing strategy based on the needs of a particular project.

Why Use Docker in Cross Browser Testing?

There are various approaches to implementing cross-browser testing with Cypress, one of which is to set up automatic runs of Cypress tests on your chosen CI platform using Docker.


Docker makes it much easier to set up and maintain a cross-browser test environment. By encapsulating the entire test stack, including different browsers, in isolated Docker containers, it is possible to reproduce a stable and consistent environment for running tests in different browsers, regardless of the servers running the containers.


Docker Compose allows us to effortlessly scale our testing infrastructure by distributing the workload across multiple containers. We can define several containers for different browsers and run them simultaneously with a single command, or rather a single configuration file. Parallel execution of tests in different browsers obviously speeds up the overall testing process significantly.


Using the official Cypress Docker images with pre-installed browsers as a base layer eliminates the need to install browsers on the servers running the Docker containers. You can read more about the benefits of using Docker in testing in my previous article about running Cypress tests in Docker containers.

Moving from theory to practice…

In this article, I use GitHub Actions as my continuous integration platform. It should be noted that Cypress provides many useful examples of configuration files for setting up GitHub Actions workflows, based on which it is quite simple to organize various options for running tests in several browsers. For example, you can create a workflow for each browser separately and activate the created workflows depending on the specified trigger events in the repository. Obviously, this is very convenient.


This article provides a simplified example of creating multiple containers to run tests within different browsers simultaneously using Docker Compose.


The idea is pretty simple — let’s say we need to run a specific Cypress test suite within four browsers — Google Chrome, Firefox, Microsoft Edge, and Electron. Trigger events for automatically starting a workflow in the repository should be a push event to the main branch, an open or reopened pull request, and a scheduled start, for example, every Friday at 2 am.


Also, during the execution of the workflow, it is necessary to obtain artifacts with the results of running tests in each of the browsers — videos and screenshots in case any tests fail.


One possible solution to this task is to create a custom Docker image based on one of the official Cypress Docker images, then build and simultaneously run four containers from the created image, in each of which our Cypress tests will be executed in a specific browser.

Briefly about the test project

To demonstrate how to run Cypress tests across multiple browsers, I use the very simple Cypress-Docker project to test my blog on Medium. The project already has Cypress and TypeScript installed as dependencies, and it contains a spec.cy.ts file with a set of three trivial tests for the blog’s homepage:
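
The spec itself is shown as an image in the published article; as a purely hypothetical sketch of what three such trivial homepage checks might look like (the URL and selectors are placeholders, not the author’s actual tests):

```ts
// cypress/e2e/spec.cy.ts -- hypothetical example, not the original spec
describe('Medium blog homepage', () => {
  beforeEach(() => {
    cy.visit('https://medium.com/@sanzhanov'); // placeholder URL
  });

  it('loads the homepage', () => {
    cy.url().should('include', 'medium.com');
  });

  it('shows the blog header', () => {
    cy.get('h1').should('be.visible'); // placeholder selector
  });

  it('lists at least one article', () => {
    cy.get('article').should('exist'); // placeholder selector
  });
});
```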



By running the spec locally in the Chrome browser, we make sure that all tests pass successfully:



Obviously, these tests are only demonstration examples and do not indicate the correct display and functionality of the website in different browsers.


To run tests on the GitHub Actions platform, this project is hosted on GitHub.

Building a Docker image

As a base layer for building the image, the official cypress/browsers Docker image was taken, which includes all operating system dependencies and several browsers.


The most recent version of that image at the time of writing includes pre-installed Node.js 18.16.0, as well as three browsers: Google Chrome, Firefox, and Microsoft Edge. Given that the Electron browser comes bundled with Cypress, we will have all the necessary browsers at our disposal to carry out cross-browser test execution in accordance with the initial task.


The final Dockerfile for building the required image will look like this:
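
The Dockerfile appears as an image in the published article; based on the instruction-by-instruction description below, it looks approximately like this (the exact base image tag is an assumption, so pin whichever cypress/browsers version fits your project):

```Dockerfile
# Base image with OS dependencies, Node.js and pre-installed browsers
# (pin a specific cypress/browsers tag with Node.js 18.16.0, Chrome, Firefox and Edge)
FROM cypress/browsers

# Working directory for all subsequent instructions
WORKDIR /e2e

# Copy the project files needed to install dependencies and run the tests
COPY package.json cypress.config.ts ./
COPY cypress ./cypress

# Install dependencies and print info about Cypress and the detected browsers
RUN npm i && npx cypress info

# Run Cypress in headless mode when a container starts (exec form)
ENTRYPOINT ["npx", "cypress", "run"]
```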



First, the FROM instruction defines the base image, whose dependencies and configuration will all be included in the generated image.


The next WORKDIR step creates the /e2e working directory, in which all subsequent commands will be executed.


Next, the COPY instruction copies the package.json and cypress.config.ts files, as well as the cypress folder (including the spec file), from the repository into the working directory inside the image.


Then the RUN instruction specifies two commands to run: npm i, to install the necessary dependencies in the image’s working directory, and npx cypress info, to print information about Cypress, the browsers it detects, and so on.


Finally, the ENTRYPOINT instruction defines, in exec form, the command that runs Cypress in headless mode in containers generated from this image.

Setting up running containers with Docker Compose

As mentioned earlier, after building the Docker image, four containers will be created based on it, and in each container Cypress tests will be run in a specific browser. To launch the containers simultaneously with a single command, it is convenient to use a tool such as Docker Compose. The YAML configuration file that describes how the containers are built and configured should include the following:
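
The compose file is shown as an image in the published article; based on the description that follows, a sketch might look like this (the artifact paths and the Firefox video workaround are assumptions):

```yaml
# docker-compose.yml -- a sketch reconstructed from the description below
version: '3.8'

services:
  e2e-chrome:
    build: .
    command: '--browser chrome'
    volumes:
      - ./artifacts/chrome/videos:/e2e/cypress/videos
      - ./artifacts/chrome/screenshots:/e2e/cypress/screenshots

  e2e-firefox:
    build: .
    # The command is adjusted because of an open Cypress issue with video
    # recording in Firefox; disabling video is one possible workaround (assumption)
    command: '--browser firefox --config video=false'
    volumes:
      - ./artifacts/firefox/videos:/e2e/cypress/videos
      - ./artifacts/firefox/screenshots:/e2e/cypress/screenshots

  e2e-edge:
    build: .
    command: '--browser edge'
    volumes:
      - ./artifacts/edge/videos:/e2e/cypress/videos
      - ./artifacts/edge/screenshots:/e2e/cypress/screenshots

  e2e-electron:
    build: .
    command: '--browser electron'
    volumes:
      - ./artifacts/electron/videos:/e2e/cypress/videos
      - ./artifacts/electron/screenshots:/e2e/cypress/screenshots
```

Because the image’s ENTRYPOINT already calls npx cypress run, each command value is appended to it as extra CLI arguments.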



As we can see, in the configuration file we define four services (containers): e2e-chrome, e2e-firefox, e2e-edge, and e2e-electron. Each of them uses the Docker image built from the Dockerfile located in the same directory (the build keys).


Next, the values of the command keys set the commands that launch Cypress in the browsers with the given names. It is worth noting here that in the e2e-firefox service, the command is supplemented with a configuration change because of an open issue with video recording when using the Firefox browser in Cypress.


The next step is to mount volumes (the volumes keys) for each service and set the appropriate mappings to make the artifacts accessible outside the containers. In essence, this means that the videos and screenshots generated during the Cypress run and written inside the containers will actually be stored in the GitHub Actions virtual environment at the specified paths relative to the workspace of the running workflow. If any tests fail, this allows the artifacts to be picked up from the ./artifacts directory during the execution of the GitHub Actions workflow, as required by the initial task.

Setting up the Workflow in GitHub Actions

To run the workflow, we’re going to use the following e2e.yml workflow file, which is located in the .github/workflows directory of the project:
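
The workflow file is shown as an image in the published article; based on the step-by-step description below, a sketch might look like this (the artifact names and paths, as well as the pull request event types, are assumptions; the Firefox, Edge, and Electron upload steps are identical to the Chrome one apart from name and path):

```yaml
# .github/workflows/e2e.yml -- a sketch based on the description below
name: e2e

on:
  push:
    branches: [main]
  pull_request:
    types: [opened, reopened]
  schedule:
    - cron: '0 2 * * 5'   # every Friday at 2am (UTC)

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Run docker-compose
        run: docker-compose up -d

      - name: Show container logs
        run: docker-compose logs -f

      - name: Upload Chrome artifacts
        uses: actions/upload-artifact@v3
        with:
          name: e2e-chrome-artifacts
          path: ./artifacts/chrome
          if-no-files-found: ignore
          retention-days: 5

      # ...three more identical upload steps for ./artifacts/firefox,
      # ./artifacts/edge and ./artifacts/electron
```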



Let’s analyze it in more detail. Using the on key, we define the trigger events that automatically activate the workflow according to the initial task conditions:


push — when pushing changes to the main branch

pull_request — when opening or reopening a pull request

schedule — run a scheduled workflow every Friday at 2am


Next, we define one job in the workflow, cypress-run. The job runs on a virtual machine with the latest version of the Ubuntu Linux operating system, with a 5-minute timeout configured so that an accidental hang does not use up extra CI minutes.


The steps key combines all the steps needed to complete the task. First, the actions/checkout@v3 action is run, which checks the repository out into the virtual machine, sequentially performing the necessary operations, including checking the git version, creating the required folders, authorizing, and so on.


Next comes the main step of the job, Run docker-compose, which executes the docker-compose up command to build and run the four Docker containers with the specified names (e2e-chrome, e2e-firefox, e2e-edge, and e2e-electron) defined in the Docker Compose configuration. Here the -d flag runs the containers in the background, allowing the workflow to continue executing subsequent steps.


The purpose of the next step is to provide visibility into the logs of the running Docker containers, in order to monitor them and follow the startup of the services before moving on to subsequent workflow steps. The docker-compose logs -f command streams the logs of all running containers in real time, so at this step it is possible to follow the execution of the Cypress tests step by step in each of the four browsers.


The next four steps are identical in nature and ensure that the artifacts from each container are uploaded so they remain accessible after the workflow completes. The actions/upload-artifact@v3 action takes the paths provided as input and uploads the folders with the videos and screenshots Cypress generates when tests fail. This makes the artifacts from each container available in the workflow summary. In addition, this is where the action’s behavior when no artifact files are found is configured, as well as the artifact retention period (5 days).

Starting the Workflow

To activate the workflow, let’s create a trigger event. For this, let’s make a small change to one of the existing tests, for example, add “.” at the end of the expected header text (highlighted) so that the test fails:
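
As a purely hypothetical illustration of such a change (the selector and text are placeholders, not the real test):

```ts
// Before: the assertion matches the actual header text
cy.get('h1').should('contain.text', 'Alex Sanzhanov');

// After: the trailing "." no longer matches, so the test fails
cy.get('h1').should('contain.text', 'Alex Sanzhanov.');
```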



Next, let’s commit and push the change to the main branch:
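
A minimal sketch of the commands (the spec path and commit message are arbitrary):

```bash
git add cypress/e2e/spec.cy.ts
git commit -m "Make the first test fail on purpose"
git push origin main
```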



Voila, the workflow has started! Let’s go to the project repository on GitHub and open the summary of the latest workflow run in the Actions tab:



In the log of the completed cypress-run job, we make sure that all steps were completed successfully:



In particular, at the Run docker-compose step, a Docker image was built based on the previously described Dockerfile, and four containers were generated from it. During the image build, the npx cypress info command was executed, and information about the detected browsers was written to the log, along with other characteristics of the test environment: the operating system, the versions of Node.js and Cypress, etc.:



At the next step, processes were simultaneously launched in the created containers:



to be more precise, parallel execution of Cypress tests was launched in four browsers:



We can see logs from each container about the progress of the test execution:



As a result, the first test failed as expected, while the next two passed successfully in each of the four browsers:


Validate Workflow Artifacts

As you may have noticed earlier, the workflow summary contains information about the uploaded artifacts:



After downloading the artifacts from the workflow summary in ZIP format, we verify that we have videos and screenshots of the failing test in each browser (except the video in Firefox):



Let’s check that if all tests pass, no workflow artifacts are uploaded. To do this, let’s fix the first test, make a commit, and push the change to the main branch again:



After the workflow completes, we make sure that no artifacts were uploaded from any container, since none were generated:


Final thoughts

In conclusion, it should be noted that the optimization of cross-browser testing with Cypress based on the approach described in this article has a bunch of obvious advantages.


In particular, the simultaneous launch and parallel execution of tests in several browsers are guaranteed. Containerization ensures that the test environment is consistent and reproducible when running Cypress tests on different browsers.


Moreover, it is easier to set up and maintain a test environment based on a single configuration file. The test infrastructure scales easily depending on the desired level of parallelism for each run or the number of browsers being tested, etc.


Overall, this approach lets you use the available CI resources more efficiently, reduce the total test execution time, and expand test coverage across several browsers, providing an optimal level of confidence given the specifics of a particular project.


That’s about it. If you found this useful, share it with a friend or community; maybe someone else will benefit from it as well. To continue your journey with me and get more information about testing with this awesome tool, consider subscribing to my blog to get notified when there’s a new useful article.


The source code of all examples presented in this article can be found in the blog’s repository on GitHub.


Thank you for your attention! Happy testing!



