Greetings to all Cypress enthusiasts!
Cross-browser compatibility is one of the most important characteristics of a web application: it should display and function equally correctly across different browsers and browser versions.
The modern variety of browsers is driven, among other things, by differences in how web content is rendered: browser engines (Blink, WebKit, Gecko, EdgeHTML) parse and process HTML tags and CSS styles differently, which naturally affects an application's appearance and behavior.
The importance of cross-browser testing is therefore clear. Its purpose is to ensure that when a user opens a web application in different browsers and browser versions, the content displays correctly, the structure stays intact, and there are no functional errors, performance inconsistencies, collapsed layouts, overlapping elements, and so on.
If you are not yet familiar with Cypress:
Cypress is a JavaScript-based end-to-end testing tool designed for modern web test automation. It allows conducting both full-fledged end-to-end testing, with user scenarios running against a real product, and integration testing of individual front-end components. Cypress has become a popular end-to-end testing tool for web applications thanks to its powerful features, user-friendly interface, fast test execution, and easy installation and debugging.
Cypress has the capability to run tests across multiple browsers. According to the official documentation, Cypress currently supports Chrome-family browsers (including Electron and Chromium-based Microsoft Edge), WebKit (Safari's browser engine), and Firefox. Excluding Electron, any browser you want to run Cypress tests in needs to be installed on your local system or CI environment.
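For example, the target browser can be chosen at run time with the `--browser` flag of the Cypress CLI (a minimal illustration):

```bash
# run the suite headlessly in Firefox instead of the default Electron
npx cypress run --browser firefox
```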
Obviously, it is often not necessary to run all available test suites in different browsers given the increase in test execution time and the associated cost of the required CI infrastructure. Therefore, Cypress provides different continuous integration strategies for deploying cross-browser testing in CI pipelines depending on the needs of a particular project.
To balance costs and available CI resources against the desired level of confidence in testing, Cypress suggests the following options for organizing cross-browser testing effectively:
- Selecting a specific test suite for a given browser. For example, sometimes it makes sense to run all available tests in Chrome, but in Firefox execute only the happy-path or critical-path test files, or a directory of specific “smoke” test files, using the `--spec` flag. In some cases, the priority areas for cross-browser testing may be critical application features or workflows, as well as the most likely user scenarios.
- Flexibly scheduling runs for individual browsers. For example, test runs in Chrome can be triggered by repository events, while Firefox tests run on a schedule matching the release cadence. Modern CI pipelines let you set the required time and frequency for running workflows.
- Parallel execution of test files per group, where the groups are based on the browsers being tested. Grouping runs this way lets each browser run at its own level of parallelization, so CI resources can be allocated between browsers according to the importance of each browser in the testing strategy. For example, tests in Chrome could run in parallel on up to four machines, while Firefox runs on two, minimizing CI costs.
- Configuring the inclusion or exclusion of browsers for a specific test or test suite. Sometimes it makes sense to run or skip one or more tests in certain browsers to shorten a test run. To do this, Cypress allows a specific browser to run in or exclude to be specified directly in the configuration of a test or test suite, for example `{ browser: 'firefox' }` or `{ browser: '!chrome' }` (see the sketch after this list).
- Taking the deployment environment into account. If the project behaves consistently and stably across browsers, it is reasonable to run cross-browser tests only before deploying changes to the production environment.
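As a minimal sketch of per-test browser configuration (the test names and assertions here are illustrative, not from the actual project):

```typescript
// Runs this suite only when Cypress is launched in Firefox
describe('Firefox-specific checks', { browser: 'firefox' }, () => {
  it('renders the page header', () => {
    cy.visit('/');
    cy.get('header').should('be.visible');
  });
});

// Runs this test in every browser except Chrome
it('behaves correctly outside Chrome', { browser: '!chrome' }, () => {
  cy.visit('/');
});
```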
There are various approaches to implementing cross-browser testing with Cypress; one of them is to set up automatic runs of Cypress tests on the chosen CI platform using Docker containers.
Using Docker allows us to effortlessly scale our testing infrastructure by distributing the workload across multiple containers. We can define several containers for different browsers and run them simultaneously with a single command, or rather a single configuration file. Parallel execution of tests in different browsers obviously speeds up the overall testing process significantly.
Using an official Cypress Docker image with pre-installed browsers as a base layer eliminates the need to install browsers on the servers running the Docker containers. You can read more about the benefits of using Docker in testing in my previous article about running Cypress tests in Docker containers.
In this article, I use GitHub Actions as my continuous integration platform. It should be noted that Cypress provides many useful examples of configuration files for setting up GitHub Actions workflows, which make it quite simple to organize various options for running tests in several browsers. For example, you can create a separate workflow for each browser and activate the workflows depending on specified trigger events in the repository, which is very convenient.
The idea is pretty simple. Let's say we need to run a specific Cypress test suite in four browsers: Google Chrome, Firefox, Microsoft Edge, and Electron. The trigger events for automatically starting the workflow should be a push to the main branch, an opened or reopened pull request, and a schedule, for example, every Friday at 2 am.
Also, during the execution of the workflow, we need to obtain artifacts with the results of the test runs in each browser: videos, plus screenshots in case any tests fail.
One possible solution is to create a custom Docker image based on one of the official Cypress Docker images, then build and simultaneously run four containers from that image, with the Cypress tests executed in a specific browser in each of them.
To demonstrate how to run Cypress tests across multiple browsers, I use the very simple Cypress-Docker project to test my blog on Medium. The project already has the necessary dependencies installed (notably Cypress and TypeScript) and contains a spec.cy.ts file with a set of three trivial tests for the blog's homepage:
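The actual spec is not reproduced here; a minimal sketch of what such a file might look like (the URL and assertions are assumptions):

```typescript
// cypress/e2e/spec.cy.ts - an illustrative sketch, not the exact original spec
describe('Medium blog homepage', () => {
  beforeEach(() => {
    // hypothetical blog URL
    cy.visit('https://medium.com/@my-blog');
  });

  it('displays the blog header', () => {
    cy.get('h2').first().should('be.visible');
  });

  it('shows at least one article preview', () => {
    cy.get('article').should('have.length.gt', 0);
  });

  it('has the expected page title', () => {
    cy.title().should('include', 'Medium');
  });
});
```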
By running the tests locally in the Chrome browser, we can make sure that they all pass successfully.
As a base layer for building the image, the official cypress/browsers Docker image was taken, which includes all operating-system dependencies and several browsers.
The most recent version of the image at the time of writing includes pre-installed Node.js 18.16.0, as well as three browsers: Google Chrome, Firefox, and Microsoft Edge. Given that the Electron browser comes bundled with Cypress, we will have all the browsers needed to carry out cross-browser test execution in accordance with the initial task.
The final Dockerfile for building the required image will look like this:
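Reconstructed as a sketch from the walkthrough below; the exact `cypress/browsers` tag is an assumption, so check Docker Hub for the current one:

```dockerfile
# Base image with Node.js 18.16.0 plus Chrome, Firefox, and Edge pre-installed
# (the tag is illustrative; pick the current one from Docker Hub)
FROM cypress/browsers:node-18.16.0-chrome-113.0.5672.92-1-ff-113.0-edge-113.0.1774.35-1

# All subsequent commands run in /e2e inside the image
WORKDIR /e2e

# Copy the project manifest, the Cypress config, and the tests themselves
COPY package.json cypress.config.ts ./
COPY cypress ./cypress

# Install dependencies and print information about Cypress and detected browsers
RUN npm i && npx cypress info

# Run Cypress headlessly when a container starts;
# the docker-compose `command` values below are appended as extra arguments
ENTRYPOINT ["npx", "cypress", "run"]
```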
Firstly, the `FROM` instruction defines the base image, all of whose dependencies and configuration will be included in the generated image.

The next step, `WORKDIR`, creates the `/e2e` working directory, in which all subsequent commands will be executed.

Next, the `COPY` instructions copy the package.json and cypress.config.ts files, as well as the cypress folder (including the spec file), from the repository into the working directory inside the image.

Then the `RUN` instruction specifies two commands to run: `npm i`, to install the necessary dependencies in the image's working directory, and `npx cypress info`, to display information about Cypress, the browsers it currently detects, and so on.

In the last step, the `ENTRYPOINT` instruction defines, in exec form, the command that runs Cypress in headless mode in containers generated from this image.
As mentioned earlier, after the Docker image is built, four containers will be created from it, and in each container the Cypress tests will run in a specific browser. To start the containers simultaneously with a single command, it is advisable to use a tool such as Docker Compose. To describe how the containers are built and configured, the YAML configuration file should include the following:
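A sketch of the docker-compose.yml reconstructed from the description below (the artifact paths are assumptions):

```yaml
version: '3.8'

services:
  e2e-chrome:
    build: .                      # image built from the Dockerfile above
    command: '--browser chrome'   # appended to the image ENTRYPOINT
    volumes:
      - ./artifacts/chrome/videos:/e2e/cypress/videos
      - ./artifacts/chrome/screenshots:/e2e/cypress/screenshots

  e2e-firefox:
    build: .
    # video recording disabled because of an open issue with Firefox video capture
    command: '--browser firefox --config video=false'
    volumes:
      - ./artifacts/firefox/screenshots:/e2e/cypress/screenshots

  e2e-edge:
    build: .
    command: '--browser edge'
    volumes:
      - ./artifacts/edge/videos:/e2e/cypress/videos
      - ./artifacts/edge/screenshots:/e2e/cypress/screenshots

  e2e-electron:
    build: .
    command: '--browser electron'
    volumes:
      - ./artifacts/electron/videos:/e2e/cypress/videos
      - ./artifacts/electron/screenshots:/e2e/cypress/screenshots
```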
As we can see, in the configuration file we define four services (containers): `e2e-chrome`, `e2e-firefox`, `e2e-edge`, and `e2e-electron`. Each of them uses the Docker image built from the Dockerfile located in the same directory (the `build` keys).
Next, the values of the `command` keys set the commands that launch Cypress in the browsers with the given names. It is worth noting here that in the `e2e-firefox` service, the command is supplemented with a configuration change because of an open issue with recording video when using the Firefox browser in Cypress.
The next step is to mount volumes (the `volumes` keys) for each service and set the appropriate mappings to give access to the artifacts outside the containers. In essence, this means that the videos and screenshots generated during a Cypress run and written inside the containers will actually be stored in the GitHub Actions virtual environment at the specified paths relative to the workspace of the launched workflow. If any tests fail, this allows the artifacts to be extracted from the `./artifacts` directory during the execution of the GitHub Actions workflow, as the initial task requires.
To run the workflow, we are going to use the following e2e.yml workflow file, located in the .github/workflows directory of the project:
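A sketch of e2e.yml assembled from the walkthrough below (the step names, artifact paths, and the `if: always()` guard are assumptions):

```yaml
name: e2e

on:
  push:
    branches: [main]           # pushes to the main branch
  pull_request:
    types: [opened, reopened]  # opened or reopened pull requests
  schedule:
    - cron: '0 2 * * 5'        # every Friday at 2 am (UTC)

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    timeout-minutes: 5         # guard against an accidental hang
    steps:
      - uses: actions/checkout@v3

      - name: Run docker-compose
        run: docker-compose up -d   # build and start all four containers

      - name: Watch container logs
        run: docker-compose logs -f # stream logs until the containers exit

      - name: Upload Chrome artifacts
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: chrome-artifacts
          path: ./artifacts/chrome
          if-no-files-found: ignore
          retention-days: 5

      # ...three more identical upload steps for firefox, edge, and electron,
      # pointing at ./artifacts/firefox, ./artifacts/edge, ./artifacts/electron
```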
Let's analyze it in more detail. The `on` key defines the trigger events that automatically activate the workflow according to the initial task conditions:

- `push` — when pushing changes to the main branch
- `pull_request` — when opening or reopening a pull request
- `schedule` — run the workflow on a schedule, every Friday at 2 am
Next, we define one job in the workflow, `cypress-run`. The job runs in a virtual machine with the latest version of the Ubuntu Linux operating system, within a configured `timeout-minutes` of 5 to ensure that an accidental hang does not use up extra CI minutes.
The `steps` key combines all the steps needed to complete the task. First, the `actions/checkout@v3` action is launched, which checks the repository out into the virtual machine, sequentially performing the necessary actions, including checking the git version, creating the necessary folders, authorization, etc.
Next, we see the main step of the job, Run docker-compose, which executes the `docker-compose up` command to build and run the four Docker containers with the specified names (`e2e-chrome`, `e2e-firefox`, `e2e-edge`, and `e2e-electron`) defined in the Docker Compose configuration. The `-d` flag runs the containers in the background so that the workflow can continue executing subsequent steps.
The purpose of the next step is to provide visibility into the logs of the running Docker containers, in order to monitor them and control service startup before moving on to subsequent workflow steps. The `docker-compose logs -f` command displays the logs of all running containers in real time. At this step, it is possible to track the execution of the Cypress tests step by step in each of the four browsers.
The next four steps are identical in nature and ensure that the workflow artifacts from each container are uploaded for access after the workflow completes. The `actions/upload-artifact@v3` action takes the paths provided as input and uploads the folders with the videos and screenshots generated by Cypress when a test fails. This makes the artifacts from each container available in the workflow summary. In addition, this is where the action's behavior when no artifact files are found is configured, as well as the artifact retention period (5 days).
To activate the workflow, let's create a trigger event. For this, let's make a small change to one of the existing tests, for example, add a “.” at the end of the expected header text so that the test fails:
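Hypothetically, the change might look like this (the expected text is an assumption):

```typescript
// before: cy.get('h1').should('have.text', 'My Blog');
// after: the trailing period makes the assertion fail
cy.get('h1').should('have.text', 'My Blog.');
```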
Next, let’s commit and push the change to the main branch:
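For example:

```bash
git add cypress/
git commit -m "Change expected header text to trigger a failure"
git push origin main
```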
Voila, the workflow is started! Let's go to the project repository on GitHub and open the summary of the latest run in the Actions tab.
In the log of the completed `cypress-run` job, we can make sure that all steps were completed successfully.
In particular, at the Run docker-compose step, a Docker image was built based on the previously described Dockerfile, and four containers were generated from it. During the image build, the `npx cypress info` command was executed, and information about the browsers detected by Cypress was displayed in the log, along with other characteristics of the test environment: the operating system, the versions of Node.js and Cypress, etc.
As a result, the first test failed as expected, while the other two passed successfully in each of the four browsers.
After downloading the artifacts from the workflow summary in ZIP format, we can verify that we have videos and screenshots of the failing test in each browser (except the video in Firefox, where recording was disabled).
Let's also check that if all tests complete successfully, no workflow artifacts are uploaded. To do this, let's fix the first test, make a commit, and push the change to the main branch again.
Summing up the described approach: the simultaneous launch and parallel execution of tests in several browsers are guaranteed, and containerization ensures that the test environment is consistent and reproducible when running Cypress tests in different browsers.
Moreover, a test environment based on a single configuration file is easier to set up and maintain, and the test infrastructure scales easily depending on the desired level of parallelism for each run, the number of browsers being tested, and so on.
All in all, this allows you to use the available CI resources more efficiently, reduce total test execution time, and expand test coverage across several browsers, providing an optimal level of confidence given the specifics of a particular project.
That's about it. If you found this useful, share it with a friend or community; maybe someone else will benefit from it as well. To continue your journey with me and learn more about testing with this awesome tool, you might be interested in subscribing to my blog to get notified when there's a new useful article.
Thank you for your attention! Happy testing!