CrossBrowserTesting.com

A Design, Development, Testing Blog

Connecting your Back-End and Front-End Functional Testing

August 17, 2020 By Gareth Goh

Many SmartBear testers who use SoapUI Pro to tackle their automated API functional testing know the importance of end-to-end testing of the entire back-end. Making sure critical web services like REST and GraphQL APIs are functioning is why testers use SoapUI Pro to create, run, and analyze complex tests. Creating comprehensive end-to-end tests is the best way to validate the entire workflow of your APIs from either an API definition or live endpoints. With those tests in place, the back-end of your web services is able to flow and function smoothly.

SoapUI functional testers are adept at creating and running complex functional tests and finding bugs in APIs. Most APIs are consumed by other services and, in many cases, by the UI layers of web, mobile, and desktop apps.

The functionality of APIs can be validated by exercising the APIs or the services that call them. An alternative, and arguably truer, mechanism is to exercise the application workflows the way end users do, through the UI layers of web, mobile, and desktop apps. These end-to-end workflow tests have the dual advantage of testing the UI layers and also testing the APIs that serve them underneath.

But what about the front-end?

As organizations shift left, becoming leaner and more agile in an effort to deliver software faster with unsurpassed quality, functional testers are being tasked with testing both the API layers and the UI layers. Additionally, the shift-left movement means that API endpoints are being tested by more than just QA or SDETs, which benefits the final product. Revealing issues earlier in the process makes them easier and much less expensive for the team to resolve.

This is particularly true in smaller, fast-moving teams and organizations, and in instances where an older application has been retrofitted with a new services layer in an effort to support digital transformation.

Increasingly, we are seeing that API testers are becoming responsible for not just making sure the back-end of web services works correctly, but that the front-end looks and functions well too. Mobile and web testers are now tasked with tackling the entire functional testing lifecycle and getting more of that API experience. 

When that test automation engineer is writing a test script for the frontend, ideally they are also keeping the backend in mind and writing a test script that matches up as well. 

In a number of instances, especially for web apps and native mobile apps, testers who are charged with testing both the UI and API layers have a few choices:

  1. Test the API layers and then test the UI, which will also exercise the APIs
  2. Test the UI, which will indirectly test the API layers
  3. Find a set of complementary tools that allow an API functional testing tool to be used alongside a UI functional testing tool. This has some overlap, but it ensures that all use cases, including edge cases, are covered effectively

To that end, those testers need testing tools that play well together and stay in sync. More and more, we’re seeing SoapUI Pro users asking about and using CrossBrowserTesting to test their web apps, as well as BitBar for their mobile app testing – both of which have APIs that can be tested with SoapUI Pro.

By further connecting your back-end and front-end functional testing, API testing should also fit neatly into your continuous integration/continuous delivery (CI/CD) pipeline. SoapUI Pro, CrossBrowserTesting, and BitBar are all designed for test automation and for those automated tests to fit seamlessly into your DevOps or Agile development workflow.

Diagnosis of problems will also be quicker and smoother with the UI test driving the front end, which in turn drives the APIs. Some visual bugs could be the result of bad code on the front end, or of the API having an issue querying the database that supplies the correct visual. Without exercising the full application workflow, back-end test automation may not catch issues that front-end testing does. The test automation engineer working on both parts will know that.

Whether you’re an API tester looking to branch into the front-end of your website, or a UI/UX designer looking to learn more about how APIs work on the back-end, SmartBear tools are designed to help you get there and manage that process more efficiently.

Learn more about SmartBear tools to test your front-end by signing up for CrossBrowserTesting & BitBar FREE trials!

Filed Under: Test Automation Tagged With: automated testing, back-end testing, front-end testing, SmartBear

The Benefits of Increased Parallel Testing

June 2, 2020 By Nick Bonofiglio

Quality at speed is the new mantra for most CIOs. In response, DevOps teams have evolved the software lifecycle, becoming both more agile and more scalable while attempting to deliver a flawless experience. Companies are constantly delivering new products and updates to consumers, and the speed with which they can deliver quality code and features is limited by a dependency on slow and intricate testing processes. This rapid pace of change directly affects the functional testing space. How can you deliver products at speed while maintaining the highest standards of experience?

Enter parallel testing. 

Instead of running single tests sequentially, parallelization is the process of running multiple automated test cases concurrently. With parallel testing, organizations can drastically save on the amount of time it takes to execute automated tests while dramatically improving the scope and scale of said tests. 

The ROI is immediate and can be easily quantified because there is a direct correlation between the number of concurrent executions and the time that it takes to run the test suite. For example, let’s say a team has a suite of 100 automated tests and each test takes an average of one minute to execute. That would mean that the test suite takes 100 minutes to run if executed sequentially. However, if that team distributes that test suite across 10 parallels running concurrently, they could reduce the execution time ten-fold to just 10 minutes total. This results in a significant amount of time being put back in the developers’ hands.

Example code for implementing parallel test execution on SmartBear’s CrossBrowserTesting platform, using JavaScript via NodeJS. This sample, as well as samples for other languages and frameworks, can be found at https://github.com/crossbrowsertesting/.
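
The hosted samples above are the canonical versions. Purely as an illustration of the same idea, here is a minimal Python sketch that runs one small check concurrently across a few configurations using Selenium Remote WebDriver and a thread pool. The hub URL, credentials, and capability names are placeholders, so use the values from your own CrossBrowserTesting account and the official documentation.

```python
from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver

# Placeholder values for illustration only; use the credentials, hub URL,
# and capability names from your CrossBrowserTesting account settings.
HUB_URL = "http://YOUR_USERNAME:YOUR_AUTHKEY@hub.crossbrowsertesting.com:80/wd/hub"
CONFIGS = [
    {"browserName": "Chrome", "platform": "Windows 10"},
    {"browserName": "Firefox", "platform": "Windows 10"},
    {"browserName": "Safari", "platform": "Mac OSX 10.14"},
]

def run_smoke_test(caps):
    # ChromeOptions is used here only as a generic capability container.
    options = webdriver.ChromeOptions()
    for name, value in caps.items():
        options.set_capability(name, value)
    driver = webdriver.Remote(command_executor=HUB_URL, options=options)
    try:
        driver.get("https://example.com")
        return caps["browserName"], driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    # One worker per configuration, so all runs execute concurrently.
    with ThreadPoolExecutor(max_workers=len(CONFIGS)) as pool:
        for browser, title in pool.map(run_smoke_test, CONFIGS):
            print(f"{browser}: {title}")
```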

And, dramatically reducing testing time isn’t the only benefit of parallel testing: 

Improve Fluidity in your Development Process 

An increasing number of DevOps teams have adopted the process of Continuous Integration and Continuous Delivery, a type of process that requires frequent and reliable functional testing. Dev teams who get test results back faster are able to maintain a more constant flow of writing code, testing, and deploying to production, all repeated as fast as possible.

Implementing a parallel testing process can optimize that continuous flow. Getting results back quickly is a core part of CI/CD, because the sooner developers do, the sooner they can begin debugging and refactoring, or pushing to production. In addition to that, organizations can reduce cost and increase accessibility by moving test environments to the cloud. 

Example code for implementing execution into a Jenkins pipeline in NodeJS using the CrossBrowserTesting Jenkins plug-in. This plug-in allows users to specify their desired environment configurations in the pipeline itself, separate from the test framework, for increased flexibility. Learn More

Substantially Reduce Testing Costs 

By running parallel tests concurrently on virtual machines in a cloud environment, you can dramatically reduce your costs per test. Working with a partner like SmartBear – who offers more combinations of virtual machines and real devices in both public and private cloud environments – helps remove the costly burden of maintaining test environments and devices. Parallel testing is the way to take advantage of all those browser combinations and devices. Speaking of which…

Drastically Increase Test Coverage

Because parallel testing is faster – and with access to thousands of device and browser combinations in SmartBear’s public cloud – your team can make sure that the breadth of your testing efforts is much wider. If your customers are a diverse group spanning all sorts of mobile devices, and with their own preferred operating systems and browsers, you want to make sure that the code you push covers all of those bases. One unhappy user with an edge case device or browser type can be enough to harm your business’ reputation. With parallel testing, you’re much more likely to avoid that scenario. 

Changing from sequential testing or manual testing to an automated testing environment with multiple parallels can be intimidating for some teams, but it doesn’t have to be scary! You can take baby steps, increasing first to two parallels and setting up a few automated tests there, before continuing to scale up. SmartBear is here to help you explore the many benefits of parallel testing. 

Filed Under: Test Automation Tagged With: automated testing, parallel testing, Selenium

Real Device and Browser Testing Strategy for 2020

December 19, 2019 By Sarah Mischinger

Check out the latest blog from the CrossBrowserTesting and Bitbar team as we help you get off on the right foot for 2020!

With the new year coming in big strides, it’s time to start thinking about your real device and browser testing strategy for 2020. As you can imagine, testing without a proper plan often causes bottlenecks that hinder your apps’ timely release. Therefore, by defining a strategy, your teams can be more productive and publish highly functional apps regularly.

Even if you already have a plan on hand – next year could be the right opportunity to reevaluate the tools and methods you currently utilize and search for better fitting alternatives. Our goal for this article is to help you put your device and browser testing strategy in the best position for 2020.

Testing On the Right Devices and Browsers

No matter if you’re developing a web app or a native mobile app – you need to test on various real devices, platforms, etc. to make sure your product works perfectly for as many customers as possible. As discussed here, you shouldn’t rely on emulators to get reliable test results.

However, there is a seemingly unending number of possible test environments out there – especially on Android, which is highly fragmented. That being said, you can’t blindly pick devices and browsers for testing; you need a proper plan. Ultimately, this plan should guide your testers and developers through testing so they can focus on the task at hand.

What does this mean for your testing strategy for 2020?

Depending on where you’re currently at, there are two different key actions you need to take:

  1. If you don’t have a list of devices and browsers to test yet, it’s time to establish one and make it an integral part of your device and browser testing strategy for 2020.
  2. If you already have a list of real devices and browsers on which you need to test your app, you’ll need to revise this list to reflect the changing mobile device and browser markets.

Use your product’s usage data to determine which devices and browsers your customers prefer. You can also find information in the quarterly report of the most popular mobile devices from DeviceAtlas. Also, look for browser distribution data for your target markets online.

After collecting and analyzing the data, you should be able to create a prioritized list of the must-test devices and browsers with confidence. Also, make sure to update your list as older devices and browser versions disappear and new ones appear.

Pick a Fitting Framework for UI Testing

As the new year approaches, it might also be the perfect opportunity to switch to a different UI testing framework – or finally start using one.

Depending on the tech stack, team size, and available roles in your team, some frameworks may fit better than others.

What does this mean for your testing strategy for 2020?

Choosing and working with the right test framework enables your team to get the most out of their testing time, and release functional, high-quality applications. It doesn’t matter whether you’re building a web or mobile app – only the tools you can leverage are different.

Before choosing a framework, you need to make sure to analyze and understand your product’s and team’s needs. According to Capgemini’s World Quality Report 2019-20, 30%¹ of respondents reported a lack of skills across all areas of testing among QA and test professionals. Hence, you need to make sure you have skilled experts on your team or have the resources to hire and retain them.

UI Test Framework for Web Apps

The framework that worked its way to the top of the most popular testing tools for web apps is Selenium. Although your developers and testers may need a while to get comfortable with Selenium testing and write efficient tests with it, you will save a lot of time and headaches in the future.

Basically, you can use Selenium to automate interactions with web browsers on desktop and mobile devices using different programming languages. Kick-start your automation journey by learning from our Selenium Toolkit here.

UI Test Frameworks for Mobile Apps

For mobile testing, we can find three popular frameworks that all have their pros and cons.

  • Appium – works similarly to Selenium and supports both Android and iOS. You only need one codebase, but tests tend to be slower and flakier than tests run with the other frameworks.
  • Espresso – only supports Android and is part of the Android SDK. Thus, tests written with Espresso perform faster than with Appium and are less flaky.
  • XCUITest – only supports iOS testing and is part of the XCode integrated XCTest framework. As a result, tests perform well and deliver reliable results.

Tip: Read our comparison articles on Appium vs. Espresso and Appium vs. XCUITest to help you make a decision.

Make Use of Automation and Parallel Testing

Modern businesses aim to deliver apps in the shortest time possible – this can only be achieved if an appropriate automation strategy is set up.

However, most likely, not everything can be automated, and your team needs to decide which parts to test manually and which to test automatically.

In fact, according to Capgemini’s survey, 50%² of respondents named the inability to apply test automation at appropriate levels as one of the challenges they are currently facing when integrating testing practices into agile development. Surprisingly, this number hasn’t changed since the last survey in 2018. Therefore, the effective integration of test automation in 2020 will remain an essential part of the whole real device and browser testing strategy for companies.

What does that mean for your testing strategy for 2020?

Especially for testing the user experience of mobile or web apps, manual testing can be of great help. For everything else, try to use automation. However, this shouldn’t cause new bottlenecks, and you need to determine what to test and how. As a rule of thumb, anything that needs to be checked regularly, or for which you can expect certain results, can be safely automated.

Naturally, this is the same whether you’re testing mobile or web apps. Though, again, the tools which can help you in this endeavor differ.

For example, you could use a tool that automates your test scripts and takes screenshots of your web app at different screen sizes. Particularly for testing mobile apps, it makes sense to select an environment that lets you run tests on multiple devices at the same time (parallel testing). This way, you not only save time by automating tests but also by running them more efficiently. You can get the best results by testing on an unlimited number of devices.
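
To make the screenshot idea concrete, here is a minimal Selenium sketch (not the CrossBrowserTesting screenshot feature itself, just an illustration of the concept) that loads a page at several viewport sizes and saves an image for each. The URL and sizes are placeholders, and it assumes a local Chrome/chromedriver setup.

```python
from selenium import webdriver

# Viewport sizes to sample; adjust to the devices your users actually have.
SIZES = [(1920, 1080), (1366, 768), (768, 1024), (375, 812)]

driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver install
try:
    for width, height in SIZES:
        driver.set_window_size(width, height)
        driver.get("https://example.com")  # placeholder URL
        driver.save_screenshot(f"homepage_{width}x{height}.png")
finally:
    driver.quit()
```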

Another key driver for businesses in 2020 is AI-powered testing. 16%³ of executives rate the use of AI in test execution activities as their top priority concerning AI plans. More and more businesses will rely on AI for testing in the coming years. Companies that adapt to these new technologies at an early stage have an advantage over companies that ignore them.

Decide Where to Test – On-Premise or in the Cloud

Do you have a device and browser test lab in-house, or do you leverage online-based test services? We can’t possibly favor one option over the other, as the right environment depends on your requirements.

With cloud-based services, your team can run tests anywhere, anytime, and you don’t have to buy and maintain devices yourself. This way, you can save a lot of money and a lot of your employees’ time.

Yet, many SaaS platforms don’t meet modern enterprise security requirements, so some companies can’t use them. Besides, SaaS usually provides out-of-the-box features, so you cannot customize the setup to suit your needs.

What does this mean for your testing strategy for 2020?

Considering what we’ve discussed so far, the optimal environment should:

  • be flexible and allow customization
  • support all known test frameworks
  • allow you to run tests in environments provided by Docker images
  • enable you to test on real devices and browsers in parallel
  • be scalable and adaptable to fit your future needs
  • let you use AI-powered testing tools
  • meet the security requirements of your business

Before you can choose the appropriate environment, you need to analyze your requirements – also take your potential future needs into account! You must remain flexible and scalable to survive in the ever-changing technical landscape.

For browser testing, you can utilize our more than 2,000 real browsers and mobile devices to perform manual, visual, and automated Selenium tests in the cloud.

For mobile testing, you could utilize our friends at Bitbar for the most flexible real device cloud. Execute your tests automatically on as many devices as needed and leverage their real device concurrency functionality. If you need your data to stay on-premises, they’ll help you build a modern local mobile device lab with their enterprise solution.

Filed Under: Development Tagged With: automated testing, browser testing, Selenium, testing strategy

Parallel Testing: What It Is and Why You Should Adopt It

December 13, 2019 By Victoria Bezsmolna

Our friends over at Bitbar recently wrote another fantastic blog about parallel testing, what it is and why you should adopt it. Take a look below!

The continuous integration and delivery methodology is a great way to keep you releasing high-quality products in the shortest possible time. All of this relies on continuous testing and requires a faster test cycle. While the traditional testing methodology – sequential testing – means a long queue of test jobs and longer time-to-market, parallel testing is the favored approach for higher test efficiency and quicker turnaround time in software deliveries.

What is Parallel Testing?

Parallel testing is an automated testing process in which developers and testers launch multiple tests against different real device combinations and browser configurations simultaneously. The goal of parallel testing is to resolve the constraints of time by distributing tests across available resources.

For example, if 20 test cases take a total of 100 minutes to complete, then 10 parallel executions could run 2 test cases each and bring the total testing time down to 10 minutes. Ideally, if you have sufficient resources, say 20 real mobile devices and real browsers for simultaneous execution of all 20 test cases, then you’ll be able to shrink the run time further, to 5 minutes.

On the other hand, sequential testing, or serial testing, is the opposite of parallel testing. In the above case, the 20 test cases are executed one after another, and you will need to wait 100 minutes for the test results.

Benefits of Parallel Testing

Now that we have scratched the surface of what parallel testing is and how it influences the test cycle, here are a few more reasons why you should consider testing in parallel.

Speed

Sequential testing is time-consuming, while parallel testing allows you to divide the invested time by the number of environments. To test your application against ten devices, all you need to do is write ONE script and run it against all your target devices, cutting your testing time by a factor of ten.

Cost-Efficiency

Building, maintaining, and keeping your own test environment up to date can burn a hole in your pocket. When it comes to parallel testing, maintenance isn’t a headache anymore — in fact, you lease the testing environment you need, always up to date. Plus, cloud-based testing grids allow you to run tests at high concurrency, making the cost per test significantly lower.

Better Coverage

It’s always a good idea to run your application through as many platform-device-browser combinations as possible so that no bug sneaks in. Parallel testing will take your test coverage to the next level, giving you a significant ROI boost.

Optimization of Your CI/CD Processes

Parallel testing is the best friend of continuous integration and delivery. By testing in parallel, you can run tests as soon as developers submit new code updates throughout the entire SDLC. Timely reporting and quick feedback in parallel testing will also facilitate better communication between various departments.

Improvement of Testing Practices

Parallel testing improves the QA routine in your company. The reason is crystal clear: by testing at high speed, you can test more. This gives your QA team a chance to improve their testing practices and pinpoint bugs faster.

Gradual Transition from Sequential Testing

Transitioning your QA regime from sequential to parallel testing is a huge undertaking if you try to do it all at once. Starting small might be your best bet. You can begin with the test cases that are most adaptable to the parallel testing environment, giving your engineers enough time to adjust the rest of the tests.

Tips for Successful Parallel Testing

Many companies switch to parallelism only in some instances and avoid parallelizing all their QA processes. Why? Infrastructure limitations, data dependencies, poor test data management, and hard-coding are the most common constraints. The good news is there are ways to address them and properly reap all the benefits of parallel testing.

Test in a Cloud-Based Environment

Implementing a parallel testing strategy using in-house resources is one of the most typical mistakes. Building and maintaining your own infrastructure is not efficient. It is not just time-consuming and costly – you won’t be able to make the most of parallel testing, where the ability to test at high concurrency is a major advantage. Also, keeping your testing environment up to date requires a lot of resources. To this end, it’s a good idea to turn to cloud-based services that allow you to access the necessary devices at any time.

Avoid Dependencies Between Tests

Dependencies between different test cases are a primary reason why transitioning to parallel testing is so challenging for many teams. Simply put, when test cases are dependent on each other, you have to run them in a particular order, which can destroy any parallel testing strategy. So, it is critical to create your test cases to be atomic and independent of each other. Only then will you be able to run them at any time and in any order, making your testing processes free of constraints.
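
As a minimal illustration of what “atomic and independent” looks like in practice, the pytest sketch below gives every test its own freshly created data and cleans it up afterwards. The in-memory “account” store is only a stand-in for your real application or API.

```python
import uuid

import pytest

# A tiny in-memory stand-in for a real application API, so the sketch runs
# on its own; in practice the fixture would call your actual service.
ACCOUNTS = {}

def create_account(name):
    account_id = uuid.uuid4().hex
    ACCOUNTS[account_id] = {"name": name, "closed": False}
    return account_id

def delete_account(account_id):
    ACCOUNTS.pop(account_id, None)

@pytest.fixture
def account_id():
    # Each test creates its own account and cleans it up afterwards, so tests
    # can run in any order, or in parallel, without depending on each other.
    new_id = create_account(name=f"test-{uuid.uuid4().hex[:8]}")
    yield new_id
    delete_account(new_id)

def test_rename_account(account_id):
    ACCOUNTS[account_id]["name"] = "renamed"
    assert ACCOUNTS[account_id]["name"] == "renamed"

def test_close_account(account_id):
    ACCOUNTS[account_id]["closed"] = True
    assert ACCOUNTS[account_id]["closed"] is True
```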

Avoid Hard-Coding

Hard-coding is embedding data directly into the source code instead of generating or loading it at runtime. It is an enemy of efficient parallelization since it creates the kind of dependencies between test cases mentioned above. It is strongly recommended to avoid hard-coding values when scripting your test cases, to ensure that each of your tests is self-sufficient and can be run whenever necessary. Use a data-driven approach to configure your test cases when they run in parallel.
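
Here is a minimal data-driven sketch using pytest’s parametrize feature. The login helper and credentials are purely illustrative stand-ins for a real UI or API step, and the data table could just as easily be loaded from a file or environment configuration instead of living in the script.

```python
import pytest

# Test data lives outside the test body instead of being hard-coded into the
# steps, so the same test can run anywhere, in any order, and in parallel.
LOGIN_CASES = [
    ("standard_user", "standard_pass", True),
    ("locked_user", "standard_pass", False),
    ("standard_user", "wrong_pass", False),
]

def attempt_login(username, password):
    # Stand-in for a real UI or API login step, used purely for illustration.
    return username == "standard_user" and password == "standard_pass"

@pytest.mark.parametrize("username,password,should_succeed", LOGIN_CASES)
def test_login(username, password, should_succeed):
    assert attempt_login(username, password) is should_succeed
```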

Manage Test Data Efficiently

Efficient test data management is the key to successful parallel test execution. But first, you need a strategy. There are two components you might want to add to it: a creational strategy (which creates the test data you need) and a cleanup strategy (which cleans it up afterwards). Still, the only thing that matters is that your strategy is adapted to your particular case. The following ones are quite basic, and perhaps that’s what makes these data management strategies so effective:

  • Elementary approach, which has no creational or cleanup strategy
  • Refresh-your-data approach, which resets your data source between test executions but has no creational strategy
  • Selfish data generation approach, which has a creational strategy but no cleanup strategy

These are the most basic strategies. You can mix and match them to serve your own case or explore alternatives, such as generating other data or refreshing specific data.

Build Tests to be Run in Parallel

You don’t want to end up with a bunch of tests that can’t be run in parallel right before the release date. Planning correctly from the get-go will save you from nasty surprises when a deadline is due. Creating your test cases with parallelization in mind should be your way to go. And believe us, it sounds harder than it actually is: test cases created for running in parallel are simpler, shorter, and faster to build.

Conclusion

Parallel testing can save your team from delays in delivery without compromising the quality, which makes it superior to sequential testing.

With parallelization, you can cut your QA expenses, run cases at high concurrency, optimize your CI/CD processes and constantly improve your scripts to get more accurate results. At the same time, all these advantages are out of reach without a correct strategy. Testing in a cloud-based environment, avoiding hard-coding, eliminating data dependencies and scripting with parallelization in mind will help you enjoy all the benefits of parallel testing.

Filed Under: Test Automation Tagged With: automated testing, continuous testing, parallel testing

Hands-On UI Testing With Python FAQ

August 22, 2019 By Andy Knight

On August 14, 2019, I teamed up with SmartBear to deliver a one-hour webinar about Web UI testing with Python! It was an honor to work with Nicholas Brown, Digital Marketing Manager for CrossBrowserTesting at SmartBear Software, to make this webinar happen.

Check out the On-Demand version of the webinar!

In the webinar, I showed how to build a basic Web UI test automation solution using Python, pytest, and Selenium WebDriver. The tutorial covered automating one test, a simple DuckDuckGo search, from inception to automation. It also showed how to use CrossBrowserTesting to scale the solution so that it can run tests on any browser, any platform, and any version in the cloud as a service!

The example test project for the webinar is hosted in Github here: https://github.com/AndyLPK247/smartbear-hands-on-ui-testing-python

I encourage you to clone the Github repository and try to run the example test on your own! Make sure to get a CrossBrowserTesting trial license so you can try different browsers. You can also try to write new tests of your own. All instructions are in the README. Have fun with it!
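
For reference, a condensed sketch of the kind of test built in the tutorial looks roughly like the following. It assumes a local Chrome/chromedriver setup, and the DuckDuckGo element ID shown matches the one used at the time of the tutorial, so it may have changed since; the repository above remains the authoritative version.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


@pytest.fixture
def browser():
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver install
    driver.implicitly_wait(10)
    yield driver
    driver.quit()


def test_duckduckgo_search(browser):
    browser.get("https://www.duckduckgo.com")
    # Element ID as used in the tutorial era; DuckDuckGo may have changed it.
    search_box = browser.find_element(By.ID, "search_form_input_homepage")
    search_box.send_keys("panda" + Keys.RETURN)
    assert "panda" in browser.title.lower()
```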

The Q&A

After the tutorial, we took questions from the audience. Here are answers to the top questions:

How can we automate UI interactions for CAPTCHA?

CAPTCHA is a feature many websites use to determine whether or not a user is human. Most CAPTCHAs require the user to read obscured text from an image, but there are other variations. By their very nature, CAPTCHAs are designed to thwart UI automation.

When someone asked this question during the webinar, I didn’t have an answer, so I did some research afterwards. Unfortunately, it looks like there’s no easy solution. The best workarounds involve driving apps through their APIs to avoid CAPTCHAs. I also saw some services that offer to solve CAPTCHAs.

Are there any standard Page Object Pattern implementations in Python?

Not really. Mozilla maintains the PyPOM project, but I personally haven’t used it. I like to keep my page objects pretty simple, as shown in the tutorial. I also recommend the Screenplay Pattern, which handles concerns better as test automation solutions grow larger. I’m actually working on a Pythonic implementation of the Screenplay Pattern that I hope to release soon!
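
To show what I mean by a simple page object, here is a rough sketch. The URL and locator are illustrative and mirror the kind of search page used in the tutorial; the point is only that locators and actions live in one place, so tests read as intent rather than raw WebDriver calls.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


class SearchPage:
    """A deliberately simple page object: one class per page, holding its
    locators and the actions a test can perform against it."""

    URL = "https://www.duckduckgo.com"
    SEARCH_INPUT = (By.ID, "search_form_input_homepage")  # illustrative locator

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get(self.URL)

    def search(self, phrase):
        self.driver.find_element(*self.SEARCH_INPUT).send_keys(phrase + Keys.RETURN)
```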

How can I run Python tests that use Selenium WebDriver and pytest from Jenkins?

Any major Continuous Integration tool like Jenkins can easily run Web UI tests in any major language. First, make sure the nodes are properly configured to run the tests – they’ll need Python with the appropriate packages. If you plan to use local browsers, make sure the nodes have the browsers and WebDriver executables properly installed. If you plan to use remote browsers (like with CrossBrowserTesting), make sure your CI environment can call out to the remote service. Test jobs can simply call pytest from the command line to launch the tests. I also recommend the “JUnit” pytest option to generate a JUnit-style XML test report because most CI tools require that format for displaying and tracking test results.
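
As a small illustration, a runner script like the one below (the file name and test path are assumptions) lets a CI job simply invoke it and then archive the resulting XML report.

```python
import sys

import pytest

# Minimal entry point a CI job can call, e.g. "python run_tests.py".
# The --junitxml flag writes a JUnit-style XML report that Jenkins and most
# other CI tools can pick up for displaying and tracking test results.
if __name__ == "__main__":
    exit_code = pytest.main(["tests/", "--junitxml=test-results.xml"])
    sys.exit(int(exit_code))
```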

How can I combine API and database testing with Web UI testing?

One way to handle API and database testing is to write integration tests separate from Web UI tests. You can still use pytest, but you’d use a library like requests for APIs and SQLAlchemy for databases.

Another approach is to write “hybrid” tests that use APIs and database calls to help Web UI testing. Browsers are notoriously slow compared to direct back-end calls. For example, database calls could pre-populate data so that, upon login, the website already displays stuff to test. Hybrid tests can make tests much faster and much safer.
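
Here is a rough sketch of that hybrid style. The API endpoint, page URL, and element ID are hypothetical placeholders, but the shape is the point: arrange data quickly through the API, then assert through the browser only where the UI itself is under test.

```python
import uuid

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical endpoints and element IDs, for illustration only.
API_BASE = "https://example.com/api"
SITE_URL = "https://example.com"

def test_new_item_appears_in_dashboard():
    # Arrange through the API: much faster than clicking through the UI.
    item_name = f"widget-{uuid.uuid4().hex[:8]}"
    response = requests.post(f"{API_BASE}/items", json={"name": item_name}, timeout=10)
    assert response.status_code == 201

    # Act and assert through the UI: only the behavior under test uses a browser.
    driver = webdriver.Chrome()
    try:
        driver.get(f"{SITE_URL}/dashboard")
        dashboard = driver.find_element(By.ID, "item-list")
        assert item_name in dashboard.text
    finally:
        driver.quit()
```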

How can we test mobile apps and browsers using Python?

Even though our tutorial covered desktop-based browser UI interactions, the strategy for testing mobile apps and browsers is the same. Mobile tests need Appium, which is like a special version of WebDriver for mobile features. The Page Object Pattern (or Screenplay Pattern) still applies. CrossBrowserTesting provides mobile platforms, too!
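
For a taste of what that looks like, here is a minimal Appium sketch in Python. The capability values, app path, and element name are placeholders; it assumes the Appium Python client and a running Appium server, and note that newer client releases expect an options object rather than a plain capabilities dictionary.

```python
from appium import webdriver  # provided by the Appium-Python-Client package

# Placeholder capabilities; fill in your own device, app, and server details.
desired_caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",
    "automationName": "UiAutomator2",
    "app": "/path/to/your/app.apk",
}

# Older Appium Python clients accept a capabilities dict directly; newer
# releases expect an options object (e.g. UiAutomator2Options) instead.
driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_caps)
try:
    # "accessibility id" is the locator strategy; the element name is hypothetical.
    login_button = driver.find_element("accessibility id", "login_button")
    login_button.click()
finally:
    driver.quit()
```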

Check out my blog for other tips and tricks when working with Python! AutomationPanda.com

Filed Under: Test Automation Tagged With: automated testing, influencer, python, Selenium

Selenium Academy 101 FAQ

August 7, 2019 By Nick Brown

With the conclusion of our very first Selenium Academy class, we wanted to make sure we armed you with answers to some of the most common questions that came up during our live Q&A session!

In our Selenium 101 class we started at the very beginning by explaining what Selenium is, why you will want to use Selenium for your UI testing and especially why you will want to use Selenium with CrossBrowserTesting. But let’s take a look at some of the most common questions that came up during last week’s webinar.

What is the difference between a Selenium grid and CrossBrowserTesting?

While you can make your own Selenium grid with a costly device lab or VMs, many have found moving their Selenium grid to a third-party cloud like CrossBrowserTesting is faster and more productive, allowing more tests to run on more machines in less time. With over 2050 browsers to choose from, the possibilities of parallel test execution with CrossBrowserTesting are almost limitless. Our browser, device, and operating system selection includes all the latest configurations as well as legacy browsers so you can test what your users are on without worrying about any of the associated costs or upkeep. There are certain real costs of building an in-house environment as well as countless other secondary costs of not going with a cloud provider, like CrossBrowserTesting.

Selenium is open source, right? Why would we need to buy CrossBrowserTesting?

We can give you 8 good reasons!

  1. We make it easier to get started.
  2. Just install Selenium bindings for your programming language of choice and be done!
  3. Eliminate the work of managing and maintaining Selenium Servers, WebDrivers, and Browsers
  4. Screenshots, videos, and even network data can be captured during a Selenium test and shared with anyone
  5. Higher concurrency means faster execution
  6. Run dozens of tests at once without having to run dozens of RAM-hungry browsers
  7. Spend less time waiting and more time doing what matters
  8. Instant access to thousands of browsers and devices, including mobiles

What languages are supported with Selenium?

C#, Java, JavaScript, PHP, Python and Ruby

Which language is the easiest to get started with?

When talking with users about what language is easiest to get started with, we always recommend Python. The reason we recommend it for beginners is because it reads like English and tends to be easier for users to understand when getting started. Take a look at our blog on the Best Programming Language to Learn First.

What’s the best way to set up tests to run within a build pipeline using Jenkins?

There are only two things you need to be able to run your Selenium tests from a Jenkins pipeline. You need a shell script or batch file that Jenkins can call to start your test, and Jenkins needs access to any libraries or modules that you use during the test – including the Selenium language bindings. If you are using Python, you can use virtualenv to create a virtual environment with Selenium already installed. If you are using Java, make sure to include Selenium in your pom.xml file and run your Maven build.

Does Selenium have the ability to open two webpages at the same time?

Yes, Selenium can be used to easily manage multiple pages at the same time. You can use the built-in command “switch_to_window()” documented here: Learn More Here
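
Here is a minimal Python sketch of working with two windows. Note that the current Python bindings expose the switch as driver.switch_to.window(handle), with switch_to_window() being the older spelling; the URLs are placeholders.

```python
from selenium import webdriver

driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver install
try:
    driver.get("https://example.com")
    # Open a second page in a new window via JavaScript.
    driver.execute_script("window.open('https://example.org', '_blank');")

    first_window, second_window = driver.window_handles

    # Current Python bindings switch windows through driver.switch_to.
    driver.switch_to.window(second_window)
    print("Second page title:", driver.title)

    driver.switch_to.window(first_window)
    print("First page title:", driver.title)
finally:
    driver.quit()
```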

Is it recommended to use find element by xpath always?

Quite the opposite. Using XPath to find elements, especially absolute XPaths, tends to lead to very brittle tests. Any small change in the hierarchy around the element you are locating could cause the wrong element to be selected, and those bugs in your script can sometimes be hard to debug. You should save yourself the trouble and use element IDs as much as possible.

So, find_element_by_id is the best function?

Yes, it’s the most resilient to change and the easiest to use. If you can, add human-understandable IDs to any elements of your page that you want to automate.
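
To make the contrast concrete, here is a small sketch. The page and element names are hypothetical, and the commented-out absolute XPath is the kind of locator to avoid; older Selenium bindings exposed the same lookup as find_element_by_id.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page for illustration

    # Brittle: an absolute XPath breaks as soon as the surrounding markup changes.
    # submit = driver.find_element(By.XPATH, "/html/body/div[2]/form/div[3]/button")

    # Resilient: a stable, human-readable ID survives layout changes.
    submit = driver.find_element(By.ID, "submit-login")
    submit.click()
finally:
    driver.quit()
```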

For AngularJS-based applications, do you have any specific built-in functionality?

CrossBrowserTesting does not provide specific functionality to test websites built with AngularJS, but CrossBrowserTesting does work with Protractor – a Selenium-based framework created by Google for testing AngularJS. We have a short guide on getting started here: Help Doc for Protractor

How well does protractor work with just regular Angular?

It works great! That’s what it’s made for. If you run into any problems, feel free to reach out to our support team and they’ll help get you up and running.

Want to register for our 201 class on August 27th? Register now!

Filed Under: Test Automation Tagged With: automated testing, automation, FAQ, Selenium, selenium 101

Using DevOps Practices to Improve Productivity and Happiness

June 19, 2019 By Harold Schreckengost

DevOps has become such a common topic of discussion that it almost sounds like a buzzword at times. But there are some really great ideas that live under the DevOps umbrella; you just need to peel back the onion a little. Even if you are tired of hearing people say, “We use JIRA, we’re Agile now!”, I believe it’s worth checking out.

Look, I get it, this is probably the third or fourth silver bullet you’ve been promised. This isn’t a silver bullet, but more of a roadmap to help you make the right choices for yourself and your team.

The core of DevOps can be broken down into three categories:

  • Reducing the risk of a failure – If we can reduce the chance of something failing, that’s less time we spend fixing it, less cost associated with the fix, and less frustration because of the downtime.
  • Reducing the impact of a failure – If a failure can wipe out the entire system, we have to be exceptionally careful to make sure that nothing ever fails. But we all know that no matter how careful we are, we’re going to see a failure at some point. If we can reduce the impact of a given failure, we can see the same benefits as reducing the risk.
  • Reducing toil – Toil is mindless, repetitive work. No one likes doing mindless work. By reducing toil, people are able to spend more time on things that they’re uniquely qualified for, allowing for more impactful improvements for the product and business.

Getting Started

In general, just because you’re following “best practices” doesn’t mean you’re actually gaining much value from them.  When you’re evaluating a process change, consider whether it will provide value to you and your team.  By going through this process, you can evaluate the pros and cons to ensure it’s the best process moving forward.  At the end of the day, you want to make your team better. Wasting time building out a process that doesn’t help is inefficient.

Automated Tests

Automated tests can be a simple place to get started; they can help your developers become more confident in what they’re writing, without having to invest significant time in verifying every change they’re making. This can increase the efficiency of those developers, allowing them to do fewer repetitive tasks (reducing toil) and reducing the risk of a failure. A common side effect is that code ends up more thoroughly tested, because once testing is low effort, people are more willing to do it.

Automated Deployments

Everyone has that one bit of software that is awful to deploy to production – maybe it takes a long time, maybe it’s error prone, maybe it’s just a lot of steps to try to get through. By automating a deployment, rather than requiring it be done by hand every time, you can reduce the risk of a failure (a well-crafted automated process will be consistent and not accidentally fat-finger a key) while reducing the time investment to deploy.

Cattle Over Pets

This is one of the biggest things to work towards, in my opinion. Treating a system as replaceable, rather than as something we have to take special care of, means that we don’t have to worry about what will happen if a specific failure occurs. We can toss it out, rebuild on the spot, and be on our merry way again. So if something does fail, it’s much less likely to be a truly catastrophic failure, and the process of fixing it is the same as setting up the box to begin with – reducing impact and reducing toil.

Continuous Integration and Continuous Delivery

Continuous Integration and Continuous Delivery serve slightly different purposes but are very much related to each other at the same time. Both stem from the same principle: by doing work in small, concrete chunks, we have an easier time making sure those changes are valid, and we have an easier time deploying small, concrete changes over monolithic, far-reaching changes. I’ve heard both of these referred to using the idea of a blast radius – even a lot of smaller changes won’t be enough to completely destroy something, but one big change can be enough to bring down pretty much anything. Judicious use of both CI and CD can help reduce risk, reduce the impact of a change, and reduce toil.

Where to go from here

A common theme for a lot of these changes is that they are designed around making it easier for developers to do the right thing. Build it and they will come. Build the tools that make peoples’ lives easier, and they will use them.

The benefits of testing earlier (“shifting left”) and more often are immense – if a bug caught early in the development cycle costs $50, for example, one that makes it into production can easily cost 10 times as much, and one that lives in production for a while just gets more and more expensive as time goes on.

Take inspiration from what other people are doing but realize that you don’t have to be held to that. Just because something works for someone else doesn’t mean it will work for you, and it probably won’t if you don’t understand the problem they were able to solve with a tool. As long as you’re making things better, that’s the important part.  It doesn’t have to be scary.

Filed Under: DevOps Tagged With: automated testing, Continuous Delivery, Continuous Integration, devops

Why You Shouldn’t Skimp on Both Functional and Visual UI Testing

December 12, 2018 By Alex McPeak

There are many components to a successful web testing strategy, but one of the most often overlooked is the importance of visual UI testing in addition to functional testing.

Most teams will focus on one over the other, but to truly catch as many bugs as possible, you’ll need to incorporate both. First, you need to understand what the difference is and why they’re both needed.

Functional Testing

Let’s start by going over what functional testing actually is. Functional testing examines how the software actually works in relation to the given requirements. For example, a homepage on an e-commerce web application might have a menu button; if clicked, it’s expected to drop down with different options.

Functional testing is executed on tests of all sizes from unit tests to end-to-end cases, and it’s critical for making sure that user flows are working as intended and consistent with the product manager’s vision.

So for the same example with the menu button, you may also check to make sure that clicking one of the options in the drop-down takes you to the correct page. You can continue to perform a functional test case that includes adding an item to your cart and checking out, to ensure each of those actions works one after another and allows the user to successfully make a purchase.

To dive a little deeper, functional tests are basically actions and validations — the tester or tool performs an action with an expected validation, which either works or doesn’t work.

Most of the time, when we talk about functional testing, we’re referring to automated testing where we’re getting a pass or fail test result. When thinking about cross-browser and device testing, functional testing should be included after deciding which browsers and devices to test in order to make sure those tests pass across different configurations.

If you’re in development, design, or QA, this information probably isn’t news to you. However, differentiating between functional and visual testing, and understanding when each is needed, will inform a more intelligent testing strategy.

Visual Testing

While it’s clearly important to test the functional elements of your application, some teams will make the mistake of doing the bulk of their testing to check functional efficiency with little or no regard for visual validation.

Visual testing requires the tester to consider how the application looks in addition to how it works. Oftentimes, the UI of an application can pass a functional test while design elements that are visually defective go under the radar.

This becomes extremely important as we look at responsive design and consider the myriad of different devices, browsers, and operating systems. If an application is not built to be responsive, design elements may suffer on different screen sizes, compromising the user experience. This can go unnoticed if just functional testing is performed.

Visual bugs can be annoying or unpleasant to users, but they can go beyond inconvenience to become more severe. In fact, visual inconsistencies can affect the user journey in a way that makes it difficult or even impossible for users to complete their intended actions, even though the application passes its functional tests.

You can see the spectrum of severity when it comes to visual bugs in the following images from the Baymard Institute. In the screenshot of Amazon’s website, the text overlay has not rendered properly and is hard to read. In the example with the form field, visual issues make it difficult to fill out.

Photo via Baymard Institute

 

Photo via Baymard Institute

The issue may be that teams don’t have the time or capacity to do visual testing, or that they don’t prioritize it, but there are many organizations that are simply unaware that it should be an integral part of their strategy.

Automating Functional and Visual UI Testing

As mentioned, functional UI testing is most often done with a test automation tool, such as Selenium or Record & Replay, that will run your test in multiple browsers and give you a pass or fail result to tell you whether the application is working as intended.

However, this doesn’t mean you have to do visual testing all by hand and manually compare your website in different browsers. So how do you speed up visual testing?

A tool like CrossBrowserTesting can be used for visual UI testing to take automated screenshots across configurations. With the screenshot engine, you can compare a page on different browsers and devices side-by-side with your baseline configuration and evaluate highlighted layout differences that let you know where there may be bugs.
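
CrossBrowserTesting’s comparison engine does this work for you, but as a rough illustration of the underlying idea, here is a minimal pixel-diff sketch using the Pillow library. The screenshot file names are placeholders, and it assumes both images have the same dimensions.

```python
from PIL import Image, ImageChops  # Pillow

# Hypothetical file names: a baseline screenshot and one from another browser.
baseline = Image.open("homepage_chrome_baseline.png").convert("RGB")
candidate = Image.open("homepage_safari.png").convert("RGB")

# Pixel-by-pixel difference (both images must be the same size); getbbox()
# returns the region that changed, or None if the two images are identical.
diff = ImageChops.difference(baseline, candidate)
changed_region = diff.getbbox()

if changed_region is None:
    print("No visual differences detected.")
else:
    print("Layout differs inside bounding box:", changed_region)
    diff.save("homepage_diff.png")  # inspect the highlighted differences
```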

Since your team will probably be adding new features and changing the UI from time to time, you can also look at historical versions of your application for regression testing with an integration like Applitools Eyes.

Additionally, to ensure that your web application is put through regular functional and visual testing, you can also schedule tests in a multitude of ways — through Jenkins with Selenium, or with scheduling for Record & Replay or automated screenshots.

By implementing automation practices and incorporating visual testing into your existing strategy, your team can achieve more testing coverage without wasting any time. And your release cycles won’t be a guessing game.

In Summary…

  • Make sure your application works correctly and looks great
  • Visual bugs can be a mild inconvenience, or they could prevent your users from completing a crucial task
  • Ensure a positive overall user experience by testing across different browsers and devices
  • Leverage tools to automate both functional and visual testing for faster feedback

Filed Under: Visual Testing Tagged With: automated testing, functional testing, screenshots, visual testing

Introducing Scheduling for Record & Replay and Automated Screenshots

November 13, 2018 By Alex McPeak

This year, we introduced Record & Replay for teams that want to automate tests without scripting. This method of codeless automation has allowed teams to maximize the value of every manual test by reducing execution time while increasing speed and coverage. And just like with Selenium testing, once your tests are recorded in Record and Replay, you can run them in parallel to get even more out of each suite.

Additionally, when paired with automated screenshots, teams of any skill level have the ability to visually and functionally verify their web application on thousands of browsers at once. By leveraging our Comparison Engine, you can take visual testing to the next level by evaluating differences that are highlighted at the pixel level.

Now with Scheduling for Record & Replay and automated screenshots, you can set tests to run on a daily, weekly, or monthly basis and have the results sent to you via email or Slack. Scheduling is the next step in helping our customers achieve continuous testing without needing to learn highly technical CI tools like Jenkins, TeamCity, and VS Team Services.

If you’re thinking this sounds too good to be true, find out how easy it is to get started with Scheduling in CrossBrowserTesting and see for yourself.

How to Get Started With Automated Test Scheduling

For Record and Replay:

  1. Go to Replay a Suite under Record & Replay in the CrossBrowserTesting app
  2. Select one of your suites. Note: You need a previously saved suite in order to schedule a Record & Replay test. Follow these directions to find out how to create a Record & Replay suite.
  3. Scroll down to “Suite Schedules” and click “Add Schedule”
  4. Decide if you want to run the suite on a daily, weekly, or monthly schedule and set a time for it to run. Choose if you want notifications via email or Slack and select “Add Schedule”

For Automated Screenshots:

  1. Go to View Test Results under Screenshots in the CrossBrowserTesting app
  2. Select which screenshot test you’d like to schedule and click “Test Results”. Note: You need to have previously run a screenshot test in order to schedule one. Follow these directions for how to take an automated screenshot of your website.
  3. Click “Add Schedule” and set the test to run daily, weekly, or monthly at a time of your choosing. Select whether you want notifications through email or Slack and click “Add Schedule”

That’s it — just a few steps to functionally and visually testing your UI on a continuous basis with no code.

If you want to learn more about Scheduling for Record & Replay and automated screenshots, join us in our upcoming webinar to learn just how easy it is to start scheduling your tests today. We’ll run through the new feature and give example use cases of Scheduling that you can immediately start implementing in your workflow.

We’ll also go through CI with Jenkins to further our pursuit of continuous testing in CrossBrowserTesting for teams that may be interested in Selenium automation.

Register for the webinar now.

Filed Under: Test Automation Tagged With: automated testing, record and replay, scheduling, screenshots

6 Questions to Ask Yourself Before You Decide What Test Cases to Automate

June 1, 2018 By Alex McPeak

Even just the idea of agile test automation is enough to get the attention of most software teams. But while the idea of faster testing may seem like the silver bullet, there’s a time and place for automation.

Before you jump in head first, you should evaluate how automation is actually fitting into your organization. By asking these questions, you can get a better idea as to whether automation is the right move, or if you should take another route.

  1. What’s the risk involved? If you’ve performed a risk analysis of your application, you probably have a good idea of which test cases are the most important to check with each code change and which are a lower priority. Consider how human error could affect test data if a test is not automated. Think about which test cases may critically impact the customer and/or cause internal team downtime if broken. Additionally, you might want to automate high-traffic pages, features that depend on security, and problematic areas and other functions that rank high in your risk analysis.
  2. How often am I going to run this test again in the future? Running the same test more than once may seem repetitive and tedious, but if you’re not actually going to be running it frequently, it might not be worth it to automate. For example, if you know you’re going to have to test a feature two or three times but won’t be keeping it as a permanent part of the application or won’t be testing it after that, you might want to rethink automation. The tests you decide to automate should be a foundation of your regression testing strategy, so if you don’t feel like they’re a critical part of the application to be checked in every code change, it may be best to keep them separate.
  3. How much time am I willing to invest? When automation requires so much time and maintenance, you have to determine the ROI of your efforts. Some regression tests may seem like they’re a candidate for automation but upon closer inspection, might not be worth the energy for the value they’re providing. Automation is not a quick, one-and-done solution to your problems. Take a lesson from Elon Musk — you don’t want to drop automation into the mix without a second thought. Spend the time making sure your first test is checking all the boxes and meeting requirements. Then determine how often it will need to be updated or reevaluated in order to establish whether or not automation is worth your time investment.
  4. Are there parts of this test case that still need to be explored? You might think that you’re ready to automate, but keep in mind that automation comes after a thorough exploratory testing session. Just because you can automate doesn’t mean you should, and many tests are better off being performed manually. Are there still instances that could benefit from human observation? While you can’t expect to manually cover every part of the application, getting a foundation during exploratory testing is what leads to more successful, stable, and insightful automated tests.
  5. Does automating support the needs of my team? If you’re automating a test case, you want to make sure the tests you write are going to actually answer the questions your team is asking. First, take a hint from Bas Dijkstra, and discuss the “why” before firing up a test automation tool. While Selenium is a great tool for automation, understanding the nuances of the many commands and abilities of it can take some getting used to, so if you’re not able to write a script that encompasses all of the requirements your team is requesting, another method of testing may be better. If you still feel that automation is the best process for the job but don’t feel confident in your Selenium skills, a tool like Record & Replay may help you achieve your testing goals. There are many choices when it comes to automated testing tools, so you need to make sure the one your organization uses will support the testing you’re trying to accomplish.
  6. Why are you automating? Sometimes the most simple questions can give you the most telling answers. If you’re automating because you think it’s what you should be doing or because your boss is telling you to, you might not be automating your tests for the right reasons. Your objective with test automation should be to provide insight to your team and give faster feedback. If you have ulterior motives, your efforts won’t take you very far.

Conclusion

Even if you could automate every test, you would quickly find it’s not helping you or the rest of your team very much. Next time you’re deciding on whether or not to automate, ask the right questions first.

Filed Under: Test Automation Tagged With: automated testing, test automation, testing tools
