CrossBrowserTesting.com

A Design, Development, Testing Blog


Manual Cross-browser Testing for Beginners Part 2: Tools and How to

April 23, 2020 By Sarah Mischinger

Welcome back to the second part of our manual cross-browser testing tutorial for beginners! In the last part of this series, we talked about what cross-browser tests are, why you need to conduct them, and what different types of tests we know. If you haven’t read the first part yet, we encourage you to do so now.

Today, we’re going to delve deeper into the How of cross-browser testing, and also take a look at some prerequisites and tools that help you test more efficiently. So, without further ado, let’s get started!

How to choose browsers for testing

In our previous blog post, we learned that cross-browser testing can help ensure that your web application looks and works perfectly on different browsers, browser versions, and devices. But let’s take a second to think about all the different combinations you could theoretically test. It’s hardly possible to test on every browser and device out there! Hence, you should spend some time figuring out which combinations to check.

First, you need some usage data to find out which browsers and browser versions are the most popular in the market, as well as which ones your visitors use to reach your website.

You usually have two options here:

  1. Analyze publicly available data on websites like this one
  2. Analyze existing usage data of your web app (e.g., from Google Analytics)

Don’t forget to include mobile devices and their browsers in your research! 

Use the data found to create a browser matrix that everyone on your team can understand and use when cross-browser testing. For this, create an Excel or Numbers spreadsheet that contains at least the following information:

  • Browser name
  • Browser version
  • OS
  • Device
  • Support (yes, no, or partially)

Depending on how many browsers you’ve listed for testing, you may need to prioritize them. Especially if you’re testing manually, you probably won’t be able to check every browser and device on your list when time is short. Therefore, you should prioritize the entries in your browser matrix according to popularity, so that you can ensure that the majority of your users can enjoy a thoroughly tested web app.
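To make the prioritization step concrete, here is a minimal sketch in Python. The usage-share figures below are invented for illustration (substitute data from your own analytics or a public source); sorting by share yields the order in which to test:

```python
# Hypothetical browser matrix; the "share" column is made-up usage data.
browser_matrix = [
    {"browser": "Firefox", "version": "75", "os": "Windows 10", "share": 4.6},
    {"browser": "Chrome",  "version": "81", "os": "Windows 10", "share": 64.1},
    {"browser": "Edge",    "version": "81", "os": "Windows 10", "share": 2.1},
    {"browser": "Safari",  "version": "13", "os": "iOS 13",     "share": 17.7},
]

# Highest usage share first: that's the testing priority order.
prioritized = sorted(browser_matrix, key=lambda row: row["share"], reverse=True)

for rank, row in enumerate(prioritized, start=1):
    print(f'{rank}. {row["browser"]} {row["version"]} on {row["os"]} ({row["share"]}%)')
```

The same ordering works just as well in a spreadsheet; the point is simply that the priority column should be derived from real usage numbers rather than gut feeling.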

Set up your ideal test infrastructure

After defining your browser matrix, you need to find a way to access all of these different browsers and devices. Here, you also have various options to choose from:

In-house device and browser lab

Your company may already have an internal lab that offers everything you need. If you are considering building a lab on-premises, first make sure that enough resources are available to purchase and maintain the necessary components of your infrastructure.

Emulators and simulators

To check your web app in mobile browsers, you can install and run emulators and simulators on your machine. These tools mimic real devices and are a great choice if you want to review your app during development. However, they are not the “real deal” and can’t completely replace testing on real devices. If you want to learn more about the pros and cons, read this blog post from our friends at Bitbar.

Virtual machines

You can install a virtual machine (VM for short) on your computer and use different images to run foreign operating systems. So, for example, if you are a Mac user and need to test your web app on Microsoft Edge, you can run Windows 10 in a virtual machine. Yet, VMs take up a lot of disk space – the more OS versions you need to run, the more images and space you need. Also, your computer may struggle with running another OS, which can cause severe lag, among other things. Therefore, this option may not be suitable for you.

Online services

The most promising option for most testers is to use an online-based test service that provides all the browsers, browser versions, and devices you need to test. Typically, these services are subscription-based but can save you a lot of resources and headaches since you don’t have to purchase and maintain the infrastructure and tools yourself.

How to conduct manual cross-browser tests

After you’ve set up a suitable infrastructure to access all of the combinations required for testing, it is time to think about the actual testing process. Let’s find out what usually works quite well:

Set a baseline for comparison

First of all, you should test your app in a modern, reliable and popular browser like the latest stable version of Google Chrome. This way, you can find and fix some of the more apparent bugs and get a clear idea of how the app should look and work. If you then test in other browsers, you can compare and evaluate the results using this baseline.

Create a cross-browser test strategy

You need a document that contains guidelines for cross-browser testing that everyone should work with. If you don’t have such a plan on hand, you may forget to test critical functions or follow different practices than your colleagues. As a result, you may invest a lot of time in tests that lead to unsatisfactory results.

In this test strategy document, describe what should be tested, how often it must be tested, and what should not be tested. Use your product’s test specification or create one if necessary. Additionally, you should include your browser matrix and add some information about the infrastructure used. Read our blog post “Real Device and Browser Testing Strategy for 2020” to learn more about what to include here.

Execution

After completing all of the above steps, you can finally get the show on the road and execute your tests manually according to your test strategy!

Tools that make manual cross-browser tests more efficient

Since manual testing can lead to bottlenecks that hinder the timely release of your web applications, we at CrossBrowserTesting have developed some tools that help you to test more efficiently:

Live testing 

The Live Testing tool lets you perform interactive manual and exploratory tests on more than 2050 desktop and mobile browsers. You get access to browser extensions and inspection tools that you can use to debug your CSS, HTML, and JavaScript code.

Visual Testing

Don’t forget to check what your application looks like in addition to how it works – especially when it comes to your responsive website design. With our Visual testing tool, you can take full-page, chromeless, or above-the-fold screenshots of your web app’s layout on different devices at the same time. We also offer you an API that you can use to schedule screenshots.

Record and Replay

Would you like to get test automation into your test mix without having to write a single line of code? Then our Record and Replay tool is the perfect choice to eliminate the pain of manually executing repetitive test cases. Record your test flow and replay it in parallel on all desired browsers to shorten test execution time while increasing browser coverage.

Conclusion

In the second and last part about manual cross-browser testing for beginners, we gave you an idea of how to run cross-browser tests, what you need to do in advance, and which tools can make testing more efficient.

We hope that we’ve been able to provide you with some useful and valuable knowledge that will help you get a flying start with manual cross-browser testing! 

Filed Under: Manual Testing Tagged With: cross browser testing, manual testing, record and replay, tutorial, visual testing

Manual Cross-browser Testing for Beginners Part 1: Concepts and Types

March 26, 2020 By Sarah Mischinger

You have probably noticed that automated testing receives a lot of love and hype right now. That’s because automated tests can accelerate test execution times and make room for other, more pressing tasks. However, manual testing is still very much relevant, as we cannot automate everything (not yet, at least). For this reason, we want to take a closer look at the craft with a two-part series about manual cross-browser testing for beginners. In this first blog post, we’ll talk about what manual testing is, why and when you need it, and what types of cross-browser testing need to be done manually.

What is cross-browser testing?

When you develop a website or web app, you most likely use one or two browsers for development, such as Chrome and Firefox. Maybe you resize the browser window to see if the component you are creating adapts correctly to different screen sizes.

However, do all of your users browse the web with your development browsers? And do they solely use their laptop or PC to do so? As you probably know too well, the answer is no. There are many different browsers and versions out there, and your users often use their tablets or mobile devices to visit your website. In fact, the use of mobile internet has increased by 504% since 2011!

By conducting cross-browser tests, you can ensure your web app works and looks splendid on many different browsers, browser versions, and devices. The goal is to offer as many users as possible a consistent and positive experience with your online product. 

Who performs these tests?

The answer to this question can vary from company to company since everyone has set up their own workflows with the available staff and resources. But let’s see who is most likely to run cross-browser tests:

  • Web Developers – Naturally, the people who implement the app under test are often best suited for the job. They already know what to look for, how to report and deal with errors, and are also capable of resolving any issues they encounter.
  • Quality Assurance teams (QA for short) – Big companies especially have set up a QA department that helps make sure products meet specific criteria and quality standards. Usually, the staff is trained to conduct tests and report bugs back to the developers.
  • Web/UI/UX Designers – The designer of your web app should always ensure that the result matches their crafted design. However, since they check the overall look and feel of the web app, they don’t need to test on every browser and device and often limit their testing efforts to one of each.

Note: In general, anyone can help test your app. This often depends on the type of test you want to conduct. For example, if you need to verify your app’s intuitiveness, you can invite a group of external people to represent your target audience and watch them use your app.

Why should you test your website or web app?

In recent years, websites have become more and more complex. In fact, we mostly refer to them as web apps these days – a term which people hardly used ten years ago. Software that users previously had to install on their computers has been transformed into online services, and the web is no longer just a provider of information.

Imagine visiting your favorite blog and seeing that components such as the primary navigation don’t work in your browser. You may be a little annoyed, but things could be worse. However, if you need to fill out an essential form on your government’s web portal, and the same bug occurs, things aren’t looking so rosy anymore. For better or worse, web apps are an integral part of our daily lives that we depend on.

Your web app users also expect your product to work flawlessly anytime, anywhere. That’s why you need to conduct thorough cross-browser tests! If you can’t provide users with a working app, they may be looking for an alternative. In the worst case, your company loses customers (and ultimately money).

When and what should you test manually?

There are two ways to carry out cross-browser tests: manually and automatically. Since we’re covering manual testing in this blog series, we’re not going to talk too much about test automation, but we’ll include the basics to give you an idea of the differences.

Manual cross-browser testing

Here, a tester interacts personally with the web app on all browsers and devices. For this, they can follow a pre-written list of things to test and report any errors they encounter to a system or developer (depending on the company’s workflows).

Automatic cross-browser testing

Testers write test scripts and run them automatically on all required browsers and devices by using automation tools such as Selenium WebDriver. We recently published a blog post about when and what to test automatically, which you can read here.
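To give a feel for how such a script fans out across a browser matrix, here is a minimal sketch. The capability keys mirror Selenium-style desired capabilities, but `run_check()` is a hypothetical stand-in: a real version would open a WebDriver session, drive the page, and assert on elements.

```python
# Browser/OS combinations to cover; keys mirror Selenium-style capabilities.
BROWSER_MATRIX = [
    {"browserName": "chrome",  "platformName": "Windows 10"},
    {"browserName": "firefox", "platformName": "Windows 10"},
    {"browserName": "safari",  "platformName": "macOS"},
]

def run_check(caps):
    # Hypothetical stand-in: a real test would start a WebDriver session
    # with these capabilities, load the page, and assert on its elements.
    return {"config": caps["browserName"], "passed": True}

# The same scripted check runs once per configuration in the matrix.
results = [run_check(caps) for caps in BROWSER_MATRIX]
failed = [r["config"] for r in results if not r["passed"]]
print(f"{len(results)} configurations run, {len(failed)} failures")
```

The win over manual testing is exactly this loop: adding a browser to the matrix costs one line, not another full manual pass.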

Naturally, manual testing takes a lot of time – especially if your app is quite complex and must be checked in many different browser versions. Test automation helps reduce the needed time and allows you to focus more on analysis, troubleshooting, and other more pressing matters.

However, manual tests are still required! Some tests are too complicated or computationally intensive for automation, but testers can efficiently perform them manually. Most security components like Captchas cannot be tested automatically either, because it would make no sense if someone could bypass these security mechanisms through automation.

Besides, test automation is overkill for some projects. If you are working on a small and relatively simple website, writing test scripts can take more time than running the test manually. And if your company doesn’t have qualified staff to write these tests, you may also need to opt for manual testing until the situation changes.

What types of manual cross-browser tests are there?

Let’s complete this first blog post on manual cross-browser testing by taking a quick look at the types of tests that you can conduct manually.

Exploratory testing

This type of test is always done manually, as a human must freely explore the app to find UX vulnerabilities and bugs. Even if you could achieve 100% test automation, you would still miss errors that only occur under certain unpredictable circumstances. With exploratory testing, you can find out how users interact with your app and how it responds in different situations. Yet, since you don’t have a checklist to guide you through testing, it can be challenging to know where to start or what to look for. That’s why we have created a playbook to help you discover some of the types of personas you should test for.

Visual Testing

Visual tests ensure that your app looks and feels good in different browsers, browser versions, and devices. Of course, you need to check if your CSS animations and JavaScript-driven components work appropriately under different circumstances. Additionally, designers and QA teams can ensure that the web app meets the expected quality and that the implementation matches the design.

Functional Testing

Looks aren’t everything, and the most stunning web app is worthless if UI components don’t respond as expected. As the name suggests, functional tests help you make sure that users can actually use your app. For example, you can test whether your contact form checks the email field correctly and whether the user can submit forms if the required fields are empty.

Note: While you can’t automate exploratory tests, teams can save time by automating repetitive test cases and tests with a predictable, specific result.

Conclusion

In this first part about manual cross-browser testing for beginners, we made sure to provide you with the basic knowledge of the craft. Now you should know about the different types of tests and who will perform them and when. We hope to see you again in the second part of this blog series, where we will talk about how to perform manual tests and what tools you can use to make the process more efficient.

Filed Under: Manual Testing Tagged With: beginner, cross browser testing, manual testing, tutorial

Move Fast, Break Things: How to Test the Limits of Your Web App

August 8, 2018 By Alex McPeak

When going through an application, it makes sense to perform some of the actions you would expect your users to perform. But it may also make sense to perform some of the actions you wouldn’t expect them to perform.

Your users are unpredictable, and the actions that cause complications with your application aren’t always what you would expect or even do yourself. Sometimes, you have to test the limits of your application in order to properly break it.

Clearing Misconceptions

Before we get started, we should make one thing clear: software testers don’t actually break things. Although the reference may be common and it’s a simple way to understand that a certain action resulted in a functional failure, it’s not an accurate way to think about testers in real life.

There’s a quote where James Bach says, “We don’t break the software. We break illusions about the software.”

The job of a software tester is to find where a product is broken. While something may work fine for the programmer who made it and uses it as intended, an application has countless places where bugs can hide and users can find them.

The mark of a good tester comes down to being able to report these bugs, help stakeholders identify and fix issues, and contribute to the release of a product that’s of a higher quality than it was when it started out. Users don’t always go down the “happy path,” so it’s important that testers consider these different scenarios because at the end of the day, finding a problem is a good thing.

So although we might refer to it as “breaking software,” it’s important to understand that it’s not the testers who are breaking the software, it’s the application that’s broken.

Negative Testing

Often in testing, we want to follow a user journey by performing the actions that get us to an end result. For example, if you fill out the fields in a “Contact Us” box, that information will be sent to the site owner so they can follow up. If the application is working correctly, this will give us a passing result.

This passing result is a good thing because it means that in an ideal scenario, the user will fill in the information being asked as they are supposed to. But it doesn’t necessarily tell us much about the edge cases — what happens when a user doesn’t use the application as intended or planned. What happens if a user leaves a field blank or puts in an incorrect email address?

When we think about trying to see where an application fails, we want to look at negative testing. Negative testing ensures that your application can handle invalid inputs or unexpected user behavior. This means that you’re thinking about where a user can venture off path, rather than just confirming the application works with normal behavior.

By negative testing, you can gain more insight into where bugs are hidden. And by knowing what’s broken, you can build out your risk analysis to come up with a strategy for more comprehensive testing. How many of these edge cases do you feel are likely for users to follow? How detrimental would it be to the application if they were to follow them?

A good place to start with negative testing may be thinking about different personas. Everyone uses an application differently, and by paying attention to the details that give different end users unique experiences, you can curate a more intelligent approach to testing for them.

However, even when thinking about personas, it can be difficult to truly put yourself in someone else’s shoes and come up with those edge cases. Brainstorming a list of scenarios might be the best way to come up with new ideas for negative testing.

How to Test the Limits of Your Web App

Inspired by this post on Reddit that contains reliable strategies for breaking stuff (plus a few of our own favorite methods), we’ve rounded up a few good ways to test the limits of your application.

  • Start performing an action on the page. Leave it open for a long time, then try to come back and finish out the action.
  • Mess around with form fields. See what happens if you enter a bunch of random numbers, hold down a key continuously, or enter emojis and gifs.
  • Put a fake address where it asks for email.
  • Try any illegal input you can think of.
  • Enter the least likely inputs or large data strings.
  • Enter invalid data into promotional or payment fields.
  • Leave fields completely blank.
  • Cancel and remove items from your cart.
  • Add as many of one item as you can to your cart.
  • Reload the page.
  • Close out of the page. Reopen the page.
  • Shut down your computer. Reopen the page.
  • Automate input from the big list of naughty strings.
  • Hit a submit button as many times as you can before the page changes.
  • Try and get through a test case as fast as possible.
  • Try and get through after navigating away from the page multiple times or opening up new tabs.
  • Test after a few beers.
  • Don’t follow directions; do the opposite.
  • Take it to mobile and see what happens when you have multiple apps open or are low on battery.
  • Turn it horizontal.
  • Have someone call or text you while using the application.
  • Simulate 10x the number of users that regularly go to your site.
  • Go backward. Then forwards. Then back again. Resume.
  • See what your website looks like in Internet Explorer.
  • Try using your application with different plugins, integrations, and third-party apps.
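The “naughty strings” item above can be approximated with a handful of deliberately awkward inputs. The `sanitize()` function below is a hypothetical stand-in for whatever your form actually does with a field; the only claim the sketch makes is that no input should crash it:

```python
# A tiny, hand-picked sample in the spirit of the big list of naughty strings.
NAUGHTY = [
    "",                           # empty input
    " " * 1000,                   # huge run of whitespace
    "'; DROP TABLE users;--",     # SQL-injection shaped
    "<script>alert(1)</script>",  # HTML/JS-injection shaped
    "😀" * 50,                    # emoji flood
    "\u202eevil",                 # right-to-left override character
]

def sanitize(value):
    # Hypothetical stand-in for your form's input handling.
    return value.strip()[:255]

failures = []
for s in NAUGHTY:
    try:
        sanitize(s)
    except Exception as exc:  # any crash is a finding worth reporting
        failures.append((s, exc))

print(f"{len(NAUGHTY)} naughty inputs tried, {len(failures)} crashes")
```

Each entry that survives is also worth inspecting by eye: not crashing is the bare minimum, and what the application renders afterward often reveals the more interesting bugs.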

Break Responsibly

While the goal of developing and testing is to release high quality, functional software, sometimes you have to find the places it’s broken first.

If you’re looking to try out negative testing, this is a good place to start. But the true value (and the fun) of negative testing comes from using your curious mindset and creativity to test the limits, especially during exploratory testing.

Try coming up with a few of your own ways to test the limits of your application, and leave your favorite method in the comments.

Filed Under: Manual Testing Tagged With: exploratory testing, negative testing, personas

Pair Testing: Working Together to Define Quality

June 22, 2018 By Alex McPeak


When you think of software testing, you probably think of an individual sitting at their computer performing manual tests or maybe writing up a Selenium script. In this scenario, they are also the sole person providing feedback on the test case and recording any notes or documentation.

However, pair testing is a little different, and it’s gaining popularity among the testing community.

So what is pair testing, why would you want to do it, and how do you do it right?

What is Pair Testing?

It takes two to tango, or pair test. Pair testing is usually done with a tester and a developer or two testers, but the tester can pair with anyone they think would make a good partner such as customer support, product management, or even marketing.

While testers don’t usually want people peeking over their shoulder while they do their job, pair testing encourages two people to work together towards share objectives in testing.

The two will work at the same machine with one person controlling the test through the mouse and keyboard and one person giving feedback, asking questions, taking notes, and making observations. Both parties should be taking part in a collaborative discussion about the test scenario.

Pair testing is similar to the idea of pair programming, where two developers work together and one writes the code while the other reviews, asks questions, or makes suggestions. In pair testing, however, both participants are going through the functionality, usability, or visual representation of that code, whether it be a new feature or a completely new UI.

In Lisa Crispin’s pair testing blog, she says her partner has likened pair testing to riding in the passenger seat versus driving the car — “You have so much more chance to look around.”

Basically, pair testing is just two people testing the same scenario together instead of one, using their unique skills to provide additional insight to the rest of their team.

Benefits of Pair Testing

  • Cross-Team Collaboration – By combining the unique skills of different team members, you can gain more insight into what is needed for the specific scenario to be successful. Pairing with a developer, for example, means less back and forth between both roles when new code needs to be tested. Working with another tester or a product manager would give another insight. Each of these roles can then work together to decide what is working well and what may need to be changed or fixed.
  • Two Perspectives – When testing, you’re always limited to the fact that not every user journey will be accounted for. The reality is, it’s impossible to think of every possible combination of actions that could be performed. But by having two participants think about a case, it means there’s another set of eyes and brains to consider new possibilities or strategies.
  • Breaking Down Silos – When things are moving fast in Agile development teams, testers can often end up in siloed roles — especially if they’re working on a small or single team. Pair testing helps break down barriers, work with new people, and get testers’ constructive feedback ideas out in the open so each role can better understand where the other fits in and how it leads to quality.
  • Exploratory Testing – Because of the nature of pair testing, it’s usually an exploratory effort. Where exploratory testing is so important for finding bugs and providing feedback on new features, it still doesn’t always get the dedication it deserves. Pair testing encourages teams to make more time for exploratory testing to collect and deliver that valuable feedback.
  • Having a Witness – Sometimes it can be hard to recreate bugs, and then when that happens, make a case that they need to be fixed. But the best way to solve any mystery case is with a witness. By having someone to see bugs as they come up and help determine whether or not it should be considered a bug in the first place, it is easier to pass to the development team with certainty for fixing.
  • Training Opportunity – Not only is pair testing a great way to help the team deliver with an extra measure of quality, it’s also a good way for each participant to learn from the other. For newer testers, matching up with a mentor for pair testing is a fun and productive way to teach them what goes into a successful test session. For other roles that may be interested in testing more, this can teach them what to look at. No matter the level or expertise, there’s always something to be learned from your partner in pair testing.

How to do Pair Testing

Pair testing really isn’t that different from other methods of software testing, but there are a few things you may want to do in preparation.

Following these steps will help ensure a productive use of both participants’ time.

  1. Identify the test case you want to work on and why you think pair testing is a solution.
  2. Identify who you want to work with and what each person’s role will be. What do you hope to gain by working with this person specifically? Will one person be operating the machine the entire time or will you switch?
  3. Determine an objective from your pair testing session. Do you want to find bugs or suggest new features? Maybe you want to define tests that should be automated — maybe all of the above. Understand who you will be bringing your results to if it’s different than your normal report.
  4. Create a timeline that gives yourselves enough room to explore and meet these objectives while staying focused on the tasks at hand.
  5. Come up with a few specific cases you want to be included in the pair testing. Katrina Clokie suggests writing down a simple checklist of ideas.
  6. Speak your mind and ask questions. Mike Talks says that “pairing is an interactive partnership,” which means both parties are equally contributing thoughts, ideas, and suggestions regularly throughout the test.
  7. Be open to differences of opinion, styles, and techniques. While this can lead to conflicting points of view, it’s always important to respect the other person’s feedback and try to learn from it and accommodate it.
  8. Keep notes and documentation that you can use to share results with other stakeholders.

This, of course, is a very basic process for pair testing, but in real life, the test case, your partner, and objectives will all affect how you go about pair testing. These should give you a good foundation to get started and ask the right questions.

Two is Company

Pair testing is one kind of what Malcolm Isaacs calls “social software testing”. When following an agile methodology and shifting left, it’s crucial for testers, developers, product owners, and other participants in the SDLC to come together for collaboration. Pair testing is just one way to do this.

While it may not be as deeply technical as, say, test automation might be, there are clearly many advantages to having two team members get together to discuss the quality of the application under test: what’s working, what’s not, and what needs to change.


Filed Under: Manual Testing Tagged With: exploratory testing, manual testing, pair testing

How to Apply User Personas to Improve Exploratory Testing

February 12, 2018 By Alex McPeak


As a software tester, your job is to advocate for the end user. However, when there are thousands or even millions of end users, how can we be sure that we’re keeping in mind their differences?

While advocating for each and every user may be an impossibility, adopting persona-based testing may be the key to deriving more accurate insights for your exploratory tests. By paying attention to the details that give different end users unique experiences, you can curate a more intelligent approach to testing for them.

Sometimes personas are supplemented with pictures of the person and detailed descriptions of everything from hobbies to the number of pets they have. While these can help to paint a complete picture, you may want to start with some of the essentials.

In order to begin building your user personas, consider the following when it comes to determining how visitors are using your web application:

  • Age – Does your product or service target a mix of ages, or is it geared toward users who are older or younger? Younger users tend to be more technically savvy — many have grown up using the internet, so navigating the web comes naturally to them, from social media to online shopping. However, while they may be better equipped to handle complex features, their standards are often higher and their patience thinner. They will not stick around for complicated forms or slow load times but will leave your site for a sleeker user experience. Older generations, on the other hand, may be more patient, but they may also be less likely to get around more intricate components. They may also be on older browsers and operating systems, which could render your application useless if not accounted for. When it comes to making their experience better, think simple and straightforward. Given the range of user behavior based solely on age, it’s easy to see why it’s an important element to consider when building personas.
  • Job/Role – What does the person do, and how does that influence what they’re using your product for? For example, if you own an office supplies website, you may have two 30-year-old women visiting your website within an hour — does that mean they should be tested for in the same way? What if one woman comes to the site as a freelance writer at a small magazine and the other as an office manager for a major enterprise? They’re going to be looking for two different outcomes and have two different experiences. The freelancer may be looking for a few single items such as a pack of pens, a notebook, and a binder. The manager may be looking to bulk order supplies — what happens when she puts in an order for 300 office chairs and 600 monitors? Can the application handle these requests in the same way as a single item? That may be just one difference the two women encounter going through this process, and you want to assess those possibilities.
  • Accessibility – Accessibility can be an overlooked part of software testing, but considering the different challenges people may have can make your user personas far more useful. For example, many men are prone to color blindness, which means something as simple as a color choice could make or break the experience for someone. Users who rely on screen readers may depend on image alt attributes and labeled form fields. Even a slight mobility impairment or vision loss could affect the way an application is experienced, especially when it comes to responsive design and mobile device usage. Once you realize these obstacles are common for a variety of people, it becomes important to make accessibility part of your persona-based testing.
  • Geolocation – Looking at demographics is a great way to understand personas, but it’s not just limited to age and gender; it also includes looking at where people are in the world. In fact, where someone’s looking at your site from has a big influence on how they’ll use it. Keep in mind Hofstede’s cultural dimension theory to determine what different cultures prioritize in order to apply that to your personas and application testing. Look at how McDonald’s changes its web design to accommodate a cross-cultural experience to understand how geolocations can apply to persona-based testing. Depending on where you have customers coming from, these preferences may change how you want to relay your website. Even something like network latency will affect how people from different countries experience your application, so you may want to see what it looks like through the eyes of a different country.
  • Behavior – Some personas can’t be pinned down by hard facts — sometimes you have to get creative imagining the ways different people might use your application. For example, knowing someone’s job title won’t tell you that they have a tendency to open multiple tabs and windows at once, keeping a shopping cart that’s ready to check out open on their computer for days at a time. It won’t tell you that someone may enter invalid inputs or never remember their password. As critical as it is to collect data on your customers and create personas that accurately represent your users, part of being a great tester is also thinking outside the box to give those personas depth that includes quirky behavior. Oftentimes, it may seem useless to test an action that you wouldn’t expect anyone to perform, but that sequence could be the one that breaks the application…like the computer glitch that let someone buy nothing from McDonald’s for 99 cents.
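The accessibility point above is one of the easiest to spot-check programmatically. As a minimal sketch (the function name and sample markup are hypothetical, and a real audit would use a proper HTML parser or a dedicated accessibility tool rather than a regex), a tester might flag images that ship without alt text:

```javascript
// Minimal sketch: flag <img> tags that lack an alt attribute.
// A regex scan is a rough heuristic, not a full accessibility audit.
function findImagesMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}

const page = '<img src="logo.png" alt="Company logo"><img src="chart.png">';
console.log(findImagesMissingAlt(page)); // flags the chart image, which has no alt text
```

Checks like this complement, rather than replace, walking through the site as a persona who actually depends on those attributes.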

Conclusion

Persona-based testing helps you step out of your preconceived notions about how people use an application to think about how they could use it.

Recently, we made an infographic about the different personas you should use in exploratory testing. While specific attributes may differ from team to team in order to paint an accurate representation of users, we believe these personas will give you a better understanding of all the ways that people can push an application to its limits.

Check out Exploratory Testing Playbook: 7 Personas Your Team Should Be Testing For and join us on a visual safari to determine who you should be testing for and how to test for them.

Filed Under: Manual Testing Tagged With: exploratory testing, manual testing, user personas

Discovering Exploratory Testing

August 22, 2017 By Alex McPeak Leave a Comment

Despite being fairly unstructured, scriptless, and independent of a pass/fail status, exploratory testing is a very important and strategic part of QA. In fact, its focus on discovery and learning has proven highly valuable when it comes to testing an application for the first time or looking into a new feature from the latest integration.

By identifying problems, questions, ideas, and praise through this method of manual testing, exploratory testing helps start a conversation between internal teams and produce improved software.

What is Exploratory Testing?

Exploratory testing is a scriptless testing practice that emphasizes learning and discovery in software and aims to improve quality.

While test cases are not created in advance, exploratory testing still has a very strategic process and requires documentation. Test design and execution are done simultaneously in an ad-hoc manner, although exploratory testing and ad-hoc testing are not the same (we’ll get to that later).

Since exploratory testing requires a human to look at software, evaluate its functionality, and think of new ideas, it’s a manual practice rather than an automated one. It also depends on the skills and knowledge of an independent tester to be able to evaluate where software has issues or could use improvements.

However, just because it’s a manual practice doesn’t mean it can be left behind in the industry move to Agile and CI/CD. Even as teams shift left and transition from manual testing to automation, exploratory testing continues to play a critical role in Agile development.

Exploratory testing is often best for:

  • New websites
  • New features
  • Unexplored areas
  • Different devices
  • UI testing

Afterwards, these insights are discussed with the rest of the testing and development team members to make changes, fix bugs, or add new features. Additionally, results can be used to raise questions about what’s working well and what’s not to implement in other areas of the software.

Session-Based Testing

Session-based testing is one way to provide more structure in exploratory testing. While you can’t go into it with a script or a definitive test plan, having certain elements in mind like objectives, heuristics and oracles, focus, risks, and follow-up will make sure the process is valuable.

Session-based testing will often include a technique called mind mapping to identify some of these elements in a visual layout. This also helps write better documentation to share with teams after testing is finished.

Establishing a Definition

James Bach and Michael Bolton are attributed as the testers responsible for calling attention to the value of exploratory testing.

However, Cem Kaner coined the term in 1983, defining it as “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

Today, some people suggest that exploratory testing should just be called “testing” since all testing has an element of exploration to it, but we still think that it deserves a place as its own method according to the original definition.

Exploratory Testing vs. Other Testing

The parameters of exploratory testing are often confused or misunderstood. Comparing and contrasting exploratory testing to other types of testing can help clarify its purpose.

Scripted Testing – Scripted testing and exploratory testing are posed as opposites, which helps better understand their definitions. As mentioned, exploratory testing is scriptless and more random, whereas scripted testing follows a set and defined plan written by the tester. This also means that you can’t test something that isn’t planned ahead with scripted testing, whereas exploratory testing encourages testers to discover new regions as they go. This allows testers to act more as the end user and evaluate areas they come across rather than predefined spaces.

Ad-hoc Testing – Exploratory testing and ad-hoc testing are probably compared most often, but they’re two distinct methods of testing. For one, ad-hoc testing doesn’t require documentation, which is an essential part of exploratory testing. Similarly, it’s not well-integrated into team collaboration and has no set goals, objectives, or focus. Though exploratory testing is meant to be done in an ad-hoc style, it also has more set expectations than being a random test. Additionally, many distinguish the two because exploratory testing could be said to require more skill, whereas anyone could perform an ad-hoc test.

Can You Automate Exploratory Testing?

Not really. Exploratory testing inherently depends on the individual tester to contribute their skills, knowledge, and insight to new or unevaluated components in an application — something that cannot be done with automation.

Thinking about this in terms of the testing vs. checking debate, exploratory testing is part of “testing,” while automated testing is a method of “checking.” You could not automate exploratory testing because automation can only check — a tester has to test, since a machine cannot provide independent thought.

Tips for Exploratory Testing

  • Have a loose plan – Going into a test with a focus is important in order to come out of the process with valuable feedback on certain objectives. At the same time, the nature of the exploratory test is to address certain issues as they arise and consider those as well. While testers should have a general plan, they should also allow room to improvise and consider unfamiliar elements.
  • Think like an end user – Exploratory testing is an ideal way to advocate for a customer that may be using a website. While developers are familiar with the code they build, it might be more difficult for them to think outside the box and use software in a way it’s not optimized for. The job of the tester in this phase is to recognize these areas of weakness and bring them to the attention of the rest of the team. Check out our Exploratory Testing Playbook for ideas on how to test for user personas.
  • Leverage your skills – While exploratory tests are designed and changed as the tester goes along, it doesn’t mean that just anyone can do it correctly. In fact, the results of the test will largely depend on a tester’s skill level. Leveraging a tester’s knowledge of software and previous testing practices will be the most efficient way to find and address problems.

Filed Under: Manual Testing Tagged With: ad-hoc, exploratory testing, manual testing, session based testing

The Unexpected Value of Drunk User Testing

May 31, 2017 By Alex McPeak 1 Comment

It’s not often you stumble upon a service where you’re willing to pay a professional to work while intoxicated. In fact, getting drunk on the job is usually frowned upon. But when Richard Littauer founded The User is Drunk, he seemed to have found just the right niche where this was not only found acceptable, but people were willing to pay a little extra for it.

Operating under the tagline “Your website should be so simple, a drunk person could use it,” Littauer’s homegrown side hustle took off as designers and businesses invested in a UX pro who would review their sites with honest commentary and a new perspective.

At CrossBrowserTesting, we believe web design should be optimized for every user, no matter what browser, device, or operating system. Littauer’s project suggests that it’s important to consider the universal user in another way — drunk or sober.

Though Littauer no longer regularly drunk reviews websites (for money, anyway), he still remains opinionated on what separates a good web design from a bad one. In order to answer some of our burning questions, Littauer gave us an exclusive on some of the things he learned and experienced while testing under the influence.

What’s your background in development and UI/UX?

I grew up with computers and had a half dozen dumpster-dived desktops in my basement. So, naturally, I did a university degree in Linguistics. After enjoying coding for my senior thesis and being the web developer for some societies in uni and for my own personal sites, I took a Masters in Computational Linguistics with some UX courses, which led, through a long, tangled route, to me working as a front end developer for a couple of years in New York and San Francisco.

When I started the User Is Drunk, I mainly had front end experience, not UX, and was working as a freelancer on a project with the MIT Media Lab as a designer and developer. There’s no easy path that shows how I got from A to B, and I’m not entirely sure where B is, anyway.

So, how did you decide to start getting drunk and testing people’s websites for them?

Personal experience. Basically, I had come back from a nice night (I assume, I’ve since forgotten. I think it was a date.) and my friend asked me to jump on Google Hangouts and review his site for him. The feedback was brutally honest, and kind of useful because of that, and I remember saying “I should make a thing out of this.”

Drunk advice is different than normal advice; it’s not caged, it’s emotional bull**** but not normally buzzword bull****, and it is not good at error recognition or correction at all. It’s niche. And niche things are good.

I let the idea stew for a couple of months, and then, at a hackathon, decided to just run with it one day. The rest is history – it went viral, and there I was, reviewing people’s sites for money.

When and where have you primarily done this?

I did the majority of the reviews while traveling with Hacker Paradise through Bali and Thailand. But I’ve done reviews all over the world – I’m a digital nomad, so it’s anyone’s guess where I am going to be at any given moment. I remember doing reviews in Tokyo, Berlin, Boston…

I don’t drink alone. That’s a quick way to hate yourself. So, normally, I invite some friends out for some drinks, and then slip away for twenty minutes, and come back and buy everyone a round. It seems to work pretty well. I’ve gotten better at planning to do little work the next day, too.

What do you do when you aren’t critiquing people’s websites?

Live my life. Recently, that means I run a company I founded – maintainer.io – which helps people maintain their open source software, and consults with businesses on their open source strategy. I am a community manager as well as a developer, so this came out of that, realizing that a lot of what maintainers do can be done as a service.

Sometimes I debate getting a lot of buzz words I’ve used to describe my work printed on hats, and hanging them all on my wall.

What’s your approach — do you wing it or are you usually looking at something specific?

I wing it. Sometimes clients have questions for me, but mostly I just go. It’s very hard to plan drunk. There have been many times where I’ve failed to turn on my microphone, for instance, so I am largely better off just clicking around and following my gut.

How do you think drinking allows you to look at websites differently? How do you think web designers benefit from getting a drunk user to look at their site from a UX perspective?

Basically, it means that I am honest, or at least that there is no filter. That’s what alcohol does; it inhibits your ability to self-reflect and realize what you are saying and doing. This kind of reaction can’t be faked, and it’s one of the reasons I’ve never pretended to be drunk (the other reason being that it just wouldn’t be right).

The advice or complaints you give while drunk haven’t gone through some review board, haven’t been cleaned up – it’s what me, a real person, actually thinks about your site at a given time. That’s invaluable.

Couldn’t just anyone get drunk and click through their own website?

Sure. But they don’t have the experience I do, and they may not be used to developing and designing websites. Also, at this point, I’m the most experienced drunk UX auditor in the world. I think that counts for something. But if you’re up for it, go have a good time. Drink responsibly.

What are the top 3 worst things websites tend to do in the eyes of an inebriated user?

  1. Dark UX patterns are very, very clear when you’re drunk. A lot of websites are jerks – no, I don’t want to sign up for your newsletter. No, I don’t want this pop-up to happen every time.
  2. Another major thing would be too much text. Reading walls of text isn’t really my thing, and probably isn’t other people’s either. On the other hand, Pam, the mom from The User Is My Mom, loves reading walls of text. She has a longer attention span. Age matters.
  3. The final worst thing would be red and green colors everywhere. I’m a bit colorblind; not horribly, but if you point to a cardinal in a Christmas tree, it’ll take me a while to find it. A lot of sites don’t cater to people with this disability, although it is fairly common. That gets annoying, fast.

Have you ever drunk tested in different browsers or devices? Do you think there would be any value in testing responsive design under the influence?

I’ve done a few apps, and a few private sessions. There is some value there. Frankly, though, I’m not an app developer, and I’ve always felt that my advice on websites is more pertinent. There is definitely value in responsive design for drunk people. You have no idea where your users will be.

How did people respond to the results? Do a lot of people change their web design after your reviews?

It’s all across the board. I’ve gotten a few people who were full of praise, and changed a lot of things, working my tips into their sites. I’ve had people change their onboarding process after, especially.

I’ve also had a fair amount of people who never got back to me. Those are not fun. Drunk user testing is very hard to standardize – sometimes I’m nice, sometimes I’m not. That’s how it goes.

What’s your drink of choice?

Uisge beatha. Talisker, 12 year, or a Laphroaig, quarter-cask. But normally it’s just a local beer. I’ve gone off stouts, wheat beers, and red wine with cigars as a result of some bad nights. That’s OK with me.

What advice do you have for designers who want to cater to the user experience, whether those users are drunk or sober?

Get on Twitter and Designer News. This is the golden age of learning – never stop.

And please don’t use red and green colors too much.

Any last thoughts you’d like to add?

While The User is Drunk is fun and stupid, it’s also just that: fun and stupid. This is not my main job, at all. Currently, I’m working on consulting for companies wishing to do better open source, and helping individual coders and teams who have too much code to maintain – check out my business at maintainer.io.

I say this because I think a lot of people are under the illusion I only do the User Is Drunk, which couldn’t be further from the truth. I limit my reviews to around once a month now. Livers can’t be fixed by resetting the router. Drink responsibly, folks.

Filed Under: Manual Testing Tagged With: design, development, usability

Choosing Between Emulator vs Simulator vs Real Devices for Testing

March 20, 2017 By Alex McPeak Leave a Comment


One of the initial choices you have to make when you begin to build a cross browser testing strategy is what environment those tests are going to be run against. When it comes to choosing the mobile devices used for testing, you have three options — simulators, emulators, or real devices.

While developers evaluate the pros and cons of these choices, the answer to a classic testing dilemma may be more straightforward than you think.

Emulator vs Simulator

Emulators and simulators are both virtual devices that are used for the same basic purpose — they mimic the functionality of real devices and imitate a user’s actions to recreate operational behaviors.

The difference is that an emulator replicates or reproduces real mobile device software, hardware, and the OS in order to test and debug applications within another software/hardware platform, while a simulator imitates or mimics the internal behavior of a device but does not emulate hardware or run the real OS.

The best way to exemplify this is to think about a flight simulation. By using a simulator, you are mimicking the operation of the plane’s mechanics, journey, and surrounding environment through a computer-generated program that shows you what the procedure might look like. If there were such a thing as a flight emulator, on the other hand, the device could theoretically take you to a destination the way a plane could.

In this instance, if you are deciding between emulation and simulation, emulation is your best bet as it’s more accurate and more suitable for debugging. However, emulators and simulators are often used and referenced interchangeably since they perform similar functions.

Additionally, when you think about the flight simulator example, you can see that both instances have degrees of variation from boarding a real plane, just as using simulators and emulators varies from using a real device. In real life, there are many factors that can affect the results that would not be considered in a simulation or emulation.

We’ve found that the use of emulators and simulators is most appropriate in the earlier stages of testing, where you may not require 100 percent accuracy but rather want a fast test that gives you a general idea of how a web application is performing.

As you continue in your testing process, you will likely find that real devices give you more precise observation and allow you to observe more particular aspects of a web application.

Real Devices

The advantage of real devices is in the name. Testing on real, physical devices means that you’re testing the same way that your users are operating their devices.

While simulators and emulators will come close to imitation, it’s impossible for them to perform exactly the same as a real device. This is because emulators can’t account for every environmental factor, feature, or user action performed on a real device. For example, emulators cannot simulate high traffic volume the way a real device will experience it, which could affect the results of performance or functional testing.

They are also known to give false negative and false positive results, which can be problematic in an advanced testing process and can negatively impact developers’ progress, ROI, and bottom-line goals. For this reason, testing on real devices is more accurate, concise, and user-specific, and real devices become essential for performance, reliability, operability, sanity, and regression testing.

Testing on real devices comes at a higher cost, however. Oftentimes, testers may resort to emulators and simulators because they’re a less expensive option compared to building a device lab for all the platforms you require. However, hosting real devices in a third-party cloud is a simple answer to cutting costs while obtaining access to a broad range of devices.

This will also allow you to perform faster testing, since you can run them in parallel. At the end of the day, real devices are necessary because testing on them will show a larger range of issues that your users are experiencing.

In Conclusion…

There’s a place and a time for emulators and simulators as well as real devices. Deciding between them is a choice that depends heavily on where you are in your development process. Additionally, because both options have their pros and cons, many developers will decide to use both practices for testing.

These are the main points to consider when choosing between emulators/simulators and real devices:

  • Most developers agree that some combination of simulation/emulation and real devices is best for testing.
  • Emulators are a good option early in the development process because they’re faster, but they are often less accurate because they can’t fully reproduce real-world conditions
  • Real device testing can be done through a physical device lab, VMs or a third-party cloud
  • Hosting real device testing in a secure cloud is often less expensive, while it allows you to test more devices at the same time
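The parallel-testing point above can be sketched with stubbed device sessions. Here `runOnDevice` is a hypothetical stand-in for launching a real remote session; the idea is simply that `Promise.all` runs the same check against every device concurrently rather than one after another:

```javascript
// Hedged sketch: runOnDevice stands in for a real cloud-device session.
// The setTimeout models network/session latency.
function runOnDevice(device) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`${device}: passed`), 10)
  );
}

const devices = ["iPhone 7", "Galaxy S7", "Windows 10 / Chrome"];

// All sessions run at once; total wall time is roughly one session, not three.
Promise.all(devices.map(runOnDevice)).then((results) => {
  results.forEach((r) => console.log(r));
});
```

In a real setup, each session would be a remote browser or device in the cloud, but the concurrency shape is the same.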

Filed Under: Manual Testing Tagged With: emulators, real devices, simulators

Getting Started with Manual Testing

March 17, 2017 By Alex McPeak Leave a Comment


What exactly is Manual Testing?

When any tester, developer, or QA team begins a functional or performance testing process for the first time, they’ll likely start with manual testing. Regardless of the size of the company or web application, manual testing is an essential component of any testing strategy because it gives deep insight into the user experience.

This is because manual testing is done completely by human hand, without the assistance of automation tools or scripts of any kind. For this reason, manual testing is the ideal way to assess the details that are relevant to the end-user including how the application functions, how well visual elements render, and other GUI qualities.

Manual testing is specifically valuable in exploratory testing or other test cases that are only run once or twice and ensures that you discover bugs and other issues early on in the development cycle. These tests are an optimal environment for incorporating human observation without an urgent need for the speed and in-depth accuracy that’s characteristic of automated testing.

Manual testing also allows testers to build test cases, which are needed for automation so that you can repeat those actions. No matter how much time you’re willing to put into manual testing, it’s the first step to creating a script that can be run in parallel in other browsers, or many times over.

For this reason, it’s crucial to understand some of the stages and techniques before executing manual tests.

Types of Software Tests

These stages of testing are sequential, starting with the smallest component of the application and progressing to make sure that all parts are working together.

Unit

Unit testing is at the bottom of the testing pyramid because, as the smallest testable part of an application, it focuses on one individual function within the entire application. Although this type of test is better performed in an automated manner, it’s still the base of our pyramid and the most plentiful type of test out there.

Clicking a button to make sure it takes you to the correct destination would be a unit test. For example, making sure that the “Next page” button will actually take you to the next page.

Integration

Integration testing is the next step after unit testing and is the process of integrating multiple units to test them together. This lets testers observe how different components of the software harmonize to ensure functionality.

An example of an integration test could be your application’s log-in process. Now, you’re testing whether entering a username and password and clicking login will bring you to the Welcome screen. These types of tests match two or more unit tests together.

It is important that the number of tests grows as you move down the testing pyramid. This allows for easy debugging and error locating, speeding up both testing and development. Having unit tests along with integration and other tests lets you diagnose the exact location of bugs.

System

A system test refers to testing completely integrated software in order to evaluate that the system meets the requirements of the other hardware or software systems that it’ll come in contact with.

This is actually a series of multiple tests, including end-to-end scenario testing, verifying inputs and outputs, and testing the user experience. There are also multiple types of system testing, such as stress, regression, and functional testing, the uses of which will differ from tester to tester depending on available time and resources.

Again, all the components that would be tested in a system test have already passed integration testing. System testing afterward is necessary for testing the design and behavior of the software.

Acceptance

Acceptance testing, or User Acceptance Testing, is the beta testing of the software performed by actual end users. This, of course, brings the product into a real-life environment to see how well it measures up to a user’s standards, in case the tester missed an essential user function, specification, or request.

As you can see, manual testing requires a thorough and sequential completion of these stages in order to properly measure a software’s functionality at different points in the testing process in order to target or fix various areas.

Black Box vs White Box Testing

Black box testing and white box testing are two techniques of testing that can be applied to the above stages.

Black box testing is where the internal structure of the software being tested isn’t known to the tester, while the internal structure is known in white box testing.

Black box testing is the most common for testers because they don’t have to be experts in coding, making it easier and simpler to execute. It’s also quicker to do because testers only have to look at the GUIs that users are interacting with.

However, black box testing has restricted coverage in this way and can only evaluate a fraction of the possible test cases and scenarios, since it considers only inputs and outputs.

In comparison, white box testing is a little more in-depth and looks at the programmatic structure of the system. In this way, it allows the developer or tester to spot errors and bugs more quickly and accurately, and it provides more thorough insight.

Keeping Manual in Your Testing Strategy

While the testing environment is largely moving to automated testing as today’s developers focus on a strategy that embodies Agile and CI, there will always be a very specific and important place for manual testing. Those hoping to move towards 100 percent automation will find that their results fall short of complete and efficient web development.

Instead, getting to know your code through the methods and techniques of manual testing will prove to be invaluable when it comes to present and future web testing practices.

Filed Under: Manual Testing Tagged With: exploratory testing, live testing, manual testing

Launching An Instant Test With The Bookmarklet

February 2, 2017 By Daniel Giordano Leave a Comment

Our Instant Test Bookmarklet allows you to quickly jump into a Live Testing session or take screenshots of the web page you are currently on at the time.

 

A bookmarklet is like a bookmark, but instead of loading a specific page, it injects JavaScript into the current page in your browser for added functionality. Our bookmarklet can be found at the Test Center and quickly installed by dragging it into your bookmark bar.
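Under the hood, a bookmarklet is just a bookmark whose URL uses the `javascript:` scheme. As an illustrative sketch (this is not the CBT bookmarklet’s actual code), one that outlines every image on the current page could be built like this:

```javascript
// Hypothetical bookmarklet: when saved as a bookmark's URL and clicked,
// it outlines every image on the current page. The CBT bookmarklet
// injects its test-launcher UI into the page the same way.
const bookmarklet =
  "javascript:(function(){" +
  "document.querySelectorAll('img').forEach(" +
  "function(i){i.style.outline='2px solid red';});" +
  "})();";
console.log(bookmarklet); // paste this string as a bookmark's URL
```

Dragging a link whose href is that string into your bookmark bar installs it, which is exactly the install step described below.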

Starting The Bookmarklet

To begin using the bookmarklet, simply click the bookmark while on the page you’d like to test. You’ll be presented with the following app on your web page:

Bookmarklet Overview

From here you can jump right into a Live Testing session by clicking one of the browsers presented as a “Quick Launch Browser”.

Adding A Browser

To add a new browser, simply click Add and select the operating system, browser, and resolution. When you click Add, we’ll include that as a “Quick Launch Browser” as long as you are logged in.

Bookmarklet Add

Removing A Browser

To remove a browser, click Edit, and the X in the upper right-hand corner. In this view you can also change the resolutions of each configuration.

bookmarklet edit

 

Running Screenshots

To run Automated Screenshots, select Shots on the left side navigation menu, right below Live. You can choose from popular or saved lists of browsers and quickly launch into our app’s UI.

bookmarklet screenshots

Installing the CBT Bookmarklet

Known Issues:

  • Bookmarklet layout issues: The bookmarklet injects JavaScript into the current page you are viewing. CSS styles on these pages can impact the appearance of the bookmarklet. If you notice a site where the bookmarklet doesn’t display correctly, contact us and we will try to correct it.

Filed Under: Manual Testing
