CrossBrowserTesting.com

A Design, Development, Testing Blog


Webinar: How to Increase Test Coverage Over Time with Automation

July 30, 2018 By Alex McPeak Leave a Comment

When your application has endless possible test cases, do you know how much coverage is enough?

Test coverage gives us ideas of what to test and helps us understand the extent of our tests. We often use automation as a tool to increase test coverage by checking more parts of the application under test, faster.

But complete coverage is nearly impossible to achieve, even with automation. And more coverage isn’t always better — it’s a great measuring stick, but it doesn’t guarantee that our application is bug-free.

The ideal amount of test coverage should be based on your priorities and an analysis of risk. The good news is that once we know which test cases to prioritize, we can apply long-term coverage criteria to optimize test coverage over time.

If you want to know more about how to get the most out of your test coverage, join us on August 7th at 2 PM ET.

CrossBrowserTesting joins Abstracta Co-founder and Director Federico Toledo in a webinar to discuss strategies for measuring test coverage and maximizing it for the long-term.

In this webinar, you will learn:

  • What test coverage is and how to measure it
  • How to lay out a plan to optimize test coverage over the long term
  • A demonstration with CrossBrowserTesting
  • Q&A

Save your seat

Filed Under: Test Automation Tagged With: abstracta, automation, test coverage, webinar

The Top 8 Things We Learned From the QA Team at America’s Test Kitchen

June 28, 2018 By Alex McPeak Leave a Comment

With three core websites, America’s Test Kitchen has a lot of content to cover. Yet when Megan Collins came on as QA manager, she was the only tester. Since then, America’s Test Kitchen has exponentially scaled their testing, adopted an automation strategy, and expanded the QA team to three members, including Automation Engineer Michael Ng.

We recently had Megan and Michael join us for a webinar to talk a little bit about the web testing strategy behind America’s Test Kitchen and how they handled their shift to automation with the help of CrossBrowserTesting.

Here are the top 8 things we learned in the webinar:

  1. Fast deployment schedules demand testing — Most organizations dream of achieving the agility that America’s Test Kitchen has reached, deploying code to production every two days. But when you’re integrating as often as they are, you can’t just rely on developers to test each other’s code or you’ll never achieve consistent test coverage. By bringing in Megan Collins as QA Manager, the team gained someone who could dedicate her time to something that had been severely lacking — testing. This way, proper exploratory tests can be performed whenever code changes, and the development team can keep doing their jobs instead of worrying about the intricacies of whether a new feature works or not.
  2. Real devices matter – When discussing browser simulators, mobile emulators, and real devices, there’s a time and place for each one. In fact, Megan mentions that the developers on her team use emulators. However, the only way to see exactly what your users are seeing is to use real devices, which is why they are so important to the QA team. They even ran into a bug where membership details weren’t displaying for a user, a bug that never surfaced on the emulator they were using; they only noticed it once they performed the test on a real tablet. While emulators are highly accurate, they do not replace real devices, which is a large reason the team needs a tool like CrossBrowserTesting that includes all three, letting them prioritize both speed and accuracy during mobile testing.
  3. It’s important to understand different user journeys – The success of the America’s Test Kitchen QA team has a lot to do with their ability to identify the different users they have and how those users affect their testing. Because a lot of their web content is for members only, paywalls appear when someone visits from a non-paid account, for example. By separating user states into two focused areas for their automated tests, static and dynamic, Michael is able to create more precise tests for different parts of the website. Understanding that they need to test differently based on whether users are new, registered, or unsubscribed helps them build out different test cases and scenarios.
  4. Capture tools can help prove a point – Sometimes seeing is believing, and it can be hard to convince other team members of a bug they haven’t seen for themselves. Tools like CrossBrowserTesting’s screenshots and video capture help the America’s Test Kitchen QA team build out their test reports to include visual evidence of what happened during tests. This way, Megan doesn’t have to repeat the test case or find it on the right browser when she shows it to development; she can just show the screenshot or video instead, so bugs get fixed faster. Additionally, by using the screenshot comparison engine, Michael has been able to create an archive of each homepage over time, making it easy to show what changed in each regression.
  5. Use automation to make your life easier – America’s Test Kitchen isn’t automating every single test on their website. In fact, manual testing is still a large part of their strategy. Automation comes in mostly for repetitive tasks that take up a lot of time. For example, they use automation to sign in as a member in order to check whether content can be viewed, instead of going through the motions of logging in every time (a minimal sketch of this pattern follows the list). This saves Michael valuable time when performing different test cases under different user scenarios — instead of replicating the same actions every time, he can focus on how each test differs.
  6. Collaboration is key – We say it over and over again, but communication with your team is important to productive testing. By leveraging tools for organizing your tests, taking notes, and tagging, it’s easier to remember which bugs you’ve encountered before and which are new. Additionally, keeping open communication with the development and design teams has been essential to America’s Test Kitchen for understanding priorities, risks, and hot spots for more insightful testing. Megan and Michael have found collaboration with the development team to be especially crucial in the shift to automation — learning new skills, choosing what to automate, and being notified when new code is added.
  7. Test while you sleep – Many of the automated tests that the America’s Test Kitchen QA crew creates run at night, meaning you can literally test in your sleep. Overnight testing is at the crux of their Selenium strategy: they can check tests in the morning to see if they passed or failed. Because they have so many tests that run for an extended amount of time, they rotate their browsers each night to get coverage of all the different configurations their users are on and see which tests are failing on which browsers. By running these regression tests, they’re able to test a subset of test cases across the different pieces of functionality they need to make sure are working day to day.
  8. Tests are only as good as how much time you put into them – Michael runs a lot of automated tests, but he also spends a lot of time maintaining them. The America’s Test Kitchen QA team emphasizes that maintenance is crucial to the success of the project. This is one of the main reasons they have a dedicated QA automation engineer. To ensure that Selenium tests aren’t just being created left and right, Michael spends a lot of time on each one to make sure it accomplishes his goals (and may even be a few steps ahead of Elon Musk in this way).
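
To make point 5 concrete, here is a minimal sketch of the kind of reusable sign-in step a Selenium test might automate, using the selenium-webdriver Node bindings. The URL, locators, and credentials are hypothetical placeholders, not America’s Test Kitchen’s actual setup.

```typescript
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

// Hypothetical reusable sign-in step: automating this once means nobody
// has to click through the login form before every member-only test.
async function signInAsMember(driver: WebDriver, email: string, password: string): Promise<void> {
  await driver.get('https://example.com/sign-in'); // placeholder URL
  await driver.findElement(By.name('email')).sendKeys(email);
  await driver.findElement(By.name('password')).sendKeys(password);
  await driver.findElement(By.css('button[type="submit"]')).click();
  // Wait for an element that only signed-in members see.
  await driver.wait(until.elementLocated(By.css('.member-nav')), 10000);
}

async function main(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await signInAsMember(driver, 'member@example.com', 'not-a-real-password');
    // ...the member-only test steps themselves would go here...
  } finally {
    await driver.quit();
  }
}

main().catch(console.error);
```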

Megan and Michael gave us a great look at how they use automated and manual testing to further the goals of America’s Test Kitchen. By taking the time to learn from other successful QA teams such as this one, you can apply some of their practices to your own testing strategy.

If you want to view the full webinar, you can watch below.

Filed Under: Test Automation Tagged With: america's test kitchen, test automation, webinar

Testers: Beware of Automating Yourself In

June 27, 2018 By Amy Phillips Leave a Comment

amy phillips lessons automated testing

amy phillips lessons automated testing

Automated testing is increasingly becoming the future of testing in DevOps and Continuous Delivery environments. It’s assumed that simply training testers to move into purely technical roles where they focus almost entirely on test automation is the solution. In this article, we explore some of the pitfalls that can arise from this assumption.

I once quit a job over automated testing. Not because of something philosophical like the test strategy or the team’s perception of testing. No, I quit because I wasn’t allowed to contribute to the automated tests. The reason given: I was a performance tester, not an automation tester.

All around us, we’re trying to put people into boxes. We talk about manual testers, security testers, agile testers and act like we have a hierarchy of skills. Almost every week I hear of someone, usually not a tester, discussing how to “up skill” manual testers to become automation engineers. We expect all of our testing needs to fit neatly into predetermined scripts that can be executed time after time without any deep testing expertise.

The rise in DevOps, Continuous Delivery, and Lean development practices has been influential in moving testing from an afterthought to being seen as an integral part of successful software delivery. We’re aware that batching up large changes can increase risk, and there’s growing demand for repeatable, reliable build, test, and release pipelines. Unfortunately, the desire to break down siloed test teams and replace manual test phases with automated test suites to keep up with the fast-paced delivery cycles can lead to unintended consequences.

Lesson #1

One of my first experiences of streamlined release processes resulted in the test team being broken up to form cross-functional teams along with developers, designers, and product managers. It can be hard to break a team up, and there was pressure for every tester to act as an expert in all types of testing. Previously the test team had been diverse and our skills complementary. Maybe we should have been “up-skilling” to spread the skills to every tester, or maybe we should have recognized that testing is a broad role requiring different approaches and specialties.

The goal of creating cross-functional teams to develop and release every two weeks had another hitch. The first ten days of the iteration were a mix of developing and testing new features. The final two days of the iteration were dedicated to release testing and bug fixing. Sometimes things were delayed and we’d end up with untested features in the release candidate. Already tight, the release testing time would now need to include feature testing as well as release testing. The responsibility — and extra work — to get the release testing completed fell exclusively to the testers.

During the days leading up to and immediately after the release, the testers were obviously rather busy. Rather than sitting around waiting for testing to complete, the developers started working on the next round of features because there were deadlines to meet. It sounded sensible, but this subtle separation of the team led to the testers falling further and further behind on their normal “feature testing”.

Eventually, someone suggested that testing shouldn’t even be in the same sprint as the development, but should intentionally take place in the following iteration to try and reduce the amount of rework on the automated tests. Despite being a single “team,” splitting responsibility for work created a new type of silo.

If we had owned a single vision of achieving fully released features, encompassing both development and testing, we might have had better success collaborating to meet the goal.

Lesson Learned – Own the vision but share the work.

Lesson #2

On my next project, the testers had been hired for cross-functional teams and our skills were complementary to the entire Scrum team. We followed a process that allowed new features to be exploratory tested before automated tests were added to the regression test suite. More importantly, the entire team was expected to contribute to testing. On paper, we had everything we needed to succeed.

Releases took place once a month and the entire team of developers and testers performed the release testing to check for regressions as well as make sure the new features were working as expected. One of the test engineers was responsible for maintaining and executing the automated regression test suite. Everyone else used ad-hoc scripts or exploratory testing to check high-risk areas.

One day, the release testing coincided with the automated regression test engineer’s holiday. The automated tests failed.

As we investigated the failures it became obvious that there was some incredible duplication going on with our release testing. We had failed to share the test scenarios with the entire team. Automated regression tests were being duplicated by ad-hoc scripts, and by exploratory testing too. No one in the team felt like they had the authority to disregard the automated testing results to make the release so we were forced to delay the release date to give us time to fix all the broken regression tests.

Lesson Learned – Share the test scenarios with the entire team. New automated tests should be replacing testing rather than duplicating effort.

Lesson #3

On another project, we’d managed to avoid most of the testing politics. Developers and testers were bought into the cause and actively contributing to the test suites. As we built new features, we exploratory tested the features and automated the tests that we believed we’d want to run against future release candidates. We had tests running in different browsers and on multiple devices. The test suites were well respected and formed the essential part of the automated pipeline that allowed us to deploy to production as frequently as we needed.

For a while, at least.

As time passed and we continued to diligently add tests, we built up such a large suite that it took a long time to execute. Developers were becoming more reluctant to run the tests locally before committing to the build because of the time it took to execute all the tests.

We’d already worked out which tests were most likely to fail and were executing these first. Now there was talk of running the tests overnight instead of with each commit. The team started to question whether we’d finally outgrown our release approach and looked at moving to scheduled releases to give us more time to execute the vast number of automated tests.

Fortunately, we decided to analyze what we were actually testing before doing anything drastic. As we dug deeper we discovered that we had a number of problems — we were running automated tests on features that were fine to break as long as we fixed them soon after (a.k.a. non-critical features); we also had a significant number of tests for each feature when, in fact, a simple sanity test would often have sufficed.

We’d fallen into the biggest automation trap: believing that more automation would make us safer.

As we switched our test approach to only automate the things that really mattered, we were able to get our release cycles back to a reasonable duration and avoided having to change our release cadence.

Lesson Learned – Only automate the things you actually care about.

Lesson #4

Test automation can be a powerful tool in helping to create fast and flexible release processes. As the rise of DevOps and Continuous Delivery continues, we should embrace the fact that testing is such an integral piece of the puzzle. Building true cross-functional teams around the skills we really need rather than job titles, and creating and sharing a testing vision, avoids many of the test automation pitfalls. Teams that agree on the end goal are less likely to fall into the “automate everything” trap that can be the undoing of so many projects.

As you go through your testing journey, you’ll make mistakes and discover unexpected opportunities. The most important lesson of all is to use other people’s stories and lessons as a starting point for your own, rather than treating them as a direct set of instructions.

I remember listening to a talk about how Google had achieved incredible things with their test automation. It was inspiring, with so many technical and cultural issues solved by a deep, fully featured test suite. Finally, someone in the audience asked how long all the tests took to run. The answer: six minutes. We were amazed, until we discovered that Google used whole data centers to run their tests.

Most of us can’t ever hope to replicate Google’s test execution power, but that doesn’t mean the talk was wasted. Taking the inspiration of what can be done and then using it to pragmatically approach your own situation will help you create your own success.

Lesson Learned – Your team is unique.

About the Author: Amy Phillips is an Engineering Manager at MOO. She manages the Platform team and supports cross-functional product teams in their quest to build awesome products. To learn more about Amy, visit her website or follow her on Twitter.

Filed Under: Test Automation

Join Us for a Ministry of Testing Meetup Featuring Angie Jones

June 14, 2018 By Alex McPeak Leave a Comment

SmartBear and Angie Jones are joining forces to teach you everything you need to know about setting up an automation strategy for success in our upcoming Ministry of Testing meetup. Trust us — this is one you don’t want to miss.

Angie Jones is a Senior Automation Engineer at Twitter who is a guru of strategic test automation. She is continuously sharing her insights and experiences with the global testing community at conferences such as Romania Testing Days, StarEast, Selenium Conference, and of course Agile Testing Days USA, where she will be delivering a keynote prior to the meetup.

We are ecstatic to welcome her to the SmartBear headquarters to discuss “Which Tests Should I Automate”. As automation takes the spotlight for faster testing, it’s important for teams to realize that not every test should be automated. Angie will walk you through the process of deciding which test cases should be automated and which you should reconsider.

Additionally, Bria Grangard and Joan Liu of the CrossBrowserTesting team will be covering how you can approach issues of speed and scalability within your organization, arming you with a checklist of action items for supercharged test automation that you can implement right away.

If you’re in Boston, come by the SmartBear HQ in Assembly Row on June 26 to enjoy snacks, drinks, and mingling, and to catch all three talented presenters live.

To register, visit the Ministry of Testing Boston meetup page.

If you can’t come in person, you can still tune in remotely by streaming on YouTube Live. Save it to your calendar now.

Filed Under: Test Automation

Selenium vs. Cypress: Is WebDriver on Its Way Out?

June 12, 2018 By Alex McPeak 2 Comments

There’s a new kid on the block for open source test automation tools, and everyone’s talking about how Cypress may be an alternative to Selenium.

But while some have posed that Cypress signals the end of an era, we’re not so sure that Selenium is going anywhere anytime soon.

To better understand the similarities and differences between these two tools, let’s take a closer look at what each one does.

Selenium

Selenium WebDriver is the open source automation tool we all know and love. With a history starting from the humble beginnings of IDE and transforming into Selenium 2.0 a.k.a. Selenium WebDriver, Selenium has become a foundation of many modern software teams’ test automation toolkits.

Most are familiar with how Selenium works — it makes direct calls to the browser using each browser’s native support for automation. By writing a script in one of the supported programming languages through Selenium’s language bindings, you can run tests that work as if you were driving the browser yourself.
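
For readers who haven’t written one before, here is a minimal sketch of what such a script looks like with the selenium-webdriver Node bindings; the page and the element queried are placeholder examples.

```typescript
import { Builder, By, until } from 'selenium-webdriver';

async function main(): Promise<void> {
  // WebDriver translates these calls into the browser's native automation support.
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://example.com'); // placeholder page
    const heading = await driver.wait(until.elementLocated(By.css('h1')), 5000);
    console.log('Page heading:', await heading.getText());
  } finally {
    await driver.quit();
  }
}

main().catch(console.error);
```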

While the learning curve can be steep for Selenium and requires the knowledge of at least one programming language, it has a vast community of active users and even boasts an annual user conference.

One of the foundations of Selenium is the ability to test on the most popular browsers, making it essential to cross-browser testing.

As testing with Selenium has grown in popularity, it has become not only a basis of many automation strategies but also an integral part of many third-party tools such as CrossBrowserTesting. Once you know how to use Selenium, you can use these cloud testing tools to access thousands of browsers and leverage parallel testing to cut down on testing time.
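
Pointing an existing script at a cloud grid is usually just a matter of swapping the local builder for a remote endpoint. Here is a hedged sketch assuming a generic Selenium Grid URL and illustrative capabilities; a provider like CrossBrowserTesting documents its own endpoint and options.

```typescript
import { Builder } from 'selenium-webdriver';

async function main(): Promise<void> {
  // The hub URL and capabilities below are placeholders; substitute the
  // endpoint and options your grid or cloud provider documents.
  const driver = await new Builder()
    .usingServer('http://localhost:4444/wd/hub')
    .withCapabilities({ browserName: 'chrome' })
    .build();
  try {
    await driver.get('https://example.com');
    console.log('Title:', await driver.getTitle());
  } finally {
    await driver.quit();
  }
}

main().catch(console.error);
```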

Cypress

So how is Cypress different from Selenium, and how is it the same?

First, Cypress is said to have an easier onboarding than Selenium: all you need to do is install the .exe to get up and running, since all the drivers and dependencies are installed automatically. With Selenium, on the other hand, you need to spend a little more time getting everything up and running.

One of the advantages of Selenium is that it supports most of the major programming languages, including Java, Perl, PHP, Python, Ruby, and C#. If your team is already using one of those languages, Cypress may not be ideal, since it only supports JavaScript.

In addition, since Cypress tests are written using Mocha and Chai, the syntax echoes what most JavaScript users understand. This means that if you work with JavaScript, it will be especially easy to start using Cypress. However, if you’re more familiar with another language, the learning curve will be steeper.
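
As a rough illustration of that Mocha-and-Chai-flavored syntax, here is a minimal hypothetical Cypress spec; the route, copy, and selectors are invented for the example, and file-path conventions vary by Cypress version.

```typescript
// Hypothetical spec file, e.g. cypress/e2e/paywall.cy.ts
describe('premium article paywall', () => {
  it('prompts logged-out visitors to subscribe', () => {
    cy.visit('/articles/premium-feature');   // placeholder route
    cy.contains('Subscribe to keep reading') // Chai-style assertion
      .should('be.visible');
    cy.get('a.subscribe-link').click();      // placeholder selector
    cy.url().should('include', '/pricing');
  });
});
```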

Cypress almost follows a “freemium” model. Similar to how Selenium is entirely open source unless you integrate it with a paid tool, you can add to Cypress’s core capabilities with the paid dashboard, which includes a test runner that lets you easily debug tests with DOM snapshots and includes video storage for private projects.

Selenium and Cypress differ in their architecture, as well. Selenium is made up of bindings, or libraries, and the WebDriver, which controls the browsers. These two components communicate over the JSON Wire Protocol.

Alternatively, while Cypress is used for UI testing, it uses its own unique DOM manipulation and runs directly in the browser with no network communication. Cypress is often cited as a developer-friendly test automation tool for this reason, since network-related flakiness is a common frustration for developers working with Selenium.

Having a different architecture on top of an Electron app also allows Cypress to sit so close to the web application under test that it can do things you can’t do in Selenium, such as stubbing DOM APIs.

One of the biggest gaps between Selenium and Cypress is that while Selenium runs against different browsers, as mentioned before, Cypress only supports Chrome. Selenium’s browser support is often integral to those who use it, making Cypress severely lacking in the area of cross-browser testing.

For software teams that find importance in checking their website on multiple browsers instead of just leaving their code up to chance, finding their way around this deficiency could prove challenging to Cypress users.

Cypress vs Selenium

Cypress and Selenium serve a similar purpose that is achieved in two different ways.

Selenium is a library but requires a unit testing framework or a runner plus an assertions library to build out its capabilities. Cypress provides a robust, complete framework for running automated tests but takes some of the freedom out of Selenium by confining the user to specific frameworks and languages.

If you don’t know what you’re doing when it comes to building out a test automation framework, Cypress may be a good option.

Another thing to keep in mind is that if you look at how Cypress is built, it is largely a unit testing tool that will fit best in JavaScript-focused development teams. Once you stray from these details and your team decides to experiment with other methods of test automation, you’ll find that Selenium can better accommodate those growing pains.

As a tool made by developers for developers, Cypress is less a replacement for Selenium and more a way to introduce developers to test automation.

As with most decisions in test automation, the tool you pick and the method you follow will largely depend on what your team decides and what works best for your organization. However, the notion that Selenium is on its last legs is misguided.

If you’re still confused as to the differences between Cypress and Selenium, Automation Rhapsody provides a really great breakdown of how each performs in different categories.

Filed Under: Test Automation Tagged With: cypress, Selenium, test automation, testing frameworks

Webinar: Meet the QA Team Behind America’s Test Kitchen

June 4, 2018 By Alex McPeak Leave a Comment

America’s Test Kitchen is all about perfecting the recipe for success, literally.

With two TV shows, three print magazines, and four different websites, they’ve built a customer base that is hungry for quality culinary content. But their users are also paying close attention to how they’re being served their online experiences and want to consume them in a way that works best for their lifestyle.

This means the QA team is under a microscope when it comes to releasing a Michelin star web application that works across a large selection of devices and browsers. Whether users are taking a cooking course, shopping for kitchenware, or watching one of the pros whip up a delicious dinner, America’s Test Kitchen strives to satisfy from the backend to the front of the house.

Join CrossBrowserTesting and America’s Test Kitchen QA Manager Megan Collins & QA Engineer Michael Ng as we go behind the counter to learn about what kind of testing is needed to keep up with a company that aims to empower and impress millions of home cooks.

In this webinar, you’ll get a taste of:

  • What the America’s Test Kitchen QA strategy looks like
  • The incredible growth of the testing team and their shift to automation
  • How they use CrossBrowserTesting to serve up their cross-compatible web app on a silver platter

Save Your Seat

Filed Under: Test Automation Tagged With: america's test kitchen, test automation, webinar

6 Questions to Ask Yourself Before You Decide What Test Cases to Automate

June 1, 2018 By Alex McPeak 1 Comment

Even just the idea of agile test automation is enough to get the attention of most software teams. But while the idea of faster testing may seem like the silver bullet, there’s a time and place for automation.

Before you jump in head first, you should evaluate how automation actually fits into your organization. By asking the questions below, you can get a better idea as to whether automation is the right move, or if you should take another route.

  1. What’s the risk involved? If you’ve performed a risk analysis of your application, you probably have a good idea of which test cases are the most important to check with each code change and which are a lower priority. Consider how human error could affect test data if a test isn’t automated. Think about which test cases would critically impact the customer or cause internal team downtime if they broke. Additionally, you might want to automate high-traffic pages, features that depend on security, and problematic areas or other functions that rank high in your risk analysis.
  2. How often am I going to run this test again in the future? Running the same test more than once may seem repetitive and tedious, but if you’re not actually going to be running it frequently, it might not be worth it to automate. For example, if you know you’re going to have to test a feature two or three times but won’t be keeping it as a permanent part of the application or won’t be testing it after that, you might want to rethink automation. The tests you decide to automate should be a foundation of your regression testing strategy, so if you don’t feel like they’re a critical part of the application to be checked in every code change, it may be best to keep them separate.
  3. How much time am I willing to invest? When automation requires so much time and maintenance, you have to determine the ROI of your efforts. Some regression tests may seem like they’re a candidate for automation but upon closer inspection, might not be worth the energy for the value they’re providing. Automation is not a quick, one-and-done solution to your problems. Take a lesson from Elon Musk — you don’t want to drop automation into the mix without a second thought. Spend the time making sure your first test is checking all the boxes and meeting requirements. Then determine how often it will need to be updated or reevaluated in order to establish whether or not automation is worth your time investment.
  4. Are there parts of this test case that still need to be explored? You might think that you’re ready to automate, but keep in mind that automation comes after a thorough exploratory testing session. Just because you can automate doesn’t mean you should, and many tests are better off being performed manually. Are there still instances that could benefit from human observation? While you can’t expect to manually cover every part of the application, getting a foundation during exploratory testing is what leads to more successful, stable, and insightful automated tests.
  5. Does automating support the needs of my team? If you’re automating a test case, you want to make sure the tests you write will actually answer the questions your team is asking. First, take a hint from Bas Dijkstra, and discuss the “why” before firing up a test automation tool. While Selenium is a great tool for automation, understanding the nuances of its many commands and capabilities can take some getting used to, so if you’re not able to write a script that encompasses all of the requirements your team is requesting, another method of testing may be better. If you still feel that automation is the best process for the job but don’t feel confident in your Selenium skills, a tool like Record & Replay may help you achieve your testing goals. There are many choices when it comes to automated testing tools, so you need to make sure the one your organization uses will support the testing you’re trying to accomplish.
  6. Why are you automating? Sometimes the simplest questions give the most telling answers. If you’re automating because you think it’s what you should be doing or because your boss is telling you to, you might not be automating your tests for the right reasons. Your objective with test automation should be to provide insight to you and your team and to give faster feedback. If you have ulterior motives, your efforts won’t take you very far.

Conclusion

Even if you could automate every test, you would quickly find it’s not helping you or the rest of your team very much. Next time you’re deciding on whether or not to automate, ask the right questions first.

Filed Under: Test Automation Tagged With: automated testing, test automation, testing tools

How Every Team Member Can Leverage a Test Automation Tool

May 24, 2018 By Alex McPeak 2 Comments

With the introduction of test automation tools such as CrossBrowserTesting, it’s easier than ever for teams to meet the demands of Agile, CI/CD, Continuous Testing, and DevOps. However, while testing used to be a job strictly for QA, software teams are finding that this no longer applies to their development process. As organizations continue to shift left, the lines continue to blur between different roles, and testers aren’t the only ones testing anymore.

If your entire team is ready to take on more testing, here are a few ways each member can best leverage a test automation tool.

Digital Marketers – Think that your marketers have no use for testing? Think again. Anyone on your team who cares about Search Engine Optimization (SEO) will likely be aware that mobile responsiveness is a factor when it comes to domain authority. In fact, Google has stated that responsive design is the recommended design pattern, which means that web applications optimized for mobile will actually show up higher in people’s search results. Your marketers may want to take advantage of a visual testing tool so that they can evaluate priority pages on different devices and screen sizes to determine whether or not they’re responsive. Ensuring mobile-friendliness will also benefit SEO by accelerating site speed, decreasing bounce rate, and boosting social sharing, making testing a marketer’s secret weapon.

Product Managers – Testing doesn’t have to (and shouldn’t) wait until the end of the SDLC. In fact, it can take place as soon as the planning stage, especially when shifting left. Product managers can expertly use the data of past test results to influence the direction of the product, but they can also perform their own testing to make immediate improvements and determine what changes should be made in the application next. By focusing on the usability of the application, product managers can perform exploratory tests to better understand which forms could be more intuitive or determine whether the site is optimized for accessibility, for example. Additionally, product managers might find value in testing recently implemented features that impact usability (homepage layout, checkout process, etc.) to make sure they fit into the buyer journey the way they intended.

Designers – If your designers have been manually testing your website to make sure it looks good across browsers and devices, that’s a good first step, but they shouldn’t stop there. Visual testing allows them to automatically capture screenshots of a URL on tens of browsers and compare them side by side. By picking a base browser, they can see where there are differences in each layout and decide which are acceptable and which need to be fixed. Whether an app is off by one pixel or a hundred, your designers can determine for themselves whether everything looks the way they want it to across configurations. It’s also important to make sure everything works in addition to looking exceptional, which is why Record & Replay helps test across the same browsers to make sure things like buttons, navigation, and links are all working as expected.
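
For a sense of the capture step underneath a workflow like that, here is a minimal sketch using the selenium-webdriver Node bindings against locally installed browsers; the URL, browser list, and file names are illustrative, and a visual testing service adds the side-by-side comparison on top.

```typescript
import { promises as fs } from 'fs';
import { Builder } from 'selenium-webdriver';

// Capture the same URL in several browsers so the resulting layouts
// can be compared side by side.
async function captureScreenshots(url: string, browsers: string[]): Promise<void> {
  for (const browserName of browsers) {
    const driver = await new Builder().forBrowser(browserName).build();
    try {
      await driver.get(url);
      const png = await driver.takeScreenshot(); // base64-encoded PNG
      await fs.writeFile(`homepage-${browserName}.png`, png, 'base64');
    } finally {
      await driver.quit();
    }
  }
}

captureScreenshots('https://example.com', ['chrome', 'firefox']).catch(console.error);
```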

Developers – While we believe that every software development team should have its own dedicated QA team, that’s not to say there are no times when developers should test their own code. It goes without saying that when a new feature is added, a change is made, or a bug is fixed, the developer should test the new implementation to ensure its basic functionality. For longer test cases or more elaborate projects that restructure large areas of the application and require more extensive, end-to-end testing (such as you might see with a website rebrand), developers might want to run one-off tests without taking up too much time. By leveraging prior programming knowledge to learn Selenium commands, developers can ensure functionality without spending hours on testing. Instead of manually executing a basic test case, they can write an automation script in a fraction of the time and still cover large areas of code to check whether it passes or fails.

Manual Testers – Manual testing will always be a crucial part of QA because it depends on the curiosity, observation, and skill of the individual to execute tests that provide insight to the rest of the team. However, while exploratory tests can be the best part of the job, manual regression tests can be the bane of a tester’s existence. Despite the name of the role, manual testers no longer have to be confined to strictly manual testing. With test automation tools like Record & Replay, manual testers can automate the tedious, boring, and repetitive testing they’ve come to dread without having to learn a programming language or get up to speed with complex testing frameworks.

Automation Engineers – For the tester who’s mastered Selenium and has too many UI tests to count, how can they speed up testing to take it from days to hours? Parallel testing is the key to making your automation time even faster. By testing in multiple browser configurations at once instead of one after another, parallel testing lets you divide the time spent running your suite. Just by running a test in two browsers at once, you’re already cutting your automation time in half, because you’d be testing both at the same point in time. You can see where this really pays off as you continue to add more tests, create longer suites, and run them on more browsers.
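
To sketch that arithmetic in code: with the selenium-webdriver Node bindings, running the same placeholder smoke test in two browsers concurrently makes the wall-clock time roughly the duration of the slowest run instead of the sum of all runs.

```typescript
import { Builder } from 'selenium-webdriver';

// Placeholder smoke test, executed once per browser.
async function smokeTest(browserName: string): Promise<void> {
  const driver = await new Builder().forBrowser(browserName).build();
  try {
    await driver.get('https://example.com'); // placeholder page
    console.log(`${browserName}: title is "${await driver.getTitle()}"`);
  } finally {
    await driver.quit();
  }
}

async function main(): Promise<void> {
  // Sequential runs would take the sum of both durations; running them
  // concurrently takes roughly as long as the slower of the two.
  await Promise.all(['chrome', 'firefox'].map(smokeTest));
}

main().catch(console.error);
```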

Conclusion

When everyone in the SDLC is moving fast to implement and deliver new features, testing is no longer a one-man job. In order for other team members to meet their deadlines, they must also be held accountable for some of their own testing.

While we can never replace the important work of our QA teams, there are tools that can help ensure that everyone plays a part in ensuring speed and quality from planning to deployment.

Filed Under: Test Automation Tagged With: developer, marketing, product manager, QA team, test automation, web designer

What Elon Musk Taught Us About Test Automation

May 16, 2018 By Alex McPeak Leave a Comment

As Elon Musk continues to “wow” the world, he’s grown quite the fan base among the tech community.

While most of us can only dream of founding the next Tesla or Space X, there’s plenty of wisdom we can gain from Musk, much of which he shares over social media.

Like any great leader, Musk knows that we often learn more from our mistakes than our successes, which is why he was the first to admit he had messed up with the Model 3 production.

Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.

— Elon Musk (@elonmusk) April 13, 2018

He attributed the operational mishap to an excess of advanced automation, saying that “humans are underrated”.

What came of this failure was actually a lesson that is extremely valuable in testing — automation is great, but it can’t be used all the time. Humans are essential to ensure quality of the highest standard and to catch mistakes that machines won’t always recognize.

Trouble in Production Paradise

Being Elon Musk isn’t easy. In the last year, he’s faced his fair share of challenges, but one of the most recent has been the production delays for Tesla’s Model 3 Sedan.

As one of the first affordable Tesla models, it’s no surprise the pre-order numbers were sky high. In response, Musk used an assembly line run by automated robots for fast production, projecting to build 5,000 cars every week to meet demand.

In reality, the number Musk was able to achieve with this method was closer to 2,000. Once he realized that the degree of automation was actually slowing down the process, he stepped in and took over.

Though automation seemed like the perfect solution for quickly and consistently building the cars, Musk acknowledged that he had gotten complacent about some of the new technology, putting it all towards the Model 3 without staging it first, according to CBS.

In turn, Musk scrapped the production model and spent countless long nights working himself, because that’s what you do when you’re Tesla’s CEO and have to deliver thousands of products every week.

While the robots were impressive, they didn’t stack up to the complicated and detailed process that went into making the Model 3. It seems Musk could have benefited from spending more time learning the ins and outs of the machines and designing a plan specifically for the Model 3 Sedan project.

Additionally, if there had been a balance of automation and manual work, the two processes could have supported each other in a symbiotic relationship. Instead, an overreliance on automation caused the production downfall.

Our Fascination with Automation

We all love to love automation because it’s new, shiny, and a lot of times it’s a great solution to manual work.

However, in our excitement over the possibilities of automation, many of us have experienced too much of a good thing. By overdoing automation, oftentimes we are negatively impacting productivity.

Through Elon Musk’s trials and tribulations with automation, he effectively taught the software testing and development teams of the world a few important lessons:

  1. Effective automation depends on the quality of your first “robot” — if you don’t dedicate the patience and precision required, you will see your results fall short.
  2. The job isn’t done once the bots are built. Automation depends on extensive maintenance and upkeep.
  3. When automation backfires, you must step in to manually alleviate the damages and make necessary alterations.
  4. There will always be a place for manual testing. In the discussion over whether or not AI will take over these jobs, there is no question that human engineers will continue to play an integral part in the QA process.
  5. Automation follows a pattern. If there are any new obstacles or factors that haven’t been considered, it disrupts this pattern and affects the results.
  6. Learning the intricacies of new technology is critical. Before diving in headfirst, you have to spend the time becoming familiar with how it functions and what you can expect.
  7. Planning is critical. It’s easy to underestimate the amount of time, cost, and effort that goes into putting an effective automation strategy into effect, and sometimes manual work will be more productive.

If you’re beginning to implement automation in your organization, or if you’re finding it hasn’t been giving you the results you were hoping for, take a step back and reconsider your test automation strategy.

Automation is not a quick fix — it requires extensive planning, monitoring, maintenance, and upkeep. Elon Musk himself proved that you can’t just expect one robot to solve all your problems.

Filed Under: Test Automation

How Speak Easy is Increasing Diversity at Testing Conferences

May 8, 2018 By Alex McPeak Leave a Comment

Have you ever gone to a tech conference and realized that the majority of the speakers look the same? Perhaps you’ve even witnessed a lineup that was lacking diversity completely.

I’m on that list. I’m most definitely not comfortable with that line up at all, nor is my company.
I hadnt even looked at any of it since accepting.
I’ve just emailed them to cancel my attendance, thank you for bringing this to attention.

— Richard Bradshaw (@FriendlyTester) April 10, 2018

It’s not just you. The disparity of women and minorities in tech is often reflected in speaking lineups at testing conferences. However, it’s not necessarily because there are none interested; it often comes down to other factors.

That’s why we asked Speak Easy Co-Founder Fiona Charles to tell us a little bit more about some of the obstacles underrepresented groups face, how mentorship can provide the support and encouragement they need to get accepted to their first conference, and how conference organizers can provide more opportunities for diverse speakers.

Q: How does Speak Easy work?

A: We’re a volunteer organization that matches aspiring speakers with mentors who are themselves experienced speakers. We don’t dictate the interactions; we don’t tell our mentors how to mentor. Most of the time speakers are very generous with their time and have good ideas for getting started.

That’s a philosophically important statement about Speak Easy — we believe in context, partnership, and in working out the relationship to understand what the aspiring speaker needs.

Some people need help structuring a coherent talk and practicing it, some people need help understanding they have something to say. There are other things, too, like writing proposals, so they get help on all kinds of things depending on what they’re looking for in a mentor.

Q: What is Speak Easy’s mission?

A: We exist primarily to promote gender diversity at tech conferences. That’s how we started — Anne Marie [Charrett] and I were at a speaker event and looked around the room and said, “Where are all the women?” So we decided we wanted to do something about that, and that’s how Speak Easy started.

It doesn’t mean we don’t help men as well. Really, it’s about helping women, underrepresented groups, and new speakers get into testing conferences.

Q: Why do you think there’s a lack of diversity at these conferences to start with?

A: One reason is that the tech industry is lacking gender balance, but another is role models.

The more women you have on a platform, the more you’re going to see. If you have four keynotes and none of them are women, you’re going to get fewer women wanting to come to conferences and fewer women thinking they could speak at one.

It’s about seeing people that look like you doing interesting things and feeling welcome because they are being given prominence at a conference, and I think that’s true whether that’s gender, racial diversity, ableism, or any other underrepresented group. So for us, it’s about helping women and new speakers understand that they are welcome and working with conferences to make sure they are welcome.

Q: What do you think conference organizers can do to attract more diverse speakers?

A: They can work with us to attract new speakers, which is a big part of our mission, because it can be intimidating to stand up and speak at a conference. But they can also do other things to make people feel welcome.

Codes of conduct are very important. They’re symbolic in the sense that a code of conduct says something about what the conference organizers believe in, and if used properly, it ensures you have a respectful atmosphere for diverse participation.

I think that it’s also incumbent on conference organizers to make an effort to invite more diversity, to actually go out and invite more women, more POC, more people who are disabled or in another minority group, to ensure they have a diverse platform.

Q: What’s the benefit for conferences to feature new speakers?

A: It’s really important to have new voices. I’ve done a lot of speaking, but how many times do people really want to hear me?

You want people who are engaged in the industry, who are discovering new things themselves and coming up with new ideas. We need to keep our ideas fresh, and different points of view are going to help that. Even if they might not have a different view on a testing topic, there could be a new perspective on how you operate in the workplace.

Q: What’s the benefit for new speakers in terms of professional development?

A: One is that if you speak at a conference, you get in for free. There’s a whole question there around paying to speak because if you have to travel, your expenses might not be covered. There are conferences that do pay for travel and those that do not, but speaking is certainly one way to get yourself into a conference without having to pay.

In terms of speaking, it really depends on where you want to go with your career, but I think speaking at a conference is a way to build your own confidence in your abilities. Certainly, someone who is confident in expressing their ideas and expressing them well is going to have an advantage in the workplace, whatever they’re doing. It’s a way of getting affirmation that your ideas are valuable.

Q: Why should testing conferences consider covering speaker expenses? How does this encourage speaker diversity?

A: It’s the speakers who make the conference happen. You can’t put a conference on without speakers.

When I started, there was an idea that you had to pay your dues — you pay to speak because it helps you get started, and then after a while you wouldn’t have to, because people would invite you and pay your expenses. But that’s a tough way to start.

It needs to be easier for new speakers to get started, and I think it can be particularly difficult for women to justify the expense of conferences for sociological reasons.

It also affects people from other countries. Someone from North America may be able to afford to speak at a conference in North America, but for people in other countries who may not be earning the same, paying those expenses and traveling internationally is a different matter.

Q: What do you look for in mentor volunteers? On the other hand, what can someone do to be a good mentee?

A: Mentors should have the experience of speaking at multiple conferences because different conferences present different opportunities, and the more you do it, the more you add to your toolbox to help other people.

Mentees should, of course, listen to what the mentor has to say. You don’t have to take it as gospel — think about what will work for you. That’s kind of the difference between mentoring and coaching, which is a distinction that doesn’t always get drawn.

When I’ve mentored someone as a test manager, I’ve expected them to approach me first. I think a big part of the mentoring relationship is that it’s the person being mentored calling the shots.

You need to be proactive. You may not know all the questions you want to ask, but you need to be the one that takes the initiative to approach the mentor and start developing an idea of what it is that you need and what it is you’re going to do to come up with an engaging talk.

Q: Why should new speakers apply through the Speak Easy program?

A: We have an impressive list of experienced speakers as mentors, but also because we engage with conferences to form a partnership. Each conference we partner with will reserve at least one spot for our speakers, so you have a much better chance of getting your proposal accepted if you go through us.

Q: What’s your advice for new speakers?

A: Have a clear story and present your own experience; don’t speak from theory. Speak from ‘we tried this and that’s how it worked out for us’. Talk about things that didn’t go well in addition to things that did. People are very interested in hearing about other people’s experiences, and stories will help make the learnings stick.

Filed Under: Events, Test Automation Tagged With: conferences, diversity, professional development, Selenium Conference
