CrossBrowserTesting.com

A Design, Development, Testing Blog

7 Reasons Your Testing Team Will Love Record & Replay

April 30, 2018 By Alex McPeak

Imagine there was a way to have your manual tests recorded so you could easily replay them on hundreds of browsers without needing to perform the same exact actions over and over again. Record & Replay does just that, allowing you to test once and play it back on the largest selection of browsers in the cloud.

This may seem too good to be true, but many testers have already experienced the power of a codeless automated testing tool. If you’re still a little lost, we’ll make it easy for you. Here’s exactly why you should be excited about Record & Replay:

  1. No coding knowledge needed – Have you been wanting to experience the speed of test automation but having trouble getting started with Selenium? If you don’t know how to code, you have a long way to go from learning a programming language to writing your first working script. Luckily, with Record & Replay, no coding knowledge is needed. Rather than writing scripts, all you have to do is perform the actions and CrossBrowserTesting will record them. If you’re struggling with complicated frameworks and commands to create a successful Selenium test, Record & Replay is a hassle-free way to automate with the skills you already possess.
  2. Ability to edit tests – What happens when you perform the wrong action in Record & Replay or want to change part of a test? No need to stress, all you have to do is go back and edit recorded scripts. Delete mistakes with one click, or go in and edit your script by clicking on the script name and “Edit Steps”. There, you’ll see all the recorded values for each step variable, and once you’ve made your adjustments, all you have to do is save it to put your changes into action. Whether it be a typo or missed step, there’s no reason to spend valuable time redoing an entire test when all you need to do is fix one part of it.
  3. Unbeatable browser selection – There’s no question — CrossBrowserTesting offers Record & Replay on the widest selection of browsers. In fact, most of our competitors aren’t doing it at all. But we aren’t here to brag, we just think that a test is only as valuable as the number of browser configurations it can be run on. That’s why it makes sense that Record & Replay users have access to our numerous configurations, which means whether you want to test IE 8 or the newest Chrome release, you have the resources to check cross-browser functionality where and when you need to.
  4. Parallel testing capabilities – What’s better than automating one test at a time? How about automating ten browsers in parallel? With Record & Replay, it takes seconds to select your test suites and play them back at the same time against different browsers. Parallel testing gives you time back to focus on exploratory testing and untouched code in your application. Stop running test after test in different browsers, and start running them all at once to make time for the tests you haven’t run yet.
  5. Access to recording tools – Since tests are recorded as a script, sometimes you may want to insert certain commands that you can’t do manually to get the results you want. For example, if you want to verify element properties to keep visual consistencies across different browsers, you might need extra capabilities to allow you to do that. The tools included in Record & Replay allow you to add quick commands such as expects, sleeps, and hovers to make sure you’re making the most of your tests.
  6. Start shifting left – Software development teams aren’t shifting left because it’s the cool thing to do, it’s actually just a really good way to make sure quality is accounted for earlier. Test automation has been a big part of the shift-left movement, allowing teams to test faster and thus earlier and more often. For teams that haven’t yet made the shift or who are just getting started with automated testing, Record & Replay provides an on-ramp to automation, and sharing capabilities through Slack and Jira encourage open communication with your entire team so you can get testing involved sooner in the software development lifecycle.
  7. Testing for all – As the roles in the software development lifecycle begin to overlap and blur, testing isn’t just for testers anymore. While we do believe that organizations should depend on a dedicated QA team, there is more room for other software stakeholders to take part in some small testing tasks as well. Previously, the ability to automate was only available to automation engineers or testers with programming knowledge. Now anyone, from developers and product managers to marketers and designers, can take part, so your entire team can run like a well-oiled machine.

Conclusion

If your QA team is still playing a game of catch up to a development team that’s continuously integrating new code, you have a clear need for an automated testing tool for continuous testing.

Trying to understand automation can be daunting, but it doesn’t have to be. Record & Replay can help you take the first steps to faster testing and increased code coverage to support your team from planning to deployment.

Filed Under: Test Automation Tagged With: automated testing, record and replay, test automation

The 5 Ways Parallel Testing Can Instantly Improve Your Workflow

April 24, 2018 By Alex McPeak

You’ve probably heard of parallel testing before, but the ability to multiply your web testing power without multiplying your testing time can seem like something of a QA fantasy if you haven’t tried it yet. Once you experience parallel testing, though, you’ll never want to go back to life before it.

While there are probably hundreds of benefits to parallel testing, everyone will realize its value in different ways. In case you need to be convinced (or you need to convince your boss), we rounded up the top five advantages you can expect from parallel testing.

  1. More Browser Coverage – Take one test, run it across multiple browsers at once — that’s the beauty of parallel testing. If you’ve been running tests on different browsers one after the other, or even just skipping browser testing altogether, you’ll realize how valuable parallel testing is after your first time trying it. Instead of only testing on one or two browsers, you can substantially increase your cross-browser coverage with parallel testing, hitting configurations you didn’t even realize were an issue. Plus, with CrossBrowserTesting’s broad selection of browsers, operating systems, and devices, you have seemingly endless options for checking the functionality for any and every user.
  2. Less Manual Testing – We’re not saying manual testing is a bad thing — in fact, it’s a pillar of a successful testing strategy. However, when you’re manually performing the same regression tests over and over on different browsers, you’re not making the most of your time. Not only is this boring, tedious work, you’re also risking human error and inaccurate test data by trying to repeat the same tests. Parallel testing allows you to repeat these tests in regression without worrying about skipping a step so you can be confident you’re reaching all the right parts of your application.
  3. Saves Valuable Time – Besides increasing test coverage, parallel testing also decreases the amount of time you’re actually spending on each test. Since you’re running multiple tests at the same time, rather than one after the other, your testing gets done in a fraction of the time it normally takes. In fact, just running 2 tests in parallel doubles your speed, and the more you run in parallel, the more time you save that can be spent elsewhere. This means you can redirect that energy to exploratory tests and new parts of your application while still meeting your deadlines.
  4. Improving the Development Workflow – Not only does parallel testing make your job easier, it allows for better communication between departments and improves the productivity of your entire team. By allowing for faster feedback and better reporting, parallel testing equips you with the ability to relay results from testing back to your team earlier in the development cycle. This means that parallel testing will be useful in your organization’s effort to shift left and implement Continuous Testing to get iterations to market faster and with fewer bugs.
  5. Go Scriptless – With CrossBrowserTesting, you now have even more ways to run your tests in parallel. With our Selenium grid, you can use your Selenium skills to write tests and run them in parallel. With Record & Replay, you can also achieve parallel testing with zero coding knowledge needed by executing a test manually and then rerunning your suites across the browsers you choose. This means teams who are new to automation and teams experienced in Selenium can both benefit from the capabilities of parallel testing.

We could talk all day about how amazing parallel testing is, but don’t take our word for it. Whether you’ve automated 100 tests or 1, you can easily start running tests in parallel now to experience more code coverage with less time spent on manual regressions.

Try your hand at running a test in two browsers at once then take it to the next level by adding more, and witness pure testing magic right before your eyes.

Filed Under: Test Automation Tagged With: parallel testing, record and replay, test automation

Record and Replay Testing vs. Scripting: Which is Right for You?

April 19, 2018 By Alex McPeak

With so many tools on the market claiming to help you automate your tests, which do you choose? Between Selenium and record and replay testing, it can get a little confusing trying to differentiate and pick the best one for the job.

When it comes down to it, the most effective way to determine which path is better is to understand what the differences are and what you and your team need in a testing tool. All test automation tools have certain strengths; once you know how to leverage them, the answer becomes clear.

What is Record and Replay Testing?

Record and Replay, otherwise known as codeless automation, is a way to run tests without programming knowledge. This is done using a tool, like CrossBrowserTesting, that allows you to manually perform actions in the browser and save them as a test. You can then take this test and rerun it with a click, making regression and browser testing infinitely faster.

This means that manual testers or testers who are still new to Selenium can easily create automated tests. In addition, it allows other roles such as product managers, developers, designers, and even marketers to participate in testing without having to learn an entirely new tool.

Record and Replay is the lightweight solution for test automation. Its value is most prominent for teams that are still transitioning from mostly manual testing and want to include some automation in order to speed up testing and help integrate it earlier in the software development lifecycle.

While Record and Replay may not be the ideal solution for teams who are well-versed in the intricacies of Selenium and who have lofty goals for code coverage on a multi-faceted web application, it is a great way to get started with automation.

Record and Replay is good for:

  • Individuals with little or no programming knowledge
  • Filling in the gaps of Selenium tests and transitioning from mostly manual testing
  • Lightweight automation for smaller tests
  • Non-technical roles doing one-off tests
  • Teams where members outside of QA take part in some testing

What’s Scripting?

Scripting is what you might traditionally think of when you hear test automation — writing a test using Selenium commands and coding knowledge. In order to do this, you need a foundational understanding of at least one programming language as well as of what makes up a Selenium script and Selenium commands.
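
For instance, here’s a minimal sketch of what such a script can look like, using the Node selenium-webdriver package in the same style as the examples elsewhere on this blog. The URL and element IDs are hypothetical placeholders, not a real page:

var webdriver = require('selenium-webdriver');

// A minimal Selenium script: open a page, act on elements, clean up.
// The URL and element IDs below are hypothetical placeholders.
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

driver.get('https://example.com/login')
  .then(function() {
    return driver.findElement(webdriver.By.id('username')).sendKeys('testuser');
  })
  .then(function() {
    return driver.findElement(webdriver.By.id('password')).sendKeys('secret');
  })
  .then(function() {
    return driver.findElement(webdriver.By.id('submit')).click();
  })
  .then(function() {
    return driver.quit();
  });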

As teams focus on shifting left to incorporate testing earlier and integrate more often, Selenium should eventually be a part of this strategy. However, while Selenium is an invaluable tool for testing, it does have a steep learning curve. Scripting is a great option if there are existing team members who know how to write scripts, but you might want to seek out an alternative if not.

Fortunately, there are plenty of resources for learning Selenium, and anyone who uses it will tell you it’s worth the effort. If your team knows how to leverage frameworks, maintain tests, and create stable scripts that provide insight for your team, you’ve probably already witnessed how valuable Selenium is. Additionally, you can multiply the effectiveness of your tests by leveraging cloud tools like CrossBrowserTesting to extend your browser library and capabilities.

Selenium is good for:

  • Highly technical practitioners that know the ins and outs of writing a Selenium script
  • Enterprise teams with an experienced QA team, expansive web applications, and a high volume of users
  • Creating robust suites and end-to-end tests
  • The largest reach in terms of automation speed and code coverage

Automation for Everyone

While Record & Replay and Selenium have different uses, they do share features in CrossBrowserTesting that will benefit anyone.

A stand-out detail is the ability to test on one of the largest browser selections in the cloud. This means that whether you’re a seasoned automation engineer or first-time tester, you can run a single test across an enormous array of browsers.

Additionally, you have the ability to test in parallel. We can’t say it enough: parallel testing is one of the best ways to effortlessly accelerate the speed and coverage of your testing.

Once your tests are saved in Record & Replay, you can run up to 10 scripts in parallel across dozens of browsers with a few clicks. With Selenium, you use multithreading, which you can read more about in this blog about running your Selenium test in two browsers at once.
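
As a rough sketch of what that concurrent execution can look like with the Node selenium-webdriver bindings, you can create one driver per browser and let the sessions run at the same time. The URL here is a placeholder, and a real setup would typically point each driver at a remote Selenium grid:

var webdriver = require('selenium-webdriver');

// Run the same steps against two browsers at once by creating one driver
// per browser; both sessions proceed concurrently.
function runTest(browserName) {
  var driver = new webdriver.Builder()
    .forBrowser(browserName)
    .build();

  return driver.get('https://example.com') // placeholder URL
    .then(function() {
      return driver.getTitle();
    })
    .then(function(title) {
      console.log(browserName + ' saw title: ' + title);
      return driver.quit();
    });
}

Promise.all([runTest('chrome'), runTest('firefox')])
  .then(function() {
    console.log('Both parallel runs finished.');
  });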

But what if you’re still having trouble choosing? What if some people on your team are automation novices, while others are Selenium professionals? With CrossBrowserTesting, you don’t have to choose. We have both Record & Replay and a Selenium grid, which means that every organization, every team, and every tester can discover the best way to achieve their automation goals.

Filed Under: Test Automation Tagged With: record and replay, scripting, Selenium, test automation

Ease Your Transition to Test Automation with New Record & Replay

April 11, 2018 By Alex McPeak

All the talk about test automation can be a little overwhelming. It seems like everyone is doing it, and in theory, it sounds great — who doesn’t want to release software faster and with fewer bugs?

In practice, though, you’ve probably experienced that making test automation a part of your organization’s day-to-day workflows is another story. Learning Selenium is challenging and getting everyone on board isn’t as easy as you hoped.

Additionally, with faster iterations and release cycles, your QA team isn’t the only one who needs to run tests. In addition to testers, product managers, developers, designers, and even digital marketers are all looking for an immediate means to test their progress.

The fact is we’re all still adjusting to the environment of test automation, and everyone’s at different stages. Unfortunately, hopping on the Selenium bandwagon isn’t as easy as picking a programming language and pumping out a fully-functional and stable script.

So, how do you accomplish your automation goals without being able to write an end-to-end Selenium suite? CrossBrowserTesting now gives everyone on your team the ability to fill in the gaps of their test coverage with Record & Replay.

What’s Record & Replay?

Record & Replay is the newest feature in the CrossBrowserTesting toolkit.

While you’ve had the ability to test on over 1,500 real desktop and mobile browsers with Live Testing or through our screenshot comparison engine with no coding knowledge needed, you may have experienced a disconnect in the jump to automation.

This revolves around the fact that you cannot use our Selenium grid without knowing how to use Selenium, and thus how to code. Without the ability to use this feature, it can be a race against the clock to test your application in all the browsers you need before your deadline hits.

Getting started with Record & Replay, on the other hand, is accessible for testers of all skill levels. By simply typing in your URL and manually performing the actions you want to automate, CrossBrowserTesting will save them as a test. You can then take your recorded test and play it back on an unbeatable selection of browser and operating system configurations, just like you would with Selenium.

All your work is saved in the CrossBrowserTesting cloud so you can go back and run tests over or keep building out your suite. And don’t worry – if you skipped a step or need to make an adjustment, Record & Replay allows you to go back and edit your tests so you don’t have to redo each and every action you already performed.

Once you’ve made a suite of tests, you can use Record & Replay to take up to 10 pre-recorded scripts and run them in parallel, which means you can test against multiple configurations simultaneously. This is great for bulk regression testing, and you can use the Run History section to monitor your tests.

When running just two tests in parallel doubles the speed and coverage of your testing, leveraging this feature is a no-brainer for teams that are aiming for a faster time to market without knowing how to write a test script.

Start Automating Today

As teams continue to shift left and integrate more testing throughout the software development lifecycle, all stakeholders need to be able to test areas of your application whenever necessary. Now everyone on your team can quickly create automated tests that get the job done.

We all know manual testing can be a drag — who has time to run the same test over and over? Wouldn’t it be better to have your entire team on the same page, taking on their tasks with a single tool? Having CrossBrowserTesting’s comprehensive Live, Visual, Automated, and now Record & Replay capabilities means your entire team can leverage our robust device lab to meet each of their individual needs in a way that makes the most sense for them.

Record and replay tools take the guesswork out of automation, and now with CrossBrowserTesting, you can use our version of Record & Replay to take the complexity out of recording manual tests and replaying them across browsers. Automation just became much more accessible to your entire team.

To learn more, take a look at what you can do with Record & Replay capabilities in the video below.

Filed Under: Product Update, Test Automation Tagged With: product update, record and replay, test automation

Selenium 101: Managing Multiple Tabs

April 2, 2018 By Alex McPeak

As we’ve discovered through this Selenium 101 Series, automation can be challenging when your tests require more complex actions such as testing in two windows at once or closing a pop-up. Luckily, once you know the right commands, you can begin creating scripts that reach all parts of the application you need to check.

Just like you might open web pages in different tabs locally, it’s also possible to have multiple tabs up in one browser during a Selenium test. Multi-tab testing is a common annoyance for testers who are learning Selenium — how do you open more than one tab in a single browser window, then switch back and forth between them? Many pages do this on their own. For instance, any anchor tag with an attribute like target=”_blank” will open the link in a new tab. How do we handle this in our automated tests? The good news is that it’s not as hard as it seems.

With multi-tab testing, you might want to:

  • Launch your browser
  • Maximize your browser
  • Launch multiple tabs
  • Switch between the tabs
  • Perform actions on each tab
  • Close the browser

To first open a new tab (if one doesn’t open automatically when clicking a link), you’ll use driver.findElement(By.cssSelector("body")).sendKeys(Keys.CONTROL + "t"); You can probably already tell that we’re focusing on our webpage, then sending ctrl + t to open a new tab the way you might normally in your browser.

Any time you switch between tabs, you’ll want to use the driver.switchTo().window(<some window>); method. This allows you to change control from one tab to another. The switchTo method is also how you handle alerts, so the steps will be very similar.

You might want to throw some waits in there for good measure, but it should look something like this:

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

WebDriver driver = new FirefoxDriver();
driver.manage().window().maximize();
driver.get("URL");

// Remember the main window's handle so we can switch back to it later
String mainWindow = driver.getWindowHandle();

// Open a new tab by sending Ctrl+T to the page body
// (depending on the browser, you may also need to switch to the new
// tab's handle via driver.getWindowHandles())
driver.findElement(By.cssSelector("body")).sendKeys(Keys.chord(Keys.CONTROL, "t"));
driver.get("Second URL");

// Perform whatever actions you want done on the secondary tab; let's pretend we're logging in somewhere
driver.findElement(By.id("ID")).sendKeys("username");
driver.findElement(By.id("ID")).sendKeys("password");
driver.findElement(By.id("submit")).click();

// Switch back to the main tab
driver.switchTo().window(mainWindow);

// Perform whatever actions you want done on the main tab; we'll click a button
driver.findElement(By.id("button")).click();

// Close the current tab
driver.findElement(By.cssSelector("body")).sendKeys(Keys.chord(Keys.CONTROL, "w"));

driver.quit();


This script will give you a good foundation to use in future tests where you might want to switch tabs. For more in our Selenium 101 Series, check out our previous posts:

  • Selenium 101: How to Automate Your Login Process
  • Selenium 101: Running Your Test Against Two Browsers at Once
  • Selenium 101: Navigating Waits and Sleeps
  • Selenium 101: How to Take Screenshots During Automated Tests
  • Selenium 101: Automating User Actions
  • Selenium 101: Generating Test Reports
  • Selenium 101: Mastering Selenium Design Patterns

Filed Under: how-to, Selenium, Test Automation Tagged With: how to, multiple tabs, Selenium, selenium 101, test automation

Building a Learning Culture

March 21, 2018 By Mike Talks

Going to a conference can be an amazing and wondrous experience. The best way I can describe the feeling of a great conference is “Disneyland … for testers”. It’s several days of meeting key thinkers in your discipline, being bombarded with big ideas, and playing about in workshops with new technology.

But sadly of course, all good things must come to an end. And when you return to work, there’s a battle to incorporate those big ideas or to try out that new technology. You can sometimes feel on your return that everyone’s going, “brace yourself, Mike’s going to push for TDD again”.

This, I have to admit, has been my experience – it’s not possible for everyone in a team to attend a conference, especially when it’s overseas. And hence it’s left on your shoulders to be an ambassador and evangelist for what you’ve learned. In this experience report, I’ll look at some of the things we’ve been trying to do in my department to bring the spirit of conferences into our working week.

Moving Forward

I feel that everything changed for our department back in 2013 with our shift to agile. Before that, everything was very command and control – seniors would work out how we’d do things, and those instructions would be filtered down as “a list of things you need to do”. Agile put everyone more in the driving seat, and it needed them to have the capability to act in a multi-skilled role, or have the maturity to seek someone else out when they were out of their depth.

At the same time, our portfolio evolved into new technology in terms of supporting mobiles, photo technology, using automation, shifting our security framework, infrastructure refreshes, monitoring production behaviour as well as up-and-coming tech and features such as AI for facial recognition. We had been thrown well and truly out of the “test the functional requirements” playpen.

The company I work for has some of the best support for training courses I’ve ever known – you find a training course, and you can be pretty well assured of attending. The problem was that in some areas things were so relatively new, there was no established training course to “just sign yourself up to”.

Thus we found ourselves circling around “how do we keep ourselves relevant”. We seemed to go back and forth for several months over what the ideal way forward would look like. Then one day on Twitter, I had a revelation when I read the oft-used quote “a year from now, you’ll wish you started today”. THIS WAS US! We needed to just get started – as long as we started something, we could adjust the forum if it wasn’t working.

So I started with something we called the Technical Tester Study Group – pulling together a group of testers from across my organisation. The idea was we’d meet every fortnight and more than anything we’d focus on hands-on activities.

We started off looking through the basics of Java, which as you can imagine was pretty popular. I led these sessions where I would introduce a couple of concepts and set an exercise for people to work through on their laptops.

Each session, people would bring in a laptop, I’d demonstrate a basic concept in Java, and then they’d complete an exercise where they’d use it. At the end we wouldn’t have covered a lot, but we covered it in depth. We had both made mistakes and learned from them, ending up with a concrete example to build on. [I’m personally a fan of this mistake-driven learning, which I call “solving messy problems”. We typically learn a lot from our mistakes]

It took about ten sessions to work through the Java basics, and then we worked through some basic Selenium, using it and the Java we’d learned to open pages, check for content, and manipulate elements of them.

Up until then, I’d been leading sessions and using material I’d built up over the years, but it was always about doing more than just being “the Mike Talks show”. With just me leading it, people would only ever get as smart as me … and there is room for improvement on that!

Collaborative Learning

I got one of my technical testers to create a session on how browsers work, where I have to admit I learned a few things. And before Christmas, one of our team ran an interactive session on using the Robot Framework.

We also got in a few guest speakers:

  • A friend working in AI talked to us about their work in machine learning and chatbots, which turned into a three-session workshop on building our own chatbots.
  • A member of the tools team within our company talked to us about how Nagios and monitoring in production worked.
  • One of our AWS gurus talked to us about how infrastructure is built with AWS, and how it can be tested.

I currently have one member working on a security module to deliver at a future session, with another looking at teaching the Robot Framework step-by-step. I’ve also used the group to try out material – often when I submit a workshop to a conference I can say “I know this material works because I’ve tried it”.

I’ve had great feedback from the Technical Tester Study Group, but it’s created a hunger to do more. My team has spearheaded testers taking a more hands-on approach to automation, and we’ve been running an Automators group. The approach here is vastly different – we created a series of goals for the group and use a Trello board to map learning objectives. Each session, there is some kind of assignment for members of the group, who then have to report back on what they’ve achieved and how it went.

Example assignments have included:

  • Collating ways our automation fails and leading a meeting on addressing some of the issues
  • Moving some of our automation to a new design pattern and explaining why we’ve adopted that pattern
  • Reviewing what we’re trying to achieve with our smoke tests and seeing if we can optimise them

A key thing from the team has been to not be afraid to take a tangent and try something new. The task “keeping the board green” (our tests passing) was something we spent a few sessions on, and it began to feel like a Sisyphean task. The group encouraged each other to just “move onto something else” and park it for a while.

Staying Relevant

Fundamentally, learning helps to keep you, and what your team is doing, relevant. At conferences, you give yourself time and space to try new things, and it’s vital to reproduce this on some level in your working week.

Perhaps you’re thinking “but my manager won’t approve this”? My experience is that most managers are really receptive to these ideas and might actually come to you with suggestions of things to try. Just make it clear what the goals are and how you expect it to deliver value. We, for instance, use a monthly survey for our Automators group to show how people’s confidence and engagement is increasing.

Find a way for your team to get some space and some play time to learn. The thing which has really stood out about these sessions is how much fun they’ve been – I’ve got to see people in my team really shine, which has sometimes surprised me, but also the level of humour we can engage in because we’re a team who trusts and respects each other.

Stuck for ideas? Why not ask someone who’s run a conference workshop you attended about their materials and have a go at running it in your team? But fundamentally, don’t worry about getting it right. If your team has a good level of energy in it, you’ll find a way to course correct.

About the Author: Mike Talks is a Test Manager at Datacom in New Zealand and author of the book How To Test. Having been in IT for over 20 years, he’s learned to embrace reinventing himself like David Bowie as the industry and its needs evolve around him. Turn and face the strange – ch-ch-changes! To learn more, follow him on Twitter @TestSheepNZ.

Filed Under: Test Automation Tagged With: conference, learning, professional development, tester

Selenium 101: Automating User Actions

March 6, 2018 By Alex McPeak

While automating commands like clicking with Selenium is fairly straightforward once you get it down, more advanced user actions such as scrolling, controlling mouse movements, navigating forms, and handling pop-ups will prove to be more difficult.

However, while performing user actions with Selenium can be challenging, they’re an important part of writing an end-to-end test that accounts for common user behavior. Follow along with this installment of our Selenium 101 Series to get a better understanding of how to automate them.

Scrolling

The scrollIntoView() method is used in Selenium testing to bring content into view while running tests. scrollIntoView() is a predefined method in JavaScript that scrolls an element into the viewport so that test commands can interact with it, which makes it useful when a page contains elements outside the visible area.

scrollIntoView() also allows you to specify whether the top or the bottom of the element is brought into view. By default, the top of the element is aligned to the top of the scrollable container.

Let’s look at an example:

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function scrollingIntoView() {
  // Find an element near the bottom of the page and scroll it into the viewport.
  // scrollIntoView(true) aligns the element's top with the top of the view;
  // scrollIntoView(false) aligns its bottom instead.
  return driver.findElement(webdriver.By.id('featurette-13'))
    .then(function(element) {
      return driver.executeScript('arguments[0].scrollIntoView();', element);
    });
}

driver.get('https://smartbear.com')
  .then(scrollingIntoView) // pass the function itself so it runs after the page loads
  .then(function() {
    return driver.quit();
  });

In the example, we search for an element by ID that sits at the bottom of the page. For Selenium to click that element, it first needs to be scrolled into the viewport. When executeScript runs, the parameters passed after the script are exposed to it through the arguments array, so arguments[0] refers to the element, which is then scrolled into view.

Drag and Drop

While testing some websites, we encounter drag-and-drop interactions, where a source element is moved to another element’s location. The dragAndDrop(element, location) function is defined to drag an element onto another web element or to an offset in pixels.

Let’s look into an example:

var webdriver = require('selenium-webdriver');
var assert = require('selenium-webdriver/testing/assert');

var draggable;
var droppable;

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function draggingAndDropping() {
  draggable = driver.findElement(webdriver.By.id('draggable'));
  droppable = driver.findElement(webdriver.By.id('droppable'));
  return driver.actions().dragAndDrop(draggable, droppable).perform();
}

/*
// Alternative: move draggable onto droppable and release via raw mouse actions
function draggingAndDropping() {
  draggable = driver.wait(webdriver.until.elementLocated(webdriver.By.id('draggable')), 500);
  droppable = driver.wait(webdriver.until.elementLocated(webdriver.By.id('droppable')), 500);
  return driver.actions()
    .mouseMove(draggable)
    .mouseDown()
    .mouseMove(droppable)
    .mouseUp()
    .perform();
}
*/

function checkDropped() {
  // Verify the drop succeeded by reading the text inside the drop target
  return droppable.findElement(webdriver.By.css('p')).getText()
    .then(function(dropped) {
      assert(dropped).equalTo('Dropped!');
    });
}

driver.get('http://crossbrowsertesting.github.io/drag-and-drop.html')
  .then(draggingAndDropping)
  .then(checkDropped)
  .then(function() {
    return driver.quit();
  });

On the webpage, there are two web elements: a draggable element, which is dragged onto the droppable element. In the first function, the driver’s predefined dragAndDrop function is called with the element to be dragged as the first parameter and the element where it should be dropped as the second, and then perform() executes the action sequence.

In a similar way, the dragAndDrop action can be performed with raw mouse movements. In the commented-out alternative function, mouseMove() is called with the location to move to as its parameter.

After the driver reaches the element, the mouseDown() and mouseUp() actions are performed: mouseDown() presses a mouse button and holds it until mouseUp() is called, and mouseUp() releases it. All the actions are executed in sequence by perform().

Forms

In web automation, it is common practice to automate forms that have dropdowns, radio buttons, or checkboxes. We’ll look into how to test radio buttons and dropdowns.

Dropdowns

A dropdown list allows the user to select a single option out of a large number of items. Selecting one is just like selecting any other type of element on a webpage. Dropdowns can be located by ID, name, CSS, or XPath.

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function selectFromDropdown() {
  // Click the dropdown to open it, pick the third option, then submit the form
  return driver.findElement(webdriver.By.id('dropdown')).click()
    .then(function() {
      return driver.findElement(webdriver.By.css('#dropdown > option:nth-child(3)')).click();
    })
    .then(function() {
      return driver.findElement(webdriver.By.id('submitbtn')).click();
    });
}

driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html?text=&select=&radio=&textarea=')
  .then(selectFromDropdown)
  .then(function() {
    return driver.quit();
  });

On the webpage, there is a dropdown list from which we might need to select an option while automating some other actions. We click the dropdown list to make it active and then select the third option with a CSS selector.

Radio Buttons

If an ID or CSS selector is given for the radio button and you just want to click on it irrespective of its value, then the command will look like this:

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function radioButton() {
  // Click the radio button regardless of its value, then submit the form
  return driver.findElement(webdriver.By.css('#myform > input[type="radio"]:nth-child(7)')).click()
    .then(function() {
      return driver.findElement(webdriver.By.id('submitbtn')).click();
    });
}

driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html?text=&select=&radio=&textarea=')
  .then(radioButton)
  .then(function() {
    return driver.quit();
  });

WebDriver provides methods to find elements. findElement() finds a single web element and returns it as a WebElement object.

We have created an example that selects a radio button, toggling it on with the click() method. The selection is then submitted with the form.

Pop-ups and Alerts

Alerts are pop-up windows that may appear after clicking an element or filling out a form. An alert window blocks the underlying webpage until it is handled, without performing any actions. Testing these alerts allows your tests to interact with JavaScript-based dialog boxes. driver.switchTo().alert() is the method that lets you switch from the main window to an alert so that you can handle it properly.

Once focus is switched to the alert interface, there are a few methods that can be used to deal with the dialog box: accept(), dismiss(), getText(), and sendKeys().

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function handleAlert() {
  // Click the button that triggers the alert, then switch to the alert,
  // read its text, and accept it
  return driver.findElement(webdriver.By.css('body > div > div:nth-child(12) > button')).click()
    .then(function() {
      var alert = driver.switchTo().alert();
      return alert.getText().then(function(text) {
        console.log('alert text: ' + text);
        return alert.accept();
      });
    });
}

driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html?text=&select=&radio=&textarea=')
  .then(handleAlert)
  .then(function() {
    return driver.quit();
  });

Simple alerts only display some information to the user along with an “OK” button. Some alerts have both “OK” and “Cancel” options, where we can call alert.accept() or alert.dismiss() based on our requirements. If there is a text box to fill out in an alert window, the predefined sendKeys() method sends text to the alert before accepting it.

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function handlePopup() {
  // Open the pop-up, then close it again
  return driver.findElement(webdriver.By.css('#pop-up-page > button:nth-child(2)')).click()
    .then(function() {
      return driver.findElement(webdriver.By.id('closepopup')).click();
    });
}

driver.get('http://crossbrowsertesting.github.io/selenium_example_page.html?text=&select=&radio=&textarea=')
  .then(handlePopup)
  .then(function() {
    return driver.quit();
  });

In this example, we have a simple alert window that has sample text and an “OK” button. After the button is clicked, the currently open alert object is returned. With this object, you can accept, dismiss, read the alert’s contents, or even type into a prompt. Here the object gets the content from the alert window with the getText() method and accepts the alert.
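
For the prompt case mentioned above, where the alert contains a text box, a minimal sketch looks like this. The 'show-prompt' button ID is a hypothetical placeholder rather than an element on the example page:

// Hypothetical sketch: trigger a JavaScript prompt(), type into it, accept it.
// The 'show-prompt' ID is a placeholder, not from the example page above.
driver.findElement(webdriver.By.id('show-prompt')).click()
  .then(function() {
    var prompt = driver.switchTo().alert();
    return prompt.sendKeys('some text') // type into the prompt's text box
      .then(function() {
        return prompt.accept();         // press the prompt's "OK" button
      });
  });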

Conclusion

Mastering the automation of user actions is key to mastering automated testing. Actions like scrolling, drag and drop, filling out forms, and handling pop-ups will have to be addressed during testing, so it’s probable you’ll need to automate them as well.

For more on getting started with test automation, check out the rest of our Selenium 101 series:

  • Selenium 101: How to Automate Your Login Process
  • Selenium 101: Running Your Test Against Two Browsers at Once
  • Selenium 101: Navigating Waits and Sleeps
  • Selenium 101: How to Take Screenshots During Automated Tests
  • Selenium 101: Generating Test Reports
  • Selenium 101: Mastering Selenium Design Patterns
  • Selenium 101: Managing Multiple Tabs

Filed Under: how-to, Selenium, Test Automation Tagged With: Selenium, test automation, user actions

How to Increase Test Coverage Over Time with Automation

March 5, 2018 By Federico Toledo

When it comes to software quality, we want to test as much code as humanly (or mechanically) possible, right? Actually, yes and no. For each test cycle, it’s important to consider multiple strategies for measuring test coverage and put a system into place where it can be maximized over the long-term as well.

Test coverage is one of the measurements of test quality that tells us how much of the application under test has been tested. You can think about it like sweeping the floors of a house. Imagine if I only included sweeping bedrooms in my sweep coverage criterion. With that criterion, if I swept 100% of the bedrooms, would that mean that the whole house is clean? No, because I completely missed the kitchen, dining room, bathrooms… You get the point! Therefore, we must always be careful with test coverage and recognize that it has its limitations.

Test coverage is useful for defining certain parts of the software in order to cover them with tests. It also tells us when we have tested sufficiently, gives us ideas of what else to test (thus expanding coverage), and helps us know quantitatively the extent of our tests. It’s a great measuring stick, but even with 100% test coverage, we’re not guaranteed that our application is 100% bug free.

Even if you only managed to achieve 20% coverage, it may not necessarily be a bad thing. The ideal amount of test coverage to aim for should be based on your priorities and analysis of risk.

There are many ways to consider test coverage. Here we’ll examine code coverage, data-oriented coverage, and the plethora of other techniques at a tester’s disposal.

Code Coverage

Code coverage is the most popular metric for measuring test coverage. It measures the number of lines covered by the test cases, reporting the total number of lines in the code and the number of lines executed by tests. Essentially, it’s the degree to which the source code of a program is executed when a test suite runs. The more code coverage, the less chance of undetected bugs making it to production. This measurement can also be broken down into different levels: not only lines of code covered, but also branches, decisions inside logical constructs, etc.
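
To make the line/branch distinction concrete, here’s a small illustration with a hypothetical function and two assertions; it isn’t tied to any particular coverage tool:

// A function with one branch. A single test that only passes a positive
// number executes most lines but never takes the 'negative' path:
// high line coverage, only 50% branch coverage.
function describeNumber(n) {
  if (n >= 0) {
    return 'non-negative';
  }
  return 'negative';
}

// Test 1 exercises only the if branch.
console.assert(describeNumber(5) === 'non-negative');

// Adding Test 2 exercises the remaining branch, reaching 100% branch coverage.
console.assert(describeNumber(-3) === 'negative');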

Data-Oriented Coverage

With data-oriented coverage, you have input and output parameters, each of them with their own domain (the spectrum of possible values they can have). If you think about all the possibilities, you’ll end up with a Cartesian product because you can test every possible combination.

On the other hand, you can test less and go with “each choice” coverage, which means that you use each possible value at least once. There is also all-pairs, which is empirically said to have the best cost-benefit relationship, being the best mix between each-choice and all combinations.
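
Here’s a small sketch of the difference using hypothetical parameter domains. The Cartesian product enumerates every combination, while each-choice only needs enough tests to use every value at least once:

// Hypothetical input domains for two parameters.
var browsers = ['chrome', 'firefox', 'edge'];
var userTypes = ['guest', 'member'];

// All combinations (Cartesian product): 3 x 2 = 6 test cases.
var allCombinations = [];
browsers.forEach(function(b) {
  userTypes.forEach(function(u) {
    allCombinations.push([b, u]);
  });
});

// Each-choice: every value appears at least once, so only
// max(3, 2) = 3 test cases are needed.
var eachChoice = browsers.map(function(b, i) {
  return [b, userTypes[i % userTypes.length]];
});

console.log(allCombinations.length); // 6
console.log(eachChoice.length);      // 3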

Other Kinds of Coverage

In addition to those previously mentioned, there are several more ways to cover the product that you are testing such as state-machines, decision tables, decision trees, equivalence partition and boundary values, etc. It’s very interesting to see that each technique is supported by an “error theory”. The error theory takes into account the typical errors that programmers commit. For example, equivalence partition and boundary values consider the error of using a “<” instead of a “<=”, misunderstanding business logic, etc.
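
As a small illustration of that error theory, boundary-value tests probe either side of a limit, which is exactly where a "<" versus "<=" mistake hides. The discount rule below is hypothetical:

// Hypothetical business rule: orders of $100 or more get a discount.
// A common programming error is writing > where >= was intended.
function qualifiesForDiscount(total) {
  return total >= 100;
}

// Boundary-value tests check just below, at, and just above the boundary.
console.assert(qualifiesForDiscount(99.99) === false);
console.assert(qualifiesForDiscount(100) === true);
console.assert(qualifiesForDiscount(100.01) === true);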

Additionally, there are other kinds of test coverage that are not related to lines of code or inputting test data. One thing we must cover is mobile fragmentation: are we covering the main mobile devices, operating systems, and screen sizes? When it comes to browsers and operating systems, we must consider how our web system will behave in any combination of operating systems and browsers and how many combinations we should test. Lastly, we must think about the test environment, context, etc.

Laying Out a Plan to Optimize Coverage in the Long-Term

What happens when you never have enough time to reach certain criteria for your test cycles? In this case, you might want to consider the following method for improving test coverage over multiple test cycles.

Imagine we have different features to test on different browsers and have organized the test cases into different test suites, each with its own priority. We need to execute the most critical suites against all browsers, but the rest we can execute on a different browser each cycle, rotating the suite/browser pairs across the following test cycles. That way, we don’t have perfect coverage in any single test cycle, but over multiple test cycles we improve it. We can never be assured that we are done with testing, but when time is scarce, we have to use it wisely and do our best to reduce risk.
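
As a rough sketch of that rotation idea (suite and browser names are hypothetical), shifting the suite/browser pairing by one position each cycle guarantees every suite eventually meets every browser:

// Rotate non-critical suites across browsers so that, over enough cycles,
// every suite runs on every browser at least once.
var suites = ['suiteA', 'suiteB', 'suiteC', 'suiteD'];
var browsers = ['chrome', 'firefox', 'safari', 'edge'];

function planCycle(cycle) {
  return suites.map(function(suite, i) {
    // Shift the pairing by one position each cycle.
    var browser = browsers[(i + cycle) % browsers.length];
    return suite + ' -> ' + browser;
  });
}

for (var cycle = 0; cycle < browsers.length; cycle++) {
  console.log('Cycle ' + (cycle + 1) + ':', planCycle(cycle));
}
// After 4 cycles, each suite has run on all 4 browsers.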

Here’s an example of how to plan good test coverage over many test cycles:

[Figure: sample plan assigning test suites to browsers across successive test cycles]

Where it says “date 1”, it could also say “sprint 1”, “iteration 1”, “day 1”, “version 1”, etc. The goal here is to distinguish which test cases you will execute in each iteration in each environment. For some of them, it’s mandatory to execute the test every time on all browsers (probably the most critical ones). Others can be divided into groups and executed in only one browser, but this has to be done in a very clever way in order to have each one executed in each browser by the 4th round.

Here is another example applied to mobile testing in order to reduce risk related to device fragmentation:

[Figure: sample plan rotating test suites across mobile devices to reduce fragmentation risk]

After the third execution, you’d have this coverage:
[Figure: cumulative coverage achieved after the third execution]

Conclusion

Test coverage criteria are very useful, but they don’t guarantee anything. Some criteria are linked to others: when one is neglected, so are the criteria tied to it, and vice versa. We need to use the ones that best suit our needs, consider priorities for each module, and define coverage for each one according to its priority and complexity. Finally, we can apply long-term coverage criteria to optimize test coverage over time.

Join us for a webinar on August 7 at 2 PM EDT to learn more about how to increase test coverage over time with automation.

About the Author: Federico Toledo is the co-founder and director of the software testing company Abstracta and holds a Ph.D. in Computer Science from UCLM, Spain. With over 10 years of experience in quality engineering, he’s helped many companies to successfully improve their digital products. Dedicated to testing education, he’s written one of the first books in Spanish on testing and formed Abstracta Academy. He is also a co-organizer of TestingUY, the biggest testing conference in Latin America. Follow him on Linkedin or Twitter. 

Filed Under: Test Automation Tagged With: code coverage, test automation, test coverage

Selenium 101: Navigating Waits and Sleeps

February 22, 2018 By Alex McPeak

Understanding waits and sleeps is an essential component of forming a comprehensive Selenium testing strategy.

Waits can be used in Selenium to make our tests less flaky, help us debug, and to generally slow down our testing experience. Now, you might not want to do that for production code, but we’ll go through some examples later where it can be really helpful.

Waits are a necessity when dealing with heavily dynamic web pages filled with JavaScript. Browser pages don’t load all at once; elements get added over time as they’re received from the network. So if our tests expect a page element to be present immediately, they won’t find it, and our test will fail, even though that element might load a few milliseconds later. Waits help us add a bit of time in our test to wait for the latency of the browser, or the promise in a JavaScript action, such as clicking a button.

There are also multiple types of waits including:

  • Implicit Waits
  • Explicit Waits
  • Sleeps

We’ll go over the best ways to use each one.

Implicit Waits

Implicit waits are a way to tell our entire test to poll the DOM, or wait and recheck for a certain amount of time, for an element to be present to continue on to the next step or fail our test.

You can simply write an implicit wait of, say, 10 seconds at the start of your test and your test will wait at most 10 seconds for any object in the DOM to be present before continuing its instructions.

Notice we said at most – if the DOM element is present before your Implicit Wait time, the test will continue.

Here’s how we might use an implicit wait on CrossBrowserTesting:

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

// The implicit wait applies to every findElement call on this driver.
// The timeout is given in milliseconds: 10000 ms = 10 seconds.
driver.manage().timeouts().implicitlyWait(10000);

function searchForCrossBrowserTesting() {
  return driver.findElement(webdriver.By.name('q'))
    .then(function(searchbar) {
      searchbar.sendKeys('CrossBrowserTesting.com');
      return searchbar.submit();
    });
}

function clickCrossBrowserTesting() {
  return driver.findElement(webdriver.By.linkText('Cross Browser Testing Tool: 1500+ Real Browsers & Devices'))
    .then(function(clickLink) {
      return clickLink.click();
    });
}

driver.get('http://google.com')
  .then(searchForCrossBrowserTesting)
  .then(clickCrossBrowserTesting)
  .then(function() {
    return driver.quit();
  });

In the example, we have added the implicit wait to the driver, and it will persist throughout all the commands. In this particular case, the driver will wait up to 10 seconds for each element lookup, giving the DOM time to load. It will return an error if the driver is not able to find the element in the given timeframe.

Note that increasing the implicit wait timeout should be done judiciously, as it will have an adverse effect on test run time.

What happens without an implicit wait? Once an implicit wait is set, it is set for the life of the WebDriver object; if no implicit wait is set, the default is zero.

Load times vary from website to website. As we learned, implicit waits tell WebDriver to wait up to a particular number of seconds for elements to appear. If no implicit wait is included, the program will throw an error whenever WebDriver does not wait until the DOM is completely loaded and therefore cannot find the element it is looking for.

Explicit Waits

Explicit waits are more direct and narrow than their implicit cousins and help us pinpoint our wait times, giving us the flexibility to place a specific wait time on each individual web element. You must set your specific explicit wait on each command, which is the biggest drawback to explicit waits. However, they come with a huge advantage for testers who use them correctly.

Explicit waits allow us to make incredibly robust tests with pinpoint accuracy allowing for a wait of 8 seconds for an animation to finish loading, but only a 1.5-second wait for a button.

Let’s look at an example:

var webdriver = require('selenium-webdriver');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function searchForCrossBrowserTesting() {
  // Wait up to 500 ms for the search bar to be located
  return driver.wait(webdriver.until.elementLocated(webdriver.By.name('q')), 500)
    .then(function(searchbar) {
      searchbar.sendKeys('SmartBear.com');
      return searchbar.submit();
    });
}

function clickCrossBrowserTesting() {
  // Wait up to 500 ms for the result link to be located
  return driver.wait(webdriver.until.elementLocated(webdriver.By.linkText('Cross Browser Testing Tool: 1500+ Real Browsers & Devices')), 500)
    .then(function(clickLink) {
      return clickLink.click();
    });
}

driver.get('http://google.com')
  .then(searchForCrossBrowserTesting)
  .then(clickCrossBrowserTesting)
  .then(function() {
    return driver.quit();
  });

In the function searchForCrossBrowserTesting, the WebDriver will wait up to 500 milliseconds to locate the element “q” before throwing a TimeoutException. On success, the wait resolves with the located element.

If explicit waits are not included and the WebDriver cannot satisfy a condition before the next commands in the code run, there will be an error while running the code.

Explicit waits are used when we know a particular element will take more time to become visible, so we set an expected condition with a time to wait.

Sleeps

Sleeps are used to put your entire script to sleep for a set amount of time, and it will not be “woken up” early. Waits stop waiting once an object is found, which can sometimes make tests difficult to debug. With sleeps, we can add consistent pauses to our tests, allowing you to watch exactly what’s happening with your test.

An example of using a sleep would look like the following:

var webdriver = require('selenium-webdriver');
var sleep = require('sleep');

// Create a driver object for chromedriver
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

function searchForCrossBrowserTesting() {
  return driver.findElement(webdriver.By.name('q'))
    .then(function(searchbar) {
      searchbar.sendKeys('CrossBrowserTesting.com');
      return searchbar.submit();
    });
}

function clickCrossBrowserTesting() {
  return driver.findElement(webdriver.By.linkText('Cross Browser Testing Tool: 1500+ Real Browsers & Devices'))
    .then(function(clickLink) {
      clickLink.click();
      sleep.sleep(15); // block the whole script for 15 seconds
    });
}

driver.get('http://google.com');
setTimeout(function() {
  searchForCrossBrowserTesting();
  setTimeout(function() {
    clickCrossBrowserTesting();
    setTimeout(function() {
      driver.quit();
    }, 5000); // pause 5 seconds before quitting
  }, 5000);   // pause 5 seconds before clicking
}, 5000);     // pause 5 seconds before searching

Sleeps completely stop script execution even if the next element you’re looking for has been loaded into the page, so they can make your tests take significantly longer to run. For this reason, you should avoid using sleeps in production tests where possible. Sleeps should generally be used only for debugging. Sleeping briefly while the page loads might be all right in production tests if waits simply can’t do the job, but it should be for a few milliseconds at most.

Conclusion

Implicit waits, explicit waits, and sleeps all have different use cases when it comes to testing with Selenium. Understanding when and how to use each will help you run passing tests and properly debug. As you begin learning Selenium testing, waits and sleeps will be an invaluable tool to write successful test automation.

Check out the rest of our blogs for more on Selenium 101:

  • Selenium 101: How to Automate Your Login Process
  • Selenium 101: Running Your Test Against Two Browsers at Once
  • Selenium 101: How to Take Screenshots During Automated Tests
  • Selenium 101: Automating User Actions
  • Selenium 101: Generating Test Reports
  • Selenium 101: Mastering Selenium Design Patterns
  • Selenium 101: Managing Multiple Tabs

Filed Under: how-to, Test Automation Tagged With: Selenium, sleeps, test automation, waits

The Defect Expiration Date

February 20, 2018 By Melissa Eaden

To understand how to reach a zero-defect status quo, think about defects like you might think about bread. Bread is at its absolute best right out of the oven. Slather it with butter and pop it into your mouth. It’s heaven. The worst bread can get is when it’s molded over and completely inedible, but even that state can be helpful.

Defects are often seen as bad things. That’s not necessarily true. Defects tell you something about what’s going on with the state of the application. If defects were to equal hot, tasty bread you pop right into your mouth, would you think about avoiding them? Maybe, maybe not.

There are probably plenty of reasons to avoid bread. Allergies. Carbs. Gluten tolerance. Those reasons are absolutely respected. Replace the idea of bread with anything that works just as well. Suggestions are welcome!

Measuring The Freshness Of A Defect

Let’s consider the bread analogy a little further. Bread goes through states of change. That’s how we can talk about piping hot bread and moldy bread. Now if I shift that to talking about defects using a bread timeline it looks something like this:

Bread vs Defect

Bread                 Defect Age
Hot Out Of The Oven   Less Than An Hour Old
Fresh                 Less Than A Day Old
Stale                 Less Than 2 Weeks Old
Moldy                 More Than 2 Weeks Old
Is It Still Bread?    Older Than A Month

If you have a low number of defects waiting for a while due to complexity or dependencies, that’s probably OK. If you have a large number of defects that could have been resolved within hours of them being found but they are so stale no one really knows if they are still defects, then that would indicate there are some issues in how defects are being handled. Looking at defects with a very visible time measurement makes them valuable in gauging the health of an application.

It’s a balancing act. You want to handle each defect correctly and in its own time, but discovering what that time frame is, and how quickly each defect needs to be handled, is always a matter of trial and error. The best way to start surfacing wait times and delays is to create some rules around defect types, or flavors.

Here is an example of some defect flavors and handling rules to go with them (a short sketch after these lists shows one way to write the rules down):

Severity (functional issues usually)

  • High – don’t log it–fix it!
  • Med – log it (but make the logging relevant – make a new story.)
  • Low – why bother logging it? Note it on the current story card and move on.

Priority (business rules issues usually)

  • High – Get it into a sprint stat!
  • Med – Put it into the next sprint or you might as well delete it.
  • Low – ask a PM. If they don’t care, don’t bother. Note it somewhere and move on.

Relevance (relation to current feature work/business goals)

  • High – Three Amigos time – make a decision and go with it.
  • Med – Next standup – make a decision and go with it.
  • Low – If there isn’t a stakeholder, even a hidden one, no defect needed. Note it in your testing logs and move on.
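
To make the rules concrete, here is a purely hypothetical sketch of how a team might write them down in code; the flavor names and actions mirror the lists above, and the triage helper is illustrative only, not part of any real tool:

// Hypothetical triage helper encoding the handling rules above.
// "flavor" is one of: severity, priority, relevance.
// "level" is one of: high, med, low.
var handlingRules = {
  severity: {
    high: "Don't log it: fix it!",
    med: "Log it as a new story so the record stays relevant.",
    low: "Note it on the current story card and move on."
  },
  priority: {
    high: "Get it into a sprint, stat!",
    med: "Put it into the next sprint, or delete it.",
    low: "Ask a PM; if they don't care, note it somewhere and move on."
  },
  relevance: {
    high: "Three Amigos time: make a decision and go with it.",
    med: "Raise it at the next standup and decide.",
    low: "No stakeholder, no defect. Note it in your testing logs."
  }
};

// Look up the agreed handling rule for a given flavor and level
function triage(flavor, level) {
  var rule = handlingRules[flavor] && handlingRules[flavor][level];
  return rule || "Unranked: discuss it with the team first.";
}

console.log(triage('severity', 'high')); // "Don't log it: fix it!"

The point is not the code itself but that the rules are explicit, shared, and cheap to change when they stop working.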

There should be some initial discussion about which defects rank where; that’s important. Also note that this information should be documented somewhere, either in testing logs or on the story card.

Don’t let the information get lost. If the defect suddenly becomes relevant or someone wants to know why no one found it before, you can point back to your logs or a story and then move it into a card or a defect management system. Before that point, defects that don’t rank high enough are only creating noise.

Why Defect Backlogs Cost Money And Time

On a company maturity model, one sign of development lifecycle maturity is how quickly and efficiently defects are handled.

If you are not using a zero-defect approach of immediately resolving, or working toward resolving, your defects, then you, your team, or management are probably engaged in some combination of the following:

  • A triage meeting to groom the defects which haven’t been resolved or are not in flight
  • Maintaining a backlog which needs updating and managing
  • Paying for or maintaining a defect management tool (if you are only using story cards, you are moving in the right direction)

If no one is reviewing or maintaining defects, the management tool and the backlog are basically where defects go to die. A company engaged in this unfruitful process is wasting money and time: it is literally writing something that will never be used into a tool it is paying for, whether that’s an open-source tool on a server or a cloud product, and the result has no meaning to anyone because no one is looking at it.

Defect backlogs and management systems are only helpful if they are kept current and people review them regularly. That costs money: meetings, tool upkeep, and making sure defects are handled properly. Because people are actively engaged in the process, this method can cost even more than leaving defects to die. That’s OK. When there is complexity or necessity, this isn’t a bad process to have, but it is a lot to maintain, and it risks slipping back into the habit of the defect graveyard.

Moving to a zero defect process can do a couple of things for a software development lifecycle.

  1. It lets people quickly make a decision about how to handle the defect.
  2. It uses the original story as a reference for the bug OR turns the defect into a story which goes into the backlog for a sprint and is handled like a STORY instead of a defect.
  3. Defects passed in from other teams, like Customer Service, get an answer immediately: instead of waiting for someone to tell them it’s fixed, they know it will be resolved by the next release, or never.

Setting Standards For A Zero Defect Process

The goal of a zero-defect process is to reduce the time a defect spends in the backlog, or under management, to zero by either fixing, converting, or closing it. Using the example standards above, you’ll want to set standards for at least these three kinds of defects:

  • Defects which originate from within the team
  • Defects which originate from external teams but involve your team
  • Defects (either internal or external) that are converted to a story or closed

Enhancements Are Not Defects

Some organizations like to lump enhancement requests in with defects, mostly because they are generated from the same place, the customer.

When a customer asks for an enhancement, that information shouldn’t be hanging out in a defect management tool or story backlog; it should be handled with a weekly report or some kind of handoff to the Product Management folks. Leaving it in a backlog means someone is missing trends, customer requests, or possibly the next big feature idea.

Customers can ask for some pretty crazy things, but if your business model is centered around reporting and they are asking for a reporting feature, it might be important to tell someone rather than write a bug and close it as “enhancement request.”

Find a different handling method for those requests. Get Product Management to reply to customers directly. You’d be surprised how much customers like being acknowledged even if they know they aren’t going to get exactly what they wanted.

Always Have A Plan

Whether you are moving toward a zero-defect practice or dealing with defect graveyards, having a plan is better than no plan at all. Communication is key in dealing with a defect of any severity. Create standards, tweak them, change them when they stop working, and make every effort to make backlogs and graveyards disappear.

Or go for the radical option: declare defect bankruptcy and start over! At least then your team, and any other department that wants to take that route, gets a clean slate.

Are there other methods or approaches that have led you to a zero-defect practice? Comment here or blog about them. It’s not impossible. Much like losing weight or going gluten-free, you need to commit to the process and keep moving forward; that’s the only way it gets better.

software testing defectsMelissa Eaden has worked for more than a decade with tech companies and currently enjoys working for ThoughtWorks in Dallas, Texas. Melissa’s previous career in mass media continues to lend itself to her current career endeavors. She enjoys being EditorBoss for Ministry of Testing, supporting their community mission for software testers globally. She can be found on Twitter and Slack @melthetester.

 

Filed Under: Test Automation Tagged With: defects, guest blog, test automation
