Whether it’s for an advertisement, a call-to-action, or a warning, pop-ups are a common way to draw users’ attention.
However, when these alerts and overlays are implemented, they can interfere with testers who are keen on automation. So, how do you use Selenium to close out of a window, or automate tests that interact with a pop-up? Below, we demonstrate the commands for basic pop-ups to keep test automation as seamless as possible, without reverting to manual testing.
There are a few distinct types of alert messages that pop up, each asking something different of the user.
Simple – A simple alert will usually give minimal information or a warning and require only one action, such as clicking an “OK” button to confirm you understand the message.
Prompt – A prompt alert will usually require the user to input some information, such as a name, email, or password, before being able to click “OK” to continue.
Confirmation – Confirmation alerts will request permission, which can be answered by clicking “OK” or “Cancel”.
Pop-ups and alerts are usually either Windows-based or web-based. Windows-based alerts are beyond WebDriver’s capabilities because they’re part of the OS rather than the browser, so you’ll probably need to rely on third-party tools such as Sikuli Script or the Robot class.
Using WebDriver for Web-based Pop-ups
WebDriver, on the other hand, does have the ability to interact with multiple windows, including alerts, through a method called switchTo. This allows you to switch control to the pop-up while keeping the browser in the background:
popup = @driver.switch_to.alert
From there, these are the basic alert commands for web-based pop-ups in the Ruby bindings:
- To click “Cancel”: `popup.dismiss`
- To click “OK”: `popup.accept`
- To capture the alert message: `popup.text`
- To enter information: `popup.send_keys('your text')`
- To exit the browser: `@driver.quit`
An example of this might be:
require 'selenium-webdriver'
require 'rspec/expectations'
include RSpec::Matchers

def setup
  @driver = Selenium::WebDriver.for :firefox
end

def run
  @driver.get 'URL'
  @driver.manage.window.maximize
  @driver.find_element(css: 'button').click
  popup = @driver.switch_to.alert
  popup.accept
  text_returned = @driver.find_element(id: 'result')
  puts text_returned.text
  expect(text_returned.text).to eql 'You Clicked OK'
rescue Selenium::WebDriver::Error::NoSuchAlertError
  # raised when no alert is present
  puts 'no alert'
end

def teardown
  @driver.quit
end

setup
run
teardown
These commands should take care of any functions you want to perform when automating testing where there are any simple, prompt, or confirmation alert-pop-ups in your browser.
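For prompt alerts specifically, the same handle returned by `switch_to.alert` accepts text input before confirming. A minimal sketch of that flow, where `answer_prompt` is a hypothetical helper name (not part of the Selenium API) and `driver` is assumed to be a live Selenium::WebDriver instance:

```ruby
# Hypothetical helper: type an answer into a prompt alert, then confirm it.
# Assumes `driver` is a live Selenium::WebDriver instance with a prompt open.
def answer_prompt(driver, answer)
  popup = driver.switch_to.alert # take control of the prompt
  popup.send_keys(answer)        # type into the prompt's input field
  popup.accept                   # click "OK" to continue
end
```

In a real session you would call something like `answer_prompt(@driver, 'Jane')` immediately after the action that triggers the prompt.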
What are the most common challenges that you face testing with Selenium? Tell us in the comments!
Ashley Hunsberger is a Test Automation Architect at Blackboard, a virtual learning environment and course management system for top institutions worldwide.
Her Selenium Conference presentation, “Transformative Culture” reviewed how Blackboard established departmental goals, built teams that would ensure quality and success, and slashed execution times, while transforming the way the company thinks about software quality.
Not only was this presentation a favorite among conference attendees, it also serves as highly valuable information for any company that’s undergoing a culture change.
A Cause for Change
A year ago, it took an hour to run Blackboard’s 400 tests, and 370 were unreliable. So, not only was the time to execute high, but the builds were constantly failing and developers had stopped caring about them.
Hunsberger quickly realized that the root of the issues they were having centered around culture and the way different departments thought about quality and communicated with each other to deploy projects.
She wanted to make a company-wide change that would enable the development team to own testing and quality and hoped to get the entire organization to put more care into test suites and maintenance.
The very first step in transforming the department was a simple one: it would no longer be referred to as QA, but instead Engineering Productivity.
Hunsberger also wanted to re-establish the department mission to “Reduce the time from concept to deliverable by providing our product development teams with the tools, practices, and support to increase their productivity while maintaining high quality standards.”
She also wanted to clarify that Engineering Productivity would not take over writing tests for other teams, but instead be a resource to support the automation strategy.
The next move was to set goals including:
- Provide an easily maintainable and extensible framework that would enable scrum teams to add and remove tests.
- Enable the automatic and early detection of failures within the software under development.
- Prevent the source of detected failures from moving any further downstream.
- Accommodate all of this without impacting the engineer’s time.
Enter, Team Shangri-la
After realizing this transformation would only be possible with a proper leadership team to own the automation strategy, Hunsberger introduced Team Shangri-la, where she acted as the project owner alongside a scrum master, SETs, a feature engineer, and a CI/CD team member.
Team Shangri-la first had to establish its definition of “Done”, as Hunsberger found that in the past, many teams were declaring projects done without their actually being tested, reviewed, documented, and released.
They also had to outline test suite definitions to get everyone speaking the same language:
- Goals – What was the intent for each suite?
- Trigger – What would trigger tests?
- Gates – What happens if tests fail?
- Requirements – What are the requirements for each suite?
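One way to picture the outcome of those four questions is a shared, written definition per suite. A hypothetical sketch of what one such definition might look like in Ruby (all values are illustrative, not Blackboard’s actual configuration):

```ruby
# Illustrative suite definition answering the four questions above.
SMOKE_SUITE = {
  goal:         'Fast sanity check of critical user paths',
  trigger:      'every commit to the main branch',
  gate:         'a failure blocks the merge',
  requirements: 'runs in under 5 minutes; no known-flaky tests'
}
```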
Once the guidelines were determined, they had to work on containerizing environments for faster feedback and implementing project guardrails to provide support for nine different scrum teams.
Additionally, one prominent area of Shangri-la’s strategy was risk analysis, where they aimed to prioritize what tests would be executed and how they would be written.
By rating the likelihood of a bug occurring and the impact on customers if the feature didn’t work, it became easier to evaluate which areas needed to be included in the test suite and which did not.
Charting and tracking the results also helped the team decide what kind of testing needed to be executed, as well as how the tests would be monitored and maintained.
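That rate-and-rank approach can be sketched as a simple likelihood-times-impact score, with only features above a chosen threshold making it into the automated suite. The feature names, scales, and threshold below are illustrative assumptions, not Blackboard’s actual data:

```ruby
# Hypothetical risk-analysis scoring: rate each feature's bug likelihood
# and customer impact on a 1-5 scale; the product is its risk score.
def risk_score(likelihood, impact)
  likelihood * impact
end

features = {
  'login'        => { likelihood: 4, impact: 5 },
  'grade export' => { likelihood: 2, impact: 4 },
  'theme picker' => { likelihood: 1, impact: 1 }
}

# Automate only the features whose score clears the threshold.
to_automate = features.select { |_, f| risk_score(f[:likelihood], f[:impact]) >= 8 }
```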
Hunsberger and her teams saw the numbers turned on their head: the 33 stable and 370 unreliable tests that once took an hour to run became 165 stable tests running in 30 minutes.
Along the way, Hunsberger learned that not everyone has the same idea about what’s critical, and you can’t test everything. These small realizations ultimately initiated a complete culture change that encouraged them to be better communicators so that they could establish goals and meet them.
By making quality a company-wide effort rather than a single department’s job, Hunsberger was able to effect better testing to support development, optimize test execution, and ensure uncompromised quality across Blackboard’s projects.
The CrossBrowserTesting team hung out in Austin last week for three days of Tex-Mex and test automation talk as an official sponsor at the 2017 Selenium Conference. We thoroughly enjoyed meeting our customers and conversing with other testing enthusiasts as well as attending speaker presentations to hear a few insights from the experts.
Angie Jones is a Consulting Automation Engineer who provides strategies for scrum teams at LexisNexis. She came to this year’s Selenium Conf to recite some of Aesop’s fables and teach us how to apply their lessons to test automation and continuous integration to increase productivity, accuracy, and trust between departments.
These are just a few of the lessons we learned from Jones’ storytelling session.
The Boy Who Cried Wolf – In this familiar story, a shepherd boy is tasked with keeping watch over the sheep and alerting the village if a wolf comes. However, the shepherd boy yells “Wolf! Wolf!” just to laugh as the villagers all come running in panic, only to find there is no wolf. The boy does this quite a few times, until one day a wolf actually comes. The shepherd boy yells “Wolf! Wolf!” but nobody believes him, and the wolf eats all the sheep.
Jones asserts that all our tests are becoming like the shepherd boy, crying “Wolf” with frequent false negatives. All the while, the developers — our villagers — stop paying attention to the alarms and, in turn, problematically ignore genuinely broken tests as well.
The Lioness and the Vixen – A lioness and a vixen are bragging to each other about their children. The vixen tries to one-up the lioness by saying that she has a whole litter of kids, while the lioness only has one. In response, the lioness says that vixen may have more, but she has a lion, implying that her child is of higher value than the vixen’s many.
In test automation, Jones argues that quality should always take priority over quantity — the number of tests doesn’t matter, but how efficient they are does. Especially in a CI cycle, more tests mean a slower process, and the fastest way to lose trust is with flaky tests. You want to make sure you’re automating quality tests, providing the best possible value to developers, and ensuring fast feedback.
The Astrologer – An astrologer went out every night looking at stars, analyzing constellations, and trying to predict the future. Unfortunately, he wasn’t paying attention to what was in front of him because he was so focused on looking up at the sky, and he fell into a deep hole.
Often times, teams will solely focus on automating new features, but it’s important to pay attention to your present builds, as well. “Everyone wants to get the newer features out the door, but no one’s looking at the state of automation right now, and it’s broken,” said Jones. “Why are we building new code when the builds are red? Don’t value new code over working code.”
The Farmer and the Stork – A farmer finds that cranes have been trespassing to eat his crops, so he devises a plan to catch them in the act. The cranes recruit a stork to come eat with them, but the stork doesn’t realize the food would be stolen and doesn’t participate. However, when the farmer casts his net to catch the cranes, he captures the stork too. The stork pleads to be let go, saying he’s done nothing wrong, but the farmer says that it doesn’t matter because he had still been there with the rest of the cranes and so would receive the same punishment.
Similarly, all it takes in testing is one flaky test to ruin the rest of the build. Once a build turns red, no one trusts the rest of the tests, even if they might be reliable. Jones advises separating tests into different paths — one for stable tests that only fail when something is wrong and one for unstable tests with flaky failures. This way you can divide attention between the red builds that need to be fixed or rewritten and the green builds that developers should focus on.
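The two-path idea can be sketched as a simple partition on recent pass history, so a flaky test can never turn the trusted build red. The test names, history data, and stability rule below are illustrative assumptions:

```ruby
# Sketch: route tests into a trusted pipeline vs. a quarantine pipeline
# based on their recent pass history (true = passed, false = failed).
results = {
  'test_login'    => [true, true, true, true],   # consistently passing
  'test_search'   => [true, false, true, false], # flaky
  'test_checkout' => [true, true, true, true]
}

# Tests that passed every recent run stay in the stable path;
# everything else is quarantined until fixed or rewritten.
stable, flaky = results.partition { |_, runs| runs.all? }
stable_suite = stable.map(&:first)
flaky_suite  = flaky.map(&:first)
```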
The Happy Ending
Jones has 4,000 automated tests, less than 1.5 percent of which are unstable — a number so low it trumps Google’s 3 percent. The main build is always green unless there’s a change in the application, and everyone trusts it for feedback.
To learn more about managing flaky tests, monitoring builds, knowing when to rewrite tests, and other CI lessons based on classic tales, take advantage of the full recording of The Build That Cried Broken: Building Trust in Your CI Tests Selenium Conf presentation.
We’ve discussed the evolution of Selenium, and the major shift from Selenium 1 to Selenium 2, in “Intro to Selenium Testing”. We also noted that the latest version of developers’ preferred open source automation testing tool is actually Selenium 3.
But what are the differences between the two versions, and how do they affect development and testing? In this post, we’ll examine the features that make up Selenium 3.
Announcing Selenium 3.0 Beta
When Simon Stewart, the founding contributor of WebDriver, announced at SeleniumConf in 2013 that a new beta version of Selenium would soon be released, it caused quite a stir among the already large community of open source contributors.
After advising users to switch from the original RC APIs to WebDriver, the project announced that Selenium 3 would stop supporting RC completely, replacing it with a similar, revised implementation integrated with WebDriver.
Of course, since most people had already abandoned RC at this point, this change didn’t affect too many developers. Those who were still using RC, however, would have to prepare to transition to WebDriver.
The update also required Java 8 to be running to access certain elements and dropped support for IE versions prior to 9.
In a second release note, Stewart provided the following details:
- WebDriver users will just find bug fixes and a drop-in replacement for 2.x.
- Selenium Grid users will also find bug fixes and a simple update.
- The WebDriver APIs are now the only APIs actively supported by the Selenium project.
- The Selenium RC APIs have been moved to a “legacy” package.
- The original code powering Selenium RC has been replaced with something backed by WebDriver, which is also contained in the “legacy” package.
- By a quirk of timing, Mozilla have made changes to Firefox that mean that from Firefox 48 you must use their geckodriver to use that browser, regardless of whether you’re using Selenium 2 or 3.
However, developers ended up waiting three years for the release of Selenium 3, which was originally supposed to come out at Christmas 2013.
The Official Release of Selenium 3.0
Finally, in October of 2016, Selenium 3 was released.
Before, the Selenium project was responsible for driving each browser itself. With the update, all major browser vendors would ship their own WebDriver implementations. While the specification was still in the works, this meant Selenium was moving toward becoming a testing framework that browsers would support by default.
“Because the browser vendors know their browsers better than anyone, their WebDriver implementations can be tightly coupled to the browser, leading to a better testing experience for you,” said Stewart in the final release note.
Selenium 3.0 would also be integrating a W3C WebDriver specification for automation. In a follow-up interview, testing specialist Juho Perälä noted that this was a relevant tool in mobile testing through Appium and iOS Driver and it would “encourage compatibility across different software implementations of the WebDriver API.”
The purpose of the update wasn’t to make any huge changes to everyday use of Selenium. Instead, it was largely to make Selenium Core obsolete, offering an alternative implementation of the old RC interfaces to meet a rapidly advancing environment of web testing.
Looking Forward with Selenium
Selenium 3 continues to be exceedingly favored in test automation today, and SeleniumConf returns for 2017 as developers and enthusiasts maintain conversation around best practices in testing and advancing the future of Selenium. And we can only hope that when Selenium 4 is announced, it doesn’t take as long to launch.