The CrossBrowserTesting team hung out in Austin last week for three days of Tex-Mex and test automation talk as an official sponsor at the 2017 Selenium Conference. We thoroughly enjoyed meeting our customers and conversing with other testing enthusiasts as well as attending speaker presentations to hear a few insights from the experts.
Angie Jones, a Consulting Automation Engineer who provides strategies for scrum teams at LexisNexis, came to this year’s Selenium Conf to recite some of Aesop’s fables and teach us how their lessons apply to test automation and continuous integration, increasing productivity, accuracy, and trust between departments.
These are just a few of the lessons we learned from Jones’ storytelling session.
The Boy Who Cried Wolf – In this familiar story, a shepherd boy is tasked with watching over the sheep and alerting the villagers if a wolf comes. However, the boy yells “Wolf! Wolf!” just to laugh as the villagers come running in a panic, only to find there is no wolf. He does this quite a few times, until one day a wolf actually comes. The boy yells “Wolf! Wolf!” but nobody believes him, and the wolf eats all the sheep.
Jones asserts that our tests are becoming like the shepherd boy, crying “Wolf!” with what are often false negatives. Meanwhile the developers, our villagers, stop paying attention to the alarms and, in turn, begin ignoring genuinely broken tests as well.
The Lioness and the Vixen – A lioness and a vixen are bragging to each other about their children. The vixen tries to one-up the lioness by saying that she has a whole litter of kids, while the lioness only has one. In response, the lioness says that vixen may have more, but she has a lion, implying that her child is of higher value than the vixen’s many.
In test automation, Jones argues that quality should always take priority over quantity: the number of tests doesn’t matter, but how effective they are does. Especially in a CI cycle, more tests mean a slower process, and the fastest way to lose trust is with flaky tests. You want to make sure you’re automating quality tests, providing the best possible value to developers, and ensuring fast feedback.
The Astrologer – An astrologer went out every night looking at stars, analyzing constellations, and trying to predict the future. Unfortunately, he wasn’t paying attention to what was in front of him because he was so focused on looking up at the sky, and he fell into a deep hole.
Often, teams will focus solely on automating new features, but it’s important to pay attention to your present builds as well. “Everyone wants to get the newer features out the door, but no one’s looking at the state of automation right now, and it’s broken,” said Jones. “Why are we building new code when the builds are red? Don’t value new code over working code.”
The Farmer and the Stork – A farmer finds that cranes have been trespassing to eat his crops, so he devises a plan to catch them in the act. The cranes recruit a stork to come eat with them, but the stork, unaware the food is stolen, doesn’t partake. However, when the farmer casts his net to catch the cranes, he captures the stork too. The stork pleads to be let go, insisting he has done nothing wrong, but the farmer says it doesn’t matter: he was there with the rest of the cranes, so he would receive the same punishment.
Similarly, in testing it takes only one flaky test to ruin the rest of the build. Once a build turns red, no one trusts the rest of the tests, even the ones that are reliable. Jones advises separating tests into different paths: one for stable tests that fail only when something is actually wrong, and one for unstable tests with flaky failures. This way you can divide attention between the red builds that need to be fixed or rewritten and the green builds that developers should focus on.
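Jones’ split between a trusted path and a quarantine path can be sketched as a simple partition by historical pass rate. Everything here — the `partition_suite` helper, the test names, and the 98 percent threshold — is a hypothetical illustration of the idea, not her actual implementation:

```python
# Hypothetical sketch: partition a suite into a stable path and an
# unstable (quarantine) path based on recent pass-rate history, so a
# flaky test can't turn the trusted main build red.
# The threshold and names are illustrative assumptions.

STABLE_THRESHOLD = 0.98  # assumed cutoff: at least 98% historical pass rate


def partition_suite(history):
    """history maps test name -> list of recent results (True = pass)."""
    stable, unstable = [], []
    for name, results in history.items():
        pass_rate = sum(results) / len(results)
        (stable if pass_rate >= STABLE_THRESHOLD else unstable).append(name)
    return sorted(stable), sorted(unstable)


if __name__ == "__main__":
    history = {
        "test_login": [True] * 50,             # always passes
        "test_search": [True] * 49 + [False],  # 98% pass rate, still stable
        "test_checkout": [True, False] * 25,   # 50% pass rate, flaky
    }
    stable, unstable = partition_suite(history)
    print(stable)    # trusted path: a failure here means a real problem
    print(unstable)  # quarantine path: fix or rewrite these tests
```

In practice the same split is often done with test tags or groups (for example, a “flaky” tag excluded from the main CI job) rather than a script, but the principle is the same: failures on the trusted path always mean something is wrong.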
The Happy Ending
Jones has 4,000 automated tests, fewer than 1.5 percent of which are unstable, a number low enough to trump Google’s 3 percent. The main build is always green unless there’s a change in the application, and everyone trusts it for feedback.
To learn more about managing flaky tests, monitoring builds, knowing when to rewrite tests, and other CI lessons drawn from classic tales, watch the full recording of Jones’ Selenium Conf presentation, The Build That Cried Broken: Building Trust in Your CI Tests.