CrossBrowserTesting.com

A Design, Development, Testing Blog

Challenges of Being the Team’s Sole Tester

August 31, 2018 By Mike Talks


PART ONE

Introduction

Until relatively recently, the chances were that if you were a tester on a project, you’d be one of several. You’d have other team members to try ideas out with, to share the workload, and to cover for you when you were away.

With the recent drive towards agile, we’re seeing the makeup of the team change dramatically. Projects can typically be supported by much smaller teams working permanently to evolve the product. This can often result in there only being a single tester on a team.

What are the challenges of being the sole tester on such a project? How can you work within these constraints? This has been the subject of a series of workshops with fellow testers within my company, and I’m excited to share the outcome with you …

The Iron Triangle

Before we get underway, it’s useful to revisit a principle from project management that underpinned many of our conversations. It’s useful for thinking about the constraints we’re working within on a project, especially in agile.

[Diagram: the iron triangle – scope, cost, and schedule, with quality at the centre]

The iron triangle gives us the idea that the “quality” of a project is defined by three attributes of your project – cost, scope, and schedule (or time).

You might have heard the adage “cost, scope, schedule … pick two”. However, ideally on a project, there should be only one cast iron attribute – what management consultant Johanna Rothman calls “your project driver” in her book Manage It.

Within any project, you can really only have one attribute which is fixed – it could be “we need this done by X” (schedule) or “there is only Y budget for this” (cost). The skill of a manager is to work with this constraint and plan what can be done with the other two attributes to achieve the goal.

Within traditional test management, there are a lot of parallels for applying this same theory to test planning. Within this dynamic, the attributes are:

  • Scope – how much testing you’d like to achieve
  • Cost (or rather, typically, resources) – having more testers allows you to execute more activity
  • Schedule or timeframe – how long you have to do things

It should be obvious that if you have a large scope and a short timeframe, one solution would be to have more testers on it. Although of course in the real world, there are constraints as to how far this can be pushed, and good test management revolves around knowing and pragmatically working within these constraints.

Another solution, of course, is fewer testers, but that means it takes longer to get through everything you’d like. Great for the test budget, but bugs are then found later in the cycle, so people like developers need to be paid to stay on call longer to fix them.

Finally, if you find yourself in a situation where your available people and schedule are fixed, the only thing to do is to prioritise your scope, as it’s the only thing you have control of.

Understanding this dynamic and its trade-offs is important because it formed a core part of the discussions that were held, together with ways these constraints could be handled and occasionally hacked.

Under pressure

A common initial experience for someone stepping into the role of sole tester was feeling under pressure.

Especially in an agile project, the timeframe is set by the sprint duration, and your testing team size is fixed (although this can be “hacked”, as we’ll discuss later).

Back in 2013, one of our projects had an annual release, which involved a two-month testing window and kept our test team of six busy.

Fast forward to 2018, and we’re now working in agile teams where we are creating deliverable code in a two-week sprint window using only two testers.

A key enabler in this was adopting a robust automated testing framework, which was easy to maintain as the system under test changed. Such a suite did not grow overnight – it required a lot of work between testers and developers to build the right thing from a framework perspective, as well as to work through a prioritised list of useful automated tests to have in place. In working out ideal scenarios and prioritisation, testers found themselves well-placed to lead these discussions. Over time, this suite was able to carry the functional regression load.
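
The article doesn’t describe the framework’s internals, but one common way to keep a Selenium suite easy to maintain as the system under test changes is the page object pattern: each page’s locators and interactions live in a single class, so a UI change means one fix rather than edits scattered across tests. Here’s a minimal sketch in Java – the login page and its selectors are hypothetical, purely for illustration:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical page object: all knowledge of the login page's
    // structure is kept here, not in the individual tests.
    public class LoginPage {
        private static final By USERNAME = By.id("username");
        private static final By PASSWORD = By.id("password");
        private static final By SUBMIT = By.cssSelector("button[type='submit']");

        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Tests call this intent-level method; if the page's markup
        // changes, only the locators above need updating.
        public void logInAs(String user, String password) {
            driver.findElement(USERNAME).sendKeys(user);
            driver.findElement(PASSWORD).sendKeys(password);
            driver.findElement(SUBMIT).click();
        }
    }

A test then reads as new LoginPage(driver).logInAs("demo", "secret"), and stays valid however the underlying HTML evolves.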

Automated testing helped; however, it didn’t eliminate the testing role – though testers found that their role changed dramatically. Most manual testing effort now focused on testing new or changed functionality in depth during a sprint, as well as taking increasing ownership of test scenario selection for the automated suite (and, shock horror, learning to code their own tests).

In teams which are still undergoing a level of “forming” – a term for teams with relatively new members, some of them new to working in an agile team – it was quite common for the sole tester to feel initially like the “point of blame”. If something gets out into production, the inevitable uncomfortable question can be asked: “why didn’t you test that?”

We shared a few of our experiences, looking for general themes. Part of the problem we were acutely aware of was time: it’s not always possible to test everything you want to.

In many examples of a release where a defect had gone undetected, manual testing had still occurred. Typically, though, something was missed, or no one imagined that a particular scenario could cause an issue.

It’s worth taking a moment to think about how this was addressed in “classic” waterfall projects. A test lead would create a plan of what’s to be covered in consultation with many people on the project, but especially using the requirements. From this, they would build up a series of scenarios to be covered and make estimations around the resources and timescale.

However, on these classic projects, this was not the end of the story. It was the tester’s job to produce the best schedule they could, but it was known that this would not be perfect on the first draft. This was why such emphasis was put on the importance of reviewing – firstly by peer testers to see if enough testing heuristic variation had been employed, but also by a wider group such as project managers, customers, and developers.

The aim of these reviews was to find gaps in the plan and address them, allowing the final scheme of testing to be as robust as possible. Input could come from developers saying, “we’re also making changes in this area”, or from customers stating an expectation that “most people will…”.

Within agile, it can be easy to forget that this level of contribution is still required. It still needs to occur, but in a more informal, often verbal manner.

Among my colleagues, there is a general consensus that the tester becomes more responsible for facilitating a discussion around testing, much closer to what some organisations call “a quality coach”.

A core tool for having these conversations is the mind map, which the group has been using with success since 2013. A mind map allows the author to show, for a particular feature, all the different variations and factors they’re planning to cover in a one-page diagram.

When done well, they’re intuitive to read and can even be posted in common areas for people to look at. Their brevity helps get people to read them — “I haven’t had time to read that thirty-page document you’ve sent yet” is a frequent complaint in IT.

Even with a mind map in place, there is a natural tendency for the rest of the team to rubber stamp things. A sample conversation might go like this:

Tester: Did you have anything to add to the test mind map I sent out?

Team member: Uh … I guess it’s okay?

We all have a tendency to say something along the lines of “I guess so” for something we’ve not properly read. It’s important to still follow up with a brief conversation about what’s in your coverage – this can be individually with each team member, but often it’s better with the whole team. Just after stand-up can be a great time for this to occur.

If a member of the team notices there’s a mistake in the approach or some items that are missing, they’re expected to provide that feedback. Likewise, if the developer makes more changes than initially anticipated, there’s an expectation that they tell the tester what else they might want to consider.

Often what you’ll read in agile literature about a “whole team approach” is essentially this: the whole team takes responsibility to give feedback whether it’s about how a story is defined, how a feature is being developed, or how testing is being planned.

A good indicator that a team has made this mindset shift is the use in retrospectives of “we” instead of “you” – “WE missed this, WE need to fix this”. Teams where this happens have a much more positive dynamic. It’s important that this applies not just to testing.

Other examples include when a developer builds exactly what was on the story card, but not what was actually wanted (“we failed to elaborate”), when a story turns out much bigger than first thought (“we failed to estimate”) etc.

That said though, agile does not mean the breakdown of individual responsibility. A core part of the tester’s role is to set clear expectations for the team of what they can do, how much effort it will take, and how they’re approaching it. But there needs to be team input to fine-tune this to deliver the best value.

Mostly, testing will revolve around changes to a product, for which the rest of your team are your first “go-tos” as fellow subject matter experts on the item. Occasionally as a tester, though, you will find value in consulting another peer tester – and there is an expectation that testers who are part of the same organisation but in other teams can be approached for their advice and thoughts on a test approach. Within our company, all testers are expected to make some time in their schedule to support each other in this way. This, in many ways, echoes the “chapter” part of the Spotify model, with testing being its own chapter of specialists spread across multiple teams/squads who provide test discipline expertise.

Reaching out to other testers like this is important; it creates a sense of community and the opportunity to knowledge share across your organisation.

Waterfall into agile won’t go…

There have been some “agile-hybrid” projects where there was an expectation that a set number of people could perform a set volume of testing in a set time (sprint). This can be problematic because the tester involved in execution hasn’t been involved in setting the expectation of what volume of testing is achievable. Hence, it can feel like working against an arbitrary measure not based in reality.

In such a situation, it’s like being given an iron triangle where someone has said, “here’s your schedule, here are your resources … so you need to fit in this much scope”. When faced with so many tests to run, it obviously helps to have them prioritised, so that you’re always running the most important test next. When all three attributes are fixed, what suffers is the quality – it gets squished.
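
As a toy illustration of “always running the most important test next” when the schedule is fixed – the test names, priorities, and estimates below are entirely made up – the trade-off works out like this:

    import java.time.Duration;
    import java.util.Comparator;
    import java.util.List;

    public class PrioritisedTestRun {
        // A planned manual test: priority 1 is the most important.
        record PlannedTest(String name, int priority, Duration estimate) {}

        public static void main(String[] args) {
            List<PlannedTest> backlog = List.of(
                new PlannedTest("Profile photo upload", 3, Duration.ofMinutes(30)),
                new PlannedTest("Login happy path", 1, Duration.ofMinutes(15)),
                new PlannedTest("Password reset", 2, Duration.ofMinutes(25)));

            Duration budget = Duration.ofMinutes(45); // the fixed schedule
            Duration spent = Duration.ZERO;

            // Work through the backlog most-important-first; whatever
            // doesn't fit in the budget is consciously cut from scope.
            for (PlannedTest t : backlog.stream()
                    .sorted(Comparator.comparingInt(PlannedTest::priority))
                    .toList()) {
                if (spent.plus(t.estimate()).compareTo(budget) > 0) {
                    System.out.println("No time left for: " + t.name());
                    continue;
                }
                spent = spent.plus(t.estimate());
                System.out.println("Run: " + t.name());
            }
        }
    }

With a 45-minute budget, the two highest-priority tests (15 + 25 minutes) fit and the third is explicitly dropped – the cut in scope is visible and deliberate, rather than quality being silently squeezed.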

On projects where test scripting was not mandated by contract, there was always a preference for exploratory testing – this allowed the manual tester to focus their time on test execution with very little wastage, meaning more tests could be run, which helped reduce risk.

Summing up for now …

There was so much material, we had to split it up. So far we’ve taken a first dip in, looking at how teams found themselves evolving towards a whole-team responsibility for quality.

Next time, we’ll look at how testers found their voice, and at some of the key skills and approaches my colleagues found increasingly pivotal in their day-to-day role.

Thank you to Janet Gregory for reviewing, editing, and donating her expertise for this article.


Building a Learning Culture

March 21, 2018 By Mike Talks


Going to a conference can be an amazing and wondrous experience. The best way I can describe the feeling of a great conference is “Disneyland … for testers”. It’s several days of meeting key thinkers in your discipline, being bombarded with big ideas, and playing about in workshops with new technology.

But sadly of course, all good things must come to an end. And when you return to work, there’s a battle to incorporate those big ideas or to try out that new technology. You can sometimes feel on your return that everyone’s going, “brace yourself, Mike’s going to push for TDD again”.

This, I have to admit, has been my experience – it’s not possible for everyone in a team to attend a conference, especially when it’s overseas. And hence it’s left on your shoulders to be an ambassador and evangelist for what you’ve learned. In this experience report, I’ll look at some of the things we’ve been trying to do in my department to bring the spirit of conferences into our working week.

Moving Forward

I feel that everything changed for our department back in 2013 with our shift to agile. Before that, everything was very command and control – seniors would work out how we’d do things, and those instructions would be filtered down as “a list of things you need to do”. Agile put everyone more in the driving seat, and it needed them to have the capability to act in a multi-skilled role, or have the maturity to seek someone else out when they were out of their depth.

At the same time, our portfolio evolved into new technology: supporting mobiles, photo technology, automation, shifting our security framework, infrastructure refreshes, and monitoring production behaviour, as well as up-and-coming tech and features such as AI for facial recognition. We had been thrown well and truly out of the “test the functional requirements” playpen.

The company I work for has some of the best support for training courses I’ve ever known – find a training course, and you can be pretty well assured of attending. The problem was that in some areas things were so relatively new, there was no established training course to “just sign yourself up to”.

Thus we found ourselves circling around “how do we keep ourselves relevant”. We seemed to go back and forth for several months over what the ideal way forward would look like. Then one day on Twitter, I had a revelation when I read the oft-used quote “a year from now, you’ll wish you started today”. THIS WAS US! We needed to just get started – as long as we started something, we could adjust the forum if it wasn’t working.

So I started with something we called the Technical Tester Study Group – pulling together a group of testers from across my organisation. The idea was we’d meet every fortnight and more than anything we’d focus on hands-on activities.

We started off looking through the basics of Java, which as you can imagine was pretty popular. I led these sessions: each time, people would bring in a laptop, I’d introduce a couple of concepts, and I’d set an exercise for them to work through.

We wouldn’t cover a lot in a session, but we covered it in depth, made mistakes, and learned from them, ending up with a concrete example to build on. (I’m personally a fan of this mistake-driven learning, which I call “solving messy problems” – we typically learn a lot from our mistakes.)

It took about ten sessions to work through the Java basics, and then we moved on to some basic Selenium, using it and the Java we’d learned to open pages, check for content, and manipulate elements on them.
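
A first exercise of that kind might look something like the sketch below – the target page and its elements are just placeholders, but the WebDriver calls are the standard ones: open a page, check its content, and interact with an element.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class FirstSeleniumExercise {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver(); // assumes chromedriver is available
            try {
                // Open a page...
                driver.get("https://example.com");

                // ...check for content...
                String heading = driver.findElement(By.tagName("h1")).getText();
                if (!heading.contains("Example")) {
                    throw new AssertionError("Unexpected heading: " + heading);
                }

                // ...and manipulate an element on it.
                driver.findElement(By.tagName("a")).click();
                System.out.println("Landed on: " + driver.getCurrentUrl());
            } finally {
                driver.quit(); // always close the browser, even when a check fails
            }
        }
    }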

Up until then, I’d been leading sessions and using material I’d built up over the years, but always it was about doing more than just being “the Mike Talks show”. With just me leading it, people would only ever get as smart as me … and there is room for improvement on that!

Collaborative Learning

I got one of my technical testers to create a session on how browsers work, where I have to admit I learned a few things. And before Christmas, one of our team ran an interactive session on using Robot Framework.

We also got in a few guest speakers:

  • A friend working in AI talked to us about their work in machine learning and chatbots, which turned into a three-session workshop on building our own chatbots.
  • A member of the tools team within our company talked to us about how Nagios and monitoring in production worked.
  • One of our AWS gurus talked to us about how infrastructure is built with AWS, and how it can be tested.

I currently have one member working on a security module to run at a future session, with another looking at presenting Robot Framework step-by-step. I’ve also used the group to try out material – often when I submit workshops to a conference, I can say “I know this material works because I’ve tried it”.

I’ve had great feedback on the Technical Tester Study Group, but it’s created a hunger to do more. My team has spearheaded testers taking a more hands-on approach to automation, and we’ve been running an Automators group. The approach here is vastly different – we created a series of goals for the group and use a Trello board to map learning objectives. Each session, there is some kind of assignment for members of the group, who then have to report back on what they’ve achieved and how it went.

Example assignments have included:

  • Collating ways our automation fails and leading a meeting on addressing some of the issues
  • Moving some of our automation to a new design pattern and why we’ve adopted that pattern
  • Reviewing what we’re trying to achieve with our smoke tests and seeing if we can optimise them

A key thing from the team has been to not be afraid to take a tangent and try something new. The goal of “keeping the board green” (our tests passing) was something we spent a few sessions on, and it began to feel like a Sisyphean task. The group encouraged each other to just “move onto something else” and park it for a while.

Staying Relevant

Fundamentally, learning helps to keep you and what your team does relevant. At conferences, you give yourself time and space to try new things, and it’s vital to reproduce this on some level in your working week.

Perhaps you’re thinking, “but my manager won’t approve this”? My experience is that most managers are really receptive to these ideas and might even come to you with suggestions of things to try. Just make it clear what the goals are and how you expect them to deliver value. We, for instance, use a monthly survey for our Automators group to show how people’s confidence and engagement are increasing.

Find a way for your team to get some space and some play time to learn. The thing which has really stood out about these sessions is how much fun they’ve been – I’ve got to see people in my team really shine, which has sometimes surprised me, but also the level of humour we can engage in because we’re a team who trusts and respects each other.

Stuck for ideas? Why not ask someone who’s run a conference workshop you attended about their materials and have a go at running it in your team? But fundamentally, don’t worry about getting it right. If your team has a good level of energy in it, you’ll find a way to course correct.

About the Author: Mike Talks is a Test Manager at Datacom in New Zealand and author of the book How To Test. Having been in IT for over 20 years, he’s learned to embrace reinventing himself like David Bowie as the industry and its needs evolve around him. Turn and face the strange – ch-ch-changes! To learn more, follow him on Twitter @TestSheepNZ.

