This is part two of a look with my testing colleagues at Datacom at the challenges of being “that sole tester” on an agile team. What do you mean you missed part one? Catch up now!
Last time we looked at some of the fundamental theory which underpins so much in IT – “The Iron Triangle” – as well as how cross-functional teams evolved an understanding that everyone has to support testing for it to be effective. Teams who embraced this levelled up from being partly conflicted “forming” teams to “performing” teams.
Today we’ll look at how my peers found their voice and dealt with many of the challenges of being “that sole tester” …
Advocating for testing
A commonly recited tale within “forming” teams involved occasions where a story had been handed to the tester with just an hour to go until sprint end – and hence any testing performed would be rushed. This usually reflected a bias towards seeing a story as “done” once development was finished.
It was typically these stories which, when passed into production, turned out to be the ones with missed issues.
As a tester it’s your responsibility to contribute to a “forming” team’s evolution to “performing” by helping the rest of the team understand your role and your needs in that role. The key to doing this is advocating for testing throughout the sprint and bringing your needs to the team.
During pre-sprint elaboration of stories when the team talks about future work requests and tries to size them:
- If part of a story was going to be difficult to test, then ask developers to also add tools to aid the testability of the feature, and factor that into the size
- If a story was going to be onerous to test, bring it up – the product owner might be comfortable accepting less testing, especially if it was an early part of a feature that would grow over a number of sprints. Looking at our “iron triangle”, this is essentially accepting there will be less scope to make it fit.
- Alternatively, discuss with the team how a large testing task could be shared amongst the whole team. Looking at our iron triangle, this is accepting more resources/cost for testing in order to get the scope of testing you want.
An example of a shared team approach to aiding testing came from a series of sprints involving a page redesign – during the initial stories, the pages were only tested in IE, Chrome, an Android phone, and an iPhone. These represented the most popular browsers/devices currently used with the system.
As all the pages neared completion, testing was performed across the finalised pages using a larger suite of tests. The project tester drew up a matrix of items to be tested, with members of the team helping with allocated browsers or phones to test in more depth according to the instructions given to them.
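As an illustration, a shared test matrix like the one described above can be sketched as a simple data structure. Everything here – the names, environments, and test areas – is invented for the example, not taken from the actual project:

```python
# Illustrative sketch of a shared cross-browser/device test matrix.
# Each environment maps to a volunteer and the areas they will cover;
# all names and environments below are hypothetical.
test_matrix = {
    "IE":      {"assignee": "Dana (developer)", "areas": ["layout", "forms"]},
    "Chrome":  {"assignee": "Sam (tester)",     "areas": ["layout", "forms", "accessibility"]},
    "Android": {"assignee": "Priya (BA)",       "areas": ["layout", "touch gestures"]},
    "iPhone":  {"assignee": None,               "areas": ["layout", "touch gestures"]},
}

def unassigned_environments(matrix):
    """Return environments still needing a volunteer, so gaps are visible at stand-up."""
    return [env for env, row in matrix.items() if row["assignee"] is None]

print(unassigned_environments(test_matrix))  # → ['iPhone']
```

Even a lightweight representation like this makes coverage gaps obvious at a glance, which is the real value of the matrix the tester drew up.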
During stand up:
- It always helps to have a rough expectation of how long you expect to need to finish testing a story. As the sprint end draws closer, it helps to remind the team of that estimate, especially if several pieces of work are due to complete close to the sprint end. The team may have help or solutions for you.
- Some teams have embraced this in their planning, mixing up the size of the stories they prioritise to keep a good flow of work to be tested, rather than delivering several large stories at the end. Likewise, there is a level of maturity in accepting that a story won’t get finished this sprint and bringing it to the scrum master and product owner’s attention.
A key skillset that was talked about by sole testers was that of being effective and influential within their communications to the rest of the team. When talking about concerns or problems, it was important to put together a strong argument.
This often felt daunting to testers, especially in forming teams where other disciplines had greater numerical representation.
The following elements were considered the core features of effective communication when making a case for a risk:
- Explain why you consider it to be a problem. Frame it in terms relevant to the business.
- Highlight some examples if need be. People are highly visual, so anything you can show will often make your case strongly.
- Outline what you can / cannot do.
As a tester it’s important to recognise you need to make the strongest case you can, but often others – whether the wider group as a consensus or the product owner themselves – will make a decision based on that risk.
As time went on in a team, this pressure relaxed and there was a feeling of trust that built up within the team as it shifted into “performing”. But it strongly highlighted how effective communication and influence skills are becoming increasingly central to everything we do.
The power of showing
The talk about effective ways of communicating and “highlighting some examples” turned to “the power of showing”.
Persuasion is always easier when you’re close by and when you’ve worked together long enough to build trust in each other as a team. A charming story around this: one of our testers just has to gasp at something on her screen, and the rest of the team turns around wanting to know “what have you found?”. Teams like this only form with time and trust from all parties, moving from that “forming” stage through building trust to “performing”.
For many, seeing is believing. It can be easy to get trapped into a conversation when you describe behaviour you’ve witnessed of “well, it shouldn’t do that”. A demonstration can simply and effectively show “well it does”.
One example came from a distributed team with some members in different countries. A remote team lead passed a link for the web page to be tested in an email.
The tester responded that it was the wrong link. The team lead replied back that it definitely was the correct one. This followed a few iterations.
Eventually the tester did a screen share with the team lead over Skype For Business, showing them clicking the link, and showing how the link was not valid and went nowhere. The team lead then responded immediately with the correct link the tester needed.
There is no substitute for colocation, but Skype For Business and other such screensharing tools are the next best thing. They are used extensively by our internal helpdesk to log and rectify issues with our machines, and so are logical tools for testers.
When this isn’t possible, recording your session with a commentary as a video can be really useful, and as a last resort, the tester favourite of taking screenshots for an email can be deployed.
However, email is not always the best way to interact. Along with the choice of channel, the discussion picked up on how important it is to keep communication simple – you ideally want to take up as little of people’s time as possible. If you communicate face-to-face or in real time in some form, you can be certain it won’t be ignored.
A multi-page email might contain all the detail that might be required, but you can’t be sure it’ll be read or that care will be taken with its salient points. This was highlighted to me recently with a joke I sent a colleague, which they asked me about afterwards. My colleague didn’t get the joke because the punchline was in the second line of the email … but he didn’t read that far. When we send an email, even one with a read receipt, how can we really be sure that it’s been read? And deeper still, how can we be sure it’s been understood?
The conversations also surfaced some of the basics that testers have been talking about for decades and which apply to communication – avoid being pedantic, and recognise that anything you find issue with represents someone’s hard work and effort. As such, there is always the potential to upset and hurt people.
It’s been noticed that without colocation, it’s much easier for egos to be hurt in such interactions. A colocated team can build up a depth of social interaction that allows for closer, deeper team-building. Typically, people in colocated teams feel they move from “forming” to “performing” much quicker.
As the sole tester, you’re a key resource in a sprint. When you’re not there, there is no tester.
This initially created some feelings of guilt when a member of staff had to take a sick day or, even worse, planned leave!
Ultimately, responsibility for coping with your absence does not rest solely on your shoulders. There is some expectation on managers and the team to cope with this.
Within the team, members should have enough cross-skills that any task on the board can be picked up by more than one team member. Of particular note from testers was the close relationship between testers and business analysts on a team, and how a good business analyst should be able to pick up a testing task and vice versa.
Solid teams learned to focus on making sure stories were done, which meant people saying “this task needs finishing … I could do it” rather than sticking solely to tasks in their own discipline.
The mind maps we talked about earlier were invaluable for the team – if they had been created, they formed a road map of planned testing which the rest of the team could cover, and which would allow them to continue in the case of short, unexpected absences.
For longer absences, being able to get another team member to cover for you, and coaching them through the basics of what you do, also turned out to be invaluable.
A team needs an understanding of its tools and product, so it’s often much easier to get an existing member of the team to cover than to bring a new tester onto the team, unless the absence is significant.
Our workshops were run using a modification of the Lean Coffee model. We each collected about five post-it notes of ideas and put them on the board, clustering common items into topics. We could then each use five dots to vote up the topics which were most important to us.
At the end of our final workshop, there was a lot of satisfaction with the stories and learning we’d shared. We didn’t talk about every topic, but we’d covered the most important ones – applying the very prioritisation we’d spoken of as being so important.
Reading through our combined wisdom, you may feel a slight sense of “déjà vu”. If you read enough about agile testing, you’ll come across items like “it’s not done until it’s tested”, “the WHOLE team is responsible for quality”, and “everybody helps with testing”.
These terms can be very validating to us as individuals. However, what we sometimes don’t appreciate in reading this material is that, as testers, it often falls to us to educate the larger team and build trust with them in order to get to this “performing team” Utopia.
Achieving this is all about sharing our role and our struggles. Good communication and influence skills are pivotal because we need to get others on side, and yes, we may have to make a few compromises and build an understanding of others’ roles along the way.
When we look at what skills the tester of five years from now will need, we often find ourselves focusing on automation and technical skills. These skills do have their place. But as our workshop highlighted, soft skills are a core part of being a tester moving forward.
The sole tester is required to be an ambassador for their craft and their viewpoint. To be effective, they need the ability to influence, persuade, and teach. From a training perspective, the challenge going forward in my company is how we as a discipline practise and build these skills.
That will be a focus of a future session, but feel free to share your experiences in the comments.