Everyone understands the need to test their applications on a diversity of browsers and devices. Yet, it seems many still don’t understand the value of having diverse people involved in the development and testing of the software.
I recently stumbled upon a thread in a technical forum where the opening poster asked how hiring for diversity helps a company succeed. The number of commenters who believed diversity hiring to be nothing more than the politically correct thing to do was frightening. Many even resented that companies are making extra efforts to find diverse talent.
There were several commenters in this thread who were software developers and believed that writing code is a task that does not require a diverse perspective. Perhaps there’s some validity to that statement when considering the most narrow of use cases, which are typically what developers have in mind when coding. Most of the time, developers code just enough to pass the acceptance tests provided by business analysts, who typically get their requirements from a specific client or from market research. Traditionally, however, testers have done a great job of considering the many different ways software might be used, and having a diverse group of testers ensures that a variety of perspectives and scenarios are examined.
Here are a few use cases that show the importance of diverse technical teams, particularly testers.
A couple of years ago, Google introduced a “smart” photo tagging algorithm that would use image recognition to identify the contents of uploaded photos and automatically tag them. The feature was developed, tested, and released, and in production it mistakenly identified Black people as gorillas. How did this happen? The system was trained with photos of humans, but apparently no one thought to use photos of dark-skinned humans in the training or testing, so the system had no idea how to recognize people of color.
Similarly, in New Zealand, when an Asian man attempted to submit a photo for his passport application, the Department of Internal Affairs’ website rejected the perfectly good photo because it mistakenly thought the man’s eyes were closed. This is another case of the software not being trained to recognize racial differences.
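The common failure in both of these stories is that accuracy was only ever measured in aggregate, so one group’s failures hid behind another group’s successes. A diversity-aware test suite can surface the gap by scoring the model separately for each demographic group and failing the build when any group falls behind. Here is a minimal sketch of that idea; the classifier, sample data, and group labels are all hypothetical stand-ins, not anyone’s real system:

```python
from collections import defaultdict

def accuracy_by_group(model, samples):
    """Compute classification accuracy separately for each demographic group.

    samples: list of (input, expected_label, group) tuples.
    Returns {group: accuracy} so no group's failures hide in the aggregate.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for x, expected, group in samples:
        total[group] += 1
        if model(x) == expected:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def assert_no_group_left_behind(per_group, floor=0.90):
    """Fail the test run if any group's accuracy drops below the floor."""
    failing = {g: a for g, a in per_group.items() if a < floor}
    assert not failing, f"accuracy below {floor:.0%} for groups: {failing}"

# Illustrative stand-in for a real image classifier: it only recognizes
# the kinds of faces it saw in a skewed training set.
def biased_model(photo):
    return "person" if photo.endswith("_light") else "unknown"

samples = [
    ("face1_light", "person", "light-skinned"),
    ("face2_light", "person", "light-skinned"),
    ("face3_dark", "person", "dark-skinned"),
    ("face4_dark", "person", "dark-skinned"),
]

per_group = accuracy_by_group(biased_model, samples)
print(per_group)  # the dark-skinned group scores 0.0 despite a 50% aggregate
```

The aggregate accuracy here is 50%, which a lazy dashboard might wave through; the per-group breakdown makes the failure impossible to miss. The harder part, of course, is having people on the team who think to collect diverse samples in the first place.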
Having a diverse set of testers can help avoid embarrassments like these, but it becomes even more critical as artificial intelligence systems are entrusted with vital decisions and taught to value human life above everything else. We certainly want our algorithms to be trained and tested to correctly identify who is human.
The Unheard Consumer
Because many technical teams are dominated by White males, not only is racial diversity unaccounted for, but in many cases, so is gender diversity. Back in 2011, a woman reported that her fancy new voice-controlled car system didn’t recognize many of her commands, although it had no problem understanding everything her husband said. These systems are designed to eliminate distractions and the need to handle devices while driving. However, when they don’t work for women, they promote the very dangerous behavior they’re designed to avoid, forcing women to resort to handling their devices while driving.
However, that was six years ago. That’s practically a lifetime in technology. One would assume that voice recognition is much better at recognizing women’s voices today. Rachael Tatman, a sociolinguist who has studied Google’s speech recognition software, disagrees. Her research shows that the software is still strikingly more accurate at recognizing a man’s commands than a woman’s. Much like with image recognition, Tatman blames the lack of diversity in the data set used to train the speech recognition system. Women make up half of the world’s population, yet they are often unaccounted for when software is developed and tested in the male-dominated tech industry.
It’s natural for people to surround themselves with others who are like them. One argument often made by White male engineers (including in the aforementioned thread) is that it’s fallacious to hire underrepresented people and expect their views to represent everyone in the group they identify with. I can agree with that; however, I don’t think it’s fully appreciated that an underrepresented person has a much better understanding of their group than outsiders do, and more than likely has far more members of that group in their immediate network. So while their view may not be everyone’s view per se, they are at least aware of many different views held by people in their group.
For example, I’m a Black tester who works at Twitter. As a whole, the way Black people use Twitter is so different from everyone else that it has its own cultural identity known as “Black Twitter”. While I certainly cannot speak for all Black users of Twitter, my understanding of and immersion in Black Twitter is extremely valuable to the company. Not only am I sensitive to how potential changes can affect this group, but I am also in a position to recommend features that could increase its activity. Arguably most important, I am a first-hand, early witness to how this group uses the product, and as a tester, I have real-life scenarios that employees who are not a part of this group may never have noticed.
Recently, a Black Twitter user manipulated the product in a way I had never seen before. He used the features of the site to set up an interactive and immersive scenario. I do not follow this user, and his tweets had not gone viral, yet because of my association with this group, I became aware of this new use of the platform within hours. The use was so creative that I’m sure it will catch on and others will do the same. As a tester, I became aware of this real-life customer scenario before it became a mainstream use case, and I can do my part to ensure that we as a company are ahead of the curve in planning for it.
Intruders: A Scenario Thread. pic.twitter.com/xFAIGKxVn9
— أسود (@NasMaraj) July 8, 2017
This type of insight is not restricted to Black Twitter. Japanese users, for example, also use Twitter very differently from most. Having testers who understand these cultural nuances and can keep them in mind when designing and testing new features is extremely valuable.
Again, no one debates whether testing software on the browsers and devices your customers use is beneficial for the business. It’s a no-brainer. I don’t believe it’s much of a stretch to find similar benefits and advantages in also aiming to staff your technical team with people who can relate to your customers and better understand potential scenarios and use cases.
About the author: Angie Jones is a Sr. Automation Engineer at Twitter who has developed automation strategies and frameworks for countless software products. As a Master Inventor, she is known for her innovative and out-of-the-box thinking style which has resulted in 22 patented inventions in the US and China. Angie shares her wealth of knowledge by speaking and teaching at software conferences all over the world and leading tech workshops for young girls through TechGirlz and Black Girls Code.