CrossBrowserTesting.com

A Design, Development, Testing Blog


Why Your Website Looks So Bad on Internet Explorer

October 2, 2018 By Alex McPeak


Just the mention of Internet Explorer is sure to bring on a migraine for developers and testers. But why is IE so often the browser we love to hate?

As we know, your web page will have slight differences across browsers because of the different ways they render HTML, CSS, and JavaScript. Oftentimes, this is minimal, but it can also lead to more major functional and visual differences that could affect how a user interacts with a web application, which is why cross-browser testing is so important.

This becomes even more evident with Internet Explorer where those differences are often more obvious. The problem is that despite all of IE’s issues, it continues to hold a share of the browser market, which is why developers and testers have to keep it in mind when building out and releasing a website.

So why is Internet Explorer so problematic, and if it's that bad, why do people use it at all? Let's take a look back at some browser history to get a better understanding of where the issues first started.

Blast to the Past

In the early years of the web, Netscape, which was largely considered the first major internet browser, ran the show. It didn’t have much competition until Microsoft came out with Internet Explorer in 1995.

This was the first time there was real competition between two browsers, a.k.a. the browser wars. Not only was Internet Explorer free and automatically installed on Windows, but Microsoft's advantageous financial situation also allowed them to keep implementing new capabilities, such as scripting support and the first CSS implementation.

This helped IE inch ahead in the race against Netscape, and in 2003, Internet Explorer became the most popular web browser, capturing about 96 percent of the market share.

However, we know the browser wars don’t end there. Following Internet Explorer’s takeover, Opera made its way into the picture. Then Safari, Firefox, and Chrome.

Where the problem truly lies, however, is that Microsoft didn't abide by the guidelines of the World Wide Web Consortium (W3C), which publishes the HTML and DOM specifications browsers are meant to follow, and instead made its own decisions when it came to features and plugins.

So while every other browser was following certain rules and regulations, Internet Explorer was a wild card. With users split between those newer, W3C compliant browsers and the non-compliant Internet Explorer, developers were attempting to write standards-compliant applications and running into issues when it came to IE.

Meanwhile, Internet Explorer had already grown a large user base and used competitive advertising to drive users away from other browsers. While new browsers created competition for IE, some people stayed with the browser because it was the default on Windows and they were already using it.

User interest began to noticeably wane when IE stopped releasing new developments and updates. New browsers focused on speed, security, and, to developers’ delight, standards compliance. They released regular updates for continuous improvement, and by the time Internet Explorer tried to catch up, they were already too far behind.

Microsoft let IE6 go years without a major update, and as the Android and iOS mobile operating systems emerged, Internet Explorer wasn't compatible with them.

Not to mention, Internet Explorer was brimming with security issues, was criticized for using third-party technology, lacked development tools for debugging, was slow to update, performed sluggishly, and was crawling with software bugs.

Eventually, in 2015, Microsoft ended active development of Internet Explorer and replaced it with Microsoft Edge as the default Windows browser.

Back to the Future

Over time, Internet Explorer has made great strides in some of the areas it was most widely scrutinized for. However, because so much of the underlying code still does not follow the W3C guidelines, it remains a problematic browser for developers and testers.

As new versions of Chrome, Firefox, and Safari are continuously released, it's important that we test web applications in the newest versions as well as previous ones. And though Internet Explorer is no longer in active development, it's still important to test on its versions since the browser retains a user base.

Moreover, because it is non-compliant, it remains a problematic browser — it's less likely to render your website the way you intended it to look and work, and more likely to crash it. This makes it high-risk, and savvy software professionals know the importance of including it in browser testing. In fact, our customers' second most commonly tested desktop browser is IE 11, and many of our customers go back to IE 9 or further.

Of course, there are other methods for deciding which browsers to test and whether Internet Explorer should be one of them. But one thing you can expect — looking at your website in Internet Explorer for the first time may make you doubt your design and development skills — just know you're not alone. And make sure to check your website in IE from time to time.

Filed Under: Browsers Tagged With: browsers, internet explorer, web design, web development, web testing

Balancing Your Automation Strategy with Manual Testing

September 21, 2018 By Alex McPeak


You've heard it a million times before — you can't automate everything. But if your organization is just getting into the swing of automation, it's probably thinking about all the ways it can automate. So how do you make sure you maintain the proper balance of manual testing and automation?

Whether you have an entire QA team whose roles are split between automated and manual testing, or there is one tester whose job is to do both, striking the right balance will be the difference between a testing strategy that provides value to your team and one that gives you more frustration than feedback.

By understanding how to best support your manual strategy with automation, you can ensure you're getting the best of both worlds.

Know the Limitations of Automation

No matter how skilled you are in automation, how many Selenium commands you know, or what testing frameworks you use, some things simply cannot be automated.

For example, CAPTCHAs are very popular with ecommerce sites. However, they're designed to be impossible to automate. This means that any test case that includes a CAPTCHA can't be automated, and manual testing has to step in.

This, of course, isn't the only case where you won't be able to use automation. There are plenty of instances where automating is technically possible, but so complex to script that you're better off testing manually.

Assess the limitations and challenges of test automation so you know where to step in with manual testing.

Solidify Your Regression Suite

The bottom line for automation is that you want to automate tests that are repetitive and time-consuming, so think about which tests you find yourself going through the most often. Solidifying your regression suite will help you determine which tests you want to cover after every integration or change to code.

Keep in mind that this will change over time as your application changes, but it’s helpful to have a suite that ensures basic functionality.

Additionally, deciding which environments you want to test on will be key to understanding which browsers, devices, and operating systems will be included in your regression suite.
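One lightweight way to make those environment decisions concrete is to encode them as a list that your regression suite loops over. Below is a minimal Python sketch; the browsers, versions, and platforms are purely illustrative, and run_suite_on is a hypothetical hook you would wire to your own runner or remote Selenium grid.

```python
# A minimal sketch of a cross-browser regression matrix. The capability
# keys follow Selenium conventions; the browsers, versions, and
# platforms below are only examples.
REGRESSION_ENVIRONMENTS = [
    {"browserName": "chrome", "browserVersion": "68", "platformName": "Windows 10"},
    {"browserName": "firefox", "browserVersion": "61", "platformName": "Windows 10"},
    {"browserName": "internet explorer", "browserVersion": "11", "platformName": "Windows 7"},
    {"browserName": "safari", "browserVersion": "11", "platformName": "macOS 10.13"},
]

def run_suite_on(environment):
    """Hypothetical hook: wire this to your runner or a remote grid."""
    print(f"Running regression suite on {environment}")

for env in REGRESSION_ENVIRONMENTS:
    run_suite_on(env)
```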

Don’t Sweat the Small Stuff

Small test cases are best for automation and give you tests that are reusable and maintainable. The results of your automation are dependent on the state of your application, which means the smaller and more specific the test is, the more resistant it will be to changes in the UI. This is why unit and integration tests are ideal for automation.
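To illustrate what a small, specific test can look like in practice, here is a hedged Selenium sketch in Python: one page, one element, one assertion, so unrelated UI changes won't break it. The URL and locator are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Small, single-purpose check: one page, one element, one assertion.
# Narrow tests like this survive unrelated UI changes.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")          # placeholder URL
    button = driver.find_element(By.ID, "submit")    # placeholder locator
    assert button.is_enabled(), "Login button should be enabled on page load"
finally:
    driver.quit()
```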

But longer, more complex test cases and entire user journeys are often better left for manual testing. This gives you more room to explore the application and requires less time for scripting. Moreover, it means that if the UI of the application changes, you won’t have to go back into your script and figure out where it needs to be adjusted.

What’s the Risk?

When you're thinking about whether or not to automate, risk should be a leading factor for consideration. However, while there are many methods for evaluating which tests are highest risk, identifying them doesn't mean you should leave the remaining test cases out completely. In fact, those are perfect candidates for manual testing.

When you're considering risk, there are usually three tiers that you sort test cases into — high, medium, and low. As we discussed, most high-risk test cases will probably be included in your automation suite.

But, if you’re finding medium and low-risk cases that you still find valuable to test in order to provide feedback, then they might be something you want to run through manually.

Is It Providing Value?

Automation is a great way to speed up testing, but it also takes time for coding and maintenance, and that may not be worth it if you implement a test that isn’t adding value.

Say you run a test suite every day, and it passes 100 percent of the time. Is it worth it to keep automating if your results never change?

Just because a test was once automated doesn’t mean it should stay that way. Automation requires making changes, updates, and edits as the application code changes.

If a test is no longer providing value through automation, it might be better to take it back to manual testing to see if there are new parts to explore.

Communicate With Developers

Developers can’t tell you what to automate and what not to automate, but they can give you insight into the state of the application.

Keep open lines of communication with developers and other team members so that you are clear on what's being changed in the application during development cycles. Whether a change is a minor code tweak, a completely new feature, or a revamped UI will make a difference in whether the tests that follow belong in your automation suite or should be run manually.

To get additional insight, it may be helpful to pair with developers when going through the application to get a more in-depth look at what has changed and what areas or functions need to be tested.

Exploratory Testing

Exploratory testing is crucial to visually and functionally verifying new features and changes to the UI. Automation frees you up to test parts that you wouldn’t normally get to, so take it as a challenge to find a new bug or test for a different persona.

Human observation, curiosity, and creativity are paramount to being a tester. Automation can only take you so far when it comes to finding new bugs. That’s why manual testing, and especially exploratory testing, will always have a place in the QA process.

Visual Testing

Functional automated testing is not the only thing you have to worry about. Oftentimes, a functional test will pass because your application is working correctly, but it could still be visually inconsistent across browsers. Not only could this affect your layout and design, it could also affect the way your users are able to interact with your application.

In addition to functional testing with Selenium, you should incorporate automated screenshot testing into your workflow. This will help you account for visual cross-compatibility issues that you may have missed with functional testing by allowing you to compare screenshots of your web application side-by-side in different browsers.
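As a rough sketch of the idea (not the CrossBrowserTesting API itself), you can capture the same page in two local browsers with Selenium and diff the images with Pillow. The URL is a placeholder, and the two screenshots need matching dimensions for a pixel diff to be meaningful.

```python
from selenium import webdriver
from PIL import Image, ImageChops

URL = "https://example.com"  # placeholder

# Capture the same page at the same viewport size in two browsers.
for name, drv in (("chrome", webdriver.Chrome()), ("firefox", webdriver.Firefox())):
    try:
        drv.set_window_size(1280, 800)
        drv.get(URL)
        drv.save_screenshot(f"{name}.png")
    finally:
        drv.quit()

# ImageChops.difference needs images of identical size; a nonzero
# bounding box means the two renderings differ somewhere.
diff = ImageChops.difference(Image.open("chrome.png"), Image.open("firefox.png"))
print("Visual difference detected" if diff.getbbox() else "Screenshots match")
```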

Increasing Test Coverage

Automation will take care of repeated sequences, but in order to increase test coverage, you want to use manual testing to look at areas that have not yet been tested.

In this way, automation can be used as a tool to gradually help you increase test coverage over time. Test high-level functionality with automation, then fill in the gaps with manual testing. Look at edge cases and try negative testing to go beyond pass and fail results to gain deeper insight into your application.
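For instance, a negative test deliberately feeds the application bad input and asserts that it fails gracefully. A minimal sketch, assuming a hypothetical signup form; the URL, locators, and expected message are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Negative test: submit an invalid email and expect a validation error,
# not a crash or a silent pass. All locators below are placeholders.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/signup")
    driver.find_element(By.ID, "email").send_keys("not-an-email")
    driver.find_element(By.ID, "submit").click()
    error = driver.find_element(By.CSS_SELECTOR, ".error-message")
    assert "valid email" in error.text.lower()
finally:
    driver.quit()
```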

This is also a good time to figure out what new tests you would like to add into your automation suite, so you can continue to increase coverage. Evaluate manual test cases to determine whether they would be valuable in regression, and assess which tests may be missing from the suite already.

Considering ROI

At the end of the day, it’s best to consider return on investment when deciding whether or not to automate. Automating takes time to analyze, code, and maintain.

If the time it takes to automate isn’t worth the results that you’d get from the test, it may be something you want to do manually, or not at all.

 

Filed Under: Test Automation Tagged With: balance, manual testing, test automation

Webinar | Lighten Up: Methods for Making Your Automation Suite Leaner

September 19, 2018 By Alex McPeak


Selenium has been the testers' tool of choice for automated functional UI testing, but it's too often misused. This leads to inaccurate results, slow tests, and shallow feedback — and no one wants that.

If you're struggling to write Selenium tests that meet your goals, it might be time to take a step back and get a better understanding of the test cases that are best for Selenium as well as those that are better suited for testing at a lower level, or perhaps not automated at all.

Test Consultant Bas Dijkstra of On Test Automation joins CrossBrowserTesting to provide his expertise on how to de-bloat your UI tests and get your automation suite into shape. In this webinar, Bas will arm you with solutions to tighten up your end-to-end automation and streamline your Selenium strategy.

In this webinar you will learn:

  • What makes a good Selenium test, and what doesn’t
  • What is better tested at the API level, or not automated at all
  • How to future-proof your Selenium tests for better end-to-end testing
  • Live Q&A with Bas

Register Now

More about Bas Dijkstra: Bas Dijkstra is a Test Automation and Service Virtualization Consultant at On Test Automation who's always looking for more intelligent ways to use tools to improve test processes and software quality. He's been in the field for over 10 years, designing and developing test automation and service virtualization solutions that enhance and improve test teams and test processes. You can read more about him on LinkedIn or Twitter.

Filed Under: Test Automation Tagged With: Selenium, test automation, webinar

What Our Customers’ Top 100 Tested Browser & OS Combinations of 2018 Tell Us

September 18, 2018 By Alex McPeak


What are the top browsers, operating systems, and devices? We looked at the top configurations our CrossBrowserTesting customers tested to give you better insight.

After looking at the millions of browser and operating system configurations our customers have tested in 2018, 100 reign supreme. These results have come from a mix of people in different roles, industries, and company sizes, showing us the most commonly tested configurations out of 1500+ choices.

If you’re trying to decide which browsers and devices to test, this data should give you more to consider when it comes time to pick. Whether you’re testing in 5 browsers or 500, keeping in mind some of the most commonly tested configurations will help you outline a strategy for testing.

Top Takeaways

The top 100 combinations make up 68% of total tests – One hundred configurations may seem like a lot, but there's more where that came from. We have thousands of browser/OS/device combos that extend well beyond this view. However, the top 100 does make up the majority of our customers' browser testing, so it gives us a good benchmark of common trends and patterns.

The top 100 browsers include Chrome, Safari, IE, Edge, Firefox, and Opera – These are the 6 major browsers, so it's not surprising that they're the most common choices. It's important to note, however, that our customers are not only testing on the latest versions of these browsers. By including older browser versions in your testing, you get a more complete understanding of whether your application works both for Chrome users who are quick to update to the latest browser and for those who put it off and sit a few versions back.

Only Chrome and Safari are tested on mobile OSs – Unsurprisingly, people most often test Safari on iOS and Chrome on Android. However, in this data set, there are ranges of browser versions and device models. If you are keeping your mobile testing limited to the mobile Chrome and Safari browsers, make sure to keep older versions and diversified devices in mind. Also note that the mobile browser share of your users may differ depending on where your users are. For example, internationally, UC Browser and Opera Mini will be more popular on mobile. It's always good to use an analytics tool in order to get an accurate representation of your user base.

The #1 most tested browser/OS is Safari 11 in Mac OSX 10.13 – While Safari isn't overwhelming in browser share, this Mac OSX and Safari combo is the most commonly tested configuration. Most likely, this means a lot of testers aren't on a Mac or Safari themselves but realize it's a common configuration among their users.

The most tested browser is Chrome – Considering Chrome is the most popular browser, this makes a lot of sense. The most commonly tested version is actually an older one, with Chrome 43 hitting the #2 spot, but throughout the top 100 most people are testing in 63 or higher. It's evident that people are testing both older and recent Chrome versions, which is good given how popular the browser is.

For mobile browsers, people are testing the most in Safari 8 – 11 – Again, we're seeing that not only are people testing on a range of mobile devices, they're also being mindful of testing older versions of those browsers. For mobile Chrome, we see users go back as far as 58. Especially with mobile devices, not everyone updates as soon as a new version is available, and it's important to keep this in mind with mobile testing.

The most popular mobile browser is Safari 10 on iPhone 7 – Here's another data point that shouldn't surprise too many people. This configuration makes it into the top 10 most commonly tested browsers. Since this is one of the most common mobile devices, it goes to show that people are aware of these trends and include them in their testing strategy.

The second most often tested desktop browser after Chrome is Internet Explorer, with the most popular version being IE11 and testing going back to IE9 – Internet Explorer is a consistent problem for developers and testers alike. Although it may not be the most widely used browser, the fact that it is highly problematic means it deserves a spot on your browser roster. It's apparent that our customers know this, too.

The most popular mobile OS is iOS, with the iPhone 7 being the most popular device, followed by the Nexus 6P on Android 7.0 – Most people are testing iPhones, and the most recent models at that. This is because, unlike the fragmented Android market, most iPhone users will be using the most recent devices and operating systems. This is ideal because it means you can cover a larger portion of your user base without testing on too many iOS devices. However, as you dive into mobile testing, make sure you include Android phones too. Although their users are more diversified, it's good to get a baseline understanding of the Android user experience.

While it's popular for mobile, the Safari desktop browser only appears 3 times in the top 100 – Safari is predominantly tested on mobile devices and not often on desktop Mac OSX, appearing only 3 times in the top 100 despite also holding the #2 spot. Although Safari is the default browser on Mac just as it is on iOS, it seems people tend to download Chrome and use it on Mac OS more than they do on iOS. If you are testing on Mac, it's probably best to test in both Safari and Chrome, since we know users are on both and others test both.

Twenty-six of the top 100 configurations are mobile, and five are tablets – When it comes to mobile testing, it can be confusing to know how many devices to add to your mix, but this gives us a pretty good idea — about 25% should be mobile. This, of course, is not an exact number, and it should be based more heavily on your users and application — if you see high mobile traffic it should be higher, and vice versa — but it gives you a good starting point. Use this as a benchmark, and make sure to include iOS and Android with different browsers and models, as well as a few tablets, which will be telling when it comes to responsive design and verifying the way your application looks on different screen sizes.

Customers are testing as far back as Windows 7 OS and Mac 10.11 – Not only are our customers testing older browser versions, they're also testing older OS versions as well. This is important because while new browsers are coming out all the time, there are also regular OS updates that not everyone will install. Keeping these older operating systems in mind while testing will give you more insightful and accurate browser testing.

Conclusion

Our customers seem to have a pretty good idea of what they’re doing when it comes to browser testing. From the Top 100 data, we can see that users are testing the most popular browsers, operating systems, and devices to cover large market shares, but they’re also testing older versions to account for people that may not update right away.

As you approach testing in your organization, keeping these takeaways in mind will guide you to account for the most significant coverage. This way, no matter whether users visit your website on IE or Chrome, iPhone or Android, Mac or Windows, you can be prepared to provide the best experience possible to everyone.

Filed Under: Browsers Tagged With: browser testing, browsers, data, devices, operating systems

Did You Know? CrossBrowserTesting Has Open Source Plans

September 13, 2018 By Alex McPeak


So many great projects are open source, and we realize just how important it is to the software community over here at CrossBrowserTesting. Many of our integrations are with open source projects, predominantly Selenium testing; and as contributors ourselves, we know how important affordable testing and development tools can be to the success of those projects.

That's why CrossBrowserTesting offers a free plan specifically for open source projects, allowing you full access to the largest device lab in the cloud at no cost. We want to make sure the open source community is aware of this opportunity and has access to the tools that will help release new developments faster and with higher quality.

This means that all 1500+ of our browsers are available for use, so as changes are made and integrations are added, you can find bugs more quickly in the configurations your customers use and fix them on-the-spot with developer tools. And not only will we let you test manually on our enormous browser selection, we’ll throw in automation minutes as well. Enjoy automated screenshot comparisons, Record & Replay, testing with Selenium, and parallel testing with the most comprehensive browser testing tool on the market.

Open source relies on a community of dedicated contributors to help make quality software. As a part of SmartBear’s ongoing commitment to the open source community, we hope that providing open source plans for all-in-one web testing means that we can also play a larger part in our contribution to these projects.

Earlier this year, SmartBear received the highest ranking for open source-based testing acceleration in the 2018 Critical Capabilities for Software Test Automation published by Gartner, the world’s leading information technology research and advisory company. SmartBear has demonstrated a strong commitment to open source projects such as Selenium, OpenAPI Specification (OAS), Swagger tools, and SoapUI, resulting in SmartBear receiving the highest score. SmartBear recognizes that software development teams are increasingly using open source tools and frameworks, and is committed to supporting these organizations to obtain as much success as possible.

To qualify for an open source account, simply tell us a little bit about your project through our open source request form. Once you're approved, all you have to do to get a CrossBrowserTesting subscription free of charge is add our logo to your website or GitHub.

Never worry about how your application looks on different browsers again. Safeguard your software with a little help from CrossBrowserTesting.

Filed Under: Development Tagged With: crossbrowsertesting, free open source, open source

Challenges of Being the Team’s Sole Tester (Cont.)

September 11, 2018 By Alex McPeak


PART TWO

Recap

This is part two of a look with my testing colleagues in Datacom at the challenges of being “that sole tester” on an agile team. What do you mean you missed it? Catch up now!

Last time we looked at some of the fundamental theory which underpins so much in IT, "The Iron Triangle", and at how cross-functional teams evolved an understanding that everyone had to support testing for it to be effective. Teams who embraced this leveled up from being partly conflicted "forming" teams to "performing" teams.

Today we’ll look at how my peers found their voice and dealt with many of the challenges of being “that sole tester” …

Advocating for testing

A commonly recited tale within “forming” teams involved occasions where a story had been handed to them with just an hour to go until sprint end – and hence any testing performed would be rushed. This usually reflected a bias to see a story as “done” if development was finished.

It was typically these stories that, when passed into production, would be the ones with missed issues.

As a tester it’s your responsibility to contribute to a “forming” team’s evolution to “performing” by helping the rest of the team understand your role and your needs in that role. The key to doing this is advocating for testing throughout the sprint and bringing your needs to the team.

During pre-sprint elaboration of stories when the team talks about future work requests and tries to size them:

  • If part of a story was going to be difficult to test, then ask developers to also add tools to aid with the testability of the feature, and factor that into the size
  • If a story was going to be onerous to test, bring it up – it’s possible that the product owner might be comfortable accepting less testing, especially if it was an early part of a feature that would grow over a number of sprints. Looking at our “iron triangle”, this is essentially accepting there will be less scope to make it fit.
  • Alternatively, discuss with the team how a large testing task could be shared amongst the whole team. Looking at our iron triangle, this is accepting more resources/cost for testing in order to get the scope of testing you want.

An example of a shared team approach to aiding testing came from a series of sprints involving a page redesign – during the initial stories, the pages were only tested in IE, Chrome, an Android phone, and an iPhone. These represented the most popular browsers/devices currently used on the system.

As all the pages neared completion, testing was performed across the finalised pages using a larger suite of tests. The project tester drew up a matrix of items to be tested, with members of the team helping with allocated browsers or phones to test in more depth according to the instructions given to them.

During stand up:

  • It always helps to have a rough expectation of how long you expect to need to finish testing a story. As the sprint end comes closer, it helps to remind the team about how long you expect to need to test, especially if several pieces of work are due to complete close to the sprint end. The team may have help or solutions for you.
  • Some teams have embraced this in their planning, mixing up the size of the stories they prioritise to keep a good flow of work to be tested, rather than delivering several large stories at the end. Likewise, there is a level of maturity in accepting that a story won't get finished this sprint and bringing it to the scrum master and product owner's attention.

Communication

A key skillset sole testers talked about was being effective and influential in their communications with the rest of the team. When talking about concerns or problems, it was important to put together a strong argument.

This often felt daunting to testers, especially in forming teams where other disciplines had far greater numerical representation.

The following elements were considered the core features of effective communication when making a case for a risk:

  • Explain why you consider it to be a problem. Frame it in terms relevant to the business.
  • Highlight some examples if need be. People are highly visual, so anything you can show will often make your case strongly.
  • Outline what you can / cannot do.

As a tester it’s important to recognise you need to make the strongest case you can, but often others – whether the wider group as a consensus or the product owner themselves — will make a decision based on that risk.

As time went on in a team, this pressure relaxed and there was a feeling of trust that built up within the team as it shifted into “performing”. But it strongly highlighted how effective communication and influence skills are becoming increasingly central to everything we do.

The power of showing

The talk about effective ways of communication and “highlighting some examples” diverted to talk about “the power of showing”.

Persuasion is always easier when you're close and when you've worked together long enough to build trust in each other as a team. A charming story around this: one of our testers just has to gasp at something on her screen, and the rest of the team turns around and wants to know "what have you found?". Teams like this only form with time and trust from all parties, moving from that "forming" stage to building trust to "perform".

For many, seeing is believing. It can be easy to get trapped into a conversation when you describe behaviour you’ve witnessed of “well, it shouldn’t do that”. A demonstration can simply and effectively show “well it does”.

One example came from a distributed team with some members in different countries. A remote team lead passed a link for the web page to be tested in an email.

The tester responded that it was the wrong link. The team lead replied back that it definitely was the correct one. This went on for a few iterations.

Eventually the tester did a screen share with the team lead over Skype For Business, showing them clicking the link, and showing how the link was not valid and went nowhere. The team lead then responded immediately with the correct link the tester needed.

There is no substitute for colocation, but Skype For Business and other such screensharing tools are the next best thing. They are used extensively by our internal helpdesk to log and rectify issues with our machines, and so are logical tools for testers.

When this isn’t possible, recording your session with a commentary as a video can be really useful, and as a last resort, the tester favourite of taking screenshots for an email can be deployed.

However, email is not always the best way to interact. Along with the choice of channel, it was picked up how important it was to keep communication simple – you ideally want to take up as little time as possible. If you do it face-to-face or in real time in some form, you can be certain it won't be ignored.

A multi-page email might contain all the detail that might be required, but you can’t be sure it’ll be read or that care will be taken with its salient points. This was highlighted to me recently with a joke I sent to a colleague, which they asked me about afterwards. My colleague didn’t get the joke because the actual punchline was in the second line of the email … but he didn’t read that far. When we send an email, even one with a read receipt, how can we really be sure that it’s been read? And much deeper, how can we be sure it’s been understood?


There also came out from conversations some of the basics that testers have been talking about for decades and which apply to communications – avoid being pedantic, and recognise that anything you find issue with represents someone's hard work and effort. As such, there is always potential to upset and hurt people.

It's been noticed that without colocation, it's much easier for egos to be hurt in such interactions. A colocated team can build up a depth of social interaction that allows for closer, deeper teambuilding. Typically, people in colocated teams feel they move from "forming" to "performing" much quicker.

Absence

As the sole tester, you’re a key resource in a sprint. When you’re not there, there is no tester.

This initially created some feeling of guilt when a member of staff had to take a sick day or even worse, planned leave!

Ultimately, responsibility for coping with your absence does not rest solely on your shoulders. There is some expectation on the managers and team to cope with this.

Within the team realm, members should have enough cross-skills that any task on the board can be picked up by more than one team member. Of particular note from testers was the close relationship between testers and business analysts on a team, and how a good business analyst should be able to pick up a testing task and vice versa.

Solid teams learned to focus on making sure stories were done, and that meant people going “this task needs finishing … I could do it” rather than sticking solely to their own discipline tasks.

The mind maps we talked about earlier were invaluable for the team – if they had been created, they formed a road map of planned testing which the rest of the team could cover, and which would allow them to continue in the case of short, unexpected absences.

For longer absences, being able to get another team member to cover for you, and coaching them through the basics of what you do, also turned out to be invaluable. A team needs an understanding of the tools and product, so it's often much easier to get a member of the team to cover than to bring a new tester onto the team, unless the absence is significant.

Conclusion

Our workshops were run using a modification of the Lean Coffee model. We all collected about five post-it notes of ideas, and put them on the board, clustering common items as topics. We then could use five dots to vote up the topics which were important to us.

At the end of our final workshop, there was a lot of satisfaction with the stories and learning we'd shared. We didn't talk about every topic, but we'd talked about the most important ones, applying the prioritisation we'd spoken of as being so important.

In reading through our combined wisdom, you may feel a slight sense of déjà vu. If you read enough about agile testing, you'll come across items like "it's not done until it's tested", "the WHOLE team is responsible for quality", and "everybody helps testing".

These terms can be very validating to us as individuals. However, what we sometimes don't appreciate in reading this material is that as testers, it often falls to us to educate and build trust with the larger team in order to get to this "performing team" Utopia.

Achieving this is all about sharing our role and our struggles. Good communication and influence skills are pivotal because we need to get others on side, and yes, we may have to make a few compromises and understandings of others’ roles on the way.

In looking at what skills the tester of five years from now will need, we often find ourselves focusing on automation and technical skills. And these skills do have their place. But as our workshop highlighted, soft skills are a core part of being a tester moving forward.

The sole tester is required to be the ambassador for their craft and their viewpoint. To be effective, they need an ability to influence, persuade, and teach. From a training perspective, the challenge going forward in my company is how we as a discipline practice and build these skills.

That will be a focus of a future session, but feel free to share your experiences in the comments.

Further reading

Manage It! Your guide to modern pragmatic project management by Johanna Rothman
Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory

Filed Under: Test Automation Tagged With: guest blogs, mike talks, software tester

Google Chrome 69 is Here…And We Love the New Look

September 7, 2018 By Alex McPeak


Time flies when you’re surfing the web. If you blinked, you may have missed that Google Chrome just turned 10 (which is getting quite old in browser years).

From its humble beginnings as the underdog in the browser wars to the popular kid it is today, we’ve seen Chrome go through a lot of changes.

We hit double digits! Thanks to all of you for making #GoogleChrome’s last 10 years so awesome. pic.twitter.com/grBZusVmel

— Google Chrome (@googlechrome) September 4, 2018

 

In celebration of Google Chrome's 10th birthday, Chrome is actually giving us a present — your favorite browser is getting a facelift. While many of the changes from browser version to browser version over the years have been relatively small, the newest release gives Chrome's UI a design refresh.

“Every time you open your browser, you have a mission to accomplish: trips to plan, emails to send, stories to read, skills to learn. We built Chrome to help you do all of those things as quickly and safely as possible,” said Google’s blog announcement.

“In the case of Chrome, those changes happen every six weeks to bring you new features and security updates, but our 10th birthday update is bigger than normal.”

Here are all the new updates you’ll notice to the UI and beyond as Chrome 69 rolls out.

More Than Meets the UI

Whether you’re on mobile or desktop, you’re sure to notice Chrome 69’s new look.

Pick a screen, any screen.

We’ve got something new for you, no matter where you use #GoogleChrome https://t.co/dKBdXSOUUi pic.twitter.com/0GzvTmpoiJ

— Google Chrome (@googlechrome) September 4, 2018

 

The sleek upgrade includes more rounded edges, which you'll see first looking at tabs, but you'll also spot the change in text boxes, the search bar, and icons. In addition, you may see some small design changes such as new icons and a revamped color palette.

Some of the magic behind the new Chrome refresh is thanks to Google's open source project, Material Design. Material Design is an adaptable system of guidelines, components, and tools that supports the best practices of user interface design and is backed by open-source code. This is also what helps Chrome 69 flow so seamlessly across devices.

The idea here is to give Chrome a cleaner look that will also make your life a little bit easier thanks to its simplicity, especially for those of us who find ourselves with more tabs than we can keep track of (guilty as charged).

Better, Faster, Stronger Password Management

You’ll notice in addition to the new UI, Chrome is trying to make updates that will help streamline your life and work. With more accurate and accessible autofill, logging in and checking out should be quicker and easier than ever.

But what good is doing things fast if you can’t do them with security? Chrome 69 includes a password generator, so when it comes time to switch up your password from the combination you’ve been using for, well, literally everything, Google will supply you with a new one that’s strong and secure.

Coupled with the fact that Chrome will save these combinations to your Google account and allow you to access them directly from the Chrome toolbar, Google is working towards keeping your account secure without locking you out, which is something we can all appreciate.

Smart Searching

The omnibox (aka the search bar/address bar combo you get in Chrome) is getting smarter by showing you answers directly in the address bar without having to search or open a new tab.

If you're an aforementioned tab hoarder, you can also use the omnibox to find which tab the site you're looking for is open in and switch to it.

Upgrading to Chrome 69

If you’re ready to check out Chrome 69, it’s easy to update.

Simply:

  1. Open your Chrome browser
  2. Go to the three dots on the far right of your toolbar and click
  3. Click “Settings”
  4. When the new tab opens, click the three horizontal lines next to “Settings” all the way to the upper left
  5. Click “About Chrome” at the bottom of the list
  6. When Chrome 69 is available, there will be an option to download and install

Looking Back on 10 Years of Chrome

After 10 years, it’s evident that Chrome is still the most popular browser, and it’s given us many improvements to the way we use the web over those years.

While it’s important to test on the most recent versions of the browsers your customers use, it’s also important not to leave older versions behind. With every browser update comes changes that could affect how your web application looks or works.

If you want to update to Chrome 69, check out our instructions above. But don’t neglect old browser versions, and use a platform like CrossBrowserTesting to test in versions as early as Chrome 6.

Additionally, if you aren’t interested in upgrading, it’s still important to test the most recent version for those users who are, which is why you can now check out Chrome 69 in Live Testing.

Filed Under: Browsers Tagged With: chrome 69, Google Chrome, google chrome birthday, material design

Selenium Interview Questions You Should be Prepared to Answer

September 5, 2018 By Alex McPeak


Software development teams are constantly shifting, which means testers’ roles have to evolve along with them. As we’ve seen more teams go towards Agile, DevOps, and CI/CD, testers are encouraged to concurrently take on skills that better support these workflows and help with faster testing.

One of those skills is Selenium. As more software testers take on test automation and begin to shift left, Selenium testing is one of the most popular strategies, and a comprehensive knowledge of the tool is foundational to creating tests that provide the team with productive feedback and insight. While Selenium is a tool that benefits from continuous learning, there are certain things that interviewers will want to make sure you know when you're applying for your next role in automation.

To better prepare for that upcoming interview, here are the top Selenium interview questions you can expect to be asked.

Explain the different Selenium components.

It's important to be able to prove you know the full breadth of Selenium beyond writing a script. Selenium has an entire suite of tools, each used for a different capability. Review the history of Selenium to fully understand the differences between IDE, RC, WebDriver, and Grid.

Do you know the difference between Selenium 1 and 2? How about the programming languages and browsers it supports? Knowing the tool and rounding out your understanding of how it works will be something that employers value.

What’s your experience with Selenium?

Be able to talk about the different test cases that you use Selenium for. What worked well in your last role, and how do you identify a good case for automation? Is there a time that you automated something you shouldn’t have, and what did you learn from it? What automated tests have you cut out of your suite because they weren’t valuable? How many tests did you automate? Do you have experience using a tool to run Selenium in the cloud?

Giving an in-depth look into the breadth of your experience will help your interviewer understand your qualifications.

What are the advantages of Selenium? What are the most common Selenium challenges?

There are plenty of reasons that people gravitate towards Selenium. Have an answer for why people would choose it over other test automation tools and what it can help accomplish.

However, Selenium also has its challenges. Flakiness, dynamic content, reporting, mobile testing, and CAPTCHAs are a few things that may come to mind when you think of these limitations. Know which ones have solutions and how to implement them, and know which mean the testing is better off done manually.

What should Selenium be used for? What shouldn’t it be used for?

Going off of the last question, you should have a comprehensive knowledge of which tests should be automated and which are better off done manually.

This is not necessarily specific to Selenium, as you should be able to give an overview of the right times to automate, how to decide by creating a risk analysis, and how to prioritize your automation.

When do you use Waits and Sleeps?

Knowing the proper use of explicit waits, implicit waits, and sleeps is key to reducing flakiness in your tests, which is undoubtedly something your future employer will care about — no one wants false positives and negatives.

Be able to talk about the difference between implicit wait and explicit wait and sleep, when to use each one, and how.
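For quick reference, here is a minimal Python sketch showing all three side by side. The URL and locator are placeholders.

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Implicit wait: one global setting; every find_element call polls
# up to 10 seconds before raising NoSuchElementException.
driver.implicitly_wait(10)

# Explicit wait: poll for one specific condition on one element,
# with its own timeout. Usually the most precise choice.
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "results"))  # placeholder locator
)

# Sleep: blocks unconditionally, whether or not the page is ready.
# Usually the flakiest and slowest option.
time.sleep(5)

driver.quit()
```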

How do you handle common commands?

To verify your basic knowledge of Selenium commands, an interviewer will likely ask you how to:

  • Launch a browser
  • Click
  • Scroll
  • Drag and drop
  • Send keys
  • Find an element
  • Close and quit
  • Handle pop-ups
  • Navigate back and forth
  • Switch tabs
  • Minimize and maximize windows

Check out our Selenium toolkit for tutorials on these commands and more.
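Many of these commands map to one- or two-liners in Selenium's Python bindings. Here is a hedged sampler; the URL and every locator are placeholders, and some steps (like accepting the alert) assume the page actually produces one.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()                         # launch a browser
driver.get("https://example.com")                   # placeholder URL
box = driver.find_element(By.NAME, "q")             # find an element (placeholder locator)
box.send_keys("cross-browser testing")              # send keys
driver.execute_script("window.scrollBy(0, 500);")   # scroll
driver.find_element(By.ID, "go").click()            # click (placeholder locator)
driver.back()                                       # navigate back...
driver.forward()                                    # ...and forth
source = driver.find_element(By.ID, "card")         # placeholders for drag and drop
target = driver.find_element(By.ID, "slot")
ActionChains(driver).drag_and_drop(source, target).perform()
driver.maximize_window()                            # maximize (minimize_window also exists)
driver.switch_to.window(driver.window_handles[-1])  # switch tabs/windows
driver.switch_to.alert.accept()                     # handle a pop-up alert (if present)
driver.close()                                      # close the current window
driver.quit()                                       # end the session
```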

What is Page Object Model and Page Factory?

To get more readable, maintainable Selenium tests, you need a structure that likely involves design patterns such as Page Object Model and Page Factory.

You should be able to speak to what they are, the differences between them, why they’re important, and how you use them.
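Here is a bare-bones Python sketch of the Page Object pattern, with a placeholder URL and locators. The test talks to the page class, so when a locator changes, you fix it in exactly one place.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: owns the locators and actions for one page."""
    URL = "https://example.com/login"  # placeholder

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        # All locators below are placeholders.
        self.driver.find_element(By.ID, "user").send_keys(user)
        self.driver.find_element(By.ID, "pass").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

# The test reads as intent, not as a pile of locators.
driver = webdriver.Chrome()
try:
    LoginPage(driver).open().log_in("tester", "s3cret")
    assert "dashboard" in driver.current_url
finally:
    driver.quit()
```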

What’s your process for reporting?

Because Selenium has limited reporting capabilities, testers’ method of reporting will differ, and your future employer will likely be interested in getting a better understanding of how you report your Selenium tests.

How would you automate this?

In order to prove your technical capabilities, you may be asked to write a test case. This not only helps the interviewer understand whether you’re able to accurately use Selenium, it also helps them understand your process.

Straight From the Manager’s Mouth

These previous questions are common in the test automation space, but to give you a more well-rounded look at what you could be asked, we also asked a few others what Selenium interview questions they ask:

“There are a few questions I ask our candidates. In most cases, they are a bit general, however it will give us an overall picture of what to expect from your future employee. I value the logical and intuitive thinking of a candidate, as I understand that in tandem with their knowledge, this will make them perfect team players and valuable members of our company.

These questions help me assess whether a candidate follows the best practices in work with Selenium. It also helps me to understand how much expertise the candidate has and if he or she is ready for further learning and improving as a QA professional.

  1. In your opinion, why should we choose this tool, and what are its advantages among other similar tools? What is the fundamental benefit of Selenium?
  2. What are the limitations of Selenium? For what purposes is it not the best choice?
  3. What type of tests have you automated?
  4. What are some differences between CSS selector and XPath based locators? Explain how one is better than the other and why. What are the particularities of the testing process while working with CSS selectors and Selenium?
  5. What is a hybrid framework in Selenium?
  6. What is Page Object Model and Page Factory in Selenium? Provide an example in which case we can use them.”

–QArea Software Development Company
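For the CSS-versus-XPath question above, it helps to have a concrete comparison ready. A small illustration, assuming a hypothetical signup form:

```python
from selenium.webdriver.common.by import By

# Two locators for the same hypothetical element: <input id="email">
# inside <form id="signup">. CSS selectors are generally terser and
# consistently fast; XPath can also walk up the tree and match on
# visible text, which CSS cannot.
css_locator = (By.CSS_SELECTOR, "form#signup input#email")
xpath_locator = (By.XPATH, "//form[@id='signup']//input[@id='email']")

# An XPath-only trick: locate a button by its visible text.
xpath_by_text = (By.XPATH, "//button[text()='Sign up']")
```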

 

“1. How many tools are there in the Selenium tool suite? Tell me more about each tool.
2. We use WebDriver’s APIs to write our tests. What do you think about calling WebDriver’s APIs directly inside the test case?

These first two questions are easy questions to start the interview. There is no need to continue if they fail these.

3. What are the solutions to eliminate bad test design problems when using WebDriver? Compare and contrast your solutions.

I expect the interviewee to show deep knowledge on industry best practices on test designs such as Page Object or BDD.

4. What is your preferred control locating technique? And Why? Compare.

The interviewee should have good knowledge about capturing UI controls, the ability to determine the best locator for different situations, and knowledge about different locator techniques.

5. Do you have any experience integrating your Selenium tests with any CI/CD tools? If Yes, Explain your experience.

This question helps to identify the superstar candidates. One element is that in order to successfully answer this, they will have to have had some real experience working with CI/CD tools to explain the answer in detail."

-Thanh Viet Le, Automation Project Lead & Framework Architect, LogiGear

 

“1. What are some common exceptions you have encountered in Selenium and what are the steps you took to handle those exceptions?

This question is asked to understand the candidate’s knowledge and experience of Selenium exceptions and handling. This enables maintainable automation and we want to ensure these basics are understood by the software engineers.

2. What would the optimal Selenium automation framework look like?

This question is asked to understand the candidate’s in-depth knowledge of automation testing and the automation process, as well as gauge the candidate’s technical knowledge of how Selenium frameworks are architected.

3. Can you provide a few examples of major challenges you have experienced when automating test cases with Selenium and how you went about resolving them?

This question is asked to get an understanding of the types of complex test cases the candidate has automated with Selenium and the strategies the candidate used to resolve the testing challenges.

4. What are some major innovations or enhancements you have created in your Selenium automation framework?

This question is asked to identify the candidate's out-of-the-box thinking and assess the candidate's self-driven qualities. It allows us to gain insight into how a Selenium automation engineer thinks in regards to ensuring the frameworks are maintainable, robust, and portable.

5. Can you describe some changes that have been made in Selenium 3.0 from Selenium 2 and the reason for those changes?

This question is asked to learn about candidate’s complete knowledge of the Selenium testing framework and the candidate’s enthusiasm/curiosity about product updates/new versions. It also gives us a clear idea if they stay up-to-date on enhancements and best practices.”

-Vivek Kumar, Head of Automation, QASource Automation Testing Services

What are we missing? Leave the Selenium interview questions you’ve asked or answered in the comments.

Filed Under: Test Automation Tagged With: interview, Selenium, selenium interview questions, test automation

What Robotic Restaurants Can Teach Us About Test Automation

August 28, 2018 By Alex McPeak


To most people, going out to eat doesn’t evoke the image of robots. But to the four founders of this restaurant, robot chefs are the only viable solution.

In this automated kitchen, robots begin cooking your meal in temperature-controlled pans as soon as you submit your order. The bots are programmed to spin the ingredients and spices around until everything's perfectly cooked. Then, the bot dumps your meal into a compostable bowl.

While the restaurant may seem overrun by robots when you look around, there are more humans involved than you think.

Once your meal is in the bowl, a person takes it, adds yogurts, sauces, and dry toppings, then hands you your finished product. Behind the scenes, the Culinary Director and Executive Chef are the masterminds behind the delicious recipes, making sure the finished meals have the right balance of flavor. Additionally, there are people prepping the ingredients. Of course, there are also the four engineers behind the robots making sure that everything runs smoothly.

So what does this have to do with software testing?

In the shift from manual to automated testing, we often get ahead of ourselves in trying to automate as much as possible, or even attempting to automate everything.

But just like a fast-casual restaurant cannot operate on robotics alone, neither can we rely on automation to fulfill 100 percent of our testing. There are certain qualities that testers possess that we can’t automate.

So as we continue to explore the ways that automation can aid in our testing efforts, here are a few takeaways we can gain from robotic restaurants:

  • A Good Recipe is Key – Just as we need a chef to create the recipes for the robotic sous chef to follow, there needs to be a first test for automation to follow. Whether this is done through record & replay or by writing out a script, the quality of the first test is crucial to designing a suite that will provide insightful results. Just like a robot can't follow a recipe it isn't yet programmed to make and can't create one on its own, a test automation tool can only test what it is told. Additionally, every bowl is only as good as the recipe, just as every regression test is only as good as the first test or script.
  • Robots Don't Have an Eye for Detail – Just like humans are the final step in garnishing your meal at a robotic restaurant, human testers have an eye for detail that is critical in testing. There are many soft skills that software testers have — empathy, creativity, curiosity, and knowledge of the software, for example — that are impossible to teach to an automation system. This is why it's important to have real people who can perform exploratory tests on new features, layouts, or other noticeable application updates, bringing unique perception and understanding that provides valuable insight. Not to mention the customer service aspect — communication matters, and there should be someone on the team advocating for testing and for the end users who is able to talk to other stakeholders.
  • There is Risk Associated with Automation – Automation usually accounts for "normal" situations but doesn't consider edge cases or stress. What happens if a customer has an issue with their bowl? What happens if a concert gets out, and the influx of customers is 10 times what it normally is? What happens if you run out of utensils? These are the instances where automation can no longer solve the problem, and a human needs to step in. Automation can't go outside of what it's told, which means there will always be risk involved when unexpected situations arise.
  • Automation is Best for Repeated Actions – A McKinsey report said the food industry is most susceptible to automation because there are so many predictable physical activities involved. Similarly, during code changes and new integrations, it's important to repeat some of the same tests run previously to make sure everything still works. This is where automation becomes a good candidate for testing — regressions have the programmable repetition that helps to reduce human error. Rather than having someone perform the same actions over and over, letting a machine take over gives the tester more time to add new tests, resulting in more productivity in less time.
  • Cut Costs Without Cutting Jobs – Because of reduced labor costs, the robotic kitchen's bowls start at $7.50, around $3 less than comparable options. If your solo tester no longer has time to run all the regression tests they need across browsers in addition to new exploratory tests, introducing an automation solution can take over some of those time-consuming tasks and give you a higher ROI. However, automation still needs a human behind it: someone who understands automation and the tools well enough to properly execute and maintain the tests.
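To make the "repeated actions" point concrete, here's a minimal sketch, in Python with Selenium, of the kind of check that suits automation: the same steps replayed identically on every run. The URL, credentials, and element IDs are hypothetical placeholders.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # A hypothetical login flow: the same steps, in the same order,
        # every single run, which is exactly what automation is good at.
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # The assertion encodes the expected "normal" outcome; anything
        # outside this path still needs a human to investigate.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()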

The Biggest Takeaway

You can’t automate everything. Though you can add significant automation to a project, there will always be a place for human testers. Additionally, automation should be used as a tool to support manual work rather than replace it.

Interested in hearing more about how test automation is like a robot restaurant? Listen to Episode 6, "Manual vs. Automated…Restaurants?", of SmartBear's podcast series "The Good, the Bad, and the Buggy," featuring an interview with Paul Grizzaffi.


Filed Under: Test Automation Tagged With: manual testing, robots, test automation

CrossBrowserTesting Product Update: What’s New and What’s Coming Up

August 23, 2018 By Alex McPeak Leave a Comment

crossbrowsertesting product update


If you’re a CrossBrowserTesting user, you may have noticed a few new changes. Behind the scenes, our product team is always working hard to make the experience better for you. That’s why we want to share some of the exciting new features, releases, and improvements we’ve been working on this year, as well as some exciting things to come.

As always, feel free to get in contact with us and tell us what you’d like added or what integration would make your life easier. Who knows — you may just be the inspiration for the next product update.

15 Browser Releases

Did you know there have been 15 major browser releases in 2018 alone? New browser versions are coming out all the time, and every change could affect the way your web application works.

We’re dedicated to having the largest selection of browsers, which means we prioritize getting new browser releases to you within 24 hours after their rollout.

Check out our latest additions, and maybe spin up a test or two to make sure your application is working on all the latest versions.

  • Opera 50 – 54
  • Firefox 58 – 61
  • Chrome 64 – 68
  • Edge 17

3 New Real Devices

Additionally, we want to make sure our customers have access to popular mobile devices on the market, which is why we've made three new real devices available for testing.

Because these aren’t simulators or emulators, you’re getting pinpoint accuracy in your testing, so your results are the same as if you were holding it in your hand.

You can now find the following real devices for testing, plus 3 major OS updates:

  • Samsung Galaxy S7 & S8
  • iPhone 8

Record & Replay

We’re so excited to bring a Record & Replay option to CrossBrowserTesting so that teams of all technical abilities can get started with automation with no coding knowledge needed.

If you haven’t already heard, Record & Replay is our click-to-script feature that allows you to record the steps on your manual tests once and replay them back on a selection of browsers. This not only makes it much faster and easier for you to perform a test, it eliminates the human error that arises when you’re manually repeating the same test over and over.

And that’s not all — parallel testing lets you select your test suites and play them back at the same time against different browsers, so you can run your tests in Safari, Chrome, and Firefox all at once. Talk about a time saver.

If you need more reasons to try Record & Replay, here are seven of them.

WebRTC

We’ve switched to a new type of streaming to reduce lag and improve the accuracy of your Live Tests with WebRTC. WebRTC is an Open Source project that provides browsers and mobile applications with Real-Time Communications (RTC) that became standard in the last couple years. CrossBrowserTesting has recently upgraded to RTC, which means you should be having better experiences when it comes to testing applications that have videos and other media. You’ll literally be able to see the difference every time you test.

Other Updates

In addition to some of these more noticeable changes, you’ll see a few other updates that should help you out in your day-to-day.

  • Audio in Live Tests – You can now test sites with video and music to confirm the sound is working.
  • File Uploads on Mobile – Now you can test upload functionality on mobile devices.
  • New Local Connection Chrome Extension – If you're using local connections, make sure to update.
  • Record & Replay Selector – You can change the selector Record & Replay uses, which lets you find an element more reliably during playback (see the sketch below).
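As a quick illustration of why the selector matters, here's a sketch in Python with Selenium against a hypothetical page; the URL and attribute names are made up for the example.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/checkout")  # hypothetical page

    # Brittle: depends on the exact nesting of the page at record time,
    # so any layout change breaks playback.
    button = driver.find_element(By.XPATH, "/html/body/div[2]/form/button[1]")

    # Robust: keyed to a stable attribute, so playback finds the same
    # element even after cosmetic markup changes.
    button = driver.find_element(By.CSS_SELECTOR, "[data-testid='place-order']")

    driver.quit()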

Performance Improvements

Browsers and devices won’t do you much good if the experience inside the CrossBrowserTesting app isn’t up to par. In addition to new features, the entire CrossBrowserTesting experience is noticeably faster.

Here are a few places you might notice better performance:

  • Recorder launch times reduced by 15 seconds
  • Mac launch times 4 seconds faster
  • 30-second iOS Appium launch times
  • Significantly improved Android screenshot success rate

Coming Up

We’re always thinking about what’s next. We have a few browser and OS updates in the pipeline, as well as real devices like the Galaxy S9, iPhone X, and two TBD tablets. Additionally, we have a few major initiatives coming up as we plan out upcoming features and functions.

Again, make sure to let us know which devices you want to see or which integrations would make your life easier and help streamline your workflow, so we can take them into consideration.

You’ll see:

  • 5 Browser Releases
  • 4 New Real Devices
  • 4 Major OS Updates

Major initiatives include:

  • Laying down foundation work to get the latest devices out in a more timely manner and available for Live, Screenshots, and Selenium tests.
  • Queueing Selenium Tests for when you run out of parallel execution time.
  • Faster key and mouse input speed for laggy Android and iOS devices.
  • Continuing to work on Record & Replay, possibly bringing it to mobile or adding screenshots. Record & Replay is still in beta, so we're looking for suggestions and exploring new ways to improve.

To watch the full webinar that includes all the updates from the first half of the year with Product Manager Joan Liu, check out the recording.

Filed Under: Product Update Tagged With: browsers, devices, product update, web rtc
