CrossBrowserTesting.com

A Design, Development, Testing Blog

Must-Have Components of Your Successful Web App Test Strategy

February 18, 2020 By Sarah Mischinger Leave a Comment

In the early days of the web, websites were rather static and simple. Today, many web applications are complex enough to be what people think of as "software." That complexity comes with a lot of responsibility, since you want your app to perform as expected and please your users. This is why proper and thorough testing practices are mandatory for releasing high-quality web apps.

Pave the Way for More Efficient Test Practices During Development

Not thinking about testing during the development process can lead to unnecessary frustrations for your testers. Because testing web applications is no longer optional, you should consider testing requirements right from your project’s start.

Here’s what you can do during development to make testing more efficient:

Leverage Your Past Experiences

Suppose you know that the component you are creating may be faulty in a particular browser if it’s not programmed in a certain way. If you share this knowledge right away and write the functionality accordingly, you can save time and headaches when testing.

Tip: Share your experience with colleagues and make comments in the code if you know about error-prone components.

Use a Solid Foundation

Instead of reinventing the wheel, use frameworks and libraries that are already set up and optimized for your project: for example, Bootstrap for your CSS; Angular or Vue.js for your JavaScript; Laravel for your PHP backend; and so on. With these tools, you can work more efficiently and end up with fewer errors in your code. Besides, most frameworks already include guidelines for writing code, tests, and so on. As a result, you and your team can save a lot of time in development and testing, as following these rules will help you build a clean and consistent codebase.

Reuse Code Snippets From Other Projects

If you’ve already done something, why not do the exact same thing again? By reusing previously tested code from other projects, you can save time in development and testing. Of course, you still want to write tests for these snippets, but you can expect them to be less of a problem for you.

Clean Code Saves Everyone’s Day

Writing clean code should be a no-brainer: it's not only easier to test, but also improves your team's productivity and motivation. If you need to work with someone else's code, you don't want to spend too much time deciphering what's going on there, and clean code keeps it readable and understandable.

The tricky part of achieving clean code is that every single programmer needs to contribute. A great way to improve code quality is to enforce style and coding guidelines and have developers install linters in their IDEs that automatically flag style violations and common errors in the written code.

Be Prepared and Identify Browsers and Devices to Test

Beyond working well overall, your web app must look and behave correctly on the many different browsers and devices out there, which is what cross-browser testing checks. After all, the interface of your web app is what users see and interact with. If you present users with a faulty UI, you may be missing out on new loyal customers.

Before you start cross-browser testing, or any other tests that you must conduct in a browser, you first need to know where to test. For this purpose, work with your app's existing analytics data and determine your users' preferred browsers and devices. If you don't have such data, you can find general figures and statistics on the internet.

Based on the results, you then create a matrix with the browsers to be tested, their versions, and various devices such as smartphones and tablets. Also, make sure to make this matrix an integral part of your web app test strategy and update it from time to time, so you don’t miss out on anything new!

Next, evaluate the environments in your matrix and identify the troublemakers, i.e., which are the most demanding to test on and which are more prone to errors? Make sure to test on these first and work your way up to the easier ones. If you do it the other way around, you will likely break previously working code, and then have to test everything again.

Which Test Types Should You Use?

As mentioned earlier, web apps tend to be complex and need to be thoroughly tested before you ship them to your users. Let’s take a quick look at the most essential tests you should include in your web app test strategy to produce a high-quality product your team can release with confidence:

Functionality Testing

Does your web app work as specified? You can make sure of this by performing a variety of tests:

  • Unit tests in your frontend and backend code. For example: Does a specific input produce the expected output?
  • Database integrity tests. For example: Do database operations such as updating or deleting rows cause unwanted side-effects?
  • HTML and CSS checks. For example: Does your code comply with the W3C standard, and are your elements accessible via keyboard?
  • Frontend functionality tests. For example, do forms and other elements that the user can interact with work as intended? A great tool for this kind of testing is Selenium, which you can learn more about in our webinar here.
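
As a minimal illustration of the first bullet, a unit test asserts that a specific input produces the expected output. The `slugify` function below is a made-up stand-in for your own app code, not something from this article:

```python
import unittest

def slugify(title):
    """Hypothetical app code under test: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_specific_input_gives_expected_output(self):
        # A specific input should produce the expected output.
        self.assertEqual(slugify("Web App Test Strategy"), "web-app-test-strategy")

# Run with: python -m unittest <this_file>
```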

Usability Testing

Your team has seen your web app every day for some time now, so it's wise to get a fresh pair of eyes involved and check your web app's usability. You can do this by inviting a small group of external testers who match your app's target audience and watching them use your application. As a result, you can see what is working well and which areas still need improvement.

Interface Testing

Most web apps get their data from the server via an API. Of course, you need to make sure that both endpoints are individually tested well, but also that the right data is being sent and retrieved. Besides, this is the perfect opportunity to review any error messages from your web application to ensure that they are accurate and understandable.

Compatibility Testing

As discussed earlier, you need to make sure that your app works on different browsers, major browser versions, and your users’ favorite devices. In contrast to functional tests, cross-browser testing mainly focuses on checking whether the user is presented with a consistent experience, regardless of the technology used. Manually performing these types of tests can be tedious. However, testers can save a lot of time by creating and running automated tests with Selenium, a toolbox for browser automation.
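
As a sketch of what that automation can look like, the snippet below loops the same check over several environments with Selenium's remote WebDriver. The hub URL, the environment list, and the page under test are all placeholders you would swap for your own grid or cloud provider's values:

```python
# Placeholder hub URL: point this at your own Selenium Grid or cloud provider.
HUB_URL = "http://localhost:4444/wd/hub"

# Environments taken from your browser/device matrix (illustrative values).
ENVIRONMENTS = [
    {"browserName": "chrome",  "platformName": "Windows 10"},
    {"browserName": "firefox", "platformName": "Windows 10"},
]

def check_title(driver, url, expected_fragment):
    """Open a page and verify the title is consistent in this environment."""
    driver.get(url)
    return expected_fragment in driver.title

def run_matrix(url="https://example.com", expected="Example"):
    """Run the same check on every environment. Call this against a live hub."""
    from selenium import webdriver  # pip install selenium

    for env in ENVIRONMENTS:
        opts = (webdriver.ChromeOptions() if env["browserName"] == "chrome"
                else webdriver.FirefoxOptions())
        opts.set_capability("platformName", env["platformName"])
        driver = webdriver.Remote(command_executor=HUB_URL, options=opts)
        try:
            assert check_title(driver, url, expected), env
        finally:
            driver.quit()
```

Cloud services typically accept extra capabilities (account keys, screen resolutions, and so on); consult your provider's documentation for the exact names.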

Performance Testing

Your users and search engines hate slow websites. Therefore, it is extremely important that you continuously try to improve the speed of your web application. However, other factors can impair the performance, which is why we recommend the following:

  • JavaScript and CSS: Minify your JS/CSS code and serve it in one file if possible. Also, check that you only load code that your app actually uses!
  • Assets: Always remember to keep your pictures, videos, etc. as small as possible – compress, compress, compress!
  • Backend code and database queries: Optimize your API code and database queries to keep waiting times for requests as short as possible.
  • Stress tests, load tests, and capacity tests: To improve and scale your app in the future, you need to know its metrics: What is the maximum number of users who can use your app at the same time? What does it take to make the system fail? How much data can the server process at the same time?
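
To get a feel for those numbers before reaching for a full load-testing tool, even a rough stdlib-only sketch helps: fire concurrent simulated requests at a callable and look at latency percentiles. The `fake_request` function here is a stand-in, not a real HTTP call:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for one request to your app; replace with a real HTTP call."""
    time.sleep(0.01)  # simulate ~10 ms of server work

def measure(request_fn, concurrency=20, total=100):
    """Run `total` requests with `concurrency` workers; return latency stats in ms."""
    latencies = []

    def timed():
        start = time.perf_counter()
        request_fn()
        latencies.append((time.perf_counter() - start) * 1000)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(total):
            pool.submit(timed)
    # Exiting the context manager waits for all submitted requests to finish.
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1],
    }
```

Ramping `concurrency` and `total` upward while watching the percentiles is a crude but useful way to find where response times start to degrade.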

Be sure to check out LoadNinja to get accurate load tests in just minutes, using real browsers in the cloud.

Security Testing

Not only finance and e-commerce web apps have to be tested for security gaps! Take extra time and let testers look for loopholes and other vulnerabilities. Pay special attention to:

  • SQL Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication
  • Cross-Site Request Forgery (CSRF)
  • Components with well-known vulnerabilities
  • Encryption of sensitive data
  • Cookie and Session handling
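
To make the first item on that list concrete: SQL injection is typically closed off by using parameterized queries instead of building SQL strings by hand. A minimal sketch with Python's built-in sqlite3 module (the `users` table is invented for the example):

```python
import sqlite3

def find_user(conn, username):
    """Safe lookup: the driver binds `username` as data, never as SQL.

    The vulnerable version would interpolate it into the query text:
        conn.execute(f"SELECT id FROM users WHERE name = '{username}'")  # DON'T
    """
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# A classic injection payload is treated as a literal string and matches nothing.
rows = find_user(conn, "alice' OR '1'='1")
```

The same idea applies in any backend language: keep user input out of the query text and let the driver do the escaping.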

Conclusion

As you’ve seen, there are many things to consider to ensure your web app is thoroughly tested. We recommend that you create a web app test strategy that helps you keep track of your tasks and makes testing more straightforward for the entire team.

Since testing can be quite time-consuming, you should also look for tools that can help you be more productive. Give our cross-browser test services a shot and see how we can help you deliver high-quality apps to your users more frequently.

Filed Under: Cross Browser Testing Tagged With: cross browser testing, Selenium, testing, testing frameworks, testing skills, testing strategy

September Product Update

September 10, 2019 By Joan Liu Leave a Comment

Did you hear about our Summer Blockbuster? SmartBear has acquired Bitbar, expanding our breadth of Native Mobile App testing innovation. Bitbar is an established best-in-class product, and is a perfect complement to CrossBrowserTesting—so welcome to the family, Bitbar! Though we’ll pass on the bunk beds and the boat crashing.

Full-Page Screenshot Options

A few months back, we updated our screenshots to use CSS translate instead of our scroll-and-stitch method. However, CSS translate does not work with lazy-loading content, which requires a scroll to trigger new content to load. If your screenshots are not scrolling and content is missing, use our Standard Scroll Method under the Advanced Options.

Archiving Test Results

We had some customers reach out about archiving test results. We have added two new options for archiving automated tests. You can now archive individual results without navigating away from all the results, and you can archive an entire build’s tests.

New Browsers and Operating Systems

We released Firefox 68, Chrome 76, and Opera 62 on OSX. We also released the Samsung Galaxy S9s with 4 different browsers.

Bug of the Month

A user with an automated test was trying to launch Twitter links and was getting inconsistent test results. The expectation was that the links would launch in-browser, but some launched in the Twitter desktop app. We didn't install Twitter on our desktops, so we thought we had a cleaning issue.

We finally determined that last April, Microsoft was experimenting with auto-installing the Twitter app and associating it with twitter.com. Not all of our machines had it, as we had a variety of Windows 10 machines acquired at different times. This caused our user to have flaky tests but we were able to fix our machines to never launch the Twitter app, and uninstall the app whenever we found it.

Try out all the new improvements for yourself. Log in now to get started. 

Filed Under: Product Update Tagged With: browsers, product update, screenshots, testing

Introducing SmartBear’s First Podcast Series

July 10, 2018 By Alex McPeak Leave a Comment

We’ve all been there — you download a new mobile app or you go to a new website. You’re excited about the prospect of trying or buying something new. You tell everyone you know about the promise it holds, only to be disappointed. Or not — it comes down to your user experience.

From when a user first touches your app to when they leave, they are internally judging, evaluating, and considering whether they’re willing to spend their valuable time and energy on coming back again.

More than ever, companies are depending on their digital assets to drive the reputation of their brand, and exceptional software has become a foundational part of a positive user experience.

At some level, we all know and recognize this. However, those of us who work at SmartBear are even more acutely aware of how technology influences these everyday interactions.

When over 6 million software professionals and 22,000 companies across the globe use SmartBear tools to develop, deploy, test, and maintain some of the world’s most innovative applications, we know that the standard of quality can make or break your success.

This is the very inspiration behind SmartBear’s very first podcast series, “The Good, the Bad, and the Buggy.”

Our very own Alex McPeak and Bria Grangard will be sharing their own stories, talking to guests, and analyzing different case studies to consider what works and what doesn’t when it comes to online user experiences. Because a lot of the time, quality is in the details.

We hope this series encourages software teams to take a second look at their own projects to determine: is it good, is it bad, or is it buggy?

Listen to the first three episodes on SoundCloud now.

Don’t forget to share with your friends, coworkers, and family, and provide your feedback!

Additionally, you’ll want to check back often because we’ll be releasing a new episode bi-weekly.

Have feedback, an episode idea, or want to be featured as a guest? Reach out to Alex on LinkedIn.

Filed Under: Product Update Tagged With: design, development, podcast, testing

The Marketers’ Guide to Testing Your Website

June 18, 2018 By Alex McPeak Leave a Comment

Software testing has traditionally been the job of Quality Assurance teams, but as organizations realize the inherent value of a seamless online user experience, the idea of testing has also proven invaluable to other roles.

As marketing professionals evaluate the effectiveness of their strategies, tactics, and content, it’s no longer enough to blindly publish landing pages, design websites, and send emails.

By performing the following tests, marketers of all levels can ensure that the intent of their messaging matches the execution.

A/B Testing – Sometimes, communicating with prospects can seem like a shot in the dark. A/B testing lets you test two scenarios at once to determine which your audience responds to better. This is often done on a web page or app to better understand how you're communicating with your prospects. For example, if you were unsure what to write for the header on your website, you could A/B test two versions against each other to gain insights into what resonates best to capture leads, and use that data to choose A or B. In addition, analyzing the results of your test helps you make informed decisions that will convert leads in the future, such as the next time you have to write a page heading. Optimizely is the deluxe A/B testing tool, allowing you to go into your page and live-edit through the dashboard to set up tests. Optimizely keeps all your data organized and includes value-add features such as retroactive filtering, an intuitive stats engine, optimized targeting options, and saved audiences. For teams looking for something to fill more basic A/B testing needs, VWO is a comparable option.
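
Under the hood, A/B tools generally assign each visitor to a variant deterministically, so a returning visitor always sees the same version. A minimal hash-based bucketing sketch (the experiment name and the 50/50 split are illustrative, not how any particular vendor does it):

```python
import hashlib

def assign_variant(user_id, experiment="homepage-header", split=0.5):
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with the experiment name keeps assignments
    stable within one experiment and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "A" if bucket < split else "B"
```

Deterministic assignment is what makes the results statistically meaningful: each user contributes to exactly one variant for the life of the test.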

Responsive Design Testing – Did you know that having a responsive website improves your SEO? We bet your ears just perked up. It's true: Google prefers sites that are responsive. What does this mean for you? Basically, you need to make sure that your site works not only on desktop browsers but also on other screen sizes such as mobile and tablets. It's not surprising that mobile use is increasing year after year, not to mention exceeding desktop online activity. While mobile-friendliness should be a no-brainer, it may not always fall on the marketer's plate to design responsively. However, when it has a stake in your page ranking, it becomes a whole other story. Tools that provide a device lab and visual testing capabilities, like CrossBrowserTesting, can give you a look at what your website looks like on devices you don't have access to. This is a fairly easy way to guarantee your site's compatibility with the most popular search engine, so you can make sure all that backlinking work you've been doing isn't for nothing.

Email Testing – If you're slaving over email marketing but keep seeing your open rates drop, it might not be that your copy is lacking so much as your testing. Just like you need to make sure your website looks good on different devices, you also need to make sure your emails look good in different email clients. Unfortunately, engaging email messaging isn't always enough when Gmail and Outlook display HTML content differently. Litmus provides a platform that lets you build, edit, and preview your emails in over 90 different email clients (did you even know there were that many?), because the only thing worse than an email that goes out with a typo is an email that doesn't show up at all. Speaking of errors, Litmus also crawls your email to make sure that links work, images render, and content loads, so you can feel confident in every email you send out. By using a tool like Litmus, you can stop leaving your emails up to chance and spot discrepancies before you send them out to your 100,000-subscriber list.

User Testing – What better way to get feedback on your digital experience than asking the very people whose opinions you care about most: your users? User testing is an unparalleled way to get feedback from real people on the usability of your application. When you have a website, for example, it can be difficult to put yourself in your users' shoes and think about their needs beyond how you would want the application to look and work. User testing lets you gain insight into the experience of someone who may never have been on your app before, and platforms like UserTesting provide you with videos of real people giving their honest feedback on your website, mobile app, or another project of your choosing. By getting your product in front of your target audience, you can get more accurate insight into why your users perform certain actions or want things a certain way.

Performance Testing – Performance testing is fairly underrated in the marketing world, but it could be what makes or breaks the success of your application. Why spend all your time making the perfect graphics, designing the perfect layout, and writing the perfect copy if your website is going to take forever to load or crash when too many people visit? According to Radware, 51 percent of online shoppers in the US say that site slowness is the top reason they'd abandon a purchase, and a 2-second delay in load time during a transaction results in abandonment rates of up to 87 percent. So it's not just your QA team's problem to test for performance; it's in your best interest if you want to see your marketing efforts succeed. Leveraging tools like LoadComplete and LoadUI will help you determine breakpoints in your application so you can get a better understanding of how fast it is and how much it can handle. Trust us: around the holidays, performance testing is a lifesaver.

Conclusion

Testing is all about checking the quality of your application, but it shouldn’t be limited to QA teams. With plenty of tools on the market, it’s becoming easier for marketers to conduct their own tests in order to optimize websites, landing pages, emails, and other content. By including testing in your marketing, you can know that your efforts are reaching a higher level of quality and trust that your messages are getting across to more users each time they interact with you online.

Filed Under: Design Tagged With: marketing, mobile testing, testing

What Testing Conferences Look For When You Submit a Proposal

April 4, 2018 By Alex McPeak 1 Comment

If you've ever been to a testing conference, you've probably come back inspired by the speakers and even dreamed of presenting yourself one day. Submitting to a conference can be as intriguing as it is intimidating. Especially for first-timers, figuring out where to start, what to talk about, and how to get accepted can seem overwhelming.

However, the benefits of actually submitting a proposal and being accepted to speak far outweigh the fear you first experience. Here are a few reasons why you should submit to testing conferences and some tips on how to start.

Why Should You Submit?

Speaking at a conference is a one-of-a-kind opportunity to share ideas and connect with others in testing. It doesn’t hurt that it’ll boost your resume or further your professional development, either. There’s no better way to network with other testers than to do so in person, and conferences draw diverse minds from across the world who are talking about the same topics you’re interested in.

Being able to participate in these conversations means staying up to date on current trends. Even after conferences are over, you have the advantage of bringing newly acquired knowledge back to your organization to better your team and build a learning culture.

In an effort to attract more submissions and more diverse speakers, a lot of conferences are also moving towards #nopaytospeak. This means in addition to advancing your public speaking skills, you’d also be compensated to travel, attend, and learn from other speakers.

While it can be nerve-wracking to submit a proposal, let alone build up the courage to speak in front of a lot of people, anyone who has done it will tell you what a great experience it is. Additionally, once you get the first time out of the way, you'll probably even find yourself enjoying the experience and applying to more.

How to Submit a Proposal

  • Talk from experience – Your everyday experiences are some of the best material you could use for a conference session. Using real-life lessons is a foolproof way to ensure you’re providing value to others who can learn from the same failures and successes as you. “I prefer to hear experience based talks, they are more relatable, more emotional,” says Richard Bradshaw. “If you’re coming at me with a theory based talk, I want to see some data and hear about some experiments.” Look back on the biggest learning moments in your personal and professional life if you’re looking for a place to start.
  • Have a clear takeaway – You want to be engaging, but you also want to provide clear, actionable takeaways that will help people in their everyday lives. Providing your ideas and experiences is great, but you risk being forgettable if you’re not being constructive. Ashley Hunsberger notes when reviewing papers for Selenium Conference “Can I identify if there are things attendees will be able to walk away doing after the talk? Or ideas presented that will allow attendees to think about moving forward?” Asking these questions helps identify if those critical takeaways are being included.
  • Make it yours – Have fun with your topic and make sure to add some of your personality. No one wants to watch someone speak on a topic they're clearly not passionate about, and it probably won't come off as exciting in your abstract, either. Alan Richardson says to find a topic you feel strongly about and identify your "unique slant". The more passion you bring to your topic, the more appealing it'll be for reviewers and the more engaging it'll be to your audiences. Ashley also asks, "Would I WANT to see this talk? If I'm an attendee, is this a talk I want to go in and watch? This is very subjective and can vary reviewer to reviewer. But, I always ask – do I want to see this myself?" Your abstract could be highly informative, but if it doesn't translate to the stage, it might be better left for a blog or whitepaper.
  • Make your contribution valuable – Upon asking his biggest piece of advice for submitting to testing conferences,  Rob Lambert said to make the submission accurate, to the point, and easy to read. “Make it clear what you will talk about, what experience you have and why you can be trusted to deliver a talk to the conference audience,” he said.  “Sadly, most people skip this step and assume that because they have a potentially great talk, that it will get accepted on title and fluffy abstract alone. That’s not true. If you can’t articulate your idea clearly as a submission, why would anyone believe you can do that from the stage? Put in plenty of effort to the submission as it’s the first hurdle to getting your message heard by those you seek to hear it.” Rob points out that conference organizers have one main job — creating an amazing conference, and you need to make them trust that you can deliver on your talk and your promises.
  • Get a second set of eyes – There’s no need to work on your submission alone, especially if it’s your first time. There are many experienced speakers in the testing community that are more than happy to offer a hand and give feedback. Programs like Speak Easy are also dedicated to mentoring speakers and increasing diversity at tech conferences. At the very least, you should have a colleague, friend, or peer review your proposal before you submit to make sure you’re getting your point across and communicating clearly. There are also free platforms like Hemingway App that will help with spelling, grammar, and the general flow of your words.
  • Know your audience – Applying to a conference is kind of like applying for a job: you want to tailor your content for each one. This doesn't mean you can't reuse material; in fact, a lot of great speakers do this successfully! The key is making sure that your content is relevant to the conferences you're submitting to, and that you're not offering off-topic proposals just because you saw a software testing conference's call for papers. Rob Lambert says to define your target audience and create a single "avatar" or "persona" to write the talk for. This will help you narrow down your topic and communicate clearly with the people who will actually be attending that conference. Additionally, knowing your audience means understanding what the people reviewing your abstract are looking for, too.
  • Follow directions – Make sure that you’re paying attention to the guidelines for submission — is there a theme you have to follow? A specific length of time your talk should be? What’s the deadline? Ignoring these is the easiest way to take yourself out of the running. Rob says to create a checklist of these important details and make sure you’ve prepared supplementary materials like a speaker bio and professional headshot that conferences will often require.
  • Your title is important – The biggest mistake you can make when submitting a proposal is putting all the work into the abstract without thinking about the title. The title draws people to your talk and gets them intrigued right off the bat. "It's the first thing I read, it sets the scene for the rest of the abstract. Does it make me want to read on? Is it targeted? Think purpose and audience," says Richard. Taking the time to put creativity and consideration into your title means you have a better chance of being accepted to speak, and attendees will be more likely to listen to what you have to say.
  • Don't be afraid of rejection – Keep in mind that many conferences are highly competitive; just because your proposal wasn't accepted doesn't mean reviewers didn't like or consider it. A lot of the time, the reason could be as simple as too many papers pitching a similar topic. Ask the reviewers for feedback so you can improve your submissions next time. Don't take rejections to heart; treat them as a learning experience and keep applying.

What’re You Waiting For?

Attending and submitting to testing conferences is an important part of being an active member of the testing community, and even more, staying relevant in a quickly evolving space.

Year after year, we see new methods, practices, and approaches in testing. The best way to keep up to date with emerging trends in software is to continue to learn from each other.

Deadlines for submissions are nearing, and conference season is just around the corner, which means now is the time to start brainstorming what you’d like to share with your peers. So what’re you waiting for?

Filed Under: Events Tagged With: conferences, professional development, testing

5 Lessons Agile Teams Can Learn From Netflix

January 16, 2018 By Alex McPeak Leave a Comment

Emerging from the days of video rentals and cable came a new way to digest as many movies and television shows as we want, when we want — Netflix.

There's no doubt that the binge-watching business is booming. Netflix enjoys 109.25 million subscribers, who consume more than 140 million hours of content every day, which means it doesn't look like they're going the way of Blockbuster anytime soon.

So why do streaming services experience such high success compared to our previous viewing options? Why do people spend 120 minutes per day on Netflix, but only 30 – 60 minutes per week watching regular television? If you consider that Netflix is operating in an Agile fashion, it makes sense.

That’s not to say that the development team does or does not follow an Agile methodology, but it has to do more with the way that Netflix releases content. Similar to Agile development, it seems like Netflix also strives to focus on fast feedback, iterative changes, and cross-collaboration.

By taking a closer look at the way the company operates behind the scenes (or should we say, behind the screens), Agile development teams can learn quite a bit from Netflix. If you’re still not convinced, here are a few ways Netflix echoes Agile development and can teach teams to be more successful when building, testing, and delivering software.

  1. Do Better Than a Pilot Episode – Have you ever noticed that Netflix doesn’t have pilot episodes? That’s because a pilot episode is essentially a big test to see whether the network and the network’s audiences like the first episode enough to keep watching more. But what happens if they don’t? A lot of time and money wasted. Instead, Netflix pulls the perfect “balance of intuition and analytics” during production so that they know shows will be successful. House of Cards was greenlighted based on deep data analytics that told Netflix it would be well-received. In fact, Netflix Originals have a 35% higher success rate than new TV shows released on-network. Don’t wait until the end of development to test, and don’t let your users find faults for you. Integrate testing into every step of the software development lifecycle so you can be sure you’re delivering something your customers will love.
  2. Integrate Feedback – As per the manifesto, one of the key components of a successful Agile team is allowing for fast feedback, and just as importantly, implementing it in your next release. You don’t have to tell Netflix twice. Take House of Cards again — amid recent allegations about Kevin Spacey, there was a decision to be made about keeping him on the cast or canceling the show, both of which the company knew would agitate viewers. Instead, they decided to continue production with Robin Wright as the lead, showing that integrating feedback is a win-win. Whether that feedback is from testers while the application is in production or from customers after the software is delivered, incorporating feedback maintains continuous growth and improvement.
  3. Be Compatible with Your Customers – Netflix supports 900 different devices, which is almost as many as CrossBrowserTesting has in the cloud. Between laptops, smartphones, and tablets, not to mention different browsers, operating systems, and screen sizes, consistency is key. Netflix has a huge customer base, which means they probably have a lot of diversified device usage. If they didn't make the application accessible on each of those devices, they simply would not enjoy the amount of success they do today. Cross-browser testing is a no-brainer, so be sure not to skip it if you really want to release a high-quality application for every user.
  4. Release Often – One of the major advantages of Agile is that it allows teams to release software more often instead of having longer release cycles and only delivering every few months or even years. Netflix has found that releasing new movies and shows on a weekly basis keeps customers excited and intrigued, so they don’t get bored by the same selections over and over. That means once bingers are done with Stranger Things, they still have the next season of Black Mirror to look forward to. It’s no secret that consumer expectations and demands these days are high — they constantly want their hands on the next best thing. But at the end of the day, it’s pretty straightforward: if people want to consume your product, give them the means to do that as much as they want with Continuous Integration and Delivery.
  5. Embrace Detailed User Stories – A user story describes a software feature as a customer would see it. The goal of Agile is to bring user stories from ideation to deployment, considering who the user is, what they want to accomplish, and how they accomplish it with that feature. Netflix takes this to the next level by understanding every user story with advanced personalization. In fact, 75% of Netflix views are a result of their recommendation engine. By creating different trailers and artwork for content based on viewers’ previous movie and show choices, they’re able to more precisely communicate recommendations to people based on their interests and behavior. Take it from Netflix and get familiar with user personas and the customer journeys that take place throughout your application to better plan throughout development.

Whether you stream shows or surf TV channels, there’s a lot we can learn from the media mogul. As Agile teams embrace speed and quality throughout testing and development, Netflix provides the blueprint for success from production to breakout deployment.

Filed Under: Continuous Integration Tagged With: agile, netflix, testing

Top Takeaway Trends from the SmartBear State of Testing Survey: Cross-Browser Testing

September 11, 2017 By Alex McPeak Leave a Comment


Over 5,000 professionals in Software Development, QA, and Testing responded to the SmartBear 2017 State of Testing Survey, providing insightful feedback on how they organize their testing strategy and execute daily responsibilities.

Of special interest to the CrossBrowserTesting team were the findings on browser testing, mobile device testing, parallel testing, and cloud testing, which we highlight here.

Browser Testing

According to the State of Testing report, four out of five teams test on multiple browsers, but half only test on the latest versions.

It’s evident that the majority of testers know the importance of running tests on more than one browser. However, the fact that many don’t find the need to test on previous browser versions can hinder an otherwise thoughtful cross-browser testing strategy.

Older browsers are often the most problematic when it comes to rendering code. If browsers are not updated automatically and users don’t take the time to update, their browser is unaccounted for in testing, even if they’re just using an earlier version of Chrome.

Bugs can pop up with even the smallest browser update, sometimes at the decimal-point version level. Of course, this becomes a bigger issue the older the browser is, which is why it’s in the best interest of testers to add more than one browser version to their browser testing arsenal.

Additionally, the largest percentage of respondents (27%) test on only three browsers. Again, although teams are seeing the value in testing more than one browser, it’s not quite enough. Considering that teams are only testing a few browsers in the latest version, this leaves a significant gap in a lot of organizations’ web testing approach.
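One way to close that gap is to make the browser/version matrix explicit and drive every test run from it. The sketch below is illustrative only: the browser names and version labels are arbitrary examples, and `run_suite` is a hypothetical stand-in for a real test runner (such as a remote Selenium session).

```python
# Illustrative browser/version coverage matrix -- the browsers and
# version labels here are arbitrary examples, not recommendations.
COVERAGE = [
    {"browser": "chrome",  "versions": ["latest", "latest-1", "latest-2"]},
    {"browser": "firefox", "versions": ["latest", "latest-1"]},
    {"browser": "edge",    "versions": ["latest"]},
    {"browser": "safari",  "versions": ["latest", "latest-1"]},
]

def run_suite(browser, version):
    """Hypothetical stand-in for a real test runner (e.g. a Selenium session)."""
    return f"ran suite on {browser} {version}"

def run_matrix(coverage):
    """Expand the matrix into one run per browser/version pair."""
    return [run_suite(c["browser"], v) for c in coverage for v in c["versions"]]

results = run_matrix(COVERAGE)
```

The point of driving runs from one declared matrix is that adding an older version is a one-line change, so coverage gaps become visible rather than implicit.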

Mobile Device Testing

Two-thirds of survey participants reported testing on mobile devices, and 80 percent of respondents reported testing on more than one mobile device. Unsurprisingly, the most common mobile devices tested are Androids at 92 percent and iPhones at 85 percent.

The majority of testers — 60 percent — are only testing on two different devices. Fortunately, 64 percent of mobile UI testers do not exclusively test on the latest configuration of mobile devices, which means they are incorporating some good practices when thinking about web and browser testing.

Still, 36 percent of testers only focus on the most recent device iteration, which means they are often missing large segments of their user base. Though organizations are making a noble effort to incorporate mobile testing for an increasingly on-the-go consumer, there is plenty of room to expand mobile device testing.

Incorporating older iPhones, for example, accounts for users who have not upgraded to the most recent version. As we’ve pointed out in the past, the array of options on the Android market is ideal for consumers who like to tailor their choices, but it complicates the job at hand for developers, requiring testers to target more diverse platforms.

The issue of fragmentation causes more drastic differences in screen resolution and performance due to varying operating systems, models, and brands, which means that the more Android devices included in a mobile testing strategy, the broader the coverage will be.

Parallel and Cloud Testing

The survey showed that the majority of software teams are only running a few tests in parallel, or none at all.

Seventy-seven percent (77%) of respondents are doing some parallel testing by running at least two UI tests in parallel, with the highest percentage of respondents (35%) saying they run two to five UI tests in parallel. The second highest response, however, was running no parallel tests at 23 percent. Though the majority of teams are parallel testing, they are evidently not running enough tests in parallel.

Those running parallel tests are also more likely to use a cloud service for testing than those who don’t do any parallel testing. Fifty-six percent (56%) of respondents who reported doing some parallel testing also reported running some tests in the cloud, while respondents who don’t do any parallel testing also don’t run any tests in the cloud.

Only 30 percent of organizations utilize a cloud service for testing, while 34 percent reported not running any tests in the cloud. This suggests that the total number of organizations utilizing both parallel and cloud testing hovers around 43 percent.

Where parallel testing allows test scripts to run concurrently on multiple browsers and devices through one environment, cloud services offer a great complement, giving testers on-demand environments in which to run these parallel tests. Though it’s promising to see teams leveraging both parallel and cloud testing, it’s obvious the value of these methods has not yet been fully recognized, and there is ample opportunity to incorporate more of them in order to speed up testing times and increase quality.
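As a rough sketch of the idea, parallel testing simply means fanning the same test out across configurations at once instead of one at a time. The configurations and the `run_ui_test` stub below are placeholders, not a real cloud-grid integration; Python’s standard `concurrent.futures` stands in for whatever concurrency mechanism a team’s framework provides.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder configurations -- in practice these would be capabilities
# sent to a remote (cloud) Selenium grid.
CONFIGS = [
    {"browser": "chrome",  "os": "Windows 10"},
    {"browser": "firefox", "os": "Windows 10"},
    {"browser": "safari",  "os": "macOS"},
    {"browser": "edge",    "os": "Windows 11"},
]

def run_ui_test(config):
    """Hypothetical stand-in for opening a remote session and running one UI test."""
    return (config["browser"], "passed")

# Run one UI test per configuration concurrently instead of sequentially;
# wall-clock time approaches that of the slowest single configuration.
with ThreadPoolExecutor(max_workers=len(CONFIGS)) as pool:
    results = list(pool.map(run_ui_test, CONFIGS))
```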

Other Takeaways

In addition to the preceding topics, the State of Testing Survey covers trends in other areas including API testing, automation, and development, among others.

A few other takeaways we found interesting include:

  • Web applications are the most common type of application being tested at 85 percent, followed by APIs, desktop apps, and mobile web apps.
  • Agile is the dominant model for software development, but most teams that describe themselves as Agile are only releasing on a monthly basis. This falls short of Agile best practices, which encourage frequent releases, and calls into question how teams define Agile.
  • Test automation is growing as a practice, but still, less than half of tests (44%) are automated on a daily basis.
  • The top challenges testers face with UI test automation include test stability, object identification and management, and test maintenance.

The trends reflected in the SmartBear State of Testing Survey reveal not only the processes, strategies, and struggles of software teams today, but also hint at where those trends are headed and suggest ways to improve current procedures.

By understanding the full potential of software teams and tools among industry shifts, organizations can better prepare to meet the needs of a faster development and delivery cycle.

To explore the extent of SmartBear’s 2017 findings, you can read the full State of Testing report here.

Filed Under: Uncategorized Tagged With: report, survey, testing

What’s the True Cost of a Software Bug?

August 8, 2017 By Alex McPeak Leave a Comment


Test early and test often — that’s what everyone says will help you avoid the high cost of deploying a software bug. But a bug might not seem like a big deal without a dollar sign attributed to it. Why invest in testing if you can just fix your mistake after? What’s the true cost of a software bug?

While there’s no set cost you can ascribe to a software bug found after the product release, because it’s highly dependent on the organization’s size, customers, employees, and debugging resources, we can look at a few statistics and examples that show just how damaging one can be.

Starting with the big picture, software failures cost the worldwide economy $1.1 trillion in 2016. These failures were found at 363 companies, affected 4.4 billion customers, and caused more than 315 years of lost time. Most of these incidents were avoidable, but the software was simply pushed to production without proper QA.

Those may be stunning numbers, but you could still be thinking it doesn’t mean much to your specific organization, so let’s look at it on a smaller scale.

IBM found that the cost to fix an error found after product release was 4 to 5 times higher than one uncovered during the design phase, and that an error not identified until the maintenance phase can be up to 100 times more expensive.

The costs go up as the bug moves through the SDLC. For example, IBM estimates that if a bug costs $100 to fix in the requirements-gathering phase, it would cost $1,500 in the QA testing phase and $10,000 once in production.
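Taking those IBM estimates at face value, the escalation can be expressed as a simple multiplier table; the phase names and numbers below just restate the figures from the paragraph above.

```python
# Cost multipliers implied by the IBM figures above ($100 baseline).
BASE_COST = 100
MULTIPLIERS = {
    "requirements": 1,    # $100 -- cheapest point to fix
    "qa_testing": 15,     # $1,500
    "production": 100,    # $10,000 -- most expensive
}

def fix_cost(phase):
    """Estimated cost to fix a bug first found in the given phase."""
    return BASE_COST * MULTIPLIERS[phase]
```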

And software bug expenses aren’t just related to the impact on internal operations. When a company announces a software failure, it loses an average of $2.3 billion in shareholder value on the first day alone.

The fact of the matter is that a software bug can affect everything from indirect costs like customer loyalty and brand reputation to more direct losses in business revenue and wasted time.

Companies Who Learned the Hard Way

You might think a software bug can only be so bad, or that any bug big enough to cause significant financial distress must be a complex mathematical error. In reality, some of the most detrimental bugs stemmed from minor mistakes that developers just didn’t catch because the software wasn’t put through proper testing.

The following are companies who suffered monumental losses — some that almost put them out of business — due to easy errors, new software, or a bad line of code.

NASA – The Mars Climate Orbiter was a robotic space probe that NASA launched in 1998 to study the climate and atmosphere of Mars. All was well until communication was cut off a year into the mission, when the Orbiter got lost in space. When dealing with spacecraft engineering, there’s a lot that could go wrong — it’s literally rocket science. However, the error would have been an easy fix if caught in time. The $125 million spacecraft presumably orbited too close to Mars’ surface and disintegrated because the engineering team failed to convert their measurements from U.S. units to metric. And that’s not the only time a programming blunder has destroyed a NASA rocket.
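The failure mode is easy to reproduce in miniature. The snippet below is a hypothetical illustration, not NASA’s actual code: one routine reports impulse in pound-force seconds (U.S. units) while the consuming code assumes newton-seconds, leaving the value off by a factor of roughly 4.45.

```python
LBF_S_TO_N_S = 4.448222  # newton-seconds per pound-force second (SI conversion)

def thruster_impulse_lbf_s():
    """Hypothetical ground software reporting impulse in U.S. units (lbf*s)."""
    return 100.0

def impulse_as_used(raw):
    """Hypothetical navigation code assuming the value is already metric (N*s)."""
    return raw  # BUG: no unit conversion applied

raw = thruster_impulse_lbf_s()
wrong = impulse_as_used(raw)       # treated as 100 N*s
right = raw * LBF_S_TO_N_S         # actually ~444.8 N*s

# The trajectory model underestimates the impulse by a factor of ~4.45,
# the same class of mismatch that doomed the Mars Climate Orbiter.
```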

Knight Capital Group – As one of the biggest American financial services firms, Knight had a lot to lose, and in 2012 they lost $440 million of it. All it took was thirty minutes to drop 75 percent of their shares after an automated set of orders was sent out all at once to nearly 150 stocks instead of over a period of days like it was supposed to. The function, known as “power peg,” made stocks move up and down, lowering the stock price of all the affected companies. The loss was almost four times the company’s profit and would have put them into bankruptcy if a group of investors hadn’t come to the rescue.

AT&T – In January of 1990, AT&T failed to meet the basic needs of its customers as thousands attempted to call their friends, family, and airports. Long distance calling was down for nine hours, and customers were outraged after experiencing a total of 75 million missed phone calls and 200,000 lost airline reservations. Initially, AT&T thought it was being hacked, but the issue was due to a software bug. AT&T had recently “updated” the code, but the change of speed ended up being detrimental to a system that couldn’t keep up. Though the outage lasted less than a full day, AT&T lost an estimated $60 million from long distance charges that didn’t go through — all the result of one line of buggy code.

EDS Child Support – In 2004, a computer system managed to overpay 1.9 million people, underpay 700,000, accumulate $7 billion in uncollected child support, backlog 239,000 cases, and get 36,000 new cases “stuck” in the system. This happened after a software company introduced a complex new IT system to the UK’s Child Support Agency. Meanwhile, the Department for Work and Pensions was restructuring their entire agency. The software was incompatible as the systems clashed, and as a result, the oversight cost taxpayers $1 billion.

Apple – For many consumers, Apple can do no wrong, but when they replaced Google Maps with Apple Maps in iOS 6 in 2012, they managed to put a pause on customers’ endless adoration. The update was little more than a product of corporate politics — Apple no longer wanted to be tied to Google’s application, so they made their own version. However, in Apple’s haste to release the latest and greatest map app, they made some unforgivable lapses in navigation, warranting the #ios6apocalypse hashtag. Just a few of the many Apple Maps fails include erased cities, disappearing buildings, flattened landmarks, duplicated islands, warped graphics, and false location data. While an exact financial loss was never attributed to this lapse of judgment, it made customers question whether Apple prioritized their money over their needs as they considered switching to the very competitor Apple was trying to combat.

An Argument for Agile and Continuous Testing

As technology evolves, so do our development methodologies and knowledge of testing practices. Getting stuck in the old age of software development means that crucial processes go ignored.

For example, while a Waterfall methodology used to be the go-to back in the day, high-functioning software teams have realized that following Agile practices better supports the changes made throughout development. Agile’s incremental approach allows more feedback, flexibility, and of course, testing, so that every time a feature, fix, or function is added or changed in code, it’s checked for bugs.

Since teams have adopted Agile and Continuous Integration, and with them continuous testing, software is higher quality at release because bugs and inconsistencies are caught earlier and fixed more easily. In turn, organizations have found that this helps them avoid preventable bugs, save time and money, and retain their customers’ trust.

How to Prevent Your Next Software Bug

While we can’t expect to test everything or spend our entire careers deploying products that are 100% error-free, we can make strides to safeguard software as best we can. Many of the worst bugs we encounter are simple mistakes that could have been caught with a more thorough quality analysis.

The moral of the story is to test your software. If you don’t, it’ll cost you.

For more proof of why software testing is so important, check out Why Testing Matters and what the top QA professionals have to say.

Filed Under: Development Tagged With: agile, software bug, testing

How to Manage a Remote Team of Developers and Testers

July 6, 2017 By Alex McPeak 1 Comment


One major takeaway from the 2017 Stack Overflow survey was that many people who have jobs in QA and Development appreciate flexibility and the ability to work remotely.

This shouldn’t come as a surprise since many software-oriented organizations have teams that work in different offices, cities, countries, and continents. However, when no one is working in the same space and may even be spread out among different time zones, it can raise challenges with communication.

As teams begin to adopt Agile methodologies, Continuous Integration, and DevOps, this aspect of collaboration between individuals as well as different departments becomes increasingly important to a company’s overall success.

Ways Companies Can Help Remote Employees

Look for skills during hiring – Remote working is not a native skill to every employee. In fact, it takes the right kind of person to be successful doing their job remotely every day. You need to find someone who is motivated, disciplined, focused, and communicative, or working remote may be more of a distraction than a perk.

Have daily standups – These can be as simple as having everyone post their top task in Slack when they start every morning, but it’ll help you keep track of what everyone’s working on while helping your workers to prioritize their daily responsibilities.

Hold weekly video conferences – Having a video call with the entire team helps keep everyone on the same page to discuss accomplishments, challenges, completed tasks, and upcoming projects. It’s a good idea to check in at the beginning or end of the week to determine team goals and see where everyone stands.

Check in with individuals – Just as importantly, it’s crucial to speak to team members individually to clarify expectations and requirements for their tasks. While you don’t have the advantage of stopping by their desk to chat and discuss projects, there are many tools (as we’ll discuss later) that assure you have no excuse not to initiate conversation with your team members.

Celebrate achievements – As a manager, it’s important not just to communicate with your team when you’re asking them to do something, but also to recognize accomplishments. Calling people out whether it be in a Slack chat or video conference for things they’re doing well will keep them motivated and driven to contribute to the big picture.

Trust your team – If you hired someone you trust and have seen their good work, trust that they’re managing their time well as a remote employee. Don’t breathe down their neck by abusing tools, watching activity, or constantly checking in for no reason; trust that the independence, freedom, and flexibility you give them put them in the best position to do their job successfully.

Ways Remote Workers Can Stay Productive

Work remote, remotely – Just because you’re working from home doesn’t mean you have to work at home. In fact, staying in the same spot all day will probably make it harder for you to be productive and stay focused. Instead, it’s a good idea to have a few spots you can travel to (with Wi-Fi, of course) like a park, coffee shop, library, or shared workspace where you can get your job done.

Keep consistent hours – One of the great things about working remote is being able to more or less make your own schedule. Not a morning person? Start the day at 11 a.m. instead of 9. Have to pick your kids up from school in the afternoon? Great, just make sure that you’re keeping your hours consistent and communicating them with your co-workers so they can know when to expect you online.

Update your team – You don’t have to save your thoughts for stand-up. If you’re accomplishing things or coming across challenges, share them with the team throughout the day and keep the lines of communication open.

Reach out to management – Similarly, don’t just wait for your boss to tell you what to do over a video chat. If you’re working remote, you have to take ownership of your own productivity. If you need clarification on an assignment or are looking for more work, keep your manager in the loop.

Be part of casual conversation – It’s nice not to be continuously distracted by co-workers throughout the day, but social interaction is an important way to stay happy with your work, bond with colleagues, and give yourself breaks. You don’t have to converse only about technical topics; make sure to keep up light conversation throughout the day. If you see a funny video, post it. If you read an interesting article, share it. There are plenty of ways to keep a friendly office environment without being there in person.

Leverage different resources – As mentioned, remote work is becoming more and more popular, especially among the testing and dev community. This means there are people all over the world that can relate to the good and the bad of working remote. By participating in meetups and online conversations, you’ll probably find how easy it is to find advice about being productive from your peers. Additionally, conferences like Running Remote are full of best practices for remote teams.

Don’t overwork yourself – When you’re working from home, you’re not leaving your desk at the end of the day. Sometimes this can make it hard to stop working, but it’s important to remember you’re going to be much more productive working hard for a full day than overworking yourself to go over time. Additionally, it’s a good idea to make sure you’re getting up from your screen and giving yourself breaks throughout the day.

Tools for Remote Collaboration

  • Sketchboard – Sketchboard is a virtual whiteboard to encourage team collaboration. It was actually created by a software developer, so you know it’s optimized specifically for the people on your team. The tool helps remote workers plan, brainstorm, and share ideas through visuals like sketches, mind maps, and diagrams to foster more communication and inspire fast feedback without having to get together in-person.
  • Spacetime – This one’s a Slack integration because it’s seldom you come across any tech company that doesn’t use Slack these days. In general, Slack is a great messaging platform to talk to remote employees, but the Spacetime integration takes it a step further. It’s made specifically for teams in different countries or time zones. By prompting information about where different people are and translating time zones, Spacetime makes it easier to schedule calls or conferences.
  • Google Drive – Google Drive applications like Docs and Sheets allow everyone on the team to commit to shared documents. Users can comment and edit, so the original owner can see suggested changes and even reply before accepting them. You can see who’s looking at the document in real-time, and all changes are autosaved, so you don’t have to worry about that remote developer or tester with unreliable internet losing important documentation info.
  • Zoom – Zoom uses video chat to allow easy communication between remote employees, with up to 70 people on a call. You can also share your screen, record meetings, integrate with your calendar, schedule recurring meetings, and use it on multiple devices. If anything, it’s better to resolve issues quickly and efficiently over a video conference than to waste time in unproductive meetings all day, and Zoom creates an optimized experience to encourage quick calls.
  • Groove – Groove is a great way to get the whole team involved in the customer support process. This makes it easier to track tickets for bugs with a chat feature, email management, and knowledge base. You can also assign tickets and exchange private notes on them. Groove aims to get rid of the clutter and complexity while streamlining messaging to involve the whole team in personalized customer support, which is especially important with highly technical products and services.
  • Trello – Trello is one of the most popular task management systems for developers and testers. The Kanban style set-up is a familiar layout for developers and testers, and it’s continuously a straightforward way to organize assignments as “To-do,” “Doing,” and “Done.” It’s accessible on different browsers and devices for workers on the go and creates an organized workflow that other team members can access for reference of their own projects.
  • Jell – While it’s important to have regular stand-up with your remote team, taking up too much time defeats the purpose. Jell makes them short, simple, and to the point asking three questions about what team members have accomplished, what they’re doing today, and what challenges they have. Jell requires you to write down important tasks, track successes, and stay focused on meeting daily, quarterly, and yearly goals.
  • TapMyBack – Speaking of calling out accomplishments, TapMyBack is a great way to recognize employees for a job well done when you’re not holding daily, in-person meetings. It’s a fun way for management to recognize their employees, as well as for peers to recognize each other for going above and beyond. Additionally, it keeps teams on track and motivated by showing that everyone is working towards the same mission.

What have you found is an effective method of being productive when working remotely? Share with us in the comments section!

Filed Under: Development Tagged With: communication, remote, testing

Time to Get Excited About SmartBear Connect

July 6, 2017 By Alex McPeak Leave a Comment


SmartBear is excited to be hosting the first-ever SmartBear Connect user conference September 12 – 13, 2017 in Boston. As we work to bring together a robust and global community of development, testing, and technology enthusiasts, we want to make sure we have something to offer for every attendee — this means you, CrossBrowserTesting customers.

We’ll have a dedicated track for discussing all things CrossBrowserTesting where you’ll have the chance to learn new testing strategies, mingle with the SmartBear team, and network with other CrossBrowserTesting users and industry professionals. Additionally, you’ll be able to participate in:

  • Interactive product training classes
  • Sessions on the latest trends in Selenium, parallel testing, exploratory testing, and more
  • Unique insights and stories from CrossBrowserTesting customers

There will be two jam-packed days for discussing best practices and actionable insights to bring back with you. The team has been busy building the agendas for SmartBear Connect, and we wanted to share the schedule for the CrossBrowserTesting track on Day One.

Day One – Product Training Workshops & Certification

CrossBrowserTesting Speaker: Daniel Giordano

7:30 – 9:00 Registration / Breakfast
9:00 – 9:15 Welcome to SmartBear Connect 2017 – Justin Teague, CEO, SmartBear
9:15 – 9:45 Welcome to CrossBrowserTesting
9:45 – 10:00 Break I
10:00 – 11:00 Exploratory Testing
11:00 – 12:00 Introduction to Selenium
12:00 – 1:00 Lunch
1:00 – 2:00 Debugging JavaScript in Browsers
2:00 – 3:00 Running Selenium in the Cloud
3:00 – 3:30 Break II
3:30 – 4:30 Running Tests in Parallel
4:30 – 5:30 Best Practices for CrossBrowserTesting
5:30 – 7:00 Welcome Party & Customer Awards

Day Two – Insights from Customers & Industry Experts

Day two will feature unique insights from customers and industry experts. SmartBear Connect will offer three tracks – API Design, API Quality, and Software Testing. CrossBrowserTesting customers are encouraged to jump between tracks but will most likely enjoy the sessions included in the Software Testing track. Day two will conclude with a “SmartBear Product Presentations, Best Practices, and Road Map” session from CTO and creator of SoapUI, Ole Lensmar. To learn more about what’s happening during the other sessions, you can check out the full schedule here.

7:00 – 8:15 Breakfast
8:15 – 8:30 “Welcome to Day Two of SmartBear Connect” – Bryan Semple, CMO, SmartBear
8:30 – 9:30 Keynote – To Be Announced
9:30 – 10:15 “Lessons Learned: The Challenges and Successes of Integrating Automated Testing into Existing Development Projects” – Robert Martin, QA Engineer, Select Medical
10:15 – 11:00 Bank of America Customer Presentation
11:00 – 11:15 Break I
11:15 – 12:00 Panel “Automation Frameworks with TestComplete.” Everyone has their own style when it comes to building a Test Automation Framework. In this panel, TestComplete customers will discuss their own best practices when building a good automation framework.
12:00 – 12:45 “Why, When, What & How To Automate” – Carson Underwood, QA Automation Engineer at O’Reilly Auto Parts
12:45 – 1:30 Lunch
1:30 – 2:15 “Moving From HP ALM to SmartBear QAComplete – Why and How” – Andy Lachapelle & Rich Meskill from a large sports and entertainment media company
2:15 – 3:00 “Doing the Impossible: Implementing new Test Complete Framework for Windows Application” – Reginald Moore, Manager, Software QA, LifeWatch Services, Inc
3:00 – 3:30 Break II
3:30 – 5:00 “SmartBear Product Presentations, Best Practices, and Road Map” – Ole Lensmar, CTO and Creator of SoapUI, SmartBear

If you’re already registered for SmartBear Connect, we look forward to meeting you in September! If you haven’t bought your ticket yet, there’s still time to convince your boss to send you to this can’t-miss opportunity to learn everything there is to know from industry experts, CrossBrowserTesting users, and the rest of the SmartBear community.


Filed Under: Events Tagged With: conference, SmartBear, testing, training
