CrossBrowserTesting.com

A Design, Development, Testing Blog

Microsoft Releases its New Browser, ChrEdge!

January 17, 2020 By Nick Brown

What does Microsoft’s new Chromium-based Edge browser mean?

Microsoft recently released its new Chromium-based version of Edge, which many, like us, have taken to calling ChrEdge! The stable release was made available for direct download early on January 15th, but Microsoft has indicated the rollout will happen in stages and continue over the next few months.

Why did Microsoft make this switch?

There are a number of reasons for Microsoft to make the switch. A big one is that Edge only ran on Windows 10, not on older versions of the OS or on Macs. Microsoft's browser market share was fragmented by these incompatibilities. The new version of Edge is built on the Chromium open-source engine, which of course means they've abandoned their proprietary EdgeHTML rendering engine in favor of one with far greater market share. So, they are the same as Chrome now, right? Not so fast…

While the browser is built on Chromium, Microsoft has made a lot of changes from that base code. Earlier this year, they posted a list of over 50 services that were either disabled or replaced from the Chromium base. Some, like Safe Browsing, were replaced with Microsoft equivalents such as Windows Defender SmartScreen, so user experiences will differ between the two browsers.

What are the biggest differences between Edge and ChrEdge?

  • The proprietary EdgeHTML rendering engine was replaced with the market-leading Chromium engine.
  • Compatibility with Chrome extensions, making it easier for users to switch from Chrome to Edge and keep their tools and data.
  • New tracking prevention enabled by default.
  • Progressive Web App support
  • Collections support

Why Should you Care?

It would be tempting to think that since the new Edge is based on Chromium, if it works on Chrome, it will work on Edge. For the most part, that is true. But we don't do cross-browser testing because things are close or should work; we do it to ensure everything works 100% as expected. Given the number of services Microsoft changed, there will almost certainly be situations where the two browsers behave differently. Add to that an Edge user base growing to include users on older Windows versions as well as Macs, and the market share becomes significantly larger. Only time will tell, but we believe both of these justify adding the new Edge to your cross-browser testing program.

Start testing your website on ChrEdge with CrossBrowserTesting. One click away!

Filed Under: Browsers Tagged With: browser update, browsers, Microsoft, Microsoft Edge, new browser

September Product Update

September 10, 2019 By Joan Liu

Did you hear about our Summer Blockbuster? SmartBear has acquired Bitbar, expanding our breadth of Native Mobile App testing innovation. Bitbar is an established best-in-class product, and is a perfect complement to CrossBrowserTesting—so welcome to the family, Bitbar! Though we’ll pass on the bunk beds and the boat crashing.

Full-Page Screenshot Options

A few months back, we updated our screenshots to use CSS translate instead of our scroll-and-stitch method. However, CSS translate does not work with lazy-loading content, which requires a scroll to trigger new content to load. If your screenshots are not scrolling and content is not loading, use our Standard Scroll Method under the Advanced Options.

Archiving Test Results

We had some customers reach out about archiving test results. We have added two new options for archiving automated tests. You can now archive individual results without navigating away from all the results, and you can archive an entire build’s tests.

New Browsers and Operating Systems

We released Firefox 68, Chrome 76, and Opera 62 on OSX. We also released the Samsung Galaxy S9s with 4 different browsers.

Bug of the Month

A user with an automated test was trying to launch Twitter links and getting inconsistent test results. The expectation was that links would launch in-browser, but some launched in the Twitter desktop app. We hadn't installed Twitter on our desktops, so we thought we had a cleaning issue.

We finally determined that last April, Microsoft had been experimenting with auto-installing the Twitter app and associating it with twitter.com. Not all of our machines had it, as we had a variety of Windows 10 machines acquired at different times. This caused our user's flaky tests. We fixed our machines so they never launch the Twitter app, and we uninstall the app whenever we find it.

Try out all the new improvements for yourself. Log in now to get started. 

Filed Under: Product Update Tagged With: browsers, product update, screenshots, testing

Testing Our Screenshot Service Internally

April 11, 2019 By Luke Kende

How do we ensure our Screenshot service works for each browser, as well as special use cases, as we continuously integrate updates to VMs, devices, and infrastructure?

Problem:

CrossBrowserTesting's Screenshot service has 1500+ browser combinations across desktops and mobile devices. Not to mention the permutations: Single Pages, Long Pages, Basic Auth, Login Profiles, Selenium Scripts, and Local Connections.

We need automated tests to check our services internally and reports to find errors.

Approach:

Use our Screenshots API

We aren't testing the UI, so we don't need to automate interactions via Selenium. We can simply use our Screenshots API to fire off tests to each browser. And, since each test is marked as successful only if it finishes generating the windowed and full-page images, we can use this data for our pass/fail reporting.

See our full Screenshots API
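
To make that concrete, here is a minimal sketch of firing off a single screenshot test. The endpoint path, parameter names, and credentials are assumptions for illustration rather than the documented contract, so verify them against the API docs:

import requests

# Assumed endpoint and hypothetical credentials -- verify against the
# Screenshots API docs before relying on these exact names.
API = "https://crossbrowsertesting.com/api/v3/screenshots"
AUTH = ("you@example.com", "your-authkey")

# Fire off one screenshot test. A test only counts as successful once
# both the windowed and full-page images finish generating, so the
# returned record doubles as our pass/fail data.
resp = requests.post(API, auth=AUTH, data={
    "url": "https://example.com/page-under-test",
    "browsers": "Win10|Chrome70|1366x768",  # an API name, as used in the queue below
})
resp.raise_for_status()
print(resp.json())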

Create a Special Account

We set up a special account that is allowed a large number of automated minutes and parallel tests, so our own plan limitations can be bypassed. This also gives us the ability to focus reporting on this particular user, since it will only run these automated screenshot tests.

Implement a persistent queue

It will take time to process all 1500+ browsers, and we need to ensure we test each one. We need full coverage of browsers even if part of the system goes down or the process dies while running.

A queue is great for knowing exactly what browser should be tested next and what browsers remain to be tested. However, an in-memory queue is volatile and restarting the process would mean you’d start over, retesting browsers already tested.  We need it to pick up where it left off in the event of a system failure, or if we purposely stop it for system maintenance. 

There are many ways we could create a queue, but the simplest by far is a plain old text file. In our script, we make a call to our API to get a list of all browsers, loop over the results, and insert the API name for each OS/browser combination we want to test on its own line in the file. (We chose the "API names" as the identifier because our Screenshots API consumes them in this format. That keeps it simple, since we don't have to transform the data again at test time.)

Our script generates a comprehensive list, but here are a couple of API-name examples:

Win10|Chrome70|1366x768
Pixel3-And90|MblChrome72|1820x2160
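
As a rough sketch of building that queue file (the browsers-listing endpoint, api_name field, and file path below are guesses for illustration; only the plain-text-file queue comes from our actual setup):

import requests

API_BASE = "https://crossbrowsertesting.com/api/v3"   # assumed base URL
AUTH = ("you@example.com", "your-authkey")            # hypothetical credentials
QUEUE_FILE = "/opt/cbt/QA-ScreenShotTest/browser_queue.txt"  # hypothetical path

# Ask the API for every OS/browser combination, then write one API name
# per line -- the persistent queue is just a plain text file.
browsers = requests.get(API_BASE + "/screenshots/browsers", auth=AUTH).json()
with open(QUEUE_FILE, "w") as f:
    for combo in browsers:
        f.write(combo["api_name"] + "\n")  # e.g. Win10|Chrome70|1366x768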

Moderate test runs over time

We didn't want to inundate the system or hog resources from customers, so we had to moderate how often we run screenshot tests. Having the script run one browser from the queue and then quit enabled us to simply schedule a task that fires off the script at a preferred rate.

The obvious answer for scheduling tasks is to create a crontab, but how often should this run?

A screenshot test cannot run longer than 4 minutes due to system limitations. At worst, a configuration may be queued for a couple of minutes before a browser is available, so running a new screenshot test for a browser every 6 minutes made sense. At any given time, we only have one browser running 6 test cases: Single Page, Long Page, Basic Auth, Login Profile, Selenium Script, and a Local Connection.

*/6 * * * * /opt/cbt/QA-ScreenShotTest/runScreenshotTestCases.sh

Putting It All Together

  1. A cronjob runs our script every 6 minutes.
  2. The script checks the persistent queue file: If it doesn’t have entries or doesn’t exist, it gets a list of all browsers and builds the queue file anew.
  3. The script pulls the first browser from the queue and saves the file back with that browser removed.
  4. The script launches several test cases against the browser and then exits (sketched below).
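
A minimal sketch of that per-run script, with stubs standing in for the real API calls (the file path and helper names are hypothetical; the dequeue mechanics follow the steps above):

import os

QUEUE_FILE = "/opt/cbt/QA-ScreenShotTest/browser_queue.txt"  # hypothetical path

def rebuild_queue():
    # Stub: fetch the full browser list from the API and write one API
    # name per line, as sketched earlier.
    with open(QUEUE_FILE, "w") as f:
        f.write("Win10|Chrome70|1366x768\n")

def run_test_cases(browser):
    # Stub: fire the six test cases (Single Page, Long Page, Basic Auth,
    # Login Profile, Selenium Script, Local Connection) via the API.
    print("testing", browser)

# Step 2: rebuild the queue file if it is missing or empty.
if not os.path.exists(QUEUE_FILE) or os.path.getsize(QUEUE_FILE) == 0:
    rebuild_queue()

with open(QUEUE_FILE) as f:
    queue = [line.strip() for line in f if line.strip()]

if queue:
    # Step 3: pull the first browser and persist the shortened queue
    # immediately, so a crash mid-run skips at most this one browser.
    browser, rest = queue[0], queue[1:]
    with open(QUEUE_FILE, "w") as f:
        f.write("\n".join(rest) + "\n")
    # Step 4: run the test cases against this browser, then exit; cron
    # fires the script again in 6 minutes.
    run_test_cases(browser)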

This approach worked well because:

  • If the script dies while processing, only one browser is skipped.
  • If part of the system gets shut down, the script can pick up where it left off.
  • We can control how often each browser is tested.

After verifying the script worked as intended, all we had to do was set the process in motion by turning on the cronjob.

Reports

We have been using Grafana for insight into data from other systems, so it made sense to reuse it here. We only needed to hook our queries up to Grafana's GUI, tweak the data we want to focus on, and voila: we now have visibility into what is running well versus what needs attention.

It's hard to get a system working perfectly on the first run, so the report results do require continual tweaking to get more granular information, but this has enabled us to find issues and patterns of failures so they can be addressed.

Filed Under: Visual Testing Tagged With: browsers, screenshots

Why Your Website Looks So Bad on Internet Explorer

October 2, 2018 By Alex McPeak

Just the mention of Internet Explorer is sure to bring on a migraine for developers and testers. But why is IE so often the browser we love to hate?

As we know, your web page will have slight differences across browsers because of the different ways they render HTML, CSS, and JavaScript. Oftentimes these differences are minimal, but they can also lead to major functional and visual differences that affect how a user interacts with the web application, which is why cross-browser testing is so important.

This becomes even more evident with Internet Explorer, where those differences are often more obvious. The problem is that despite all of IE's issues, it continues to hold a share of the browser market, which is why developers and testers have to keep it in mind when building and releasing a website.

So why is Internet Explorer so problematic, and if it's so bad, why do people use it at all? Let's take a look back at some browser history to get a better understanding of where the issues first started.

Blast to the Past

In the early years of the web, Netscape, which was largely considered the first major internet browser, ran the show. It didn’t have much competition until Microsoft came out with Internet Explorer in 1995.

This was the first time there was real competition between two browsers, a.k.a. the browser wars. Not only was Internet Explorer free and automatically installed on Windows, but Microsoft's advantageous financial situation also allowed it to keep implementing new capabilities, such as scripting support and the first CSS implementation.

This helped IE inch ahead in the race against Netscape, and in 2003, Internet Explorer became the most popular web browser, capturing about 96 percent of the market.

However, we know the browser wars don’t end there. Following Internet Explorer’s takeover, Opera made its way into the picture. Then Safari, Firefox, and Chrome.

Where the problem truly lies, however, is that Microsoft didn't abide by the guidelines of the World Wide Web Consortium (W3C), which gives browsers specific HTML and DOM specifications to follow, and instead made its own decisions about features and plugins.

So while every other browser was following certain rules and regulations, Internet Explorer was a wild card. With users split between those newer, W3C-compliant browsers and the non-compliant Internet Explorer, developers attempting to write standards-compliant applications kept running into issues with IE.

Meanwhile, Internet Explorer had already grown a large user base and used competitive advertising to drive users away from other browsers. While new browsers created competition for IE, some people stayed with the browser because it was the default on Windows and they were already using it.

User interest began to noticeably wane when IE stopped releasing new developments and updates. New browsers focused on speed, security, and, to developers' delight, standards compliance. They released regular updates for continuous improvement, and by the time Internet Explorer tried to catch up, it was already too far behind.

IE6 went years without a meaningful update, and as the Android and iOS mobile operating systems emerged, Internet Explorer was not compatible with them.

Not to mention, Internet Explorer was brimming with security issues, criticized for using third-party technology, hampered by a lack of debugging tools, slow to update, slow to perform, and crawling with software bugs.

Eventually, in 2015, Microsoft ended development of Internet Explorer and replaced it with Microsoft Edge as the default Windows browser.

Back to the Future

Over time, Internet Explorer has made great strides in some of the areas users scrutinized it most for. However, because so much of the underlying code still does not follow the W3C guidelines, it remains a problematic browser for developers and testers.

As new versions of Chrome, Firefox, and Safari are continuously released, it's important that we test web applications in the newest versions as well as previous ones. Though Internet Explorer is no longer in active development, it's still important to test on its versions since it retains a user base.

Moreover, because it is non-compliant, it remains a problematic browser, which means it's less likely to render your website the way you intended it to look and work, and more likely to crash it. This makes it high-risk, and savvy software professionals know the importance of including it in browser testing. In fact, our customers' second most commonly tested desktop browser is IE 11, and many of our customers go back to IE 9 or further.

Of course, there are other methods for deciding which browsers to test and whether Internet Explorer should be one of them. But one thing is to be expected: looking at your website in Internet Explorer for the first time may make you doubt your design and development skills. Just know you're not alone, and make sure to check your website in IE from time to time.

Filed Under: Browsers Tagged With: browsers, internet explorer, web design, web development, web testing

What Our Customers’ Top 100 Tested Browser & OS Combinations of 2018 Tell Us

September 18, 2018 By Alex McPeak

What are the top browsers, operating systems, and devices? We looked at data on the top configurations our CrossBrowserTesting customers tested to give you better insight.

Looking at the millions of browser and operating system configurations our customers have tested in 2018, 100 reign supreme. These results come from a mix of people in different roles, industries, and company sizes, showing us the most commonly tested configurations out of 1500+ choices.

If you’re trying to decide which browsers and devices to test, this data should give you more to consider when it comes time to pick. Whether you’re testing in 5 browsers or 500, keeping in mind some of the most commonly tested configurations will help you outline a strategy for testing.

Top Takeaways

The top 100 combinations make up 68% of total tests – One hundred configurations may seem like a lot, but there's more where that came from. We have thousands of browser/OS/device combos that extend well beyond this view. However, the top 100 does make up the majority of our customers' browser testing, so it gives us a good benchmark of common trends and patterns.

The top 100 browsers include Chrome, Safari, IE, Edge, Firefox, and Opera – These are the 6 major browsers, so it's not surprising that they're the most common choices. It's important to note, however, that our customers are not only testing on the latest versions of these browsers. By including older browser versions in your testing, you get a more complete understanding of whether your application works both for users who are quick to update to the latest browser and for those who put it off and are a few versions back.

Only Chrome and Safari are tested on mobile OSs – Unsurprisingly, people most often test Safari on iOS and Chrome on Android. However, in this data set, there are ranges of browser versions and device models. If you are keeping your mobile testing limited to mobile Chrome and Safari browsers, make sure to keep older versions and diversified devices in mind. Keep in mind that the mobile browser share of your users may differ depending on where they are; internationally, for example, UC Browser and Opera Mini are more popular on mobile. It's always good to use an analytics tool to get an accurate representation of your user base.

The #1 most tested browser/OS is Safari 11 on Mac OSX 10.13 – While Safari doesn't have overwhelming browser share, this Mac OSX and Safari combo is the most commonly tested configuration. Most likely, this means a lot of testers aren't on a Mac running Safari themselves but realize it's a common configuration among their users.

The most tested browser is Chrome – Considering Chrome is the most popular browser, this makes a lot of sense. The most commonly tested version is actually an older one, with Chrome 43 hitting the #2 spot, but throughout the top 100 most people are testing Chrome 63 or higher. It's evident that people are testing on both older and recent Chrome versions, which is good for a browser this popular.

For mobile browsers, people are testing the most in Safari 8 – 11 – Again, we're seeing that not only are people testing on a range of mobile devices, they're also being mindful of testing older versions of those browsers. For mobile Chrome, we see users go back as far as version 58. Especially with mobile devices, not everyone updates as soon as a new version is available, and it's important to keep this in mind with mobile testing.

The most popular mobile browser is Safari 10 on iPhone 7 – Here's another data point that shouldn't surprise too many people. This configuration makes it into the top 10 most commonly tested browsers. Since this is one of the most common mobile devices, it goes to show that people are aware of these trends and include them in their testing strategy.

The second most tested desktop browser after Chrome is Internet Explorer, with the most popular version being IE11 and usage going back to IE9 – Internet Explorer is a consistent problem for developers and testers alike. Although it may not be the most widely used browser, the fact that it is highly problematic means it deserves a spot on your browser roster. It's apparent that our customers know this, too.

The most popular mobile OS is iOS, with the iPhone 7 being the most popular device, followed by the Nexus 6P on Android 7.0 – Most people are testing iPhones, and the most recent models at that. This is because, unlike the fragmented Android market, most iPhone users will be on the most recent devices and operating systems. This is ideal because it means you can cover a larger portion of your user base without testing on too many iOS devices. However, as you dive into mobile testing, make sure you include Android phones too. Although their users are more diversified, it's good to get a baseline understanding of the Android user experience.

While it's popular for mobile, the Safari desktop browser only appears 3 times in the top 100 – Safari is predominantly tested on mobile devices and not often on desktop Mac OSX, appearing only 3 times in the top 100 even though one of those combinations holds the #1 spot. Although Safari is the default browsing option on Mac as it is on iOS, it seems that on Mac OS people tend to download Chrome and use it more often. If you are testing on Mac, it's probably best to test in both Safari and Chrome, since users are on both.

Twenty-six of the top 100 configurations are mobile, and 5 are tablets – When it comes to mobile testing, it can be hard to know how many devices to add to your mix, but this gives us a pretty good idea: about 25% should be mobile. This, of course, is not an exact number, and it should be based more heavily on your users and application (if you see high mobile traffic, it should be higher, and vice versa), but it gives you a good starting point. Use this as a benchmark, and make sure to include iOS and Android with different browsers and models, as well as a few tablets, which will be telling when it comes to responsive design and verifying the way your application looks on different screen sizes.

Customers are testing as far back as Windows 7 and Mac OSX 10.11 – Not only are our customers testing older browser versions, they're also testing older OS versions. This is important because while new browsers are coming out all the time, there are also regular OS updates that not everyone will install. Keeping these older operating systems in mind will give you more insightful and accurate browser testing.

Conclusion

Our customers seem to have a pretty good idea of what they’re doing when it comes to browser testing. From the Top 100 data, we can see that users are testing the most popular browsers, operating systems, and devices to cover large market shares, but they’re also testing older versions to account for people that may not update right away.

As you approach testing in your organization, keeping these takeaways in mind will help you plan for the most significant coverage. This way, whether users visit your website on IE or Chrome, iPhone or Android, Mac or Windows, you can be prepared to provide the best experience possible to everyone.

Filed Under: Browsers Tagged With: browser testing, browsers, data, devices, operating systems

CrossBrowserTesting Product Update: What’s New and What’s Coming Up

August 23, 2018 By Alex McPeak

If you’re a CrossBrowserTesting user, you may have noticed a few new changes. Behind the scenes, our product team is always working hard to make the experience better for you. That’s why we want to share some of the exciting new features, releases, and improvements we’ve been working on this year, as well as some exciting things to come.

As always, feel free to get in contact with us and tell us what you’d like added or what integration would make your life easier. Who knows — you may just be the inspiration for the next product update.

15 Browser Releases

Did you know there have been 15 major browser releases in 2018 alone? New browser versions are coming out all the time, and every change could affect the way your web application works.

We’re dedicated to having the largest selection of browsers, which means we prioritize getting new browser releases to you within 24 hours after their rollout.

Check out our latest additions, and maybe spin up a test or two to make sure your application is working on all the latest versions.

  • Opera 50 – 54
  • Firefox 58 – 61
  • Chrome 64 – 68
  • Edge 17

3 New Real Devices

Additionally, we want to make sure our customers have access to popular mobile devices on the market, which is why we've made three new real devices available for testing.

Because these aren't simulators or emulators, you're getting pinpoint accuracy in your testing, so your results are the same as if you were holding the device in your hand.

You can now find the following real devices for testing, plus 3 major OS updates:

  • Samsung Galaxy S7 & S8
  • iPhone 8

Record & Replay

We're so excited to bring a Record & Replay option to CrossBrowserTesting so that teams of all technical abilities can get started with automation, no coding knowledge needed.

If you haven't already heard, Record & Replay is our click-to-script feature that allows you to record the steps of your manual tests once and replay them on a selection of browsers. This not only makes it much faster and easier to perform a test, it eliminates the human error that arises when you manually repeat the same test over and over.

And that’s not all — parallel testing lets you select your test suites and play them back at the same time against different browsers, so you can run your tests in Safari, Chrome, and Firefox all at once. Talk about a time saver.

If you need more reasons to try Record & Replay, here are seven of them.

WebRTC

We've switched to a new type of streaming, WebRTC, to reduce lag and improve the accuracy of your Live Tests. WebRTC is an open-source project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities and has become a standard in the last couple of years. CrossBrowserTesting has recently upgraded to it, which means you should have a better experience when testing applications that include video and other media. You'll literally be able to see the difference every time you test.

Other Updates

In addition to some of these more noticeable changes, you’ll see a few other updates that should help you out in your day-to-day.

  • Audio in Live Tests – You can now test sites that have videos and music to know whether the sound is working.
  • File Uploads on Mobile – Now you can test upload functionality on mobile devices.
  • New Local Connection Chrome Extension – If you’re using local connections, make sure to update.
  • Record & Replay Selector – You can change the selector for Record & Replay, which lets you find an element more reliably during playback.

Performance Improvements

Browsers and devices won’t do you much good if the experience inside the CrossBrowserTesting app isn’t up to par. In addition to new features, the entire CrossBrowserTesting experience is noticeably faster.

Here are a few places you might notice better performance:

  • Recorder launch times reduced by 15 seconds
  • Mac launch times 4 seconds faster
  • 30-second iOS Appium launch times
  • Significantly improved Android screenshot success rate

Coming Up

We’re always thinking about what’s next. We have a few browser and OS updates in the pipeline, as well as real devices like the Galaxy S9, iPhone X, and two TBD tablets. Additionally, we have a few major initiatives coming up as we plan out upcoming features and functions.

Again, make sure to let us know what devices you want to see or what integrations would make your life easier and help streamline your workflow, so we can take them into consideration.

You’ll see:

  • 5 Browser Releases
  • 4 New Real Devices
  • 4 Major OS Updates

Major initiatives include:

  • Laying down foundation work to get the latest devices out in a more timely manner and available for Live, Screenshots, and Selenium tests.
  • Queueing Selenium Tests for when you run out of parallel execution time.
  • Faster key and mouse input speed for laggy Android and iOS devices.
  • Continuing to work on Record & Replay, perhaps bringing it to mobile or adding screenshots. Record & Replay is still in beta, so we're looking for suggestions and exploring new ways to improve.

To watch the full webinar that includes all the updates from the first half of the year with Product Manager Joan Liu, check out the recording.

Filed Under: Product Update Tagged With: browsers, devices, product update, web rtc

The Div That Looks Different on Every Browser

July 12, 2018 By Alex McPeak

If you’ve been around the internet lately, you may have heard of the div that looks different in every browser.

But if you haven't, or you're unsure what it's all about, here's the situation:

A guy named Martijn Cuppens coded a div in CSS that makes a fairly simple green square. However, when you look at it across different browsers, it transforms completely into another shape.

The div that looks different in every browser https://t.co/hXmxoLA8fW pic.twitter.com/DTyOKxjhSG

— Martijn Cuppens (@Martijn_Cuppens) July 6, 2018

In fact, almost every browser gives you something different from the last. The shapes range from a square with a hole in the middle, to two green squares, to a solid square, to different versions of the original shape. The browser even affects color properties, changing the hues and shades of green depending on Firefox, Chrome, IE, Edge, etc. You can see a few versions in a CrossBrowserTesting screenshot test.

This phenomenon has gone a bit viral on Twitter, Reddit, and HackerNews with designers and devs scratching their heads and examining what went wrong.

If you look in CodePen, the HTML is pretty basic, but the CSS is where things get trickier. If you have any experience with CSS, you know this is where browser rendering can go haywire.

Most devs are aware of these idiosyncrasies when developing for cross-browser compatibility. We can see from Martijn's code that he uses outline-offset, which seems to be what causes the issue.

Since each browser renders the offset value differently, it causes the shape to morph according to those varying rules.

“I wanted to know if there is a way to inset outlines (like box-shadow). Turned out ‘inset’ was a outline-style and a negative outline-offset can be used to inset outlines. The combination of these properties led to this result,” said Martijn in a Twitter thread.

So what does this teach us? For one thing, it’s probably a good idea to avoid outline-offset. But ultimately, it reinforces the importance of cross-browser testing your code.

Martijn's div gives a visual representation of why cross-browser testing matters, and what can happen when your code isn't cross-compatible. We can plainly see that there are many ways for different browsers to interpret a few lines of code.

His example provided a fun insight into how CSS can go wrong, but you wouldn’t want this happening to your web application with elements looking different in every browser.

CSS can be complex, but by testing your site in a selection of browsers, you can avoid the surprise that comes when you haven’t checked how your site looks in Safari.

Filed Under: Design Tagged With: browsers, cross browser compatibility, css, design

Join Our Upcoming Webinar on Creating a Cross-Compatible WordPress Website

May 17, 2018 By Alex McPeak

Did you know WordPress powers over 30 percent of websites on the internet? People love WordPress because it’s easy to set up and has thousands of templates and plugins that reduce development and deployment costs, making it the most popular Content Management System in the world.

Since about 75 million websites use it, it's no surprise that WordPress developers are in such high demand. However, with thousands of browser and device combinations, it takes more than superstar coding and design skills to be successful; you also have to be an expert in cross-compatibility. Otherwise, you risk losing users and clients.

But how exactly do you make a WordPress site that’s cross-compatible? Amidst the proliferation of devices and screen sizes, there are thousands of ways that someone could visit your website.

As a copywriter, developer, and consultant for all things WordPress, Jenni McKinnon knows it's not just about creating a beautiful website, but also about creating a cross-compatible one. This means having a website that works and looks great no matter where your users are coming from.

Join CrossBrowserTesting and Jenni McKinnon in our webinar on Tuesday, May 29 as we discuss:

  • What it means to be cross-compatible
  • Determining how your users are visiting your site
  • Techniques for building a cross-compatible WordPress site
  • What to look for when testing your site and how to do it

Save Your Seat

Filed Under: Development Tagged With: browsers, ecommerce, wordpress, wordpress development

From Bad to Brutal: Performing a Risk Analysis of Your Web Application

May 15, 2018 By Alex McPeak

Test automation is great, but even with the ability to cover more parts of your application in less time, testing everything is impossible. This is why you have to perform a risk analysis of your web application in order to decide when certain test cases get performed and how often.

There are many levels to creating a risk analysis, and deciding which tests go to the top level depends largely on your business needs and the state of your application. However, these five considerations will give you a good place to start when deciding your most important tests.

  1. Most Popular and Problematic Browsers – Take a look at Google Analytics or Mixpanel to see which browsers, operating systems, and devices are being used most often to access your web page. The number of configurations you include in your testing will depend on how fragmented your user base is. You want to ensure basic functionality for at least 80 percent of your users, so if that 80 percent is spread across five browsers, you can get away with including those five in your first tier of testing. However, if it's split between ten configurations, you'll want to allow more time for cross-browser testing. If you do find browser use is highly fragmented, parallel testing is a good way to check off your boxes in multiple configurations at once. But we're talking about risk here, so you also want to think about which configurations cause the most problems for you. IE 8 always breaking? Make sure to include it in your first tier.
  2. Most Critical Unit and Integration Tests – Which tests will have the biggest impact on your bottom line if broken? For example, if you're running an e-commerce website, the ability to enter payment details and submit them for processing is clearly important for allowing users to make a purchase. If the application fails when they try to submit this information, they won't be giving you their business and will probably go to a competitor. On the other hand, this will be different if you're working in healthcare, finance, or government, for example. Identify the unit and integration tests that make up the foundation of your business and your application, and rate the associated risk on a scale from minimal to critical to determine which belong at the top of your risk analysis (see the scoring sketch after this list).
  3. Most Common Landing Pages – Ask any marketer — a big part of the success of any web application is a user’s first interaction with it. This means that you should be looking at the pages with the heaviest traffic to determine which should be tested most often in regression. Choose five to ten pages that users land on most often. Whether that includes the homepage, pricing page, or the “about us” page, you want to make sure these are consistently up to par with a healthy mix of functional testing and visual testing.
  4. Security and Privacy – Ensuring security in any instances where personal information is being requested from your users should be of top importance in your risk analysis. Take note of any times where payment is received, passwords are entered, contact information is input, or confidential information is requested. In addition, any part of the application that may have an impact on the safety or financial status of your users should always be a priority to ensure that their data is uncompromised. Just ask the companies that have fallen victim to some of the worst software bugs in history: once you lose user trust, it's hard to get it back.
  5. Extended Downtime – Which functionalities will take the longest for your team to get up and running again? While your users should usually be your first priority, sometimes risk goes beyond their inconvenience when an issue puts the entire team on a backlog. If you already know something won't be an easy fix and could affect application downtime for days and team productivity for weeks, it should probably be high on your list when thinking of risk. Where have you had issues in the past? History repeats itself, in life and in your application. Log and report areas that have a high probability of failure so you can prepare your team and avoid excessive repair time.
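
To make the prioritization concrete, here is a minimal scoring sketch; the test names, scores, and 1-5 scale are made up for illustration, but a simple likelihood-times-impact product is often enough to order your tiers:

# Minimal risk-scoring sketch: rank test cases by likelihood x impact.
# The tests and scores below are illustrative, not prescriptive.
tests = [
    # (name, likelihood of failure 1-5, business impact 1-5)
    ("checkout: submit payment details", 3, 5),
    ("homepage renders on IE 8", 5, 3),
    ("about-us page layout", 2, 1),
    ("login with saved credentials", 3, 4),
]

# Higher score = earlier tier in the risk analysis.
for name, likelihood, impact in sorted(tests, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{likelihood * impact:>2}  {name}")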

You have to be familiar with the risk associated with different functions and features in an application to know what should be tested first, what should be tested again, and what you can deprioritize.

While you can never guarantee that code will be deployed with zero bugs, you can develop a strategy to ensure that any issues that do slip through the cracks won’t be detrimental to your application.

Filed Under: Test Management Tagged With: browsers, risk analysis, Security, test automation

Using Mixpanel for Data-Driven Cross-Browser Testing

March 15, 2018 By Alex McPeak

Most testers know there’s much more to testing than going into an application blindly and clicking around. In fact, planning test cases and collecting data beforehand is the best way to understand who you’re testing for, what you should test, and why.

When it comes to browser testing, this data becomes even more valuable. For example, what good is it to waste time testing Opera if none of your traffic is visiting your web app through that browser?

While there are multiple ways to decide which browsers and devices to test, you’ll find the most accurate is through an analytics tool. We’ve already explored how to find this data in Google Analytics, so let’s see how to find it in Mixpanel.

  1. Go to “Analysis” and then “Segmentation”, both of which are at the top of your dashboard.
  2. Go to the first drop-down menu you see on the top left and pick an event, which is an action correlated with a page on your web application. You probably want to pick a page that’s either high converting or high traffic to get the best glimpse of your customer data. For example, it might be a good idea to pick your homepage because that’s probably one of the first pages people see when they come to your website. You might also want to look at a few of these events to get a more accurate glimpse of how the data changes for different user journeys.
  3. For the second drop-down menu, you want to pick a browser. As you can see, this gives you a good look at which desktop and mobile browsers hold the majority of your users. From our data, we can see that we'd want to focus on Chrome, Firefox, Safari, Mobile Safari, Internet Explorer, and Microsoft Edge. You can refine these results by time duration and view them as a line graph, bar graph, or pie chart.
  4. You could stop there, but why not get as detailed as possible? Add a segment by another dimension and set it to Operating System. This will tell you which configurations your users are on, so you can get more accurate information about which machines you should be testing. Looking at our example, you can see we would want to specifically test Chrome on Windows, Mac, Chrome OS, Android, and Linux, as well as Firefox on Windows and Safari on Mac. (A scripted version of this browser breakdown is sketched after this list.)
  5. Another data point you may want to look at is devices. Mobile testing is taking center stage, and you want to have an idea of which mobile devices people are on. By picking “Device” as your main dimension, Mixpanel will give you a good idea of what you should test for, whether it be Android, iPhone, iPad, Windows Phone, or BlackBerry. Once you see how many users are on each device, you can decide which you want to test for.
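
If you'd rather pull these numbers programmatically, the sketch below queries Mixpanel's segmentation endpoint; the URL, parameters, and response shape follow our understanding of Mixpanel's export API, but treat them (and the credentials) as assumptions to verify against Mixpanel's docs:

import requests

# Assumed endpoint and parameter names -- verify against Mixpanel's docs.
URL = "https://mixpanel.com/api/2.0/segmentation"
API_SECRET = "your-mixpanel-api-secret"  # hypothetical credential

resp = requests.get(URL, auth=(API_SECRET, ""), params={
    "event": "Homepage Loaded",        # a high-traffic event, as in step 2
    "from_date": "2018-02-01",
    "to_date": "2018-03-01",
    "on": 'properties["$browser"]',    # segment by browser, as in step 3
})
resp.raise_for_status()

# Sum each browser's daily counts and sort to see which dominate.
values = resp.json()["data"]["values"]
for browser, daily in sorted(values.items(), key=lambda kv: -sum(kv[1].values())):
    print(sum(daily.values()), browser)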

Depending on what resources you have at hand (an in-house device lab or a cloud testing tool), how divided your Mixpanel data is, and how much cross-browser coverage your team has allocated for, you'll want to use this information to form a risk analysis of your application across different configurations and determine the minimum number of browsers, operating systems, and devices you should be testing on. You may find that you have only five major players to account for, or it could be fifteen. Either way, check this data on a semi-regular basis to maintain accurate information about where your customers are coming from.

Filed Under: Browsers Tagged With: analytics, browsers, mixpanel, operating systems
