Modern QA testing is hard. It’s high pressure, high tempo, and there’s a seemingly endless number of tests that need to be run.
But frankly, if you’re a young developer or QA, you don’t know how good you have it. Things used to be so much worse.
The more experienced professionals out there know how much better things are. They've been building programs and running tests for the last few decades, and they've seen the improvement for themselves. So, with this in mind, let's take a nostalgia-tinged, dread-filled look back at those early days.
Deployment and versioning pain
These days, thanks to automatic updates, testing on the latest software versions is a relatively simple process. However, before this was a common thing, it was normal for any update to an OS or browser to arrive on several CDs (or, god forbid, floppy disks), which then had to be installed on a PC for manual testing.
Naturally, this was a very slow process. Updating a website or app would end up taking months, not days. And the installation process wasn’t simple either. The Windows registry had to be manually updated, files had to be placed in exactly the right locations, in the right order, and the right version had to be installed each time. It wasn’t easy.
And the installation process was just one painful element. It wasn’t alone.
DLL Hell was a common challenge developers and QAs had to face. Applications would often depend on a shared DLL, but different versions of that DLL weren’t always compatible. Installing a new version of one application could break another, fixing that would break the first application again, and so on. This made finding version-specific bugs nearly impossible, and made configuring and deploying apps and websites extremely awkward.
Compounding everything, there wasn’t much support out there. While web forums existed, they weren’t highly populated, tech support wasn’t forthcoming, and documentation was patchy. If you had a problem, you were probably on your own.
Clean machine chaos
Back in the late 90s and early 2000s, unless you were working for one of the big names like Microsoft, with thousands of employees, effective testing was almost certainly out of reach. You simply didn’t have the resources for it.
Naturally, this lack of testing meant finding bugs was extremely difficult, patchy, and time-consuming. That’s because it wasn’t automated in any way. It was carried out by hand, on one specific clean machine.
Because VMs either didn’t exist yet or weren’t viable options, you had to keep a machine running the latest standard version of Windows, with no additional software installed, purely for testing purposes. While this was the only viable way to test, it made spotting the kinds of software bugs and conflicts customers would hit on their own machines close to impossible.
VMs really were a game changer for the industry. Suddenly, you could replicate any software environment, at any time. You didn’t need to build new clean machines with the latest software every few months. Bugs that only surfaced under specific conditions finally became possible to find.
But VMs didn’t solve every problem.
Testing turmoil
Because testing capabilities were so limited, it used to be very common for websites and apps, including major ones, to release updates without getting close to completing the full testing pyramid. In some cases, they barely tested at all.
This wasn’t negligence; it simply wasn’t feasible to test as thoroughly as you can today. Plus, because staging environments weren’t common, any update that did roll out would involve a period of downtime. Add the two factors together, and the potential for buggy, frustrating releases was very high.
Clearly, the arrival of staging environments and modern deployment models was a much-needed, welcome change that made thorough testing genuinely feasible. It gave QAs a sandbox to play in, letting them test every element of an app or website and giving them the best chance of finding bugs before release.
Around the same time, automation tools like Selenium started to emerge, which changed the game completely. But even then, 2004 Selenium was a vastly different prospect to modern Selenium. It was harder to use, and way more limited in what you could use it for, naturally.
There was also a lot more that needed to be tested. The browser market was far more diverse, and each browser implemented web standards subtly differently. Because of this, if one browser, say Firefox, was missing a feature its rivals had, you’d need to use a polyfill to add it in.
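A polyfill is just a script that checks whether the browser is missing a feature and, if so, patches it in. As a minimal sketch (using `Array.prototype.includes` as an illustrative example of a method older browsers shipped without), it might look something like this:

```javascript
// Fallback implementation, mirroring the standard behaviour
// (strict equality, negative fromIndex, and NaN matching NaN).
function includesFallback(arr, searchElement, fromIndex) {
  var len = arr.length;
  var i = fromIndex | 0;              // undefined -> 0, truncate to integer
  if (i < 0) i = Math.max(len + i, 0); // negative fromIndex counts from the end
  for (; i < len; i++) {
    var el = arr[i];
    // el !== el is only true for NaN, so NaN matches NaN
    if (el === searchElement || (el !== el && searchElement !== searchElement)) {
      return true;
    }
  }
  return false;
}

// Patch the method onto Array.prototype only when the browser lacks it,
// so modern browsers keep their fast native version.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    return includesFallback(this, searchElement, fromIndex);
  };
}
```

Sites would load dozens of scripts like this, and QAs had to verify behaviour both with and without them, on every browser.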
So, while the mid 2000s did see the start of the modern, automated testing environment, it certainly wasn’t like it is now.
Things are pretty good, actually
Yes, modern development and testing isn’t easy. There’s a lot of pressure, and an enormous amount of complexity to modern apps and websites. Plus, a growing sense that change needs to happen immediately, not next week.
However, QA has seen so many fundamental changes over the last two decades that it’s easy to forget how far we’ve come. Modern testing tools, from old masters like Selenium to new upstarts like Playwright, have changed the landscape, while modern build processes, VMs, and a more standardized browser market have made testing a lot quicker and simpler.
And that’s before you add in AI assistance for writing code, standardization, and more. Not to mention the bolt-on testing services you now have access to, including Mailosaur. So, while there are still plenty of things that frustrate and challenge devs and QAs every day, remember, it used to be a lot, lot worse.
If you’re a younger developer, enjoy the knowledge that you have benefitted from the (often painful) experiences of older colleagues, who had to endure some pretty gruelling processes to get where they are. Because we guarantee that every experienced developer will have their own horror story to tell.
So, if you’re just starting out, be sure to ask your colleagues. They’ll probably be happy to share, but they might need a beer afterwards.