
Manual Software Testing: Why Automation Cannot Replace It

Manual testing is an irreplaceable part of ensuring software quality. An engineer works through an application, compares expectations for how it should behave with actual outcomes, and reports what failed. But under time-to-market pressure, companies are pushed toward automation. To people without an engineering background, the benefit looks unambiguous: whenever a tool performs a task, a tester saves time to focus on more important areas of the application. They also struggle to see how an activity as seemingly tedious as hunting for defects can be interesting and exciting. Lately we even hear that the manual testing job is obsolete. Is the human element really no longer necessary in this industry? In this article, we argue against that claim.


New features must be manually tested

Regression testing verifies that previously developed and tested software still behaves the same after changes to the code. This tedious, time-consuming process repeats with every build or version, so the advantage of automating it is clear. Endurance, load, and scalable performance testing (simulating thousands of users) are likewise inherently automation-driven.
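To make that contrast concrete, here is a minimal sketch of the kind of check that is worth automating for regression: the expected behaviour is fixed and the same assertion must hold on every build. The staging URL, endpoint, and credentials below are invented placeholders, not a real project setup.

```python
# Minimal regression check: previously working behaviour that every new
# build must preserve. URL, endpoint, and credentials are hypothetical.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_login_still_returns_token():
    resp = requests.post(
        f"{BASE_URL}/api/login",
        json={"user": "demo", "password": "demo"},
        timeout=10,
    )
    # The assertion never changes between builds, so a tool can repeat it cheaply.
    assert resp.status_code == 200
    assert "token" in resp.json()
```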

However, when a new version is tested, regression is always followed by tests of the new features. Freshly added functions will, with very high probability, interact with legacy code in unpredictable ways, and it is unrealistic to foresee every point where two generations of features will collide. This is where exploratory manual software testing comes in. Creative talent and analytical skill are needed to design and execute scenarios nobody thought of, from the end user's viewpoint, and to find issues the product specification never accounted for. A tool can find a bug only if it is programmed to do so; only a human being can imagine and anticipate. And areas such as installation, compatibility, user interface, layout, recovery, and documentation can still only be tested manually.

A quick return on investment in automation is a myth

The cost of purchasing an automation tool and the licensing fees for proprietary frameworks are very high, and if the staff is not skilled enough, the tool will sit on the shelf. A linear "record and play" approach is not maintainable, so team training, support for the test source code, and the constant updating of brittle test scripts will cost even more. Scripts must be maintained regularly, alongside every new software version; neglecting this leads to false error reports and a growing rat's nest of code. Developing an automation framework is a separate project in itself, while an open-source tool requires consulting or in-house expertise to unlock its full value.
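As a rough illustration of why the "record and play" style is so brittle, compare a recorded, structure-bound locator with one a skilled engineer would write for maintainability. The page, URL, and selectors below are hypothetical.

```python
# Hypothetical example: a locator captured by a record-and-play tool versus
# one written by hand for maintainability. Page and selectors are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")  # hypothetical page

# Recorded locator: tied to the exact page structure, breaks on any redesign.
recorded = driver.find_element(
    By.XPATH, "/html/body/div[3]/div[2]/form/button[1]"
)

# Maintainable locator: tied to a stable, meaningful attribute.
maintainable = driver.find_element(By.CSS_SELECTOR, "[data-testid='place-order']")

maintainable.click()
driver.quit()
```

Keeping thousands of locators like the first one alive across releases is exactly the hidden maintenance cost described above.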

All of this requires skilled people: engineers with either a solid programming education or enough technical background to learn and apply new technologies. Not every company can afford to invest in a good automation team, and certainly not every company needs to. An average company building e-commerce or video-game applications tests for about forty hours a week. How much of that time should go to human investigation and how much to tooling depends on the expected scale and life cycle of the product. Manual testing is less expensive in the short term, for example if you need to test a simple app once and only a few updates are expected: with manual testers, you pay only for hiring them and for their time.

Most test ideas need to run only once or twice; they never repay the cost of institutionalizing them and are not worth running all the time. Identify the crucial application areas and test cases, and avoid automating modules or cases that will run once and never enter the regression suite. Automate reasonably: only things that will be executed repeatedly at regular intervals. A script suite should stay in service until building it becomes clearly cheaper than executing the same tests by hand, typically after 15-20 runs or more across separate builds. Only then does the investment in test automation pay off, and that happens in the long term. In any case, no tool will do all of a tester's work.
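A quick back-of-the-envelope calculation shows the idea. The effort figures and hourly rate below are invented purely for illustration; plug in your own numbers.

```python
# Break-even estimate for one automated suite. All figures are invented
# for illustration only; substitute your own rates and effort estimates.
script_build_cost = 8 * 50       # 8 hours of automation work at $50/h
upkeep_per_run = 0.1 * 50        # small script maintenance per execution
manual_cost_per_run = 0.5 * 50   # a manual pass over the same test cases

runs = 1
while script_build_cost + runs * upkeep_per_run >= runs * manual_cost_per_run:
    runs += 1
print(f"Automation breaks even after {runs} runs")
```

With these made-up figures the suite pays off only after about 21 executions, in line with the 15-20+ runs mentioned above.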


Human insight should not be sacrificed

In manual exploratory testing, each new test idea derives from the result of the preceding one. Context-driven manual QA engineers are typically subject-matter experts who know the systems under test inside out; in short, scripts will never replace their personal view. Imagine an automated test that runs 100 times and passes every time: does it honestly add any value? If the outcome is genuinely accurate, such a test may be valuable, but only for a limited time span, unless it verifies a high-risk scenario. And if a manual tester has meanwhile found more defects, the project manager should consider which approach is more profitable. Isn't it better to discover real issues manually?

Evaluating usability, appearance, consistency, and comfort is more an art than a science. Accessibility and unpredictable human behavior can hardly be automated. For example, a tool will happily click a button even if it is visually shifted out of place or part of its label is cut off, while an engineer will spot such a problem instantly. Likewise, a human will notice when the contrast between a button and its background is far too low, making it hard to read what to do. Such user-interface feedback can only be given manually. Image reCAPTCHA, crucial for security, is also a manual-only task. For these reasons, humans will keep testing graphics-intensive games and e-commerce for decades to come.
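A tiny, hypothetical Selenium sketch makes the point: the script can locate the button, read its raw pixel geometry and CSS colours, and click it, but nothing in those numbers tells it that the layout looks wrong or that the contrast is uncomfortable. The page and element id are invented.

```python
# Hypothetical illustration: the script sees only coordinates and colour
# values, not "this looks misaligned" or "this is hard to read".
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://app.example.com/settings")  # hypothetical page

button = driver.find_element(By.ID, "save-btn")
print(button.location, button.size)              # raw pixels, no notion of "shifted"
print(button.value_of_css_property("color"),
      button.value_of_css_property("background-color"))  # numbers, not "low contrast"
button.click()                                   # succeeds even if the button looks wrong
driver.quit()
```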


An unstable user interface requires manual testing

In the early stages of the development cycle, the user interface changes constantly. That causes numerous false errors, test runs stalled while waiting for unavailable data, and re-runs and fixes that take longer than expected. Until the UI is sufficiently stable and frozen, automating further tests is a bad idea: those scripts will be very expensive to maintain. A team testing a startup product, a small app, or anything not yet stable has to work flexibly and on the fly, and will probably be better off ensuring quality by hand. When a feature or sprint is ready, engaged customers often find bugs and suggest smart new requirements as well.

As we said, good manual software testing includes exploration. Following the same "steps to reproduce" every time may be consistent, but over time it reduces test coverage as the application's features expand. An intelligent tester does not view test cases only through the lens of the expected result, but always asks, "I wonder, what if we…?"

Tools are not ready to execute after every change, even in the same environment; they need to be set up. And finally, plain common sense: if you need to quickly test one small change, go ahead and test it manually, right here and now :).

Global and future view

Crowdsourced beta testing in different regions and languages brings valuable locale-specific feedback to global companies.

The ability to develop for a new platform, be it web, smartphone, tablet, or a popular plugin, emerges long before test-scripting technology exists for that platform. When Google created the Android operating system, no tools existed to test either native Android software or the OS itself. Why? Because at the system level, Android relied on manual testing. The same goes for newer technologies: Google Glass, Siri, Oculus, and innovative graphics-intensive video games (some of which even react to human motion). Successful development companies will apply a blend of manual and automated approaches, and the proportion will vary constantly.

Despite ongoing predictions to the contrary, manual testing is very much alive. At the end of the day, we should think about testing the same way we think about development: it is something done by humans, not by computers. Automation helps, but it does not do the testing; it shapes thinking and imposes patterns and limitations. Until artificial intelligence surpasses the human brain, manual testing will play a decisive role in software development.

Article written by Rachit Mangi

Hey, fellas! This is Rachit Mangi, co-founder and administrator of Tricks N Tech. He is a Computer Engineer by degree and a passionate blogger by heart. He likes to code sometimes. He is fond of watching movies and cricket. He loves to travel to new places.

