
Legend says that there will come a time when there is no need for manual testers, and they will all be slain at the hands of automation. Scared? Well, don’t be. Manual testing is not going to be replaced easily, and even if testing ever becomes completely automated, your jobs will be the least of your concerns, fellow manual testers. In this article, we finally start learning about testing by first clearing up all the confusion about manual and automated testing.
Le important note: Before stepping into the galaxy of testing, you might want to consider getting equipped with the knowledge of SDLC. When we say STLC, and you think of an edible object, it is highly recommended to visit the previous tutorial. Don’t worry, we will wait for you… are you done? Okay, let’s go!
QA, Testing, What’s Going On?
Software testing, in our definition, is the activity of checking the quality of an application by going through the system and trying to find defects. Quality assurance (QA), on the other hand, is the whole process of verifying and validating whether the product works up to the client’s expectations. So, we can say that testing is a subset of quality assurance, although the two terms are often used interchangeably. In the following tutorials, we will find out which tasks are included in quality assurance, i.e., the STLC. Those steps, such as test plan creation, are independent of manual or automated testing but are still part of QA. Capiche? Cool.
Manual, Automated, What?
Manual testing is, in the QA life cycle, the practice in which test cases are written and executed by human power instead of machines. Automated testing, in turn, is where the test cases take the form of computer scripts written by automation engineers and executed automatically, without the need for flesh.
We are living in the 21st century, so why do we even need humans if we can automate everything and let robots conquer the world? Not so easy, pal; first we need to understand the usage of each of these two approaches.
Manual Going Nowhere
First of all, it is fundamentally incorrect to think that everything can be automated. Let’s think about localized content on a website that has to be tested. With automation and some help from an NLP (Natural Language Processing) algorithm, we can pretty much verify whether the text has any lexical or maybe even semantic issues. But what if it contains an image of a hand sign, a certain expression, or even a color that is perceived as offensive by a certain country’s audience? In this case, we will have to rely on hardworking localization testers to avoid all the havoc.
Exploratory testing is another method that is hard to automate. It is done to understand the application and its requirements by acting like a user and freely poking around the interface. Since it is performed as a first approach to get an understanding of the software, it has to be done manually.
Also, by testing manually we can cover more scenarios. In the standard STLC process, test cases are written in advance of the execution phase, and the testing follows those cases. However, careful manual testers might notice additional scenarios to play around with while doing the actual testing, and if they have a legit work ethic, they will test the new scenarios they discover as well. Thinking of creative ways to break the system is not something that can be automated either. Humans: 3 – Robots: 0.
Usability testing is also something that is supposed to be done by real users. Automation can replace functionality or performance testing, as those do not change from person to person. Usability, though, requires subjective feedback from someone who actually uses the program. Similar to the localization testing mentioned earlier, a manual tester will give more valuable feedback on the user experience, such as ugly elements on the UI or unclear texts.

Lastly, sometimes manual testing is simply the cheaper way to go. That’s why nearly all game companies hire manual testers. Games are usually complex pieces of software developed under very tight deadlines, and most of the time it is not feasible to write automation scripts covering all the possible behaviors of the players, which can be an extremely huge number. Also, considering the need to verify the enjoyability (usability) aspect of games, there is no doubt that manual testing is superior in this field.
When to Automate Then?
Just as we cannot automate everything, we cannot manually test everything either; or rather, doing so would be more expensive and time-consuming. As intelligent testers, there are several occasions where we need the power of automation.
If we need to test the same cases over and over again, those cases are better automated. To illustrate: whenever a defect is fixed and we need to verify that the related features are not affected by the fix, we execute all the test cases in that area (this is called regression testing). Whenever a new build is received, a smoke test is done on the whole application, or on a particular area, as a general check on whether the build is testable at all. Also, as the last step of testing, usually an end-to-end test is performed by executing all the previous test cases to confirm the system works fine. In these cases, it is not good for time efficiency, or for the sanity of the testers, to manually execute the tests multiple times. If the process is automated, all of it is done with just one click.
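To give a feel for what “one click” looks like, here is a minimal sketch of an automated regression suite in Python. Everything in it is hypothetical: `apply_discount` is a made-up stand-in for real application code, and the cases are invented for the example. In a real project you would run such a suite with a test runner like pytest after every fix.

```python
# Hypothetical function under test for this sketch: applies a percentage
# discount to a price. In a real project this would be your application code.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Regression cases: re-run after every bug fix to confirm nothing broke.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected: invalid input must be rejected
    else:
        raise AssertionError("expected ValueError for percent > 100")


if __name__ == "__main__":
    # With pytest installed you would simply run `pytest`; calling the
    # functions directly keeps this sketch dependency-free.
    test_typical_discount()
    test_zero_discount_leaves_price_unchanged()
    test_invalid_percent_is_rejected()
    print("regression suite passed")
```

Once written, the whole suite re-runs in seconds after every fix or new build, which is exactly the repetition that drives testers mad when done by hand.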

Furthermore, performance testing can become impossible without an army of manual testers. For example, sometimes the limits of the system should be tested by simulating a real-life scenario of a million users signing in at the same time. Good luck finding a million volunteers to sign in simultaneously; with automation, we can do it in a matter of seconds (maybe minutes).
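As a taste of how automation simulates many simultaneous users, here is a minimal load-test sketch using only Python’s standard library. It is an assumption-laden toy: `sign_in` is a stub standing in for a real sign-in call (e.g., an HTTP request to an auth service), and the user count is scaled far down from a million so the example finishes quickly.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for a real sign-in call (e.g., an HTTP request
# to your authentication service).
def sign_in(user_id: int) -> bool:
    time.sleep(0.01)  # pretend network/server latency
    return True       # pretend the sign-in succeeded

def run_load_test(num_users: int, max_workers: int = 100) -> dict:
    """Fire num_users simulated sign-ins concurrently and summarize results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(sign_in, range(num_users)))
    elapsed = time.perf_counter() - start
    successes = sum(results)
    return {
        "users": num_users,
        "successes": successes,
        "failures": num_users - successes,
        "seconds": round(elapsed, 2),
    }

if __name__ == "__main__":
    # A real run would target a staging server with far more users,
    # typically via a dedicated tool such as JMeter or Locust.
    print(run_load_test(500))
```

The same idea scales up with dedicated load-testing tools; the point is that one script replaces a stadium full of volunteers.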
Which One Is Better?
If you are still asking that question, I kindly advise you (I’m crying inside) to take a look at their usage one more time. Folks, this is not a game of scores, and just like any purposeless question in the software world (e.g., Which programming language is the best? Geez.), it cannot be answered. We should use technologies and methods according to our needs rather than ideologizing them. Indeed, a smart QA team will utilize both manual and automated testing as needed. Okay, the preaching is done. Now we can look at the table below as a summary of what we have discussed.
| Feature | Manual | Automated |
| --- | --- | --- |
| Execution | By hand | By code |
| Reusability | Cases cannot be reused | Cases can be reused |
| Suitable Methods | Exploratory, Localization, User Acceptance | Smoke, Regression, Performance |
| Cost to Develop the Cases | Low | High |
| Cost to Run the Cases | High | Low |
| Dynamic Changes During Execution | Can be made by the tester | Require script updates |
There can be more points added, but once you get the core of when and why to use each approach, it will get easier to see more differences (Not that we’re lazy to make the table bigger, come on).
In the next chapter, we will start seeing how to actually perform the QA step-by-step from test planning to case and tracker creation, and execution. It will be so much fun, pinky promise. Let’s go!