Automation testing shouldn’t be viewed as a one-size-fits-all solution for quality assurance when it comes to your company’s software. Rather, information technology pros need to take a more tailored approach, deploying both manual and automated testing as best suits their specific project.
That was the assessment from the panel of quality-assurance (QA) professionals who spoke to a lunchtime crowd at the Tech Mash Up hosted by the Information and Communication Technologies Association of Manitoba (ICTAM) on March 21 at the NRC building in downtown Winnipeg. The panel featured advice and expertise from QA professionals at Winnipeg-based IT and business consultancy Online Business Systems, which has been in business for 30 years with clients across North America.
What is Automated Testing?
Automated testing is a way to control the process of software testing and quality assurance (QA) at your company to ensure you can discover and fix any bugs in your software or website. The process includes providing reports and comparing results against previous tests so that issues can be addressed and your software can be improved.
Why should your company pay attention to new developments in automated software testing? As explained by TechTarget, many organizations only look to automation when a manual testing program isn't meeting their expectations, or when additional human testers aren't available due to resource constraints.
Techopedia goes further to explain the importance of the testing phase in your development process. "It ensures that all the bugs are ironed out and that the product, software or hardware, is functioning as expected or as close to the target performance as possible. Even so, some tasks are too laborious to be done manually even though they are easy enough to do. This is where automated testing comes in."
Meet the QA pros
Automated testing has come a long way since the days it was mainly limited to testing graphic user interfaces (GUI), but that doesn’t mean it’s all you need to test your new or updated applications. When effectively designed and deployed, test automation software can be a highly valuable tool to measure, manage and troubleshoot technology-based solutions.
Online Business Systems' senior consultant and QA analyst Sandra Epp moderated the Tech Mash Up panel, featuring consultants and QA analysts Tapas Sahoo, Nazmus Saquib and Alexander Sigal. According to these experts, IT professionals still need to know when and where to utilize more traditional manual testing methods and must realize that any automated tests are only as good as the questions they’ve been designed to ask.
As to whether QA professionals should see automated software testing spelling the end of manual testing, Saquib was clear about the need for both approaches.
“We haven’t reached that stage where we can actually replace manual testing,” Saquib said. “Let’s be smart with automation.”
He sees the biggest value of automated testing in areas where it can reduce human effort, and automated testing has already proved its worth for regression testing — ensuring that newly added functionality does not break the existing software. But when it comes to testing new functionality added to an application, the human touch of manual testing is vital.
“When it comes to new functionalities, that’s where we’ll have someone with experience to come in,” he said. “We haven’t reached that stage yet where (automated testing) can replicate cognitive thinking and that’s why manual testing is still very important.”
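Saquib’s split between the two kinds of testing can be sketched in a few lines of Python. This is purely an illustration, not anything shown at the panel; the billing function and its test cases are hypothetical. The point is that once a human has designed and reviewed a case for new functionality, it joins the automated suite and guards the old behaviour from then on.

```python
def calculate_invoice_total(items, discount=0.0):
    """Hypothetical function under test: sum line items, apply a discount.
    The `discount` parameter represents newly added functionality."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 - discount), 2)

def test_existing_behaviour_unchanged():
    # Regression case: with no discount, totals must match the
    # pre-change behaviour exactly.
    assert calculate_invoice_total([(10.0, 2), (5.0, 1)]) == 25.0

def test_new_feature():
    # The new functionality gets a human-designed case first; once
    # reviewed, it becomes part of the automated regression suite.
    assert calculate_invoice_total([(10.0, 2)], discount=0.1) == 18.0

test_existing_behaviour_unchanged()
test_new_feature()
```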
Sahoo said it’s still a misconception in the industry that automated testing remains limited to the GUI level. “Is that true anymore? I don’t think so,” Sahoo said. “Today we can implement automation on various levels, on various platforms.”
Automated testing can also be implemented well before an application’s front end is ready. “With the evolution of test automation, we can plug in automation testing at any level of the application architecture,” he said.
Why design matters
As with traditional testing methods, the key to a good test is its design and how well it can deliver usable results.
“Let’s understand what we mean when we say we’re implementing test automation,” Sahoo said. “Basically, we write some script or code to automate existing, pre-defined manual steps. Manual test cases that are not written diligently with the understanding of the application or the understanding of the requirements will not help in improving quality. Until and unless you succeed at manual testing, you cannot succeed at automation testing.
“It is really important to emphasize the fact that test automation is a part of quality assurance, and to improve the quality, we need to ensure that we are good at testing. We need to make sure the test cases we have written are good enough to verify all the requirements properly. We need to make sure our objective is very clear. Every project is different.”
Simply put, design matters. “It just falls in line with that idea that you need to have that intelligent design,” Epp added. “If you’ve got your design solid, then it can add value. But if you haven’t solidly designed how you’re going to implement your test automation, it can actually detract from the value.”
Common sense and a critical eye
The panel also urged QA professionals to look further than a simple ‘pass’ or ‘fail’ result when using automated testing. A 100 per cent pass doesn’t mean your code is perfect, but on the flip-side, a fail doesn’t necessarily mean the problem resides in your coding.
“In all kinds of different testing, we cannot be 100 per cent sure if bugs are not found, that our application or software is bug-free,” Sigal said.
Automated tests are defined procedures that work through defined checks, so a clean run doesn’t prove your application is defect-free. Results depend on the questions a test asks, so it’s critical to make automated testing objectives very clear and mission-driven. And if you get a ‘fail’ from an automated test, don’t simply take it at face value.
“There are so many other factors that can affect test execution and cause your test script to fail,” Sahoo said. A sluggish network, for instance, could cause a timed page-load test to fail even though the application itself is fine.
“It’s important to design your scripts so you can differentiate whether it’s a genuine test-case fail or if it’s failed because of external factors,” Sahoo said. “It’s also important to have sufficient logs so you can trace back to the root cause of failure.”
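The kind of differentiation Sahoo describes can be sketched in Python. This is a minimal illustration under assumed names: the two exception types stand in for whatever a team’s actual test framework raises for environment problems versus real assertion failures, and the retry/back-off values are arbitrary.

```python
import logging
import time

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("qa")

class ExternalFailure(Exception):
    """Slow network, unavailable service: not an application bug."""

class TestCaseFailure(Exception):
    """The application genuinely did the wrong thing."""

def run_with_classification(test_step, retries=2):
    """Run a test step, separating genuine failures from external ones.
    External problems are retried and logged as warnings, so a sluggish
    network doesn't get reported as a bug in the code under test."""
    for attempt in range(1, retries + 1):
        try:
            test_step()
            return "pass"
        except ExternalFailure as exc:
            log.warning("attempt %d hit an external factor: %s", attempt, exc)
            time.sleep(0)  # placeholder for a real back-off
        except TestCaseFailure as exc:
            log.error("genuine test-case failure: %s", exc)
            return "fail"
    return "environment-fail"

def always_passes():
    pass

def times_out():
    raise ExternalFailure("page load exceeded its time budget")
```

Reporting “environment-fail” separately from “fail” gives exactly the traceability Sahoo calls for: the logs record which attempts hit external factors and which exposed a real defect.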
That’s where visibility and accessibility factor into any automated testing. “We want to make sure those are stored someplace with proper time and date stamps, so after 10 sprints if we want to go back we can see what a past result was,” Saquib said. “We want to be precise on what the failures were, what the changes are that we made to make sure the application starts working on those functionalities.”
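Saquib’s point about date- and time-stamped results can be sketched as a simple append-only history. The `history` list here is an illustrative stand-in for whatever store a team actually uses — a JSON-lines file, a database table, a CI dashboard — and the test names and sprint numbers are invented.

```python
from datetime import datetime, timezone

def record_result(history, sprint, test_name, outcome, detail=""):
    """Append one test result with a UTC timestamp so results from any
    past sprint can be looked up later."""
    history.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sprint": sprint,
        "test": test_name,
        "outcome": outcome,
        "detail": detail,
    })

def results_for_sprint(history, sprint):
    """Trace back: pull every recorded result from a given sprint."""
    return [r for r in history if r["sprint"] == sprint]

history = []
record_result(history, sprint=3, test_name="login_regression",
              outcome="fail", detail="timeout waiting on auth service")
record_result(history, sprint=4, test_name="login_regression",
              outcome="pass", detail="retry logic added")
```

Ten sprints on, `results_for_sprint(history, 3)` still shows what failed then and why — precisely the look-back Saquib describes.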
Epp said the same common-sense approaches apply to both automated and manual testing. “Even in manual testing, you can execute your tests, but if you have no way of expressing to the project manager, to team leaders or to management how those tests worked, whether they were successful, how they were executed, how long it took — those sorts of metrics — if you can’t provide that to upper management, then you’re not providing the value you need to provide.
"And if you’re not able to show the value you’re offering, you may not be given the opportunity to give that value again in the future. Being able to show the results of what you’ve done is critical.”
Agile and Waterfall
Test automation can be equally applicable to both Agile and Waterfall style software development projects, Saquib argued.
“Developers are opting to work in Agile-based rather than the traditional Waterfall setting where we start developing the software and at the end, when it’s all there and it’s into the stabilization state, it goes to QA and then it’s tested,” Saquib said.
“When there are bugs coming in, then it goes again to the developers and it takes a lot of time to fix the bugs, put it into a stabilization state and then go into live production. That’s why we’re more inclined to Agile where small pieces of the software are developed as part of a ‘sprint’ or iterations.”
He said automated testing is a valuable tool to validate functionalities along the way in an Agile-based project.
“As we have those sprints, we have to validate the functionalities in each of the previous sprints,” Saquib said. “That’s where (automated) regression testing comes in really handy. If we start to use really efficient design techniques, we can utilize our scripts to actually support both systems.”
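One common design technique along the lines Saquib describes is data-driven testing: write the check once, then grow a case table each sprint instead of writing new scripts. The validator and case table below are purely illustrative, not from the panel.

```python
REGRESSION_CASES = [
    # (added_in_sprint, input_value, expected) -- the table grows each sprint
    (1, "user@example.com", True),
    (1, "not-an-email", False),
    (2, "user+tag@example.com", True),  # new functionality from sprint 2
]

def is_valid_email(address):
    """Stand-in for the functionality under test."""
    local, sep, domain = address.partition("@")
    return bool(local) and sep == "@" and "." in domain

def run_regression(cases):
    """Re-run every case from every previous sprint; return the failures."""
    return [(sprint, value) for sprint, value, expected in cases
            if is_valid_email(value) != expected]
```

Each sprint adds rows, not scripts, so the same regression runner keeps validating the functionality of all previous sprints.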
Epp said testing automation should begin early for any project. “We need to start right at the design phase,” Epp said. “We want to have it available throughout the lifecycle of the project. It’s not something you wait until the end of the waterfall to start coding because it will take longer to create.”