Our Approach to Quality Assurance before releasing a SaaS application (or any application)
About fifteen years ago I was part of a small test team responsible for testing and quality assurance on a rather large and complex ERP product. Our job was fairly structured, but it was also very demanding: we had a long list of clients who faced the challenge of updating their running product to a newer version every first Monday of the month.
Our development and test strategies were very risk-driven. The project and test managers worked hand in hand with the customer support people. The reason for being so meticulous was that we wanted to prevent client-facing issues after the product release.
So, for the staging area, we set up rigorous protocols to develop, test, and deliver the application. What we adopted became an exemplary effort across the departments. Here is the strategy we drew up:
We defined the QA strategy based on what we actually do – instead of a pre-written process:
We identified what we wanted to achieve:
This involved determining what Quality meant for the internal teams and for the customers. So we changed the culture based on how we were tracking complaints, change requests, customer support calls, implementation goals, and wish lists. We then worked out what value each party perceived at which point, and practically addressed those points for each of them. Our focus was on functionality, performance, usability, security, and compatibility.
Identification of the right testing techniques:
We selected testing methods based on our own (admittedly selfish) goals. A bookish approach would have had us apply a whole cluster of techniques; what we did instead was identify what the teams actually needed. For us, that meant including functional testing and performance testing and excluding security testing and compatibility testing.
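To make the distinction concrete, here is a minimal sketch of the two techniques we kept, assuming Python and the standard unittest module. The calculate_invoice_total function and the half-second budget are hypothetical and purely illustrative, not the product's real logic.

```python
import time
import unittest

# Hypothetical domain function, used only to illustrate the two kinds of checks.
def calculate_invoice_total(line_items, tax_rate):
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceTests(unittest.TestCase):
    def test_functional_total_with_tax(self):
        # Functional check: known inputs must produce the expected total.
        items = [(2, 10.00)]  # 2 units at 10.00 -> subtotal 20.00
        self.assertEqual(calculate_invoice_total(items, 0.25), 25.00)

    def test_performance_total_stays_within_budget(self):
        # Lightweight performance check: a large invoice should still be
        # computed within an agreed budget (the 0.5 s threshold is illustrative).
        items = [(1, 1.00)] * 100_000
        start = time.perf_counter()
        calculate_invoice_total(items, 0.25)
        self.assertLess(time.perf_counter() - start, 0.5)

if __name__ == "__main__":
    unittest.main()
```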
Establishment of test levels and delivery instances:
Any smart test team will not define one single deadline for deliveries; instead, it will establish multiple milestones along the assembly line. The reason is to give customers a sense that they are being heard, and to balance the work pressure on the development team.
Planning and Preparation:
Test Planning:
Yes, we know it’s boring, but someone needs to do it! We created detailed plans outlining what would be tested, how it would be tested, and the expected results – the elements we built the test plans around were:
- Team Structure
- Equipment and Hardware we have
- Timelines we can stretch and squeeze
- Test Deliverables for each stage
- Test Deliverables for each stakeholder
- Who will be doing what?
The development and structuring of the test cases:
We created an online, shareable test case repository, and everyone started chipping in ideas about what to test and how to test it. It helped us define coverage beyond the application itself and enhanced what we had previously developed as mind maps. It also helped us when we started testing the sprint outputs and deciding what to test, and what not to test, within the scope of each sprint.
Prepare Test Data:
It’s easier said than done: we discourage garbage test data; instead, we like to have records that represent the actual business domain. This reflection of domain-based data is necessary because it gives life to the test cases and the mind maps. The application comes to life in a sandbox environment and lets you play around, so that when reality collides with the context, the damage is minimal and controllable.
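As a rough illustration of what domain-shaped test data can look like, here is a minimal sketch in Python. The Faker library, the CustomerRecord schema, and the field choices are assumptions made for this example, not the actual records we used.

```python
from dataclasses import dataclass
from faker import Faker  # third-party library, assumed available

fake = Faker()

@dataclass
class CustomerRecord:
    # Fields chosen to resemble a typical ERP customer master record;
    # the schema here is illustrative, not the product's real one.
    customer_id: str
    company: str
    contact_name: str
    email: str
    country: str
    credit_limit: int

def make_customer(seq: int) -> CustomerRecord:
    return CustomerRecord(
        customer_id=f"CUST-{seq:05d}",
        company=fake.company(),
        contact_name=fake.name(),
        email=fake.company_email(),
        country=fake.country(),
        credit_limit=fake.random_int(min=1_000, max=250_000, step=1_000),
    )

# Generate a small, domain-shaped data set for the sandbox environment.
sandbox_customers = [make_customer(i) for i in range(1, 51)]
```

A sandbox populated this way reads like real customer data rather than the same throwaway string in every field, which is exactly what brings the test cases and mind maps to life.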
The Selection of the Right Test Tool:
This decision rests on a couple of very important aspects: we need to select the right test tool, and we also need to see what our testers already have in their tool belts. Many people think of test tools only as the range of open-source and commercially available products; we don’t. We look at what our testers can do with the currently available resources: browser and local plugins, O/S features, and the test tools or scripting languages they are comfortable with.
The early bird gets the worm!:
We start from the design board; that is to say, we include our testers from the very first wireframe created in Figma. We start giving our input on features and flows from that moment. It helps the developers check back on their work, and the team can build up a good number of stories and epics right from the start.
We become the users:
Usability Testing:
We start to think like the users, and for this there is a simple set of heuristics we follow:
Building Scenarios: we use the prefix “What if…” and record all our undeclared assumptions. We don’t leave a single node untouched on our designs and mind maps.
Creating Personas – we create user personas based on geographic and psychological traits and on behavior, depending on their age, sex, and professional capabilities.
Soap Opera with multiple user roles – yes, this works: create a story with multiple user roles, each behaving differently while remaining part of one single story, and you will see how you can expand the application's features into a realistic mental model.
Extreme use – it is necessary to stretch the application to its tolerable limits, and for this a controlled amount of negative and extreme behavior is needed:
- Log Off
- Shut Down
- Reboot
- Kill Process
- Disconnect
- Hibernate
- Timeout
- Cancel
- Violate constraints (leave required fields blank, enter invalid combinations in dependent fields, enter duplicate IDs or names) – see the sketch after this list.
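Here is a minimal sketch, in Python with pytest, of how such constraint-violation checks might be automated. The validate_order function and its rules are hypothetical stand-ins for the application's real validation logic.

```python
import pytest  # assumed test runner; plain unittest would work too

# Hypothetical validator standing in for the application's form logic.
def validate_order(order_id, customer_id, quantity, existing_ids=()):
    errors = []
    if not order_id:
        errors.append("order_id is required")
    elif order_id in existing_ids:
        errors.append("order_id must be unique")
    if not customer_id:
        errors.append("customer_id is required")
    if quantity is None or quantity <= 0:
        errors.append("quantity must be a positive number")
    return errors

@pytest.mark.parametrize(
    "order_id, customer_id, quantity, existing_ids",
    [
        ("", "C-001", 5, ()),               # required field left blank
        ("O-001", "", 5, ()),               # dependent field missing
        ("O-001", "C-001", 0, ()),          # invalid value combination
        ("O-001", "C-001", 5, ("O-001",)),  # duplicate ID
    ],
)
def test_constraint_violations_are_rejected(order_id, customer_id, quantity, existing_ids):
    # Every deliberately broken input must produce at least one error.
    assert validate_order(order_id, customer_id, quantity, existing_ids)
```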
Communication and Collaboration:
- All this information should be recorded somewhere and revisited regularly.
- There should be very disciplined control factors in place that produce outputs, status, and red flags at regular intervals – I am talking about touch-bases and stand-ups.
- Have things on shared drives, tag people, and ask for actions.
- There should be one primary communication channel, plus a couple of backup channels. We prefer Slack, WhatsApp, and Skype, with GMeet and Zoom as meeting options – NO ONE LEFT BEHIND!
Additional Tips for Medium-Sized Applications:
The real challenge is to bring everyone up to speed and on board with a clear context of the application: what clients wish to have, what the pain points are, and where we currently stand as a team. Consider yourself driving a speedboat when a new member wishes to come on board – as the captain you need to maintain the course, adjust the speed so the other member can climb aboard, get them properly seated, and then keep on track with your bearings.
Smart team managers are well aware of their surroundings and put certain touch-points, SOPs, and control factors in place that activate automatically on different events.