Creating a software application for your business is always a challenge. The application will represent your brand’s voice and personality, so it is important to ensure its quality with a thorough software testing approach.
Nowadays, testing is a must in all development processes. It gives confidence in the quality of the application and creates a bridge between business and development.
Table of Contents
- Analysing Software Testing Requirements
- Creating Open Communication – Key to Success
- Testing The Performance of the Application
- Core Considerations for QA
- Core Considerations for Testing Strategy
- Test Analysis and Design
- Test Execution for Software
When analysing a testing approach, we need a clear understanding of what will be developed and how.
If you already have a team and a codebase, you may want to increase testing coverage or quality. When improving testing, it is crucial to understand the tools and technologies being used, and to be aware of your team’s level of familiarity with them.
Software Testing Example
As an example, let’s say we have only tested the application manually, but we want to make the process faster by automating the repetitive test cases.
We would have to see whether the existing testers have the knowledge to implement the automation framework. If they do, lucky us; if they don’t, we need to decide whether to hire a specialist or to hold training for the existing testers. Training will impact their manual testing capacity if the project is already underway, so the delivery timeline could change.
Integrations with different third-party providers (payment, content) are also important to consider, together with a delivery timeline on their side.
This step is almost always a pain because third parties don’t always have the best integration manuals. At times they don’t provide much information, so communication is key to staying on top of potential issues.
In this situation, we make sure that we have at least one contact from that 3rd party, so we can create a communication bridge.
You also need to consider the environments that will be used and their limits.
If you decide to implement tests that check the performance of the application, you will need an environment as close to production as possible in order to get realistic results and expectations.
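To make “realistic results” concrete: a performance run usually asserts on a percentile of the response-time samples rather than the average, which hides spikes. A minimal sketch, with hypothetical class and method names:

```java
import java.util.Arrays;

// Minimal helper for summarising response-time samples from a performance
// run. Class and method names are illustrative, not from a specific tool.
class LatencyStats {

    // Nearest-rank percentile (p in 0..100) over a sorted copy of the samples.
    static long percentile(long[] samplesMs, double p) {
        if (samplesMs.length == 0) throw new IllegalArgumentException("no samples");
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }

    // Assert against the 95th percentile, not the average: a good average
    // can hide the slow outliers that real users actually notice.
    static boolean meetsTarget(long[] samplesMs, long targetP95Ms) {
        return percentile(samplesMs, 95) <= targetP95Ms;
    }
}
```

A production-like environment matters precisely because these percentiles shift with hardware, data volume, and network topology.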
Before even starting development and testing, you need to decide which development process is best for your project. At CodeFactory, we have multiple projects that use different methodologies. One example is feature-driven development, which essentially means building a feature list and then planning around each feature.
This methodology works when you have all the information on what needs to be developed from the beginning.
Agile Methodology in Software Testing
Other projects use the Agile methodology, where the customer collaborates closely with the development team throughout the process.
Teams develop in short sprints, each of which includes a defined duration and list of deliverables in no particular order. During sprints, teams work towards the goal of delivering working software or some tangible, testable output.
Sometimes customers already have a live application and want to completely change the look or how their existing functionalities work.
In this case, the existing codebase needs to be analyzed to determine which functionalities should be kept, adapted, or removed. We also need to consider that the existing application has existing users, accounts, wallet information, and so on. That’s why you need to analyze the impact of releasing new features, as it may involve large data migrations.
For us to be able to start building a team, we need to know the release timeline. If a customer wants to be live by a certain date and we have, say, 6 months to implement 12 complex features, we will need far more developers and testers than if we were delivering the same features over 2 years.
Once we identify all activities, dependencies, resources, and a timeline, we establish a clear leadership hierarchy and its responsibilities. These leads become the key people who facilitate communication between development, business, and the customer.
Once we have high-level knowledge of what the client wants (requirements, timeline, dependencies), we can create a high-level testing approach for the development cycle.
For example, by looking at the timeline, we can establish testing types so we can approximate when we will need to run regression tests or user acceptance tests.
Then, we have to consider what the testing process will look like from sprint to sprint. For example, when the sprint starts, testers also start creating test scenarios, as this is the most time-consuming activity.
This makes things easier: when a ticket moves to testing, testers simply run the tests and already know what the ticket is about, because they had enough time to raise questions about the requirements.
How We Test
Another aspect of the process is where and how we test. Usually, there is a test environment where dev branches are deployed, and that’s where the feature testing is done.
There is another environment where the master branch is deployed; the tester can smoke test the changes once the developer branch is merged to master. We also decide which test management tool to use. Our most common choice is TestRail, as it makes it easy to track test cases, test plans, and test runs.
Another important aspect is compatibility, so we need to decide on the browsers and browser versions, as well as the operating systems, on which testing will be performed. This information can come from the client’s strategic team, which researches the market in the target country and identifies the most used platforms.
Testing is done by developers and testers on different layers, both manually and automated. For component testing, we use unit tests, which are written by the developer.
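As a sketch of the component level, here is a unit test in plain Java. The component under test and its behaviour are hypothetical; in a real project this would be a JUnit or ScalaTest case run on every build.

```java
// Hypothetical component under test, not from any specific codebase:
// formats a wallet amount stored in minor units (cents) for display.
class WalletConverter {
    static String toDisplay(long cents) {
        return String.format("%d.%02d", cents / 100, Math.abs(cents % 100));
    }
}

// A developer-written unit test in plain Java: exercise the component in
// isolation and fail loudly on any unexpected output.
class WalletConverterTest {
    static void run() {
        check(WalletConverter.toDisplay(1999).equals("19.99"));
        check(WalletConverter.toDisplay(5).equals("0.05"));
    }
    static void check(boolean ok) {
        if (!ok) throw new AssertionError("unit test failed");
    }
}
```

The point of this layer is that each component is verified on its own, before any integration with other units.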
Integration testing is an effort shared between the developer and the automation tester. It is done at the API level using RestAssured and the programming language in which the application is written, such as Scala or Java.
We want the integration tests to run automatically at each build of the application, which is why they are integrated with Jenkins.
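A declarative Jenkinsfile along these lines could wire the integration tests into each build. The stage names and Maven goals below are illustrative, not taken from a specific project:

```groovy
// Hypothetical declarative Jenkins pipeline; adapt stage names and
// build commands to the project's actual toolchain.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Integration tests') {
            // RestAssured-based API tests run on every build, so a broken
            // integration surfaces before the branch is merged to master.
            steps { sh 'mvn -B verify -Pintegration-tests' }
        }
    }
    post {
        always { junit '**/target/*-reports/*.xml' }
    }
}
```

Publishing the JUnit-style reports in the `post` block lets Jenkins show pass/fail trends per build.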
Once we have tested each unit, and the different units integrated, we perform system testing. This is done manually by testers and evaluates end-to-end system specifications. Manual testing here includes UI testing, API testing, and database and log checks.
We automate some major system flows; these tests are considered smoke tests. They should run at each new deployment to ensure that the most important functions work.
We don’t automate all system tests because doing so is time-consuming and not as reliable as manual testing: many components are involved, and any change or downtime in any of them can cause the tests to fail.
The last testing level is UAT, where demands differ: some customers want it done by the same testing team that worked on the application, while others prefer an external team.
In either case, it is important to have good test scenarios and documentation. That way whoever picks up this task won’t need extra training and will understand the current situation.
Depending on the timeline, the number of developers, and the level of testing coverage required, we can determine the resources we need. During the project we can add more testers or developers if needed, as team velocity becomes clearer after a few sprints.
During test preparation, we perform all the activities needed to be able to execute testing. This is when the general testing objectives are transformed into test scenarios.
As mentioned earlier, we usually start test prep once a ticket is picked up by the developer. In this step, we analyze what we will test by reviewing the requirements in more detail and noting down any questions we might have for the product owner.
For example, suppose we have a requirement that a login button needs to be added, but we have no details about its location, font, or colour. These questions will be addressed to the product owner.
When all the questions are answered, the tester defines the testing plan.
The tester starts creating the test scenarios in the test management tool, in the agreed format: adding preconditions, keeping scenarios reusable, tagging by testing type, and so on.
We also make sure that testers review each other’s scenarios. This prevents diverging formats and helps increase coverage, as application knowledge can vary between testers. As a test prep mindset, we always try to make the scenarios as reusable as possible, so that when something changes we only have to update one test case rather than all the related ones.
Once the ticket is in the test column, the tester can create the test run. It contains the feature scenarios created during test prep, plus any existing test cases that need to be re-executed to verify that the changes have no impact on existing functionality.
Once a test run is created, the tester runs the tests step by step, comparing expected results with actual results. Whenever the actual result differs from the expected one, a defect is documented and tracked in a software development tracking tool such as Jira.
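The expected-versus-actual loop can be sketched as follows. The class names, and the idea of collecting mismatches as defect descriptions rather than stopping at the first failure, are illustrative; in practice the run lives in a test management tool and the defects in Jira.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of a test run: each step pairs an expected and an
// actual result, and mismatches are collected as defect descriptions.
// All names are hypothetical.
class TestRun {
    static class Step {
        final String name;
        final String expected;
        final String actual;
        Step(String name, String expected, String actual) {
            this.name = name;
            this.expected = expected;
            this.actual = actual;
        }
    }

    // Runs every step and returns one defect description per mismatch,
    // so the whole run completes before anything goes back to development.
    static List<String> execute(List<Step> steps) {
        List<String> defects = new ArrayList<>();
        for (Step s : steps) {
            if (!s.expected.equals(s.actual)) {
                defects.add(s.name + ": expected '" + s.expected
                        + "' but got '" + s.actual + "'");
            }
        }
        return defects;
    }
}
```

Executing the whole run before handing back defects mirrors the process described later: list everything first, then move the ticket back for fixing.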
Software Testing – Documenting Defects
When we create a defect, we add as much information as possible, such as database entries, logs, and screenshots, so that the developer doesn’t waste extra time investigating.
The tester also sets the priority on the defect, so the developer and the product owner can see whether the issues are major or minor and decide on the next steps.
The handling of defects depends on the agreed process. We usually execute the whole test run and list all defects before moving the ticket back in development for defect fixing.
Once the developer fixes the defects, all feature tests are rerun, even the initially passing ones, since any new change invalidates the earlier results. To determine whether a story meets the expected quality, some metrics are agreed upon.
For example, we may say that a story needs at least 90% of its test results set to pass and no open priority 1 or 2 defects before QA can sign it off. Once we have the test execution results, we assess them against these quality metrics, document the outcome in a test report, and communicate the functionality’s quality and readiness to the stakeholders.
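The sign-off check described above can be expressed as a small helper. The 90% and priority thresholds are the example values from the text, and in practice would be configurable per project:

```java
// Illustrative quality gate: sign-off requires at least 90% of executed
// tests passing and no open priority 1 or 2 defects. Thresholds are the
// example values discussed in the text.
class QualityGate {
    static boolean canSignOff(int passed, int total, int[] openDefectPriorities) {
        if (total == 0) return false; // nothing executed, nothing to sign off
        for (int p : openDefectPriorities) {
            if (p <= 2) return false; // any open P1/P2 defect blocks sign-off
        }
        double passRate = 100.0 * passed / total;
        return passRate >= 90.0;
    }
}
```

Encoding the gate as a simple function makes the sign-off decision repeatable and easy to report alongside the test results.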
In the end, each project is different. The software testing approach differs from client to client and from project to project. When we establish how testing will be done, we always communicate with the client, both to suggest what we think is the best testing solution and to understand their expectations regarding quality. Some projects need more focus on the user interface, others on the backend, and some on both. The development and testing approach at CodeFactory will adapt accordingly.
If you would like to discuss software testing for your project contact us.