Software Testing Types
Testing your code is a key component of the Software Development Lifecycle. If you, like me, came from a networking background, you may not be aware of the sheer number of types of tests that exist. In this blog post I will give a high-level overview and some key characteristics of the most common types of tests that we use, are introducing, or plan to use in the Nautobot ecosystem.
Unit Tests
Unit tests are by far the most commonly implemented. A unit test exercises a specific section or part of code; more often than not, that “part” is a single function. One example would be a test ensuring that your function converting MAC addresses from the format aaaa.bbbb.cccc to aa:aa:bb:bb:cc:cc works as intended.
A great example of that exact unit test can be found in my prior blog here.
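To make that concrete, below is a minimal sketch of what such a unit test might look like. The function name mac_to_colon_notation and its behavior are hypothetical, shown only to illustrate the shape of a unit test:

```python
# Minimal, hypothetical sketch of a unit test for a MAC address
# conversion function. Names are illustrative, not from a real project.


def mac_to_colon_notation(mac: str) -> str:
    """Convert a MAC from aaaa.bbbb.cccc format to aa:aa:bb:bb:cc:cc."""
    hex_digits = mac.replace(".", "")
    return ":".join(hex_digits[i : i + 2] for i in range(0, len(hex_digits), 2))


def test_mac_to_colon_notation():
    # The test exercises one small unit of code and nothing else.
    assert mac_to_colon_notation("aaaa.bbbb.cccc") == "aa:aa:bb:bb:cc:cc"
```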
Some characteristics of unit tests include:
- Quickest of tests to run – Unit tests should be written so that they take very little time to run.
- Provide specific feedback – Because unit tests test a small section of code, feedback is typically very precise.
- Easy to write – Out of the many types of tests, unit tests are often the easiest to write because they deal with a small section of code.
- Do not interact with dependencies – Unit tests should test only the piece of code they are focused on. They should not interact with a web server, database, etc.
- Should be able to run simultaneously – Because unit tests have no real dependencies, they can, and should, be run in parallel.
Real-world unit tests can be found in the pyntc repository. Note the use of both mock and patch to ensure that these tests do not have any dependencies.
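As a rough illustration of that pattern, the sketch below uses unittest.mock.patch to swap out a device connection so the test never touches real hardware. The DeviceConnection class and get_device_uptime helper are hypothetical stand-ins, not pyntc's actual API:

```python
# Hypothetical sketch: patching a dependency out of a unit test so it
# never contacts a real device. All names here are illustrative only.
from unittest.mock import patch


class DeviceConnection:
    """Stand-in for a dependency that would open a real SSH session."""

    def __init__(self, host: str):
        self.host = host

    def send_command(self, command: str) -> str:
        raise RuntimeError("would contact real hardware")


def get_device_uptime(host: str) -> str:
    """Hypothetical helper that asks a device for its uptime."""
    return DeviceConnection(host).send_command("show version | include uptime")


@patch(f"{__name__}.DeviceConnection")
def test_get_device_uptime(mock_device_cls):
    # patch() replaces the class with a MagicMock, so the "device"
    # responds instantly and the test stays dependency-free.
    mock_device_cls.return_value.send_command.return_value = "uptime is 4 weeks"
    assert get_device_uptime("10.0.0.1") == "uptime is 4 weeks"
```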
Integration Tests
Integration tests are also very common. As the name suggests, their main purpose is to test the integration between separate modules of a given application or program. An example of an integration test can be found in Nautobot here. That test exercises the integration between the web UI and the back end to ensure that when someone logs in, the login succeeds. Another example, more relevant to the network world, would be if the tests found here in pyntc used an actual device rather than a “mock”. You could then call those integration tests, since they have a dependency (the switch) and rely on it.
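For a feel of what such a test can look like, here is a minimal sketch using Django's test client (Nautobot is built on Django). The credentials and class names are illustrative assumptions, not Nautobot's actual test code:

```python
# Hypothetical sketch of a login integration test using Django's test
# client. Credentials and class names are illustrative assumptions.
from django.contrib.auth import get_user_model
from django.test import TestCase


class LoginIntegrationTestCase(TestCase):
    def setUp(self):
        # The test database is a real dependency: a user must exist in it.
        get_user_model().objects.create_user(
            username="testuser", password="testpass123"
        )

    def test_login_succeeds(self):
        # client.login() drives the real authentication backend against
        # the database instead of mocking it, integrating the two modules.
        self.assertTrue(self.client.login(username="testuser", password="testpass123"))
```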
Some characteristics of integration tests include the following:
- Typically use the real dependency – Integration tests more often than not test using an actual dependency, e.g., database, switch, web server, etc.
- Difficult to write – Compared to unit tests, integration tests can be much harder to write, as you now have to account for interactions between modules.
- Can be time-consuming – Because integration tests typically use real dependencies, the tests take longer to run. You may have to wait for an API call to return data or for an HTTP server to start.
- May not be able to be run in parallel – Because integration tests often depend on other modules or code, they are typically run in succession rather than in parallel.
Regression Tests
Regression testing is more of a methodology than a specific test encompassing a particular part of a program or application. The idea is to test all parts of your code whenever a change is made, regardless of whether the change affected that part of the code. Because regression testing is more a methodology than testing a particular piece of code, both the aforementioned unit and integration tests can be considered regression tests to some extent. Let me give you an example. I recently opened a pull request to add an “Export” button to the Nautobot ChatOps project. When I created that pull request, the CI/CD pipeline ran through all of the existing unit and integration tests to ensure that the functionality of the plugin was not broken by the code I added. I also needed to add tests for the code I added, which later on could be considered regression tests for the next person who wants to add a feature to the plugin.
Some characteristics of regression testing include:
- Time-consuming – Regression testing typically means running the whole test suite even when only a small part of code may have changed.
- Repetitive task – The same tests need to be run over and over again whenever changes to the code are made.
- New tests for code changes – As new features or bug fixes are introduced into a project, tests need to be created to account for that.
Load Tests
The purpose of load tests is to ensure that your application can handle the number of users, connections, and interactions it will receive in a production environment. While there are currently no official load tests in the Nautobot repo, we do plan on adding them using the Python library Locust. One example test might have 100 concurrent users hit the Nautobot landing page to see how it handles the load. With that load test we could look at page loading times and how long any interactions with the database took. If we increased those 100 users to 1,000, we could run our test again and see how Nautobot handles that.
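A sketch of what such a Locust test could look like follows; the user class, wait times, and page path are assumptions for illustration, not an actual Nautobot load test:

```python
# Hypothetical sketch of a Locust load test against a landing page.
# The class name, wait times, and path are illustrative assumptions.
from locust import HttpUser, between, task


class LandingPageUser(HttpUser):
    # Each simulated user pauses 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task
    def load_landing_page(self):
        # Locust records response times and failures for every request.
        self.client.get("/")
```

Running locust -f locustfile.py --users 100 --spawn-rate 10 --host https://nautobot.example.com (the host being a placeholder) would simulate the 100 concurrent users; raising --users to 1000 reruns the same scenario under heavier load.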
Some characteristics of load testing include:
- Can be, but are not necessarily, stress tests – Stress testing is typically done with the intent to reach a point of failure. Load tests can result in a failure, but that is not the goal.
- Can be difficult to account for all types of configurations – Customer X may run your application on a 2-core processor and customer Y may run it on an 8-core processor. When load testing, you need to account for the different hardware, software, and security configurations on a given machine.
User Acceptance Tests
User acceptance tests are some of the last tests performed on an application. They differ from the aforementioned tests because, while those can easily be done programmatically, user acceptance tests take more work to automate. The goal of these tests is to ensure that the created software meets the goals of the customer or end user who will be using the application. Many times there can be a disconnect between what the developer creates and what the end user needs. Here at NTC we take advantage of user acceptance tests quite often. If we are working in a professional services agreement, we are always getting feedback from the customer. If we are developing an open-source plugin, many hands here at NTC touch it and give feedback before it is released. While automating user acceptance tests can be more difficult than unit tests, one great library that provides a good framework is Selenium. It provides a programmatic way to interact with web browsers, which allows us to create reproducible and traceable tests to ensure we are meeting the customers’ needs.
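As a minimal sketch of the kind of browser automation Selenium enables, consider the test below. The URL, form field names, and expected title are hypothetical, not taken from a real Nautobot test:

```python
# Hypothetical sketch of a Selenium-driven acceptance test. The URL,
# element locators, and expected title are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_user_can_log_in():
    driver = webdriver.Firefox()  # requires a local geckodriver install
    try:
        driver.get("https://nautobot.example.com/login/")
        # Drive the browser exactly as an end user would.
        driver.find_element(By.NAME, "username").send_keys("testuser")
        driver.find_element(By.NAME, "password").send_keys("testpass123")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        # After logging in, the user should land on the home page.
        assert "Home" in driver.title
    finally:
        driver.quit()
```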
When embarking on a user acceptance test journey, you may want to keep these things in mind as guidelines for developing good tests.
- Define the scope – What exact features are you testing?
- Constraints and assumptions – Before starting the tests, what are some assumptions and constraints? For example, are we only able to test on Windows 11 and not 10? Or maybe we can test only on a Linux system and not Windows.
- Risks – This can include things such as incomplete testing environments and components.
- Roles and responsibilities – Ideally you have multiple people doing user acceptance tests. You need to define what group (or individual) does what tests.
- Create the script for your tests – Define each step a user will take for a given test and document it properly.
Conclusion
Software testing is a huge subject. I’ve only briefly introduced you to some of the tests that exist out there. Hopefully this has intrigued you enough to take a little bit of time and do some research on the many other types that exist.
-Adam