
May 20, 2012

Testing Standard or COTS Systems – How Hard Could it Be?

Most testers work in organizations in which the goal is to develop and maintain customized IT systems for a given business. Many of the methodologies and much of the literature on testing are centered on this reality.

However, there is another part of the IT industry in which general, ‘standard’ products such as ERP systems, financial systems, telecommunications systems and security applications are developed. These are products which customers install and use in their own IT environments. This kind of software is often called COTS, which stands for ‘commercial off-the-shelf’.

It is interesting for testers to know more about how a product company works, partly because we all come into contact with standard products, whether as users or because we integrate them into our IT environments, but also because product companies are often under intense market pressure and are forced to constantly push themselves to the utmost. Because of this, many product companies serve as laboratories for new methods.

This article will outline the special challenges you might face when working with the testing of standard products.

Technical environment

The defining characteristic of standard products is that they can be installed by many customers, in many different environments, with different requirements, and perhaps in many countries, time zones and locales. Customers are likely to have extensive and varied expectations of new features and bug fixes.

Put simply, a product company’s goal is to survive and maximize profits. There is very little room to work on anything that will not pay for itself directly, and the view of software quality is often crass: achieve quality just good enough not to lose customers, even with limited resources.

 

Multiple concurrent versions

To remain competitive and keep customers satisfied, a product company has to pursue long-term strategic development while also helping existing customers with bug fixes and selected new features. Customers are often reluctant to make major upgrades, so a supplier may be required to maintain even quite obsolete versions.

It is common for product companies to work with three types of releases: major development releases, smaller releases containing both bug fixes and new features, and small, quick releases that address problems which have arisen.

Testing of the various kinds of releases differs significantly. The larger development releases are usually run as projects in which time is set aside to prepare and carry out extensive testing. For the smaller releases, time is limited, so a risk-based selection is made of which test cases to run. Developers may be tasked with an impact analysis of the changes, pinpointing areas that may be affected and must be retested. Of course, one can never be certain of the results of such an analysis, so it is advisable to maintain a core set of test cases and never release the product without running them.

Maintaining multiple parallel lines of development puts special requirements on the test environment. It must be possible to have multiple versions installed at once, or it must be very quick to adjust the environment. It also places heavy requirements on test case management: there must be a set of regression test cases for the functionality of every existing line of development, even the older versions. A common but unwieldy solution is to keep the test specifications in Excel spreadsheets with one tab per release, or to otherwise mark which test cases are valid for a particular release.
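
A minimal sketch of that tagging idea, in Python, with purely illustrative test case IDs and release numbers:

    # Each test case carries the set of releases it applies to.
    TEST_CASES = [
        {"id": "TC-001", "title": "Invoice totals", "releases": {"4.0", "4.1", "5.0"}},
        {"id": "TC-002", "title": "Legacy export format", "releases": {"4.0", "4.1"}},
        {"id": "TC-003", "title": "New approval workflow", "releases": {"5.0"}},
    ]

    def cases_for_release(release):
        """Return the regression set that is valid for the given release."""
        return [tc for tc in TEST_CASES if release in tc["releases"]]

    print([tc["id"] for tc in cases_for_release("4.1")])  # ['TC-001', 'TC-002']

However the tagging is stored, the important thing is that the valid regression set for any maintained release can be produced quickly.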

Various parameters and data

A standard product has to fit many customers and could be used in many different ways. A common solution is to let the product’s functionality be controlled by a number of parameters that are set by the customer. The parameters can control things like language, currency, calculation methods, security profiles and permission hierarchies. The parameters can be stored in operating system settings, startup files or databases.

For the tester this poses a challenge. You must have a strategy for which sets of parameters to test. There is no way to test all combinations, so you must choose wisely. A good approach is to gather information about the customers so that the combinations actually used by real customers get the most testing.
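
One well-known way to keep the number of combinations manageable is pairwise (all-pairs) testing: every pair of parameter values appears in at least one test case, while the total number of cases stays far below the full combinatorial product. The sketch below is a simple greedy implementation in Python; the parameter names and values are invented purely for illustration.

    from itertools import combinations, product

    # Invented parameter dimensions for illustration.
    PARAMETERS = {
        "language": ["en", "sv", "de"],
        "currency": ["USD", "SEK", "EUR"],
        "calculation": ["standard", "advanced"],
        "security_profile": ["basic", "strict"],
    }

    def uncovered_pairs(combo, names, covered):
        """Pairs of parameter values in this combination that are not yet covered."""
        pairs = set()
        for (i, a), (j, b) in combinations(enumerate(combo), 2):
            pair = ((names[i], a), (names[j], b))
            if pair not in covered:
                pairs.add(pair)
        return pairs

    def pairwise_selection(parameters):
        """Greedily pick combinations until every pair of values is covered."""
        names = list(parameters)
        all_combos = list(product(*parameters.values()))
        covered, selected = set(), []
        while True:
            best = max(all_combos, key=lambda c: len(uncovered_pairs(c, names, covered)))
            gain = uncovered_pairs(best, names, covered)
            if not gain:
                break
            covered |= gain
            selected.append(dict(zip(names, best)))
        return selected

    cases = pairwise_selection(PARAMETERS)
    print(f"{len(cases)} test cases cover all pairs, instead of 36 full combinations")

Customer information can then be layered on top, for example by always including the exact configurations of the largest customers in addition to the generated set.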

If the product has a database that customers populate with data themselves, it is important to analyze which combinations of data need to be tested. Even a moderately complex data model can produce an enormous number of possible combinations. With a well-thought-out approach to which data to test, many combinations can be covered with a limited number of test cases.

Even if you have chosen your sets of test data well, it is likely that unexpected problems will arise when a new customer fills the system with data that is not similar to what existing customers are using. If you have good cooperation with a customer, you can ask for access to the customer’s database, after the sensitive data has been deleted or anonymised, and use this data in future testing.
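
As a sketch of how such an export might be anonymised before being reused as test data, the following assumes a CSV export and a handful of sensitive column names; a real product would have its own schema and tooling.

    import csv
    import hashlib

    # Assumed sensitive columns; the real customer schema will differ.
    SENSITIVE = {"name", "email", "phone"}

    def anonymise(value):
        """Replace a sensitive value with a stable but unreadable token."""
        return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

    def anonymise_export(source, target):
        """Copy a CSV export of customer data, masking the sensitive columns."""
        with open(source, newline="") as fin, open(target, "w", newline="") as fout:
            reader = csv.DictReader(fin)
            writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                for column in SENSITIVE & set(row):
                    row[column] = anonymise(row[column])
                writer.writerow(row)

    # anonymise_export("customer_export.csv", "test_data.csv")

Using a one-way hash means identical values map to identical tokens, so relationships between rows stay intact, which is often what makes real customer data valuable as test data in the first place.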

Testing that the product works in all environments

A standard product usually has to work on many different platforms. Testing must ensure that the product works on all supported operating systems (both server side and client side). It may also be necessary to test different versions of the third-party products used by the system, such as database managers, web browsers and communications programs. Presumably the company has specified which environments are supported and has rules for when to phase out obsolete versions of operating systems and third-party products.

If testing has to be carried out on many different platforms, you may have to spend a lot of time installing different operating systems and other products. You can save a lot of time by using virtualization to create virtual machines for the various configurations you need to test.
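
As a sketch of how the supported-platform matrix might drive such an environment, the following assumes two hypothetical wrapper scripts, provision_vm.sh and run_suite.sh, around whatever virtualization tooling and test runner the organization actually uses; the platform names are illustrative.

    from itertools import product
    import subprocess

    # Illustrative support matrix; the real one comes from the product's
    # list of supported platforms.
    OPERATING_SYSTEMS = ["windows-server", "rhel"]
    DATABASES = ["oracle", "sqlserver"]
    BROWSERS = ["ie", "firefox"]

    for os_name, database, browser in product(OPERATING_SYSTEMS, DATABASES, BROWSERS):
        vm_name = f"test-{os_name}-{database}-{browser}"
        # Both scripts below are hypothetical placeholders.
        subprocess.run(["./provision_vm.sh", vm_name, os_name, database, browser], check=True)
        subprocess.run(["./run_suite.sh", vm_name], check=True)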

Customer Involvement

When developing a system for a specific business, the final test that determines whether you have succeeded is usually acceptance testing. When you create a new release of a standard product, there is no single customer who can perform acceptance tests. To achieve a similar quality control there are essentially two options: distribute the product to a large, unidentified number of customers and let them perform what is often called beta testing, or reach an agreement with one or a few customers whom you invite to participate in a controlled test or deployment. Such a pilot customer gains by being first with the newest release and by having some influence on the functionality, and this method often gives better and more reliable feedback.

Regardless of which option you choose, testers have to engage in the customer testing: following up with and supporting the beta testers, or cooperating closely with the pilot customers, in order to ensure that customers will favor the product and to keep track of all proposed improvements.

Another way to take advantage of customers is to continually analyze the bug reports coming in from live use and add more testing in areas where there are a lot of problems.
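
A minimal sketch of that kind of analysis, assuming the bug tracker can export incoming reports as a CSV file with a component column (both the export format and the column name are assumptions):

    import csv
    from collections import Counter

    def hotspots(bug_export, top=5):
        """Count incoming bug reports per product area from a CSV export."""
        with open(bug_export, newline="") as f:
            counts = Counter(row["component"] for row in csv.DictReader(f))
        return counts.most_common(top)

    # Areas with the most field-reported problems are candidates for
    # extra regression tests in the next release.
    # print(hotspots("bug_reports.csv"))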

Delivery

Some standard products require a great deal of customization and development before they can be used by an organization. In such cases it is common for each client installation to become a separate project. The project can be run by the same company that sells the product or by a completely different company. Testers working in a customer project often work much as they would in contract development for a particular customer. Integration testing, functional testing, system testing and acceptance testing are key activities.

If the company responsible for the client project runs several similar projects, those projects can benefit from each other’s experiences and even reuse test cases from previous projects.

Different companies and markets

There are many kinds of product development companies, and each has its own challenges. A software supplier may need to change its testing process many times during a product’s life cycle. A new product that has only just begun to be evaluated by a few customers cannot be tested with the same long-term approach as a mature product that has reached a broader market.

In any event, it is both demanding and rewarding to work as a tester in a product company. You have to be technically proficient, understand the customers’ reality, become a specialist in the product, read market trends, and constantly learn and improve your methods.
