What is static code analysis and technical debt?

29 September 2021

Software Testing

There are different kinds of tests that can be performed to check that an application’s software code is working properly. However, such tests are not suitable if the intention is to assess the performance of the application or how safe the code is. In such cases, the only option is to use specific tools or seek specialist consulting.

Static code analysis

Static code analysis helps to:

– Reveal non-functional issues in the application.

– Provide evidence of potential issues in the early stages of the software’s life cycle.

– Prevent potential issues in the early stages of the software’s life cycle.

It does this without being a load-testing or security tool.

An advantage of static code analysis is that it provides significant cost savings. By anticipating potential issues before they become a reality, the organisation can avoid the expense of correcting these issues. Let’s not forget that costs increase exponentially as the life cycle advances. That is, the later a defect is discovered, the more expensive it is to resolve.

This is corroborated by the good practices established in the TMMi model. Maturity levels 2 and 3 already call for identifying defects in the initial stages of the life cycle, because the cost of correcting a defect in production can be up to 70 times higher than if it had been identified at the start of the project.

Getting started with static code analysis

Before getting started with static code analysis, we need to classify the characteristics of the software code into health factors, listed below, which are subject to the applicable standards.

In general, code characteristics can be grouped as follows:

Reliability: The aim is to avoid unexpected behaviour. All possible cases must be handled, and operations that lead to indeterminate results must be avoided.

Efficiency: Resources are limited. We sometimes assume that everything can be solved by allocating more resources, but efficient use of existing resources is preferable.

Security: Information must be properly compartmentalised. Each user should only be able to access what they have permission to access.

Maintainability: The more complex and the more poorly documented the code, the more traumatic any change will be to implement.

Factors such as reliability, efficiency and security aim to avoid risks in the production environment, while maintainability and similar factors aim to give an idea of the cost of ownership of the software.
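As a toy illustration of the reliability factor above, the sketch below contrasts an operation with an indeterminate result (dividing by the length of a possibly empty list) with a version that handles every case explicitly. The function name and fallback value are assumptions chosen for the example.

```python
# Reliability rule, illustrated: manage all possible cases instead of
# performing operations that can produce indeterminate results.

def average(values):
    # An unreliable version would simply do sum(values) / len(values),
    # which raises ZeroDivisionError when the list is empty.
    if not values:               # handle the "no data" case explicitly
        return 0.0               # assumed fallback for this example
    return sum(values) / len(values)

print(average([2.0, 4.0]))       # → 3.0
print(average([]))               # → 0.0, instead of an unexpected crash
```

A static analyzer flags the unguarded division at analysis time, before the empty-list case ever occurs in production.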

It should also be kept in mind that each programming language has its own unique features and good practices. These translate into rules that are assigned to the health factors mentioned above. For instance, analysing Cobol code is different from analysing Java code. Even if they share some rules, other rules are specific to each language.

On the other hand, it needs to be decided whether it is better to perform a manual or automated analysis. The criteria for choosing one or the other are subject to aspects such as:

– The technology to be analysed

– The organisation’s preferred licensing method

– Establishing priorities, that is, the need for deeper analysis balanced against the need for a quicker response time.

In the case of manual code analysis services, no tool is necessary, but in-depth knowledge of the following is needed:

– The language and its good practices

– The client’s environment and context

In the case of automated analysis, a tool such as Cast, Kiuwan, SonarQube or PMD is needed.

Lastly, it should be noted that, even if the analysis is automated, the result is subject to interpretation. Some rules in certain tools generate false positives or are not applicable in specific environments.
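To make the idea of an automated rule concrete, here is a minimal sketch of how such a tool works internally, using Python’s standard `ast` module. The rule itself (flagging bare `except:` clauses, a common reliability rule in tools like the ones named above) and the `check_source` function are assumptions for illustration, not the API of any real analyzer.

```python
import ast

# A toy static-analysis rule: flag bare "except:" clauses, which can
# silently swallow unexpected errors (a typical reliability rule).
RULE_ID = "bare-except"

def check_source(source):
    """Parse the source and return (line, message) findings."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # A bare "except:" is an ExceptHandler with no exception type.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, f"{RULE_ID}: bare 'except:' hides errors"))
    return findings

sample = """
try:
    risky()
except:
    pass
"""

for line, msg in check_source(sample):
    print(line, msg)
```

Note that the code is analysed without being executed, which is precisely what makes it possible to run these checks early in the life cycle.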

Static code analysis case studies

When static code analysis is introduced in the life cycle, a change is produced in the development teams. Sometimes, bad practices are adopted simply because “it’s always been done this way”. In these cases, revealing the issue and explaining and justifying the correct way of doing things tends to have an immediate effect.

Some MTP studies on the efficiency and security of applications

MTP has carried out some studies on the correlation between production incidents and code quality:

For example, for a company in the utilities sector, it was determined that just 16 objects out of the 22,000 evaluated accounted for 80% of production incidents over the past year, and that these were by far the most complex and worst documented.

In another case, the improper release of resources was identified as the cause of repeated outages in a production application. This problem tends to recur, and it is best to identify it as early as possible.

Recently, in an application under development, it was found that database connections were not being reused, resulting in poor performance. Introducing a connection pool was recommended, leading to improved performance.
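The connection-pool fix mentioned above can be sketched as follows. This is a minimal illustration, not the implementation used in that project: `sqlite3` stands in for whatever database driver the application used, and the class name, pool size and `acquire`/`release` API are assumptions.

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal connection pool: open N connections once, then reuse them."""

    def __init__(self, dsn, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # Connections are created once, up front, instead of per request.
            self._pool.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self):
        return self._pool.get()      # blocks if all connections are in use

    def release(self, conn):
        self._pool.put(conn)         # return the connection for reuse

pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone())   # → (1,)
pool.release(conn)                           # reused by the next caller
```

The point the study made is visible here: without the pool, every request pays the cost of opening a fresh connection; with it, that cost is paid only once per slot.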

What is technical debt?

‘Technical debt’ is not exclusive to static code analysis. The concept describes any technical deficiency in a product in financial terms.

In this case, technical debt is defined as the development cost required to eliminate the risks that code quality poses in a production environment. To this we can add:

– The cost associated with the code’s complexity

– The lack of documentation

– The difficulty of testing, etc.

Depending on the tool used, technical debt can be stated in working days or as a monetary value. Regardless of how it is expressed, it is calculated from the following factors:

– A correction time assigned to every broken rule.

– A weighting of that time by a complexity factor associated with the non-compliant element.

Thus, two violations of the same rule will cost different amounts to correct depending on the complexity of the object in which they occur.
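The calculation described above can be sketched in a few lines. The correction times, rule names and complexity weights below are illustrative assumptions, not the values of any particular tool.

```python
# Technical debt = correction time per broken rule, weighted by the
# complexity of the non-compliant element. All figures are assumed.

BASE_FIX_MINUTES = {        # correction time assigned to each broken rule
    "bare-except": 10,
    "unclosed-resource": 30,
}

COMPLEXITY_WEIGHT = {       # weighting by the offending element's complexity
    "low": 1.0,
    "medium": 1.5,
    "high": 2.5,
}

def technical_debt_minutes(violations):
    """violations: iterable of (rule_id, complexity) pairs."""
    return sum(BASE_FIX_MINUTES[rule] * COMPLEXITY_WEIGHT[cx]
               for rule, cx in violations)

# Two violations of the same rule, in objects of different complexity,
# contribute different amounts of debt: 10*1.0 + 10*2.5 = 35.0 minutes.
print(technical_debt_minutes([("bare-except", "low"),
                              ("bare-except", "high")]))   # → 35.0
```

From this total in minutes, a tool can report debt as working days or, by applying an hourly rate, as a monetary value.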

 

Francisco Manuel López

Head of the QA Project at MTP
