SSD Unit 5

Unit 5

Learning Outcomes Achieved

  1. Critically analyse development problems and determine appropriate methodologies, tools and techniques to solve them
  2. Design, develop and adapt programs to produce a solution that meets the design brief, and critically evaluate the solutions that are produced

Codio: Equivalence Partitioning

Although I use testing extensively in my daily work, I had not encountered equivalence partitioning before. I can see it being extremely useful in scenarios where a range of values needs to be tested but only a few test cases can be created. After testing and reading the code in Codio, I decided to try it out on my own by creating a (manual) example in Python, following the strategy suggested by SoftwareTestingHelp (2021). The best example I could think of is testing a function that implements the circuit breaker pattern in a microservice. The circuit breaker pattern is a rate-limiting strategy for microservices that prevents cascading failures by returning immediate errors once a threshold of consecutive errors is encountered (Richardson, n.d.). Practically speaking, a minimal implementation of this function would take a failure percentage as input and print a different message, returning a corresponding status code. This makes it an ideal test candidate because percentages cover a large range of values (0-100), yet there are distinct cases for different sub-ranges and only a few values can reasonably be tested. My sample code is provided below:

def check(failure_rate):
    # Return an integer to represent an error code, for convenience.
    # There is a less obvious case to consider: a rate that is less than 0.
    # An input of that value should be rejected, because it is not logically sound.
    if failure_rate < 0:
        print("Invalid value received.")
        return -1
    elif failure_rate == 0:
        print("No failures detected")
        return 0
    elif 0 < failure_rate < 20:
        print("Warning: some failures detected.")
        return 1
    elif 20 <= failure_rate < 50:
        print("Warning: moderate failures detected.")
        return 2
    elif failure_rate >= 50:
        print("Failures are >= 50. No requests will be forwarded for a brief period")
        return 3

def test_failures():
    """
    Equivalence partitioning is applied here.
    Based on the check() function, there are five partitions:
    x < 0 (invalid input), x == 0, 0 < x < 20, 20 <= x < 50, and x >= 50.
    One representative value per partition is enough, so only five test cases are needed.
    Note that it is not strictly necessary to test x == 20 or x == 50 for this technique;
    each partition only needs one value that produces that range's output
    (testing the exact boundary values is the job of boundary value analysis).
    Anything above 20 but less than 50 can be used, for example.
    """
    assert check(-20) == -1
    # 0 is a special case: it is a single-value partition rather than a range,
    # so it needs a test case of its own.
    assert check(0) == 0
    assert check(5) == 1
    assert check(24) == 2
    assert check(50) == 3

test_failures()
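
For context, the check above only covers the threshold decision. A full circuit breaker, as Richardson (n.d.) describes it, also keeps state so that requests are rejected immediately while the breaker is "open". The sketch below is my own minimal illustration of that idea; the class name, threshold and timeout values are assumptions for demonstration, not taken from the article:

import time

class CircuitBreaker:
    """Minimal sketch: open the circuit after too many consecutive failures."""

    def __init__(self, failure_threshold=5, reset_timeout=30):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout  # seconds to stay open before retrying
        self.consecutive_failures = 0
        self.opened_at = None

    def call(self, operation):
        # While open, fail fast instead of forwarding the request.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("Circuit open: request rejected immediately.")
            self.opened_at = None  # timeout elapsed, allow a trial request
        try:
            result = operation()
        except Exception:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.opened_at = time.time()
            raise
        else:
            self.consecutive_failures = 0
            return result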

I will apply this new knowledge in my place of work whenever I need to test a range of values but can only write a limited number of test cases, and for the implementation assignment, I will use it to develop test cases. This strategy can be applied to both unit and integration tests: for example, equivalence partitioning can be used to test the functionality that warns flight surgeons of problematic health values, and it could also be used to test that INSERT queries work with values that might be close to a data type limit (e.g., I intend to store log times in UNIX format, which could become problematic if the timestamp becomes too large for a column’s datatype).
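
To sketch what that integration-level use might look like, the partitions for a log timestamp could be drawn around the column's limits. The function name and the 32-bit signed limit below are illustrative assumptions rather than the assignment's actual schema (2,147,483,647 is the maximum value of a signed 32-bit integer, the usual cause of the "year 2038" problem):

INT32_MAX = 2_147_483_647  # assumed column limit for this illustration

def validate_log_time(unix_time):
    """Return True only for timestamps that fit the assumed column type."""
    if unix_time < 0:
        return False  # before the UNIX epoch: rejected
    if unix_time > INT32_MAX:
        return False  # would overflow a 32-bit column
    return True

def test_log_time_partitions():
    # One representative value per equivalence partition.
    assert validate_log_time(-1) is False             # partition: negative timestamps
    assert validate_log_time(1_635_724_800) is True   # partition: values within range
    assert validate_log_time(INT32_MAX + 1) is False  # partition: values beyond the limit

test_log_time_partitions()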

As a result of this exercise, I was able to achieve learning outcomes 2 and 3, because I identified testing as an important part of producing quality software and then developed an approach to writing improved tests, which can be used to confirm that code functions according to the design brief.

References

Richardson, C. (n.d.) Pattern: Circuit Breaker. Available from: https://microservices.io/patterns/reliability/circuit-breaker.html [Accessed 1 November 2021].

SoftwareTestingHelp. (2021) What Is Boundary Value Analysis And Equivalence Partitioning? Available from: https://www.softwaretestinghelp.com/what-is-boundary-value-analysis-and-equivalence-partitioning/ [Accessed 1 November 2021].

Cyclomatic Complexity’s Relevance

Software quality evaluation has grown in importance as a result of modern development paradigms. Rapid release cycles driven by agile methodology, and more frequent deployments enabled by continuous integration/continuous delivery, make it necessary to prioritize automatically guaranteeing code quality as it is written. Cyclomatic complexity is a long-standing metric which requires reassessment in today’s software landscape. I argue that cyclomatic complexity will remain a metric worth considering; however, it provides insufficient insight into overall code quality: it can only be used to reason about the testability of code.
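
To illustrate the link to testability: McCabe's cyclomatic complexity counts a function's decision points plus one, and that number equals the count of linearly independent paths, i.e. the number of test cases needed for basis path coverage. The function below is my own invented example, not drawn from either report:

def classify(reading):
    # Two decision points (the two if statements), so cyclomatic complexity = 2 + 1 = 3.
    if reading < 0:
        return "invalid"
    if reading > 100:
        return "critical"
    return "normal"

# Basis path testing therefore needs three cases, one per independent path.
assert classify(-5) == "invalid"
assert classify(150) == "critical"
assert classify(42) == "normal"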

Industry is somewhat divided on the topic. NASA (2020) recently endorsed cyclomatic complexity as a measurement metric, recommending that “safety-critical” components have a maximum cyclomatic complexity of 15 to improve testability. However, certain researchers within the report argued that other controls should be enforced to improve code quality, such as using static code analysis and minimizing the leniency of compiler checks. A researcher at SonarSource (a company known for producing static code analysis tools) argued that code complexity occurs not only at the level of control flow but also at a linguistic level (Campbell, 2021). The author argues that modern language features, such as lambda functions, can significantly improve the understandability of code compared to the use of more primitive constructs; however, cyclomatic complexity would not adjust its rating despite this increase in clarity.
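
As a concrete illustration of Campbell's point, the two functions below behave identically and contain the same decision points, so cyclomatic complexity rates them the same, even though the second arguably reads more clearly. The example is my own, not taken from the SonarSource paper:

def count_errors_explicit(log_lines):
    # One loop and one condition: the same decisions as the version below.
    count = 0
    for line in log_lines:
        if line.startswith("ERROR"):
            count += 1
    return count

def count_errors_concise(log_lines):
    # The generator expression still iterates and filters, so the cyclomatic
    # complexity score is unchanged despite the more declarative style.
    return sum(1 for line in log_lines if line.startswith("ERROR"))

sample = ["ERROR timeout", "INFO ok", "ERROR refused"]
assert count_errors_explicit(sample) == count_errors_concise(sample) == 2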

As shown above, although low cyclomatic complexity implies safer code, it does not provide an absolute guarantee that the code written is of high quality. Cyclomatic complexity will remain a relevant metric because code with fewer branches will always be easier to interpret; however, it will likely need to be combined with other measures to guarantee clean code.

References

Campbell, G. (2021) Cognitive Complexity: A new way of measuring understandability. Web: SonarSource.

NASA. (2020) Cyclomatic Complexity and Basis Path Testing Study. Web: NASA.