
Code Metrics – Here’s your new code metric

November 29th, 2010

This is the second part of the article, written in March 2005 by Mark Miller, the chief architect of IDE tools and the author of the Maintenance Complexity code metric that ships in CodeRush. It has been updated to include maintenance complexity points for language elements introduced in newer language versions (C# 3.0, C# 4.0, VB 9.0, VB 10.0). Posted with his permission.

Mark Miller

Wednesday, March 02, 2005

Here’s your new metric

A while back I was whining about the low signal-to-noise ratio of existing source code metrics. Shortly thereafter I created a new code metric based on my complaints about the status quo.

My primary complaint is that each of the existing metrics provides only a tiny slice of the big picture. Fan-out tells you the number of calls from the current method out to other methods, while cyclomatic complexity tells you the number of decision points in a method. These metrics fail to convey the real essence of what makes code so challenging to maintain: the complexity of the code itself, all of it, all the way down to the smallest detail.

My secondary beef is that there are a number of metrics out there with questionable value (e.g., comment density is a particularly silly metric). These are metrics where consensus on ideal numbers is hard to reach. For example, is it better to have more comments in code, or fewer? Ultimately the value of a comment lies in its synchronicity with the code (which tends to erode over time), and in how well it reduces long-term maintenance issues without introducing noise, which can thwart readability. Comment synchronicity and noise levels are very challenging to detect, so metrics like comment density, which completely ignore these aspects, make me want to laugh and cry at the same time.

So in an effort to get a more complete picture of the actual time required to understand and maintain a given chunk of code, we created a new metric.

To some degree it combines elements of cyclomatic complexity and fan-out. Every element of the code has a weighted point value, all the way down to local variable declarations, assignment statements, expressions in for loops, unary operations, etc. In a few cases the point value changes based on context. We simply add up the points to get the final score for the method.
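To make the summing rule concrete, here is a minimal sketch in Python. The element kinds and weights below are invented for illustration; they are not the actual CodeRush point values, and the real metric works over C#/VB syntax trees rather than dictionaries:

```python
# Illustrative sketch of a weighted-points metric: every element gets a
# point value, and the method's score is the sum over the whole tree.
# Weights here are made up for the example, NOT the CodeRush values.

WEIGHTS = {
    "method": 0, "if": 2, "for": 2, "comment": 3,
    "local_declaration": 1, "assignment": 1,
    "unary_op": 1, "binary_op": 2, "return": 1,
}

def maintenance_complexity(node):
    """Sum the weight of a node plus the weights of all its children."""
    own = WEIGHTS.get(node["kind"], 1)  # default weight for unlisted kinds
    children = node.get("children", [])
    return own + sum(maintenance_complexity(c) for c in children)

# A toy tree for:  if (x > 0) { count = count + 1; }
tree = {"kind": "if", "children": [
    {"kind": "binary_op"},                      # x > 0
    {"kind": "assignment", "children": [
        {"kind": "binary_op"},                  # count + 1
    ]},
]}
print(maintenance_complexity(tree))  # 2 + 2 + (1 + 2) = 7
```

Because every element contributes, even a method with a single statement accumulates points from each declaration, operator, and expression it contains.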

For example, here are the point values I assigned to elements that form structure:

[Table: CodeRush Code Metrics, MC point values for structure elements]

One of the more controversial elements in this table is the Comment weighting of 3. This is equivalent to saying that on average, a comment adds to the cost of understanding code (e.g., at the very least it must be read, and its synchronicity must be maintained), and is about as costly to maintain as a Finally statement.

Note that most of these structural elements can contain child elements. For example, a for-loop can hold a complex block of code in addition to the complex expressions and initializers that determine the boundaries of the loop. So to determine a for-loop's total point value, you add the child point values (for the code iterated over by the for-loop) plus the points for the expressions that determine the lower and upper bounds of the loop. Many of the point values above are merely for the container.
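As a worked illustration of this containership rule, here is how a for-loop's total might be assembled. The individual weights are invented for the example, not the actual CodeRush values:

```python
# Hypothetical scoring of:  for (int i = 0; i < items.Count; i++) { total = total + x; }
# Weights are made up for illustration; the point is that the container,
# the bounds expressions, and the body all contribute to the total.

FOR_CONTAINER = 2   # the for-loop element itself
init = 1            # int i = 0        -> local declaration
condition = 2       # i < items.Count  -> relational expression
increment = 1       # i++              -> unary increment
body = 1 + 2        # assignment (1) containing one binary operation (2)

total = FOR_CONTAINER + init + condition + increment + body
print(total)  # 2 + 1 + 2 + 1 + 3 = 9
```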

Points for elements affecting program flow:

[Table: CodeRush Code Metrics, MC point values for flow elements]

Of these, Goto and Return were the hardest to assign values to. In general, Goto statements are really bad in an object-oriented world. They can be notoriously challenging to follow and make code harder to read; in fact, I know of at least one obfuscation tool that introduces Goto statements into the code. However, I also know developers who are comfortable with them. So the value here is a compromise between my personal bias against Goto statements and the comfort I know some of you have with this construct.

Ultimately I was satisfied with a value of one for Return, because this statement is easy to assimilate, and if there is any complexity associated with it (e.g., through a complex expression returned as the value of a function), then the weight of the scoring will be contributed by the expression.

Points for expressions and operations:

[Table: CodeRush Code Metrics, MC point values for expressions and operations]

Note that some of these entries are more specific versions of other entries. For example, a unary pre-increment (e.g., “++count” in C#) is a specific version of a unary operator. When this happens, only the points for the specific language element are used to calculate the total. Also, many of these expression elements share containership properties with the structural elements introduced in the first point table.
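The "most specific match wins" rule can be sketched as a two-level lookup. The element names and weights below are hypothetical, not the actual CodeRush tables:

```python
# Sketch of the specificity rule: a pre-increment is also a unary operator,
# but only the more specific entry's points are counted (never both).
# Entries and weights are illustrative, NOT the real CodeRush values.

SPECIFIC = {"pre_increment": 1, "post_increment": 1}
GENERAL  = {"unary_op": 2}

def points_for(kind, general_kind):
    # Prefer the specific entry; fall back to the general category.
    if kind in SPECIFIC:
        return SPECIFIC[kind]
    return GENERAL.get(general_kind, 0)

print(points_for("pre_increment", "unary_op"))  # 1, not 1 + 2
print(points_for("bitwise_not", "unary_op"))    # 2 (general fallback)
```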

For example, an Inversion operation can act upon the result of a Logical operation, which can contain Relational expressions that hold method references.

So by now you can see that the purpose of Maintenance Complexity is to give you a picture of how much code you have in a given member. MC can detect ultra-complex expressions in methods with short line counts, no decision points, and no calls to outside methods (methods that cyclomatic complexity, fan-out, fan-in, and line-count metrics would all fail to highlight as potentially problematic).
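To see why the path-counting metrics miss this case, consider two single-path functions. Counting operator nodes with Python's standard `ast` module is only a rough stand-in for MC's expression-level points, but it shows the effect:

```python
import ast

# Two functions, each with exactly one execution path (cyclomatic
# complexity 1) and no calls out to other methods. Line count and fan-out
# see them as equals; an expression-level metric does not.

simple = "def f(a, b):\n    return a + b\n"
dense  = ("def g(a, b, c, d):\n"
          "    return ((a * b + c / d) ** 2 - (a - b) * (c + d)) % (a * d - b * c + 1)\n")

def operator_count(src):
    """Count operator nodes as a crude proxy for expression complexity."""
    tree = ast.parse(src)
    return sum(isinstance(n, (ast.BinOp, ast.UnaryOp, ast.BoolOp, ast.Compare))
               for n in ast.walk(tree))

print(operator_count(simple))  # 1
print(operator_count(dense))   # far higher, despite the same single path
```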

Ideal Scores

These ideal scores will give you a rough idea of how to interpret a score for a given member.

[Table: CodeRush Code Metrics, ideal MC score ranges]

Even in our most complex classes, we try to keep MC below 300, and ideally below 100.

[Screenshot: Refactor! Code Metrics tool window]

Note that forms with large numbers of components will cause IDEs like Visual Studio to generate a huge form-building method (e.g., more than 20,000 lines in the InitializeComponent method alone). These giant methods will produce correspondingly large MC scores. You can choose to ignore these high MCs on the grounds that a human being will never maintain these methods by hand, or you can see them as an indicator of a potential performance problem. Forms with hundreds of components take longer to create, and can often be optimized by grouping contextually related controls into distinct UI elements created on demand. Doing this will also improve performance when the form is localized into target locales.
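The create-on-demand idea above is just lazy initialization applied to groups of controls. This Python sketch is language-agnostic and hypothetical (the class and names are not part of CodeRush or WinForms); in C# the same pattern would typically live behind a property on the form:

```python
# Lazy-initialization sketch: instead of building every control when the
# form loads, group related controls and construct each group only the
# first time it is shown. All names here are illustrative.

class LazyGroup:
    def __init__(self, factory):
        self._factory = factory   # callable that builds the control group
        self._controls = None     # nothing built yet

    def show(self):
        if self._controls is None:        # built only on first use
            self._controls = self._factory()
        return self._controls

advanced_panel = LazyGroup(lambda: ["grid", "filter_box", "export_button"])
# Form load is cheap: nothing built until the panel is actually shown.
print(advanced_panel.show())
```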

Try It Now

Maintenance complexity calculation is built into the DXCore (as is cyclomatic complexity), but it's just as easy to add your own custom metrics. See what code metrics features CodeRush and Refactor! offer.

—–
Products: CodeRush Pro
Versions: 10.1 and up
VS IDEs: any
Updated: Nov/30/2010
ID: T036


  1. April 23rd, 2013 at 09:10 | #1

    Thanks for posting this. I was trying to figure out those numbers and how they relate to my code. While they aren't perfect, thanks to them I was able to refactor most of my code to be below 200, which made my code a lot more readable and maintainable.

    On another note, I find it odd that binary operations are considered so complex considering how easy they are. Most of the time they're used for flags. I can't imagine why they would be cause for such angst and agony.
