Acumen Fuse Schedule Quality Index™: Understanding the Method

APPLYING AND UNDERSTANDING THE ACUMEN FUSE SCHEDULE QUALITY INDEX™

Acumen Fuse is a software tool used to assess the technical quality or compliance of a Critical Path Method (CPM) schedule. The software measures a range of metrics, including those drawn from recognized standards such as the Defense Contract Management Agency [DCMA]’s 14-Point Assessment, the Government Accountability Office [GAO]’s Scheduling Best Practices, and guidance from the National Defense Industrial Association [NDIA].

In addition to reporting individual metrics, Acumen Fuse calculates an overall score based on the results of those metrics. For example, the Acumen Fuse Schedule Quality Index™ defines nine metrics: Missing Logic, Logic Density, Critical, Hard Constraints, Negative Float, Insufficient Detail, Number of Lags, Number of Leads, and Merge Hotspot. The software then calculates an overall score from the results of those metrics, where a score higher than 75 is considered good, a score between 50 and 75 average, and a score below 50 poor (the user can adjust these default software settings, if desired).
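The default banding described above can be sketched as a simple function. This is an illustration only: the thresholds (75 and 50) are the software defaults quoted in the text, the treatment of boundary values is an assumption, and the function name is hypothetical rather than anything Acumen Fuse exposes.

```python
def classify_score(score: float) -> str:
    """Map an overall Schedule Quality Index score to a quality band.

    Thresholds follow the defaults described in the text (good above 75,
    average 50-75, poor below 50); boundary handling is assumed.
    """
    if score > 75:
        return "good"      # typically shown as green
    elif score >= 50:
        return "average"
    else:
        return "poor"      # typically shown as red

print(classify_score(88))  # good
print(classify_score(41))  # poor
```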

The Schedule Quality Index™ is a very helpful tool for rapidly communicating initial feedback on schedule quality. However, analysts should be aware that Acumen Fuse uses two approaches for computing the overall score: 1) the Activity Based Approach, and 2) the Metric Based Approach. It is very important to understand the difference between the two, because they can produce significantly different overall scores from the same set of metric results.

The analysts should be aware that Acumen Fuse uses two approaches for computing the overall score … based on each calculation [it] can be significantly different.

In Figure 1 below, a sample schedule containing 2,635 activities was analyzed using the Acumen Fuse Schedule Quality Index™. The table provides the results for each specific metric and the overall score.

[Figure 1: Schedule Quality Index™ metric results and overall scores for the sample schedule]

When the Activity Based method is used, the Schedule Quality Index™ reported 41 (red, a poor result), whereas the Metric Based method reported 88 (green, a good result), even though the scores for each individual metric are the same. Clearly the difference in the reported Schedule Quality Index™ can be significant (i.e., the difference between a good and a poor schedule) based solely on the calculation method, which is why it is important to understand how the result is computed in each case.

If the activity scores negatively against only one of the metrics, it may receive a score of 82-91% under the Metric Based Approach … whereas under the Activity Based Approach it would receive a score of 0% and fail entirely.

ACTIVITY BASED APPROACH TO CALCULATING SCHEDULE QUALITY SCORES

The Activity Based Approach is set as the default in Acumen Fuse, and it usually produces lower overall scores. The activity based score is calculated as follows:

1. Each activity in the schedule is evaluated against all of the selected individual metrics using pass/fail criteria, e.g., does the activity have a lag? (Y/N); does the activity have negative float? (Y/N); and so on.

2. Each activity receives a score of 0 (fail) or 1 (pass). An activity receives a 1 (pass) only if ALL of the individual metrics achieve a passing score. The activity is scored a 0 (fail) if it receives a fail on ANY of the individual metrics.

3. All the activities in the schedule are evaluated using the steps above and the percentage of activities scoring 1 (pass) becomes the Schedule Quality Index™ score for the Activity Based Approach.
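The steps above can be sketched in Python. This is a simplified illustration of the all-or-nothing logic described in the text, not Acumen Fuse's internals: the metric checks, activity fields, and sample data are all hypothetical.

```python
def activity_based_score(activities, metric_checks):
    """Return the percentage of activities that pass ALL metric checks."""
    passing = sum(
        1 for act in activities
        if all(check(act) for check in metric_checks.values())
    )
    return 100.0 * passing / len(activities)

# Hypothetical pass/fail checks (True = pass), standing in for the
# individual metrics described in the text.
metric_checks = {
    "no_lag": lambda a: a["lag"] == 0,
    "non_negative_float": lambda a: a["total_float"] >= 0,
    "has_logic": lambda a: a["has_predecessor"] or a["has_successor"],
}

activities = [
    {"lag": 0, "total_float": 5, "has_predecessor": True, "has_successor": True},
    # Fails the "no_lag" check, so the whole activity scores 0 (fail).
    {"lag": 3, "total_float": 5, "has_predecessor": True, "has_successor": True},
]

print(activity_based_score(activities, metric_checks))  # 50.0
```

Note how a single failing metric zeroes out an otherwise clean activity, which is why this approach tends to report lower overall scores.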

In the example above, the schedule received a score of 41, which means that only 41% of the activities received a passing score for ALL of the individual metrics.

METRIC BASED APPROACH TO CALCULATING SCHEDULE QUALITY SCORES

The Metric Based Approach usually produces higher overall scores and, sometimes, the difference is substantial. The metric based score is calculated as follows:

1. Each activity in the schedule is evaluated against all the selected individual metrics using pass/fail criteria, e.g., does the activity have a lag? (Y/N); does the activity have negative float? (Y/N); and so on.

2. Each individual metric has an assigned weight based on its importance to the overall score (Acumen Fuse has predefined weights, but they can be changed by the user if desired). For example, under the DCMA 14-Point Assessment all weights are equal, apart from Resources, which has a weight of 0, meaning it is ignored in calculating the overall score. For the Acumen Fuse Schedule Quality Index™ the weights vary more, although Critical and Logic Density have a weight of 0, meaning they are ignored in calculating the overall score.

3. Each activity in the schedule receives a score between 0 and 100%, based on the results and weights (importance) of each of the individual metrics, after evaluating the passes and fails on each individual metric. If the activity scores negatively against only one of the metrics, it may receive a score of 82-91% under the Metric Based Approach (depending on the weight of the failing metric), whereas under the Activity Based Approach it would receive a score of 0% and fail entirely. So, for an activity to score 0 and fail under the Metric Based Approach, ALL of the activity’s metrics (except those that are zero weighted) must return a 0 and fail. Otherwise, the activity receives partial credit for the metrics it has passed.

4. After assigning scores for all the activities based on the weighted system of passes and fails, the average score is calculated for the entire schedule. This average score is the Acumen Fuse Schedule Quality Index™ score for the Metric Based Approach.
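The weighted partial-credit logic described above can be sketched as follows. Again this is an illustration only: the checks, weights, and data are hypothetical (equal weights are used here for simplicity) and do not reflect Acumen Fuse's predefined weights.

```python
def metric_based_score(activities, metric_checks, weights):
    """Return the average weighted activity score (0-100%)."""
    total_weight = sum(weights.values())
    scores = []
    for act in activities:
        # Sum the weights of the metrics this activity passes.
        earned = sum(w for name, w in weights.items() if metric_checks[name](act))
        scores.append(100.0 * earned / total_weight)
    return sum(scores) / len(scores)

# Hypothetical pass/fail checks and equal weights (True = pass).
metric_checks = {
    "no_lag": lambda a: a["lag"] == 0,
    "non_negative_float": lambda a: a["total_float"] >= 0,
    "has_logic": lambda a: a["has_predecessor"] or a["has_successor"],
}
weights = {"no_lag": 1, "non_negative_float": 1, "has_logic": 1}

activities = [
    {"lag": 0, "total_float": 5, "has_predecessor": True, "has_successor": True},  # passes all
    {"lag": 3, "total_float": 5, "has_predecessor": True, "has_successor": True},  # fails one metric
]

# The failing activity earns 2/3 of the weight (about 66.7%) instead of
# scoring 0, so the schedule averages about 83.3 rather than the 50.0
# the Activity Based Approach would report for the same data.
print(round(metric_based_score(activities, metric_checks, weights), 1))  # 83.3
```

Running both sketches on the same sample data shows why the two approaches can diverge so sharply from identical metric results, as in the 41 versus 88 example discussed earlier.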

Depending on the intended use of the Acumen Fuse Schedule Quality Index™, either of the methods can be useful if understood and applied appropriately. The Activity Based Approach is clearly the more onerous, which may be appropriate when trying to establish an initial baseline that needs to be free of any technical issues. The Metric Based Approach is less onerous, but could be seen by some as the more pragmatic and realistic measure of a schedule’s overall technical quality.

The appropriate method will always depend on the circumstances, but when using a software tool to score a schedule on quality, it is essential that we understand what the results mean and how the software is calculating them. Only then can we properly understand the output and make an informed decision on the appropriate choice.

Mariusz Wiechec, Senior Analyst, HKA

