Service KPIs form the basis of service reporting for most outsourced relationships. A good KPI set will consist of a manageable number of metrics that are simple to measure, easy to interpret and within the control of the TPA. Crucially, they must also be effective in identifying healthy (or unhealthy) functioning of the service – and this is sometimes trickier than it first appears.
KPIs will almost always have a ‘green’ threshold for good performance, set somewhere at or below 100%, giving an agreed tolerance for error. There may also be a second ‘amber’ threshold. Any KPI that fails the threshold(s) is scored ‘red’.
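This scoring logic can be sketched in a few lines. The code below is illustrative only; the function name and the threshold values are our own, not part of any agreed KPI set.

```python
from typing import Optional

def rag_status(performance: float, green: float, amber: Optional[float] = None) -> str:
    """Score a KPI result against its agreed thresholds.

    performance, green and amber are fractions (e.g. 0.95 for 95%).
    amber, if supplied, should sit below green.
    """
    if performance >= green:
        return "green"
    if amber is not None and performance >= amber:
        return "amber"
    return "red"

# A hypothetical 95% green threshold with a 90% amber band:
print(rag_status(0.96, green=0.95, amber=0.90))  # green
print(rag_status(0.92, green=0.95, amber=0.90))  # amber
print(rag_status(0.85, green=0.95, amber=0.90))  # red
```

Without an amber band, anything below the green threshold scores red outright.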
In this example, for ‘Trade STP Rate’, Manager A and Manager B both score ‘green’ – a good performance – whereas Manager C scores ‘red’ – a poor performance, which the TPA will hopefully rectify.
There is only one problem with that assessment: it is completely wrong. C’s performance is in fact the strongest; they simply have a much stricter threshold.
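A small worked example makes the inversion concrete. The STP rates and thresholds below are invented for illustration – the point is only that the RAG status is driven by the threshold, not by the raw performance, so the best performer can be the only one scoring red.

```python
# Hypothetical figures: Manager C has the highest STP rate but the
# strictest threshold, so C alone scores red.
managers = {
    "A": {"stp_rate": 0.92, "green_threshold": 0.90},
    "B": {"stp_rate": 0.95, "green_threshold": 0.95},
    "C": {"stp_rate": 0.97, "green_threshold": 0.98},
}

for name, kpi in managers.items():
    status = "green" if kpi["stp_rate"] >= kpi["green_threshold"] else "red"
    print(f"Manager {name}: {kpi['stp_rate']:.0%} STP -> {status}")
```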
Setting a Threshold
Many factors may impact the decision on the appropriate level for KPI thresholds, such as current performance, target performance (if an improvement is expected), maximum potential performance (which may be limited for some asset types) and perception of market standard performance.
Problems can arise when the KPI threshold is either too slack, or too strict, for its intended use.
A slack KPI is one whose threshold is set lower than is appropriate. The KPI will always show as performing well, even when that is objectively not the case – for example, when compared with other deals.
Conversely, a strict KPI has its threshold set higher than is appropriate. The measure loses its power to discriminate, constantly flagging issues for improvement against processes that are, in fact, already being performed well.
During the last 10 years, Alpha has collated a library of service metrics, for 30+ deals and more than 100 different KPIs, allowing us to determine market averages for KPI thresholds and performance. This library is supplemented by our broader Investment Operations study.
Assuming that Managers A, B and C all have a fairly standard trading profile, then we would expect them to have broadly the same KPI thresholds and performance as other deals in the market (see below).
In this example, Manager A would have a ‘slack’ KPI. The threshold for ‘good’ performance (90%) is set at a level that is in fact at the poorest end of the sample (in the bottom quartile). What Manager A deems to be ‘good enough’ performance would be ‘poor’ in a more typical KPI report.
Manager C, on the other hand, may have an overly ‘strict’ KPI – the performance must be exceptional to avoid being flagged as an issue in the KPI report.
Manager B is broadly aligned with the market standard, both in terms of threshold and performance. Although Manager C has a better STP rate, Manager B perhaps has a better-designed KPI.
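The quartile comparison above can be sketched as a simple benchmarking check. The sample of market thresholds below is invented for illustration; a real exercise would draw on a library of comparable deals, such as the one described earlier.

```python
# Classify a KPI threshold against a (hypothetical) market sample
# using quartiles: bottom quartile = slack, top quartile = strict.
import statistics

market_thresholds = [0.90, 0.93, 0.94, 0.95, 0.95, 0.96, 0.96, 0.97, 0.98]

def classify_threshold(threshold: float, sample: list) -> str:
    q1, _, q3 = statistics.quantiles(sample, n=4)
    if threshold < q1:
        return "slack (bottom quartile)"
    if threshold > q3:
        return "strict (top quartile)"
    return "broadly in line with the market"

print(classify_threshold(0.90, market_thresholds))  # slack (bottom quartile)
print(classify_threshold(0.95, market_thresholds))  # broadly in line with the market
print(classify_threshold(0.98, market_thresholds))  # strict (top quartile)
```

On these numbers, a 90% threshold (like Manager A’s) sits in the bottom quartile of the sample, while a 98% threshold (like Manager C’s) sits in the top quartile.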
Following discussions with their TPAs, Managers A and C may wish to adjust their KPI thresholds closer to the norm. When transitioning from a period of underperformance (such as for Manager A), this may be done incrementally.
KPI benchmarking can take place at any point in an outsourced relationship: at the outset as part of service negotiations, as part of an overall relationship review, or as a standalone project.