KPIs For Forecast Improvement

Why do I so often hear demand planners voice a fear of KPIs?

Over the years I have seen reluctance to employ KPIs for forecast accuracy, for many reasons. Some consider them a stick to beat the planners with. Another comment I often hear is: everyone knows how good (or bad) the forecasts are, so why bother measuring them?

At businesses I’ve worked with in the past, the forecast has invariably been the scapegoat: the reason for stock anomalies, capacity swings, impossible production plans… you name it, a poor forecast was the excuse, and the demand planners were left to take the blame.

I’ve heard it many times, and more recently during the pandemic: when forecast accuracy has decreased, the decision has often been to stop measuring altogether – “the forecasts got so bad that we stopped measuring them”.

Rather than considering KPIs as the stick to beat the planners with, consider them the helpful walking stick to help you improve your forecasting process. The time to measure forecasts is when their accuracy is decreasing!

If you choose the right KPIs, they will give insight into product, industry, and even planners’ behaviours that you weren’t previously aware of.

Why Measure?

Ultimately, we measure forecast accuracy in order to improve the forecast through understanding. It is important to investigate forecast errors: if you know why an error happened, you have the insight needed to mitigate it in the future.

There is a plethora of different measures out there. Each has its strengths and weaknesses; some are useful in one business situation, others in another. Nevertheless, here are a few of my favourites, and why I think they are useful.

BIAS

BIAS is a means of measuring a regular and sustained over- or under-forecast. It is common for sales input to be ‘positive’ – a confident sales team is what you want selling your products, so it’s natural for their numbers to be bullish. However, if the forecast is regularly too high, the BIAS measurement will help you identify it and give you something concrete to feed back to sales. It can easily be fixed, too: a regular over-forecast of, say, 5% means you can simply trim 5% off the forecasts. Better that than compounding a 5% overstock over a number of months.

The tracking signal is a cruder calculation than BIAS, as it indicates whether the forecast is running above or below actuals, but not by how much.
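As an illustration, here is a minimal sketch of how both signals might be computed from a history of forecast/actual pairs. The figures are invented, BIAS is shown as a percentage of actual demand, and the tracking signal is given in its common cumulative-error-over-MAD form.

```python
# Illustrative forecast/actual history for one SKU (invented numbers).
forecasts = [120, 110, 130, 125, 140, 135]
actuals = [100, 105, 115, 110, 120, 125]

errors = [f - a for f, a in zip(forecasts, actuals)]

# BIAS as a percentage of actual demand: positive means a sustained over-forecast.
bias_pct = 100 * sum(errors) / sum(actuals)

# Tracking signal in its common form: cumulative error divided by the
# mean absolute deviation of the errors.
mad = sum(abs(e) for e in errors) / len(errors)
tracking_signal = sum(errors) / mad if mad else 0.0

print(f"BIAS: {bias_pct:+.1f}%")  # positive month after month = a bullish forecast
print(f"Tracking signal: {tracking_signal:+.1f}")
```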

MAPE

Mean Absolute Percentage Error. This measurement takes the difference between forecast and actual and expresses it as a percentage of the actual. Many consider it a brutal measurement, but to me it is a necessary one: the lower the number, the better the forecast. However, it should be measured at the right level of aggregation, and against the right products. More on this later.
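A matching sketch for MAPE, again on invented figures. Note that periods with zero actuals have to be excluded (or handled in some other way), which is part of why the measure feels brutal on low-volume items.

```python
def mape(forecasts, actuals):
    """Mean Absolute Percentage Error: lower is better."""
    pairs = [(f, a) for f, a in zip(forecasts, actuals) if a != 0]  # zero actuals break the ratio
    return 100 * sum(abs(f - a) / a for f, a in pairs) / len(pairs)

print(f"MAPE: {mape([120, 110, 130], [100, 105, 115]):.1f}%")
```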

MAD

Mean Absolute Deviation is a similar measure to MAPE, but it averages the magnitude of the forecast errors – in units rather than as a percentage – across a number of periods or records. This can give you a general view of your forecast accuracy.
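And a sketch for MAD. Because it is expressed in units, it is worth keeping in mind when comparing products of very different volumes; the figures are again invented.

```python
def mad(forecasts, actuals):
    """Mean Absolute Deviation: average error magnitude, expressed in units."""
    errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

print(f"MAD: {mad([120, 110, 130, 125], [100, 105, 115, 110]):.1f} pieces")
```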

What To Measure?

Measuring forecast accuracy is important, but how important is it to measure the forecast of a product that’s worth less than 0.1% of your business? Consider how you segment your business – do you use ABC classification? Review your A class items, consider whether the B class items are worth reviewing, but ignore your C class: collectively, they typically represent only around 5% of your business.
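One way to drive that segmentation is a simple Pareto cut on annual sales value. The sketch below is illustrative only: the 80/15/5 thresholds are common rules of thumb rather than a prescription, and the product values are invented.

```python
# Hypothetical annual sales value per product.
sales_value = {"P1": 500_000, "P2": 300_000, "P3": 120_000,
               "P4": 50_000, "P5": 20_000, "P6": 10_000}

total = sum(sales_value.values())
ranked = sorted(sales_value.items(), key=lambda kv: kv[1], reverse=True)

abc = {}
cumulative = 0.0
for product, value in ranked:
    cumulative += value / total
    # Common rule of thumb: A = top ~80% of value, B = next ~15%, C = the rest.
    abc[product] = "A" if cumulative <= 0.80 else "B" if cumulative <= 0.95 else "C"

print(abc)  # review A closely, glance at B, leave C out of the accuracy review
```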

Consider volumes too. A 50% accuracy on a product that sells on average 100 a month means an error of 50 pieces; 50% on a product that sells 2 a month is just one piece.

It’s also worth considering the level at which you should measure the forecast. Consider measuring a product at DC level, rather than at each individual record that makes up that DC total. From there you can drill down into the individual records to see whether any stand out at the lower level.
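Here is a sketch of that two-step view, assuming pandas is the tool at hand and using invented column names and figures: score the product at DC level first, then drill into the individual records only where the aggregate number stands out.

```python
import pandas as pd

# Hypothetical customer-level records sitting behind one product at each DC.
records = pd.DataFrame({
    "dc":       ["North", "North", "North", "South", "South"],
    "product":  ["P1", "P1", "P1", "P1", "P1"],
    "customer": ["C1", "C2", "C3", "C4", "C5"],
    "forecast": [60, 30, 40, 80, 20],
    "actual":   [55, 45, 25, 85, 15],
})

# Step 1: measure at DC level -- errors on individual records often cancel out here.
dc = records.groupby(["dc", "product"])[["forecast", "actual"]].sum()
dc["ape_pct"] = 100 * (dc["forecast"] - dc["actual"]).abs() / dc["actual"]
print(dc)

# Step 2: drill down into the records behind any DC that stands out.
records["ape_pct"] = 100 * (records["forecast"] - records["actual"]).abs() / records["actual"]
print(records.sort_values("ape_pct", ascending=False))
```

In this invented example the North DC looks respectable in aggregate, yet two of the customer records behind it are well adrift – exactly the kind of stand-out the drill-down is meant to surface.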

Benchmarking

Benchmarking is an important part of the process. The simplest way to do this is to measure your forecast accuracy once – that’s your benchmark. Then review your forecast accuracy every month (or however often you measure it) and chart it over time. Are you improving, or getting worse?
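A trivially simple way to keep that internal benchmark visible, with invented numbers: record each measurement and compare it back to the first one you took.

```python
# Monthly MAPE readings (invented); the first measurement is the benchmark.
monthly_mape = {"Jan": 38.0, "Feb": 36.5, "Mar": 35.0, "Apr": 36.0, "May": 33.5}

benchmark = monthly_mape["Jan"]
for month, value in monthly_mape.items():
    direction = "better" if value < benchmark else "no better"
    print(f"{month}: MAPE {value:.1f}% ({direction} than the {benchmark:.1f}% benchmark)")
```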

There is most likely an industry benchmark to compare yourself with, but that benchmark generally represents the best you’re likely to achieve in that industry, so comparing yourself to the best can sometimes be a depressing exercise, especially if you have only just started on the path of measuring your accuracy.

Algorithms vs. Human Input

Another good measure is to compare human intervention with statistical models. Most, if not all, demand planning tools contain a statistical forecast generation function, which provides the baseline to which human input is then added. It’s likely that the demand planning tool will let you hold these in separate streams of data, in some cases one for each input (planner, sales manager, marketing, general management). This allows you to gauge the input from each source and see who came closest to the actual.

I have known planners who feel they haven’t done their job if they haven’t made an adjustment against every record. Comparing the system-generated forecast with the planner’s forecast against actuals will tell you whether those adjustments are time well spent. Similarly, salespeople are for the most part employed for their optimism, so they tend to believe they can achieve more than they will, which puts a bias on their forecast.

If the measurement is done regularly, you may again see a pattern: the input from a specific source – or even from all of them – may be less accurate than the statistically generated forecast.
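Here is a sketch of that comparison, sometimes described as a forecast value added view. The stream names and figures are hypothetical, and the scoring simply reuses a MAPE-style calculation.

```python
def mape(forecasts, actuals):
    pairs = [(f, a) for f, a in zip(forecasts, actuals) if a != 0]
    return 100 * sum(abs(f - a) / a for f, a in pairs) / len(pairs)

# Hypothetical streams for the same product over six months.
actual      = [100, 105, 115, 110, 120, 125]
statistical = [102, 108, 112, 112, 118, 124]   # system-generated baseline
planner     = [110, 115, 120, 118, 128, 130]   # baseline plus planner overrides
sales_input = [125, 130, 135, 130, 140, 145]   # sales team's numbers

for name, stream in [("statistical", statistical),
                     ("planner", planner),
                     ("sales", sales_input)]:
    print(f"{name:12s} MAPE: {mape(stream, actual):5.1f}%")
```

If the planner or sales stream consistently scores worse than the statistical baseline, those adjustments are costing accuracy rather than adding it – which is exactly the conversation this measure is meant to start.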

Promotional Activity

Promotional activity should always be shown in its own separate data stream. Why? It allows you to measure its effectiveness. Marketing think the promo will give us a 20% uplift, with a three-month ‘glow period’? If in actual fact it was 15% with a two-month glow period, that’s good information for planning the next promotion. Additionally, keeping the promotion in a separate data stream makes history cleansing a lot simpler.
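A sketch of how that check might look when the promotion sits in its own data stream; the baseline, the promotional volumes and the 20% expectation are all assumed figures.

```python
# Assumed figures: the promotion's volumes held separately from baseline demand.
baseline = 1000                 # expected monthly sales without the promotion
promo_month_extra = 150         # extra units sold in the promo month itself
glow_extra = [60, 10, 0]        # residual extra units in the following months

expected_uplift_pct = 20        # what marketing planned for

actual_uplift_pct = 100 * promo_month_extra / baseline
glow_months = sum(1 for units in glow_extra if units > 0)

print(f"Planned uplift {expected_uplift_pct}%, actual {actual_uplift_pct:.0f}%")
print(f"Glow period: {glow_months} months")   # feeds straight into the next promo plan
```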

Get to The Bottom Of It!

Use the Five Whys to get to the root cause of the forecast error.

  1. Why didn’t we hit the forecast? No stock.
  2. Why was there no stock? We under-forecast.
  3. Why did we under-forecast? A major customer promotion.
  4. Why didn’t we include the major customer promotion? The demand planners weren’t aware of it.
  5. Why weren’t the planners aware? The sales team didn’t advise us.

Once you’ve found the root cause, and you’re able to share it within the business, mitigation of that issue can be actioned going forward.

Summary

Ultimately, KPIs against a forecast are, once again, all about helping you tell the story of the business. They are another tool to help you gain insight into the business’s activities and to fine-tune your forecasting inputs and processes. The more you understand your business and the processes that create the forecasts, the better able you will be to improve your forecast, and the business processes around you.

Demand Planning and Integrated Business Planning haven’t changed much in the past twenty years. What has changed is the information and data available to planners, both structured and unstructured, which allows greater insight into the sales signal. Improvements in the speed of information flow, and in the ease of access to that information, have been driven by necessity – as our lives have grown ever faster.

 

An article written by: Andrew Baillie – Business Consultant at Demand Solutions (Europe) Ltd.
