Vendor Scorecards: Keeping Them Honest, Keeping Us Informed

If you’ve spent any length of time in purchasing, you’ve probably come across the “QBR,” or Quarterly Business Review. This is a well-intentioned meeting between you and your suppliers that occurs…well, quarterly, to review their delivery and quality performance. Too often, specifics aren’t discussed, expectations aren’t clear, and neither are the action items. You then part ways until the next quarter, where the cycle continues. Everyone has done their job “playing work.”

I’m a big accountability guy. I measure things; a lot of things. If there is a measurement that is going to help keep a process in control, I’m on it. Measurements need to be accurate and timely.

From a purchasing perspective, vendor scorecards are great. They give both sides an opportunity to level-set on performance versus expectations, discuss open issues from the last meeting, and agree on actions that will improve performance. Instead of quarterly, vendor scorecards should be reviewed monthly.

Identifying Suppliers

The first step is determining which suppliers you’re going to score. These are your “key” suppliers. Which vendors you include in the scorecard process could be based on size (e.g., how much of your spend they represent), importance to the business (e.g., the only supplier in the world who can make critical parts for you), or risk (e.g., a supplier that is struggling but is key to your success).

Next, you need to set clear expectations for performance. The mainstay criteria are quality and delivery performance. More advanced supply chains will also include safety and cost criteria.

Setting Expectations

Let’s start with quality because frankly, nothing else matters if the quality is crap. Getting a bad part on time at a low cost is of very little use to you. I’ve always measured vendors on a DPPM (defective parts per million) scale. The calculation is simple: number of defective pieces detected in a month divided by the number of pieces received in that month times 1,000,000. Let’s say you detect 150 defective pieces in a given month. You received 10,000 pieces from that supplier that month. You take your 150 / 10,000 = 0.015 x 1,000,000 = 15,000 DPPM. Is that good or bad? You need to decide what your company is willing to tolerate. As a starting point, I’ve always said that anything over 5,000 is concerning and needs to have prioritized improvement activities for the vendor. Anything less than that is relatively decent but I would like to see actions toward improvement over time.
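For those who like to see the math spelled out, here is a minimal sketch of the DPPM calculation in Python. The function name and the example numbers are just illustrations pulled from the paragraph above, not any standard formula library:

```python
def dppm(defective: int, received: int) -> float:
    """Defective parts per million: defects detected in a month
    divided by pieces received that month, times 1,000,000."""
    if received == 0:
        raise ValueError("no pieces received this month")
    return defective / received * 1_000_000

# Example from the text: 150 defective pieces out of 10,000 received
print(dppm(150, 10_000))  # 15000.0 DPPM
```

At 15,000 DPPM, this supplier would be well over the 5,000 threshold suggested above and would need prioritized improvement activities.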

Next, we have delivery performance. At a bare minimum, you should measure your suppliers against their promise date. You can also measure their performance to your request date, but first you need to make sure they can live up to their own promises. As an example, let’s say you place an order requesting delivery for October 10. They confirm the PO for delivery on October 20. If they deliver the PO on October 20, or up to 3 days before it, they’re considered “on time.” Anything after that, they’re late. So the calculation is pieces received late-to-promise-date divided by the total number of pieces received that month. If 500 pieces were received late out of 10,000 total pieces received, that would be 500 / 10,000 = 5.0% late, or 1 minus that number for 95% on time. So, which is it, 5% late or 95% on time? Keep reading.
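The promise-date check and the late-rate math above can be sketched like this. One assumption to flag: the text only says on-time means no more than 3 days early, so this sketch treats deliveries earlier than that window as misses too; adjust if your policy differs. The function names are made up for illustration:

```python
from datetime import date, timedelta

def is_on_time(promised: date, delivered: date, early_window_days: int = 3) -> bool:
    """On time = delivered on or before the promise date, but no more than
    `early_window_days` early. Assumption: deliveries earlier than the
    window also count as misses, since the text only blesses that window."""
    earliest_ok = promised - timedelta(days=early_window_days)
    return earliest_ok <= delivered <= promised

def late_rate(late_pieces: int, total_pieces: int) -> float:
    """Fraction of pieces received late-to-promise in the month."""
    return late_pieces / total_pieces

promise = date(2024, 10, 20)                      # confirmed promise date
print(is_on_time(promise, date(2024, 10, 18)))    # True: 2 days early
print(is_on_time(promise, date(2024, 10, 21)))    # False: late
print(f"{late_rate(500, 10_000):.1%} late")       # 5.0% late
```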

I used to look at suppliers’ “on time” number which is nice but not as useful for problem-solving and continuous improvement. I prefer to look at their “late” number because it gives me direct access to the number of “problems” that need to be solved. Let’s go a step further on that topic.

The supplier delivered 500 out of 10,000 pieces late. The expectation for problem-solving becomes clearer: take the 500 pieces and assign reason codes defining why they were late. The vendor should do this activity and present their findings to you in a Pareto chart in descending order. That may look like this:

Reasons for 500 late pieces:

  • Labor capacity: 300
  • Machine down: 100
  • Lost order: 50
  • Late arrival of raw material from the vendor’s vendor: 40
  • Inventory inaccuracy: 10

Taking Action

Once they’ve identified the reason codes, they need to decide what actions will be taken, by whom, and by when to solve the issue. I’ve found it is more effective for the supplier to solve one issue at a time. Stay focused, be accountable. If they come back to me and say, “We’re going to solve the inventory accuracy issue by doing a full physical inventory of our plant,” that’s great and all, but that issue only represented 10 of the 500 late pieces (2% of the misses, or 0.1% of all receipts). Instead, I want to hear about the specific actions related to the labor capacity issue, which accounts for 300 of the 500 late pieces (60% of the misses, or 3.0% of all receipts). Actions for this could be hiring additional labor, using a temporary workforce, adding overtime, or improving efficiency in specific areas (or a combination of those). Then, I want to know who is doing it, by when, and what kind of impact it will have. We will note that as an action and everyone will go on their way until next month.
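The Pareto ranking and the “which bar do we attack first” decision can be sketched in a few lines. The reason names and counts are the ones from the example list above; nothing here is specific to any scorecard tool:

```python
# Reason codes for the 500 late pieces from the example above
reasons = {
    "Labor capacity": 300,
    "Machine down": 100,
    "Lost order": 50,
    "Late raw material from vendor's vendor": 40,
    "Inventory inaccuracy": 10,
}
total_received = 10_000
total_late = sum(reasons.values())  # 500

# Pareto view: descending order, with each reason's share of the misses
# and of all receipts, so the sizes can't be conflated
for reason, count in sorted(reasons.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{reason}: {count} "
          f"({count / total_late:.0%} of late pieces, "
          f"{count / total_received:.1%} of all receipts)")

# Solve one issue at a time: start with the biggest bar
top_reason = max(reasons, key=reasons.get)
print(f"Priority action item: {top_reason}")
```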

Validate Action Implementation

Once the supplier implements the action, we measure again to see if their actions were effective and if so, they should move on to solving the next problem. In an ideal world, they keep solving problems and we reward them with more business. An added benefit of this for the supplier is these improvement activities will help them not only improve performance for us as a customer but for their customer base as a whole. This should result in additional business to help their company grow.

Supplier scorecards aren’t meant to “punish” suppliers when they don’t meet agreed-upon expectations. They are meant to set clear expectations and help suppliers focus on the issues that make the business relationship more productive, a win/win for both sides.
