A client sponsored a triage of one of their high-value business processes: one that receives and evaluates eligibility requests for a financial benefit. One triager's point of pain was the observation that 70% of requests required rework, meaning reaching back out to the applicant for additional information. Why that information wasn't captured on the first attempt became an improvement opportunity to analyze.
But what is the cost of 70% rework? Inquiring minds want to know. (You can be sure this 70% will get laser-focused attention now that the team sees it. To their credit, it's an all-hands-on-deck effort. Some of this rework is caused by unverifiable info from applicants: garbage in.)
I suspected the cost curve was exponential, or at least non-linear. For illustration, assume each attempt has the same probability of failure; real data would refine that assumption.
What it tells us is that you'll process twice as many touches as you need to when your re-touch rate hits 50% or so, and more than three times as many at about 70% rework. That's roughly two-thirds of your resources unavailable to do something else! The chart gets crazy-ugly at failure rates above 70%, by the way.
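The math behind the chart is a geometric series: if each attempt fails with the same probability p, the expected touches per request are 1 + p + p² + … = 1/(1 − p). A minimal sketch of that calculation (the function names are mine, not from the original spreadsheet):

```python
# Expected customer touches per request when each attempt
# fails (requires rework) with the same probability p:
# touches = 1 + p + p^2 + ... = 1 / (1 - p), a geometric series.

def expected_touches(rework_rate: float) -> float:
    """Expected total touches per request for a given rework rate."""
    if not 0 <= rework_rate < 1:
        raise ValueError("rework rate must be in [0, 1)")
    return 1 / (1 - rework_rate)

def avoidable_share(rework_rate: float) -> float:
    """Fraction of all touches that are avoidable rework."""
    return 1 - 1 / expected_touches(rework_rate)

for p in (0.1, 0.5, 0.7):
    print(f"rework {p:.0%}: {expected_touches(p):.2f} touches, "
          f"{avoidable_share(p):.0%} avoidable")
```

At a 50% rework rate you average 2.00 touches per request; at 70% you average 3.33, with 70% of all touches being avoidable rework.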
We call that kind of process failure a dumpster fire. At 70% rework or customer re-touch, roughly two-thirds of your touches are avoidable if your process is designed to deliver a one-and-done customer experience.
The remedy is a blinding flash of the obvious: reason-code every failure, sort the reasons by volume using Pareto rules, resolve them in highest-volume order, and drive the rework rate below 10% for starters. If automated systems are used to capture the required information, present it as pick-lists or check boxes, make key fields mandatory, use good scan-and-attach tools, and by all means educate the benefits applicant on what's needed before they apply. Here's my spreadsheet.
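The reason-code-and-sort step is a simple tally. A sketch with hypothetical reason codes (the codes and volumes are invented for illustration; real ones would come from triaging actual cases):

```python
from collections import Counter

# Hypothetical failure reason codes logged per reworked request.
failures = (
    ["MISSING_INCOME_DOC"] * 45 + ["UNREADABLE_SCAN"] * 25
    + ["NO_SIGNATURE"] * 15 + ["WRONG_FORM_VERSION"] * 10
    + ["OTHER"] * 5
)

counts = Counter(failures)
total = sum(counts.values())

# Pareto view: attack reasons in descending volume order, tracking
# the cumulative share of rework each fix would eliminate.
cumulative = 0
for reason, n in counts.most_common():
    cumulative += n
    print(f"{reason:20s} {n:4d}  {n/total:6.1%}  cum {cumulative/total:6.1%}")
```

In this made-up example, fixing just the top two reasons would eliminate 70% of the rework, which is the whole point of working the list in volume order.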
That’s what first attempt resolution is worth.