“Commodity Data Disease” Costs Supply Chains Millions

By Mark Swartz, NEURAL CORP

 


Depending on the size of your organization, you may spend millions (even billions) on materials each day. Even a smaller organization (think: hospital, factory, warehouse) may carry 14,000 unique items. Neural Corporation's investigation indicates that, over the course of 12 months, roughly 4% (and growing) of purchases in most planning and supply chain systems are duplicates. These duplicates sit unnoticed in your purchase orders, in parts and materials receiving areas, awaiting kitting in your warehouse, and on stock room shelves, confusing functional groups and causing untold communication issues across your supply chain. The associated loss in materials and commodities alone runs between $1 million and $10 million annually, and that figure excludes lost productivity and payroll costs.
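To make the scale of the problem concrete, here is a back-of-envelope sketch of how a 4% duplicate rate translates to annual dollars. The daily spend figure is an illustrative placeholder, not data from any specific organization:

```python
# Back-of-envelope estimate of annual spend tied up in duplicate purchases.
# The daily spend figure is a hypothetical placeholder; the 4% rate is the
# approximate duplicate rate cited above.
daily_materials_spend = 2_000_000   # hypothetical daily materials spend, in dollars
duplicate_rate = 0.04               # ~4% duplicate purchases

annual_spend = daily_materials_spend * 365
duplicate_spend = annual_spend * duplicate_rate
print(f"Estimated annual spend on duplicates: ${duplicate_spend:,.0f}")
```

Even at this modest hypothetical spend level, duplicates consume tens of millions of dollars a year before productivity and payroll losses are counted.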

 

Identifying commodity data disease

Commodity data disease has a direct, negative effect and can spread through the entirety of any organization, small or large. Symptoms are hard to spot, and the telltale signs appear only at the end of a fiscal period, when someone tries to figure out why 10,000 catheters with SKU 87-1991 are sitting on the books or in a warehouse and asks: why were these not consumed?

Numerous client projects at artificial intelligence firm Neural Corporation revealed how this scenario typically plays out. We wanted to know how common the occurrence was and whether it was trending. Neural back-tested other AI clients against our initial findings. As it turns out, larger corporations (over $5 billion in revenue) in some cases had closer to 7%, or more, duplicate item masters. Once corrected, the out-of-pocket savings identified ranged between $1.6 million and $8.9 million, and more in some cases. We then tracked the lost productivity and payroll required to manage these purchase orders, along with logistics, materials handling, and warehousing direct and overhead costs.

In total, the lost activity time, process overhead, and directly attributable payroll costs were enormous. Consider a full-time warehouse employee who receives and unpacks an inbound shipment, updates your inventory system, and moves the materials to their locations, consuming space (and cost) on the books while doing so. Unknown to that warehouse worker (or to the nurse seeking a specific catheter by SKU), you already have tens of thousands of these items in back-stock; they are simply marked incorrectly.

Once Neural discovered how frequently this occurs, we looked upstream in the supply chain data to determine why it was happening and located a possible root cause. In cases where engineering was innovating, Neural frequently found item masters in product systems that were not updated or that carried old, out-of-date attributes. These items were being added to product bills of materials without being scrubbed or corrected, then placed into sales and purchase orders. The flawed data was then pushed into SAP or similar ERP systems. It can be very challenging for any organization to locate these types of duplicates.
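One reason these duplicates are hard to locate is that the same item often carries cosmetically different descriptions in different records. A minimal sketch of the idea behind duplicate detection, using hypothetical item records (not Neural's actual method), is to normalize descriptions before comparing them:

```python
# Minimal sketch: flag likely duplicate item masters by normalizing
# free-text descriptions before comparing. All item records here are
# hypothetical examples, not data from any real system.
import re
from collections import defaultdict

def normalize(description: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace, and sort tokens
    so that cosmetic differences (case, word order) don't hide duplicates."""
    text = re.sub(r"[^a-z0-9 ]", " ", description.lower())
    return " ".join(sorted(text.split()))

items = [
    ("87-1991", "Catheter, Foley 16FR"),
    ("87-2014", "FOLEY catheter 16fr"),   # same product, re-keyed under a new SKU
    ("55-0003", "Surgical glove, size M"),
]

groups = defaultdict(list)
for sku, desc in items:
    groups[normalize(desc)].append(sku)

# Any normalized description mapped to more than one SKU is a duplicate candidate.
duplicates = {k: v for k, v in groups.items() if len(v) > 1}
print(duplicates)
```

Real catalogs need fuzzier matching than exact normalized equality, but the principle is the same: duplicates hide behind formatting differences, not different content.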

 

How commodity data disease enters and infects the business

More often than not, commodities teams outsource this scrubbing responsibility to various suppliers, who are able to identify only a portion of the infected items. The monotonous task of identifying and correcting the data is typically manual and carried out by several people in the organization, which improves correction rates only marginally.

Neural also found that companies five years post-merger or acquisition showed a higher percentage of duplicates, caused primarily by incorrectly performed data normalization. One reason: when multiple coding schemes or languages generate data, and the data lacks sufficient supporting detail, the likelihood of incorrect metadata contributing to duplicates in supply chain systems is compounded further.
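The post-merger case can be sketched simply: the two legacy systems often hold the same part under different key formats and languages, so naive matching misses the overlap. The records and canonicalization rule below are hypothetical illustrations:

```python
# Sketch: after a merger, the same manufacturer part number often appears
# under different formatting conventions in each legacy system. Canonicalizing
# the key before matching exposes the overlap. All records are hypothetical.
def canonical_mpn(mpn: str) -> str:
    """Uppercase and drop separators so 'ab 1234' and 'AB-1234' compare equal."""
    return "".join(ch for ch in mpn.upper() if ch.isalnum())

legacy_a = {"AB-1234": "Ball valve 1/2 in"}   # acquirer's ERP
legacy_b = {"ab 1234": "Ballventil 12,7 mm"}  # acquired company's ERP (German)

keys_a = {canonical_mpn(k) for k in legacy_a}
keys_b = {canonical_mpn(k) for k in legacy_b}
overlap = keys_a & keys_b  # parts both systems hold under different keys
print(overlap)
```

Note that the descriptions differ by language and units as well, which is exactly the kind of insufficient supporting detail that lets duplicates survive naive normalization.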

The primary data holders (or master data sources) are typically SAP, Oracle, or similarly large ERP systems, which simply aggregate and manipulate the data their customers provide. The SAPs and Oracles of the world are not antagonists; rather, they become part of the supply chain infection. Commodity data disease often first enters the supply chain via PLM systems with dated item masters and via engineering manually adding products and SKUs. Vendor supplier portals and individual suppliers who change SKUs without issuing corrections, or who lack data 'sun-setting' policies, contribute to further infecting the supply chain.

In larger organizations, Neural found that an infected supply chain grows with a new occurrence (an item master duplicate) every five minutes. Extrapolate that to any company with 25,000 or more commodity SKUs and you can see how a supply chain data network accelerates, moving inventory accuracy further and further from credibility. The results include country-of-origin (COO) and UNSPSC code disputes, decreasing HTS code accuracy, customs delays, costly penalties, restated manufacturing financials, and other business and compliance failures that cost corporations their reputations, expose them to shareholder liability, and cost executives their careers.

Mark Swartz is the founder and CEO of Neural Corp, a company that creates advanced artificial intelligence solutions for the manufacturing and healthcare industries. Mark has built global teams of experienced experts in advanced algorithm design and software development. He is a proven technology executive and co-founder of companies with exits in Sweden, China, and the United States.