Moving into a new ERP system is an ordeal for any organization. The effort it takes to customize the system and implement it organizationally is often underestimated. Apparent "must haves" are given priority at the expense of less appealing but equally important initiatives, such as the initial cleansing of data, functionality gap analysis, user acceptance testing, and governance.
This blog entry focuses on data quality and the Key Performance Indicators (KPIs) that expose data quality trends: trends from which we can deduce whether our MDM initiatives, such as governance, are having the desired effect.
As untimely, duplicated, misleading, and inadequate data impedes users' use and perception of a system, it is key to keep such data out of the new system. Lost confidence in the system quickly leads to a crippled scenario, entirely contrary to the master data intentions, in which information is stored and sought elsewhere than in the consolidated single source of truth. Piecemeal data repairs also require more effort than it would have taken to correct the data before it entered the system.
Growing user confidence in the system through proof
Establishing user trust is key. By skipping the cleansing and allowing poor-quality data into your system, you are choosing an uphill battle from the start. This does not mean we must postpone migration until data is "perfect", as that is unlikely to ever happen. Choosing a good starting point is imperative for success, and a continuing effort to prove that data is trustworthy and steadily improving will strengthen the initiative.
Static testing is inadequate
The idea of "static testing", where an initial report establishes a baseline of data quality to measure your data repairs against, may be good enough as the foundation for an initial bulk load of data. In a live system, however, such reports are worthless, because the statements they make are outdated before the ink hits the paper. Static tests aim to prove that data meets requirements at cut-over milestones, such as just before moving into production. They do not reveal whether the cause of bad data quality has been uprooted: data may be in the process of being corrupted again while the static report shows no concerns. I believe static testing keeps alive the idea of "repairing data as a solution" rather than fixing the root cause. This can be boiled down to: static testing will not build confidence in the long term. MDM, governance, and data quality improvements are by nature continuous efforts.
Continuously exposing quality KPIs
The way we verify that our governance initiatives are in effect, and that our rules are being applied, is through constant measurement and exposure of quality trends.
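To illustrate what "exposing quality trends" can mean in practice, here is a minimal sketch in Python. The stored snapshots and their values are invented illustration data; the point is simply that periodic measurements, not one-off reports, let us see the direction of travel:

```python
from datetime import date

# Minimal sketch: periodic KPI snapshots and a simple trend indicator.
# The dates and scores below are invented illustration values.
history = [
    (date(2024, 1, 1), 0.78),  # e.g. completeness score after go-live
    (date(2024, 2, 1), 0.84),
    (date(2024, 3, 1), 0.91),
]

def trend(snapshots):
    """Return 'improving', 'declining', or 'flat' by comparing first and last score."""
    if len(snapshots) < 2:
        return "flat"
    first, last = snapshots[0][1], snapshots[-1][1]
    if last > first:
        return "improving"
    if last < first:
        return "declining"
    return "flat"

print(trend(history))  # improving
```

A real implementation would persist these snapshots in a reporting store and plot them on a dashboard, but even this tiny time series answers a question a static report cannot: is the quality getting better or worse?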
Here is a list of KPIs you should monitor, ordered from easiest to implement to most rewarding.
- Completeness: are all necessary data present, or are values missing?
- Integrity: are the relations between entities and attributes consistent?
- Consistency: are data elements consistently defined and used?
- Accuracy: does the data reflect real-world objects or a verifiable source?
- Validity: are all data values within the valid domains specified by the business?
- Timeliness: is data available at the time it is needed?
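To make the two easiest KPIs concrete, here is a minimal sketch in Python that scores completeness and validity for a batch of customer records. The field names, the required-field list, and the country domain are hypothetical examples, not part of any particular ERP system:

```python
# Minimal sketch: completeness and validity KPIs for customer records.
# Field names, required fields, and the country domain are hypothetical.

REQUIRED_FIELDS = ["name", "country", "vat_number"]
VALID_COUNTRIES = {"DK", "SE", "NO", "DE"}  # domain defined by the business

def completeness(records):
    """Share of required field values that are present and non-empty."""
    total = len(records) * len(REQUIRED_FIELDS)
    filled = sum(
        1
        for record in records
        for field in REQUIRED_FIELDS
        if record.get(field)
    )
    return filled / total if total else 1.0

def validity(records):
    """Share of records whose country lies within the valid domain."""
    if not records:
        return 1.0
    valid = sum(1 for r in records if r.get("country") in VALID_COUNTRIES)
    return valid / len(records)

customers = [
    {"name": "Acme A/S", "country": "DK", "vat_number": "DK12345678"},
    {"name": "Globex", "country": "XX", "vat_number": ""},
]
print(f"completeness: {completeness(customers):.2f}")  # 5 of 6 values filled
print(f"validity: {validity(customers):.2f}")          # 1 of 2 countries valid
```

Scores like these are cheap to compute on every load, which is exactly what makes them a natural starting point before investing in the harder KPIs further down the list.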
Be aware that the value of a KPI tends to be proportional to the effort it takes to implement it. For example, checking for completeness is much easier than implementing timeliness, yet timeliness is a much more important factor.
This article can be boiled down to "you cannot control what you do not measure", combined with the advice that data quality issues should be dealt with in a continuous and controlled manner, as early in the project as possible, while complexity is not yet increased by system constraints.