What exactly is Master Data?
Over the last five or six years, Master Data Management has been an area of focus for many enterprises. Master Data is, by definition, "business-critical data that is stored in disparate systems spread across your enterprise." In a high-tech manufacturing environment, one would typically think of the Product data that has a significant impact on the success of product development:
- component parts
- bills of material
- reference data
PLM software was created, and has successfully evolved, to become the "System of Record" for owning this type of data. In an ideal world, all relevant product data ends up in the PLM system, the one place organizations can look for the single source of truth about their products. Unfortunately, when it comes to product data, the world is rarely as organized and clean as one would like it to be.
How an Unsuccessful PLM Can Cost You
Some systems in an organization may predate the PLM system: legacy databases or PDM systems that were never fully integrated into their successor. As a result, there may be multiple PLM systems in play, possibly even from different PLM vendors.
Another problem is the appearance of external databases and applications containing information that could be in the PLM system but is not. Young Information Technology systems have a manageable number of dependencies to track and test, but over time success brings more use and more dependencies. Eventually, due to the sheer number and critical nature of dependent processes, the inevitable risk/reward analysis of modifying the PLM system leads to the decision to leave the data elsewhere.
When product data does not reside in the PLM system, things can get challenging for the engineers and analysts who make critical decisions dependent on the timeliness and quality of that data. One-time data cleanup efforts can impose some consistency across systems, but what has been cleaned soon becomes outdated; "clean" data can be out of date the minute it is released. With rapid product data growth, how do you ensure that inconsistencies and unnecessary duplication do not creep in, leading to wrong decisions that cost money and time?
The Foundation of a Hybrid Registry
Fortunately, there are well-proven techniques in the field of Master Data Management for handling the distributed ownership of data. One such technique is what is known as a hybrid registry approach. In this scenario, you start with a highly configurable "hub" system as the point of aggregation for all of the authoritative systems owning product data.
These source systems push their view of the world to a hub that can handle a large volume of rapidly changing data, making that data instantly searchable and specialized based on the role of the person viewing it. What an engineer wants to see will likely differ from what a supply chain analyst wants to see. This system must be inherently built for speed; otherwise it could not keep up with a potentially large number of contributing sources changing at a high rate.
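The push-and-view pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real MDM product: the source names (`plm`, `erp`), the attribute sets, and the role definitions are all hypothetical, and a production hub would add record matching, survivorship rules, and a real search index.

```python
from collections import defaultdict

class RegistryHub:
    """Aggregates part records pushed from multiple source systems."""

    # Hypothetical role-based views: which attributes each role sees.
    ROLE_VIEWS = {
        "engineer": {"part_number", "revision", "cad_model"},
        "supply_chain": {"part_number", "supplier", "unit_cost"},
    }

    def __init__(self):
        # part_number -> source system -> latest record pushed by that source
        self._records = defaultdict(dict)

    def push(self, source, record):
        """A source system pushes its current view of a part to the hub."""
        self._records[record["part_number"]][source] = record

    def view(self, part_number, role):
        """Merge all source records for a part, filtered for the viewer's role."""
        merged = {}
        for record in self._records[part_number].values():
            merged.update(record)  # last writer wins; real hubs apply survivorship rules
        allowed = self.ROLE_VIEWS[role]
        return {k: v for k, v in merged.items() if k in allowed}

# Usage: two systems push overlapping views of the same part.
hub = RegistryHub()
hub.push("plm", {"part_number": "X-100", "revision": "B", "cad_model": "x100.prt"})
hub.push("erp", {"part_number": "X-100", "supplier": "Acme", "unit_cost": 4.20})

print(hub.view("X-100", "engineer"))      # revision and CAD model only
print(hub.view("X-100", "supply_chain"))  # supplier and cost only
```

Each source remains authoritative for its own data; the hub simply aggregates the pushed views and serves the slice appropriate to the person asking.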
Once these views are built, the enterprise will truly have a single view of its product data: the right view given the user's context, from which the best operational decisions can be made. With this foundation in place, users will be better empowered to make the right decisions today, and the stage is set for using this centralized aggregation to drive data quality back into the contributing systems.