Enterprises around the world generate data at enormous rates, but they rarely use it to its fullest potential. Analytics for multi-million dollar business insights, successful artificial intelligence projects and compliance with high-risk federal regulations all hang by a common thread: high data quality.
Gartner predicts that by 2022, 90% of corporate strategies will explicitly mention information as a critical enterprise asset. If this critical asset is not guaranteed to be of high quality, enterprises will pay the price: literally, in the form of hefty fines and business inefficiencies, or in other ways, such as a damaged reputation with customers and partners, missed opportunities and inaccurate decisions.
Data-driven organizations should not be taking these risks. But what constitutes “good” data quality? According to the International Association for Information and Data Quality, high-quality data entails consistently meeting worker and end-customer expectations. These criteria must be defined in the data quality strategy and should be enabled by automated, scalable and collaborative software. But before you embark on the search for data quality software, ensure you are set up for success by asking these four questions.
Have we built a business case that highlights the expected value of strong data quality and risks of poor data quality specific to our organization?
The Global Data Management Community (DAMA) reports that organizations spend between 10% and 30% of revenue handling data quality issues. IBM estimated the yearly cost of poor data quality in the US alone at $3.1 trillion. These are only a couple of examples that illustrate the enormous financial and reputational impact poor-quality data can have on an organization.
Your business case should focus on the business processes that would benefit from high data quality or that are being impacted by poor data quality. Whenever possible, include quantitative measures to justify the initiative. Make sure you have a clear scope for your data quality initiative, guided by your business priorities and objectives. A business case needs to clearly delineate the use cases, associated data and business benefits. It also needs to include a roadmap that describes how to reach data quality milestones, sequenced by expected business benefit or ease of accomplishment. Remember to nominate a business champion for the use cases when building a story for data quality improvements. This person will actively champion your organization’s data quality adoption and implementation.
Kalypso advocates a “fit-for-purpose” data quality strategy. It implies taking a consumer-led data approach to identifying the data elements related to the business case under consideration and tracing those data elements through their lifecycle, from the source systems and producers through their storage, access, usage, transformation, retention and ultimate disposal. This consumer-led approach helps build a business justification for fixing poor data quality.
Have we defined a data quality lifecycle in our data strategy?
To start implementing a data quality strategy, the organization should clarify what it is trying to achieve. It can then define its data purpose and prioritize its business needs.
Additionally, it is imperative to recognize that data quality is not a one-time job. It must be managed throughout the data lifecycle: planning, control, development, operations and remediation.
Understanding how the organization obtains, creates, moves, transforms, stores and uses data, as well as how it discovers and remediates issues with it, should be part of building the data strategy and ongoing data management practices.
Creating a supportive culture that shares updates regularly and provides incentives for employees who promote the adoption of the data strategy is equally important to the success of a data quality initiative.
Do we have a governance plan that considers data quality operations?
Understanding the importance of a systematic approach to data governance and having a formal data governance program in place are paramount. For implementation, leading data governance programs rely on a technology platform to ensure that employees can find the data they need, understand it and trust it to make better decisions. It is important to assign responsibilities to each employee and ensure that they are engaged with their data governance roles.
For example, a business user might call out the need for a new business rule. A data quality analyst is then expected to collaborate with the organization’s stewards to produce a specification of the data quality rule that implements the request in a way that aligns with the data governance plan, meets standards for quality and fitness for purpose, and serves the various enterprise entities that use the related data. There must be a feedback loop that involves the business users and confirms that the request was fulfilled end-to-end. Data quality relies on a well-defined workflow to ensure changes are appropriate, controlled, planned, reviewed by all relevant parties and communicated in a timely manner. It is crucial to model and govern the workflows that enable data quality to be operationalized. Data quality software supports the workflow by enforcing review and approval steps and by tracking metadata such as lineage, authorship and creation or modification dates.
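To make the workflow concrete, here is a minimal sketch in Python of how a rule specification might carry authorship metadata and enforce a review step before approval. All names here are hypothetical illustrations, not the API of any particular data quality product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataQualityRule:
    """A rule specification with the metadata a governance workflow tracks."""
    name: str
    specification: str          # human-readable definition of the rule
    author: str                 # who drafted the specification
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "draft"       # workflow states: draft -> reviewed -> approved
    reviewers: list = field(default_factory=list)

    def review(self, reviewer: str) -> None:
        """A steward signs off on the specification."""
        self.reviewers.append(reviewer)
        self.status = "reviewed"

    def approve(self) -> None:
        """Enforce the workflow: approval requires a prior review."""
        if self.status != "reviewed":
            raise ValueError(f"Rule '{self.name}' must be reviewed before approval")
        self.status = "approved"

# The business user's request becomes a specification that moves
# through review and approval, leaving an audit trail behind it.
rule = DataQualityRule(
    name="customer_email_not_null",
    specification="Every customer record must have a non-empty email address",
    author="dq_analyst",
)
rule.review("data_steward")
rule.approve()
```

In a real platform these states, reviewers and timestamps would be enforced and recorded by the tool itself; the point is that the workflow, not the rule logic, is what makes the change controlled and auditable.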
Have we established an open and collaborative corporate culture around data quality?
To achieve high data quality, companies need to agree on standards and rules across functions. The definition, criticality, and tolerance threshold of a data quality rule might change when it is evaluated as a requirement of a business unit versus a data domain. Companies often struggle with this when business departments do not share information with one another. The problem is further aggravated when systems are designed and implemented at a functional level without an overarching enterprise-level architecture in mind. The lack of an enterprise architecture leads to silos of data that are difficult to integrate and are often invisible to those not possessing the requisite tribal knowledge to leverage their benefits.
The lack of visibility into what other teams are doing can be alleviated by implementing an open enterprise-wide knowledge base. Establish a crowd-sourcing data quality approach with your team and be open to requests from all functions across the enterprise. Consider adopting tools that allow you to capture project requirements, assign tasks to specific users, enable collaboration and empower users to openly announce and report their project statuses. You can build this culture of collaboration within your Microsoft Office Suite with Teams and SharePoint, or set up a tool only for this purpose, such as Confluence or a self-hosted wiki.
Using data quality software to implement and enforce data quality rules is an important step towards ensuring high data quality. However, you should not think about this as a purely tool-specific problem. In fact, in 2020, Gartner changed the name of the related Magic Quadrant from “Magic Quadrant for Data Quality Tools” to “Magic Quadrant for Data Quality Solutions.” This change was mainly to highlight the fact that effective data quality practices require more than a tool.
You should aim to see the whole picture and visualize the entire data quality process. To begin this journey, remember to ask yourself these four key questions.
Mayra is an EIM Analyst at Kalypso who delivers Enterprise Information Management solutions with experience in Financial Services and Supply Chain. She focuses on improving business performance through KPI Identification frameworks and data governance.