Conventional data integration is no longer adequate for real-time connectivity, self-service, automation, and organisation-wide digital transformation. Today, many organisations can collect data from various sources, but they struggle to collate, integrate, process, curate, and transform it. These capabilities have become essential to delivering a holistic view of the business. Enter Data Fabrics.
Data Fabrics are a relatively new phenomenon. Originating from data management, they tie into storage, platform integration, and other areas. They provide a data architecture framework that makes data management more agile in a complex, diverse, and distributed environment.
What is Data Fabric?
A Data Fabric is a fusion of data architecture and technology that reduces the complexity of managing diverse types of data. It is deployed across multiple platforms and locations, and it spans multiple database management systems. A Data Fabric aims to provide a consistent, consolidated user experience and access to real-time data.
Gartner defines a data fabric as: ‘A design concept that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable, and inferenced metadata assets to support the design, deployment, and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.’
Data Fabric collects and analyses all forms of metadata. Contextual information is the pillar of a Data Fabric architecture. There needs to be a well-connected pool of metadata that allows the Data Fabric to identify, connect, and analyse all kinds of metadata including business, operational, digital, and social.
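As a rough sketch of that idea (not any vendor's actual API; all class and asset names here are invented for illustration), a connected metadata pool can be modelled as a small graph that links technical, business, and operational metadata so the fabric can traverse from one kind to another:

```python
from dataclasses import dataclass, field

# Illustrative metadata graph: nodes are metadata assets (tables,
# business terms, pipeline runs); edges connect related assets.
@dataclass
class MetadataAsset:
    name: str
    kind: str                      # e.g. "technical", "business", "operational"
    properties: dict = field(default_factory=dict)

class MetadataGraph:
    def __init__(self):
        self.assets = {}           # name -> MetadataAsset
        self.edges = {}            # name -> set of connected asset names

    def add(self, asset):
        self.assets[asset.name] = asset
        self.edges.setdefault(asset.name, set())

    def connect(self, a, b):
        # Undirected link, e.g. a physical table to the business term it implements
        self.edges[a].add(b)
        self.edges[b].add(a)

    def related(self, name, kind=None):
        # Walk one hop out, optionally filtering by metadata kind
        return [self.assets[n] for n in self.edges.get(name, set())
                if kind is None or self.assets[n].kind == kind]

# Usage: link a table to a glossary term and to the pipeline run that loaded it
g = MetadataGraph()
g.add(MetadataAsset("sales_fact", "technical", {"rows": 1_200_000}))
g.add(MetadataAsset("Quarterly Revenue", "business"))
g.add(MetadataAsset("nightly_load_2024_01_15", "operational"))
g.connect("sales_fact", "Quarterly Revenue")
g.connect("sales_fact", "nightly_load_2024_01_15")
print([a.name for a in g.related("sales_fact", kind="business")])
```

The point of the sketch is the connectivity: once business, operational, and technical metadata live in one well-connected pool, the fabric can answer questions such as "which business terms does this table serve?" with a simple graph traversal.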
What does Data Fabric do?
Data Fabric technology provides centralised access to data and connects computing resources. It has evolved into an architecture encompassing IT service capabilities that provide frictionless, real-time integration and access across the diverse data silos of large organisations with huge and varied data sources. To put it simply, Data Fabric enables an organisation to make better use of the data it already has. It facilitates self-service data consumption and automates the data integration process. All of this leads to more organised, real-time insights.
Data Fabric enables integration of data from everywhere using whatever integration style is necessary: bulk/batch data movement, data replication/synchronisation, message-oriented data movement, data virtualisation, stream data integration, and so on.
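To make those styles concrete, here is a hedged sketch of how a fabric layer might expose several of them behind one uniform interface. The class and method names are invented for illustration, not taken from any real product:

```python
from abc import ABC, abstractmethod

# Illustrative only: one uniform interface over several integration styles.
class IntegrationStyle(ABC):
    @abstractmethod
    def move(self, records):
        """Deliver records from source to target."""

class BatchMovement(IntegrationStyle):
    def move(self, records):
        # Bulk/batch: collect everything, deliver in one pass
        return [("batch", list(records))]

class StreamIntegration(IntegrationStyle):
    def move(self, records):
        # Streaming: deliver each record individually as it arrives
        return [("stream", r) for r in records]

class Replication(IntegrationStyle):
    def move(self, records):
        # Replication/synchronisation: mirror records to a replica copy
        return [("replica", list(records))]

def integrate(records, style: IntegrationStyle):
    # The fabric selects whichever style the workload requires
    return style.move(records)

print(integrate([1, 2, 3], BatchMovement()))
print(integrate([1, 2, 3], StreamIntegration()))
```

The design point is that consumers call one `integrate` entry point while the choice of movement style stays a pluggable, swappable detail, which is how a fabric can mix batch, streaming, and replication without changing the consuming applications.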
Modern technologies like semantic knowledge graphs, active metadata management, and embedded ML are also essential to realise the true potential of Data Fabric. Not all of these capabilities need to be in place on day one, but the intention to include them should be built in from the start.
How does Data Fabric help?
- Gives users a real-time, 360-degree view of the business
- Lowers the cost of owning, operating, and scaling legacy systems
- Reduces data inconsistencies by utilising the best and most accurate source of data
- Reduces the time required to generate business insights
- Future-proofs infrastructure by allowing new deployments and integrations without affecting legacy systems
“Major auto manufacturers are developing autonomous vehicles faster and more efficiently through their ability to manage, store, and move global test data,” says Ted Dunning, Chief Technologist for Data Fabric at Hewlett Packard Enterprise.
The main objective of creating a Data Fabric is not new. Simply stated, it is the ability to deliver the right data at the right time, in the right shape, to the right decision-maker, irrespective of how and where it is stored. In an increasingly complex and dynamic business world, this is potentially a huge competitive advantage.