From NZ Manufacturer, October 2023
– Adam Sharman, Senior Partner, dsifer
As organisations look to embrace the benefits of digitisation through automation, IoT and AI, a trend is emerging that takes advantage of the growing availability, specificity and cost-effectiveness of a plug-and-play approach to developing an IT and data architecture.
Advances in functionality, connectivity and speed of deployment mean organisations can pick and choose applications relevant to the specific requirements of their industry, operations and environment, at a time when the vast majority of IT professionals report significant time wasted on bloated, generic applications.
Whilst this trend towards a plug-and-play system architecture has multiple benefits, it requires careful consideration of the underpinning data architecture to ensure not only that data privacy and sovereignty are maintained, but also that the data generated by the plug-and-play ecosystem is collected and utilised as a performance and strategic asset.
In our interactions with manufacturing businesses in New Zealand and the UK, we are seeing the importance of establishing an independent, centralised data architecture as an enterprise asset: one that supports the plug-and-play IT architecture, receives data feeds from the operational systems, and combines these source datasets to create meaningful and actionable insights.
There is still much debate on the relative benefits of physical centralisation (a data lakehouse) versus virtual centralisation (a data mesh). In our experience, both have their place depending on the organisation’s context.
However, the key characteristics of a data architecture that supports a plug-and-play IT approach are that the data assets, architecture and governance are independent of the IT applications; data silos are removed to support integrated enterprise insight; and access points are decoupled to accelerate collaboration whilst maintaining data integrity.
Organisations that have adopted this approach report multiple benefits, including:
Enhanced Data Governance and Control
Independent data architecture empowers organisations to establish comprehensive data governance policies. It allows for centralised control over data access, usage, and security. By breaking down data silos and implementing clear governance frameworks, organisations can ensure data is accurate, reliable, and compliant with industry regulations.
Improved Data Quality and Consistency
Separating data from specific applications reduces the risk of data inconsistencies and errors. With an independent data architecture, organisations can implement data quality checks, data cleansing processes, and data lineage tracking to maintain high-quality data. This, in turn, leads to more reliable insights and better decision-making.
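As an illustration only (not taken from the article), a data quality gate of this kind can be sketched in a few lines of Python. The record fields (machine_id, units_produced, shift_date) and the validation rules are hypothetical; the point is that checks live in the data layer, independent of any one application:

```python
# Illustrative sketch: a simple data quality gate applied to records arriving
# from an operational system before they enter the central data layer.
# All field names and rules here are hypothetical, for illustration only.
from datetime import date


def validate_record(record):
    """Return a list of data quality issues found in one production record."""
    issues = []
    if not record.get("machine_id"):
        issues.append("missing machine_id")
    units = record.get("units_produced")
    if not isinstance(units, int) or units < 0:
        issues.append("units_produced must be a non-negative integer")
    if not isinstance(record.get("shift_date"), date):
        issues.append("shift_date must be a date")
    return issues


def partition_records(records):
    """Split incoming records into clean rows and quarantined rows with reasons."""
    clean, quarantined = [], []
    for record in records:
        problems = validate_record(record)
        if problems:
            quarantined.append((record, problems))
        else:
            clean.append(record)
    return clean, quarantined
```

Quarantining failed rows with their reasons, rather than silently dropping them, is what makes lineage tracking and later cleansing possible.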
Agility and Scalability
An independent data architecture provides the agility needed to adapt to rapidly changing business needs and technological advancements. Organisations can easily integrate new data sources, scale their infrastructure, and experiment with different analytics tools without disrupting their core data ecosystem.
Vendor Independence and Flexibility
Relying on a single vendor or proprietary data system can lead to vendor lock-in and limited flexibility. An independent data architecture mitigates this risk by allowing organisations to select best-of-breed solutions for each aspect of their data pipeline. This approach provides vendor independence, ensuring that the organisation is not bound to a single provider’s roadmap or pricing structure.
Compliance and Security
Maintaining compliance with data privacy regulations (such as GDPR, CCPA, or HIPAA) is a growing concern for organisations. Independent data architecture enables better control over sensitive data, access permissions, and audit trails, making it easier to meet regulatory requirements. Additionally, it enhances data security by reducing the attack surface and allowing organisations to implement robust security measures.
Cost Savings
An independent data architecture can lead to cost savings in various ways. By choosing the most cost-effective solutions for each data management component, organisations can reduce infrastructure and software costs. Moreover, improved data quality and governance reduce the costs associated with data errors and compliance violations.
A data ecosystem that enables fast learning
By creating a data layer that is fed directly from each operational system, it is possible to link datasets that were previously difficult to combine, for example payroll data linked with production data. This creates a “nerve centre” for the digitised operations of manufacturing activity and affords the ability to really understand the interrelationships between supply chain, inventory control, demand forecasting, production planning, shop floor production, back-end support, and personnel management.
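As a hypothetical sketch of that kind of linkage, assuming pandas and entirely invented column names and figures, payroll and production feeds keyed on the same shift can be joined in the data layer to derive a cross-silo metric such as labour cost per unit:

```python
# Illustrative sketch: joining two previously siloed feeds in the central
# data layer. All column names and figures are invented for illustration.
import pandas as pd

# Feed from the payroll system, keyed by shift date and production line.
payroll = pd.DataFrame({
    "shift_date": ["2023-10-02", "2023-10-02"],
    "line": ["A", "B"],
    "labour_cost": [1800.0, 2100.0],
})

# Feed from the shop floor production system, keyed the same way.
production = pd.DataFrame({
    "shift_date": ["2023-10-02", "2023-10-02"],
    "line": ["A", "B"],
    "units_produced": [900, 700],
})

# Join on the shared keys and derive a metric neither system holds alone.
combined = payroll.merge(production, on=["shift_date", "line"])
combined["labour_cost_per_unit"] = (
    combined["labour_cost"] / combined["units_produced"]
)
```

Neither source system can answer “what does a unit cost in labour on each line?” on its own; the independent data layer can, because it holds both feeds against shared keys.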
Visibility of these real data points makes it possible to implement initiatives with pinpoint precision, while also allowing the effectiveness of each initiative to be assessed. This is the true goal of a data ecosystem: supporting decision-making in every aspect of an innovative manufacturer’s operations.
Establishing an independent data architecture, through either a centralised data lakehouse or a virtually integrated data mesh, is key to optimising a plug-and-play IT ecosystem, maintaining enterprise data sovereignty and supporting the strategic, operational and commercial value of data as a currency.