Big data is dead, long live big data

By Rafael Garcia-Navarro, Head of Analytics, Experian UK&I

The origin of the term big data is controversial at best, with fervent supporters and equally passionate detractors.

It is common for industries in the midst of a revolution to devise a catchy tagline to rationalise the uncertainty of the change ahead – big data was one such example, but by no means the only one. Another from the digital advertising space was the acronym DMP, or Data Management Platform: in effect an attempt to articulate to non-technical marketing audiences the need for a lambda data architecture that handles both batch and stream data processing across digital channels, all in pursuit of the holy grail – intelligent, data-driven decisions.

Whilst personal preferences might drive the use of certain terminology, the reality is that many of the challenges highlighted by these terms have long existed, and their complexity will continue to increase as digitalisation takes a stronger hold of consumer behaviour.

The key change in the industry has not been the size of data, which has undoubtedly increased exponentially – financial services firms and telcos were dealing with big data long before the term was widely accepted. The biggest development has been the democratisation of technology through open source software and cloud-based environments, which have unleashed data processing power previously available only to large multinationals via costly data appliances and mainframes.

This democratisation process has eradicated technological and financial barriers, and has made it possible to capitalise on the opportunities that the processing of data at scale offers for a digital and knowledge-based economy.

However, maximising these opportunities critically depends on the ability of organisations to place data at the centre of their culture, and on the implementation of scalable data architectures to manage the complexity and diversity this discipline inherently carries.

So how do we begin the journey?

A practical approach is to break down the design process into five distinct phases:

  1. Implementing a hybrid data infrastructure model, where careful consideration is given to which components of the architecture can be implemented in the cloud and which need to be deployed in-house – e.g. PII (Personally Identifiable Information) vs. non-PII data, low-latency vs. batch data processing, etc.
  2. Underpinned by open source technologies as the foundation of the enterprise data architecture
  3. To deliver a lambda architecture that operates across the full range of data velocity – i.e. batch and stream data processing (see the first sketch after this list)
  4. Supported by a mathematical optimisation capability to deal with one of the most complex statistical challenges – highly multidimensional optimisation problems (see the second sketch after this list)
  5. To deliver both offline analytics and real-time, low-latency decisions
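To make the lambda architecture of phase 3 more concrete, the sketch below shows the pattern in miniature: a batch layer periodically recomputes views over the full master dataset, a speed layer incrementally tracks events that have arrived since the last batch run, and a serving layer merges the two at query time. It is a minimal illustration only – the event shape and function names are hypothetical, and a production deployment would sit on distributed batch and streaming engines rather than in-memory Python.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event shape: (customer_id, channel, timestamp)
MASTER_DATASET = []      # immutable, append-only record of all events (batch layer input)
batch_view = Counter()   # precomputed view, rebuilt periodically by the batch layer
speed_view = Counter()   # incremental view of events not yet absorbed by a batch run


def ingest(event):
    """All events land in the master dataset and the speed layer immediately."""
    MASTER_DATASET.append(event)
    customer_id, channel, _ = event
    speed_view[(customer_id, channel)] += 1


def run_batch_layer():
    """Recompute the batch view from the full master dataset, then reset the speed layer."""
    global batch_view, speed_view
    fresh = Counter()
    for customer_id, channel, _ in MASTER_DATASET:
        fresh[(customer_id, channel)] += 1
    batch_view = fresh
    speed_view = Counter()   # recent events are now covered by the batch view


def query(customer_id, channel):
    """Serving layer: merge the stale-but-complete batch view with the fresh speed view."""
    key = (customer_id, channel)
    return batch_view[key] + speed_view[key]


if __name__ == "__main__":
    ingest(("cust-1", "display", datetime(2017, 5, 1, 9, 0)))
    ingest(("cust-1", "display", datetime(2017, 5, 1, 9, 5)))
    run_batch_layer()                                             # batch view now holds both events
    ingest(("cust-1", "display", datetime(2017, 5, 1, 9, 10)))    # arrives after the batch run
    print(query("cust-1", "display"))                             # 3: two from batch, one from speed
```

The design choice this illustrates is that neither layer alone is sufficient: the batch view is complete but always slightly stale, while the speed view is fresh but only covers recent activity, so decisions are served from their combination.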
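Phase 4 concerns high-dimensional optimisation – for example, allocating a fixed budget across many channels once response models have been fitted. The fragment below is a hedged illustration only: the diminishing-returns response curve and its coefficients are placeholders rather than fitted models, and it uses scipy's general-purpose SLSQP solver; a real deployment would plug in the organisation's own response models and business constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical diminishing-returns response curves: expected responses for spend x in channel i.
# The coefficients are placeholders – in practice they would come from fitted response models.
coeff = np.array([120.0, 95.0, 60.0, 40.0, 25.0])   # one entry per channel
TOTAL_BUDGET = 1_000.0
n_channels = len(coeff)


def expected_response(spend):
    """Concave response function, so returns diminish as spend in a channel grows."""
    return np.sum(coeff * np.log1p(spend))


def objective(spend):
    """Minimise the negative response, i.e. maximise expected response."""
    return -expected_response(spend)


result = minimize(
    objective,
    x0=np.full(n_channels, TOTAL_BUDGET / n_channels),   # start from an even split
    method="SLSQP",
    bounds=[(0.0, TOTAL_BUDGET)] * n_channels,            # no negative spend per channel
    constraints=[{"type": "eq",                           # spend must sum to the budget
                  "fun": lambda spend: np.sum(spend) - TOTAL_BUDGET}],
)

print("optimal allocation:", np.round(result.x, 1))
print("expected responses:", round(-result.fun, 1))
```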
[Article diagram]
We are living through an era of unprecedented opportunities offered by easily accessible technologies and exploding data sources. These opportunities will be reaped by those willing to invest the time and effort to build systems that combine computer science and mathematics, creating competitive advantage through scientific, data-driven decision making.

The longevity of the term big data might be open to discussion, but the complexities it has managed to highlight across the industry will persist, and indeed continue to increase, long after big data as a marketing tagline is formally declared dead.
