by Yash Mehta

Traditional ETL processes are blocking digital expansion – it’s time for an alternative

Opinion
Sep 13, 2021
Big Data | Data Management | Data Science

ETL has lost the race to big data. The solution is as simple as reordering the phases of the traditional integration process.


Let us rewind to 1989, when the internet first reached the common computer user. Data processes then were sequential, static, and inflexible. Integration was revolutionary at the time, and ETL (extract, transform, and load) was cutting-edge technology beyond the reach of the average business.

Fast forward to today: the internet has billions of users producing an unimaginable volume of data every moment, across new system landscapes where everything is on demand. Like many processes from that earlier era, traditional ETL, built for on-premises landscapes, no longer performs. Despite evolving over the years, traditional ETL processes have lost the race against big data.

According to Gartner, only 20% of analytic insights will deliver key business outcomes, and inaccurate, insufficient data is one of the major reasons.

Disadvantages of traditional ETL

Traditional ETL has the following disadvantages:

  • Because the business requirements behind every transformation are unique, data engineers must write custom programs and scripts. This demands specialized, non-transferable skills and makes the code base complicated to manage.
  • ETL carries continuous overhead costs and demands lengthy re-engineering cycles by dedicated data engineers.
  • In ETL, data scientists receive data sets only after engineers have transformed and refined them. This not only makes the process rigid but also limits the agility of the outputs.
  • ETL was originally designed for periodic batch processing and does not support continuous, automated data streaming. Its data processing, ingestion, and integration performance fall short of real-time requirements.

In addition to all of the above, the enterprise landscape's shift from on-premises to cloud changed data integration trends, leading to an explosive rise in the volume of data produced and consumed in real time.

Data preparation processes were initially designed for the warehouse model, in which data flows are planned and structured up front. That does not fit the contemporary setup, where everything is hosted in the cloud.

Here, the data lake model is more valuable. A lake captures data from multiple sources in one place before pushing the sets on for refinement. Instead of transforming each data set separately at its source, all the data is collected in the lake and then transformed at the destination.
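
As a rough illustration of this collect-first pattern, here is a minimal Python sketch. The file paths and record shapes are invented for illustration and are not tied to any particular lake product; the point is simply that raw data from differently shaped sources lands in one place untouched, and refinement happens later at the destination.

```python
import json
from pathlib import Path

# Hypothetical "lake" landing zone: raw data is stored as-is,
# with no per-source transformation on the way in.
LAKE = Path("lake/raw")
LAKE.mkdir(parents=True, exist_ok=True)

# Two sources with different shapes; neither is reshaped before landing.
crm_rows = [{"cust_id": 1, "full_name": "Ada Lovelace", "country": "UK"}]
billing_rows = [{"customer": 1, "amount_usd": 42.5, "billed_at": "2021-09-01"}]

for name, rows in [("crm", crm_rows), ("billing", billing_rows)]:
    # Land each source as newline-delimited JSON; unifying and refining
    # the schemas is deferred to the transformation step at the destination.
    with (LAKE / f"{name}.jsonl").open("w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
```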

A better approach

Handling this volume became nearly impossible for traditional ETL processes, which led to the rise of an alternative known as ELT.

In ELT, data moves from the source to the target system without the business-logic-driven transformations being applied up front. ELT simply reorders the phases of traditional integration, with the transformation happening at the end.

The revised steps work as follows:

  • Extraction – Capture raw data sets from distributed sources such as on-premises apps, SaaS apps, and databases.
  • Loading – Load the data directly into the target system, carrying the source schema and data types along in the process. The extracted data lands in a data store, whether a data lake, a data warehouse, or a non-relational database.
  • Transformation – The transformation occurs in the target system, primarily using scripts in the data lake or warehouse, with third-party tools used for reporting and other purposes (see the sketch just below).
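
To make the reordering concrete, here is a minimal ELT sketch in Python. SQLite stands in for the target warehouse, and the table and column names are illustrative rather than drawn from any specific tool: raw rows are loaded first, and the business-logic transformation runs afterwards as SQL inside the target.

```python
import sqlite3

# SQLite stands in for the target data warehouse or lake engine.
con = sqlite3.connect(":memory:")

# Extract: raw rows pulled from a source system (hard-coded here for brevity).
raw_orders = [
    ("ord-1", "  alice ", "2021-09-01", "120.00"),
    ("ord-2", "Bob",      "2021-09-02", "75.50"),
]

# Load: land the data as-is, preserving the source shape and text types.
con.execute(
    "CREATE TABLE raw_orders (order_id TEXT, customer TEXT, order_date TEXT, amount TEXT)"
)
con.executemany("INSERT INTO raw_orders VALUES (?, ?, ?, ?)", raw_orders)

# Transform: business logic runs last, inside the target, as plain SQL.
con.execute("""
    CREATE TABLE orders AS
    SELECT order_id,
           UPPER(TRIM(customer)) AS customer,
           DATE(order_date)      AS order_date,
           CAST(amount AS REAL)  AS amount
    FROM raw_orders
""")

print(con.execute("SELECT * FROM orders").fetchall())
```

Because the untransformed rows remain in the target, the same raw table can be reshaped again later under new rules without re-extracting from the source.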

That said, the ELT process has its own limitations, which may not be a challenge today but could cause unwanted disruption in the future. For example:

  • Compliance is a major bottleneck with ELT. Because data is loaded before it is encrypted or masked, compliance with privacy regulations is vulnerable to compromise.
  • ELT requires advanced infrastructure to keep up with contemporary storage technologies such as data lakes and warehouses. Data teams have to continuously slice the sets into leaner feeds for analytics.
  • ELT offers insufficient connectivity to legacy landscapes, mostly on-premises systems. This will continue to be an issue until those systems become obsolete.

The future of data integration

As data integration becomes more agile, alternatives to ETL are gaining acceptance. One example is streaming data through the pipeline based on business entities rather than database tables. A logical abstraction layer at the start captures all the attributes of a business entity from all the data sources; the data is then collected, refined, and delivered as a finalized data asset.

In the extract phase, the data for the requested entity is captured from all the sources. In the transformation phase, the data sets are filtered, anonymized, and transformed according to predetermined rules for a digital entity instance. Finally, the sets are delivered to the big data store in the load phase.
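
As a rough sketch of this entity-centric flow, the Python below gathers one hypothetical Customer entity from two made-up sources, applies per-entity rules, and delivers it to a stand-in store. None of the names or structures reflect a specific vendor's API.

```python
from dataclasses import dataclass

# Hypothetical business entity: all attributes of one customer,
# assembled across source systems rather than per database table.
@dataclass
class Customer:
    customer_id: int
    name: str
    email: str
    total_spend: float

def extract(customer_id, crm, billing):
    """Extract: pull this entity's attributes from every source."""
    return {**crm[customer_id], **billing[customer_id]}

def transform(record):
    """Transform: filter, anonymize, and shape the data per entity rules."""
    return Customer(
        customer_id=record["customer_id"],
        name=record["name"].title(),
        email="***masked***",                  # anonymization rule
        total_spend=round(record["total_spend"], 2),
    )

def load(entity, store):
    """Load: deliver the finished entity instance to the big data store."""
    store.append(entity)

# Made-up source data keyed by entity ID, plus a list standing in for the store.
crm = {1: {"customer_id": 1, "name": "ada lovelace", "email": "ada@example.com"}}
billing = {1: {"total_spend": 1234.567}}
store = []

load(transform(extract(1, crm, billing)), store)
print(store)
```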

Such an approach processes thousands of business entities at a time and delivers enterprise-grade throughput and response times. Unlike batch processing, it continuously captures data changes from diverse source systems in real time; the changes are then streamed to the target data store via the business entity layer.

Ultimately, data collection, processing, and pipelining based on business entities produce fresh, integrated data assets. In terms of adoption, K2View's eETL is a classic example. The data fabric tool delivers analytics-ready data using the approach described above, moving data sets safely, securely, and swiftly from any source to any target data store, and supporting integration methods such as CDC, messaging, virtualization, streaming, JDBC, and APIs.

It also provides continuous support for complex queries while eliminating the need to run heavy table joins.

Conclusion 

As data volumes keep growing, advanced approaches to data integration will become a necessity. Enterprises that have not moved on from conventional practices should evaluate their data science stacks and aim for speedier, clearer, and smarter data streaming.