Modernising your ETL for Temenos Transact real-time progressive migration projects

ETL usually involves moving data from one or more sources, applying changes to it, and loading it into a single new destination. Traditional ETL platforms were built around this ‘extract, transform and load’ model, which is straightforward enough. With the explosion of data volumes and the adoption of the cloud, however, they have struggled to extend their support to the new data sources, data types, and cloud or hybrid environments that are prevalent today.
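
As a rough illustration of that classic model, the sketch below uses only the Python standard library and hypothetical file, table and column names (legacy_customers.csv, a customers table, CUSTOMER_ID, CUSTOMER_NAME, BALANCE); it is not tied to any particular ETL product.

    import csv
    import sqlite3

    # Extract: read rows from a hypothetical CSV export of a legacy system.
    def extract(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Transform: normalise the fields into the shape the target expects.
    def transform(rows):
        for row in rows:
            row["CUSTOMER_NAME"] = row["CUSTOMER_NAME"].strip().upper()
            row["BALANCE"] = float(row["BALANCE"])
        return rows

    # Load: write the prepared rows into a single destination table.
    def load(rows, db_path="target.db"):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS customers (id TEXT, name TEXT, balance REAL)")
        con.executemany(
            "INSERT INTO customers VALUES (:CUSTOMER_ID, :CUSTOMER_NAME, :BALANCE)", rows)
        con.commit()
        con.close()

    load(transform(extract("legacy_customers.csv")))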

As the market grows and becomes more diverse and complicated, new ‘data pipeline’ platforms have emerged to address the challenges many organisations face in terms of data volume, diversity and complexity.

When embarking on a data migration project from a legacy system to Temenos, whether cloud or on-premise, you should choose a modern data pipeline platform, like ConnectIQ, that includes capabilities such as:
  • No-code, self-service environment with an easy-to-use graphical interface for creating and managing your Temenos data pipelines.
    The platform should deliver an optimal user experience and streamlined workflows. Automated data extraction is a ‘must’, supporting complex data formats and many data sources without the need to write code or transformation functions to extract and normalise the data.

  • Data exchange, secure sharing and collaboration. The growth of data has created a number of new roles within organisations, such as data engineers and data analysts. All of these data consumers require a data platform that supports their needs and unique requirements while promoting collaboration, reusability and extensibility of data pipelines, and knowledge sharing around data and data preparation.

  • Data discovery, to enable users to find and manage their data in detail. A platform that promotes reusability and reduces duplicated work delivers increased productivity and better ROI. To support this, the platform must include a searchable data catalog and voice- and chat-driven data search.

  • Multi-cloud support, to securely manage data at scale and from any source, whether cloud, hybrid or on-premise, offering secure protocols, security controls and data encryption.

  • Data governance, privacy and security are essential features of a modern data pipeline platform, with capabilities to ensure data quality and consistency. It must include a complete data catalog of both business and technical metadata to help users better understand their data, as well as end-to-end data lineage showing every detail and every transformation of a data pipeline from source to target. In terms of data privacy, it must support masking, obfuscation and anonymisation of data with secure hashing (a minimal hashing sketch follows this list).

  • Data usability, to provide rich, analytics-ready data. The data needs to be accurate, highly usable and in a format that everyone understands. This has a big impact on the resulting analytics and enables greater visibility.

  • Support for multiple use cases, from data and cloud migrations to analytics and data science. The data pipeline platform must be able to act as a ‘single data hub’ for all data pipeline needs, including operational Business Intelligence (BI), reporting, and data science initiatives.

  • Data preparation, DataOps and data observability, to support the preparation, operationalisation, monitoring and auditing of data pipelines for different data consumers. DataOps capabilities must include the ability to process high volumes of data and concurrent data pipelines, support for both time-based and data-driven pipeline execution with the ability to re-run failed runs, and a complete audit trail with version control. In terms of data preparation, the platform must support data cleansing, blending and enrichment, advanced data transformations, and sophisticated ways to group, aggregate and slice-and-dice data.

  • Data cleansing and data deduplication. The core purpose of data cleansing is to identify incomplete, incorrect, inaccurate, irrelevant or duplicate data so that it can be corrected or removed, ensuring the quality and integrity of the data. Using ConnectIQ, data can be validated in real time during extraction, and any records that fail the validation checks become part of an exception list, to be corrected either automatically through transformation or directly in the legacy system (see the validation sketch after this list).
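
On the data privacy point above, the following minimal sketch shows one common way to mask and pseudonymise sensitive fields with secure hashing; the field names and the keyed SHA-256 approach are illustrative assumptions, not a description of how ConnectIQ implements it.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-securely-stored-key"   # assumption: key is held outside the pipeline

    # Pseudonymise an identifier with a keyed hash: the same input always yields
    # the same token, but the original value cannot be recovered from the token.
    def anonymise(value):
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    # Mask an account number so only the last four characters stay visible.
    def mask(account_no):
        return "*" * (len(account_no) - 4) + account_no[-4:]

    print(anonymise("CUST-100045"))
    print(mask("GB12BANK00001234"))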

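The exception-list pattern described in the data cleansing bullet can be sketched as follows; the validation rules and record layout are hypothetical, and this is only an outline of the pattern rather than ConnectIQ’s actual behaviour.

    # Hypothetical rules: every record needs a CUSTOMER_ID and an ISO-style date of birth.
    def validate(records):
        clean, exceptions = [], []
        for rec in records:
            errors = []
            if not rec.get("CUSTOMER_ID"):
                errors.append("missing CUSTOMER_ID")
            if rec.get("DATE_OF_BIRTH", "").count("-") != 2:
                errors.append("unparsable DATE_OF_BIRTH")
            if errors:
                # Exception list: fix via transformation or directly in the legacy system.
                exceptions.append({"record": rec, "errors": errors})
            else:
                clean.append(rec)
        return clean, exceptions

    clean, exceptions = validate([
        {"CUSTOMER_ID": "100045", "DATE_OF_BIRTH": "1980-04-12"},
        {"CUSTOMER_ID": "", "DATE_OF_BIRTH": "12/04/1980"},
    ])
    print(len(clean), "clean record(s),", len(exceptions), "for the exception list")
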
ConnectIQ is a fully featured, intelligent platform that can perform full Extraction, Transformation, Loading, Validation and Reconciliation of financial and static data, making it ideal for data migrations to Temenos, both cloud and on-premise. It is unique in its ability to merge and match data between different sources, ensuring a smooth and efficient approach when migrating the data into the new system. Data can be loaded through a real-time connection to the target system(s), or offline by creating flat files that can be loaded at a later stage.

With the ‘progressive’ (phased) migration approach, data is migrated in several runs over several weekends without interrupting the bank’s workday operations. Using ConnectIQ as a staging area allows the next portion of data to be migrated while, for data that already exists in the target, only the deltas are migrated, i.e. the additions and amendments accumulated since the last migration. In the progressive migration methodology, data reconciliation happens at run time during the migration.
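
A simple way to picture the delta step is to compare the current extract against the snapshot held in the staging area and keep only new or amended records. The sketch below assumes records are keyed by an identifier and compared via a fingerprint hash; it illustrates the idea, not ConnectIQ’s internal mechanism.

    import hashlib
    import json

    # A fingerprint of the whole record makes amendments easy to detect.
    def fingerprint(record):
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    # staged and current map a record key to the record itself; only additions and
    # amendments since the last run are returned for the next migration weekend.
    def delta(staged, current):
        return {
            key: rec
            for key, rec in current.items()
            if key not in staged or fingerprint(staged[key]) != fingerprint(rec)
        }

    staged  = {"ACC-1": {"balance": 100}, "ACC-2": {"balance": 50}}
    current = {"ACC-1": {"balance": 100}, "ACC-2": {"balance": 75}, "ACC-3": {"balance": 10}}
    print(delta(staged, current))   # only ACC-2 (amended) and ACC-3 (new) are migrated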

There are two different approaches:
  • Branch-based migration – migration is done branch by branch; all modules for a single branch are migrated over one weekend.
  • Module-based migration – migration is done per product, for all branches, in a single weekend. In this case, ConnectIQ’s staging area plays an important role in the delta migration of the modules migrated during the previous weekend(s).
