4 Steps of Data Management

True end-to-end client data management across the full client lifecycle means strategically transforming, and then executing, both onboarding and maintenance processes, supported by a sustainable data governance overlay.

At iMeta we help our clients analyse how their client data is currently sourced, ingested, validated and distributed to the relevant business systems. In addition, a rigorous Create, Read, Update, Delete (CRUD) analysis identifies who uses the data and how it is used across business processes. These foundational activities make it possible to plan the best possible use of client data when defining an end-state target operating model.
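A CRUD analysis of this kind is often captured as a matrix of attributes against systems. The sketch below is purely illustrative (the attribute and system names are assumptions, not iMeta's actual model), but it shows how such a matrix can answer "who owns this data?" and "who only consumes it?":

```python
# Illustrative CRUD matrix: for each client-data attribute, which business
# systems Create, Read, Update or Delete it. Names are hypothetical.
crud_matrix = {
    "legal_name": {"onboarding": "CRU", "kyc": "R", "settlement": "R"},
    "lei":        {"onboarding": "CR", "regulatory_reporting": "R"},
    "ssi":        {"settlement": "CRUD", "onboarding": "R"},
}

def owners(attribute):
    """Systems that create or update the attribute (its likely owners)."""
    return [sys for sys, ops in crud_matrix[attribute].items()
            if "C" in ops or "U" in ops]

def readers(attribute):
    """Systems that only consume the attribute."""
    return [sys for sys, ops in crud_matrix[attribute].items() if ops == "R"]
```

Querying the matrix (`owners("legal_name")`, `readers("legal_name")`) surfaces exactly the ownership and usage facts the target operating model needs.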

iMeta Proven


Legacy data must be cleansed (remediated) to create one, and only one, true record for each client. Cleansing is only effective when a systematic procedure is applied to correct, or entirely remove, superfluous and inaccurate information from the data set.

iMeta Accountable


After the data has been remediated, it is imperative to classify every client, counterparty and account, both against the firm’s business profile for each customer and against the regulations that may govern the conduct of business with that customer, such as KYC/AML, FATCA and OTC Derivatives Reform.
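In practice such classification is often expressed as simple attribute-driven tagging. The sketch below is a hypothetical example only (the attributes and the rules that trigger each regime are illustrative assumptions, not actual regulatory logic):

```python
def classify(client):
    """Attach illustrative regulatory classifications to a client record.
    Attribute names and trigger rules are assumptions, not real rules."""
    tags = {"KYC/AML"}  # baseline due diligence assumed for every client
    if client.get("us_person"):
        tags.add("FATCA")
    if client.get("trades_otc_derivatives"):
        tags.add("OTC-Derivatives-Reform")
    return tags
```

Each client record then carries its classifications forward into downstream processing, so the conduct-of-business impact is explicit rather than implied.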

iMeta Reliable


Once the required set of client data has been cleansed and classified, it needs to be consolidated so that the data set under ongoing management is as lean as it can possibly be. Managing a consolidated data set eliminates redundant or duplicative processes, freeing a firm’s data specialists for exception handling rather than core processing.
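Consolidation is commonly implemented by merging records from multiple sources into one "golden" record per client under survivorship rules. A minimal sketch, assuming a simple source-priority rule (source names and priorities are illustrative):

```python
def consolidate(records, survivorship):
    """Merge records sharing a client_id into one golden record per client.
    Each field is taken from the highest-priority source in `survivorship`.
    Source names and the priority ordering are illustrative assumptions."""
    golden = {}
    # visit records in survivorship order, so the best source fills fields first
    for rec in sorted(records, key=lambda r: survivorship.index(r["source"])):
        g = golden.setdefault(rec["client_id"], {})
        for field, value in rec.items():
            g.setdefault(field, value)  # first (highest-priority) value wins
    return golden
```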

iMeta Performance


With the operational data set ready, the next step is to configure the business rules that operate on the data for verification, validation and integration. Rules are needed to detect data anomalies (duplicates etc.) and to validate the quality of the reference data itself. Additionally, the data needs to be delivered to the right place, at the right time, for its specific business use; integration and delivery rules determine what data goes where, and when.
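These two kinds of rules can be sketched as follows. The anomaly checks, field names and delivery targets are illustrative assumptions only, not an actual rule set:

```python
def validate(records):
    """Verification rules: detect anomalies (missing reference data,
    duplicates) and return them as exceptions for specialists to handle."""
    exceptions, seen = [], set()
    for rec in records:
        if not rec.get("lei"):
            exceptions.append((rec["client_id"], "missing LEI"))
        if rec["client_id"] in seen:
            exceptions.append((rec["client_id"], "duplicate record"))
        seen.add(rec["client_id"])
    return exceptions

def route(record):
    """Delivery rule: decide which downstream systems receive the record,
    based on its classifications. Target system names are hypothetical."""
    targets = ["crm"]
    if "OTC-Derivatives-Reform" in record.get("classifications", ()):
        targets.append("regulatory_reporting")
    return targets
```

Separating validation (what is wrong) from routing (what goes where, and when) keeps core processing automatic, with specialists engaged only on the returned exceptions.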