External data retrieval using a custom adapter – Master Data Management – Salesforce Data Architect – Salesforce Certified Data Architect Study Guide

External data retrieval using a custom adapter

In scenarios where no existing adapter is suitable, a custom adapter can be written in Apex using the Apex Connector Framework. Coding such an adapter is beyond the scope of the exam and this book, but documentation can be found here: https://developer.salesforce.com/docs/atlas.en-us.234.0.apexcode.meta/apexcode/apex_connector_top.htm.

A custom adapter is generally required when an external data source is neither OData 2.0/4.0-compatible nor another Salesforce instance. In these scenarios, a custom adapter handles communication with the external data source so that its data can be presented, searched, and queried right within the Salesforce UI.
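To illustrate the shape of such an adapter, the following is a minimal sketch of a custom provider and connection using the Apex Connector Framework. The class names, the `Sample` table, and its columns are illustrative assumptions only, not a production implementation; the official documentation linked above covers the full API.

```apex
// Illustrative sketch only: class names and the 'Sample' table are assumptions.
// The provider declares the adapter's capabilities and hands out connections.
global class SampleDataSourceProvider extends DataSource.Provider {
    override global List<DataSource.AuthenticationCapability> getAuthenticationCapabilities() {
        // Anonymous access for simplicity; OAuth and basic auth are also supported.
        return new List<DataSource.AuthenticationCapability> {
            DataSource.AuthenticationCapability.ANONYMOUS
        };
    }
    override global List<DataSource.Capability> getCapabilities() {
        // Allow SOQL row queries against the resulting external objects.
        return new List<DataSource.Capability> {
            DataSource.Capability.ROW_QUERY
        };
    }
    override global DataSource.Connection getConnection(DataSource.ConnectionParams connectionParams) {
        return new SampleDataSourceConnection(connectionParams);
    }
}

// The connection maps the external system's schema to external object tables
// and services queries against them.
global class SampleDataSourceConnection extends DataSource.Connection {
    global SampleDataSourceConnection(DataSource.ConnectionParams connectionParams) {}

    override global List<DataSource.Table> sync() {
        // ExternalId and DisplayUrl are required columns for external objects.
        List<DataSource.Column> columns = new List<DataSource.Column>();
        columns.add(DataSource.Column.text('ExternalId', 255));
        columns.add(DataSource.Column.text('Name', 255));
        columns.add(DataSource.Column.url('DisplayUrl'));
        return new List<DataSource.Table> {
            DataSource.Table.get('Sample', 'Name', columns)
        };
    }

    override global DataSource.TableResult query(DataSource.QueryContext context) {
        // In a real adapter, call out to the external system here and map the
        // response into rows keyed by column name.
        List<Map<String, Object>> rows = new List<Map<String, Object>>();
        return DataSource.TableResult.get(context, rows);
    }

    override global List<DataSource.TableResult> search(DataSource.SearchContext context) {
        // Minimal stub; a real adapter would run the search externally.
        return new List<DataSource.TableResult>();
    }
}
```

Once the provider class is deployed, it appears as a selectable type when defining an external data source in Setup, and its synced tables become external objects.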

With an understanding of the various methods for constructing the golden record through relationships to data held in other systems (the system identifiers), let's explore how to consolidate data attributes from those systems.

Consolidating data attributes from multiple sources

When designing an MDM strategy, we need to determine which data attributes constitute the golden record, along with the source of each attribute. Where multiple sources may contain the same attribute, it may be necessary to implement a cleansing and de-duplication process to ensure that consistent attribute values are preserved throughout the enterprise. Depending on the MDM implementation method in use, it may be necessary to allow any source system to update an attribute on the golden record, which then invokes a process to push that update down to every source system holding that attribute. When implementing the golden record for the first time, data quality and classification are therefore crucial, and time should be given to a data cleansing, matching, and de-duplication strategy as part of the initial golden record population.

It may then be wise to consider locking down new record creation by users in the source systems, with permissions to create new records reserved almost exclusively for an integration or process user, so that duplicate records are caught in the system of entry. This way, users enter data in a single place, are told immediately if there are duplication issues, and the source systems remain in the cleanest possible state.
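The matching step of such a process can be sketched in Apex as building a normalized match key and grouping candidate duplicates before they reach the golden record. This is a simplified illustration only; the field choices and normalization rules are assumptions, and in practice Salesforce duplicate rules or dedicated MDM tooling would typically perform this work.

```apex
// Illustrative sketch of a simple matching key for de-duplication; the
// fields used and the normalization rules are assumptions, not a
// prescribed algorithm.
public class GoldenRecordMatcher {
    // Build a normalized key so that trivially different spellings collide.
    public static String matchKey(String name, String postalCode) {
        String n = name == null ? '' : name.trim().toLowerCase().replaceAll('[^a-z0-9]', '');
        String p = postalCode == null ? '' : postalCode.trim().toLowerCase().replaceAll('\\s', '');
        return n + '|' + p;
    }

    // Group incoming records by match key; any group with more than one
    // member is a candidate duplicate set for the cleansing process.
    public static Map<String, List<Account>> groupCandidates(List<Account> incoming) {
        Map<String, List<Account>> groups = new Map<String, List<Account>>();
        for (Account a : incoming) {
            String key = matchKey(a.Name, a.BillingPostalCode);
            if (!groups.containsKey(key)) {
                groups.put(key, new List<Account>());
            }
            groups.get(key).add(a);
        }
        return groups;
    }
}
```

A survivorship rule (for example, most recently updated wins, or a per-attribute source-of-truth ranking) would then decide which values from each candidate group are promoted to the golden record.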

Data issues will invariably occur over time, and reporting and analytics will help deduce their causes. User behavior can be influenced by training and by modifying the data entry system (through the use of validation and approval processes), and automated processes can be tweaked with changes to the rule base from which they operate.

With an understanding of data attribute consolidation across the multiple source systems under our belts, let's now look at how to preserve traceability and context when working with golden record data.