Federated Data Synchronization Service

Federated Data Synchronization Service is a centralized service that facilitates the synchronization of data between two systems. The Data Synchronization Service can import entities such as Knowledge Base Articles, Contacts, Metadata, and Analytics from systems outside the organization and make them accessible through an internal application, or vice versa.

In Luma Knowledge, the Data Sync Service is used to exchange information with supported third-party applications such as ISM, Luma Virtual Agent, and ServiceNow. For example, Knowledge Articles available in a third-party system such as ISM can be copied into Luma Knowledge and accessed through Knowledge Search. Likewise, Contacts in Luma Knowledge can be synchronized with systems such as Luma Virtual Agent so that End Users can access Knowledge Artifacts available in Luma Knowledge through the Virtual Agent.

Data synchronization can be configured between applications that support the HTTP REST protocol.

The following are the components of data synchronization between two systems:

Data Source - Source System
The source system refers to the third-party application from which data entities such as Knowledge Articles and Contacts are extracted and migrated to the target system. The synchronization service allows the administrator to create a data connector to the source system to download data.

For example, ServiceNow, ISM, and Luma Virtual Agent can be source systems.

Data Source - Destination System
The destination system refers to the target Serviceaide system to which imported data entities are uploaded. Data connectors for destination systems are available out of the box in the service.

For example, Luma Virtual Agent and Luma Knowledge can be destination systems.

Data Objects
Data Objects are the entities or information available in the source system that can be migrated to the destination system. They map to the actual endpoints from which data is downloaded or to which it is uploaded. Multiple Data Objects can be created for both Source and Destination data connectors.

For example, Data Source: ServiceNow
Data Objects: Contact, KB Article, CCTI, etc.

Data Destination: Luma Knowledge
Data Objects: Contacts, KB Article, Topics, etc.
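
To make this concrete, the hypothetical sketch below shows how Data Objects map to endpoints on the source and destination sides. The names, URLs, and structure are illustrative assumptions, not the service's actual configuration schema.

```python
# Hypothetical sketch: Data Objects as endpoint descriptions.
# All names and URLs below are invented for illustration.
source_data_objects = {
    "KB Article": {"url": "https://example.service-now.com/api/now/table/kb_knowledge",
                   "method": "GET"},
    "Contact": {"url": "https://example.service-now.com/api/now/table/sys_user",
                "method": "GET"},
}
destination_data_objects = {
    "KB Article": {"url": "https://example-luma-knowledge.test/api/artifacts",
                   "method": "POST"},
}
```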

Set Up Data Synchronization

The process of downloading data from a Source system and uploading it to the Destination system is called Synchronization. Data sync can be executed manually as needed or scheduled to run at a defined interval to import delta data, i.e., records that are new or changed since the last data sync.
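
As a rough illustration of what one sync run does, the sketch below fetches records changed since the last run and uploads them. It assumes a generic REST API with a hypothetical updated_since query parameter and uses the requests library; the actual service handles this internally.

```python
import requests  # assumed HTTP client for this sketch

def run_delta_sync(source_url, dest_url, last_sync_time, token):
    """Minimal sketch of a delta sync run: fetch records changed since
    the last run from the source, then upload them to the destination.
    The updated_since parameter is hypothetical; real systems expose
    their own query syntax for delta filtering."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(source_url,
                        params={"updated_since": last_sync_time},
                        headers=headers)
    resp.raise_for_status()
    for record in resp.json().get("records", []):
        requests.post(dest_url, json=record, headers=headers).raise_for_status()
```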

To set up data sync between the two systems, navigate to Federated Service in the left-hand navigation.


Configure Data Source

The first step is to create data connectors for the Source and Destination systems. To do so, follow these steps:

  1. On the Synchronization Management page, go to the Manage Data Source tab. The tab lists the configured Data Sources.

  2. Click on Add Data Source to create a data connector for the Source system.

    1. Add the following details for the Source system's data connector.

      1. Name: Refers to the name of the data connector to the Source system.

      2. Description: Detailed description of the data connector

      3. Version: Refers to the data connector version

      4. Authentication Required: Select the checkbox if the connection to the source system requires authentication.


    2. The new Data connector is added to the Data Source list.

    3. Click on the new data connector to Configure Authorization Profile. Add the following details:

      1. Select the Connection Type supported by the Source system. Currently, only ‘REST’ is supported.

      2. Select one of the following Authorization Types required for the connection (an illustrative authentication sketch appears after this procedure):

        1. Separate HTTP Authentication Call: Use this option when the source application requires HTTP URL authentication. HTTP authentication consists of the following fields; add the details as required:

          1. URL: Specify the URL for authentication

          2. Copy Auth Cookies: Select the option to copy authorization cookies.

          3. HTTP Method: Indicates the POST, GET, or PUT action to be performed on the data source.

          4. Request Header: Add the additional request information to be sent to the Source system with the HTTP request. Click on the ADD button and provide the Key name, Value, and data type as required.

          5. Request Parameter: Add Request parameters to send additional information to the server. Click on the ADD button and provide the Key name, Value, and data type as required.

          6. Form Data: Add Key name, Value, and data type to send form data to the Source system Webservice.

          7. Body: Add JSON payload to send data to the Source system web service.

          8. Response Type: Select the expected response type; Plain text or JSON.

          9. Extract Response Using: Add the path used to extract the required value from the response body.

            1. If the response type is JSON, add the JSON path.

            2. If the response type is Plain Text, add the Regular expression.

            3. If additional checks or transformations are required on the response, add custom code. Click on Edit to add a Subroutine.

        2. Basic Authentication: Use this option to authenticate to the Source System using basic authentication. It consists of the following fields:

          1. Set Authorization Value Type to Static Values and add the following details:

            1. Username: Specify the Username to be used to authenticate to the source system.

            2. Password: Specify the password to be used to authenticate to the source system.

            3. Authorization Header Name: By default, the Authorization Header Name is specified as Authorization by Luma Knowledge.

          2. Set Authorization Value Type to Extract Using HTTP Call and add the details as described in Separate HTTP Authentication Call above.

        3. Bearer Token: Use this option to add a unique authentication token required to connect to the source system.

      3. Add the required details and click Save.

  3. The next step is to Add Data Objects for the Data Source. These represent the entities to be migrated from the Source system to the Destination system. To add Data Objects, follow these steps:

    1. Click on the Manage Data Objects tab.

    2. Select the Data Source in the Source Name field.

    3. Click on the Add Data Object button to add New Data objects for the Data Source system.


    4. On the Add Data Objects tab, add the Entity Name, Connection Type, and Source Name. Connection Type and Source Name are auto-populated based on the Data Source selected in the previous step.
      Click Save.


    5. The new Data Object now appears in the list.

    6. Select the record to configure the Data Profile.
      Based on the data source connection type, add the Data Profile details for the Data Object (added in step 4 above) on the Configure Data Profile screen.

      1. Connection Configuration:

        1. Select the Connection Type.

        2. URL: Specify the URL for authentication

        3. Copy Auth Cookies: Select the option to copy Cookies for authorization

        4. HTTP Method: Indicates the POST, GET, or PUT action to be performed on the data source.

        5. Request Header: Add the additional request information to be sent to the Source system with the HTTP request. Click on the ADD button and provide the Key name, Value, and data type as required.

        6. Request Parameter: Add Request parameters to send additional information to the server. Click on the ADD button and provide the Key name, Value, and data type as required.

        7. Form Data: Add the Key name, Value, and data type to send form data to the Source system web service.

        8. Body: Add the JSON payload to send data to the Source system web service.

        9. Execute Each page after seconds: Indicates the time interval between each page download or upload. Add the required interval in seconds.

        10. Response Type: Select the expected response type, Plain text or JSON

        11. Extract Response Using: Add the path used to extract the records from the response body.

          1. If the response type is JSON, add the JSON path.

          2. If additional checks or transformations are required on the response, add custom code. Click on Edit to add a Subroutine.

        12. Response JSON Path Validator: Add the JSON path expression used to detect an empty response (page) from the source system.

      2. Paging Configuration: If the volume of data to be downloaded from the source system is large, the Synchronization service can be configured to download and upload the data in pages by enabling Pagination for the Source and Destination systems (an illustrative paging sketch appears after this procedure). Note: Both the Source and Destination systems must support page-wise data download and upload.
        To do so:

        1. Set Is Paged Response to True to enable pagination.

        2. Select the Pagination Strategy

          1. Query Parameter: Select Query_Param if the data source supports Query Parameter as Paging Parameter. Add the following details:

            1. Total Count JSON Path: This represents the total count of records to be downloaded. Add the JSON path that contains the count of records.

            2. Page Size: Represents the count of records per page.

            3. Paging Configuration Variables: Select the paging variable from the available options. You may also add a subroutine as a custom code.

          2. DYNAMIC_URL: Select this option to use custom subroutines to paginate and extract records.

            1. Next Page Validation Subroutine: Add a custom subroutine to validate page break

            2. Fetch Next Page Subroutine: Add a custom subroutine to fetch the next page.

          3. POST_PAYLOAD: Select this option to use the payload as the Paging Parameter.

      3. Add the details and click the Save button.

    7. Click on Create Object Fields to add information fields to be imported from the source.

      You may also click on Add Object Fields on the Data Objects list.


    8. On the Configure Data Object Fields screen, click on the Add Data Object Field button.


    9. Add the Object name and data type.

    10. Repeat the above step to add all the data object fields to be imported from the Source System.

    11. To delete a Data Object Field record, select the record and click on the respective Delete button.

  4. Follow Steps 2 and 3 to add a data connector for the Destination system.
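
To make the Authorization Profile options above concrete, here is a minimal sketch of the three authorization types using the requests library. The URL, credentials, and the access_token JSON path are illustrative assumptions, not the service's actual behavior.

```python
import requests

# 1. Separate HTTP Authentication Call: POST credentials to an auth URL,
#    then extract the token from the JSON response body using a JSON path
#    (here simply "access_token" at the top level). The URL and field
#    names are hypothetical.
auth_resp = requests.post(
    "https://source.example.com/api/auth",                 # URL
    json={"username": "sync_user", "password": "secret"},  # Body
    headers={"Accept": "application/json"},                # Request Header
)
auth_resp.raise_for_status()
token = auth_resp.json()["access_token"]  # "Extract Response Using" (JSON path)

# 2. Basic Authentication: send the username and password on each request.
basic_auth = requests.auth.HTTPBasicAuth("sync_user", "secret")

# 3. Bearer Token: add a pre-issued token to the Authorization header.
bearer_headers = {"Authorization": f"Bearer {token}"}
```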
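
Similarly, the Query Parameter paging strategy can be pictured as the loop below. The parameter names (page, page_size) and JSON paths (records, total_count) are assumptions for illustration; the real names come from your Data Profile configuration.

```python
import time
import requests

def fetch_all_pages(url, headers, page_size=100, delay_seconds=1):
    """Sketch of QUERY_PARAM paging: request page after page until the
    reported total count is reached or an empty page comes back.
    Parameter names and JSON paths are illustrative assumptions."""
    records, page = [], 1
    while True:
        resp = requests.get(url, headers=headers,
                            params={"page": page, "page_size": page_size})
        resp.raise_for_status()
        body = resp.json()
        batch = body.get("records", [])   # records extracted via a JSON path
        if not batch:                     # empty page: the JSON Path Validator case
            break
        records.extend(batch)
        total = body.get("total_count")   # Total Count JSON Path
        if total is not None and len(records) >= total:
            break
        page += 1
        time.sleep(delay_seconds)         # "Execute Each page after seconds"
    return records
```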

Manage Synchronization

Once the data sources are ready, a Synchronization pair is created to download the information from the Source system and upload it to the Destination system. The pair represents a link between a Source Data Object and a Destination Data Object to perform the data exchange. Here, you can also map the Data Object fields across the two data sources and configure the frequency at which the data sync must be executed.

To create a Synchronization pair, follow these steps:

  1. Navigate to the Manage Synchronization tab.

  2. Click on the Add Sync Object button.

  3. On the Configure Synchronization screen, add the following details:

    1. Add Sync Pair Name.

    2. Add Description.

    3. In Push Data To (Data Destination), select the Destination Data Source and Destination Data Object.

    4. In Fetch Data From (Data Source), select the Source Data Source and Source Data Object.

    5. Map Fields: Map or link the Destination Data Object Fields with the Source Data Object Fields. For each Destination Data Object Field, select the Mapping Type (an illustrative mapping sketch appears after this procedure). The mapping can be performed in the following ways:

      1. One to One mapping: Select ONE_TO_ONE when a field from the Source data model should be mapped directly to the corresponding field in the Destination data object model.

      2. Static value: Select STATIC_VALUE when the destination model field should be assigned a fixed value.

        1. Select STATIC_VALUE as Mapping Type.

        2. Add the value or JSON path in the Data Source Mapping Configuration.

        3. Select the Data type.

      3. Transformed Value: Select CUSTOM_FUNCTION when the value in the Source object field should be transformed before assigning it to the destination object field. You may also click on the Configure button to add a Subroutine.

    6. Enable Validation Rules, if required.

    7. If validation rules are applicable and enabled, add a Custom Subroutine to validate whether the data record should be imported.

    8. Once the details are added, Click the Save button.
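
To illustrate the three mapping types and the validation rule, here is a small sketch. The field names, the transformation, and the validation check are invented examples, not the service's actual subroutine interface.

```python
# Sketch of the three mapping types applied to one source record.
# Field names, the transform, and the validation check are examples only.

def transform_title(value):
    """CUSTOM_FUNCTION-style subroutine: transform the source value
    before assigning it to the destination field."""
    return value.strip().title()

def validate_record(record):
    """Validation-rule-style subroutine: import only published articles."""
    return record.get("status") == "published"

def map_record(source):
    if not validate_record(source):
        return None  # record skipped by the validation rule
    return {
        "title": transform_title(source["short_description"]),  # CUSTOM_FUNCTION
        "body": source["article_body"],                          # ONE_TO_ONE
        "origin": "Imported from ServiceNow",                    # STATIC_VALUE
    }
```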

Execute Synchronization

Once Synchronization is configured, the next step is to enable and execute the Synchronization. The data sync can be executed manually or scheduled to run at a defined frequency using a CRON job.

Follow these steps to enable the Synchronization:

  1. Select the Synchronization pair and click on the Edit button.

  2. Set ‘Enable’ to True.

  3. Set the Frequency:

    1. Select One time if the data synchronization should be run manually.

    2. Select Recurring and add the CRON job to schedule the automatic sync run (sample CRON expressions appear after these steps).


  4. Click on the Save button to save changes.
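
For reference, a few sample CRON expressions are shown below, assuming the common five-field format (minute, hour, day of month, month, day of week); confirm the exact format the scheduler expects.

```python
# Sample CRON expressions in the common five-field format
# (minute hour day-of-month month day-of-week); verify the format
# your scheduler expects before using these.
EVERY_30_MINUTES = "*/30 * * * *"   # every 30 minutes
DAILY_AT_2AM     = "0 2 * * *"      # every day at 02:00
WEEKLY_SUNDAY    = "0 0 * * 0"      # every Sunday at midnight
```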

If set to Recurring, the CRON job will automatically trigger the Data Synchronization based on the configured schedule. To manually execute the Data Synchronization, follow the below steps:

  1. On the Manage Synchronization screen, select the Data Synchronization pair.

  2. Synchronization details appear on the Configure Synchronization page.

  3. Click on the Execute button to manually execute the data sync. A success message appears on successful data sync.

  4. To view execution logs, click on the View logs button. Here you can view the status of data sync executions for the Synchronization pair.

  5. Select the execution record to view the execution details such as the count of records, pages processed, and the response received from the connector.

  6. Click on Execution Logs to view the execution log for the connector. Use it to troubleshoot synchronization in case of a data sync failure.

On successful execution, Knowledge Artifacts are imported from the Source system into the Destination system.

If the sync destination is Luma Knowledge, the system creates Topics on the fly while importing the Artifacts.
