Luma Knowledge Dashboard

The Luma Knowledge Dashboard provides key performance metrics on artifact usage, cost savings, and the value Luma Knowledge brings to the organization. It enables a system administrator to view key metrics related to artifact retrieval, user request insights, and channel performance. The dashboard empowers management and curators with real-time data on system performance and effectiveness.

The metrics are calculated based on the Configurations defined at the Tenant level and the feedback collected by the system.

Configurations

Configurations are used to process usage data collected by the system and generate KPI metrics. Administrators can update the configurations as required.

| Parameter Name | Parameter Value | Description |
| --- | --- | --- |
| recommendation.artifact.obsolete.time.interval | 180 | Time interval (in days) to check for obsolete artifacts. |
| recommendation.artifact.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective artifacts. |
| recommendation.artifact.ineffective.negative.feedback.threshold | 5 | The threshold of negative feedback at which an artifact is reported as ineffective. |
| recommendation.best.response.knowledgegap.time.interval | 180 | Time interval (in days) to check for knowledge gaps generated by best responses yielding no results. |
| recommendation.best.response.knowledgegap.threshold | 5 | The threshold of matching best responses yielding no results for knowledge gap analysis. |
| recommendation.best.response.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective best responses. |
| recommendation.best.response.ineffective.threshold | 5 | The threshold of matching best responses with negative feedback for ineffective best response analysis. |
| feedback.response.rate.target.per.channel | 75 | Feedback response rate target for a channel. |
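As an illustration, the sketch below shows how the ineffective-artifact parameters above might be applied. The parameter values come from the table, but the check itself is an assumption, not Luma Knowledge's actual implementation.

```python
from datetime import datetime, timedelta

# Values from the table above
INEFFECTIVE_INTERVAL_DAYS = 180  # recommendation.artifact.ineffective.time.interval
INEFFECTIVE_THRESHOLD = 5        # recommendation.artifact.ineffective.negative.feedback.threshold

def is_ineffective(feedback_events, now=None):
    """Hypothetical check: flag an artifact as ineffective when the negative
    feedback received inside the configured window reaches the threshold."""
    now = now or datetime.utcnow()
    window = timedelta(days=INEFFECTIVE_INTERVAL_DAYS)
    recent_negatives = [e for e in feedback_events
                        if e["rating"] == "negative" and now - e["timestamp"] <= window]
    return len(recent_negatives) >= INEFFECTIVE_THRESHOLD

# Five recent negative ratings meet the threshold of 5
events = [{"rating": "negative", "timestamp": datetime(2024, 5, 1)}] * 5
print(is_ineffective(events, now=datetime(2024, 6, 1)))  # True
```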

Feedback

Feedback is the information gathered by the system directly from end users or derived from users' interactions with the system. It is used to determine the effectiveness and usefulness of the content available in the system.

The system collects two types of feedback:

  • Explicit Feedback: The feedback provided by the end user on the best response/artifact returned or on the artifact content.

  • Implicit Feedback: The feedback derived by the system based on users' activity.

Based on these tenant settings, the data available in the system, and the feedback collected, a background process accumulates the KPI metrics and populates them on the dashboard.

The Luma Knowledge Dashboard is available as the landing page for Luma Knowledge.

The performance parameters on the dashboard are time-bound. You may select the Date Range to view the metrics for predefined ranges such as Last 30 days, Last 15 days, and Since yesterday. The metrics are calculated based on the selected filter and presented on the dashboard.

Overview

This section represents the overall performance and effectiveness of the system. Below are the various indicators available under this section.

Result Summary

Helpful

When a user is satisfied with a returned artifact and responds to the first feedback question with ‘Yes’, the request is considered Helpful.

The first feedback question displayed to the end user is configurable but is meant to ask whether the artifact answers the user's question. A ‘Yes’ or thumbs-up response indicates a solved question.
The number here indicates the total count of positive responses received for artifacts in the tenant within the selected date range.

Not Helpful

When a user is not satisfied with the knowledge returned and responds to the first feedback question with ‘No’, the request is considered Not Helpful.

The number here indicates the total count of negative responses received for the artifacts presented to users within the selected date range.

ROI

Return on Investment (ROI) represents the cost savings achieved by deploying Luma Knowledge in the organization.
It is the cumulative cost saved by deflecting inquiries that would otherwise require a human answer, versus the cost of creating and delivering the content. It is calculated as the average ticket cost configured for the tenant multiplied by the number of Helpful requests (positive feedback). The average ticket cost is a configurable parameter set by the tenant administrator or curator.
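A minimal sketch of this calculation; the 120 Helpful responses and the $15 average ticket cost are illustrative values, not defaults.

```python
def roi(helpful_count, avg_ticket_cost):
    """Cost savings: deflected (Helpful) inquiries multiplied by the
    tenant-configured average ticket cost."""
    return helpful_count * avg_ticket_cost

print(roi(120, 15.0))  # 1800.0, i.e. $1,800 saved
```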

Performance

Findability

Findability represents the relevance of search results and how effectively users can find relevant artifacts in Luma Knowledge. It is calculated as the number of artifacts accessed by users divided by the total number of artifacts returned as responses, expressed as a percentage.

Relevance here is based on the criterion that the user opens or accesses the content; it is not a measure of the usefulness of the content.

For example,

Total number of artifacts retrieved in searches for your tenant = 120
Number of artifacts accessed/opened by users = 90
Findability = 75%, i.e. (90/120) * 100
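The same calculation as a minimal sketch, reproducing the example above:

```python
def findability(artifacts_accessed, artifacts_returned):
    """Percentage of returned artifacts that users opened."""
    return artifacts_accessed / artifacts_returned * 100

print(findability(90, 120))  # 75.0
```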

Positivity Rate

Positivity Rate indicates the existence of useful content within Luma Knowledge. Useful content is identified based on users' positive feedback for the search results. The Positivity Rate represents the user inquiries where end users find what they need and provide positive feedback for the search results.

It is calculated as the number of user inquiries with positive feedback (Helpful) divided by the total number of user inquiries, expressed as a percentage.

For example,

Total number of user inquiries = 50

Number of inquiries where feedback is received = 40

Number of inquiries with positive feedback = 25

Number of inquiries with negative feedback = 15

Positivity Rate = 50%, calculated as (25/50) * 100

An inquiry with both positive and negative feedback for the returned artifacts is considered an inquiry with positive feedback.
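A minimal sketch of the same calculation; note that the denominator is all inquiries, not only those that received feedback:

```python
def positivity_rate(inquiries_with_positive_feedback, total_inquiries):
    """Percentage of all inquiries that received positive (Helpful) feedback.
    An inquiry with mixed feedback counts as positive."""
    return inquiries_with_positive_feedback / total_inquiries * 100

print(positivity_rate(25, 50))  # 50.0
```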

AQI

The Artifact Quality Index (AQI) indicates how effective knowledge artifacts are in solving users’ inquiries, that is, whether the artifacts returned as search results are relevant and helpful in resolving users’ issues.

It is calculated as the total number of positive responses received (Helpful) divided by the total feedback received (both positive and negative), expressed as a percentage.

For example,

Total number of user inquiries = 50

Number of inquiries where feedback is received = 40

Number of positive feedback responses = 45 (a user may provide positive feedback for multiple artifacts in a single inquiry)

Number of negative feedback responses = 15

AQI = 75%, calculated as (45/(45+15)) * 100
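A minimal sketch of the same calculation; unlike the Positivity Rate, it counts individual feedback responses rather than inquiries:

```python
def aqi(positive_feedback, negative_feedback):
    """Artifact Quality Index: share of all feedback responses that are
    positive. One inquiry can contribute several responses."""
    return positive_feedback / (positive_feedback + negative_feedback) * 100

print(aqi(45, 15))  # 75.0
```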

Artifact Volume

User Queries

User Queries represents the total number of user inquiries or searches in Luma Knowledge, whether or not they lead to the retrieval of knowledge artifacts or FAQs. It is the number of end-user inquiries in a specific period of time.

Artifacts Returned

This is the number of Knowledge artifacts returned as responses for user queries in the indicated time period.

Views

Views represent the number of artifacts accessed or viewed by the user.

Catalog Volume

Catalog Queries

Catalog Queries represents the total number of user inquiries or searches in Luma Knowledge, whether or not they lead to the retrieval of catalogs. It is the number of end-user inquiries in a specific period of time.

Catalog Returned

This is the number of catalogs returned as responses to user queries in the indicated time period.

Views

Views represent the number of catalogs accessed or viewed by the user.

User Request Insights

The KPI Parameters in this section are derived from the type of User Inquiries and Feedback.

Hot Spots

Hot Spots are the most frequently viewed artifacts. Here, an administrator can view the list of artifacts with the highest request volume.

The following details are available for each artifact in the list:

  • Domain: The domain to which the artifact belongs

  • Topic: The main topic of the artifact

  • Type: The type of artifact (Article/FAQ)

  • Title: The title of the artifact

  • ROI: The Return on investment obtained due to the artifact

  • Total Requests: Percentage of requests for the artifact versus total requests in the Tenant

  • AQI: The Artifact Quality Index (AQI) of the artifact. It indicates how effective an artifact is in solving a user’s inquiry.

User Feedback

This section provides details about issues or concerns with search results (Best Responses) and artifacts, as evaluated through the direct feedback received from users.

Ineffective Artifact

An artifact marked ‘Not Helpful’ based on the direct feedback received from users is called an Ineffective Artifact. The user responds to the feedback question, ‘Did the artifact answer your question?’, indicating whether the artifact answers the user’s inquiry. If the user responds ‘No’, the questions below are asked.

  • Was the artifact understandable (Yes or No)?

  • Did the artifact answer the user’s question/problem (Yes or No)?

Note that these questions are configurable and are meant to determine the effectiveness of the artifact content. You may update the questions in Tenant Configurations.

An administrator can view a list of all Ineffective artifacts in this panel.

Click on View to see the details of the feedback received for the artifact. Based on the feedback, the administrator can update the content to improve the effectiveness of the artifact.

 

Ineffective Best Response

When the best response (result set) for an inquiry is marked Not Helpful based on the user’s direct feedback, it is called an Ineffective Best Response. It indicates that the topics or artifacts did not provide a correct answer to the user’s inquiry. It is derived from the negative feedback received for the search result.

An administrator can view details of all Ineffective Best Responses in this panel. Based on the user’s request, the administrator can update the content in the existing artifacts or curate new artifacts and FAQs to improve the efficiency of the system.

Forensics

Based on the artifact usage and best response identified for user inquiries, Luma Knowledge assesses the quality and availability of knowledge in the system. This section lists the artifacts that are no longer in use or the inquiries that do not result in a response.

Obsolete Artifacts

This section lists the Obsolete artifacts in your Tenant. These are the Artifacts that have not been returned as the best response for any user inquiry for a specific period of time, configurable at the tenant level. The system gathers this information as Implicit feedback and uses it to generate Artifact Obsolescence or Deletion recommendations.
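As an illustration, a minimal sketch of such a check, assuming the system records when an artifact was last returned as a best response (the field name is hypothetical):

```python
from datetime import datetime, timedelta

OBSOLETE_INTERVAL_DAYS = 180  # recommendation.artifact.obsolete.time.interval

def is_obsolete(last_best_response_at, now=None):
    """Hypothetical check: an artifact is obsolete when it has not been
    returned as a best response within the configured interval."""
    now = now or datetime.utcnow()
    return now - last_best_response_at > timedelta(days=OBSOLETE_INTERVAL_DAYS)

print(is_obsolete(datetime(2023, 1, 1), now=datetime(2024, 1, 1)))  # True
```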

As a Curator/Administrator, you can use the information to improve the artifact content or remove the artifact from the system.

For more information on updating or deleting an artifact, refer to Knowledge Store.

 

Retrieval Accuracy

This section indicates the Observed Accuracy for your tenant. It is the empirical accuracy seen through the actual use of knowledge, presented as a graphical representation of retrieval accuracy over time. It does not represent the quality of an artifact’s contents or its usefulness.

It is calculated as the number of user inquiries where artifacts were viewed divided by the total number of artifacts viewed, expressed as a percentage.

For example,

Total number of user inquiries = 50

Total artifacts returned = 60

Number of artifacts viewed = 40

Number of inquiries where artifacts were viewed = 30

Retrieval Accuracy = 75%, calculated as (30/40) * 100
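A minimal sketch following the definition above:

```python
def retrieval_accuracy(inquiries_with_views, artifacts_viewed):
    """Inquiries where artifacts were viewed, divided by the total
    number of artifacts viewed, as a percentage."""
    return inquiries_with_views / artifacts_viewed * 100

print(retrieval_accuracy(30, 40))  # 75.0
```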

The Retrieval Accuracy for your tenant should improve over time. Declining accuracy may mean that user queries are returning more results than necessary; the curator must review the available knowledge and update the metadata.

Volume v/s AQI

This is a graphical representation of volume versus artifact AQI for the specified time period. It shows the total number of user inquiries against the quality of the artifacts returned for those inquiries. It is derived from user feedback.

AQI and volume should be consistent with each other. In case of a declining AQI, the curator should encourage end users to provide more feedback on artifacts and should periodically review artifact content and metadata to improve the overall quality of knowledge in the system.

Channels Performance

Return on Investment

This is a representation of the Return on Investment for a channel after deploying Luma Knowledge in the organization. It is derived from the user inquiries that were resolved through the knowledge available in the system.

A channel here represents a medium or system through which knowledge can be accessed, such as Luma Virtual Agent or the Luma Knowledge Search Widget.

Feedback Response Rate

The Feedback response rate chart represents the percentage of feedback received on artifacts against the Target feedback rate (configurable) set for the channel.

Feedback is critical to improving successful searches and effective artifacts. It enables the curator to keep knowledge in the system up to date and useful.
A low feedback rate for a channel should be investigated, and users should be encouraged to provide as much feedback on searches and artifacts as possible.
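As an illustration, a sketch of how the comparison might work, assuming the rate is the share of presented artifacts that received feedback; the exact formula is not documented here, and the input values are illustrative.

```python
TARGET_RATE = 75  # feedback.response.rate.target.per.channel

def feedback_response_rate(feedback_received, artifacts_presented):
    """Assumed calculation: share of presented artifacts with feedback."""
    return feedback_received / artifacts_presented * 100

rate = feedback_response_rate(30, 50)  # 60.0, below the 75% target
print(rate < TARGET_RATE)              # True: investigate this channel
```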

Volume and Accuracy

This chart represents the total volume of user sessions against the Observed accuracy by channel. Here, we can view the total number of user inquiries for a channel and the accuracy of the artifact retrieval for the specified period.

Guided Response and Accuracy

This chart represents the total number of responses for Guided search against the Observed accuracy by channel. Here, we can view the total number of user inquiries for a channel guided by topics and the accuracy of the artifact retrieval for the specified period.

Knowledge Usage

Average Quality Index
The Artifact Quality Index (AQI) is used to monitor the effectiveness of artifacts in meeting users’ needs. It must be measured in the context of the artifact’s domain and user audience.

The chart represents volume versus artifact AQI per domain for the specified time period. Here, we can view the total number of user inquiries and the quality of the artifacts returned for those inquiries in each domain. Artifact quality is derived from user feedback.

Quality across domains should be similar. Low-performing domains should be investigated to improve artifact quality.

  • The type of media used can have a major impact on the approachability, understandability, and, ultimately, usefulness of an artifact.  For example, videos tend to be better than documents, albeit at a cost. This statistic enables curators to decide what type of media makes the most sense and whether they should spend the extra money on media production (i.e., video) to get a better outcome.

  • Artifacts have an intended audience, so tracking effectiveness by user type (audience) is very important. What works well for an analyst may not be understandable to an SSU, let alone a Guest. And of course, having artifacts targeted to specific user types is often necessary, if not required, for security reasons. Seeing where quality is not up to the highest level provides investigative insight into the potential need to change the media presented to that audience, address curator performance, or improve the quality of the source. Ideally, the AQI would be similar across all user types, with outliers explored.

Download Dashboard Reports

You can also download the metrics information generated for the various segments on the Luma Knowledge Dashboard. Click the download button to download the reports in CSV format.

The information will be downloaded in a CSV file, which can be saved on your local system.

The following reports can be downloaded from the dashboard:

  1. Dashboard Metrics Report: This is the summary report that contains all the metrics available on the Luma Knowledge Dashboard.

    1. Click the download button at the top of the dashboard to download the report.

    2. Here you can find the metric names and the associated counts displayed or used to build the graphs and visual representations.

       

  2. User Feedback Report: The report provides information on Positive and Negative feedback received for Artifacts in Luma Knowledge. The information enables the administrators to get in-depth information on how the 'Result Summary' metrics are calculated.

    1. Click the download button at the 'Result Summary' metric.

    2. A CSV file is generated with the following details (a parsing sketch follows the list):

      1. Artifact Id represents the artifact for which feedback (positive or negative) is received. This field contains the artifact’s Id in Luma Knowledge.

      2. Artifact Summary is the summary of the artifact.

      3. Feedback Provider (Username) is the user who provided the feedback.

      4. Feedback is the feedback received for the artifact. A user may find the artifact Helpful (positive feedback) or Not Helpful (negative feedback).

      5. Feedback Time is the date and time when the user feedback was received.
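As an illustration, a minimal sketch that tallies the downloaded report; the file name is hypothetical, and the column headers follow the list above but may differ in the actual export.

```python
import csv

def summarize_feedback(path):
    """Count Helpful vs. Not Helpful rows in the User Feedback Report."""
    helpful = not_helpful = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["Feedback"] == "Helpful":
                helpful += 1
            elif row["Feedback"] == "Not Helpful":
                not_helpful += 1
    return helpful, not_helpful

print(summarize_feedback("user_feedback_report.csv"))  # hypothetical file name
```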
