Luma Knowledge Dashboard provides Key Performance Metrics on artifact usage, cost savings, and the value Luma Knowledge brings to the organization. It enables a system administrator to view the key metrics related to artifact retrieval, User Request Insights, and channel performance in Luma Knowledge. This Dashboard is beneficial in many ways and empowers Management and Curators with real-time data on system performance and effectiveness.

The metrics are calculated based on the Configurations defined at the Tenant level and the feedback collected by the system.

Configurations

Configurations are used to process usage data collected by the system and generate KPI metrics. Administrators can update the configurations as required.

Parameter Name | Parameter Value | Description
recommendation.artifact.obsolete.time.interval | 180 | Time interval (in days) to check for obsolete artifacts.
recommendation.artifact.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective artifacts.
recommendation.artifact.ineffective.negative.feedback.threshold | 5 | The threshold of negative feedback to report an artifact as ineffective.
recommendation.best.response.knowledgegap.time.interval | 180 | Time interval (in days) to check for Knowledge gaps generated due to best responses yielding no results.
recommendation.best.response.knowledgegap.threshold | 5 | The threshold of matching Best Responses yielding no results for Knowledge gap analysis.
recommendation.best.response.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective best responses.
recommendation.best.response.ineffective.threshold | 5 | The threshold of matching Best Responses with negative feedback for ineffective Best Response analysis.
feedback.response.rate.target.per.channel | 75 | Feedback response rate target for a channel.

Feedback

Feedback is the information gathered by the system directly from the end-users or based on users' interaction with the system. It is used to derive the effectiveness and usefulness of the content available in the system.

...

  • Explicit Feedback: This is the feedback provided by the end-user on the best response /artifact returned or the artifact content.

  • Implicit Feedback: This is the feedback derived by the system based on users' activity

Based on these Tenant settings, data available in the system, and feedback collected, a background process accumulates the KPI metrics and populates the same on the Dashboard.

...

The Performance parameters on the dashboard are time-bound. You may select the Date Range to view the Metrics for predefined ranges such as the Last 30 days, Last 15 days, and Yesterday. The Metrics are calculated based on the selected filter and presented in the Dashboard.

...

This section is a representation of the overall performance and effectiveness of the system. Below are the various indicators available under the section.

Result Summary

...

Helpful

When the user is satisfied with the artifact returned and responds to the first feedback question, 'Did the artifact’s topic match the original knowledge request (Yes or No)?', as 'Yes', the request is considered Helpful.

The first feedback question displayed to the end-user is configurable but is meant to ask whether the artifact answered the user's question. A 'Yes' or thumbs-up response indicates a solved question.
The number here indicates the total count of positive responses received for artifacts in the tenant within the selected date range.

Not Helpful

When the user is not satisfied with the Knowledge returned and responds to the first feedback question as ‘No’, the request is considered Not Helpful.

The number here indicates the total count of Negative responses received for the artifacts presented to the user within the selected date range.

...

Return on Investment in terms of Cost Savings, achieved by deploying Luma Knowledge in the organization.
A cumulative sum of the total cost saved by preventing (deflecting) a human-answered inquiry versus the cost of creating and delivering the content. It is calculated as the average ticket cost configured for the tenant multiplied by the number of Helpful requests (positive feedback). The average ticket cost is a configurable parameter set by the Tenant Administrator or Curator.
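The calculation described above can be sketched as follows. This is an illustrative example, not Luma Knowledge code; the function and parameter names are assumptions.

```python
def cost_savings(helpful_requests: int, avg_ticket_cost: float) -> float:
    """Cumulative cost saved by deflecting human-answered inquiries.

    avg_ticket_cost stands in for the tenant-configured average ticket cost.
    """
    return helpful_requests * avg_ticket_cost

# e.g. 25 Helpful requests at an average ticket cost of 15 currency units
print(cost_savings(25, 15.0))  # 375.0
```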

Performance

Accessibility

...

Findability

Findability represents the relevance of search results and how effectively users are able to find relevant artifacts in Luma Knowledge. This value indicates that users are able to find and view relevant Knowledge Artifacts.
It is calculated as the percentage of artifacts accessed by the users (relevant artifacts) divided by the total number of artifacts returned as responses.

...

Total number of artifacts retrieved in the searches for your Tenant = 120
Number of artifacts accessed/opened by the users = 90
Findability = 75%, i.e. (90/120)*100
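The same calculation, as a minimal sketch (illustrative only; the function name is an assumption):

```python
def findability(artifacts_accessed: int, artifacts_returned: int) -> float:
    """Percentage of returned artifacts that users actually opened."""
    if artifacts_returned == 0:
        return 0.0  # no results returned, so no findability to measure
    return artifacts_accessed / artifacts_returned * 100

print(findability(90, 120))  # 75.0
```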

Availability

...

Positivity Rate

Positivity Rate implies the existence of useful content within Luma Knowledge. It represents the user queries where end-users find what they need and provide positive feedback for the Search Results.

It is defined as the percentage of user searches with positive feedback (Helpful) divided by the total feedback received (i.e. both positive and negative).

For example,

Total number of user inquiries = 50
Number of inquiries where feedback is received = 40
Number of inquiries with positive feedback = 25
Number of inquiries with negative feedback = 15
Positivity Rate = 62.5%, calculated as (25/40)*100
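The Positivity Rate calculation can be sketched as below (illustrative only; the function name is an assumption):

```python
def positivity_rate(positive_inquiries: int, negative_inquiries: int) -> float:
    """Share of feedback-bearing inquiries whose feedback was positive."""
    total_feedback = positive_inquiries + negative_inquiries
    if total_feedback == 0:
        return 0.0  # no feedback received yet
    return positive_inquiries / total_feedback * 100

print(positivity_rate(25, 15))  # 62.5
```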

...

Volume

Views

Views represent the number of artifacts accessed or viewed by the user.

...

Info

An inquiry with both positive and negative feedback for the Artifacts returned is considered an inquiry with positive feedback.

AQI

Artifact Quality Index (AQI) indicates how effective Knowledge artifacts are in solving users’ inquiries. AQI indicates that the Artifacts returned as search results are relevant and helpful in resolving users’ issues.

It is calculated as the percentage of total positive responses received (Helpful) divided by the total feedback received (i.e. both positive and negative).

For example,

Total number of user inquiries = 50

Number of inquiries where feedback is received = 40

Number of positive feedback responses = 45 (the user may provide positive feedback for multiple artifacts in a single inquiry)

Number of negative feedback responses = 15

AQI = 75%, calculated as (45/(45+15))*100
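The AQI calculation can be sketched as below (illustrative only; the function name is an assumption):

```python
def aqi(positive_responses: int, negative_responses: int) -> float:
    """Artifact Quality Index: share of positive responses among all feedback.

    Counts responses, not inquiries, since a single inquiry can yield
    feedback on multiple artifacts.
    """
    total = positive_responses + negative_responses
    if total == 0:
        return 0.0  # no feedback received yet
    return positive_responses / total * 100

print(aqi(45, 15))  # 75.0
```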

Volume

User Queries

User Queries represent the total number of user inquiries or searches in Luma Knowledge that may or may not lead to the retrieval of Knowledge Artifacts or FAQs. It is the number of end-user inquiries in a specific period of time.

Artifacts Returned

This is the number of Knowledge artifacts returned as responses for user queries in the indicated time period.
End to end: Inquiry → Retrieve Best Response → Retrieve Artifact → provide feedback

Views

Views represent the number of artifacts accessed or viewed by the user.

User Request Insights

The KPI Parameters in this section are derived from the type of User Inquiries and Feedback.

Hot Spots

Hot spots are the most frequently viewed artifacts. Here, an Administrator can view the list of artifacts with the highest request volume.

...

Note that these questions are configurable and are meant to determine the effectiveness of the artifact content. You may update the questions in Tenant Configurations.

An administrator can view a list of all Ineffective artifacts in this panel.

...

For more information on updating or deleting an artifact, refer to Knowledge Store.

...

Knowledge Gap

This section lists the Knowledge Gaps recorded for your Tenant. These are searches or user inquiries for which Knowledge/Artifacts are not available in Luma Knowledge. If a user inquiry does not return a result, the system considers ‘No result’ as feedback on the content available in the system. This implicit feedback is used to derive the metrics.

...

This section indicates Observed Accuracy (Retrieval Accuracy) for your Tenant. It is the empirical accuracy seen through actual use of Knowledge, i.e. subsequent retrieval after presentment by the user. It is a graphical representation of the number of artifacts viewed versus the number of artifacts presented as responses to user inquiries: the accuracy of retrieval over time. It does not represent the quality of the artifact’s contents or its usefulness.

...

It is calculated as the percentage of user inquiries where artifacts were viewed divided by the total number of artifacts viewed.

For example,

Total number of user inquiries = 50

Total artifacts returned = 60

Number of artifacts viewed = 40

Number of inquiries where artifacts were viewed = 30

Retrieval Accuracy = 75%, calculated as (30/40)*100
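The Retrieval Accuracy calculation can be sketched as below (illustrative only; the function name is an assumption):

```python
def retrieval_accuracy(inquiries_with_views: int, artifacts_viewed: int) -> float:
    """Observed accuracy: inquiries where artifacts were viewed,
    as a percentage of the total number of artifacts viewed."""
    if artifacts_viewed == 0:
        return 0.0  # nothing viewed yet, so accuracy cannot be observed
    return inquiries_with_views / artifacts_viewed * 100

print(retrieval_accuracy(30, 40))  # 75.0
```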

...

The Retrieval Accuracy for your tenant should improve over time. Declining accuracy may mean that user queries are returning more results than users actually view. The curator should review the available Knowledge and update the metadata.

Volume v/s AQI

This is a graphical representation of Volume versus AQI for the specific time period. It indicates the total number of user inquiries against the quality of the artifacts returned for those inquiries. It is derived from user feedback.

...

AQI and Volume should be consistent with each other. In case of a declining AQI, the curator should encourage the end-users to provide more feedback on Artifacts, and review the Artifact content and metadata periodically to improve the overall quality of Knowledge in the system.

Channels Performance

Return on Investment

...

Average Quality Index
The Artifact Quality Index (AQI) is used to monitor the effectiveness of artifacts in solving users’ needs. It must be measured in the context of their domain and user audience.

...