Luma Knowledge Dashboard provides key performance metrics on artifact usage, money, and the value these bring to the organization. It enables a system administrator to view key metrics related to artifact retrieval, user request insights, and channel performance in Luma Knowledge. This dashboard is beneficial in many ways and empowers Management and Curators with real-time data on system performance and effectiveness.

...

Configurations are used to process the usage data collected by the system and generate KPI metrics. These are managed from the backend. Administrators can update the configurations as required.

| Parameter Name | Parameter Value | Description |
| --- | --- | --- |
| recommendation.artifact.obsolete.time.interval | 180 | Time interval (in days) to check for obsolete artifacts. |
| recommendation.artifact.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective artifacts. |
| recommendation.artifact.ineffective.negative.feedback.threshold | 5 | The negative feedback threshold at which an artifact is reported as ineffective. |
| recommendation.best.response.knowledgegap.time.interval | 180 | Time interval (in days) to check for knowledge gaps generated when best responses yield no results. |
| recommendation.best.response.knowledgegap.threshold | 5 | The threshold of matching Best Responses yielding no results for knowledge gap analysis. |
| recommendation.best.response.ineffective.time.interval | 180 | Time interval (in days) to check for ineffective best responses. |
| recommendation.best.response.ineffective.threshold | 5 | The threshold of matching Best Responses with negative feedback for ineffective Best Response analysis. |
| feedback.response.rate.target.per.channel | 75 | Feedback response rate target for a channel. |

...

Accessibility indicates how effectively users are able to find relevant artifacts in Luma Knowledge. This value represents the accuracy in delivering content that is relevant to the users’ intent.
It is calculated as the number of artifacts accessed by users (relevant artifacts) divided by the total number of artifacts returned as responses, expressed as a percentage.

Info

Relevancy of an artifact is based on the criterion that the user opens or accesses the content; it is not a measure of the usefulness of the content.
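As a minimal sketch of the calculation above (the function and argument names are illustrative, not part of the Luma Knowledge API):

```python
def accessibility(accessed: int, returned: int) -> float:
    """Percentage of returned artifacts that users actually opened.

    Note: names here are illustrative; they are not Luma Knowledge
    API fields.
    """
    if returned == 0:
        return 0.0
    return accessed / returned * 100

# Example: 30 of 120 returned artifacts were opened by users.
print(accessibility(30, 120))  # 25.0
```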

...

Total number of user inquiries = 50
Number of feedbacks with the Solved (Yes) option = 25
Number of user inquiries with no feedback = 15
Availability = 62.5%, calculated as (25 / (15 + 25)) * 100
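The arithmetic above can be checked with a short snippet (the function name is illustrative):

```python
def availability(solved: int, no_feedback: int) -> float:
    """Availability as defined above: inquiries marked Solved divided by
    the sum of Solved and no-feedback inquiries, as a percentage."""
    return solved / (solved + no_feedback) * 100

# Values from the worked example above.
print(availability(solved=25, no_feedback=15))  # 62.5
```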

Volume

Views

Views represent the number of artifacts accessed or viewed by the user.

...

User sessions represent the total number of user inquiries or searches in Luma Knowledge, which may or may not lead to the retrieval of artifacts or FAQs. It is essentially the number of end-user inquiries.
End to end: Inquiry → Retrieve Best Response → Retrieve Artifact → Provide feedback

...

Note that these questions are configurable and are meant to determine the effectiveness of the artifact content. You may contact the Support team to update the questions in Tenant Configurations.

An administrator can view a list of all Ineffective artifacts in this panel.

...

Ineffective Best Response

When the Best response (result set) for an inquiry is marked Not helpful based on the User’s direct feedback, it is called an Ineffective Best Response. It indicates that either the Topics or Artifact did not provide a correct answer to the user’s inquiry. It is derived from the negative feedback received for the search result.

...

Based on this information, the Curator and Administrators can identify opportunities to create Artifacts and build knowledge.

Info
  • ROI here refers to negative ROI; that is, it is the cost that can be saved by creating an artifact (deflecting support tickets) to fill the knowledge gap.

  • User queries that do not generate meta-data are not considered when deriving Knowledge gaps.

...


Retrieval Accuracy

This section indicates Observed Accuracy for your Tenant. It is a graphical representation of the number of artifacts viewed versus the number of artifacts presented as responses to user inquiries. The accuracy of the retrieval does not represent the quality of the artifact’s contents or its usefulness. This is the empirical accuracy seen through the actual use (subsequent retrieval after presentment) by the user.
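As an illustrative sketch (the period names and counts below are invented, not dashboard data), observed accuracy per period is simply artifacts viewed divided by artifacts presented:

```python
# Observed retrieval accuracy: artifacts viewed after being presented,
# per reporting period. All figures below are illustrative.
periods = {
    "Jan": {"presented": 200, "viewed": 120},
    "Feb": {"presented": 180, "viewed": 126},
}

for name, counts in periods.items():
    accuracy = counts["viewed"] / counts["presented"] * 100
    print(f"{name}: {accuracy:.1f}%")
```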

...

This is a graphical representation of Volume versus Artifact AQI for the specific time period. It indicates the total number of user inquiries against the quality of artifacts returned for those inquiries. It is derived from user feedback.

...

This is a representation of the Return on Investment (ROI) for a channel on deploying Luma Knowledge in the organization. It is derived from the user inquiries that were resolved through Knowledge available in the system.

Channel here represents the mediums or systems through which Knowledge can be accessed, such as Luma Virtual Agent and the Luma Knowledge Search Widget.

...

Feedback Response Rate

The Feedback response rate chart represents the percentage of feedback received on artifacts against the Target feedback rate (configurable) set for the channel.
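A minimal sketch of this comparison (channel names and counts are invented; only the default target of 75 comes from the configuration table above):

```python
# Compare each channel's feedback response rate against the configured
# target (feedback.response.rate.target.per.channel, default 75).
TARGET_RATE = 75  # percent

channels = {
    "Luma Virtual Agent": {"feedback_received": 60, "artifacts_viewed": 100},
    "Search Widget": {"feedback_received": 90, "artifacts_viewed": 110},
}

for name, c in channels.items():
    rate = c["feedback_received"] / c["artifacts_viewed"] * 100
    status = "meets target" if rate >= TARGET_RATE else "below target"
    print(f"{name}: {rate:.1f}% ({status})")
```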

...

Info

Volume is the indicator of Luma Knowledge usage by end-users, and any dip in channel volume from one period to another should be investigated.

Accuracy between channels should be similar. Any dip in accuracy or low-accuracy outliers should be investigated. Higher-accuracy channels should be analyzed to determine how the higher level of accuracy is being achieved, and those lessons should be applied to the other channels.

...

This information indicates:

  • Quality across domains should be similar. Low-performing domains should be investigated to improve artifact quality.

  • The type of media used can have a major impact on the approachability, understandability, and, ultimately, usefulness of an artifact.  For example, videos tend to be better than documents, albeit at a cost. This statistic enables curators to decide what type of media makes the most sense and whether they should spend the extra money on media production (i.e., video) to get a better outcome.

...

  • Artifacts have an intended audience, so tracking effectiveness by user type (audience) is very important. What works well for an analyst may not be understandable to an SSU, let alone a Guest. And of course, targeting artifacts to specific user types is often necessary, if not required, for security reasons. Seeing where quality is not up to the highest level provides investigative insight into the potential need to change the media presented to that audience, address curator performance, or improve the quality of the source.

...

  • Ideally, the AQI would be similar across all user types, and outliers should be explored.