
Visualising Data in Dynatrace

Uploading the Dashboards

This demo comes with several prebuilt dashboards. Do the following in Dynatrace.

upload button

dashboard image

Repeat this process for each dashboard inside dynatrace/dashboards/*.
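If you prefer to script the repetition rather than click through the UI for every file, the dashboards can also be created via the Dynatrace Configuration API. A minimal sketch, assuming the environment URL and an API token with configuration-write permission are provided via the (hypothetical) `DT_ENV_URL` and `DT_API_TOKEN` environment variables:

```python
import os
import urllib.request
from pathlib import Path


def build_upload_request(env_url: str, token: str, dashboard_path: Path) -> urllib.request.Request:
    """Build a POST request that creates one dashboard from a JSON file."""
    body = Path(dashboard_path).read_bytes()
    return urllib.request.Request(
        url=f"{env_url}/api/config/v1/dashboards",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Api-Token {token}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__" and "DT_ENV_URL" in os.environ:
    env_url = os.environ["DT_ENV_URL"]    # e.g. https://abc12345.live.dynatrace.com
    token = os.environ["DT_API_TOKEN"]    # token needs dashboard write permission
    # Upload every dashboard definition shipped with the demo.
    for path in sorted(Path("dynatrace/dashboards").glob("*.json")):
        req = build_upload_request(env_url, token, path)
        with urllib.request.urlopen(req) as resp:
            print(path.name, resp.status)
```

This is a sketch, not part of the demo itself; the UI steps above achieve the same result.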

Distributed Traces

The application emits distributed traces which can be viewed in Dynatrace:

  • Press ctrl + k and search for distributed traces.
  • A trace for /api/v1/completion is created for each request, whether the call goes out to OpenAI or is served from the Weaviate cache.

Remember that only the very first request for a given destination goes out to OpenAI, so expect far more cached traces than "live" traces.

Trace of the RAG Pipeline

Tracing LangChain makes it possible to see every step the pipeline takes to supply external knowledge to the LLM. In the span attributes, we can observe the prompt that the RAG pipeline crafts from the external documents.
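The prompt visible in the span attributes is assembled from the retrieved documents before being sent to the model. A minimal sketch of that assembly step (the function name and template wording are illustrative assumptions, not the demo's actual prompt):

```python
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Combine retrieved documents with the user question into one prompt.

    Illustrative only: the real pipeline's wording will differ, but the
    shape (a context block followed by the question) is the same idea,
    and it is this final string that shows up in the span attributes.
    """
    context = "\n\n".join(documents)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


prompt = build_rag_prompt(
    "What is the best season to visit?",
    ["Doc 1: Spring has mild weather.", "Doc 2: Summer is crowded."],
)
```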

distributed trace RAG

Trace with OpenAI

A "full" call to OpenAI looks like this. Notice the long openai.chat call halfway through the trace. These traces take much longer (around 3 seconds vs. 500 ms for cached responses).

distributed trace calling OpenAI

distributed trace metadata

Trace to Weaviate Cache

A call that instead hits only the on-cluster Weaviate cache looks like this.

Notice that it is much quicker.

The response TTL (the maximum time a cached prompt is considered "fresh") is checked: if the cached response's age is still below the TTL, the cached value is returned.

distributed trace returning from Weaviate

Notice that the cached prompt is 123 seconds old. The maximum age (TTL) is 60 minutes by default, so the prompt is not yet stale and is returned to the user as valid.
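The freshness check boils down to a simple comparison of the cached entry's age against the TTL. A minimal sketch (the function name is hypothetical):

```python
def is_fresh(age_seconds: float, ttl_seconds: float = 60 * 60) -> bool:
    """A cached prompt is served only while its age is below the TTL
    (60 minutes by default in this demo)."""
    return age_seconds < ttl_seconds


# The trace above: the cached prompt is 123 s old, well under the TTL,
# so the cached response is returned instead of calling OpenAI.
is_fresh(123)   # True
```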

cached request not stale

🎉 Demo Complete

The demo is now complete. Continue on to clean up your environment.