Retrieve Results of an Evaluation Run Prompt
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}

To retrieve the results for a single prompt in an evaluation run, send a GET request to /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}.
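For reference, a minimal Python sketch of the same request, assuming the third-party requests library and a DIGITALOCEAN_ACCESS_TOKEN environment variable; the UUID and prompt ID below are placeholder values, not real identifiers.

import os

import requests

# Placeholder identifiers; substitute a real evaluation run UUID and prompt ID.
EVALUATION_RUN_UUID = "123e4567-e89b-12d3-a456-426614174000"
PROMPT_ID = 123

resp = requests.get(
    "https://api.digitalocean.com/v2/gen-ai/evaluation_runs/"
    f"{EVALUATION_RUN_UUID}/results/{PROMPT_ID}",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_ACCESS_TOKEN']}"},
)
resp.raise_for_status()
prompt = resp.json()["prompt"]
print(prompt.get("input"), "->", prompt.get("output"))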

Path Parameters
evaluation_run_uuid: string
prompt_id: number
Returns
prompt: optional APIEvaluationPrompt { ground_truth, input, input_tokens, output, output_tokens, prompt_chunks, prompt_id, prompt_level_metric_results }
ground_truth: optional string

The ground truth for the prompt.

input: optional string

The input text of the prompt.

input_tokens: optional string

The number of input tokens used in the prompt.

format: uint64
output: optional string

The output text generated for the prompt.

output_tokens: optional string

The number of output tokens used in the prompt.

format: uint64
prompt_chunks: optional array of object { chunk_usage_pct, chunk_used, index_uuid, source_name, text }

The list of prompt chunks.

chunk_usage_pct: optional number

The usage percentage of the chunk.

format: double
chunk_used: optional boolean

Indicates if the chunk was used in the prompt.

index_uuid: optional string

The UUID of the Knowledge Base index the chunk was retrieved from.

source_name: optional string

The source name for the chunk, e.g., the file name or document title.

text: optional string

Text content of the chunk.

prompt_id: optional number

The prompt ID.

format: int64
prompt_level_metric_results: optional array of APIEvaluationMetricResult { error_description, metric_name, metric_value_type, number_value, reasoning, string_value }

The metric results for the prompt.

error_description: optional string

Error description if the metric could not be calculated.

metric_name: optional string

The metric name.

metric_value_type: optional enum

The type of the metric value; see the sketch after this schema for one way to read the matching value field. Accepts one of the following:
"METRIC_VALUE_TYPE_UNSPECIFIED"
"METRIC_VALUE_TYPE_NUMBER"
"METRIC_VALUE_TYPE_STRING"
"METRIC_VALUE_TYPE_PERCENTAGE"
number_value: optional number

The value of the metric as a number.

format: double
reasoning: optional string

The reasoning behind the metric result.

string_value: optional string

The value of the metric as a string.
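A minimal Python sketch of reading a prompt-level metric result based on its metric_value_type, reusing the parsed prompt dict from the sketch above. The mapping of METRIC_VALUE_TYPE_NUMBER and METRIC_VALUE_TYPE_PERCENTAGE to number_value, and METRIC_VALUE_TYPE_STRING to string_value, is an assumption based on the field descriptions above, not documented behavior.

def metric_value(result: dict):
    # Assumed mapping: numeric and percentage metrics use number_value,
    # string metrics use string_value; unspecified or failed metrics
    # carry error_description instead of a value.
    vtype = result.get("metric_value_type", "METRIC_VALUE_TYPE_UNSPECIFIED")
    if vtype in ("METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_PERCENTAGE"):
        return result.get("number_value")
    if vtype == "METRIC_VALUE_TYPE_STRING":
        return result.get("string_value")
    return None

for result in prompt.get("prompt_level_metric_results", []):
    print(result.get("metric_name"), "=", metric_value(result))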

Example Request and Response
curl https://api.digitalocean.com/v2/gen-ai/evaluation_runs/$EVALUATION_RUN_UUID/results/$PROMPT_ID \
    -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN"
{
  "prompt": {
    "ground_truth": "example string",
    "input": "example string",
    "input_tokens": "12345",
    "output": "example string",
    "output_tokens": "12345",
    "prompt_chunks": [
      {
        "chunk_usage_pct": 123,
        "chunk_used": true,
        "index_uuid": "123e4567-e89b-12d3-a456-426614174000",
        "source_name": "example name",
        "text": "example string"
      }
    ],
    "prompt_id": 123,
    "prompt_level_metric_results": [
      {
        "error_description": "example string",
        "metric_name": "example name",
        "metric_value_type": "METRIC_VALUE_TYPE_UNSPECIFIED",
        "number_value": 123,
        "reasoning": "example string",
        "string_value": "example string"
      }
    ]
  }
}
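The prompt_chunks array can be filtered client-side to see which retrieved Knowledge Base chunks the model actually drew on. A minimal sketch, again reusing the parsed prompt dict from the first sketch; sorting by chunk_usage_pct is an illustrative choice, not part of the API.

# Keep only the chunks marked as used, highest usage percentage first.
used_chunks = sorted(
    (c for c in prompt.get("prompt_chunks", []) if c.get("chunk_used")),
    key=lambda c: c.get("chunk_usage_pct", 0.0),
    reverse=True,
)
for chunk in used_chunks:
    print(f"{chunk.get('source_name')}: {chunk.get('chunk_usage_pct')}% used")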