List Results

Retrieve Results of an Evaluation Run
agents.evaluation_runs.list_results(evaluation_run_uuid: str, **kwargs: EvaluationRunListResultsParams) -> EvaluationRunListResultsResponse
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results

To retrieve results of an evaluation run, send a GET request to /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results.
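Outside the SDK, the same endpoint can be called directly over HTTP. The snippet below is a minimal sketch using the requests library; the DIGITALOCEAN_TOKEN environment variable and the https://api.digitalocean.com base URL are assumptions about your setup, not part of this reference.

# Minimal sketch of the raw request. Assumes a bearer token in the
# DIGITALOCEAN_TOKEN environment variable and https://api.digitalocean.com
# as the API base URL.
import os

import requests

evaluation_run_uuid = "123e4567-e89b-12d3-a456-426614174000"
resp = requests.get(
    f"https://api.digitalocean.com/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    params={"page": 1, "per_page": 20},
)
resp.raise_for_status()
print(resp.json()["evaluation_run"]["status"])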

Parameters
evaluation_run_uuid: str
page: Optional[int]

Page number.

per_page: Optional[int]

Items per page.
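
Because results are paginated, page and per_page can be combined with the meta fields described under Returns to walk every page. A minimal sketch, assuming a client constructed as in the example at the bottom of this page (the page size of 20 is arbitrary):

# Sketch: iterate over all pages of results using page/per_page and the
# meta.pages counter from the response.
page = 1
while True:
    response = client.agents.evaluation_runs.list_results(
        evaluation_run_uuid="123e4567-e89b-12d3-a456-426614174000",
        page=page,
        per_page=20,
    )
    for prompt in response.prompts or []:
        print(prompt.prompt_id, prompt.input_tokens, prompt.output_tokens)
    if response.meta is None or response.meta.pages is None or page >= response.meta.pages:
        break
    page += 1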

Returns
class EvaluationRunListResultsResponse:

Gets the full results of an evaluation run with all prompts.

evaluation_run: Optional[APIEvaluationRun]
agent_deleted: Optional[bool]

Whether the agent is deleted.

agent_name: Optional[str]

Agent name.

agent_uuid: Optional[str]

Agent UUID.

agent_version_hash: Optional[str]

Agent version hash.

agent_workspace_uuid: Optional[str]

Agent workspace UUID.

created_by_user_email: Optional[str]
created_by_user_id: Optional[str]
format: uint64
error_description: Optional[str]

The error description.

evaluation_run_uuid: Optional[str]

Evaluation run UUID.

evaluation_test_case_workspace_uuid: Optional[str]

Evaluation test case workspace UUID.

finished_at: Optional[datetime]

Run end time.

format: date-time
pass_status: Optional[bool]

The pass status of the evaluation run based on the star metric.

queued_at: Optional[datetime]

Run queued time.

format: date-time
run_level_metric_results: Optional[List[APIEvaluationMetricResult]]
error_description: Optional[str]

Error description if the metric could not be calculated.

metric_name: Optional[str]

Metric name.

metric_value_type: Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]
Accepts one of the following:
"METRIC_VALUE_TYPE_UNSPECIFIED"
"METRIC_VALUE_TYPE_NUMBER"
"METRIC_VALUE_TYPE_STRING"
"METRIC_VALUE_TYPE_PERCENTAGE"
number_value: Optional[float]

The value of the metric as a number.

format: double
reasoning: Optional[str]

Reasoning of the metric result.

string_value: Optional[str]

The value of the metric as a string.

run_name: Optional[str]

Run name.

star_metric_result: Optional[APIEvaluationMetricResult]
error_description: Optional[str]

Error description if the metric could not be calculated.

metric_name: Optional[str]

Metric name.

metric_value_type: Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]
Accepts one of the following:
"METRIC_VALUE_TYPE_UNSPECIFIED"
"METRIC_VALUE_TYPE_NUMBER"
"METRIC_VALUE_TYPE_STRING"
"METRIC_VALUE_TYPE_PERCENTAGE"
number_value: Optional[float]

The value of the metric as a number.

format: double
reasoning: Optional[str]

Reasoning of the metric result.

string_value: Optional[str]

The value of the metric as a string.

started_at: Optional[datetime]

Run start time.

format: date-time
status: Optional[Literal["EVALUATION_RUN_STATUS_UNSPECIFIED", "EVALUATION_RUN_QUEUED", "EVALUATION_RUN_RUNNING_DATASET", "EVALUATION_RUN_EVALUATING_RESULTS", "EVALUATION_RUN_CANCELLING", "EVALUATION_RUN_CANCELLED", "EVALUATION_RUN_SUCCESSFUL", "EVALUATION_RUN_PARTIALLY_SUCCESSFUL", "EVALUATION_RUN_FAILED"]]

Evaluation Run Statuses

Accepts one of the following:
"EVALUATION_RUN_STATUS_UNSPECIFIED"
"EVALUATION_RUN_QUEUED"
"EVALUATION_RUN_RUNNING_DATASET"
"EVALUATION_RUN_EVALUATING_RESULTS"
"EVALUATION_RUN_CANCELLING"
"EVALUATION_RUN_CANCELLED"
"EVALUATION_RUN_SUCCESSFUL"
"EVALUATION_RUN_PARTIALLY_SUCCESSFUL"
"EVALUATION_RUN_FAILED"
test_case_description: Optional[str]

Test case description.

test_case_name: Optional[str]

Test case name.

test_case_uuid: Optional[str]

Test case UUID.

test_case_version: Optional[int]

Test case version.

format: int64
meta: Optional[APIMeta]

Meta information about the data set.

page: Optional[int]

The current page.

format: int64
pages: Optional[int]

Total number of pages.

format: int64
total: Optional[int]

Total number of items across all pages.

format: int64
prompts: Optional[List[APIEvaluationPrompt]]

The prompt level results.

ground_truth: Optional[str]

The ground truth for the prompt.

input: Optional[str]
input_tokens: Optional[str]

The number of input tokens used in the prompt.

format: uint64
output: Optional[str]
output_tokens: Optional[str]

The number of output tokens used in the prompt.

format: uint64
prompt_chunks: Optional[List[PromptChunk]]

The list of prompt chunks.

chunk_usage_pct: Optional[float]

The usage percentage of the chunk.

format: double
chunk_used: Optional[bool]

Indicates if the chunk was used in the prompt.

index_uuid: Optional[str]

The index UUID (Knowledge Base) of the chunk.

source_name: Optional[str]

The source name for the chunk, e.g., the file name or document title.

text: Optional[str]

Text content of the chunk.

prompt_id: Optional[int]

Prompt ID.

format: int64
prompt_level_metric_results: Optional[List[APIEvaluationMetricResult]]

The metric results for the prompt.

error_description: Optional[str]

Error description if the metric could not be calculated.

metric_name: Optional[str]

Metric name.

metric_value_type: Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]
Accepts one of the following:
"METRIC_VALUE_TYPE_UNSPECIFIED"
"METRIC_VALUE_TYPE_NUMBER"
"METRIC_VALUE_TYPE_STRING"
"METRIC_VALUE_TYPE_PERCENTAGE"
number_value: Optional[float]

The value of the metric as a number.

format: double
reasoning: Optional[str]

Reasoning of the metric result.

string_value: Optional[str]

The value of the metric as a string.

Retrieve Results of an Evaluation Run
from gradient import Gradient

client = Gradient(
    access_token="My Access Token",
)
response = client.agents.evaluation_runs.list_results(
    evaluation_run_uuid="123e4567-e89b-12d3-a456-426614174000",
)
print(response.evaluation_run)
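
The returned object mirrors the field layout documented above. A short sketch of reading the run-level status, the star metric, and the per-prompt metric results from the response obtained in this example:

# Sketch: read run-level and prompt-level results from the response above.
run = response.evaluation_run
if run is not None:
    print(run.status, run.pass_status)
    if run.star_metric_result is not None:
        print(run.star_metric_result.metric_name, run.star_metric_result.number_value)
for prompt in response.prompts or []:
    for result in prompt.prompt_level_metric_results or []:
        # A metric value arrives as a number or a string, depending on metric_value_type.
        value = result.number_value if result.number_value is not None else result.string_value
        print(prompt.prompt_id, result.metric_name, value)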
Returns Examples
{
  "evaluation_run": {
    "agent_deleted": true,
    "agent_name": "example name",
    "agent_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "agent_version_hash": "example string",
    "agent_workspace_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "created_by_user_email": "[email protected]",
    "created_by_user_id": "12345",
    "error_description": "example string",
    "evaluation_run_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "evaluation_test_case_workspace_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "finished_at": "2023-01-01T00:00:00Z",
    "pass_status": true,
    "queued_at": "2023-01-01T00:00:00Z",
    "run_level_metric_results": [
      {
        "error_description": "example string",
        "metric_name": "example name",
        "metric_value_type": "METRIC_VALUE_TYPE_UNSPECIFIED",
        "number_value": 123,
        "reasoning": "example string",
        "string_value": "example string"
      }
    ],
    "run_name": "example name",
    "star_metric_result": {
      "error_description": "example string",
      "metric_name": "example name",
      "metric_value_type": "METRIC_VALUE_TYPE_UNSPECIFIED",
      "number_value": 123,
      "reasoning": "example string",
      "string_value": "example string"
    },
    "started_at": "2023-01-01T00:00:00Z",
    "status": "EVALUATION_RUN_STATUS_UNSPECIFIED",
    "test_case_description": "example string",
    "test_case_name": "example name",
    "test_case_uuid": "123e4567-e89b-12d3-a456-426614174000",
    "test_case_version": 123
  },
  "links": {
    "pages": {
      "first": "example string",
      "last": "example string",
      "next": "example string",
      "previous": "example string"
    }
  },
  "meta": {
    "page": 123,
    "pages": 123,
    "total": 123
  },
  "prompts": [
    {
      "ground_truth": "example string",
      "input": "example string",
      "input_tokens": "12345",
      "output": "example string",
      "output_tokens": "12345",
      "prompt_chunks": [
        {
          "chunk_usage_pct": 123,
          "chunk_used": true,
          "index_uuid": "123e4567-e89b-12d3-a456-426614174000",
          "source_name": "example name",
          "text": "example string"
        }
      ],
      "prompt_id": 123,
      "prompt_level_metric_results": [
        {
          "error_description": "example string",
          "metric_name": "example name",
          "metric_value_type": "METRIC_VALUE_TYPE_UNSPECIFIED",
          "number_value": 123,
          "reasoning": "example string",
          "string_value": "example string"
        }
      ]
    }
  ]
}
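
The prompt_chunks entries show which retrieved chunks contributed to each answer. A brief sketch under the same client setup as the example above, using only the prompt_chunks fields documented in Returns:

# Sketch: report which knowledge-base chunks were actually used per prompt.
for prompt in response.prompts or []:
    for chunk in prompt.prompt_chunks or []:
        if chunk.chunk_used:
            print(
                f"prompt {prompt.prompt_id}: used chunk from {chunk.source_name} "
                f"({chunk.chunk_usage_pct}% usage, index {chunk.index_uuid})"
            )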