Retrieve an Existing Autoscale Pool
gpu_droplets.autoscale.retrieve(autoscale_pool_id: str) -> AutoscaleRetrieveResponse
GET /v2/droplets/autoscale/{autoscale_pool_id}

To show information about an individual autoscale pool, send a GET request to /v2/droplets/autoscale/$AUTOSCALE_POOL_ID.
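
The same lookup can also be made against the REST endpoint directly, without the SDK. The snippet below is a minimal sketch using the requests library; the api.digitalocean.com base URL, the bearer-token header, and the DIGITALOCEAN_TOKEN environment variable are assumptions about a typical setup, not details taken from this page.

# Minimal sketch of calling the endpoint directly (not the SDK).
# Assumes an API token with the required scopes in DIGITALOCEAN_TOKEN.
import os

import requests

autoscale_pool_id = "0d3db13e-a604-4944-9827-7ec2642d32ac"  # example ID from this page

resp = requests.get(
    f"https://api.digitalocean.com/v2/droplets/autoscale/{autoscale_pool_id}",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
)
resp.raise_for_status()
print(resp.json()["autoscale_pool"]["status"])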

Parameters
autoscale_pool_id: str
Returns
class AutoscaleRetrieveResponse:
autoscale_pool: Optional[AutoscalePool]
id: str

A unique identifier for each autoscale pool instance. This is automatically generated upon autoscale pool creation.

active_resources_count: int

The number of active Droplets in the autoscale pool.

config: Config

The scaling configuration for an autoscale pool, which is how the pool scales up and down (either by resource utilization or static configuration). A sketch of handling both configuration types follows the field listing below.

Accepts one of the following:
class AutoscalePoolStaticConfig:
target_number_instances: int

Fixed number of instances in an autoscale pool.

minimum: 1
maximum: 1000
class AutoscalePoolDynamicConfig:
max_instances: int

The maximum number of Droplets in an autoscale pool.

minimum: 1
maximum: 1000
min_instances: int

The minimum number of Droplets in an autoscale pool.

minimum: 1
maximum: 500
cooldown_minutes: Optional[int]

The number of minutes to wait between scaling events in an autoscale pool. Defaults to 10 minutes.

minimum: 5
maximum: 20
target_cpu_utilization: Optional[float]

Target CPU utilization as a decimal.

format: float
minimum: 0.05
maximum: 1
target_memory_utilization: Optional[float]

Target memory utilization as a decimal.

format: float
minimum: 0.05
maximum: 1
created_at: datetime

A time value given in ISO8601 combined date and time format that represents when the autoscale pool was created.

format: date-time
image: str

The Droplet image to be used for all Droplets in the autoscale pool. You may specify the slug or the image ID.

region: Literal["nyc1", "nyc2", "nyc3", "ams2", "ams3", "sfo1", "sfo2", "sfo3", "sgp1", "lon1", "fra1", "tor1", "blr1", "syd1"]

The datacenter in which all of the Droplets will be created.

Accepts one of the following:
"nyc1"
"nyc2"
"nyc3"
"ams2"
"ams3"
"sfo1"
"sfo2"
"sfo3"
"sgp1"
"lon1"
"fra1"
"tor1"
"blr1"
"syd1"
size: str

The Droplet size to be used for all Droplets in the autoscale pool.

ssh_keys: List[str]

The SSH keys to be installed on the Droplets in the autoscale pool. You can either specify the key ID or the fingerprint. Requires ssh_key:read scope.

ipv6: Optional[bool]

Assigns a unique IPv6 address to each of the Droplets in the autoscale pool.

name: Optional[str]

The name(s) to be applied to all Droplets in the autoscale pool.

project_id: Optional[str]

The project that the Droplets in the autoscale pool will belong to. Requires project:read scope.

tags: Optional[List[str]]

The tags to apply to each of the Droplets in the autoscale pool. Requires tag:read scope.

user_data: Optional[str]

A string containing user data that cloud-init consumes to configure a Droplet on first boot. User data is often a cloud-config file or Bash script. It must be plain text and may not exceed 64 KiB in size.

vpc_uuid: Optional[str]

The VPC where the Droplets in the autoscale pool will be created. The VPC must be in the region where you want to create the Droplets. Requires vpc:read scope.

with_droplet_agent: Optional[bool]

Installs the Droplet agent. This must be set to true to monitor Droplets for resource utilization scaling.

name: str

The human-readable name set for the autoscale pool.

status: Literal["active", "deleting", "error"]

The current status of the autoscale pool.

Accepts one of the following:
"active"
"deleting"
"error"
updated_at: datetime

A time value given in ISO8601 combined date and time format that represents when the autoscale pool was last updated.

format: date-time
current_utilization: Optional[CurrentUtilization]
cpu: Optional[float]

The average CPU utilization of the autoscale pool.

format: float
minimum: 0
maximum: 1
memory: Optional[float]

The average memory utilization of the autoscale pool.

format: float
minimum: 0
maximum: 1
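
Because config is a union of AutoscalePoolStaticConfig and AutoscalePoolDynamicConfig, code reading a retrieved pool has to handle both shapes. The sketch below is illustrative only: it detects the shape by checking which fields are present instead of importing the concrete config classes, and the describe_scaling helper is an assumption rather than part of the SDK; the field names themselves match the schema above.

from gradient import Gradient


def describe_scaling(config) -> str:
    # A static config exposes target_number_instances; a dynamic config exposes
    # min_instances/max_instances plus optional utilization targets.
    if getattr(config, "target_number_instances", None) is not None:
        return f"static pool pinned to {config.target_number_instances} Droplets"
    parts = [f"scales between {config.min_instances} and {config.max_instances} Droplets"]
    if config.target_cpu_utilization is not None:
        parts.append(f"target CPU {config.target_cpu_utilization:.0%}")
    if config.target_memory_utilization is not None:
        parts.append(f"target memory {config.target_memory_utilization:.0%}")
    return ", ".join(parts)


client = Gradient(access_token="My Access Token")
pool = client.gpu_droplets.autoscale.retrieve(
    "0d3db13e-a604-4944-9827-7ec2642d32ac",
).autoscale_pool

if pool is not None:
    print(describe_scaling(pool.config))
    utilization = pool.current_utilization
    if utilization is not None and utilization.cpu is not None:
        print(f"current CPU utilization: {utilization.cpu:.1%}")
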
Retrieve an Existing Autoscale Pool
from gradient import Gradient

client = Gradient(
    access_token="My Access Token",
)
autoscale = client.gpu_droplets.autoscale.retrieve(
    "0d3db13e-a604-4944-9827-7ec2642d32ac",
)
print(autoscale.autoscale_pool)
Returns Examples
{
  "autoscale_pool": {
    "id": "0d3db13e-a604-4944-9827-7ec2642d32ac",
    "name": "test-autoscaler-group-01",
    "config": {
      "min_instances": 1,
      "max_instances": 5,
      "target_cpu_utilization": 0.5,
      "cooldown_minutes": 10
    },
    "droplet_template": {
      "name": "droplet-name",
      "size": "c-2",
      "region": "tor1",
      "image": "ubuntu-20-04-x64",
      "tags": [
        "my-tag"
      ],
      "ssh_keys": [
        "3b:16:e4:bf:8b:00:8b:b8:59:8c:a9:d3:f0:19:fa:45"
      ],
      "vpc_uuid": "760e09ef-dc84-11e8-981e-3cfdfeaae000",
      "with_droplet_agent": true,
      "project_id": "746c6152-2fa2-11ed-92d3-27aaa54e4988",
      "ipv6": true,
      "user_data": "#cloud-config\nruncmd:\n  - touch /test.txt\n"
    },
    "created_at": "2020-11-19T20:27:18Z",
    "updated_at": "2020-12-01T00:42:16Z",
    "current_utilization": {
      "memory": 0.3588531587713522,
      "cpu": 0.0007338008770232183
    },
    "status": "active",
    "active_resources_count": 1
  }
}