diff --git a/CHANGELOG.md b/CHANGELOG.md
index b6f6fa9c914..380ff5cbe3b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,53 @@
+Release v0.21.0 (2020-04-21)
+===
+
+Breaking Change
+---
+* `aws/endpoints`: Several functions and types have been removed
+  * Removes `DecodeModel` and `DecodeModelOptions` from the package ([#509](https://github.com/aws/aws-sdk-go-v2/pull/509))
+  * Removes Region Constants, Partition Constants, and types used for exploring the endpoint data model ([#512](https://github.com/aws/aws-sdk-go-v2/pull/512))
+* `service/s3/s3crypto`: Package and associated encryption/decryption clients have been removed from the SDK ([#511](https://github.com/aws/aws-sdk-go-v2/pull/511))
+* `aws/external`: Removes several exported constants and types ([#508](https://github.com/aws/aws-sdk-go-v2/pull/508))
+  * No longer exports AWS environment constants used by the external environment configuration loader
+  * `DefaultSharedConfigProfile` is now defined as an exported constant
+* `aws`: `ErrMissingRegion`, `ErrMissingEndpoint`, `ErrStaticCredentialsEmpty` are now concrete error types ([#510](https://github.com/aws/aws-sdk-go-v2/pull/510))
+
+Services
+---
+* Synced the V2 SDK with the latest AWS service API definitions.
+
+SDK Features
+---
+* `aws/signer/v4`: New methods `SignHTTP` and `PresignHTTP` have been added ([#519](https://github.com/aws/aws-sdk-go-v2/pull/519))
+  * `SignHTTP` replaces `Sign`, and usage of `Sign` should be migrated before its removal at a later date
+  * `PresignHTTP` replaces `Presign`, and usage of `Presign` should be migrated before its removal at a later date
+  * `DisableRequestBodyOverwrite` and `UnsignedPayload` are now deprecated options and have no effect on `SignHTTP` or `PresignHTTP`. These options will be removed at a later date.
+* `aws/external`: Add support for setting a default fallback region and resolving the region from EC2 IMDS ([#523](https://github.com/aws/aws-sdk-go-v2/pull/523))
+  * A `WithDefaultRegion` helper has been added, which can be passed to `LoadDefaultAWSConfig`
+  * This helper can be used to configure a default fallback region in the event a region fails to be resolved from other sources
+  * Support has been added to resolve the region using EC2 IMDS when available
+  * The IMDS region will be used if a region is not configured in either the shared config or the process environment.
+  * Fixes [#244](https://github.com/aws/aws-sdk-go-v2/issues/244)
+  * Fixes [#515](https://github.com/aws/aws-sdk-go-v2/issues/515)
+
+SDK Enhancements
+---
+* `service/dynamodb/expression`: Add IsSet helper for ConditionBuilder and KeyConditionBuilder ([#494](https://github.com/aws/aws-sdk-go-v2/pull/494))
+  * Adds an IsSet helper for ConditionBuilder and KeyConditionBuilder to make it easier to determine if the condition builders have any conditions added to them.
+  * Implements [#493](https://github.com/aws/aws-sdk-go-v2/issues/493).
+* `internal/ini`: Normalize Section keys to lowercase ([#495](https://github.com/aws/aws-sdk-go-v2/pull/495))
+  * Updates the SDK's ini utility to store all keys as lowercase. This brings the SDK in line with the AWS CLI's behavior.
+
+SDK Bugs
+---
+* `internal/sdk`: Fix SDK's UUID utility to handle partial read ([#536](https://github.com/aws/aws-sdk-go-v2/pull/536))
+  * Fixes the SDK's UUID utility to correctly handle partial reads from its crypto rand source. This error sometimes caused the SDK's InvocationID value to fail to be obtained, due to a partial read from crypto.Rand.
+ * Fix [#534](https://github.com/aws/aws-sdk-go-v2/issues/534) +* `aws/defaults`: Fix request metadata headers causing signature errors ([#536](https://github.com/aws/aws-sdk-go-v2/pull/536)) + * Fixes the SDK's adding the request metadata headers in the wrong location within the request handler stack. This created a situation where a request that was retried would sign the new attempt using the old value of the header. The header value would then be changed before sending the request. + * Fix [#533](https://github.com/aws/aws-sdk-go-v2/issues/533) + * Fix [#521](https://github.com/aws/aws-sdk-go-v2/issues/521) + Release v0.20.0 (2020-03-17) === diff --git a/CHANGELOG_PENDING.md b/CHANGELOG_PENDING.md index caa6f80e78a..c2fbf738da2 100644 --- a/CHANGELOG_PENDING.md +++ b/CHANGELOG_PENDING.md @@ -1,45 +1,11 @@ -Breaking Change ---- -* `aws/endpoints`: Several functions and types have been removed - * Removes `DecodeModel` and `DecodeModelOptions` from the package ([#509](https://github.com/aws/aws-sdk-go-v2/pull/509)) - * Remove Region Constants, Partition Constants, and types use for exploring the endpoint data model ([#512](https://github.com/aws/aws-sdk-go-v2/pull/512)) -* `service/s3/s3crypto`: Package and associated encryption/decryption clients have been removed from the SDK ([#511](https://github.com/aws/aws-sdk-go-v2/pull/511)) -* `aws/external`: Removes several export constants and types ([#508](https://github.com/aws/aws-sdk-go-v2/pull/508)) - * No longer exports AWS environment constants used by the external environment configuration loader - * `DefaultSharedConfigProfile` is now defined an exported constant -* `aws`: `ErrMissingRegion`, `ErrMissingEndpoint`, `ErrStaticCredentialsEmpty` are now concrete error types ([#510](https://github.com/aws/aws-sdk-go-v2/pull/510)) - Services --- SDK Features --- -* `aws/signer/v4`: New methods `SignHTTP` and `PresignHTTP` have been added ([#519](https://github.com/aws/aws-sdk-go-v2/pull/519)) - * `SignHTTP` replaces `Sign`, and usage of `Sign` should be migrated before it's removal at a later date - * `PresignHTTP` replaces `Presign`, and usage of `Presign` should be migrated before it's removal at a later date - * `DisableRequestBodyOverwrite` and `UnsignedPayload` are now deprecated options and have no effect on `SignHTTP` or `PresignHTTP`. These options will be removed at a later date. -* `aws/external`: Add Support for setting a default fallback region and resolving region from EC2 IMDS ([#523](https://github.com/aws/aws-sdk-go-v2/pull/523)) - * `WithDefaultRegion` helper has been added which can be passed to `LoadDefaultAWSConfig` - * This helper can be used to configure a default fallback region in the event a region fails to be resolved from other sources - * Support has been added to resolve region using EC2 IMDS when available - * The IMDS region will be used if region as not found configured in either the shared config or the process environment. - * Fixes [#244](https://github.com/aws/aws-sdk-go-v2/issues/244) - * Fixes [#515](https://github.com/aws/aws-sdk-go-v2/issues/515) + SDK Enhancements --- -* `service/dynamodb/expression`: Add IsSet helper for ConditionBuilder and KeyConditionBuilder ([#494](https://github.com/aws/aws-sdk-go-v2/pull/494)) - * Adds a IsSet helper for ConditionBuilder and KeyConditionBuilder to make it easier to determine if the condition builders have any conditions added to them. - * Implements [#493](https://github.com/aws/aws-sdk-go-v2/issues/493). 
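The `aws/external` feature in the release notes above describes a region fallback chain: the shared config and process environment first, then EC2 IMDS when available, and finally the value given to `WithDefaultRegion`. Below is a minimal sketch of wiring that up, assuming `WithDefaultRegion` is passed to `LoadDefaultAWSConfig` exactly as the changelog entry states; the precise form of the option is an assumption taken from the entry for PR #523 rather than from the package source.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
)

func main() {
	// LoadDefaultAWSConfig resolves the region from the usual sources first
	// (environment, shared config, EC2 IMDS when available). WithDefaultRegion
	// only takes effect when none of those sources yield a region.
	cfg, err := external.LoadDefaultAWSConfig(
		external.WithDefaultRegion("us-west-2"), // assumed usage, per the entry for PR #523
	)
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	fmt.Println("resolved region:", cfg.Region)
}
```

With no region configured in the environment, shared config, or IMDS, the resolved `cfg.Region` would fall back to the default supplied here.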
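The `service/dynamodb/expression` enhancement above adds an IsSet check to `ConditionBuilder` and `KeyConditionBuilder`. The following is a hedged sketch of how such a check might be used to attach an optional filter; the `IsSet() bool` method shape is an assumption based on the changelog wording, while the other builder calls follow the existing expression API.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/dynamodb/expression"
)

// buildScanExpression attaches a filter only when the caller actually set one,
// using the IsSet helper instead of comparing against a zero-value builder.
func buildScanExpression(filter expression.ConditionBuilder) (expression.Expression, error) {
	builder := expression.NewBuilder().
		WithProjection(expression.NamesList(expression.Name("ID"), expression.Name("Status")))

	// IsSet is assumed to report whether any condition was added (PR #494).
	if filter.IsSet() {
		builder = builder.WithFilter(filter)
	}
	return builder.Build()
}

func main() {
	var noFilter expression.ConditionBuilder // zero value, nothing added
	withFilter := expression.Name("Status").Equal(expression.Value("ACTIVE"))

	for _, f := range []expression.ConditionBuilder{noFilter, withFilter} {
		expr, err := buildScanExpression(f)
		if err != nil {
			fmt.Println("build error:", err)
			continue
		}
		fmt.Println("has filter:", expr.Filter() != nil)
	}
}
```

The point of the helper is to avoid ad hoc zero-value comparisons when deciding whether `WithFilter` (or a key condition) should be applied.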
-* `internal/ini`: Normalize Section keys to lowercase ([#495](https://github.com/aws/aws-sdk-go-v2/pull/495)) - * Update's SDK's ini utility to store all keys as lowercase. This brings the SDK inline with the AWS CLI's behavior. - SDK Bugs --- -* `internal/sdk`: Fix SDK's UUID utility to handle partial read ([#536](https://github.com/aws/aws-sdk-go-v2/pull/536)) - * Fixes the SDK's UUID utility to correctly handle partial reads from its crypto rand source. This error was sometimes causing the SDK's InvocationID value to fail to be obtained, due to a partial read from crypto.Rand. - * Fix [#534](https://github.com/aws/aws-sdk-go-v2/issues/534) -* `aws/defaults`: Fix request metadata headers causing signature errors ([#536](https://github.com/aws/aws-sdk-go-v2/pull/536)) - * Fixes the SDK's adding the request metadata headers in the wrong location within the request handler stack. This created a situation where a request that was retried would sign the new attempt using the old value of the header. The header value would then be changed before sending the request. - * Fix [#533](https://github.com/aws/aws-sdk-go-v2/issues/533) - * Fix [#521](https://github.com/aws/aws-sdk-go-v2/issues/521) diff --git a/aws/endpoints/defaults.go b/aws/endpoints/defaults.go index 5eff793c25c..369732388ed 100644 --- a/aws/endpoints/defaults.go +++ b/aws/endpoints/defaults.go @@ -306,6 +306,30 @@ var awsPartition = partition{ Region: "eu-west-3", }, }, + "fips-us-east-1": endpoint{ + Hostname: "ecr-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "ecr-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "ecr-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "ecr-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, "me-south-1": endpoint{ Hostname: "api.ecr.me-south-1.amazonaws.com", CredentialScope: credentialScope{ @@ -631,12 +655,36 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "fips.batch.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "fips.batch.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "fips.batch.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "fips.batch.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "budgets": service{ @@ -732,9 +780,33 @@ var awsPartition = partition{ "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "cloudformation-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ 
+ Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "cloudformation-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "cloudformation-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "cloudformation-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "cloudfront": service{ @@ -828,12 +900,36 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "cloudtrail-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "cloudtrail-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "cloudtrail-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "cloudtrail-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "codebuild": service{ @@ -973,11 +1069,41 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "codepipeline-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "codepipeline-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "codepipeline-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "codepipeline-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "codepipeline-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "codestar": service{ @@ -1008,6 +1134,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, @@ -1118,9 +1245,27 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "comprehend-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": 
endpoint{ + Hostname: "comprehend-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "comprehend-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "comprehendmedical": service{ @@ -1190,6 +1335,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1312,19 +1458,44 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "directconnect-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "directconnect-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "directconnect-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "directconnect-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "discovery": service{ Endpoints: endpoints{ - "eu-central-1": endpoint{}, - "us-west-2": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "dms": service{ @@ -1337,17 +1508,23 @@ var awsPartition = partition{ "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "dms-fips": endpoint{ + Hostname: "dms-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "docdb": service{ @@ -1448,36 +1625,66 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "dynamodb": service{ - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "ca-central-1-fips": endpoint{ - Hostname: "dynamodb-fips.ca-central-1.amazonaws.com", + 
"fips-ca-central-1": endpoint{ + Hostname: "ds-fips.ca-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ca-central-1", }, }, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "ds-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "ds-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "ds-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "ds-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "ca-central-1-fips": endpoint{ + Hostname: "dynamodb-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "local": endpoint{ Hostname: "localhost:8000", @@ -1535,12 +1742,42 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "ec2-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "ec2-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "ec2-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "ec2-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "ec2-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "ec2metadata": service{ @@ -1569,12 +1806,59 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "ecs-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "ecs-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: 
"ecs-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "ecs-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "elastic-inference": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{ + Hostname: "api.elastic-inference.ap-northeast-1.amazonaws.com", + }, + "ap-northeast-2": endpoint{ + Hostname: "api.elastic-inference.ap-northeast-2.amazonaws.com", + }, + "eu-west-1": endpoint{ + Hostname: "api.elastic-inference.eu-west-1.amazonaws.com", + }, + "us-east-1": endpoint{ + Hostname: "api.elastic-inference.us-east-1.amazonaws.com", + }, + "us-east-2": endpoint{ + Hostname: "api.elastic-inference.us-east-2.amazonaws.com", + }, + "us-west-2": endpoint{ + Hostname: "api.elastic-inference.us-west-2.amazonaws.com", + }, }, }, "elasticache": service{ @@ -1621,12 +1905,36 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "elasticbeanstalk-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "elasticbeanstalk-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "elasticbeanstalk-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "elasticbeanstalk-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "elasticfilesystem": service{ @@ -1644,333 +1952,128 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "elasticloadbalancing": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "elasticmapreduce": service{ - Defaults: endpoint{ - SSLCommonName: "{region}.{service}.{dnsSuffix}", - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": 
endpoint{ - SSLCommonName: "{service}.{region}.{dnsSuffix}", + "fips-ap-east-1": endpoint{ + Hostname: "elasticfilesystem-fips.ap-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-east-1", + }, }, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{ - SSLCommonName: "{service}.{region}.{dnsSuffix}", - }, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "elastictranscoder": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "email": service{ - - Endpoints: endpoints{ - "ap-south-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "entitlement.marketplace": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "aws-marketplace", - }, - }, - Endpoints: endpoints{ - "us-east-1": endpoint{}, - }, - }, - "es": service{ - - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "fips": endpoint{ - Hostname: "es-fips.us-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-1", - }, - }, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "events": service{ - - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "firehose": service{ - - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "fms": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": 
endpoint{}, - "fips-ap-northeast-1": endpoint{ - Hostname: "fms-fips.ap-northeast-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-northeast-1", - }, + "fips-ap-northeast-1": endpoint{ + Hostname: "elasticfilesystem-fips.ap-northeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, }, "fips-ap-northeast-2": endpoint{ - Hostname: "fms-fips.ap-northeast-2.amazonaws.com", + Hostname: "elasticfilesystem-fips.ap-northeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-northeast-2", }, }, "fips-ap-south-1": endpoint{ - Hostname: "fms-fips.ap-south-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.ap-south-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-south-1", }, }, "fips-ap-southeast-1": endpoint{ - Hostname: "fms-fips.ap-southeast-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.ap-southeast-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-1", }, }, "fips-ap-southeast-2": endpoint{ - Hostname: "fms-fips.ap-southeast-2.amazonaws.com", + Hostname: "elasticfilesystem-fips.ap-southeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-2", }, }, "fips-ca-central-1": endpoint{ - Hostname: "fms-fips.ca-central-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.ca-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ca-central-1", }, }, "fips-eu-central-1": endpoint{ - Hostname: "fms-fips.eu-central-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.eu-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-central-1", }, }, + "fips-eu-north-1": endpoint{ + Hostname: "elasticfilesystem-fips.eu-north-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-north-1", + }, + }, "fips-eu-west-1": endpoint{ - Hostname: "fms-fips.eu-west-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.eu-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-1", }, }, "fips-eu-west-2": endpoint{ - Hostname: "fms-fips.eu-west-2.amazonaws.com", + Hostname: "elasticfilesystem-fips.eu-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-2", }, }, "fips-eu-west-3": endpoint{ - Hostname: "fms-fips.eu-west-3.amazonaws.com", + Hostname: "elasticfilesystem-fips.eu-west-3.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-3", }, }, + "fips-me-south-1": endpoint{ + Hostname: "elasticfilesystem-fips.me-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "me-south-1", + }, + }, "fips-sa-east-1": endpoint{ - Hostname: "fms-fips.sa-east-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.sa-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "sa-east-1", }, }, "fips-us-east-1": endpoint{ - Hostname: "fms-fips.us-east-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "fips-us-east-2": endpoint{ - Hostname: "fms-fips.us-east-2.amazonaws.com", + Hostname: "elasticfilesystem-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "fips-us-west-1": endpoint{ - Hostname: "fms-fips.us-west-1.amazonaws.com", + Hostname: "elasticfilesystem-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "fips-us-west-2": endpoint{ - Hostname: "fms-fips.us-west-2.amazonaws.com", + Hostname: "elasticfilesystem-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, - 
"sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "forecast": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-southeast-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "forecastquery": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "fsx": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "elasticloadbalancing": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, }, - }, - "gamelift": service{ - Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -1978,18 +2081,46 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "elasticloadbalancing-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "elasticloadbalancing-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "elasticloadbalancing-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "elasticloadbalancing-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "glacier": service{ + "elasticmapreduce": service{ Defaults: endpoint{ - Protocols: []string{"http", "https"}, + SSLCommonName: "{region}.{service}.{dnsSuffix}", + Protocols: []string{"https"}, }, Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -1999,105 +2130,118 @@ var awsPartition = partition{ "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, + "eu-central-1": endpoint{ + SSLCommonName: "{service}.{region}.{dnsSuffix}", + }, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "fips-ca-central-1": endpoint{ - Hostname: "glacier-fips.ca-central-1.amazonaws.com", + Hostname: "elasticmapreduce-fips.ca-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ca-central-1", 
}, }, "fips-us-east-1": endpoint{ - Hostname: "glacier-fips.us-east-1.amazonaws.com", + Hostname: "elasticmapreduce-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "fips-us-east-2": endpoint{ - Hostname: "glacier-fips.us-east-2.amazonaws.com", + Hostname: "elasticmapreduce-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "fips-us-west-1": endpoint{ - Hostname: "glacier-fips.us-west-1.amazonaws.com", + Hostname: "elasticmapreduce-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "fips-us-west-2": endpoint{ - Hostname: "glacier-fips.us-west-2.amazonaws.com", + Hostname: "elasticmapreduce-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, "me-south-1": endpoint{}, "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "{service}.{region}.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "glue": service{ + "elastictranscoder": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "greengrass": service{ - IsRegionalized: boxedTrue, - Defaults: endpoint{ - Protocols: []string{"https"}, - }, + "email": service{ + Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, "us-west-2": endpoint{}, }, }, - "groundstation": service{ + "entitlement.marketplace": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "aws-marketplace", + }, + }, + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "es": service{ Endpoints: endpoints{ - "eu-north-1": endpoint{}, + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips": endpoint{ + Hostname: "es-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "guardduty": service{ - IsRegionalized: boxedTrue, - Defaults: endpoint{ - Protocols: []string{"https"}, - }, + "events": service{ + Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -2111,97 +2255,90 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": 
endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "guardduty-fips.us-east-1.amazonaws.com", + "fips-us-east-1": endpoint{ + Hostname: "events-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, - "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "guardduty-fips.us-east-2.amazonaws.com", + "fips-us-east-2": endpoint{ + Hostname: "events-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, - "us-west-1": endpoint{}, - "us-west-1-fips": endpoint{ - Hostname: "guardduty-fips.us-west-1.amazonaws.com", + "fips-us-west-1": endpoint{ + Hostname: "events-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "guardduty-fips.us-west-2.amazonaws.com", + "fips-us-west-2": endpoint{ + Hostname: "events-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "health": service{ - - Endpoints: endpoints{ - "us-east-1": endpoint{}, - }, - }, - "iam": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, - - Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "iam.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - }, - }, - "importexport": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, - - Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "importexport.amazonaws.com", - SignatureVersions: []string{"v2", "v4"}, - CredentialScope: credentialScope{ - Region: "us-east-1", - Service: "IngestionService", - }, - }, - }, - }, - "inspector": service{ + "firehose": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "firehose-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "firehose-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "firehose-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "firehose-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "iot": service{ + "fms": service{ Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Protocols: []string{"https"}, }, Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -2213,193 +2350,153 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - 
"me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "iotanalytics": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "iotevents": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "ioteventsdata": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{ - Hostname: "data.iotevents.ap-northeast-1.amazonaws.com", + "fips-ap-northeast-1": endpoint{ + Hostname: "fms-fips.ap-northeast-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-northeast-1", }, }, - "ap-northeast-2": endpoint{ - Hostname: "data.iotevents.ap-northeast-2.amazonaws.com", + "fips-ap-northeast-2": endpoint{ + Hostname: "fms-fips.ap-northeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-northeast-2", }, }, - "ap-southeast-1": endpoint{ - Hostname: "data.iotevents.ap-southeast-1.amazonaws.com", + "fips-ap-south-1": endpoint{ + Hostname: "fms-fips.ap-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "fips-ap-southeast-1": endpoint{ + Hostname: "fms-fips.ap-southeast-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-1", }, }, - "ap-southeast-2": endpoint{ - Hostname: "data.iotevents.ap-southeast-2.amazonaws.com", + "fips-ap-southeast-2": endpoint{ + Hostname: "fms-fips.ap-southeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-2", }, }, - "eu-central-1": endpoint{ - Hostname: "data.iotevents.eu-central-1.amazonaws.com", + "fips-ca-central-1": endpoint{ + Hostname: "fms-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-eu-central-1": endpoint{ + Hostname: "fms-fips.eu-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-central-1", }, }, - "eu-west-1": endpoint{ - Hostname: "data.iotevents.eu-west-1.amazonaws.com", + "fips-eu-west-1": endpoint{ + Hostname: "fms-fips.eu-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-1", }, }, - "eu-west-2": endpoint{ - Hostname: "data.iotevents.eu-west-2.amazonaws.com", + "fips-eu-west-2": endpoint{ + Hostname: "fms-fips.eu-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-2", }, }, - "us-east-1": endpoint{ - Hostname: "data.iotevents.us-east-1.amazonaws.com", + "fips-eu-west-3": endpoint{ + Hostname: "fms-fips.eu-west-3.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-1", + Region: "eu-west-3", }, }, - "us-east-2": endpoint{ - Hostname: "data.iotevents.us-east-2.amazonaws.com", + "fips-sa-east-1": endpoint{ + Hostname: "fms-fips.sa-east-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-2", + Region: "sa-east-1", }, }, - "us-west-2": endpoint{ - Hostname: "data.iotevents.us-west-2.amazonaws.com", + "fips-us-east-1": endpoint{ + Hostname: "fms-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: 
"fms-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "fms-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "fms-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", }, }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "iotsecuredtunneling": service{ + "forecast": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "iotthingsgraph": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "iotthingsgraph", - }, - }, - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "kafka": service{ + "forecastquery": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, - "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "kinesis": service{ + "fsx": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "kinesisanalytics": service{ + "gamelift": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -2407,19 +2504,19 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "kinesisvideo": service{ - + "glacier": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -2429,16 +2526,49 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": 
endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "glacier-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "glacier-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "glacier-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "glacier-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "glacier-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "kms": service{ + "glue": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -2453,98 +2583,72 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "glue-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "glue-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "glue-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "glue-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "lakeformation": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "greengrass": service{ + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"https"}, }, - }, - "lambda": service{ - Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, - "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "license-manager": service{ + "groundstation": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": 
endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, "us-east-2": endpoint{}, - "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "lightsail": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "guardduty": service{ + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"https"}, }, - }, - "logs": service{ - Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -2561,56 +2665,124 @@ var awsPartition = partition{ "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "guardduty-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "guardduty-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "guardduty-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "guardduty-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, - "machinelearning": service{ + "health": service{ Endpoints: endpoints{ - "eu-west-1": endpoint{}, "us-east-1": endpoint{}, }, }, - "managedblockchain": service{ + "iam": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "us-east-1": endpoint{}, + "aws-global": endpoint{ + Hostname: "iam.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "iam-fips": endpoint{ + Hostname: "iam-fips.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, }, }, - "marketplacecommerceanalytics": service{ + "importexport": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, Endpoints: endpoints{ - "us-east-1": endpoint{}, + "aws-global": endpoint{ + Hostname: "importexport.amazonaws.com", + SignatureVersions: []string{"v2", "v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + Service: "IngestionService", + }, + }, }, }, - "mediaconnect": service{ + "inspector": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "inspector-fips.us-east-1.amazonaws.com", + 
CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "inspector-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "inspector-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "inspector-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "mediaconvert": service{ - + "iot": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "execute-api", + }, + }, Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -2618,9 +2790,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -2628,62 +2802,99 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "medialive": service{ + "iotanalytics": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, }, }, - "mediapackage": service{ + "iotevents": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-west-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-2": endpoint{}, }, }, - "mediastore": service{ + "ioteventsdata": service{ Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-southeast-2": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "metering.marketplace": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "aws-marketplace", + "ap-northeast-1": endpoint{ + Hostname: "data.iotevents.ap-northeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, + }, + "ap-northeast-2": endpoint{ + Hostname: "data.iotevents.ap-northeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "ap-southeast-1": endpoint{ + Hostname: "data.iotevents.ap-southeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "ap-southeast-2": endpoint{ + Hostname: "data.iotevents.ap-southeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "eu-central-1": endpoint{ + Hostname: "data.iotevents.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "eu-west-1": 
endpoint{ + Hostname: "data.iotevents.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "eu-west-2": endpoint{ + Hostname: "data.iotevents.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "us-east-1": endpoint{ + Hostname: "data.iotevents.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{ + Hostname: "data.iotevents.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-2": endpoint{ + Hostname: "data.iotevents.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, }, }, + }, + "iotsecuredtunneling": service{ + Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -2705,36 +2916,23 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "mgh": service{ - - Endpoints: endpoints{ - "eu-central-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "mobileanalytics": service{ - - Endpoints: endpoints{ - "us-east-1": endpoint{}, - }, - }, - "models.lex": service{ + "iotthingsgraph": service{ Defaults: endpoint{ CredentialScope: credentialScope{ - Service: "lex", + Service: "iotthingsgraph", }, }, Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-southeast-2": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "monitoring": service{ - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - }, + "kafka": service{ + Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -2756,7 +2954,7 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "mq": service{ + "kinesis": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -2772,25 +2970,25 @@ var awsPartition = partition{ "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "fips-us-east-1": endpoint{ - Hostname: "mq-fips.us-east-1.amazonaws.com", + Hostname: "kinesis-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "fips-us-east-2": endpoint{ - Hostname: "mq-fips.us-east-2.amazonaws.com", + Hostname: "kinesis-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "fips-us-west-1": endpoint{ - Hostname: "mq-fips.us-west-1.amazonaws.com", + Hostname: "kinesis-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "fips-us-west-2": endpoint{ - Hostname: "mq-fips.us-west-2.amazonaws.com", + Hostname: "kinesis-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, @@ -2803,173 +3001,52 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "mturk-requester": service{ - IsRegionalized: boxedFalse, + "kinesisanalytics": service{ Endpoints: endpoints{ - "sandbox": endpoint{ - Hostname: "mturk-requester-sandbox.us-east-1.amazonaws.com", - }, - "us-east-1": endpoint{}, + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, - "neptune": service{ + "kinesisvideo": 
service{ Endpoints: endpoints{ - "ap-northeast-1": endpoint{ - Hostname: "rds.ap-northeast-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-northeast-1", - }, - }, - "ap-northeast-2": endpoint{ - Hostname: "rds.ap-northeast-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-northeast-2", - }, - }, - "ap-south-1": endpoint{ - Hostname: "rds.ap-south-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-south-1", - }, - }, - "ap-southeast-1": endpoint{ - Hostname: "rds.ap-southeast-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-1", - }, - }, - "ap-southeast-2": endpoint{ - Hostname: "rds.ap-southeast-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-2", - }, - }, - "ca-central-1": endpoint{ - Hostname: "rds.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - }, - "eu-central-1": endpoint{ - Hostname: "rds.eu-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-central-1", - }, - }, - "eu-north-1": endpoint{ - Hostname: "rds.eu-north-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-north-1", - }, - }, - "eu-west-1": endpoint{ - Hostname: "rds.eu-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-1", - }, - }, - "eu-west-2": endpoint{ - Hostname: "rds.eu-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-2", - }, - }, - "eu-west-3": endpoint{ - Hostname: "rds.eu-west-3.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-3", - }, - }, - "me-south-1": endpoint{ - Hostname: "rds.me-south-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "me-south-1", - }, - }, - "us-east-1": endpoint{ - Hostname: "rds.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "us-east-2": endpoint{ - Hostname: "rds.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-2", - }, - }, - "us-west-2": endpoint{ - Hostname: "rds.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-2", - }, - }, - }, - }, - "oidc": service{ - - Endpoints: endpoints{ - "ap-southeast-1": endpoint{ - Hostname: "oidc.ap-southeast-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-1", - }, - }, - "ap-southeast-2": endpoint{ - Hostname: "oidc.ap-southeast-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-2", - }, - }, - "ca-central-1": endpoint{ - Hostname: "oidc.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - }, - "eu-central-1": endpoint{ - Hostname: "oidc.eu-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-central-1", - }, - }, - "eu-west-1": endpoint{ - Hostname: "oidc.eu-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-1", - }, - }, - "eu-west-2": endpoint{ - Hostname: "oidc.eu-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-2", - }, - }, - "us-east-1": endpoint{ - Hostname: "oidc.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "us-east-2": endpoint{ - Hostname: "oidc.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-2", - }, - }, - "us-west-2": endpoint{ - Hostname: "oidc.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-2", - }, - }, + "ap-east-1": endpoint{}, + 
"ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, - "opsworks": service{ + "kms": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -2977,9 +3054,11 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -2987,39 +3066,78 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "opsworks-cm": service{ + "lakeformation": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "organizations": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, + "lambda": service{ Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "organizations.us-east-1.amazonaws.com", + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "lambda-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, + "fips-us-east-2": endpoint{ + Hostname: "lambda-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "lambda-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "lambda-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "outposts": service{ + "license-manager": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, @@ -3028,51 +3146,57 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "pinpoint": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "mobiletargeting", - }, - }, - Endpoints: endpoints{ - "ap-south-1": endpoint{}, - "ap-southeast-2": endpoint{}, - 
"eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, "fips-us-east-1": endpoint{ - Hostname: "pinpoint-fips.us-east-1.amazonaws.com", + Hostname: "license-manager-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, - "fips-us-west-2": endpoint{ - Hostname: "pinpoint-fips.us-west-2.amazonaws.com", + "fips-us-east-2": endpoint{ + Hostname: "license-manager-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "us-east-2", }, }, - "us-east-1": endpoint{ - Hostname: "pinpoint.us-east-1.amazonaws.com", + "fips-us-west-1": endpoint{ + Hostname: "license-manager-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-1", + Region: "us-west-1", }, }, - "us-west-2": endpoint{ - Hostname: "pinpoint.us-west-2.amazonaws.com", + "fips-us-west-2": endpoint{ + Hostname: "license-manager-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "polly": service{ + "lightsail": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "logs": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3095,95 +3219,53 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "portal.sso": service{ + "machinelearning": service{ Endpoints: endpoints{ - "ap-southeast-1": endpoint{ - Hostname: "portal.sso.ap-southeast-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-1", - }, - }, - "ap-southeast-2": endpoint{ - Hostname: "portal.sso.ap-southeast-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ap-southeast-2", - }, - }, - "ca-central-1": endpoint{ - Hostname: "portal.sso.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - }, - "eu-central-1": endpoint{ - Hostname: "portal.sso.eu-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-central-1", - }, - }, - "eu-west-1": endpoint{ - Hostname: "portal.sso.eu-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-1", - }, - }, - "eu-west-2": endpoint{ - Hostname: "portal.sso.eu-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "eu-west-2", - }, - }, - "us-east-1": endpoint{ - Hostname: "portal.sso.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "us-east-2": endpoint{ - Hostname: "portal.sso.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-2", - }, - }, - "us-west-2": endpoint{ - Hostname: "portal.sso.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-2", - }, - }, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, }, }, - "projects.iot1click": service{ + "managedblockchain": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, - "eu-central-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": 
endpoint{}, - "us-west-2": endpoint{}, }, }, - "qldb": service{ + "marketplacecommerceanalytics": service{ + + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "mediaconnect": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "ram": service{ + "mediaconvert": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, @@ -3191,11 +3273,9 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -3203,73 +3283,64 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "rds": service{ + "medialive": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, "sa-east-1": endpoint{}, - "us-east-1": endpoint{ - SSLCommonName: "{service}.{dnsSuffix}", - }, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, - "redshift": service{ + "mediapackage": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "rekognition": service{ + "mediastore": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "resource-groups": service{ - + "metering.marketplace": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "aws-marketplace", + }, + }, Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -3283,72 +3354,44 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "fips-us-east-1": endpoint{ - Hostname: "resource-groups-fips.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "fips-us-east-2": endpoint{ - Hostname: 
"resource-groups-fips.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-2", - }, - }, - "fips-us-west-1": endpoint{ - Hostname: "resource-groups-fips.us-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-1", - }, - }, - "fips-us-west-2": endpoint{ - Hostname: "resource-groups-fips.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-2", - }, - }, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "robomaker": service{ + "mgh": service{ Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, "us-west-2": endpoint{}, }, }, - "route53": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, + "mobileanalytics": service{ Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "route53.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, + "us-east-1": endpoint{}, }, }, - "route53domains": service{ - + "models.lex": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "lex", + }, + }, Endpoints: endpoints{ - "us-east-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "route53resolver": service{ + "monitoring": service{ Defaults: endpoint{ - Protocols: []string{"https"}, + Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3363,26 +3406,39 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "runtime.lex": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "lex", + "fips-us-east-1": endpoint{ + Hostname: "monitoring-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, }, - }, - Endpoints: endpoints{ - "ap-southeast-2": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-2": endpoint{ + Hostname: "monitoring-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "monitoring-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "monitoring-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "runtime.sagemaker": service{ + "mq": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3397,306 +3453,261 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "runtime-fips.sagemaker.us-east-1.amazonaws.com", + 
"fips-us-east-1": endpoint{ + Hostname: "mq-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, - "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "runtime-fips.sagemaker.us-east-2.amazonaws.com", + "fips-us-east-2": endpoint{ + Hostname: "mq-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, - "us-west-1": endpoint{}, - "us-west-1-fips": endpoint{ - Hostname: "runtime-fips.sagemaker.us-west-1.amazonaws.com", + "fips-us-west-1": endpoint{ + Hostname: "mq-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "runtime-fips.sagemaker.us-west-2.amazonaws.com", + "fips-us-west-2": endpoint{ + Hostname: "mq-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "s3": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedTrue, - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - SignatureVersions: []string{"s3v4"}, + "mturk-requester": service{ + IsRegionalized: boxedFalse, - HasDualStack: boxedTrue, - DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", - }, Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{ - Hostname: "s3.ap-northeast-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{ - Hostname: "s3.ap-southeast-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "ap-southeast-2": endpoint{ - Hostname: "s3.ap-southeast-2.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "aws-global": endpoint{ - Hostname: "s3.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{ - Hostname: "s3.eu-west-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "s3-external-1": endpoint{ - Hostname: "s3-external-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "sa-east-1": endpoint{ - Hostname: "s3.sa-east-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "us-east-1": endpoint{ - Hostname: "s3.us-east-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "us-east-2": endpoint{}, - "us-west-1": endpoint{ - Hostname: "s3.us-west-1.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, - }, - "us-west-2": endpoint{ - Hostname: "s3.us-west-2.amazonaws.com", - SignatureVersions: []string{"s3", "s3v4"}, + "sandbox": endpoint{ + Hostname: "mturk-requester-sandbox.us-east-1.amazonaws.com", }, + "us-east-1": endpoint{}, }, }, - "s3-control": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - SignatureVersions: []string{"s3v4"}, + "neptune": service{ - HasDualStack: boxedTrue, - DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", - }, Endpoints: endpoints{ "ap-northeast-1": endpoint{ - Hostname: "s3-control.ap-northeast-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: 
"rds.ap-northeast-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-northeast-1", }, }, "ap-northeast-2": endpoint{ - Hostname: "s3-control.ap-northeast-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.ap-northeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-northeast-2", }, }, "ap-south-1": endpoint{ - Hostname: "s3-control.ap-south-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.ap-south-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-south-1", }, }, "ap-southeast-1": endpoint{ - Hostname: "s3-control.ap-southeast-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.ap-southeast-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-1", }, }, "ap-southeast-2": endpoint{ - Hostname: "s3-control.ap-southeast-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.ap-southeast-2.amazonaws.com", CredentialScope: credentialScope{ Region: "ap-southeast-2", }, }, "ca-central-1": endpoint{ - Hostname: "s3-control.ca-central-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.ca-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "ca-central-1", }, }, "eu-central-1": endpoint{ - Hostname: "s3-control.eu-central-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.eu-central-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-central-1", }, }, "eu-north-1": endpoint{ - Hostname: "s3-control.eu-north-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.eu-north-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-north-1", }, }, "eu-west-1": endpoint{ - Hostname: "s3-control.eu-west-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.eu-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-1", }, }, "eu-west-2": endpoint{ - Hostname: "s3-control.eu-west-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.eu-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-2", }, }, "eu-west-3": endpoint{ - Hostname: "s3-control.eu-west-3.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.eu-west-3.amazonaws.com", CredentialScope: credentialScope{ Region: "eu-west-3", }, }, - "sa-east-1": endpoint{ - Hostname: "s3-control.sa-east-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "me-south-1": endpoint{ + Hostname: "rds.me-south-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "sa-east-1", + Region: "me-south-1", }, }, "us-east-1": endpoint{ - Hostname: "s3-control.us-east-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + Hostname: "rds.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, - "us-east-1-fips": endpoint{ - Hostname: "s3-control-fips.us-east-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "us-east-2": endpoint{ + Hostname: "rds.us-east-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-1", + Region: "us-east-2", }, }, - "us-east-2": endpoint{ - Hostname: "s3-control.us-east-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "us-west-2": endpoint{ + Hostname: "rds.us-west-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-2", + Region: "us-west-2", }, }, - "us-east-2-fips": endpoint{ - Hostname: "s3-control-fips.us-east-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + }, + }, + "oidc": service{ + + Endpoints: 
endpoints{ + "ap-southeast-1": endpoint{ + Hostname: "oidc.ap-southeast-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-2", + Region: "ap-southeast-1", }, }, - "us-west-1": endpoint{ - Hostname: "s3-control.us-west-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "ap-southeast-2": endpoint{ + Hostname: "oidc.ap-southeast-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-1", + Region: "ap-southeast-2", }, }, - "us-west-1-fips": endpoint{ - Hostname: "s3-control-fips.us-west-1.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "ca-central-1": endpoint{ + Hostname: "oidc.ca-central-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-1", + Region: "ca-central-1", }, }, - "us-west-2": endpoint{ - Hostname: "s3-control.us-west-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "eu-central-1": endpoint{ + Hostname: "oidc.eu-central-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "eu-central-1", }, }, - "us-west-2-fips": endpoint{ - Hostname: "s3-control-fips.us-west-2.amazonaws.com", - SignatureVersions: []string{"s3v4"}, + "eu-west-1": endpoint{ + Hostname: "oidc.eu-west-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "eu-west-1", }, }, - }, - }, - "savingsplans": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, - - Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "savingsplans.amazonaws.com", + "eu-west-2": endpoint{ + Hostname: "oidc.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "us-east-1": endpoint{ + Hostname: "oidc.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, + "us-east-2": endpoint{ + Hostname: "oidc.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-2": endpoint{ + Hostname: "oidc.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, - "schemas": service{ + "opsworks": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "sdb": service{ - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - SignatureVersions: []string{"v2"}, - }, + "opsworks-cm": service{ + Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{ - Hostname: "sdb.amazonaws.com", + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "organizations": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "organizations.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-aws-global": endpoint{ + Hostname: "organizations-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, }, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, }, }, 
- "secretsmanager": service{ + "outposts": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, @@ -3706,38 +3717,50 @@ var awsPartition = partition{ "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "secretsmanager-fips.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "pinpoint": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "mobiletargeting", }, - "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "secretsmanager-fips.us-east-2.amazonaws.com", + }, + Endpoints: endpoints{ + "ap-south-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "pinpoint-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-2", + Region: "us-east-1", }, }, - "us-west-1": endpoint{}, - "us-west-1-fips": endpoint{ - Hostname: "secretsmanager-fips.us-west-1.amazonaws.com", + "fips-us-west-2": endpoint{ + Hostname: "pinpoint-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-1", + Region: "us-west-2", }, }, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "secretsmanager-fips.us-west-2.amazonaws.com", + "us-east-1": endpoint{ + Hostname: "pinpoint.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-west-2": endpoint{ + Hostname: "pinpoint.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, }, }, - "securityhub": service{ + "polly": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3752,123 +3775,124 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "serverlessrepo": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{ - Protocols: []string{"https"}, + "fips-us-east-1": endpoint{ + Hostname: "polly-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, }, - "ap-northeast-1": endpoint{ - Protocols: []string{"https"}, + "fips-us-east-2": endpoint{ + Hostname: "polly-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, }, - "ap-northeast-2": endpoint{ - Protocols: []string{"https"}, + "fips-us-west-1": endpoint{ + Hostname: "polly-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, }, - "ap-south-1": endpoint{ - Protocols: []string{"https"}, + "fips-us-west-2": endpoint{ + Hostname: "polly-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "portal.sso": service{ + + Endpoints: endpoints{ "ap-southeast-1": endpoint{ - Protocols: []string{"https"}, + 
Hostname: "portal.sso.ap-southeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, }, "ap-southeast-2": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.ap-southeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, }, "ca-central-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, }, "eu-central-1": endpoint{ - Protocols: []string{"https"}, - }, - "eu-north-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, }, "eu-west-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, }, "eu-west-2": endpoint{ - Protocols: []string{"https"}, - }, - "eu-west-3": endpoint{ - Protocols: []string{"https"}, - }, - "me-south-1": endpoint{ - Protocols: []string{"https"}, - }, - "sa-east-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, }, "us-east-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, }, "us-east-2": endpoint{ - Protocols: []string{"https"}, - }, - "us-west-1": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, }, "us-west-2": endpoint{ - Protocols: []string{"https"}, + Hostname: "portal.sso.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, }, }, }, - "servicecatalog": service{ + "projects.iot1click": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "qldb": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "servicecatalog-fips.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "servicecatalog-fips.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-east-2", - }, - }, - "us-west-1": endpoint{}, - "us-west-1-fips": endpoint{ - Hostname: "servicecatalog-fips.us-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-1", - }, - }, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "servicecatalog-fips.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Region: "us-west-2", - }, - }, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, - "servicediscovery": service{ + "ram": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3891,31 +3915,32 @@ var awsPartition = partition{ "us-west-2": 
endpoint{}, }, }, - "session.qldb": service{ + "rds": service{ Endpoints: endpoints{ + "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "shield": service{ - IsRegionalized: boxedFalse, - Defaults: endpoint{ - SSLCommonName: "shield.us-east-1.amazonaws.com", - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "us-east-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "{service}.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "sms": service{ + "redshift": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -3930,26 +3955,32 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "redshift-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, "fips-us-east-1": endpoint{ - Hostname: "sms-fips.us-east-1.amazonaws.com", + Hostname: "redshift-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "fips-us-east-2": endpoint{ - Hostname: "sms-fips.us-east-2.amazonaws.com", + Hostname: "redshift-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "fips-us-west-1": endpoint{ - Hostname: "sms-fips.us-west-1.amazonaws.com", + Hostname: "redshift-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "fips-us-west-2": endpoint{ - Hostname: "sms-fips.us-west-2.amazonaws.com", + Hostname: "redshift-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, @@ -3962,7 +3993,7 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "snowball": service{ + "rekognition": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, @@ -3970,48 +4001,17 @@ var awsPartition = partition{ "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "sns": service{ - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "sqs": service{ - Defaults: endpoint{ - SSLCommonName: "{region}.queue.{dnsSuffix}", - Protocols: []string{"http", "https"}, - }, + "resource-groups": service{ + Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ 
-4026,64 +4026,72 @@ var awsPartition = partition{ "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "fips-us-east-1": endpoint{ - Hostname: "sqs-fips.us-east-1.amazonaws.com", + Hostname: "resource-groups-fips.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "fips-us-east-2": endpoint{ - Hostname: "sqs-fips.us-east-2.amazonaws.com", + Hostname: "resource-groups-fips.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "fips-us-west-1": endpoint{ - Hostname: "sqs-fips.us-west-1.amazonaws.com", + Hostname: "resource-groups-fips.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "fips-us-west-2": endpoint{ - Hostname: "sqs-fips.us-west-2.amazonaws.com", + Hostname: "resource-groups-fips.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, "me-south-1": endpoint{}, "sa-east-1": endpoint{}, - "us-east-1": endpoint{ - SSLCommonName: "queue.{dnsSuffix}", - }, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, - "ssm": service{ + "robomaker": service{ Endpoints: endpoints{ - "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, - "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, - "states": service{ + "route53": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "route53.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "route53domains": service{ + Endpoints: endpoints{ + "us-east-1": endpoint{}, + }, + }, + "route53resolver": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, Endpoints: endpoints{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, @@ -4105,7 +4113,20 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, - "storagegateway": service{ + "runtime.lex": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "lex", + }, + }, + Endpoints: endpoints{ + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "runtime.sagemaker": service{ Endpoints: endpoints{ "ap-east-1": endpoint{}, @@ -4123,87 +4144,64 @@ var awsPartition = partition{ "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "streams.dynamodb": service{ - Defaults: endpoint{ - Protocols: []string{"http", "https"}, - CredentialScope: credentialScope{ - Service: "dynamodb", - }, - }, - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "ca-central-1-fips": endpoint{ - Hostname: "dynamodb-fips.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - }, - "eu-central-1": endpoint{}, - 
"eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "local": endpoint{ - Hostname: "localhost:8000", - Protocols: []string{"http"}, - CredentialScope: credentialScope{ - Region: "us-east-1", - }, - }, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, "us-east-1-fips": endpoint{ - Hostname: "dynamodb-fips.us-east-1.amazonaws.com", + Hostname: "runtime-fips.sagemaker.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, "us-east-2": endpoint{}, "us-east-2-fips": endpoint{ - Hostname: "dynamodb-fips.us-east-2.amazonaws.com", + Hostname: "runtime-fips.sagemaker.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, "us-west-1": endpoint{}, "us-west-1-fips": endpoint{ - Hostname: "dynamodb-fips.us-west-1.amazonaws.com", + Hostname: "runtime-fips.sagemaker.us-west-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-1", }, }, "us-west-2": endpoint{}, "us-west-2-fips": endpoint{ - Hostname: "dynamodb-fips.us-west-2.amazonaws.com", + Hostname: "runtime-fips.sagemaker.us-west-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-west-2", }, }, }, }, - "sts": service{ + "s3": service{ PartitionEndpoint: "aws-global", + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"s3v4"}, + HasDualStack: boxedTrue, + DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", + }, Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{ + Hostname: "s3.ap-northeast-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, + "ap-southeast-1": endpoint{ + Hostname: "s3.ap-southeast-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "ap-southeast-2": endpoint{ + Hostname: "s3.ap-southeast-2.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, "aws-global": endpoint{ - Hostname: "sts.amazonaws.com", + Hostname: "s3.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, CredentialScope: credentialScope{ Region: "us-east-1", }, @@ -4211,230 +4209,1427 @@ var awsPartition = partition{ "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "sts-fips.us-east-1.amazonaws.com", + "eu-west-1": endpoint{ + Hostname: "s3.eu-west-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "s3-external-1": endpoint{ + Hostname: "s3-external-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, CredentialScope: credentialScope{ Region: "us-east-1", }, }, + "sa-east-1": endpoint{ + Hostname: "s3.sa-east-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + "us-east-1": endpoint{ + Hostname: "s3.us-east-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "sts-fips.us-east-2.amazonaws.com", + "us-west-1": endpoint{ + Hostname: "s3.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + 
"us-west-2": endpoint{ + Hostname: "s3.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3", "s3v4"}, + }, + }, + }, + "s3-control": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + SignatureVersions: []string{"s3v4"}, + + HasDualStack: boxedTrue, + DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{ + Hostname: "s3-control.ap-northeast-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, CredentialScope: credentialScope{ - Region: "us-east-2", + Region: "ap-northeast-1", }, }, - "us-west-1": endpoint{}, - "us-west-1-fips": endpoint{ - Hostname: "sts-fips.us-west-1.amazonaws.com", + "ap-northeast-2": endpoint{ + Hostname: "s3-control.ap-northeast-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, CredentialScope: credentialScope{ - Region: "us-west-1", + Region: "ap-northeast-2", }, }, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "sts-fips.us-west-2.amazonaws.com", + "ap-south-1": endpoint{ + Hostname: "s3-control.ap-south-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "ap-southeast-1": endpoint{ + Hostname: "s3-control.ap-southeast-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "ap-southeast-2": endpoint{ + Hostname: "s3-control.ap-southeast-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "ca-central-1": endpoint{ + Hostname: "s3-control.ca-central-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "eu-central-1": endpoint{ + Hostname: "s3-control.eu-central-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "eu-north-1": endpoint{ + Hostname: "s3-control.eu-north-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-north-1", + }, + }, + "eu-west-1": endpoint{ + Hostname: "s3-control.eu-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "eu-west-2": endpoint{ + Hostname: "s3-control.eu-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "eu-west-3": endpoint{ + Hostname: "s3-control.eu-west-3.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-3", + }, + }, + "sa-east-1": endpoint{ + Hostname: "s3-control.sa-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "sa-east-1", + }, + }, + "us-east-1": endpoint{ + Hostname: "s3-control.us-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-1-fips": endpoint{ + Hostname: "s3-control-fips.us-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{ + Hostname: "s3-control.us-east-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-east-2-fips": endpoint{ + Hostname: "s3-control-fips.us-east-2.amazonaws.com", + SignatureVersions: 
[]string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{ + Hostname: "s3-control.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-1-fips": endpoint{ + Hostname: "s3-control-fips.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{ + Hostname: "s3-control.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-west-2-fips": endpoint{ + Hostname: "s3-control-fips.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "savingsplans": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "savingsplans.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "schemas": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sdb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + SignatureVersions: []string{"v2"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + Hostname: "sdb.amazonaws.com", + }, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "secretsmanager": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "secretsmanager-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "secretsmanager-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "secretsmanager-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "secretsmanager-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "securityhub": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "serverlessrepo": service{ + 
Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-northeast-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-northeast-2": endpoint{ + Protocols: []string{"https"}, + }, + "ap-south-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-southeast-1": endpoint{ + Protocols: []string{"https"}, + }, + "ap-southeast-2": endpoint{ + Protocols: []string{"https"}, + }, + "ca-central-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-central-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-north-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-west-1": endpoint{ + Protocols: []string{"https"}, + }, + "eu-west-2": endpoint{ + Protocols: []string{"https"}, + }, + "eu-west-3": endpoint{ + Protocols: []string{"https"}, + }, + "me-south-1": endpoint{ + Protocols: []string{"https"}, + }, + "sa-east-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-east-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-east-2": endpoint{ + Protocols: []string{"https"}, + }, + "us-west-1": endpoint{ + Protocols: []string{"https"}, + }, + "us-west-2": endpoint{ + Protocols: []string{"https"}, + }, + }, + }, + "servicecatalog": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "servicecatalog-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "servicecatalog-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "servicediscovery": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "session.qldb": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "shield": service{ + IsRegionalized: boxedFalse, + Defaults: endpoint{ + SSLCommonName: "shield.us-east-1.amazonaws.com", + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + 
"fips-us-east-1": endpoint{ + Hostname: "shield-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-1": endpoint{ + Hostname: "shield.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "sms": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "sms-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "sms-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "sms-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "sms-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "snowball": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-ap-northeast-1": endpoint{ + Hostname: "snowball-fips.ap-northeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, + }, + "fips-ap-northeast-2": endpoint{ + Hostname: "snowball-fips.ap-northeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "fips-ap-south-1": endpoint{ + Hostname: "snowball-fips.ap-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "fips-ap-southeast-1": endpoint{ + Hostname: "snowball-fips.ap-southeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "fips-ap-southeast-2": endpoint{ + Hostname: "snowball-fips.ap-southeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "fips-ca-central-1": endpoint{ + Hostname: "snowball-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-eu-central-1": endpoint{ + Hostname: "snowball-fips.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "fips-eu-west-1": endpoint{ + Hostname: "snowball-fips.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "fips-eu-west-2": endpoint{ + Hostname: "snowball-fips.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "fips-eu-west-3": endpoint{ + Hostname: "snowball-fips.eu-west-3.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-3", + }, + }, + "fips-sa-east-1": endpoint{ + Hostname: "snowball-fips.sa-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "sa-east-1", + }, + }, 
+ "fips-us-east-1": endpoint{ + Hostname: "snowball-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "snowball-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "snowball-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "snowball-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sns": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "sns-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "sns-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "sns-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "sns-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "sqs": service{ + Defaults: endpoint{ + SSLCommonName: "{region}.queue.{dnsSuffix}", + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "sqs-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "sqs-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "sqs-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "sqs-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{ + SSLCommonName: "queue.{dnsSuffix}", + }, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "ssm": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + 
"eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "ssm-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "ssm-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "ssm-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "ssm-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "ssm-facade-fips-us-east-1": endpoint{ + Hostname: "ssm-facade-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "ssm-facade-fips-us-east-2": endpoint{ + Hostname: "ssm-facade-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "ssm-facade-fips-us-west-1": endpoint{ + Hostname: "ssm-facade-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "ssm-facade-fips-us-west-2": endpoint{ + Hostname: "ssm-facade-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "states": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "states-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "states-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "states-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "states-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "storagegateway": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "streams.dynamodb": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "dynamodb", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": 
endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "ca-central-1-fips": endpoint{ + Hostname: "dynamodb-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "local": endpoint{ + Hostname: "localhost:8000", + Protocols: []string{"http"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "dynamodb-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "dynamodb-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "dynamodb-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "dynamodb-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "sts": service{ + PartitionEndpoint: "aws-global", + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "aws-global": endpoint{ + Hostname: "sts.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "sts-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "sts-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "sts-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "sts-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "support": service{ + PartitionEndpoint: "aws-global", + + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "support.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "swf": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "swf-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "swf-fips.us-east-2.amazonaws.com", + 
CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "swf-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "swf-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "tagging": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "transcribe": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "fips.transcribe.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "fips.transcribe.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "fips.transcribe.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "fips.transcribe.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "transcribestreaming": service{ + + Endpoints: endpoints{ + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "transfer": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "translate": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + 
"us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "translate-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "translate-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "translate-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "waf": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + + Endpoints: endpoints{ + "aws-fips": endpoint{ + Hostname: "waf-fips.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "aws-global": endpoint{ + Hostname: "waf.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, + "waf-regional": service{ + + Endpoints: endpoints{ + "ap-east-1": endpoint{ + Hostname: "waf-regional.ap-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-east-1", + }, + }, + "ap-northeast-1": endpoint{ + Hostname: "waf-regional.ap-northeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, + }, + "ap-northeast-2": endpoint{ + Hostname: "waf-regional.ap-northeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "ap-south-1": endpoint{ + Hostname: "waf-regional.ap-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "ap-southeast-1": endpoint{ + Hostname: "waf-regional.ap-southeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "ap-southeast-2": endpoint{ + Hostname: "waf-regional.ap-southeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "ca-central-1": endpoint{ + Hostname: "waf-regional.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "eu-central-1": endpoint{ + Hostname: "waf-regional.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "eu-north-1": endpoint{ + Hostname: "waf-regional.eu-north-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-north-1", + }, + }, + "eu-west-1": endpoint{ + Hostname: "waf-regional.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "eu-west-2": endpoint{ + Hostname: "waf-regional.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "eu-west-3": endpoint{ + Hostname: "waf-regional.eu-west-3.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-3", + }, + }, + "fips-ap-east-1": endpoint{ + Hostname: "waf-regional-fips.ap-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-east-1", + }, + }, + "fips-ap-northeast-1": endpoint{ + Hostname: "waf-regional-fips.ap-northeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, + }, + "fips-ap-northeast-2": endpoint{ + Hostname: "waf-regional-fips.ap-northeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "fips-ap-south-1": endpoint{ + Hostname: "waf-regional-fips.ap-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "fips-ap-southeast-1": endpoint{ + Hostname: 
"waf-regional-fips.ap-southeast-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "fips-ap-southeast-2": endpoint{ + Hostname: "waf-regional-fips.ap-southeast-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "fips-ca-central-1": endpoint{ + Hostname: "waf-regional-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-eu-central-1": endpoint{ + Hostname: "waf-regional-fips.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "fips-eu-north-1": endpoint{ + Hostname: "waf-regional-fips.eu-north-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-north-1", + }, + }, + "fips-eu-west-1": endpoint{ + Hostname: "waf-regional-fips.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "fips-eu-west-2": endpoint{ + Hostname: "waf-regional-fips.eu-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "fips-eu-west-3": endpoint{ + Hostname: "waf-regional-fips.eu-west-3.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-3", + }, + }, + "fips-me-south-1": endpoint{ + Hostname: "waf-regional-fips.me-south-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "me-south-1", + }, + }, + "fips-sa-east-1": endpoint{ + Hostname: "waf-regional-fips.sa-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "sa-east-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "waf-regional-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "waf-regional-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "waf-regional-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "waf-regional-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{ + Hostname: "waf-regional.me-south-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "me-south-1", }, }, - }, - }, - "support": service{ - PartitionEndpoint: "aws-global", - - Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "support.us-east-1.amazonaws.com", + "sa-east-1": endpoint{ + Hostname: "waf-regional.sa-east-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-1", + Region: "sa-east-1", }, }, - }, - }, - "swf": service{ - - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "tagging": service{ - - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": 
endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "transcribe": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "me-south-1": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "transcribestreaming": service{ - - Endpoints: endpoints{ - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "transfer": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, - "translate": service{ - Defaults: endpoint{ - Protocols: []string{"https"}, - }, - Endpoints: endpoints{ - "ap-east-1": endpoint{}, - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "us-east-1": endpoint{}, - "us-east-1-fips": endpoint{ - Hostname: "translate-fips.us-east-1.amazonaws.com", + "us-east-1": endpoint{ + Hostname: "waf-regional.us-east-1.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-1", }, }, - "us-east-2": endpoint{}, - "us-east-2-fips": endpoint{ - Hostname: "translate-fips.us-east-2.amazonaws.com", + "us-east-2": endpoint{ + Hostname: "waf-regional.us-east-2.amazonaws.com", CredentialScope: credentialScope{ Region: "us-east-2", }, }, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - "us-west-2-fips": endpoint{ - Hostname: "translate-fips.us-west-2.amazonaws.com", + "us-west-1": endpoint{ + Hostname: "waf-regional.us-west-1.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-west-2", + Region: "us-west-1", }, }, - }, - }, - "waf": service{ - PartitionEndpoint: "aws-global", - IsRegionalized: boxedFalse, - - Endpoints: endpoints{ - "aws-global": endpoint{ - Hostname: "waf.amazonaws.com", + "us-west-2": endpoint{ + Hostname: "waf-regional.us-west-2.amazonaws.com", CredentialScope: credentialScope{ - Region: "us-east-1", + Region: "us-west-2", }, }, }, }, - "waf-regional": service{ - - Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-north-1": 
endpoint{}, - "eu-west-1": endpoint{}, - "eu-west-2": endpoint{}, - "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, - }, - }, "workdocs": service{ Endpoints: endpoints{ @@ -4442,8 +5637,20 @@ var awsPartition = partition{ "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "workdocs-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "workdocs-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "workmail": service{ @@ -4545,6 +5752,13 @@ var awscnPartition = partition{ }, }, }, + "api.sagemaker": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "apigateway": service{ Endpoints: endpoints{ @@ -4570,6 +5784,7 @@ var awscnPartition = partition{ "athena": service{ Endpoints: endpoints{ + "cn-north-1": endpoint{}, "cn-northwest-1": endpoint{}, }, }, @@ -4743,6 +5958,18 @@ var awscnPartition = partition{ Endpoints: endpoints{ "cn-north-1": endpoint{}, "cn-northwest-1": endpoint{}, + "fips-cn-north-1": endpoint{ + Hostname: "elasticfilesystem-fips.cn-north-1.amazonaws.com.cn", + CredentialScope: credentialScope{ + Region: "cn-north-1", + }, + }, + "fips-cn-northwest-1": endpoint{ + Hostname: "elasticfilesystem-fips.cn-northwest-1.amazonaws.com.cn", + CredentialScope: credentialScope{ + Region: "cn-northwest-1", + }, + }, }, }, "elasticloadbalancing": service{ @@ -4802,6 +6029,7 @@ var awscnPartition = partition{ "glue": service{ Endpoints: endpoints{ + "cn-north-1": endpoint{}, "cn-northwest-1": endpoint{}, }, }, @@ -4845,6 +6073,20 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "iotsecuredtunneling": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "kafka": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "kinesis": service{ Endpoints: endpoints{ @@ -4931,6 +6173,13 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "runtime.sagemaker": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "s3": service{ Defaults: endpoint{ Protocols: []string{"http", "https"}, @@ -4994,6 +6243,12 @@ var awscnPartition = partition{ Endpoints: endpoints{ "cn-north-1": endpoint{}, + "fips-cn-north-1": endpoint{ + Hostname: "snowball-fips.cn-north-1.amazonaws.com.cn", + CredentialScope: credentialScope{ + Region: "cn-north-1", + }, + }, }, }, "sns": service{ @@ -5165,6 +6420,18 @@ var awsusgovPartition = partition{ "api.ecr": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "ecr-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "ecr-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{ Hostname: "api.ecr.us-gov-east-1.amazonaws.com", CredentialScope: credentialScope{ @@ -5258,6 +6525,18 @@ var awsusgovPartition = partition{ "batch": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: 
"batch.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "batch.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5308,12 +6587,30 @@ var awsusgovPartition = partition{ Endpoints: endpoints{ "us-gov-east-1": endpoint{}, + "us-gov-east-1-fips": endpoint{ + Hostname: "codebuild-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "codebuild-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "codecommit": service{ Endpoints: endpoints{ + "fips": endpoint{ + Hostname: "codecommit-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5337,11 +6634,29 @@ var awsusgovPartition = partition{ }, }, }, + "codepipeline": service{ + + Endpoints: endpoints{ + "fips-us-gov-west-1": endpoint{ + Hostname: "codepipeline-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-west-1": endpoint{}, + }, + }, "comprehend": service{ Defaults: endpoint{ Protocols: []string{"https"}, }, Endpoints: endpoints{ + "fips-us-gov-west-1": endpoint{ + Hostname: "comprehend-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-west-1": endpoint{}, }, }, @@ -5385,13 +6700,29 @@ var awsusgovPartition = partition{ "directconnect": service{ Endpoints: endpoints{ - "us-gov-east-1": endpoint{}, - "us-gov-west-1": endpoint{}, + "us-gov-east-1": endpoint{ + Hostname: "directconnect.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "directconnect.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "dms": service{ Endpoints: endpoints{ + "dms-fips": endpoint{ + Hostname: "dms.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5399,6 +6730,18 @@ var awsusgovPartition = partition{ "ds": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "ds-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "ds-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5425,8 +6768,18 @@ var awsusgovPartition = partition{ "ec2": service{ Endpoints: endpoints{ - "us-gov-east-1": endpoint{}, - "us-gov-west-1": endpoint{}, + "us-gov-east-1": endpoint{ + Hostname: "ec2.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "ec2.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "ec2metadata": service{ @@ -5443,6 +6796,18 @@ var awsusgovPartition = partition{ "ecs": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: 
"ecs-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "ecs-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5463,13 +6828,35 @@ var awsusgovPartition = partition{ "elasticbeanstalk": service{ Endpoints: endpoints{ - "us-gov-east-1": endpoint{}, - "us-gov-west-1": endpoint{}, + "us-gov-east-1": endpoint{ + Hostname: "elasticbeanstalk.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "elasticbeanstalk.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "elasticfilesystem": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "elasticfilesystem-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "elasticfilesystem-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5515,6 +6902,18 @@ var awsusgovPartition = partition{ "firehose": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "firehose-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "firehose-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5540,6 +6939,18 @@ var awsusgovPartition = partition{ "glue": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "glue-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "glue-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5584,6 +6995,18 @@ var awsusgovPartition = partition{ "inspector": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "inspector-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "inspector-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5627,6 +7050,18 @@ var awsusgovPartition = partition{ "lambda": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "lambda-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "lambda-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5634,6 +7069,18 @@ var awsusgovPartition = partition{ "license-manager": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "license-manager-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + 
Hostname: "license-manager-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5665,6 +7112,18 @@ var awsusgovPartition = partition{ "monitoring": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "monitoring.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "monitoring.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5699,9 +7158,22 @@ var awsusgovPartition = partition{ }, }, }, + "outposts": service{ + + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, "polly": service{ Endpoints: endpoints{ + "fips-us-gov-west-1": endpoint{ + Hostname: "polly-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-west-1": endpoint{}, }, }, @@ -5722,8 +7194,18 @@ var awsusgovPartition = partition{ "redshift": service{ Endpoints: endpoints{ - "us-gov-east-1": endpoint{}, - "us-gov-west-1": endpoint{}, + "us-gov-east-1": endpoint{ + Hostname: "redshift.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "redshift.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "rekognition": service{ @@ -5869,6 +7351,13 @@ var awsusgovPartition = partition{ "servicecatalog": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-east-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, "us-gov-west-1": endpoint{}, "us-gov-west-1-fips": endpoint{ Hostname: "servicecatalog-fips.us-gov-west-1.amazonaws.com", @@ -5900,6 +7389,18 @@ var awsusgovPartition = partition{ "snowball": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "snowball-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "snowball-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5933,6 +7434,18 @@ var awsusgovPartition = partition{ "states": service{ Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "states-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "states.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -5989,8 +7502,18 @@ var awsusgovPartition = partition{ "swf": service{ Endpoints: endpoints{ - "us-gov-east-1": endpoint{}, - "us-gov-west-1": endpoint{}, + "us-gov-east-1": endpoint{ + Hostname: "swf.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "swf.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "tagging": service{ @@ -6005,6 +7528,18 @@ var awsusgovPartition = partition{ 
Protocols: []string{"https"}, }, Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "fips.transcribe.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "fips.transcribe.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, @@ -6026,7 +7561,18 @@ var awsusgovPartition = partition{ "waf-regional": service{ Endpoints: endpoints{ - "us-gov-west-1": endpoint{}, + "fips-us-gov-west-1": endpoint{ + Hostname: "waf-regional-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "waf-regional.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "workspaces": service{ @@ -6153,6 +7699,12 @@ var awsisoPartition = partition{ "dms": service{ Endpoints: endpoints{ + "dms-fips": endpoint{ + Hostname: "dms.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + }, "us-iso-east-1": endpoint{}, }, }, @@ -6473,6 +8025,12 @@ var awsisobPartition = partition{ "dms": service{ Endpoints: endpoints{ + "dms-fips": endpoint{ + Hostname: "dms.us-isob-east-1.sc2s.sgov.gov", + CredentialScope: credentialScope{ + Region: "us-isob-east-1", + }, + }, "us-isob-east-1": endpoint{}, }, }, diff --git a/aws/version.go b/aws/version.go index 4fb1a55dad2..6a8557741c5 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "0.20.0" +const SDKVersion = "0.21.0" diff --git a/models/apis/AWSMigrationHub/2017-05-31/api-2.json b/models/apis/AWSMigrationHub/2017-05-31/api-2.json index 4c7f29c5f19..725e7ff1046 100644 --- a/models/apis/AWSMigrationHub/2017-05-31/api-2.json +++ b/models/apis/AWSMigrationHub/2017-05-31/api-2.json @@ -22,6 +22,7 @@ "output":{"shape":"AssociateCreatedArtifactResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -41,6 +42,7 @@ "output":{"shape":"AssociateDiscoveredResourceResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -61,6 +63,7 @@ "output":{"shape":"CreateProgressUpdateStreamResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -79,6 +82,7 @@ "output":{"shape":"DeleteProgressUpdateStreamResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -98,6 +102,7 @@ "output":{"shape":"DescribeApplicationStateResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -116,6 +121,7 @@ "output":{"shape":"DescribeMigrationTaskResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, 
{"shape":"InvalidInputException"}, @@ -133,6 +139,7 @@ "output":{"shape":"DisassociateCreatedArtifactResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -152,6 +159,7 @@ "output":{"shape":"DisassociateDiscoveredResourceResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -171,6 +179,7 @@ "output":{"shape":"ImportMigrationTaskResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -190,6 +199,7 @@ "output":{"shape":"ListApplicationStatesResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -206,6 +216,7 @@ "output":{"shape":"ListCreatedArtifactsResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -223,6 +234,7 @@ "output":{"shape":"ListDiscoveredResourcesResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -240,6 +252,7 @@ "output":{"shape":"ListMigrationTasksResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -258,6 +271,7 @@ "output":{"shape":"ListProgressUpdateStreamsResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"InvalidInputException"}, @@ -274,6 +288,7 @@ "output":{"shape":"NotifyApplicationStateResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -294,6 +309,7 @@ "output":{"shape":"NotifyMigrationTaskStateResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -313,6 +329,7 @@ "output":{"shape":"PutResourceAttributesResult"}, "errors":[ {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"DryRunOperation"}, @@ -334,7 +351,8 @@ "ApplicationId":{ "type":"string", "max":1600, - "min":1 + "min":1, + "pattern":"^.{1,1600}$" }, "ApplicationIds":{ "type":"list", @@ -404,7 +422,9 @@ }, "ConfigurationId":{ "type":"string", - "min":1 + "max":1600, + "min":1, + "pattern":"^.{1,1600}$" }, "CreateProgressUpdateStreamRequest":{ "type":"structure", @@ -430,7 +450,8 @@ "CreatedArtifactDescription":{ "type":"string", "max":500, - "min":0 + "min":0, + "pattern":"^.{0,500}$" }, "CreatedArtifactList":{ "type":"list", @@ -535,7 +556,8 @@ "DiscoveredResourceDescription":{ "type":"string", "max":500, - "min":0 + "min":0, + "pattern":"^.{0,500}$" }, "DiscoveredResourceList":{ "type":"list", @@ 
-856,12 +878,14 @@ "ResourceAttributeValue":{ "type":"string", "max":256, - "min":1 + "min":1, + "pattern":"^.{1,256}$" }, "ResourceName":{ "type":"string", "max":1600, - "min":1 + "min":1, + "pattern":"^.{1,1600}$" }, "ResourceNotFoundException":{ "type":"structure", @@ -870,6 +894,7 @@ }, "exception":true }, + "RetryAfterSeconds":{"type":"integer"}, "ServiceUnavailableException":{ "type":"structure", "members":{ @@ -890,7 +915,8 @@ "StatusDetail":{ "type":"string", "max":500, - "min":0 + "min":0, + "pattern":"^.{0,500}$" }, "Task":{ "type":"structure", @@ -901,7 +927,21 @@ "ProgressPercent":{"shape":"ProgressPercent"} } }, - "Token":{"type":"string"}, + "ThrottlingException":{ + "type":"structure", + "required":["Message"], + "members":{ + "Message":{"shape":"ErrorMessage"}, + "RetryAfterSeconds":{"shape":"RetryAfterSeconds"} + }, + "exception":true + }, + "Token":{ + "type":"string", + "max":2048, + "min":0, + "pattern":"^[a-zA-Z0-9\\/\\+\\=]{0,2048}$" + }, "UnauthorizedOperation":{ "type":"structure", "members":{ diff --git a/models/apis/AWSMigrationHub/2017-05-31/docs-2.json b/models/apis/AWSMigrationHub/2017-05-31/docs-2.json index 88d8b4a4860..4eb179ad67d 100644 --- a/models/apis/AWSMigrationHub/2017-05-31/docs-2.json +++ b/models/apis/AWSMigrationHub/2017-05-31/docs-2.json @@ -224,6 +224,7 @@ "PolicyErrorException$Message": null, "ResourceNotFoundException$Message": null, "ServiceUnavailableException$Message": null, + "ThrottlingException$Message": "
A message that provides information about the exception.", "UnauthorizedOperation$Message": null } }, @@ -479,6 +480,12 @@ "refs": { } },
+ "RetryAfterSeconds": { + "base": null, + "refs": { + "ThrottlingException$RetryAfterSeconds": "The number of seconds the caller should wait before retrying." + } + },
"ServiceUnavailableException": { "base": "Exception raised when there is an internal, configuration, or dependency error encountered.", "refs": { @@ -505,6 +512,11 @@ "NotifyMigrationTaskStateRequest$Task": "Information about the task's progress and status." } },
+ "ThrottlingException": { + "base": "The request was denied due to request throttling.
", + "refs": { + } + }, "Token": { "base": null, "refs": { diff --git a/models/apis/accessanalyzer/2019-11-01/api-2.json b/models/apis/accessanalyzer/2019-11-01/api-2.json index f40330db37d..1d8a73732c4 100644 --- a/models/apis/accessanalyzer/2019-11-01/api-2.json +++ b/models/apis/accessanalyzer/2019-11-01/api-2.json @@ -346,6 +346,7 @@ "createdAt", "isPublic", "resourceArn", + "resourceOwnerAccount", "resourceType", "updatedAt" ], @@ -356,6 +357,7 @@ "error":{"shape":"String"}, "isPublic":{"shape":"Boolean"}, "resourceArn":{"shape":"ResourceArn"}, + "resourceOwnerAccount":{"shape":"String"}, "resourceType":{"shape":"ResourceType"}, "sharedVia":{"shape":"SharedViaList"}, "status":{"shape":"FindingStatus"}, @@ -366,10 +368,12 @@ "type":"structure", "required":[ "resourceArn", + "resourceOwnerAccount", "resourceType" ], "members":{ "resourceArn":{"shape":"ResourceArn"}, + "resourceOwnerAccount":{"shape":"String"}, "resourceType":{"shape":"ResourceType"} } }, @@ -381,12 +385,22 @@ "type":"string", "pattern":"^[^:]*:[^:]*:[^:]*:[^:]*:[^:]*:analyzer/.{1,255}$" }, + "AnalyzerStatus":{ + "type":"string", + "enum":[ + "ACTIVE", + "CREATING", + "DISABLED", + "FAILED" + ] + }, "AnalyzerSummary":{ "type":"structure", "required":[ "arn", "createdAt", "name", + "status", "type" ], "members":{ @@ -395,6 +409,8 @@ "lastResourceAnalyzed":{"shape":"String"}, "lastResourceAnalyzedAt":{"shape":"Timestamp"}, "name":{"shape":"Name"}, + "status":{"shape":"AnalyzerStatus"}, + "statusReason":{"shape":"StatusReason"}, "tags":{"shape":"TagsMap"}, "type":{"shape":"Type"} } @@ -556,6 +572,7 @@ "condition", "createdAt", "id", + "resourceOwnerAccount", "resourceType", "status", "updatedAt" @@ -570,6 +587,7 @@ "isPublic":{"shape":"Boolean"}, "principal":{"shape":"PrincipalMap"}, "resource":{"shape":"String"}, + "resourceOwnerAccount":{"shape":"String"}, "resourceType":{"shape":"ResourceType"}, "status":{"shape":"FindingStatus"}, "updatedAt":{"shape":"Timestamp"} @@ -602,6 +620,7 @@ "condition", "createdAt", "id", + "resourceOwnerAccount", "resourceType", "status", "updatedAt" @@ -616,6 +635,7 @@ "isPublic":{"shape":"Boolean"}, "principal":{"shape":"PrincipalMap"}, "resource":{"shape":"String"}, + "resourceOwnerAccount":{"shape":"String"}, "resourceType":{"shape":"ResourceType"}, "status":{"shape":"FindingStatus"}, "updatedAt":{"shape":"Timestamp"} @@ -882,6 +902,15 @@ "key":{"shape":"String"}, "value":{"shape":"String"} }, + "ReasonCode":{ + "type":"string", + "enum":[ + "AWS_SERVICE_ACCESS_DISABLED", + "DELEGATED_ADMINISTRATOR_DEREGISTERED", + "ORGANIZATION_DELETED", + "SERVICE_LINKED_ROLE_CREATION_FAILED" + ] + }, "ResourceArn":{ "type":"string", "pattern":"arn:[^:]*:[^:]*:[^:]*:[^:]*:.*$" @@ -955,6 +984,13 @@ "resourceArn":{"shape":"ResourceArn"} } }, + "StatusReason":{ + "type":"structure", + "required":["code"], + "members":{ + "code":{"shape":"ReasonCode"} + } + }, "String":{"type":"string"}, "TagKeys":{ "type":"list", @@ -1009,7 +1045,10 @@ "Token":{"type":"string"}, "Type":{ "type":"string", - "enum":["ACCOUNT"] + "enum":[ + "ACCOUNT", + "ORGANIZATION" + ] }, "UntagResourceRequest":{ "type":"structure", diff --git a/models/apis/accessanalyzer/2019-11-01/docs-2.json b/models/apis/accessanalyzer/2019-11-01/docs-2.json index 5ae1b719b32..14211b4f153 100644 --- a/models/apis/accessanalyzer/2019-11-01/docs-2.json +++ b/models/apis/accessanalyzer/2019-11-01/docs-2.json @@ -66,6 +66,12 @@ "UpdateFindingsRequest$analyzerArn": "The ARN of the analyzer that generated the findings to update.
" } }, + "AnalyzerStatus": { + "base": null, + "refs": { + "AnalyzerSummary$status": "The status of the analyzer. An Active
analyzer successfully monitors supported resources and generates new findings. The analyzer is Disabled
when a user action, such as removing trusted access for IAM Access Analyzer from AWS Organizations, causes the analyzer to stop generating new findings. The status is Creating
when the analyzer creation is in progress and Failed
when the analyzer creation has failed.
Contains information about the analyzer.
", "refs": { @@ -352,6 +358,12 @@ "FindingSummary$principal": "The external principal that has access to a resource within the zone of trust.
" } }, + "ReasonCode": { + "base": null, + "refs": { + "StatusReason$code": "The reason code for the current status of the analyzer.
" + } + }, "ResourceArn": { "base": null, "refs": { @@ -399,12 +411,20 @@ "refs": { } }, + "StatusReason": { + "base": "Provides more details about the current status of the analyzer. For example, if the creation for the analyzer fails, a Failed
status is displayed. For an analyzer with organization as the type, this failure can be due to an issue with creating the service-linked roles required in the member accounts of the AWS organization.
The statusReason
provides more details about the current status of the analyzer. For example, if the creation for the analyzer fails, a Failed
status is displayed. For an analyzer with organization as the type, this failure can be due to an issue with creating the service-linked roles required in the member accounts of the AWS organization.
An error message.
", + "AnalyzedResource$resourceOwnerAccount": "The AWS account ID that owns the resource.
", + "AnalyzedResourceSummary$resourceOwnerAccount": "The AWS account ID that owns the resource.
", "AnalyzerSummary$lastResourceAnalyzed": "The resource that was most recently analyzed by the analyzer.
", "ConditionKeyMap$key": null, "ConditionKeyMap$value": null, @@ -418,8 +438,10 @@ "FilterCriteriaMap$key": null, "Finding$error": "An error.
", "Finding$resource": "The resource that an external principal has access to.
", + "Finding$resourceOwnerAccount": "The AWS account ID that owns the resource.
", "FindingSummary$error": "The error that resulted in an Error finding.
", "FindingSummary$resource": "The resource that the external principal has access to.
", + "FindingSummary$resourceOwnerAccount": "The AWS account ID that owns the resource.
", "InternalServerException$message": null, "ListTagsForResourceRequest$resourceArn": "The ARN of the resource to retrieve tags from.
", "PrincipalMap$key": null, diff --git a/models/apis/acm/2015-12-08/api-2.json b/models/apis/acm/2015-12-08/api-2.json index 875ae2498cf..591d46aec7b 100644 --- a/models/apis/acm/2015-12-08/api-2.json +++ b/models/apis/acm/2015-12-08/api-2.json @@ -647,7 +647,7 @@ }, "NextToken":{ "type":"string", - "max":320, + "max":10000, "min":1, "pattern":"[\\u0009\\u000A\\u000D\\u0020-\\u00FF]*" }, @@ -666,7 +666,7 @@ }, "PrivateKeyBlob":{ "type":"blob", - "max":524288, + "max":5120, "min":1, "sensitive":true }, diff --git a/models/apis/acm/2015-12-08/docs-2.json b/models/apis/acm/2015-12-08/docs-2.json index 335b01c84f2..f65ea6b3a07 100644 --- a/models/apis/acm/2015-12-08/docs-2.json +++ b/models/apis/acm/2015-12-08/docs-2.json @@ -6,7 +6,7 @@ "DeleteCertificate": "Deletes a certificate and its associated private key. If this action succeeds, the certificate no longer appears in the list that can be displayed by calling the ListCertificates action or be retrieved by calling the GetCertificate action. The certificate will not be available for use by AWS services integrated with ACM.
You cannot delete an ACM certificate that is being used by another AWS service. To delete a certificate that is in use, the certificate association must first be removed.
Returns detailed metadata about the specified ACM certificate.
", "ExportCertificate": "Exports a private certificate issued by a private certificate authority (CA) for use anywhere. The exported file contains the certificate, the certificate chain, and the encrypted private 2048-bit RSA key associated with the public key that is embedded in the certificate. For security, you must assign a passphrase for the private key when exporting it.
For information about exporting and formatting a certificate using the ACM console or CLI, see Export a Private Certificate.
", - "GetCertificate": "Retrieves a certificate specified by an ARN and its certificate chain . The chain is an ordered list of certificates that contains the end entity certificate, intermediate certificates of subordinate CAs, and the root certificate in that order. The certificate and certificate chain are base64 encoded. If you want to decode the certificate to see the individual fields, you can use OpenSSL.
", + "GetCertificate": "Retrieves an Amazon-issued certificate and its certificate chain. The chain consists of the certificate of the issuing CA and the intermediate certificates of any other subordinate CAs. All of the certificates are base64 encoded. You can use OpenSSL to decode the certificates and inspect individual fields.
", "ImportCertificate": "Imports a certificate into AWS Certificate Manager (ACM) to use with services that are integrated with ACM. Note that integrated services allow only certificate types and keys they support to be associated with their resources. Further, their support differs depending on whether the certificate is imported into IAM or into ACM. For more information, see the documentation for each service. For more information about importing certificates into ACM, see Importing Certificates in the AWS Certificate Manager User Guide.
ACM does not provide managed renewal for certificates that you import.
Note the following guidelines when importing third party certificates:
You must enter the private key that matches the certificate you are importing.
The private key must be unencrypted. You cannot import a private key that is protected by a password or a passphrase.
If the certificate you are importing is not self-signed, you must enter its certificate chain.
If a certificate chain is included, the issuer must be the subject of one of the certificates in the chain.
The certificate, private key, and certificate chain must be PEM-encoded.
The current time must be between the Not Before
and Not After
certificate fields.
The Issuer
field must not be empty.
The OCSP authority URL, if present, must not exceed 1000 characters.
To import a new certificate, omit the CertificateArn
argument. Include this argument only when you want to replace a previously imported certificate.
When you import a certificate by using the CLI, you must specify the certificate, the certificate chain, and the private key by their file names preceded by file://
. For example, you can specify a certificate saved in the C:\\temp
folder as file://C:\\temp\\certificate_to_import.pem
. If you are making an HTTP or HTTPS Query request, include these arguments as BLOBs.
When you import a certificate by using an SDK, you must specify the certificate, the certificate chain, and the private key files in the manner required by the programming language you're using.
The cryptographic algorithm of an imported certificate must match the algorithm of the signing CA. For example, if the signing CA key type is RSA, then the certificate key type must also be RSA.
This operation returns the Amazon Resource Name (ARN) of the imported certificate.
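
The SDK equivalent of the CLI usage described above, as a minimal Go sketch assuming the v0.21.0-era request/Send client pattern; the file paths are placeholders and error handling on the reads is abbreviated.

```go
package main

import (
	"context"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/acm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := acm.New(cfg)

	// PEM-encoded certificate, its unencrypted private key, and the chain (placeholder paths).
	cert, _ := ioutil.ReadFile("certificate_to_import.pem")
	key, _ := ioutil.ReadFile("private_key.pem")
	chain, _ := ioutil.ReadFile("certificate_chain.pem")

	// Omit CertificateArn to import a new certificate; set it only to replace a
	// previously imported one.
	resp, err := svc.ImportCertificateRequest(&acm.ImportCertificateInput{
		Certificate:      cert,
		PrivateKey:       key,
		CertificateChain: chain,
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Println("imported certificate ARN:", *resp.CertificateArn)
}
```
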
", "ListCertificates": "Retrieves a list of certificate ARNs and domain names. You can request that only certificates that match a specific status be listed. You can also filter by specific attributes of the certificate. Default filtering returns only RSA_2048
certificates. For more information, see Filters.
Lists the tags that have been applied to the ACM certificate. Use the certificate's Amazon Resource Name (ARN) to specify the certificate. To add a tag to an ACM certificate, use the AddTagsToCertificate action. To delete a tag, use the RemoveTagsFromCertificate action.
", @@ -48,7 +48,7 @@ "base": null, "refs": { "ExportCertificateResponse$Certificate": "The base64 PEM-encoded certificate.
", - "GetCertificateResponse$Certificate": "String that contains the ACM certificate represented by the ARN specified at input.
" + "GetCertificateResponse$Certificate": "The ACM-issued certificate corresponding to the ARN specified as input.
" } }, "CertificateBodyBlob": { @@ -61,7 +61,7 @@ "base": null, "refs": { "ExportCertificateResponse$CertificateChain": "The base64 PEM-encoded certificate chain. This does not include the certificate that you are exporting.
", - "GetCertificateResponse$CertificateChain": "The certificate chain that contains the root certificate issued by the certificate authority (CA).
" + "GetCertificateResponse$CertificateChain": "Certificates forming the requested certificate's chain of trust. The chain consists of the certificate of the issuing CA and the intermediate certificates of any other subordinate CAs.
" } }, "CertificateChainBlob": { @@ -140,7 +140,7 @@ "base": null, "refs": { "CertificateDetail$SubjectAlternativeNames": "One or more domain names (subject alternative names) included in the certificate. This list contains the domain names that are bound to the public key that is contained in the certificate. The subject alternative names include the canonical domain name (CN) of the certificate and additional domain names that can be used to connect to the website.
", - "RequestCertificateRequest$SubjectAlternativeNames": "Additional FQDNs to be included in the Subject Alternative Name extension of the ACM certificate. For example, add the name www.example.net to a certificate for which the DomainName
field is www.example.com if users can reach your site by using either name. The maximum number of domain names that you can add to an ACM certificate is 100. However, the initial limit is 10 domain names. If you need more than 10 names, you must request a limit increase. For more information, see Limits.
The maximum length of a SAN DNS name is 253 octets. The name is made up of multiple labels separated by periods. No label can be longer than 63 octets. Consider the following examples:
(63 octets).(63 octets).(63 octets).(61 octets)
is legal because the total length is 253 octets (63+1+63+1+63+1+61) and no label exceeds 63 octets.
(64 octets).(63 octets).(63 octets).(61 octets)
is not legal because the total length exceeds 253 octets (64+1+63+1+63+1+61) and the first label exceeds 63 octets.
(63 octets).(63 octets).(63 octets).(62 octets)
is not legal because the total length of the DNS name (63+1+63+1+63+1+62) exceeds 253 octets.
Additional FQDNs to be included in the Subject Alternative Name extension of the ACM certificate. For example, add the name www.example.net to a certificate for which the DomainName
field is www.example.com if users can reach your site by using either name. The maximum number of domain names that you can add to an ACM certificate is 100. However, the initial quota is 10 domain names. If you need more than 10 names, you must request a quota increase. For more information, see Quotas.
The maximum length of a SAN DNS name is 253 octets. The name is made up of multiple labels separated by periods. No label can be longer than 63 octets. Consider the following examples:
(63 octets).(63 octets).(63 octets).(61 octets)
is legal because the total length is 253 octets (63+1+63+1+63+1+61) and no label exceeds 63 octets.
(64 octets).(63 octets).(63 octets).(61 octets)
is not legal because the total length exceeds 253 octets (64+1+63+1+63+1+61) and the first label exceeds 63 octets.
(63 octets).(63 octets).(63 octets).(62 octets)
is not legal because the total length of the DNS name (63+1+63+1+63+1+62) exceeds 253 octets.
An ACM limit has been exceeded.
", + "base": "An ACM quota has been exceeded.
", "refs": { } }, @@ -460,7 +460,7 @@ "ResourceRecord": { "base": "Contains a DNS record value that you can use to can use to validate ownership or control of a domain. This is used by the DescribeCertificate action.
", "refs": { - "DomainValidation$ResourceRecord": "Contains the CNAME record that you add to your DNS database for domain validation. For more information, see Use DNS to Validate Domain Ownership.
" + "DomainValidation$ResourceRecord": "Contains the CNAME record that you add to your DNS database for domain validation. For more information, see Use DNS to Validate Domain Ownership.
Note: The CNAME information that you need does not include the name of your domain. If you include your domain name in the DNS database CNAME record, validation fails. For example, if the name is \"_a79865eb4cd1a6ab990a45779b4e0b96.yourdomain.com\", only \"_a79865eb4cd1a6ab990a45779b4e0b96\" must be used.
" } }, "RevocationReason": { diff --git a/models/apis/apigateway/2015-07-09/docs-2.json b/models/apis/apigateway/2015-07-09/docs-2.json index a0540e06942..095aa002ad0 100644 --- a/models/apis/apigateway/2015-07-09/docs-2.json +++ b/models/apis/apigateway/2015-07-09/docs-2.json @@ -221,7 +221,7 @@ "ApiKey$enabled": "Specifies whether the API Key can be used by callers.
", "CanarySettings$useStageCache": "A Boolean flag to indicate whether the canary deployment uses the stage cache or not.
", "CreateApiKeyRequest$enabled": "Specifies whether the ApiKey can be used by callers.
", - "CreateApiKeyRequest$generateDistinctId": "Specifies whether (true
) or not (false
) the key identifier is distinct from the created API key value.
Specifies whether (true
) or not (false
) the key identifier is distinct from the created API key value. This parameter is deprecated and should not be used.
A Boolean flag to indicate whether to validate request body according to the configured model schema for the method (true
) or not (false
).
A Boolean flag to indicate whether to validate request parameters, true
, or not false
.
Whether cache clustering is enabled for the stage.
", @@ -1072,7 +1072,7 @@ "ApiKeyIds$warnings": "A list of warning messages.
", "ApiKeys$warnings": "A list of warning messages logged during the import of API keys when the failOnWarnings
option is set to true.
The list of binary media types supported by the RestApi. By default, the RestApi supports only UTF-8-encoded text payloads.
", - "CreateVpcLinkRequest$targetArns": "[Required] The ARNs of network load balancers of the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account of the API owner.
", + "CreateVpcLinkRequest$targetArns": "[Required] The ARN of the network load balancer of the VPC targeted by the VPC link. The network load balancer must be owned by the same AWS account of the API owner.
", "DocumentationPartIds$ids": "A list of the returned documentation part identifiers.
", "DocumentationPartIds$warnings": "A list of warning messages reported during import of documentation parts.
", "EndpointConfiguration$vpcEndpointIds": "A list of VpcEndpointIds of an API (RestApi) against which to create Route53 ALIASes. It is only supported for PRIVATE
endpoint type.
The warning messages reported when failonwarnings
is turned on during API import.
The list of binary media types supported by the RestApi. By default, the RestApi supports only UTF-8-encoded text payloads.
", "UntagResourceRequest$tagKeys": "[Required] The Tag keys to delete.
", - "VpcLink$targetArns": "The ARNs of network load balancers of the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account of the API owner.
" + "VpcLink$targetArns": "The ARN of the network load balancer of the VPC targeted by the VPC link. The network load balancer must be owned by the same AWS account of the API owner.
" } }, "ListOfUsage": { @@ -1500,7 +1500,7 @@ "base": null, "refs": { "AccessLogSettings$format": "A single line format of the access logs of data, as specified by selected $context variables. The format must include at least $context.requestId
.
The ARN of the CloudWatch Logs log group to receive access logs.
", + "AccessLogSettings$destinationArn": "The Amazon Resource Name (ARN) of the CloudWatch Logs log group or Kinesis Data Firehose delivery stream to receive access logs. If you specify a Kinesis Data Firehose delivery stream, the stream name must begin with amazon-apigateway-
.
The ARN of an Amazon CloudWatch role for the current Account.
", "Account$apiKeyVersion": "The version of the API keys used for the account.
", "ApiKey$id": "The identifier of the API Key.
", @@ -1544,7 +1544,7 @@ "CreateBasePathMappingRequest$domainName": "[Required] The domain name of the BasePathMapping resource to create.
", "CreateBasePathMappingRequest$basePath": "The base path name that callers of the API must provide as part of the URL after the domain name. This value must be unique for all of the mappings across a single API. Specify '(none)' if you do not want callers to specify a base path name after the domain name.
", "CreateBasePathMappingRequest$restApiId": "[Required] The string identifier of the associated RestApi.
", - "CreateBasePathMappingRequest$stage": "The name of the API's stage that you want to use for this mapping. Specify '(none)' if you do not want callers to explicitly specify the stage name after any base path name.
", + "CreateBasePathMappingRequest$stage": "The name of the API's stage that you want to use for this mapping. Specify '(none)' if you want callers to explicitly specify the stage name after any base path name.
", "CreateDeploymentRequest$restApiId": "[Required] The string identifier of the associated RestApi.
", "CreateDeploymentRequest$stageName": "The name of the Stage resource for the Deployment resource to create.
", "CreateDeploymentRequest$stageDescription": "The description of the Stage resource for the Deployment resource to create.
", @@ -1734,7 +1734,7 @@ "GetStageRequest$stageName": "[Required] The name of the Stage resource to get information about.
", "GetStagesRequest$restApiId": "[Required] The string identifier of the associated RestApi.
", "GetStagesRequest$deploymentId": "The stages' deployment identifiers.
", - "GetTagsRequest$resourceArn": "[Required] The ARN of a resource that can be tagged. The resource ARN must be URL-encoded.
", + "GetTagsRequest$resourceArn": "[Required] The ARN of a resource that can be tagged.
", "GetTagsRequest$position": "(Not currently supported) The current pagination position in the paged result set.
", "GetUsagePlanKeyRequest$usagePlanId": "[Required] The Id of the UsagePlan resource representing the usage plan containing the to-be-retrieved UsagePlanKey resource representing a plan customer.
", "GetUsagePlanKeyRequest$keyId": "[Required] The key Id of the to-be-retrieved UsagePlanKey resource representing a plan customer.
", @@ -1778,7 +1778,7 @@ "Method$authorizerId": "The identifier of an Authorizer to use on this method. The authorizationType
must be CUSTOM
.
The identifier of a RequestValidator for request validation.
", "Method$operationName": "A human-friendly operation identifier for the method. For example, you can assign the operationName
of ListPets
for the GET /pets
method in the PetStore
example.
Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. The PATCH path for this setting is /{method_setting_key}/logging/loglevel
, and the available levels are OFF
, ERROR
, and INFO
.
Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. The PATCH path for this setting is /{method_setting_key}/logging/loglevel
, and the available levels are OFF
, ERROR
, and INFO
. Choose ERROR
to write only error-level entries to CloudWatch Logs, or choose INFO
to include all ERROR
events as well as extra informational events.
The method's authorization type. Valid values are NONE
for open access, AWS_IAM
for using AWS IAM permissions, CUSTOM
for using a custom authorizer, or COGNITO_USER_POOLS
for using a Cognito user pool.
The identifier for the model resource.
", "Model$name": "The name of the model. Must be an alphanumeric string.
", @@ -1850,7 +1850,7 @@ "Stage$webAclArn": "The ARN of the WebAcl associated with the Stage.
", "StageKey$restApiId": "The string identifier of the associated RestApi.
", "StageKey$stageName": "The stage name associated with the stage key.
", - "TagResourceRequest$resourceArn": "[Required] The ARN of a resource that can be tagged. The resource ARN must be URL-encoded.
", + "TagResourceRequest$resourceArn": "[Required] The ARN of a resource that can be tagged.
", "Template$value": "The Apache Velocity Template Language (VTL) template content used for the template resource.
", "TestInvokeAuthorizerRequest$restApiId": "[Required] The string identifier of the associated RestApi.
", "TestInvokeAuthorizerRequest$authorizerId": "[Required] Specifies a test invoke authorizer request's Authorizer ID.
", @@ -1870,7 +1870,7 @@ "TooManyRequestsException$retryAfterSeconds": null, "TooManyRequestsException$message": null, "UnauthorizedException$message": null, - "UntagResourceRequest$resourceArn": "[Required] The ARN of a resource that can be tagged. The resource ARN must be URL-encoded.
", + "UntagResourceRequest$resourceArn": "[Required] The ARN of a resource that can be tagged.
", "UpdateApiKeyRequest$apiKey": "[Required] The identifier of the ApiKey resource to be updated.
", "UpdateAuthorizerRequest$restApiId": "[Required] The string identifier of the associated RestApi.
", "UpdateAuthorizerRequest$authorizerId": "[Required] The identifier of the Authorizer resource.
", @@ -2149,7 +2149,7 @@ } }, "VpcLink": { - "base": "A API Gateway VPC link for a RestApi to access resources in an Amazon Virtual Private Cloud (VPC).
To enable access to a resource in an Amazon Virtual Private Cloud through Amazon API Gateway, you, as an API developer, create a VpcLink resource targeted for one or more network load balancers of the VPC and then integrate an API method with a private integration that uses the VpcLink. The private integration has an integration type of HTTP
or HTTP_PROXY
and has a connection type of VPC_LINK
. The integration uses the connectionId
property to identify the VpcLink used.
An API Gateway VPC link for a RestApi to access resources in an Amazon Virtual Private Cloud (VPC).
To enable access to a resource in an Amazon Virtual Private Cloud through Amazon API Gateway, you, as an API developer, create a VpcLink resource targeted for one or more network load balancers of the VPC and then integrate an API method with a private integration that uses the VpcLink. The private integration has an integration type of HTTP
or HTTP_PROXY
and has a connection type of VPC_LINK
. The integration uses the connectionId
property to identify the VpcLink used.
Deletes the RouteSettings for a stage.
", "DeleteStage" : "Deletes a Stage.
", "DeleteVpcLink" : "Deletes a VPC link.
", + "ExportApi" : "Exports a definition of an API in a particular output format and specification.
", "GetApi" : "Gets an Api resource.
", "GetApiMapping" : "Gets an API mapping.
", "GetApiMappings" : "Gets API mappings.
", @@ -309,6 +310,10 @@ "DomainNameConfiguration$EndpointType" : "The endpoint type.
" } }, + "ExportedApi" : { + "base" : "Represents an exported definition of an API in a particular output format, for example, YAML. The API is serialized to the requested specification, for example, OpenAPI 3.0.
", + "refs" : { } + }, "Id" : { "base" : "The identifier.
", "refs" : { @@ -356,12 +361,12 @@ "UpdateAuthorizerInput$AuthorizerResultTtlInSeconds" : "Authorizer caching is not currently supported. Don't specify this value for authorizers.
" } }, - "IntegerWithLengthBetween50And29000" : { - "base" : "An integer with a value between [50-29000].
", + "IntegerWithLengthBetween50And30000" : { + "base" : "An integer with a value between [50-30000].
", "refs" : { - "CreateIntegrationInput$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds for WebSocket APIs. The default value is 5,000 milliseconds, or 5 seconds for HTTP APIs.
", - "Integration$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds for WebSocket APIs. The default value is 5,000 milliseconds, or 5 seconds for HTTP APIs.
", - "UpdateIntegrationInput$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds or 29 seconds for WebSocket APIs. The default value is 5,000 milliseconds, or 5 seconds for HTTP APIs.
" + "CreateIntegrationInput$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds for WebSocket APIs and between 50 and 30,000 milliseconds for HTTP APIs. The default timeout is 29 seconds for WebSocket APIs and 30 seconds for HTTP APIs.
", + "Integration$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds for WebSocket APIs and between 50 and 30,000 milliseconds for HTTP APIs. The default timeout is 29 seconds for WebSocket APIs and 30 seconds for HTTP APIs.
", + "UpdateIntegrationInput$TimeoutInMillis" : "Custom timeout between 50 and 29,000 milliseconds for WebSocket APIs and between 50 and 30,000 milliseconds for HTTP APIs. The default timeout is 29 seconds for WebSocket APIs and 30 seconds for HTTP APIs.
" } }, "IntegerWithLengthBetweenMinus1And86400" : { @@ -850,12 +855,12 @@ "Authorizer$AuthorizerUri" : "The authorizer's Uniform Resource Identifier (URI). ForREQUEST authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:
This property is part of quick create. Quick create produces an API with an integration, a default catch-all route, and a default stage which is configured to automatically deploy changes. For HTTP integrations, specify a fully qualified URL. For Lambda integrations, specify a function ARN. The type of the integration will be HTTP_PROXY or AWS_PROXY, respectively. Supported only for HTTP APIs.
", "CreateAuthorizerInput$AuthorizerUri" : "The authorizer's Uniform Resource Identifier (URI). For REQUEST authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:
For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
", - "Integration$IntegrationUri" : "For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
", + "CreateIntegrationInput$IntegrationUri" : "For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
", + "Integration$IntegrationUri" : "For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
", "JWTConfiguration$Issuer" : "The base domain of the identity provider that issues JSON Web Tokens. For example, an Amazon Cognito user pool has the following format: https://cognito-idp.
This property is part of quick create. For HTTP integrations, specify a fully qualified URL. For Lambda integrations, specify a function ARN. The type of the integration will be HTTP_PROXY or AWS_PROXY, respectively. The value provided updates the integration URI and integration type. You can update a quick-created target, but you can't remove it from an API. Supported only for HTTP APIs.
", "UpdateAuthorizerInput$AuthorizerUri" : "The authorizer's Uniform Resource Identifier (URI). For REQUEST authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:
For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
" + "UpdateIntegrationInput$IntegrationUri" : "For a Lambda integration, specify the URI of a Lambda function.
For an HTTP integration, specify a fully-qualified URL.
For an HTTP API private integration, specify the ARN of an Application Load Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. To learn more, see DiscoverInstances. For private integrations, all resources must be owned by the same AWS account.
" } }, "VpcLink" : { @@ -1033,4 +1038,4 @@ } } } -} +} \ No newline at end of file diff --git a/models/apis/appconfig/2019-10-09/api-2.json b/models/apis/appconfig/2019-10-09/api-2.json index c7e82e85fce..01fb8b4ad32 100644 --- a/models/apis/appconfig/2019-10-09/api-2.json +++ b/models/apis/appconfig/2019-10-09/api-2.json @@ -686,11 +686,36 @@ "GrowthFactor":{"shape":"Percentage"}, "FinalBakeTimeInMinutes":{"shape":"MinutesBetween0And24Hours"}, "State":{"shape":"DeploymentState"}, + "EventLog":{"shape":"DeploymentEvents"}, "PercentageComplete":{"shape":"Percentage"}, "StartedAt":{"shape":"Iso8601DateTime"}, "CompletedAt":{"shape":"Iso8601DateTime"} } }, + "DeploymentEvent":{ + "type":"structure", + "members":{ + "EventType":{"shape":"DeploymentEventType"}, + "TriggeredBy":{"shape":"TriggeredBy"}, + "Description":{"shape":"Description"}, + "OccurredAt":{"shape":"Iso8601DateTime"} + } + }, + "DeploymentEventType":{ + "type":"string", + "enum":[ + "PERCENTAGE_UPDATED", + "ROLLBACK_STARTED", + "ROLLBACK_COMPLETED", + "BAKE_TIME_STARTED", + "DEPLOYMENT_STARTED", + "DEPLOYMENT_COMPLETED" + ] + }, + "DeploymentEvents":{ + "type":"list", + "member":{"shape":"DeploymentEvent"} + }, "DeploymentList":{ "type":"list", "member":{"shape":"DeploymentSummary"} @@ -1223,6 +1248,15 @@ "type":"string", "max":256 }, + "TriggeredBy":{ + "type":"string", + "enum":[ + "USER", + "APPCONFIG", + "CLOUDWATCH_ALARM", + "INTERNAL_ERROR" + ] + }, "UntagResourceRequest":{ "type":"structure", "required":[ diff --git a/models/apis/appconfig/2019-10-09/docs-2.json b/models/apis/appconfig/2019-10-09/docs-2.json index 81912d30694..d95acbea23e 100644 --- a/models/apis/appconfig/2019-10-09/docs-2.json +++ b/models/apis/appconfig/2019-10-09/docs-2.json @@ -151,6 +151,24 @@ "refs": { } }, + "DeploymentEvent": { + "base": "An object that describes a deployment event.
", + "refs": { + "DeploymentEvents$member": null + } + }, + "DeploymentEventType": { + "base": null, + "refs": { + "DeploymentEvent$EventType": "The type of deployment event. Deployment event types include the start, stop, or completion of a deployment; a percentage update; the start or stop of a bake period; the start or completion of a rollback.
" + } + }, + "DeploymentEvents": { + "base": null, + "refs": { + "Deployment$EventLog": "A list containing all events related to a deployment. The most recent events are displayed first.
" + } + }, "DeploymentList": { "base": null, "refs": { @@ -211,6 +229,7 @@ "CreateDeploymentStrategyRequest$Description": "A description of the deployment strategy.
", "CreateEnvironmentRequest$Description": "A description of the environment.
", "Deployment$Description": "The description of the deployment.
", + "DeploymentEvent$Description": "A description of the deployment event. Descriptions include, but are not limited to, the user account or the CloudWatch alarm ARN that initiated a rollback, the percentage of hosts that received the deployment, or in the case of an internal error, a recommendation to attempt a new deployment.
", "DeploymentStrategy$Description": "The description of the deployment strategy.
", "Environment$Description": "The description of the environment.
", "StartDeploymentRequest$Description": "A description of the deployment.
", @@ -356,6 +375,7 @@ "refs": { "Deployment$StartedAt": "The time the deployment started.
", "Deployment$CompletedAt": "The time the deployment completed.
", + "DeploymentEvent$OccurredAt": "The date and time the event occurred.
", "DeploymentSummary$StartedAt": "Time the deployment started.
", "DeploymentSummary$CompletedAt": "Time the deployment completed.
" } @@ -562,6 +582,12 @@ "TagMap$value": null } }, + "TriggeredBy": { + "base": null, + "refs": { + "DeploymentEvent$TriggeredBy": "The entity that triggered the deployment event. Events can be triggered by a user, AWS AppConfig, an Amazon CloudWatch alarm, or an internal error.
" + } + }, "UntagResourceRequest": { "base": null, "refs": { diff --git a/models/apis/application-insights/2018-11-25/api-2.json b/models/apis/application-insights/2018-11-25/api-2.json index 3807a830618..33139f13515 100644 --- a/models/apis/application-insights/2018-11-25/api-2.json +++ b/models/apis/application-insights/2018-11-25/api-2.json @@ -425,6 +425,7 @@ "LifeCycle":{"shape":"LifeCycle"}, "OpsItemSNSTopicArn":{"shape":"OpsItemSNSTopicArn"}, "OpsCenterEnabled":{"shape":"OpsCenterEnabled"}, + "CWEMonitorEnabled":{"shape":"CWEMonitorEnabled"}, "Remarks":{"shape":"Remarks"} } }, @@ -439,6 +440,22 @@ }, "exception":true }, + "CWEMonitorEnabled":{"type":"boolean"}, + "CloudWatchEventDetailType":{"type":"string"}, + "CloudWatchEventId":{"type":"string"}, + "CloudWatchEventSource":{ + "type":"string", + "enum":[ + "EC2", + "CODE_DEPLOY", + "HEALTH" + ] + }, + "CodeDeployApplication":{"type":"string"}, + "CodeDeployDeploymentGroup":{"type":"string"}, + "CodeDeployDeploymentId":{"type":"string"}, + "CodeDeployInstanceGroupId":{"type":"string"}, + "CodeDeployState":{"type":"string"}, "ComponentConfiguration":{ "type":"string", "max":10000, @@ -486,6 +503,7 @@ "members":{ "ResourceGroupName":{"shape":"ResourceGroupName"}, "OpsCenterEnabled":{"shape":"OpsCenterEnabled"}, + "CWEMonitorEnabled":{"shape":"CWEMonitorEnabled"}, "OpsItemSNSTopicArn":{"shape":"OpsItemSNSTopicArn"}, "Tags":{"shape":"TagList"} } @@ -712,6 +730,7 @@ "Problem":{"shape":"Problem"} } }, + "Ec2State":{"type":"string"}, "EndTime":{"type":"timestamp"}, "ErrorMsg":{"type":"string"}, "ExceptionMessage":{"type":"string"}, @@ -733,6 +752,11 @@ "NOT_USEFUL" ] }, + "HealthEventArn":{"type":"string"}, + "HealthEventDescription":{"type":"string"}, + "HealthEventTypeCategory":{"type":"string"}, + "HealthEventTypeCode":{"type":"string"}, + "HealthService":{"type":"string"}, "Insights":{"type":"string"}, "InternalServerException":{ "type":"structure", @@ -925,7 +949,28 @@ "MetricNamespace":{"shape":"MetricNamespace"}, "MetricName":{"shape":"MetricName"}, "Unit":{"shape":"Unit"}, - "Value":{"shape":"Value"} + "Value":{"shape":"Value"}, + "CloudWatchEventId":{"shape":"CloudWatchEventId"}, + "CloudWatchEventSource":{"shape":"CloudWatchEventSource"}, + "CloudWatchEventDetailType":{"shape":"CloudWatchEventDetailType"}, + "HealthEventArn":{"shape":"HealthEventArn"}, + "HealthService":{"shape":"HealthService"}, + "HealthEventTypeCode":{"shape":"HealthEventTypeCode"}, + "HealthEventTypeCategory":{"shape":"HealthEventTypeCategory"}, + "HealthEventDescription":{"shape":"HealthEventDescription"}, + "CodeDeployDeploymentId":{"shape":"CodeDeployDeploymentId"}, + "CodeDeployDeploymentGroup":{"shape":"CodeDeployDeploymentGroup"}, + "CodeDeployState":{"shape":"CodeDeployState"}, + "CodeDeployApplication":{"shape":"CodeDeployApplication"}, + "CodeDeployInstanceGroupId":{"shape":"CodeDeployInstanceGroupId"}, + "Ec2State":{"shape":"Ec2State"}, + "XRayFaultPercent":{"shape":"XRayFaultPercent"}, + "XRayThrottlePercent":{"shape":"XRayThrottlePercent"}, + "XRayErrorPercent":{"shape":"XRayErrorPercent"}, + "XRayRequestCount":{"shape":"XRayRequestCount"}, + "XRayRequestAverageLatency":{"shape":"XRayRequestAverageLatency"}, + "XRayNodeName":{"shape":"XRayNodeName"}, + "XRayNodeType":{"shape":"XRayNodeType"} } }, "ObservationId":{ @@ -1127,6 +1172,7 @@ "members":{ "ResourceGroupName":{"shape":"ResourceGroupName"}, "OpsCenterEnabled":{"shape":"OpsCenterEnabled"}, + "CWEMonitorEnabled":{"shape":"CWEMonitorEnabled"}, 
"OpsItemSNSTopicArn":{"shape":"OpsItemSNSTopicArn"}, "RemoveSNSTopic":{"shape":"RemoveSNSTopic"} } @@ -1203,6 +1249,13 @@ }, "exception":true }, - "Value":{"type":"double"} + "Value":{"type":"double"}, + "XRayErrorPercent":{"type":"integer"}, + "XRayFaultPercent":{"type":"integer"}, + "XRayNodeName":{"type":"string"}, + "XRayNodeType":{"type":"string"}, + "XRayRequestAverageLatency":{"type":"long"}, + "XRayRequestCount":{"type":"integer"}, + "XRayThrottlePercent":{"type":"integer"} } } diff --git a/models/apis/application-insights/2018-11-25/docs-2.json b/models/apis/application-insights/2018-11-25/docs-2.json index aa834bae0bc..a9616961c57 100644 --- a/models/apis/application-insights/2018-11-25/docs-2.json +++ b/models/apis/application-insights/2018-11-25/docs-2.json @@ -79,6 +79,62 @@ "refs": { } }, + "CWEMonitorEnabled": { + "base": null, + "refs": { + "ApplicationInfo$CWEMonitorEnabled": " Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as instance terminated
, failed deployment
, and others.
Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as instance terminated
, failed deployment
, and others.
Indicates whether Application Insights can listen to CloudWatch events for the application resources, such as instance terminated
, failed deployment
, and others.
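
A minimal Go sketch of opting an application in to the new CloudWatch Events monitoring flag at creation time, assuming the v0.21.0-era request/Send client pattern; the resource group name is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/applicationinsights"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := applicationinsights.New(cfg)

	// Enable CloudWatch Events monitoring (and OpsCenter) for a resource group (placeholder name).
	resp, err := svc.CreateApplicationRequest(&applicationinsights.CreateApplicationInput{
		ResourceGroupName: aws.String("my-resource-group"),
		CWEMonitorEnabled: aws.Bool(true),
		OpsCenterEnabled:  aws.Bool(true),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", resp.ApplicationInfo)
}
```
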
The detail type of the CloudWatch Event-based observation, for example, EC2 Instance State-change Notification
.
The ID of the CloudWatch Event-based observation related to the detected problem.
" + } + }, + "CloudWatchEventSource": { + "base": null, + "refs": { + "Observation$CloudWatchEventSource": "The source of the CloudWatch Event.
" + } + }, + "CodeDeployApplication": { + "base": null, + "refs": { + "Observation$CodeDeployApplication": "The CodeDeploy application to which the deployment belongs.
" + } + }, + "CodeDeployDeploymentGroup": { + "base": null, + "refs": { + "Observation$CodeDeployDeploymentGroup": "The deployment group to which the CodeDeploy deployment belongs.
" + } + }, + "CodeDeployDeploymentId": { + "base": null, + "refs": { + "Observation$CodeDeployDeploymentId": "The deployment ID of the CodeDeploy-based observation related to the detected problem.
" + } + }, + "CodeDeployInstanceGroupId": { + "base": null, + "refs": { + "Observation$CodeDeployInstanceGroupId": "The instance group to which the CodeDeploy instance belongs.
" + } + }, + "CodeDeployState": { + "base": null, + "refs": { + "Observation$CodeDeployState": " The status of the CodeDeploy deployment, for example SUCCESS
or FAILURE
.
The state of the instance, such as STOPPING
or TERMINATING
.
The Amazon Resource Name (ARN) of the AWS Health Event-based observation.
" + } + }, + "HealthEventDescription": { + "base": null, + "refs": { + "Observation$HealthEventDescription": "The description of the AWS Health event provided by the service, such as Amazon EC2.
" + } + }, + "HealthEventTypeCategory": { + "base": null, + "refs": { + "Observation$HealthEventTypeCategory": " The category of the AWS Health event, such as issue
.
The type of the AWS Health event, for example, AWS_EC2_POWER_CONNECTIVITY_ISSUE
.
The service to which the AWS Health Event belongs, such as EC2.
" + } + }, "Insights": { "base": null, "refs": { @@ -861,6 +953,48 @@ "refs": { "Observation$Value": "The value of the source observation metric.
" } + }, + "XRayErrorPercent": { + "base": null, + "refs": { + "Observation$XRayErrorPercent": "The X-Ray request error percentage for this node.
" + } + }, + "XRayFaultPercent": { + "base": null, + "refs": { + "Observation$XRayFaultPercent": "The X-Ray request fault percentage for this node.
" + } + }, + "XRayNodeName": { + "base": null, + "refs": { + "Observation$XRayNodeName": "The name of the X-Ray node.
" + } + }, + "XRayNodeType": { + "base": null, + "refs": { + "Observation$XRayNodeType": "The type of the X-Ray node.
" + } + }, + "XRayRequestAverageLatency": { + "base": null, + "refs": { + "Observation$XRayRequestAverageLatency": "The X-Ray node request average latency for this node.
" + } + }, + "XRayRequestCount": { + "base": null, + "refs": { + "Observation$XRayRequestCount": "The X-Ray request count for this node.
" + } + }, + "XRayThrottlePercent": { + "base": null, + "refs": { + "Observation$XRayThrottlePercent": "The X-Ray request throttle percentage for this node.
" + } } } } diff --git a/models/apis/athena/2017-05-18/docs-2.json b/models/apis/athena/2017-05-18/docs-2.json index 53e2bb1dd65..857df41c1df 100644 --- a/models/apis/athena/2017-05-18/docs-2.json +++ b/models/apis/athena/2017-05-18/docs-2.json @@ -12,8 +12,8 @@ "GetQueryExecution": "Returns information about a single execution of a query if you have access to the workgroup in which the query ran. Each time a query executes, information about the query execution is saved with a unique ID.
", "GetQueryResults": "Streams the results of a single query execution specified by QueryExecutionId
from the Athena query results location in Amazon S3. For more information, see Query Results in the Amazon Athena User Guide. This request does not execute the query but returns results. Use StartQueryExecution to run a query.
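
A minimal Go sketch of the StartQueryExecution then GetQueryResults flow described here, assuming the v0.21.0-era request/Send client pattern and a primary workgroup that already has a query results location configured; the query string is a placeholder and the enum constant names follow the SDK's usual code generation.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/athena"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := athena.New(cfg)
	ctx := context.TODO()

	// Run the query; the results location comes from the workgroup settings.
	start, err := svc.StartQueryExecutionRequest(&athena.StartQueryExecutionInput{
		QueryString: aws.String("SELECT 1"), // placeholder query
		WorkGroup:   aws.String("primary"),
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	qid := start.QueryExecutionId

	// Poll until the query finishes before asking for results.
	for {
		exec, err := svc.GetQueryExecutionRequest(&athena.GetQueryExecutionInput{
			QueryExecutionId: qid,
		}).Send(ctx)
		if err != nil {
			log.Fatal(err)
		}
		state := exec.QueryExecution.Status.State
		if state == athena.QueryExecutionStateSucceeded {
			break
		}
		if state == athena.QueryExecutionStateFailed || state == athena.QueryExecutionStateCancelled {
			log.Fatalf("query finished in state %s", state)
		}
		time.Sleep(time.Second)
	}

	// GetQueryResults streams rows from the S3 results location; the caller also
	// needs s3:GetObject on that location.
	results, err := svc.GetQueryResultsRequest(&athena.GetQueryResultsInput{
		QueryExecutionId: qid,
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("returned %d rows (including the header row)\n", len(results.ResultSet.Rows))
}
```
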
To stream query results successfully, the IAM principal with permission to call GetQueryResults
also must have permissions to the Amazon S3 GetObject
action for the Athena query results location.
IAM principals with permission to the Amazon S3 GetObject
action for the query results location are able to retrieve query results from Amazon S3 even if permission to the GetQueryResults
action is denied. To restrict user or role access, ensure that Amazon S3 permissions to the Athena query location are denied.
Returns information about the workgroup with the specified name.
", - "ListNamedQueries": "Provides a list of available query IDs only for queries saved in the specified workgroup. Requires that you have access to the workgroup.
For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.
", - "ListQueryExecutions": "Provides a list of available query execution IDs for the queries in the specified workgroup. Requires you to have access to the workgroup in which the queries ran.
For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.
", + "ListNamedQueries": "Provides a list of available query IDs only for queries saved in the specified workgroup. Requires that you have access to the workgroup. If a workgroup is not specified, lists the saved queries for the primary workgroup.
For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.
", + "ListQueryExecutions": "Provides a list of available query execution IDs for the queries in the specified workgroup. If a workgroup is not specified, returns a list of query execution IDs for the primary workgroup. Requires you to have access to the workgroup in which the queries ran.
For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.
", "ListTagsForResource": "Lists the tags associated with this workgroup.
", "ListWorkGroups": "Lists available workgroups for the account.
", "StartQueryExecution": "Runs the SQL query statements contained in the Query
. Requires you to have access to the workgroup in which the query ran.
For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.
", @@ -426,13 +426,13 @@ "QueryExecutionState": { "base": null, "refs": { - "QueryExecutionStatus$State": "The state of query execution. QUEUED
state is listed but is not used by Athena and is reserved for future use. RUNNING
indicates that the query has been submitted to the service, and Athena will execute the query as soon as resources are available. SUCCEEDED
indicates that the query completed without errors. FAILED
indicates that the query experienced an error and did not complete processing. CANCELLED
indicates that a user input interrupted query execution.
The state of query execution. QUEUED
indicates that the query has been submitted to the service, and Athena will execute the query as soon as resources are available. RUNNING
indicates that the query is in execution phase. SUCCEEDED
indicates that the query completed without errors. FAILED
indicates that the query experienced an error and did not complete processing. CANCELLED
indicates that a user input interrupted query execution.
The amount of data scanned during the query execution and the amount of time that it took to execute, and the type of statement that was run.
", "refs": { - "QueryExecution$Statistics": "The amount of data scanned during the query execution and the amount of time that it took to execute, and the type of statement that was run.
" + "QueryExecution$Statistics": "Query execution statistics, such as the amount of data scanned, the amount of time that the query took to process, and the type of statement that was run.
" } }, "QueryExecutionStatus": { @@ -683,8 +683,8 @@ "CreateWorkGroupInput$Name": "The workgroup name.
", "DeleteWorkGroupInput$WorkGroup": "The unique name of the workgroup to delete.
", "GetWorkGroupInput$WorkGroup": "The name of the workgroup.
", - "ListNamedQueriesInput$WorkGroup": "The name of the workgroup from which the named queries are being returned.
", - "ListQueryExecutionsInput$WorkGroup": "The name of the workgroup from which queries are being returned.
", + "ListNamedQueriesInput$WorkGroup": "The name of the workgroup from which the named queries are returned. If a workgroup is not specified, the saved queries for the primary workgroup are returned.
", + "ListQueryExecutionsInput$WorkGroup": "The name of the workgroup from which queries are returned. If a workgroup is not specified, a list of available query execution IDs for the queries in the primary workgroup is returned.
", "NamedQuery$WorkGroup": "The name of the workgroup that contains the named query.
", "QueryExecution$WorkGroup": "The name of the workgroup in which the query ran.
", "StartQueryExecutionInput$WorkGroup": "The name of the workgroup in which the query is being started.
", diff --git a/models/apis/ce/2017-10-25/api-2.json b/models/apis/ce/2017-10-25/api-2.json index 80f25bbf48b..ba801df0150 100644 --- a/models/apis/ce/2017-10-25/api-2.json +++ b/models/apis/ce/2017-10-25/api-2.json @@ -336,6 +336,11 @@ "Rules":{"shape":"CostCategoryRulesList"} } }, + "CostCategoryMaxResults":{ + "type":"integer", + "max":100, + "min":1 + }, "CostCategoryName":{ "type":"string", "max":255, @@ -348,7 +353,8 @@ "CostCategoryArn":{"shape":"Arn"}, "Name":{"shape":"CostCategoryName"}, "EffectiveStart":{"shape":"ZonedDateTime"}, - "EffectiveEnd":{"shape":"ZonedDateTime"} + "EffectiveEnd":{"shape":"ZonedDateTime"}, + "NumberOfRules":{"shape":"NonNegativeInteger"} } }, "CostCategoryReferencesList":{ @@ -522,10 +528,12 @@ "AZ", "INSTANCE_TYPE", "LINKED_ACCOUNT", + "LINKED_ACCOUNT_NAME", "OPERATION", "PURCHASE_TYPE", "REGION", "SERVICE", + "SERVICE_CODE", "USAGE_TYPE", "USAGE_TYPE_GROUP", "RECORD_TYPE", @@ -552,7 +560,8 @@ "type":"structure", "members":{ "Key":{"shape":"Dimension"}, - "Values":{"shape":"Values"} + "Values":{"shape":"Values"}, + "MatchOptions":{"shape":"MatchOptions"} } }, "DimensionValuesWithAttributes":{ @@ -660,7 +669,12 @@ "member":{"shape":"ForecastResult"} }, "GenericBoolean":{"type":"boolean"}, - "GenericString":{"type":"string"}, + "GenericString":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "GetCostAndUsageRequest":{ "type":"structure", "required":["TimePeriod"], @@ -820,6 +834,7 @@ "required":["Service"], "members":{ "Filter":{"shape":"Expression"}, + "Configuration":{"shape":"RightsizingRecommendationConfiguration"}, "Service":{"shape":"GenericString"}, "PageSize":{"shape":"NonNegativeInteger"}, "NextPageToken":{"shape":"NextPageToken"} @@ -831,7 +846,8 @@ "Metadata":{"shape":"RightsizingRecommendationMetadata"}, "Summary":{"shape":"RightsizingRecommendationSummary"}, "RightsizingRecommendations":{"shape":"RightsizingRecommendationList"}, - "NextPageToken":{"shape":"NextPageToken"} + "NextPageToken":{"shape":"NextPageToken"}, + "Configuration":{"shape":"RightsizingRecommendationConfiguration"} } }, "GetSavingsPlansCoverageRequest":{ @@ -870,9 +886,11 @@ "SavingsPlansType":{"shape":"SupportedSavingsPlansType"}, "TermInYears":{"shape":"TermInYears"}, "PaymentOption":{"shape":"PaymentOption"}, + "AccountScope":{"shape":"AccountScope"}, "NextPageToken":{"shape":"NextPageToken"}, "PageSize":{"shape":"NonNegativeInteger"}, - "LookbackPeriodInDays":{"shape":"LookbackPeriodInDays"} + "LookbackPeriodInDays":{"shape":"LookbackPeriodInDays"}, + "Filter":{"shape":"Expression"} } }, "GetSavingsPlansPurchaseRecommendationResponse":{ @@ -994,7 +1012,12 @@ "Key":{"shape":"GroupDefinitionKey"} } }, - "GroupDefinitionKey":{"type":"string"}, + "GroupDefinitionKey":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "GroupDefinitionType":{ "type":"string", "enum":[ @@ -1044,7 +1067,11 @@ "type":"structure", "members":{ "EffectiveOn":{"shape":"ZonedDateTime"}, - "NextToken":{"shape":"NextPageToken"} + "NextToken":{"shape":"NextPageToken"}, + "MaxResults":{ + "shape":"CostCategoryMaxResults", + "box":true + } } }, "ListCostCategoryDefinitionsResponse":{ @@ -1062,6 +1089,21 @@ "SIXTY_DAYS" ] }, + "MatchOption":{ + "type":"string", + "enum":[ + "EQUALS", + "STARTS_WITH", + "ENDS_WITH", + "CONTAINS", + "CASE_SENSITIVE", + "CASE_INSENSITIVE" + ] + }, + "MatchOptions":{ + "type":"list", + "member":{"shape":"MatchOption"} + }, "MaxResults":{ "type":"integer", "min":1 @@ -1079,7 +1121,12 @@ ] }, "MetricAmount":{"type":"string"}, - 
"MetricName":{"type":"string"}, + "MetricName":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "MetricNames":{ "type":"list", "member":{"shape":"MetricName"} @@ -1104,7 +1151,12 @@ } }, "NetRISavings":{"type":"string"}, - "NextPageToken":{"type":"string"}, + "NextPageToken":{ + "type":"string", + "max":8192, + "min":0, + "pattern":"[\\S\\s]*" + }, "NonNegativeInteger":{ "type":"integer", "min":0 @@ -1153,6 +1205,13 @@ "SizeFlexEligible":{"shape":"GenericBoolean"} } }, + "RecommendationTarget":{ + "type":"string", + "enum":[ + "SAME_INSTANCE_FAMILY", + "CROSS_INSTANCE_FAMILY" + ] + }, "RedshiftInstanceDetails":{ "type":"structure", "members":{ @@ -1318,6 +1377,17 @@ "TerminateRecommendationDetail":{"shape":"TerminateRecommendationDetail"} } }, + "RightsizingRecommendationConfiguration":{ + "type":"structure", + "required":[ + "RecommendationTarget", + "BenefitsConsidered" + ], + "members":{ + "RecommendationTarget":{"shape":"RecommendationTarget"}, + "BenefitsConsidered":{"shape":"GenericBoolean"} + } + }, "RightsizingRecommendationList":{ "type":"list", "member":{"shape":"RightsizingRecommendation"} @@ -1387,6 +1457,7 @@ "SavingsPlansPurchaseRecommendation":{ "type":"structure", "members":{ + "AccountScope":{"shape":"AccountScope"}, "SavingsPlansType":{"shape":"SupportedSavingsPlansType"}, "TermInYears":{"shape":"TermInYears"}, "PaymentOption":{"shape":"PaymentOption"}, @@ -1499,7 +1570,12 @@ "type":"list", "member":{"shape":"SavingsPlansUtilizationByTime"} }, - "SearchString":{"type":"string"}, + "SearchString":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "ServiceQuotaExceededException":{ "type":"structure", "members":{ @@ -1520,7 +1596,12 @@ "EC2_INSTANCE_SP" ] }, - "TagKey":{"type":"string"}, + "TagKey":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "TagList":{ "type":"list", "member":{"shape":"Entity"} @@ -1529,7 +1610,8 @@ "type":"structure", "members":{ "Key":{"shape":"TagKey"}, - "Values":{"shape":"Values"} + "Values":{"shape":"Values"}, + "MatchOptions":{"shape":"MatchOptions"} } }, "TagValuesList":{ @@ -1614,13 +1696,20 @@ "type":"list", "member":{"shape":"UtilizationByTime"} }, - "Value":{"type":"string"}, + "Value":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[\\S\\s]*" + }, "Values":{ "type":"list", "member":{"shape":"Value"} }, "YearMonthDay":{ "type":"string", + "max":40, + "min":0, "pattern":"(\\d{4}-\\d{2}-\\d{2})(T\\d{2}:\\d{2}:\\d{2}Z)?" }, "ZonedDateTime":{ diff --git a/models/apis/ce/2017-10-25/docs-2.json b/models/apis/ce/2017-10-25/docs-2.json index 5c3a0724755..ed6ecc1bfae 100644 --- a/models/apis/ce/2017-10-25/docs-2.json +++ b/models/apis/ce/2017-10-25/docs-2.json @@ -1,33 +1,35 @@ { "version": "2.0", - "service": "The Cost Explorer API enables you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for Amazon DynamoDB database tables in your production environment.
Service Endpoint
The Cost Explorer API provides the following endpoint:
https://ce.us-east-1.amazonaws.com
For information about costs associated with the Cost Explorer API, see AWS Cost Management Pricing.
", + "service": "The Cost Explorer API enables you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for Amazon DynamoDB database tables in your production environment.
Service Endpoint
The Cost Explorer API provides the following endpoint:
https://ce.us-east-1.amazonaws.com
For information about costs associated with the Cost Explorer API, see AWS Cost Management Pricing.
", "operations": { - "CreateCostCategoryDefinition": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Creates a new Cost Category with the requested name and rules.
", - "DeleteCostCategoryDefinition": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Deletes a Cost Category. Expenses from this month going forward will no longer be categorized with this Cost Category.
", - "DescribeCostCategoryDefinition": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN, rules, definition, and effective dates of a Cost Category that's defined in the account.
You have the option to use EffectiveOn
to return a Cost Category that is active on a specific date. If there is no EffectiveOn
specified, you’ll see a Cost Category that is effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Retrieves cost and usage metrics for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts.
Retrieves cost and usage metrics with resources for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts. This API is currently available for the Amazon Elastic Compute Cloud – Compute service only.
This is an opt-in only feature. You can enable this feature from the Cost Explorer Settings page. For information on how to access the Settings page, see Controlling Access for Cost Explorer in the AWS Billing and Cost Management User Guide.
Creates a new Cost Category with the requested name and rules.
", + "DeleteCostCategoryDefinition": "Deletes a Cost Category. Expenses from this month going forward will no longer be categorized with this Cost Category.
", + "DescribeCostCategoryDefinition": "Returns the name, ARN, rules, definition, and effective dates of a Cost Category that's defined in the account.
You have the option to use EffectiveOn
to return a Cost Category that is active on a specific date. If there is no EffectiveOn
specified, you’ll see a Cost Category that is effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Retrieves cost and usage metrics for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts.
Retrieves cost and usage metrics with resources for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts. This API is currently available for the Amazon Elastic Compute Cloud – Compute service only.
This is an opt-in only feature. You can enable this feature from the Cost Explorer Settings page. For information on how to access the Settings page, see Controlling Access for Cost Explorer in the AWS Billing and Cost Management User Guide.
Retrieves a forecast for how much Amazon Web Services predicts that you will spend over the forecast time period that you select, based on your past costs.
", "GetDimensionValues": "Retrieves all available filter values for a specified filter over a period of time. You can search the dimension values for an arbitrary string.
", - "GetReservationCoverage": "Retrieves the reservation coverage for your account. This enables you to see how much of your Amazon Elastic Compute Cloud, Amazon ElastiCache, Amazon Relational Database Service, or Amazon Redshift usage is covered by a reservation. An organization's master account can see the coverage of the associated member accounts. For any time period, you can filter data about reservation usage by the following dimensions:
AZ
CACHE_ENGINE
DATABASE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
TAG
TENANCY
To determine valid values for a dimension, use the GetDimensionValues
operation.
Retrieves the reservation coverage for your account. This enables you to see how much of your Amazon Elastic Compute Cloud, Amazon ElastiCache, Amazon Relational Database Service, or Amazon Redshift usage is covered by a reservation. An organization's master account can see the coverage of the associated member accounts. This supports dimensions, Cost Categories, and nested expressions. For any time period, you can filter data about reservation usage by the following dimensions:
AZ
CACHE_ENGINE
DATABASE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
TAG
TENANCY
To determine valid values for a dimension, use the GetDimensionValues
operation.
Gets recommendations for which reservations to purchase. These recommendations could help you reduce your costs. Reservations provide a discounted hourly rate (up to 75%) compared to On-Demand pricing.
AWS generates your recommendations by identifying your On-Demand usage during a specific time period and collecting your usage into categories that are eligible for a reservation. After AWS has these categories, it simulates every combination of reservations in each category of usage to identify the best number of each type of RI to purchase to maximize your estimated savings.
For example, AWS automatically aggregates your Amazon EC2 Linux, shared tenancy, and c4 family usage in the US West (Oregon) Region and recommends that you buy size-flexible regional reservations to apply to the c4 family usage. AWS recommends the smallest size instance in an instance family. This makes it easier to purchase a size-flexible RI. AWS also shows the equal number of normalized units so that you can purchase any instance size that you want. For this example, your RI recommendation would be for c4.large
because that is the smallest size instance in the c4 instance family.
Retrieves the reservation utilization for your account. Master accounts in an organization have access to member accounts. You can filter data by dimensions in a time period. You can use GetDimensionValues
to determine the possible dimension values. Currently, you can group only by SUBSCRIPTION_ID
.
Creates recommendations that helps you save cost by identifying idle and underutilized Amazon EC2 instances.
Recommendations are generated to either downsize or terminate instances, along with providing savings detail and metrics. For details on calculation and function, see Optimizing Your Cost with Rightsizing Recommendations.
", - "GetSavingsPlansCoverage": "Retrieves the Savings Plans covered for your account. This enables you to see how much of your cost is covered by a Savings Plan. An organization’s master account can see the coverage of the associated member accounts. For any time period, you can filter data for Savings Plans usage with the following dimensions:
LINKED_ACCOUNT
REGION
SERVICE
INSTANCE_FAMILY
To determine valid values for a dimension, use the GetDimensionValues
operation.
Retrieves your request parameters, Savings Plan Recommendations Summary and Details.
", + "GetSavingsPlansCoverage": "Retrieves the Savings Plans covered for your account. This enables you to see how much of your cost is covered by a Savings Plan. An organization’s master account can see the coverage of the associated member accounts. This supports dimensions, Cost Categories, and nested expressions. For any time period, you can filter data for Savings Plans usage with the following dimensions:
LINKED_ACCOUNT
REGION
SERVICE
INSTANCE_FAMILY
To determine valid values for a dimension, use the GetDimensionValues
operation.
Retrieves your request parameters, Savings Plan Recommendations Summary and Details.
", "GetSavingsPlansUtilization": "Retrieves the Savings Plans utilization for your account across date ranges with daily or monthly granularity. Master accounts in an organization have access to member accounts. You can use GetDimensionValues
in SAVINGS_PLANS
to determine the possible dimension values.
You cannot group by any dimension values for GetSavingsPlansUtilization
.
Retrieves attribute data along with aggregate utilization and savings data for a given time period. This doesn't support granular or grouped data (daily/monthly) in response. You can't retrieve data by dates in a single response similar to GetSavingsPlanUtilization
, but you have the option to make multiple calls to GetSavingsPlanUtilizationDetails
by providing individual dates. You can use GetDimensionValues
in SAVINGS_PLANS
to determine the possible dimension values.
GetSavingsPlanUtilizationDetails
internally groups data by SavingsPlansArn
.
Queries for available tag keys and tag values for a specified period. You can search the tag values for an arbitrary string.
", "GetUsageForecast": "Retrieves a forecast for how much Amazon Web Services predicts that you will use over the forecast time period that you select, based on your past usage.
", - "ListCostCategoryDefinitions": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN and effective dates of all Cost Categories defined in the account. You have the option to use EffectiveOn
to return a list of Cost Categories that were active on a specific date. If there is no EffectiveOn
specified, you’ll see Cost Categories that are effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Updates an existing Cost Category. Changes made to the Cost Category rules will be used to categorize the current month’s expenses and future expenses. This won’t change categorization for the previous months.
" + "ListCostCategoryDefinitions": "Returns the name, ARN, NumberOfRules
and effective dates of all Cost Categories defined in the account. You have the option to use EffectiveOn
to return a list of Cost Categories that were active on a specific date. If there is no EffectiveOn
specified, you’ll see Cost Categories that are effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response. ListCostCategoryDefinitions
supports pagination. The request can have a MaxResults
range up to 100.
Updates an existing Cost Category. Changes made to the Cost Category rules will be used to categorize the current month’s expenses and future expenses. This won’t change categorization for the previous months.
" }, "shapes": { "AccountScope": { "base": null, "refs": { - "GetReservationPurchaseRecommendationRequest$AccountScope": "The account scope that you want recommendations for. PAYER
means that AWS includes the master account and any member accounts when it calculates its recommendations. LINKED
means that AWS includes only member accounts when it calculates its recommendations.
Valid values are PAYER
and LINKED
.
The account scope that AWS recommends that you purchase this instance for. For example, you can purchase this reservation for an entire organization in AWS Organizations.
" + "GetReservationPurchaseRecommendationRequest$AccountScope": "The account scope that you want your recommendations for. Amazon Web Services calculates recommendations including the payer account and linked accounts if the value is set to PAYER
. If the value is LINKED
, recommendations are calculated for individual linked accounts only.
The account scope that you want your recommendations for. Amazon Web Services calculates recommendations including the payer account and linked accounts if the value is set to PAYER
. If the value is LINKED
, recommendations are calculated for individual linked accounts only.
The account scope that AWS recommends that you purchase this instance for. For example, you can purchase this reservation for an entire organization in AWS Organizations.
", + "SavingsPlansPurchaseRecommendation$AccountScope": "The account scope that you want your recommendations for. Amazon Web Services calculates recommendations including the payer account and linked accounts if the value is set to PAYER
. If the value is LINKED
, recommendations are calculated for individual linked accounts only.
The unique identifier for your Cost Category.
", - "CostCategoryReference$CostCategoryArn": "The unique identifier for your Cost Category Reference.
", + "CostCategoryReference$CostCategoryArn": "The unique identifier for your Cost Category.
", "CreateCostCategoryDefinitionResponse$CostCategoryArn": "The unique identifier for your newly created Cost Category.
", "DeleteCostCategoryDefinitionRequest$CostCategoryArn": "The unique identifier for your Cost Category.
", "DeleteCostCategoryDefinitionResponse$CostCategoryArn": "The unique identifier for your Cost Category.
", @@ -89,11 +91,17 @@ } }, "CostCategory": { - "base": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The structure of Cost Categories. This includes detailed metadata and the set of rules for the CostCategory
object.
The structure of Cost Categories. This includes detailed metadata and the set of rules for the CostCategory
object.
The number of entries a paginated response contains.
" + } + }, "CostCategoryName": { "base": "The unique name of the Cost Category.
", "refs": { @@ -104,7 +112,7 @@ } }, "CostCategoryReference": { - "base": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
A reference to a Cost Category containing only enough information to identify the Cost Category.
You can use this information to retrieve the full Cost Category information using DescribeCostCategory
.
A reference to a Cost Category containing only enough information to identify the Cost Category.
You can use this information to retrieve the full Cost Category information using DescribeCostCategory
.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", + "base": "Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", "refs": { "CostCategoryRulesList$member": null } @@ -133,8 +141,8 @@ "base": null, "refs": { "CostCategory$Rules": "Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", - "CreateCostCategoryDefinitionRequest$Rules": " CreateCostCategoryDefinition
supports dimensions, Tags, and nested expressions. Currently the only dimensions supported is LINKED_ACCOUNT
.
Root level OR
is not supported. We recommend you create a separate rule instead.
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", - "UpdateCostCategoryDefinitionRequest$Rules": " UpdateCostCategoryDefinition
supports dimensions, Tags, and nested expressions. Currently the only dimensions supported is LINKED_ACCOUNT
.
Root level OR
is not supported. We recommend you create a separate rule instead.
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
" + "CreateCostCategoryDefinitionRequest$Rules": "The Cost Category rules used to categorize costs. For more information, see CostCategoryRule.
", + "UpdateCostCategoryDefinitionRequest$Rules": "The Expression
object used to categorize costs. For more information, see CostCategoryRule.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The values that are available for Cost Categories.
", + "base": "The Cost Categories values used for filtering the costs.
", "refs": { - "Expression$CostCategories": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The specific CostCategory
used for Expression
.
The filter based on CostCategory
values.
How much it cost to run an instance.
", + "base": "How much it costs to run an instance.
", "refs": { "Coverage$CoverageCost": "The amount of cost that the reservation covered.
" } @@ -352,18 +360,19 @@ "Expression": { "base": "Use Expression
to filter by cost or by usage. There are two patterns:
Simple dimension values - You can set the dimension name and values for the filters that you plan to use. For example, you can filter for REGION==us-east-1 OR REGION==us-west-1
. The Expression
for that looks like this:
{ \"Dimensions\": { \"Key\": \"REGION\", \"Values\": [ \"us-east-1\", “us-west-1” ] } }
The list of dimension values are OR'd together to retrieve cost or usage data. You can create Expression
and DimensionValues
objects using either with*
methods or set*
methods in multiple lines.
Compound dimension values with logical operations - You can use multiple Expression
types and the logical operators AND/OR/NOT
to create a list of one or more Expression
objects. This allows you to filter on more advanced options. For example, you can filter on ((REGION == us-east-1 OR REGION == us-west-1) OR (TAG.Type == Type1)) AND (USAGE_TYPE != DataTransfer)
. The Expression
for that looks like this:
{ \"And\": [ {\"Or\": [ {\"Dimensions\": { \"Key\": \"REGION\", \"Values\": [ \"us-east-1\", \"us-west-1\" ] }}, {\"Tags\": { \"Key\": \"TagName\", \"Values\": [\"Value1\"] } } ]}, {\"Not\": {\"Dimensions\": { \"Key\": \"USAGE_TYPE\", \"Values\": [\"DataTransfer\"] }}} ] }
Because each Expression
can have only one operator, the service returns an error if more than one is specified. The following example shows an Expression
object that creates an error.
{ \"And\": [ ... ], \"DimensionValues\": { \"Dimension\": \"USAGE_TYPE\", \"Values\": [ \"DataTransfer\" ] } }
For GetRightsizingRecommendation
action, a combination of OR and NOT is not supported. OR is not supported between different dimensions, or dimensions and tags. NOT operators aren't supported. Dimensions are also limited to LINKED_ACCOUNT
, REGION
, or RIGHTSIZING_TYPE
.
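Rendered without the JSON string escaping (nothing new is introduced here; this is the compound example above restated for readability):

    {
      "And": [
        { "Or": [
          { "Dimensions": { "Key": "REGION", "Values": [ "us-east-1", "us-west-1" ] } },
          { "Tags": { "Key": "TagName", "Values": [ "Value1" ] } }
        ] },
        { "Not": { "Dimensions": { "Key": "USAGE_TYPE", "Values": [ "DataTransfer" ] } } }
      ]
    }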
An Expression object used to categorize costs. This supports dimensions, Tags, and nested expressions. Currently the only dimensions supported is LINKED_ACCOUNT
.
Root level OR
is not supported. We recommend you create a separate rule instead.
An Expression object used to categorize costs. This supports dimensions, Tags, and nested expressions. Currently the only dimensions supported are LINKED_ACCOUNT
, SERVICE_CODE
, RECORD_TYPE
, and LINKED_ACCOUNT_NAME
.
Root level OR
is not supported. We recommend that you create a separate rule instead.
RECORD_TYPE
is a dimension used for Cost Explorer APIs, and is also supported for Cost Category expressions. This dimension uses different terms, depending on whether you're using the console or API/JSON editor. For a detailed comparison, see Term Comparisons in the AWS Billing and Cost Management User Guide.
Return results that don't match a Dimension
object.
Filters AWS costs by different dimensions. For example, you can specify SERVICE
and LINKED_ACCOUNT
and get the costs that are associated with that account's usage of that service. You can nest Expression
objects to define any combination of dimension filters. For more information, see Expression.
Filters Amazon Web Services costs by different dimensions. For example, you can specify SERVICE
and LINKED_ACCOUNT
and get the costs that are associated with that account's usage of that service. You can nest Expression
objects to define any combination of dimension filters. For more information, see Expression.
The GetCostAndUsageWithResources
operation requires that you either group by or filter by a ResourceId
.
Filters AWS costs by different dimensions. For example, you can specify SERVICE
and LINKED_ACCOUNT
and get the costs that are associated with that account's usage of that service. You can nest Expression
objects to define any combination of dimension filters. For more information, see Expression.
Filters Amazon Web Services costs by different dimensions. For example, you can specify SERVICE
and LINKED_ACCOUNT
and get the costs that are associated with that account's usage of that service. You can nest Expression
objects to define any combination of dimension filters. For more information, see Expression.
The GetCostAndUsageWithResources
operation requires that you either group by or filter by a ResourceId
.
The filters that you want to use to filter your forecast. Cost Explorer API supports all of the Cost Explorer filters.
", - "GetReservationCoverageRequest$Filter": "Filters utilization data by dimensions. You can filter by the following dimensions:
AZ
CACHE_ENGINE
DATABASE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
TAG
TENANCY
GetReservationCoverage
uses the same Expression object as the other operations, but only AND
is supported among each dimension. You can nest only one level deep. If there are multiple values for a dimension, they are OR'd together.
If you don't provide a SERVICE
filter, Cost Explorer defaults to EC2.
Filters utilization data by dimensions. You can filter by the following dimensions:
AZ
CACHE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
SCOPE
TENANCY
GetReservationUtilization
uses the same Expression object as the other operations, but only AND
is supported among each dimension, and nesting is supported up to only one level deep. If there are multiple values for a dimension, they are OR'd together.
Filters utilization data by dimensions. You can filter by the following dimensions:
AZ
CACHE_ENGINE
DATABASE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
TAG
TENANCY
GetReservationCoverage
uses the same Expression object as the other operations, but only AND
is supported among each dimension. You can nest only one level deep. If there are multiple values for a dimension, they are OR'd together.
If you don't provide a SERVICE
filter, Cost Explorer defaults to EC2.
Cost category is also supported.
", + "GetReservationUtilizationRequest$Filter": "Filters utilization data by dimensions. You can filter by the following dimensions:
AZ
CACHE_ENGINE
DEPLOYMENT_OPTION
INSTANCE_TYPE
LINKED_ACCOUNT
OPERATING_SYSTEM
PLATFORM
REGION
SERVICE
SCOPE
TENANCY
GetReservationUtilization
uses the same Expression object as the other operations, but only AND
is supported among each dimension, and nesting is supported up to only one level deep. If there are multiple values for a dimension, they are OR'd together.
Filters Savings Plans coverage data by dimensions. You can filter data for Savings Plans usage with the following dimensions:
LINKED_ACCOUNT
REGION
SERVICE
INSTANCE_FAMILY
GetSavingsPlansCoverage
uses the same Expression object as the other operations, but only AND
is supported among each dimension. If there are multiple values for a dimension, they are OR'd together.
Filters Savings Plans utilization coverage data for active Savings Plans dimensions. You can filter data with the following dimensions:
LINKED_ACCOUNT
SAVINGS_PLAN_ARN
REGION
PAYMENT_OPTION
INSTANCE_TYPE_FAMILY
GetSavingsPlansUtilizationDetails
uses the same Expression object as the other operations, but only AND
is supported among each dimension.
Filters Savings Plans utilization coverage data for active Savings Plans dimensions. You can filter data with the following dimensions:
LINKED_ACCOUNT
SAVINGS_PLAN_ARN
SAVINGS_PLANS_TYPE
REGION
PAYMENT_OPTION
INSTANCE_TYPE_FAMILY
GetSavingsPlansUtilization
uses the same Expression object as the other operations, but only AND
is supported among each dimension.
Filters Savings Plans coverage data by dimensions. You can filter data for Savings Plans usage with the following dimensions:
LINKED_ACCOUNT
REGION
SERVICE
INSTANCE_FAMILY
GetSavingsPlansCoverage
uses the same Expression object as the other operations, but only AND
is supported among each dimension. If there are multiple values for a dimension, they are OR'd together.
Cost category is also supported.
", + "GetSavingsPlansPurchaseRecommendationRequest$Filter": "You can filter your recommendations by Account ID with the LINKED_ACCOUNT
dimension. To filter your recommendations by Account ID, specify Key
as LINKED_ACCOUNT
and Value
as the comma-separated Account ID(s) for which you want to see Savings Plans purchase recommendations.
For GetSavingsPlansPurchaseRecommendation, the Filter
does not include CostCategories
or Tags
. It only includes Dimensions
. With Dimensions
, Key
must be LINKED_ACCOUNT
and Value
can be a single Account ID or multiple comma-separated Account IDs for which you want to see Savings Plans Purchase Recommendations. AND
and OR
operators are not supported.
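A minimal sketch of such a filter (the account IDs below are placeholders):

    "Filter": {
      "Dimensions": {
        "Key": "LINKED_ACCOUNT",
        "Values": [ "111122223333", "444455556666" ]
      }
    }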
Filters Savings Plans utilization coverage data for active Savings Plans dimensions. You can filter data with the following dimensions:
LINKED_ACCOUNT
SAVINGS_PLAN_ARN
REGION
PAYMENT_OPTION
INSTANCE_TYPE_FAMILY
GetSavingsPlansUtilizationDetails
uses the same Expression object as the other operations, but only AND
is supported among each dimension.
Filters Savings Plans utilization coverage data for active Savings Plans dimensions. You can filter data with the following dimensions:
LINKED_ACCOUNT
SAVINGS_PLAN_ARN
SAVINGS_PLANS_TYPE
REGION
PAYMENT_OPTION
INSTANCE_TYPE_FAMILY
GetSavingsPlansUtilization
uses the same Expression object as the other operations, but only AND
is supported among each dimension.
The filters that you want to use to filter your forecast. Cost Explorer API supports all of the Cost Explorer filters.
" } }, @@ -400,6 +409,7 @@ "RDSInstanceDetails$SizeFlexEligible": "Whether the recommended reservation is size flexible.
", "RedshiftInstanceDetails$CurrentGeneration": "Whether the recommendation is for a current-generation instance.
", "RedshiftInstanceDetails$SizeFlexEligible": "Whether the recommended reservation is size flexible.
", + "RightsizingRecommendationConfiguration$BenefitsConsidered": " The option to consider RI or Savings Plans discount benefits in your savings calculation. The default value is TRUE
.
Indicates whether or not this recommendation is the defaulted Amazon Web Services recommendation.
" } }, @@ -473,7 +483,7 @@ "ReservationPurchaseRecommendationDetail$UpfrontCost": "How much purchasing this instance costs you upfront.
", "ReservationPurchaseRecommendationDetail$RecurringStandardMonthlyCost": "How much purchasing this instance costs you on a monthly basis.
", "ReservationPurchaseRecommendationMetadata$RecommendationId": "The ID for this specific recommendation.
", - "ReservationPurchaseRecommendationMetadata$GenerationTimestamp": "The time stamp for when AWS made this recommendation.
", + "ReservationPurchaseRecommendationMetadata$GenerationTimestamp": "The timestamp for when AWS made this recommendation.
", "ReservationPurchaseRecommendationSummary$TotalEstimatedMonthlySavingsAmount": "The total amount that AWS estimates that this recommendation could save you in a month.
", "ReservationPurchaseRecommendationSummary$TotalEstimatedMonthlySavingsPercentage": "The total amount that AWS estimates that this recommendation could save you in a month, as a percentage of your costs.
", "ReservationPurchaseRecommendationSummary$CurrencyCode": "The currency code used for this recommendation.
", @@ -490,7 +500,7 @@ "SavingsPlansCoverageData$SpendCoveredBySavingsPlans": "The amount of your Amazon Web Services usage that is covered by a Savings Plans.
", "SavingsPlansCoverageData$OnDemandCost": "The cost of your Amazon Web Services usage at the public On-Demand rate.
", "SavingsPlansCoverageData$TotalCost": "The total cost of your Amazon Web Services usage, regardless of your purchase option.
", - "SavingsPlansCoverageData$CoveragePercentage": "The percentage of your existing Savings Planscovered usage, divided by all of your eligible Savings Plans usage in an account(or set of accounts).
", + "SavingsPlansCoverageData$CoveragePercentage": "The percentage of your existing Savings Plans covered usage, divided by all of your eligible Savings Plans usage in an account(or set of accounts).
", "SavingsPlansDetails$Region": "A collection of AWS resources in a geographic area. Each AWS Region is isolated and independent of the other Regions.
", "SavingsPlansDetails$InstanceFamily": "A group of instance types that Savings Plans applies to.
", "SavingsPlansDetails$OfferingId": "The unique ID used to distinguish Savings Plans from one another.
", @@ -778,6 +788,19 @@ "SavingsPlansPurchaseRecommendation$LookbackPeriodInDays": "The lookback period in days, used to generate the recommendation.
" } }, + "MatchOption": { + "base": null, + "refs": { + "MatchOptions$member": null + } + }, + "MatchOptions": { + "base": null, + "refs": { + "DimensionValues$MatchOptions": "The match options that you can use to filter your results. MatchOptions
is only applicable for actions related to Cost Category. The default values for MatchOptions
is EQUALS
and CASE_SENSITIVE
.
The match options that you can use to filter your results. MatchOptions
is only applicable for actions related to Cost Category. The default values for MatchOptions
is EQUALS
and CASE_SENSITIVE
.
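As a rough sketch only (the tag key "Environment" and value "prod" are made-up placeholders), a Cost Category-related tag filter that sets the documented default match options explicitly might look like:

    "Tags": {
      "Key": "Environment",
      "Values": [ "prod" ],
      "MatchOptions": [ "EQUALS", "CASE_SENSITIVE" ]
    }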
Which metric Cost Explorer uses to create your forecast. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values for a GetCostForecast
call are the following:
AMORTIZED_COST
BLENDED_COST
NET_AMORTIZED_COST
NET_UNBLENDED_COST
UNBLENDED_COST
Which metric Cost Explorer uses to create your forecast. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values for a GetCostForecast
call are the following:
AMORTIZED_COST
BLENDED_COST
NET_AMORTIZED_COST
NET_UNBLENDED_COST
UNBLENDED_COST
Which metric Cost Explorer uses to create your forecast.
Valid values for a GetUsageForecast
call are the following:
USAGE_QUANTITY
NORMALIZED_USAGE_AMOUNT
Which metrics are returned in the query. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values are AmortizedCost
, BlendedCost
, NetAmortizedCost
, NetUnblendedCost
, NormalizedUsageAmount
, UnblendedCost
, and UsageQuantity
.
If you return the UsageQuantity
metric, the service aggregates all usage numbers without taking into account the units. For example, if you aggregate usageQuantity
across all of Amazon EC2, the results aren't meaningful because Amazon EC2 compute hours and data transfer are measured in different units (for example, hours vs. GB). To get more meaningful UsageQuantity
metrics, filter by UsageType
or UsageTypeGroups
.
Metrics
is required for GetCostAndUsage
requests.
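For example, a request that needs both cost and usage data could ask for (a sketch; any combination of the valid values listed above can be requested):

    "Metrics": [ "UnblendedCost", "UsageQuantity" ]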
Which metrics are returned in the query. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values are AmortizedCost
, BlendedCost
, NetAmortizedCost
, NetUnblendedCost
, NormalizedUsageAmount
, UnblendedCost
, and UsageQuantity
.
If you return the UsageQuantity
metric, the service aggregates all usage numbers without taking the units into account. For example, if you aggregate usageQuantity
across all of Amazon EC2, the results aren't meaningful because Amazon EC2 compute hours and data transfer are measured in different units (for example, hours vs. GB). To get more meaningful UsageQuantity
metrics, filter by UsageType
or UsageTypeGroups
.
Metrics
is required for GetCostAndUsageWithResources
requests.
Which metrics are returned in the query. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values are AmortizedCost
, BlendedCost
, NetAmortizedCost
, NetUnblendedCost
, NormalizedUsageAmount
, UnblendedCost
, and UsageQuantity
.
If you return the UsageQuantity
metric, the service aggregates all usage numbers without taking into account the units. For example, if you aggregate usageQuantity
across all of Amazon EC2, the results aren't meaningful because Amazon EC2 compute hours and data transfer are measured in different units (for example, hours vs. GB). To get more meaningful UsageQuantity
metrics, filter by UsageType
or UsageTypeGroups
.
Metrics
is required for GetCostAndUsage
requests.
Which metrics are returned in the query. For more information about blended and unblended rates, see Why does the \"blended\" annotation appear on some line items in my bill?.
Valid values are AmortizedCost
, BlendedCost
, NetAmortizedCost
, NetUnblendedCost
, NormalizedUsageAmount
, UnblendedCost
, and UsageQuantity
.
If you return the UsageQuantity
metric, the service aggregates all usage numbers without taking the units into account. For example, if you aggregate usageQuantity
across all of Amazon EC2, the results aren't meaningful because Amazon EC2 compute hours and data transfer are measured in different units (for example, hours vs. GB). To get more meaningful UsageQuantity
metrics, filter by UsageType
or UsageTypeGroups
.
Metrics
is required for GetCostAndUsageWithResources
requests.
The measurement that you want your reservation coverage reported in.
Valid values are Hour
, Unit
, and Cost
. You can use multiple values in a request.
The measurement that you want your Savings Plans coverage reported in. The only valid value is SpendCoveredBySavingsPlans
.
The token to retrieve the next set of results. Amazon Web Services provides the token when the response from a previous call has more results than the maximum page size.
", "GetTagsRequest$NextPageToken": "The token to retrieve the next set of results. AWS provides the token when the response from a previous call has more results than the maximum page size.
", "GetTagsResponse$NextPageToken": "The token for the next set of retrievable results. AWS provides the token when the response from a previous call has more results than the maximum page size.
", - "ListCostCategoryDefinitionsRequest$NextToken": "The token to retrieve the next set of results. Amazon Web Services provides the token when the response from a previous call has more results than the maximum page size.
You can use this information to retrieve the full Cost Category information using DescribeCostCategory
.
The token to retrieve the next set of results. Amazon Web Services provides the token when the response from a previous call has more results than the maximum page size.
", "ListCostCategoryDefinitionsResponse$NextToken": "The token to retrieve the next set of results. Amazon Web Services provides the token when the response from a previous call has more results than the maximum page size.
" } }, "NonNegativeInteger": { "base": null, "refs": { + "CostCategoryReference$NumberOfRules": "The number of rules associated with a specific Cost Category.
", "GetReservationPurchaseRecommendationRequest$PageSize": "The number of recommendations that you want returned in a single response object.
", "GetRightsizingRecommendationRequest$PageSize": "The number of recommendations that you want returned in a single response object.
", "GetSavingsPlansPurchaseRecommendationRequest$PageSize": "The number of recommendations that you want returned in a single response object.
" @@ -893,7 +917,7 @@ "OnDemandCost": { "base": null, "refs": { - "CoverageCost$OnDemandCost": "How much an On-Demand instance cost.
" + "CoverageCost$OnDemandCost": "How much an On-Demand Instance costs.
" } }, "OnDemandCostOfRIHoursUsed": { @@ -957,6 +981,12 @@ "InstanceDetails$RDSInstanceDetails": "The Amazon RDS instances that AWS recommends that you purchase.
" } }, + "RecommendationTarget": { + "base": null, + "refs": { + "RightsizingRecommendationConfiguration$RecommendationTarget": " The option to see recommendations within the same instance family, or recommendations for instances across other families. The default value is SAME_INSTANCE_FAMILY
.
Details about the Amazon Redshift instances that AWS recommends that you purchase.
", "refs": { @@ -1019,7 +1049,7 @@ } }, "ReservationPurchaseRecommendationMetadata": { - "base": "Information about this specific recommendation, such as the time stamp for when AWS made a specific recommendation.
", + "base": "Information about this specific recommendation, such as the timestamp for when AWS made a specific recommendation.
", "refs": { "GetReservationPurchaseRecommendationResponse$Metadata": "Information about this specific recommendation call, such as the time stamp for when Cost Explorer generated this recommendation.
" } @@ -1098,6 +1128,13 @@ "RightsizingRecommendationList$member": null } }, + "RightsizingRecommendationConfiguration": { + "base": "Enables you to customize recommendations across two attributes. You can choose to view recommendations for instances within the same instance families or across different instance families. You can also choose to view your estimated savings associated with recommendations with consideration of existing Savings Plans or RI benefits, or niether.
", + "refs": { + "GetRightsizingRecommendationRequest$Configuration": "Enables you to customize recommendations across two attributes. You can choose to view recommendations for instances within the same instance families or across different instance families. You can also choose to view your estimated savings associated with recommendations with consideration of existing Savings Plans or RI benefits, or niether.
", + "GetRightsizingRecommendationResponse$Configuration": "Enables you to customize recommendations across two attributes. You can choose to view recommendations for instances within the same instance families or across different instance families. You can also choose to view your estimated savings associated with recommendations with consideration of existing Savings Plans or RI benefits, or niether.
" + } + }, "RightsizingRecommendationList": { "base": null, "refs": { @@ -1175,7 +1212,7 @@ "SavingsPlansPurchaseRecommendationDetailList": { "base": null, "refs": { - "SavingsPlansPurchaseRecommendation$SavingsPlansPurchaseRecommendationDetails": "Details for the Savings Plans we recommend you to purchase to cover existing, Savings Plans eligible workloads.
" + "SavingsPlansPurchaseRecommendation$SavingsPlansPurchaseRecommendationDetails": "Details for the Savings Plans we recommend that you purchase to cover existing Savings Plans eligible workloads.
" } }, "SavingsPlansPurchaseRecommendationMetadata": { @@ -1414,7 +1451,7 @@ "base": null, "refs": { "CostCategoryValues$Values": "The specific value of the Cost Category.
", - "DimensionValues$Values": "The metadata values that you can use to filter and group your results. You can use GetDimensionValues
to find specific values.
Valid values for the SERVICE
dimension are Amazon Elastic Compute Cloud - Compute
, Amazon Elasticsearch Service
, Amazon ElastiCache
, Amazon Redshift
, and Amazon Relational Database Service
.
The metadata values that you can use to filter and group your results. You can use GetDimensionValues
to find specific values.
The specific value of the tag.
" } }, diff --git a/models/apis/ce/2017-10-25/paginators-1.json b/models/apis/ce/2017-10-25/paginators-1.json index e976e315b13..431b8e5dc52 100644 --- a/models/apis/ce/2017-10-25/paginators-1.json +++ b/models/apis/ce/2017-10-25/paginators-1.json @@ -9,6 +9,11 @@ "input_token": "NextToken", "output_token": "NextToken", "limit_key": "MaxResults" + }, + "ListCostCategoryDefinitions": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" } } } diff --git a/models/apis/chime/2018-05-01/api-2.json b/models/apis/chime/2018-05-01/api-2.json index f6fde0eced0..b31a0f0178e 100644 --- a/models/apis/chime/2018-05-01/api-2.json +++ b/models/apis/chime/2018-05-01/api-2.json @@ -321,6 +321,25 @@ {"shape":"ServiceFailureException"} ] }, + "CreateProxySession":{ + "name":"CreateProxySession", + "http":{ + "method":"POST", + "requestUri":"/voice-connectors/{voiceConnectorId}/proxy-sessions", + "responseCode":201 + }, + "input":{"shape":"CreateProxySessionRequest"}, + "output":{"shape":"CreateProxySessionResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "CreateRoom":{ "name":"CreateRoom", "http":{ @@ -513,6 +532,24 @@ {"shape":"ServiceFailureException"} ] }, + "DeleteProxySession":{ + "name":"DeleteProxySession", + "http":{ + "method":"DELETE", + "requestUri":"/voice-connectors/{voiceConnectorId}/proxy-sessions/{proxySessionId}", + "responseCode":204 + }, + "input":{"shape":"DeleteProxySessionRequest"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "DeleteRoom":{ "name":"DeleteRoom", "http":{ @@ -605,6 +642,24 @@ {"shape":"ServiceFailureException"} ] }, + "DeleteVoiceConnectorProxy":{ + "name":"DeleteVoiceConnectorProxy", + "http":{ + "method":"DELETE", + "requestUri":"/voice-connectors/{voiceConnectorId}/programmable-numbers/proxy", + "responseCode":204 + }, + "input":{"shape":"DeleteVoiceConnectorProxyRequest"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "DeleteVoiceConnectorStreamingConfiguration":{ "name":"DeleteVoiceConnectorStreamingConfiguration", "http":{ @@ -918,6 +973,25 @@ {"shape":"ServiceFailureException"} ] }, + "GetProxySession":{ + "name":"GetProxySession", + "http":{ + "method":"GET", + "requestUri":"/voice-connectors/{voiceConnectorId}/proxy-sessions/{proxySessionId}", + "responseCode":200 + }, + "input":{"shape":"GetProxySessionRequest"}, + "output":{"shape":"GetProxySessionResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "GetRoom":{ "name":"GetRoom", "http":{ @@ -1051,6 +1125,25 @@ {"shape":"ServiceFailureException"} ] }, + "GetVoiceConnectorProxy":{ + 
"name":"GetVoiceConnectorProxy", + "http":{ + "method":"GET", + "requestUri":"/voice-connectors/{voiceConnectorId}/programmable-numbers/proxy", + "responseCode":200 + }, + "input":{"shape":"GetVoiceConnectorProxyRequest"}, + "output":{"shape":"GetVoiceConnectorProxyResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "GetVoiceConnectorStreamingConfiguration":{ "name":"GetVoiceConnectorStreamingConfiguration", "http":{ @@ -1145,6 +1238,25 @@ {"shape":"ServiceFailureException"} ] }, + "ListAttendeeTags":{ + "name":"ListAttendeeTags", + "http":{ + "method":"GET", + "requestUri":"/meetings/{meetingId}/attendees/{attendeeId}/tags", + "responseCode":200 + }, + "input":{"shape":"ListAttendeeTagsRequest"}, + "output":{"shape":"ListAttendeeTagsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"ThrottledClientException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "ListAttendees":{ "name":"ListAttendees", "http":{ @@ -1183,6 +1295,25 @@ {"shape":"ThrottledClientException"} ] }, + "ListMeetingTags":{ + "name":"ListMeetingTags", + "http":{ + "method":"GET", + "requestUri":"/meetings/{meetingId}/tags", + "responseCode":200 + }, + "input":{"shape":"ListMeetingTagsRequest"}, + "output":{"shape":"ListMeetingTagsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"ThrottledClientException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "ListMeetings":{ "name":"ListMeetings", "http":{ @@ -1236,6 +1367,25 @@ {"shape":"ServiceFailureException"} ] }, + "ListProxySessions":{ + "name":"ListProxySessions", + "http":{ + "method":"GET", + "requestUri":"/voice-connectors/{voiceConnectorId}/proxy-sessions", + "responseCode":200 + }, + "input":{"shape":"ListProxySessionsRequest"}, + "output":{"shape":"ListProxySessionsResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "ListRoomMemberships":{ "name":"ListRoomMemberships", "http":{ @@ -1274,6 +1424,23 @@ {"shape":"ServiceFailureException"} ] }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"GET", + "requestUri":"/tags" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "ListUsers":{ "name":"ListUsers", "http":{ @@ -1424,6 +1591,25 @@ {"shape":"ServiceFailureException"} ] }, + "PutVoiceConnectorProxy":{ + "name":"PutVoiceConnectorProxy", + "http":{ + "method":"PUT", + "requestUri":"/voice-connectors/{voiceConnectorId}/programmable-numbers/proxy" + }, + "input":{"shape":"PutVoiceConnectorProxyRequest"}, + 
"output":{"shape":"PutVoiceConnectorProxyResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"AccessDeniedException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "PutVoiceConnectorStreamingConfiguration":{ "name":"PutVoiceConnectorStreamingConfiguration", "http":{ @@ -1557,6 +1743,114 @@ {"shape":"ServiceFailureException"} ] }, + "TagAttendee":{ + "name":"TagAttendee", + "http":{ + "method":"POST", + "requestUri":"/meetings/{meetingId}/attendees/{attendeeId}/tags?operation=add", + "responseCode":204 + }, + "input":{"shape":"TagAttendeeRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"ResourceLimitExceededException"}, + {"shape":"ThrottledClientException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, + "TagMeeting":{ + "name":"TagMeeting", + "http":{ + "method":"POST", + "requestUri":"/meetings/{meetingId}/tags?operation=add", + "responseCode":204 + }, + "input":{"shape":"TagMeetingRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"ResourceLimitExceededException"}, + {"shape":"ThrottledClientException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/tags?operation=tag-resource", + "responseCode":204 + }, + "input":{"shape":"TagResourceRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, + "UntagAttendee":{ + "name":"UntagAttendee", + "http":{ + "method":"POST", + "requestUri":"/meetings/{meetingId}/attendees/{attendeeId}/tags?operation=delete", + "responseCode":204 + }, + "input":{"shape":"UntagAttendeeRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"ThrottledClientException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, + "UntagMeeting":{ + "name":"UntagMeeting", + "http":{ + "method":"POST", + "requestUri":"/meetings/{meetingId}/tags?operation=delete", + "responseCode":204 + }, + "input":{"shape":"UntagMeetingRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"ThrottledClientException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/tags?operation=untag-resource", + "responseCode":204 + }, + "input":{"shape":"UntagResourceRequest"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ForbiddenException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "UpdateAccount":{ 
"name":"UpdateAccount", "http":{ @@ -1668,6 +1962,25 @@ {"shape":"ServiceFailureException"} ] }, + "UpdateProxySession":{ + "name":"UpdateProxySession", + "http":{ + "method":"POST", + "requestUri":"/voice-connectors/{voiceConnectorId}/proxy-sessions/{proxySessionId}", + "responseCode":201 + }, + "input":{"shape":"UpdateProxySessionRequest"}, + "output":{"shape":"UpdateProxySessionResponse"}, + "errors":[ + {"shape":"UnauthorizedClientException"}, + {"shape":"NotFoundException"}, + {"shape":"ForbiddenException"}, + {"shape":"BadRequestException"}, + {"shape":"ThrottledClientException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ServiceFailureException"} + ] + }, "UpdateRoom":{ "name":"UpdateRoom", "http":{ @@ -1844,6 +2157,10 @@ "AlexaForBusinessRoomArn":{"shape":"SensitiveString"} } }, + "AreaCode":{ + "type":"string", + "pattern":"^$|^[0-9]{3,3}$" + }, "Arn":{ "type":"string", "max":1024, @@ -1947,6 +2264,18 @@ "type":"list", "member":{"shape":"Attendee"} }, + "AttendeeTagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":10, + "min":1 + }, + "AttendeeTagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":10, + "min":1 + }, "BadRequestException":{ "type":"structure", "members":{ @@ -2149,6 +2478,17 @@ "type":"list", "member":{"shape":"CallingRegion"} }, + "Capability":{ + "type":"string", + "enum":[ + "Voice", + "SMS" + ] + }, + "CapabilityList":{ + "type":"list", + "member":{"shape":"Capability"} + }, "ClientRequestToken":{ "type":"string", "max":64, @@ -2165,11 +2505,21 @@ "error":{"httpStatusCode":409}, "exception":true }, - "CpsLimit":{ - "type":"integer", - "min":1 + "Country":{ + "type":"string", + "pattern":"^$|^[A-Z]{2,2}$" }, - "CreateAccountRequest":{ + "CountryList":{ + "type":"list", + "member":{"shape":"Country"}, + "max":100, + "min":1 + }, + "CpsLimit":{ + "type":"integer", + "min":1 + }, + "CreateAccountRequest":{ "type":"structure", "required":["Name"], "members":{ @@ -2202,14 +2552,16 @@ "location":"uri", "locationName":"meetingId" }, - "ExternalUserId":{"shape":"ExternalUserIdType"} + "ExternalUserId":{"shape":"ExternalUserIdType"}, + "Tags":{"shape":"AttendeeTagList"} } }, "CreateAttendeeRequestItem":{ "type":"structure", "required":["ExternalUserId"], "members":{ - "ExternalUserId":{"shape":"ExternalUserIdType"} + "ExternalUserId":{"shape":"ExternalUserIdType"}, + "Tags":{"shape":"AttendeeTagList"} } }, "CreateAttendeeRequestItemList":{ @@ -2252,8 +2604,10 @@ "shape":"ClientRequestToken", "idempotencyToken":true }, + "ExternalMeetingId":{"shape":"ExternalMeetingIdType"}, "MeetingHostId":{"shape":"ExternalUserIdType"}, "MediaRegion":{"shape":"String"}, + "Tags":{"shape":"MeetingTagList"}, "NotificationsConfiguration":{"shape":"MeetingNotificationConfiguration"} } }, @@ -2280,6 +2634,34 @@ "PhoneNumberOrder":{"shape":"PhoneNumberOrder"} } }, + "CreateProxySessionRequest":{ + "type":"structure", + "required":[ + "ParticipantPhoneNumbers", + "Capabilities", + "VoiceConnectorId" + ], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "ParticipantPhoneNumbers":{"shape":"ParticipantPhoneNumberList"}, + "Name":{"shape":"ProxySessionNameString"}, + "ExpiryMinutes":{"shape":"PositiveInteger"}, + "Capabilities":{"shape":"CapabilityList"}, + "NumberSelectionBehavior":{"shape":"NumberSelectionBehavior"}, + "GeoMatchLevel":{"shape":"GeoMatchLevel"}, + "GeoMatchParams":{"shape":"GeoMatchParams"} + } + }, + "CreateProxySessionResponse":{ + "type":"structure", + 
"members":{ + "ProxySession":{"shape":"ProxySession"} + } + }, "CreateRoomMembershipRequest":{ "type":"structure", "required":[ @@ -2476,6 +2858,25 @@ } } }, + "DeleteProxySessionRequest":{ + "type":"structure", + "required":[ + "VoiceConnectorId", + "ProxySessionId" + ], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "ProxySessionId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"proxySessionId" + } + } + }, "DeleteRoomMembershipRequest":{ "type":"structure", "required":[ @@ -2542,6 +2943,17 @@ } } }, + "DeleteVoiceConnectorProxyRequest":{ + "type":"structure", + "required":["VoiceConnectorId"], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + } + } + }, "DeleteVoiceConnectorRequest":{ "type":"structure", "required":["VoiceConnectorId"], @@ -2716,6 +3128,12 @@ "LambdaFunctionArn":{"shape":"SensitiveString"} } }, + "ExternalMeetingIdType":{ + "type":"string", + "max":64, + "min":2, + "sensitive":true + }, "ExternalUserIdType":{ "type":"string", "max":64, @@ -2731,6 +3149,24 @@ "error":{"httpStatusCode":403}, "exception":true }, + "GeoMatchLevel":{ + "type":"string", + "enum":[ + "Country", + "AreaCode" + ] + }, + "GeoMatchParams":{ + "type":"structure", + "required":[ + "Country", + "AreaCode" + ], + "members":{ + "Country":{"shape":"Country"}, + "AreaCode":{"shape":"AreaCode"} + } + }, "GetAccountRequest":{ "type":"structure", "required":["AccountId"], @@ -2905,6 +3341,31 @@ "CallingNameUpdatedTimestamp":{"shape":"Iso8601Timestamp"} } }, + "GetProxySessionRequest":{ + "type":"structure", + "required":[ + "VoiceConnectorId", + "ProxySessionId" + ], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "ProxySessionId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"proxySessionId" + } + } + }, + "GetProxySessionResponse":{ + "type":"structure", + "members":{ + "ProxySession":{"shape":"ProxySession"} + } + }, "GetRoomRequest":{ "type":"structure", "required":[ @@ -3031,6 +3492,23 @@ "Origination":{"shape":"Origination"} } }, + "GetVoiceConnectorProxyRequest":{ + "type":"structure", + "required":["VoiceConnectorId"], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + } + } + }, + "GetVoiceConnectorProxyResponse":{ + "type":"structure", + "members":{ + "Proxy":{"shape":"Proxy"} + } + }, "GetVoiceConnectorRequest":{ "type":"structure", "required":["VoiceConnectorId"], @@ -3103,6 +3581,7 @@ "type":"string", "pattern":"[a-fA-F0-9]{8}(?:-[a-fA-F0-9]{4}){3}-[a-fA-F0-9]{12}" }, + "Integer":{"type":"integer"}, "Invite":{ "type":"structure", "members":{ @@ -3201,6 +3680,31 @@ "NextToken":{"shape":"String"} } }, + "ListAttendeeTagsRequest":{ + "type":"structure", + "required":[ + "MeetingId", + "AttendeeId" + ], + "members":{ + "MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + }, + "AttendeeId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"attendeeId" + } + } + }, + "ListAttendeeTagsResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagList"} + } + }, "ListAttendeesRequest":{ "type":"structure", "required":["MeetingId"], @@ -3257,6 +3761,23 @@ "NextToken":{"shape":"String"} } }, + "ListMeetingTagsRequest":{ + "type":"structure", + "required":["MeetingId"], + "members":{ + 
"MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + } + } + }, + "ListMeetingTagsResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagList"} + } + }, "ListMeetingsRequest":{ "type":"structure", "members":{ @@ -3343,6 +3864,39 @@ "NextToken":{"shape":"String"} } }, + "ListProxySessionsRequest":{ + "type":"structure", + "required":["VoiceConnectorId"], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "Status":{ + "shape":"ProxySessionStatus", + "location":"querystring", + "locationName":"status" + }, + "NextToken":{ + "shape":"NextTokenString", + "location":"querystring", + "locationName":"next-token" + }, + "MaxResults":{ + "shape":"ResultMax", + "location":"querystring", + "locationName":"max-results" + } + } + }, + "ListProxySessionsResponse":{ + "type":"structure", + "members":{ + "ProxySessions":{"shape":"ProxySessions"}, + "NextToken":{"shape":"NextTokenString"} + } + }, "ListRoomMembershipsRequest":{ "type":"structure", "required":[ @@ -3412,6 +3966,23 @@ "NextToken":{"shape":"String"} } }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceARN"], + "members":{ + "ResourceARN":{ + "shape":"Arn", + "location":"querystring", + "locationName":"arn" + } + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagList"} + } + }, "ListUsersRequest":{ "type":"structure", "required":["AccountId"], @@ -3557,6 +4128,7 @@ "type":"structure", "members":{ "MeetingId":{"shape":"GuidString"}, + "ExternalMeetingId":{"shape":"ExternalMeetingIdType"}, "MediaPlacement":{"shape":"MediaPlacement"}, "MediaRegion":{"shape":"String"} } @@ -3572,6 +4144,18 @@ "SqsQueueArn":{"shape":"Arn"} } }, + "MeetingTagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":50, + "min":1 + }, + "MeetingTagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":50, + "min":1 + }, "Member":{ "type":"structure", "members":{ @@ -3614,10 +4198,20 @@ "member":{"shape":"MembershipItem"}, "max":50 }, + "NextTokenString":{ + "type":"string", + "max":65535 + }, "NonEmptyString":{ "type":"string", "pattern":".*\\S.*" }, + "NonEmptyString128":{ + "type":"string", + "max":128, + "min":1, + "pattern":".*\\S.*" + }, "NonEmptyStringList":{ "type":"list", "member":{"shape":"String"}, @@ -3633,6 +4227,13 @@ "exception":true }, "NullableBoolean":{"type":"boolean"}, + "NumberSelectionBehavior":{ + "type":"string", + "enum":[ + "PreferSticky", + "AvoidSticky" + ] + }, "OrderedPhoneNumber":{ "type":"structure", "members":{ @@ -3690,6 +4291,23 @@ "max":100, "min":1 }, + "Participant":{ + "type":"structure", + "members":{ + "PhoneNumber":{"shape":"E164PhoneNumber"}, + "ProxyPhoneNumber":{"shape":"E164PhoneNumber"} + } + }, + "ParticipantPhoneNumberList":{ + "type":"list", + "member":{"shape":"E164PhoneNumber"}, + "max":2, + "min":2 + }, + "Participants":{ + "type":"list", + "member":{"shape":"Participant"} + }, "PhoneNumber":{ "type":"structure", "members":{ @@ -3816,11 +4434,59 @@ "max":65535, "min":0 }, + "PositiveInteger":{ + "type":"integer", + "min":1 + }, "ProfileServiceMaxResults":{ "type":"integer", "max":200, "min":1 }, + "Proxy":{ + "type":"structure", + "members":{ + "DefaultSessionExpiryMinutes":{"shape":"Integer"}, + "Disabled":{"shape":"Boolean"}, + "FallBackPhoneNumber":{"shape":"E164PhoneNumber"}, + "PhoneNumberCountries":{"shape":"StringList"} + } + }, + "ProxySession":{ + "type":"structure", + "members":{ + 
"VoiceConnectorId":{"shape":"NonEmptyString128"}, + "ProxySessionId":{"shape":"NonEmptyString128"}, + "Name":{"shape":"String128"}, + "Status":{"shape":"ProxySessionStatus"}, + "ExpiryMinutes":{"shape":"PositiveInteger"}, + "Capabilities":{"shape":"CapabilityList"}, + "CreatedTimestamp":{"shape":"Iso8601Timestamp"}, + "UpdatedTimestamp":{"shape":"Iso8601Timestamp"}, + "EndedTimestamp":{"shape":"Iso8601Timestamp"}, + "Participants":{"shape":"Participants"}, + "NumberSelectionBehavior":{"shape":"NumberSelectionBehavior"}, + "GeoMatchLevel":{"shape":"GeoMatchLevel"}, + "GeoMatchParams":{"shape":"GeoMatchParams"} + } + }, + "ProxySessionNameString":{ + "type":"string", + "pattern":"^$|^[a-zA-Z0-9 ]{0,30}$", + "sensitive":true + }, + "ProxySessionStatus":{ + "type":"string", + "enum":[ + "Open", + "InProgress", + "Closed" + ] + }, + "ProxySessions":{ + "type":"list", + "member":{"shape":"ProxySession"} + }, "PutEventsConfigurationRequest":{ "type":"structure", "required":[ @@ -3890,6 +4556,31 @@ "Origination":{"shape":"Origination"} } }, + "PutVoiceConnectorProxyRequest":{ + "type":"structure", + "required":[ + "DefaultSessionExpiryMinutes", + "PhoneNumberPoolCountries", + "VoiceConnectorId" + ], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "DefaultSessionExpiryMinutes":{"shape":"Integer"}, + "PhoneNumberPoolCountries":{"shape":"CountryList"}, + "FallBackPhoneNumber":{"shape":"E164PhoneNumber"}, + "Disabled":{"shape":"Boolean"} + } + }, + "PutVoiceConnectorProxyResponse":{ + "type":"structure", + "members":{ + "Proxy":{"shape":"Proxy"} + } + }, "PutVoiceConnectorStreamingConfigurationRequest":{ "type":"structure", "required":[ @@ -4162,10 +4853,96 @@ } }, "String":{"type":"string"}, + "String128":{ + "type":"string", + "max":128 + }, "StringList":{ "type":"list", "member":{"shape":"String"} }, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{"shape":"TagKey"}, + "Value":{"shape":"TagValue"} + } + }, + "TagAttendeeRequest":{ + "type":"structure", + "required":[ + "MeetingId", + "AttendeeId", + "Tags" + ], + "members":{ + "MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + }, + "AttendeeId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"attendeeId" + }, + "Tags":{"shape":"AttendeeTagList"} + } + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1, + "sensitive":true + }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":50, + "min":1 + }, + "TagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":50, + "min":1 + }, + "TagMeetingRequest":{ + "type":"structure", + "required":[ + "MeetingId", + "Tags" + ], + "members":{ + "MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + }, + "Tags":{"shape":"MeetingTagList"} + } + }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "Tags" + ], + "members":{ + "ResourceARN":{"shape":"Arn"}, + "Tags":{"shape":"TagList"} + } + }, + "TagValue":{ + "type":"string", + "max":256, + "min":1, + "sensitive":true + }, "TelephonySettings":{ "type":"structure", "required":[ @@ -4229,6 +5006,53 @@ "error":{"httpStatusCode":422}, "exception":true }, + "UntagAttendeeRequest":{ + "type":"structure", + "required":[ + "MeetingId", + "TagKeys", + "AttendeeId" + ], + "members":{ + "MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + }, + "AttendeeId":{ + 
"shape":"GuidString", + "location":"uri", + "locationName":"attendeeId" + }, + "TagKeys":{"shape":"AttendeeTagKeyList"} + } + }, + "UntagMeetingRequest":{ + "type":"structure", + "required":[ + "MeetingId", + "TagKeys" + ], + "members":{ + "MeetingId":{ + "shape":"GuidString", + "location":"uri", + "locationName":"meetingId" + }, + "TagKeys":{"shape":"MeetingTagKeyList"} + } + }, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "TagKeys" + ], + "members":{ + "ResourceARN":{"shape":"Arn"}, + "TagKeys":{"shape":"TagKeyList"} + } + }, "UpdateAccountRequest":{ "type":"structure", "required":["AccountId"], @@ -4343,6 +5167,34 @@ "CallingName":{"shape":"CallingName"} } }, + "UpdateProxySessionRequest":{ + "type":"structure", + "required":[ + "Capabilities", + "VoiceConnectorId", + "ProxySessionId" + ], + "members":{ + "VoiceConnectorId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"voiceConnectorId" + }, + "ProxySessionId":{ + "shape":"NonEmptyString128", + "location":"uri", + "locationName":"proxySessionId" + }, + "Capabilities":{"shape":"CapabilityList"}, + "ExpiryMinutes":{"shape":"PositiveInteger"} + } + }, + "UpdateProxySessionResponse":{ + "type":"structure", + "members":{ + "ProxySession":{"shape":"ProxySession"} + } + }, "UpdateRoomMembershipRequest":{ "type":"structure", "required":[ diff --git a/models/apis/chime/2018-05-01/docs-2.json b/models/apis/chime/2018-05-01/docs-2.json index 27fcf9a162f..1a6f7b348f5 100644 --- a/models/apis/chime/2018-05-01/docs-2.json +++ b/models/apis/chime/2018-05-01/docs-2.json @@ -18,6 +18,7 @@ "CreateBot": "Creates a bot for an Amazon Chime Enterprise account.
", "CreateMeeting": "Creates a new Amazon Chime SDK meeting in the specified media Region with no initial attendees. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "CreatePhoneNumberOrder": "Creates an order for phone numbers to be provisioned. Choose from Amazon Chime Business Calling and Amazon Chime Voice Connector product types. For toll-free numbers, you must use the Amazon Chime Voice Connector product type.
", + "CreateProxySession": "Creates a proxy session on the specified Amazon Chime Voice Connector for the specified participant phone numbers.
", "CreateRoom": "Creates a chat room for the specified Amazon Chime Enterprise account.
", "CreateRoomMembership": "Adds a member to a chat room in an Amazon Chime Enterprise account. A member can be either a user or a bot. The member role designates whether the member is a chat room administrator or a general chat room member.
", "CreateUser": "Creates a user under the specified Amazon Chime account.
", @@ -28,11 +29,13 @@ "DeleteEventsConfiguration": "Deletes the events configuration that allows a bot to receive outgoing events.
", "DeleteMeeting": "Deletes the specified Amazon Chime SDK meeting. When a meeting is deleted, its attendees are also deleted and clients can no longer join it. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "DeletePhoneNumber": "Moves the specified phone number into the Deletion queue. A phone number must be disassociated from any users or Amazon Chime Voice Connectors before it can be deleted.
Deleted phone numbers remain in the Deletion queue for 7 days before they are deleted permanently.
", + "DeleteProxySession": "Deletes the specified proxy session from the specified Amazon Chime Voice Connector.
", "DeleteRoom": "Deletes a chat room in an Amazon Chime Enterprise account.
", "DeleteRoomMembership": "Removes a member from a chat room in an Amazon Chime Enterprise account.
", "DeleteVoiceConnector": "Deletes the specified Amazon Chime Voice Connector. Any phone numbers associated with the Amazon Chime Voice Connector must be disassociated from it before it can be deleted.
", "DeleteVoiceConnectorGroup": "Deletes the specified Amazon Chime Voice Connector group. Any VoiceConnectorItems
and phone numbers associated with the group must be removed before it can be deleted.
", "DeleteVoiceConnectorOrigination": "Deletes the origination settings for the specified Amazon Chime Voice Connector.
", + "DeleteVoiceConnectorProxy": "Deletes the proxy configuration from the specified Amazon Chime Voice Connector.
", "DeleteVoiceConnectorStreamingConfiguration": "Deletes the streaming configuration for the specified Amazon Chime Voice Connector.
", "DeleteVoiceConnectorTermination": "Deletes the termination settings for the specified Amazon Chime Voice Connector.
", "DeleteVoiceConnectorTerminationCredentials": "Deletes the specified SIP credentials used by your equipment to authenticate during call termination.
", @@ -50,6 +53,7 @@ "GetPhoneNumber": "Retrieves details for the specified phone number ID, such as associations, capabilities, and product type.
", "GetPhoneNumberOrder": "Retrieves details for the specified phone number order, such as order creation timestamp, phone numbers in E.164 format, product type, and order status.
", "GetPhoneNumberSettings": "Retrieves the phone number settings for the administrator's AWS account, such as the default outbound calling name.
", + "GetProxySession": "Gets the specified proxy session details for the specified Amazon Chime Voice Connector.
", "GetRoom": "Retrieves room details, such as the room name, for a room in an Amazon Chime Enterprise account.
", "GetUser": "Retrieves details for the specified user ID, such as primary email address, license type, and personal meeting PIN.
To retrieve user details with an email address instead of a user ID, use the ListUsers action, and then filter by email address.
", "GetUserSettings": "Retrieves settings for the specified user ID, such as any associated phone number settings.
", @@ -57,18 +61,23 @@ "GetVoiceConnectorGroup": "Retrieves details for the specified Amazon Chime Voice Connector group, such as timestamps, name, and associated VoiceConnectorItems
.
", "GetVoiceConnectorLoggingConfiguration": "Retrieves the logging configuration details for the specified Amazon Chime Voice Connector. Shows whether SIP message logs are enabled for sending to Amazon CloudWatch Logs.
", "GetVoiceConnectorOrigination": "Retrieves origination setting details for the specified Amazon Chime Voice Connector.
", + "GetVoiceConnectorProxy": "Gets the proxy configuration details for the specified Amazon Chime Voice Connector.
", "GetVoiceConnectorStreamingConfiguration": "Retrieves the streaming configuration details for the specified Amazon Chime Voice Connector. Shows whether media streaming is enabled for sending to Amazon Kinesis. It also shows the retention period, in hours, for the Amazon Kinesis data.
", "GetVoiceConnectorTermination": "Retrieves termination setting details for the specified Amazon Chime Voice Connector.
", "GetVoiceConnectorTerminationHealth": "Retrieves information about the last time a SIP OPTIONS
ping was received from your SIP infrastructure for the specified Amazon Chime Voice Connector.
", "InviteUsers": "Sends email to a maximum of 50 users, inviting them to the specified Amazon Chime Team
account. Only Team
account types are currently supported for this action.
", "ListAccounts": "Lists the Amazon Chime accounts under the administrator's AWS account. You can filter accounts by account name prefix. To find out which Amazon Chime account a user belongs to, you can filter by the user's email address, which returns one account result.
", + "ListAttendeeTags": "Lists the tags applied to an Amazon Chime SDK attendee resource.
", "ListAttendees": "Lists the attendees for the specified Amazon Chime SDK meeting. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "ListBots": "Lists the bots associated with the administrator's Amazon Chime Enterprise account ID.
", + "ListMeetingTags": "Lists the tags applied to an Amazon Chime SDK meeting resource.
", "ListMeetings": "Lists up to 100 active Amazon Chime SDK meetings. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "ListPhoneNumberOrders": "Lists the phone number orders for the administrator's Amazon Chime account.
", "ListPhoneNumbers": "Lists the phone numbers for the specified Amazon Chime account, Amazon Chime user, Amazon Chime Voice Connector, or Amazon Chime Voice Connector group.
", + "ListProxySessions": "Lists the proxy sessions for the specified Amazon Chime Voice Connector.
", "ListRoomMemberships": "Lists the membership details for the specified room in an Amazon Chime Enterprise account, such as the members' IDs, email addresses, and names.
", "ListRooms": "Lists the room details for the specified Amazon Chime Enterprise account. Optionally, filter the results by a member ID (user ID or bot ID) to see a list of rooms that the member belongs to.
", + "ListTagsForResource": "Lists the tags applied to an Amazon Chime SDK meeting resource.
", "ListUsers": "Lists the users that belong to the specified Amazon Chime account. You can specify an email address to list only the user that the email address belongs to.
", "ListVoiceConnectorGroups": "Lists the Amazon Chime Voice Connector groups for the administrator's AWS account.
", "ListVoiceConnectorTerminationCredentials": "Lists the SIP credentials for the specified Amazon Chime Voice Connector.
", @@ -77,6 +86,7 @@ "PutEventsConfiguration": "Creates an events configuration that allows a bot to receive outgoing events sent by Amazon Chime. Choose either an HTTPS endpoint or a Lambda function ARN. For more information, see Bot.
", "PutVoiceConnectorLoggingConfiguration": "Adds a logging configuration for the specified Amazon Chime Voice Connector. The logging configuration specifies whether SIP message logs are enabled for sending to Amazon CloudWatch Logs.
", "PutVoiceConnectorOrigination": "Adds origination settings for the specified Amazon Chime Voice Connector.
", + "PutVoiceConnectorProxy": "Puts the specified proxy configuration to the specified Amazon Chime Voice Connector.
", "PutVoiceConnectorStreamingConfiguration": "Adds a streaming configuration for the specified Amazon Chime Voice Connector. The streaming configuration specifies whether media streaming is enabled for sending to Amazon Kinesis. It also sets the retention period, in hours, for the Amazon Kinesis data.
", "PutVoiceConnectorTermination": "Adds termination settings for the specified Amazon Chime Voice Connector.
", "PutVoiceConnectorTerminationCredentials": "Adds termination SIP credentials for the specified Amazon Chime Voice Connector.
", @@ -84,12 +94,19 @@ "ResetPersonalPIN": "Resets the personal meeting PIN for the specified user on an Amazon Chime account. Returns the User object with the updated personal meeting PIN.
", "RestorePhoneNumber": "Moves a phone number from the Deletion queue back into the phone number Inventory.
", "SearchAvailablePhoneNumbers": "Searches phone numbers that can be ordered.
", + "TagAttendee": "Applies the specified tags to the specified Amazon Chime SDK attendee.
", + "TagMeeting": "Applies the specified tags to the specified Amazon Chime SDK meeting.
", + "TagResource": "Applies the specified tags to the specified Amazon Chime SDK meeting resource.
", + "UntagAttendee": "Untags the specified tags from the specified Amazon Chime SDK attendee.
", + "UntagMeeting": "Untags the specified tags from the specified Amazon Chime SDK meeting.
", + "UntagResource": "Untags the specified tags from the specified Amazon Chime SDK meeting resource.
", "UpdateAccount": "Updates account details for the specified Amazon Chime account. Currently, only account name updates are supported for this action.
", "UpdateAccountSettings": "Updates the settings for the specified Amazon Chime account. You can update settings for remote control of shared screens, or for the dial-out option. For more information about these settings, see Use the Policies Page in the Amazon Chime Administration Guide.
", "UpdateBot": "Updates the status of the specified bot, such as starting or stopping the bot from running in your Amazon Chime Enterprise account.
", "UpdateGlobalSettings": "Updates global settings for the administrator's AWS account, such as Amazon Chime Business Calling and Amazon Chime Voice Connector settings.
", "UpdatePhoneNumber": "Updates phone number details, such as product type or calling name, for the specified phone number ID. You can update one phone number detail at a time. For example, you can update either the product type or the calling name in one action.
For toll-free numbers, you must use the Amazon Chime Voice Connector product type.
Updates to outbound calling names can take up to 72 hours to complete. Pending updates to outbound calling names must be complete before you can request another update.
", "UpdatePhoneNumberSettings": "Updates the phone number settings for the administrator's AWS account, such as the default outbound calling name. You can update the default outbound calling name once every seven days. Outbound calling names can take up to 72 hours to update.
", + "UpdateProxySession": "Updates the specified proxy session details, such as voice or SMS capabilities.
", "UpdateRoom": "Updates room details, such as the room name, for a room in an Amazon Chime Enterprise account.
", "UpdateRoomMembership": "Updates room membership details, such as the member role, for a room in an Amazon Chime Enterprise account. The member role designates whether the member is a chat room administrator or a general chat room member. The member role can be updated only for user IDs.
", "UpdateUser": "Updates user details for a specified user ID. Currently, only LicenseType
updates are supported for this action.
The Alexa for Business metadata.
" } }, + "AreaCode": { + "base": null, + "refs": { + "GeoMatchParams$AreaCode": "The area code.
" + } + }, "Arn": { "base": null, "refs": { + "ListTagsForResourceRequest$ResourceARN": "The resource ARN.
", "MeetingNotificationConfiguration$SnsTopicArn": "The SNS topic ARN.
", - "MeetingNotificationConfiguration$SqsQueueArn": "The SQS queue ARN.
" + "MeetingNotificationConfiguration$SqsQueueArn": "The SQS queue ARN.
", + "TagResourceRequest$ResourceARN": "The resource ARN.
", + "UntagResourceRequest$ResourceARN": "The resource ARN.
" } }, "AssociatePhoneNumberWithUserRequest": { @@ -209,6 +235,20 @@ "ListAttendeesResponse$Attendees": "The Amazon Chime SDK attendee information.
" } }, + "AttendeeTagKeyList": { + "base": null, + "refs": { + "UntagAttendeeRequest$TagKeys": "The tag keys.
" + } + }, + "AttendeeTagList": { + "base": null, + "refs": { + "CreateAttendeeRequest$Tags": "The tag key-value pairs.
", + "CreateAttendeeRequestItem$Tags": "The tag key-value pairs.
", + "TagAttendeeRequest$Tags": "The tag key-value pairs.
" + } + }, "BadRequestException": { "base": "The input parameters don't match the service's restrictions.
", "refs": { @@ -299,6 +339,8 @@ "CreateVoiceConnectorRequest$RequireEncryption": "When enabled, requires encryption for the Amazon Chime Voice Connector.
", "LoggingConfiguration$EnableSIPLogs": "When true, enables SIP message logs for sending to Amazon CloudWatch Logs.
", "Origination$Disabled": "When origination settings are disabled, inbound calls are not enabled for your Amazon Chime Voice Connector.
", + "Proxy$Disabled": "When true, stops proxy sessions from being created on the specified Amazon Chime Voice Connector.
", + "PutVoiceConnectorProxyRequest$Disabled": "When true, stops proxy sessions from being created on the specified Amazon Chime Voice Connector.
", "StreamingConfiguration$Disabled": "When true, media streaming to Amazon Kinesis is turned off.
", "TelephonySettings$InboundCalling": "Allows or denies inbound calling.
", "TelephonySettings$OutboundCalling": "Allows or denies outbound calling.
", @@ -365,6 +407,20 @@ "Termination$CallingRegions": "The countries to which calls are allowed, in ISO 3166-1 alpha-2 format. Required.
" } }, + "Capability": { + "base": null, + "refs": { + "CapabilityList$member": null + } + }, + "CapabilityList": { + "base": null, + "refs": { + "CreateProxySessionRequest$Capabilities": "The proxy session capabilities.
", + "ProxySession$Capabilities": "The proxy session capabilities.
", + "UpdateProxySessionRequest$Capabilities": "The proxy session capabilities.
" + } + }, "ClientRequestToken": { "base": null, "refs": { @@ -377,6 +433,19 @@ "refs": { } }, + "Country": { + "base": null, + "refs": { + "CountryList$member": null, + "GeoMatchParams$Country": "The country.
" + } + }, + "CountryList": { + "base": null, + "refs": { + "PutVoiceConnectorProxyRequest$PhoneNumberPoolCountries": "The countries for proxy phone numbers to be selected from.
" + } + }, "CpsLimit": { "base": null, "refs": { @@ -451,6 +520,16 @@ "refs": { } }, + "CreateProxySessionRequest": { + "base": null, + "refs": { + } + }, + "CreateProxySessionResponse": { + "base": null, + "refs": { + } + }, "CreateRoomMembershipRequest": { "base": null, "refs": { @@ -549,6 +628,11 @@ "refs": { } }, + "DeleteProxySessionRequest": { + "base": null, + "refs": { + } + }, "DeleteRoomMembershipRequest": { "base": null, "refs": { @@ -569,6 +653,11 @@ "refs": { } }, + "DeleteVoiceConnectorProxyRequest": { + "base": null, + "refs": { + } + }, "DeleteVoiceConnectorRequest": { "base": null, "refs": { @@ -635,7 +724,12 @@ "AssociatePhoneNumberWithUserRequest$E164PhoneNumber": "The phone number, in E.164 format.
", "E164PhoneNumberList$member": null, "OrderedPhoneNumber$E164PhoneNumber": "The phone number, in E.164 format.
", + "Participant$PhoneNumber": "The participant's phone number.
", + "Participant$ProxyPhoneNumber": "The participant's proxy phone number.
", + "ParticipantPhoneNumberList$member": null, "PhoneNumber$E164PhoneNumber": "The phone number, in E.164 format.
", + "Proxy$FallBackPhoneNumber": "The phone number to route calls to after a proxy session expires.
", + "PutVoiceConnectorProxyRequest$FallBackPhoneNumber": "The phone number to route calls to after a proxy session expires.
", "Termination$DefaultPhoneNumber": "The default caller ID phone number.
" } }, @@ -693,6 +787,13 @@ "PutEventsConfigurationResponse$EventsConfiguration": null } }, + "ExternalMeetingIdType": { + "base": null, + "refs": { + "CreateMeetingRequest$ExternalMeetingId": "The external meeting ID.
", + "Meeting$ExternalMeetingId": "The external meeting ID.
" + } + }, "ExternalUserIdType": { "base": null, "refs": { @@ -708,6 +809,20 @@ "refs": { } }, + "GeoMatchLevel": { + "base": null, + "refs": { + "CreateProxySessionRequest$GeoMatchLevel": "The preference for matching the country or area code of the proxy phone number with that of the first participant.
", + "ProxySession$GeoMatchLevel": "The preference for matching the country or area code of the proxy phone number with that of the first participant.
" + } + }, + "GeoMatchParams": { + "base": "The country and area code for a proxy phone number in a proxy phone session.
", + "refs": { + "CreateProxySessionRequest$GeoMatchParams": "The country and area code for the proxy phone number.
", + "ProxySession$GeoMatchParams": "The country and area code for the proxy phone number.
" + } + }, "GetAccountRequest": { "base": null, "refs": { @@ -798,6 +913,16 @@ "refs": { } }, + "GetProxySessionRequest": { + "base": null, + "refs": { + } + }, + "GetProxySessionResponse": { + "base": null, + "refs": { + } + }, "GetRoomRequest": { "base": null, "refs": { @@ -858,6 +983,16 @@ "refs": { } }, + "GetVoiceConnectorProxyRequest": { + "base": null, + "refs": { + } + }, + "GetVoiceConnectorProxyResponse": { + "base": null, + "refs": { + } + }, "GetVoiceConnectorRequest": { "base": null, "refs": { @@ -911,9 +1046,25 @@ "GetAttendeeRequest$AttendeeId": "The Amazon Chime SDK attendee ID.
", "GetMeetingRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", "GetPhoneNumberOrderRequest$PhoneNumberOrderId": "The ID for the phone number order.
", + "ListAttendeeTagsRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", + "ListAttendeeTagsRequest$AttendeeId": "The Amazon Chime SDK attendee ID.
", "ListAttendeesRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", + "ListMeetingTagsRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", "Meeting$MeetingId": "The Amazon Chime SDK meeting ID.
", - "PhoneNumberOrder$PhoneNumberOrderId": "The phone number order ID.
" + "PhoneNumberOrder$PhoneNumberOrderId": "The phone number order ID.
", + "TagAttendeeRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", + "TagAttendeeRequest$AttendeeId": "The Amazon Chime SDK attendee ID.
", + "TagMeetingRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", + "UntagAttendeeRequest$MeetingId": "The Amazon Chime SDK meeting ID.
", + "UntagAttendeeRequest$AttendeeId": "The Amazon Chime SDK attendee ID.
", + "UntagMeetingRequest$MeetingId": "The Amazon Chime SDK meeting ID.
" + } + }, + "Integer": { + "base": null, + "refs": { + "Proxy$DefaultSessionExpiryMinutes": "The default number of minutes allowed for proxy sessions.
", + "PutVoiceConnectorProxyRequest$DefaultSessionExpiryMinutes": "The default number of minutes allowed for proxy sessions.
" } }, "Invite": { @@ -958,6 +1109,9 @@ "PhoneNumberAssociation$AssociatedTimestamp": "The timestamp of the phone number association, in ISO 8601 format.
", "PhoneNumberOrder$CreatedTimestamp": "The phone number order creation timestamp, in ISO 8601 format.
", "PhoneNumberOrder$UpdatedTimestamp": "The updated phone number order timestamp, in ISO 8601 format.
", + "ProxySession$CreatedTimestamp": "The created timestamp, in ISO 8601 format.
", + "ProxySession$UpdatedTimestamp": "The updated timestamp, in ISO 8601 format.
", + "ProxySession$EndedTimestamp": "The ended timestamp, in ISO 8601 format.
", "Room$CreatedTimestamp": "The room creation timestamp, in ISO 8601 format.
", "Room$UpdatedTimestamp": "The room update timestamp, in ISO 8601 format.
", "RoomMembership$UpdatedTimestamp": "The room membership update timestamp, in ISO 8601 format.
", @@ -1002,6 +1156,16 @@ "refs": { } }, + "ListAttendeeTagsRequest": { + "base": null, + "refs": { + } + }, + "ListAttendeeTagsResponse": { + "base": null, + "refs": { + } + }, "ListAttendeesRequest": { "base": null, "refs": { @@ -1022,6 +1186,16 @@ "refs": { } }, + "ListMeetingTagsRequest": { + "base": null, + "refs": { + } + }, + "ListMeetingTagsResponse": { + "base": null, + "refs": { + } + }, "ListMeetingsRequest": { "base": null, "refs": { @@ -1052,6 +1226,16 @@ "refs": { } }, + "ListProxySessionsRequest": { + "base": null, + "refs": { + } + }, + "ListProxySessionsResponse": { + "base": null, + "refs": { + } + }, "ListRoomMembershipsRequest": { "base": null, "refs": { @@ -1072,6 +1256,16 @@ "refs": { } }, + "ListTagsForResourceRequest": { + "base": null, + "refs": { + } + }, + "ListTagsForResourceResponse": { + "base": null, + "refs": { + } + }, "ListUsersRequest": { "base": null, "refs": { @@ -1156,6 +1350,19 @@ "CreateMeetingRequest$NotificationsConfiguration": "The configuration for resource targets to receive notifications when meeting and attendee events occur.
" } }, + "MeetingTagKeyList": { + "base": null, + "refs": { + "UntagMeetingRequest$TagKeys": "The tag keys.
" + } + }, + "MeetingTagList": { + "base": null, + "refs": { + "CreateMeetingRequest$Tags": "The tag key-value pairs.
", + "TagMeetingRequest$Tags": "The tag key-value pairs.
" + } + }, "Member": { "base": "The member details, such as email address, name, member ID, and member type.
", "refs": { @@ -1192,6 +1399,13 @@ "BatchCreateRoomMembershipRequest$MembershipItemList": "The list of membership items.
" } }, + "NextTokenString": { + "base": null, + "refs": { + "ListProxySessionsRequest$NextToken": "The token to use to retrieve the next page of results.
", + "ListProxySessionsResponse$NextToken": "The token to use to retrieve the next page of results.
" + } + }, "NonEmptyString": { "base": null, "refs": { @@ -1298,6 +1512,24 @@ "VoiceConnectorItem$VoiceConnectorId": "The Amazon Chime Voice Connector ID.
" } }, + "NonEmptyString128": { + "base": null, + "refs": { + "CreateProxySessionRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "DeleteProxySessionRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "DeleteProxySessionRequest$ProxySessionId": "The proxy session ID.
", + "DeleteVoiceConnectorProxyRequest$VoiceConnectorId": "The Amazon Chime Voice Connector ID.
", + "GetProxySessionRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "GetProxySessionRequest$ProxySessionId": "The proxy session ID.
", + "GetVoiceConnectorProxyRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "ListProxySessionsRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "ProxySession$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "ProxySession$ProxySessionId": "The proxy session ID.
", + "PutVoiceConnectorProxyRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "UpdateProxySessionRequest$VoiceConnectorId": "The Amazon Chime voice connector ID.
", + "UpdateProxySessionRequest$ProxySessionId": "The proxy session ID.
" + } + }, "NonEmptyStringList": { "base": null, "refs": { @@ -1325,6 +1557,13 @@ "UpdateBotRequest$Disabled": "When true, stops the specified bot from running in your account.
" } }, + "NumberSelectionBehavior": { + "base": null, + "refs": { + "CreateProxySessionRequest$NumberSelectionBehavior": "The preference for proxy phone number reuse, or stickiness, between the same participants across sessions.
", + "ProxySession$NumberSelectionBehavior": "The preference for proxy phone number reuse, or stickiness, between the same participants across sessions.
" + } + }, "OrderedPhoneNumber": { "base": "A phone number for which an order has been placed.
", "refs": { @@ -1381,6 +1620,24 @@ "OriginationRoute$Weight": "The weight associated with the host. If hosts are equal in priority, calls are distributed among them based on their relative weight.
" } }, + "Participant": { + "base": "The phone number and proxy phone number for a participant in an Amazon Chime Voice Connector proxy session.
", + "refs": { + "Participants$member": null + } + }, + "ParticipantPhoneNumberList": { + "base": null, + "refs": { + "CreateProxySessionRequest$ParticipantPhoneNumbers": "The participant phone numbers.
" + } + }, + "Participants": { + "base": null, + "refs": { + "ProxySession$Participants": "The proxy session participants.
" + } + }, "PhoneNumber": { "base": "A phone number used for Amazon Chime Business Calling or an Amazon Chime Voice Connector.
", "refs": { @@ -1494,6 +1751,14 @@ "OriginationRoute$Port": "The designated origination route port. Defaults to 5060.
" } }, + "PositiveInteger": { + "base": null, + "refs": { + "CreateProxySessionRequest$ExpiryMinutes": "The number of minutes allowed for the proxy session.
", + "ProxySession$ExpiryMinutes": "The number of minutes allowed for the proxy session.
", + "UpdateProxySessionRequest$ExpiryMinutes": "The number of minutes allowed for the proxy session.
" + } + }, "ProfileServiceMaxResults": { "base": null, "refs": { @@ -1501,6 +1766,41 @@ "ListUsersRequest$MaxResults": "The maximum number of results to return in a single call. Defaults to 100.
" } }, + "Proxy": { + "base": "The proxy configuration for an Amazon Chime Voice Connector.
", + "refs": { + "GetVoiceConnectorProxyResponse$Proxy": "The proxy configuration details.
", + "PutVoiceConnectorProxyResponse$Proxy": "The proxy configuration details.
" + } + }, + "ProxySession": { + "base": "The proxy session for an Amazon Chime Voice Connector.
", + "refs": { + "CreateProxySessionResponse$ProxySession": "The proxy session details.
", + "GetProxySessionResponse$ProxySession": "The proxy session details.
", + "ProxySessions$member": null, + "UpdateProxySessionResponse$ProxySession": "The proxy session details.
" + } + }, + "ProxySessionNameString": { + "base": null, + "refs": { + "CreateProxySessionRequest$Name": "The name of the proxy session.
" + } + }, + "ProxySessionStatus": { + "base": null, + "refs": { + "ListProxySessionsRequest$Status": "The proxy session status.
", + "ProxySession$Status": "The status of the proxy session.
" + } + }, + "ProxySessions": { + "base": null, + "refs": { + "ListProxySessionsResponse$ProxySessions": "The proxy session details.
" + } + }, "PutEventsConfigurationRequest": { "base": null, "refs": { @@ -1531,6 +1831,16 @@ "refs": { } }, + "PutVoiceConnectorProxyRequest": { + "base": null, + "refs": { + } + }, + "PutVoiceConnectorProxyResponse": { + "base": null, + "refs": { + } + }, "PutVoiceConnectorStreamingConfigurationRequest": { "base": null, "refs": { @@ -1605,6 +1915,7 @@ "ListMeetingsRequest$MaxResults": "The maximum number of results to return in a single call.
", "ListPhoneNumberOrdersRequest$MaxResults": "The maximum number of results to return in a single call.
", "ListPhoneNumbersRequest$MaxResults": "The maximum number of results to return in a single call.
", + "ListProxySessionsRequest$MaxResults": "The maximum number of results to return in a single call.
", "ListRoomMembershipsRequest$MaxResults": "The maximum number of results to return in a single call.
", "ListRoomsRequest$MaxResults": "The maximum number of results to return in a single call.
", "ListVoiceConnectorGroupsRequest$MaxResults": "The maximum number of results to return in a single call.
", @@ -1804,12 +2115,72 @@ "VoiceConnectorSettings$CdrBucket": "The Amazon S3 bucket designated for call detail record storage.
" } }, + "String128": { + "base": null, + "refs": { + "ProxySession$Name": "The name of the proxy session.
" + } + }, "StringList": { "base": null, "refs": { + "Proxy$PhoneNumberCountries": "The countries for proxy phone numbers to be selected from.
", "Termination$CidrAllowedList": "The IP addresses allowed to make calls, in CIDR format. Required.
" } }, + "Tag": { + "base": "Describes a tag applied to a resource.
", + "refs": { + "AttendeeTagList$member": null, + "MeetingTagList$member": null, + "TagList$member": null + } + }, + "TagAttendeeRequest": { + "base": null, + "refs": { + } + }, + "TagKey": { + "base": null, + "refs": { + "AttendeeTagKeyList$member": null, + "MeetingTagKeyList$member": null, + "Tag$Key": "The key of the tag.
", + "TagKeyList$member": null + } + }, + "TagKeyList": { + "base": null, + "refs": { + "UntagResourceRequest$TagKeys": "The tag keys.
" + } + }, + "TagList": { + "base": null, + "refs": { + "ListAttendeeTagsResponse$Tags": "A list of tag key-value pairs.
", + "ListMeetingTagsResponse$Tags": "A list of tag key-value pairs.
", + "ListTagsForResourceResponse$Tags": "A list of tag-key value pairs.
", + "TagResourceRequest$Tags": "The tag key-value pairs.
" + } + }, + "TagMeetingRequest": { + "base": null, + "refs": { + } + }, + "TagResourceRequest": { + "base": null, + "refs": { + } + }, + "TagValue": { + "base": null, + "refs": { + "Tag$Value": "The value of the tag.
" + } + }, "TelephonySettings": { "base": "Settings that allow management of telephony permissions for an Amazon Chime user, such as inbound and outbound calling and text messaging.
", "refs": { @@ -1851,6 +2222,21 @@ "refs": { } }, + "UntagAttendeeRequest": { + "base": null, + "refs": { + } + }, + "UntagMeetingRequest": { + "base": null, + "refs": { + } + }, + "UntagResourceRequest": { + "base": null, + "refs": { + } + }, "UpdateAccountRequest": { "base": null, "refs": { @@ -1913,6 +2299,16 @@ "refs": { } }, + "UpdateProxySessionRequest": { + "base": null, + "refs": { + } + }, + "UpdateProxySessionResponse": { + "base": null, + "refs": { + } + }, "UpdateRoomMembershipRequest": { "base": null, "refs": { diff --git a/models/apis/chime/2018-05-01/paginators-1.json b/models/apis/chime/2018-05-01/paginators-1.json index 7d55169a037..6727698813d 100644 --- a/models/apis/chime/2018-05-01/paginators-1.json +++ b/models/apis/chime/2018-05-01/paginators-1.json @@ -30,6 +30,11 @@ "output_token": "NextToken", "limit_key": "MaxResults" }, + "ListProxySessions": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListRoomMemberships": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/cloudformation/2010-05-15/docs-2.json b/models/apis/cloudformation/2010-05-15/docs-2.json index f1601826504..690ddbdb20f 100644 --- a/models/apis/cloudformation/2010-05-15/docs-2.json +++ b/models/apis/cloudformation/2010-05-15/docs-2.json @@ -6,18 +6,18 @@ "ContinueUpdateRollback": "For a specified stack that is in the UPDATE_ROLLBACK_FAILED
state, continues rolling it back to the UPDATE_ROLLBACK_COMPLETE
state. Depending on the cause of the failure, you can manually fix the error and continue the rollback. By continuing the rollback, you can return your stack to a working state (the UPDATE_ROLLBACK_COMPLETE
state), and then try to update the stack again.
A stack goes into the UPDATE_ROLLBACK_FAILED
state when AWS CloudFormation cannot roll back all changes after a failed stack update. For example, you might have a stack that is rolling back to an old database instance that was deleted outside of AWS CloudFormation. Because AWS CloudFormation doesn't know the database was deleted, it assumes that the database instance still exists and attempts to roll back to it, causing the update rollback to fail.
", "CreateChangeSet": "Creates a list of changes that will be applied to a stack so that you can review the changes before executing them. You can create a change set for a stack that doesn't exist or an existing stack. If you create a change set for a stack that doesn't exist, the change set shows all of the resources that AWS CloudFormation will create. If you create a change set for an existing stack, AWS CloudFormation compares the stack's information with the information that you submit in the change set and lists the differences. Use change sets to understand which resources AWS CloudFormation will create or change, and how it will change resources in an existing stack, before you create or update a stack.
To create a change set for a stack that doesn't exist, for the ChangeSetType
parameter, specify CREATE
. To create a change set for an existing stack, specify UPDATE
for the ChangeSetType
parameter. To create a change set for an import operation, specify IMPORT
for the ChangeSetType
parameter. After the CreateChangeSet
call successfully completes, AWS CloudFormation starts creating the change set. To check the status of the change set or to review it, use the DescribeChangeSet action.
When you are satisfied with the changes the change set will make, execute the change set by using the ExecuteChangeSet action. AWS CloudFormation doesn't make changes until you execute the change set.
", "CreateStack": "Creates a stack as specified in the template. After the call completes successfully, the stack creation starts. You can check the status of the stack via the DescribeStacks API.
", - "CreateStackInstances": "Creates stack instances for the specified accounts, within the specified regions. A stack instance refers to a stack in a specific account and region. You must specify at least one value for either Accounts
or DeploymentTargets
, and you must specify at least one value for Regions
.
", + "CreateStackInstances": "Creates stack instances for the specified accounts, within the specified Regions. A stack instance refers to a stack in a specific account and Region. You must specify at least one value for either Accounts
or DeploymentTargets
, and you must specify at least one value for Regions
.
", "CreateStackSet": "Creates a stack set.
", "DeleteChangeSet": "Deletes the specified change set. Deleting change sets ensures that no one executes the wrong change set.
If the call successfully completes, AWS CloudFormation successfully deleted the change set.
", "DeleteStack": "Deletes a specified stack. Once the call completes successfully, stack deletion starts. Deleted stacks do not show up in the DescribeStacks API if the deletion has been completed successfully.
", - "DeleteStackInstances": "Deletes stack instances for the specified accounts, in the specified regions.
", + "DeleteStackInstances": "Deletes stack instances for the specified accounts, in the specified Regions.
", "DeleteStackSet": "Deletes a stack set. Before you can delete a stack set, all of its member stack instances must be deleted. For more information about how to do this, see DeleteStackInstances.
", "DeregisterType": "Removes a type or type version from active use in the CloudFormation registry. If a type or type version is deregistered, it cannot be used in CloudFormation operations.
To deregister a type, you must individually deregister all registered versions of that type. If a type has only a single registered version, deregistering that version results in the type itself being deregistered.
You cannot deregister the default version of a type, unless it is the only registered version of that type, in which case the type itself is deregistered as well.
", "DescribeAccountLimits": "Retrieves your account's AWS CloudFormation limits, such as the maximum number of stacks that you can create in your account. For more information about account limits, see AWS CloudFormation Limits in the AWS CloudFormation User Guide.
", "DescribeChangeSet": "Returns the inputs for the change set and a list of changes that AWS CloudFormation will make if you execute the change set. For more information, see Updating Stacks Using Change Sets in the AWS CloudFormation User Guide.
", "DescribeStackDriftDetectionStatus": "Returns information about a stack drift detection operation. A stack drift detection operation detects whether a stack's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. A stack is considered to have drifted if one or more of its resources have drifted. For more information on stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackDrift to initiate a stack drift detection operation. DetectStackDrift
returns a StackDriftDetectionId
you can use to monitor the progress of the operation using DescribeStackDriftDetectionStatus
. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.
", "DescribeStackEvents": "Returns all stack related events for a specified stack in reverse chronological order. For more information about a stack's event history, go to Stacks in the AWS CloudFormation User Guide.
You can list events for stacks that have failed to create or have been deleted by specifying the unique stack identifier (stack ID).
", - "DescribeStackInstance": "Returns the stack instance that's associated with the specified stack set, AWS account, and region.
For a list of stack instances that are associated with a specific stack set, use ListStackInstances.
", + "DescribeStackInstance": "Returns the stack instance that's associated with the specified stack set, AWS account, and Region.
For a list of stack instances that are associated with a specific stack set, use ListStackInstances.
", "DescribeStackResource": "Returns a description of the specified resource in the specified stack.
For deleted stacks, DescribeStackResource returns resource information for up to 90 days after the stack has been deleted.
", "DescribeStackResourceDrifts": "Returns drift information for the resources that have been checked for drift in the specified stack. This includes actual and expected configuration values for resources where AWS CloudFormation detects configuration drift.
For a given stack, there will be one StackResourceDrift
for each stack resource that has been checked for drift. Resources that have not yet been checked for drift are not included. Resources that do not currently support drift detection are not checked, and so not included. For a list of resources that support drift detection, see Resources that Support Drift Detection.
Use DetectStackResourceDrift to detect drift on individual resources, or DetectStackDrift to detect drift on all supported resources for a given stack.
", "DescribeStackResources": "Returns AWS resource descriptions for running and deleted stacks. If StackName
is specified, all the associated resources that are part of the stack are returned. If PhysicalResourceId
is specified, the associated resources of the stack that the resource belongs to are returned.
Only the first 100 resources will be returned. If your stack has more resources than this, you should use ListStackResources
instead.
For deleted stacks, DescribeStackResources
returns resource information for up to 90 days after the stack has been deleted.
You must specify either StackName
or PhysicalResourceId
, but not both. In addition, you can specify LogicalResourceId
to filter the returned result. For more information about resources, the LogicalResourceId
and PhysicalResourceId
, go to the AWS CloudFormation User Guide.
A ValidationError
is returned if you specify both StackName
and PhysicalResourceId
in the same request.
", "GetTemplate": "Returns the template body for a specified stack. You can get the template for running or deleted stacks.
For deleted stacks, GetTemplate returns the template for up to 90 days after the stack has been deleted.
If the template does not exist, a ValidationError
is returned.
", "GetTemplateSummary": "Returns information about a new or existing template. The GetTemplateSummary
action is useful for viewing parameter information, such as default parameter values and parameter types, before you create or update a stack or stack set.
You can use the GetTemplateSummary
action when you submit a template, or you can get template information for a stack set, or a running or deleted stack.
For deleted stacks, GetTemplateSummary
returns the template information for up to 90 days after the stack has been deleted. If the template does not exist, a ValidationError
is returned.
", "ListChangeSets": "Returns the ID and status of each active change set for a stack. For example, AWS CloudFormation lists change sets that are in the CREATE_IN_PROGRESS
or CREATE_PENDING
state.
", - "ListExports": "Lists all exported output values in the account and region in which you call this action. Use this action to see the exported output values that you can import into other stacks. To import values, use the Fn::ImportValue
function.
For more information, see AWS CloudFormation Export Stack Output Values.
", + "ListExports": "Lists all exported output values in the account and Region in which you call this action. Use this action to see the exported output values that you can import into other stacks. To import values, use the Fn::ImportValue
function.
For more information, see AWS CloudFormation Export Stack Output Values.
", "ListImports": "Lists all stacks that are importing an exported output value. To modify or remove an exported output value, first use this action to see which stacks are using it. To see the exported output values in your account, see ListExports.
For more information about importing an exported output value, see the Fn::ImportValue
function.
", - "ListStackInstances": "Returns summary information about stack instances that are associated with the specified stack set. You can filter for stack instances that are associated with a specific AWS account name or region.
", + "ListStackInstances": "Returns summary information about stack instances that are associated with the specified stack set. You can filter for stack instances that are associated with a specific AWS account name or Region.
", "ListStackResources": "Returns descriptions of all resources of the specified stack.
For deleted stacks, ListStackResources returns resource information for up to 90 days after the stack has been deleted.
", "ListStackSetOperationResults": "Returns summary information about the results of a stack set operation.
", "ListStackSetOperations": "Returns summary information about operations performed on a stack set.
", @@ -47,14 +47,14 @@ "ListTypeVersions": "Returns summary information about the versions of a type.
", "ListTypes": "Returns summary information about types that have been registered with CloudFormation.
", "RecordHandlerProgress": "Reports progress of a resource handler to CloudFormation.
Reserved for use by the CloudFormation CLI. Do not use this API in your code.
", - "RegisterType": "Registers a type with the CloudFormation service. Registering a type makes it available for use in CloudFormation templates in your AWS account, and includes:
Validating the resource schema
Determining which handlers have been specified for the resource
Making the resource type available for use in your account
For more information on how to develop types and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
Once you have initiated a registration request using RegisterType
, you can use DescribeTypeRegistration
to monitor the progress of the registration request.
", + "RegisterType": "Registers a type with the CloudFormation service. Registering a type makes it available for use in CloudFormation templates in your AWS account, and includes:
Validating the resource schema
Determining which handlers have been specified for the resource
Making the resource type available for use in your account
For more information on how to develop types and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
You can have a maximum of 50 resource type versions registered at a time. This maximum is per account and per region. Use DeregisterType to deregister specific resource type versions if necessary.
Once you have initiated a registration request using RegisterType
, you can use DescribeTypeRegistration
to monitor the progress of the registration request.
", "SetStackPolicy": "Sets a stack policy for a specified stack.
", "SetTypeDefaultVersion": "Specify the default version of a type. The default version of a type will be used in CloudFormation operations.
", "SignalResource": "Sends a signal to the specified resource with a success or failure status. You can use the SignalResource API in conjunction with a creation policy or update policy. AWS CloudFormation doesn't proceed with a stack creation or update until resources receive the required number of signals or the timeout period is exceeded. The SignalResource API is useful in cases where you want to send signals from anywhere other than an Amazon EC2 instance.
", "StopStackSetOperation": "Stops an in-progress operation on a stack set and its associated stack instances.
", "UpdateStack": "Updates a stack as specified in the template. After the call completes successfully, the stack update starts. You can check the status of the stack via the DescribeStacks action.
To get a copy of the template for an existing stack, you can use the GetTemplate action.
For more information about creating an update template, updating a stack, and monitoring the progress of the update, see Updating a Stack.
", - "UpdateStackInstances": "Updates the parameter values for stack instances for the specified accounts, within the specified regions. A stack instance refers to a stack in a specific account and region.
You can only update stack instances in regions and accounts where they already exist; to create additional stack instances, use CreateStackInstances.
During stack set updates, any parameters overridden for a stack instance are not updated, but retain their overridden value.
You can only update the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances
.
", - "UpdateStackSet": "Updates the stack set, and associated stack instances in the specified accounts and regions.
Even if the stack set operation created by updating the stack set fails (completely or partially, below or above a specified failure tolerance), the stack set is updated with your changes. Subsequent CreateStackInstances calls on the specified stack set use the updated stack set.
", + "UpdateStackInstances": "Updates the parameter values for stack instances for the specified accounts, within the specified Regions. A stack instance refers to a stack in a specific account and Region.
You can only update stack instances in Regions and accounts where they already exist; to create additional stack instances, use CreateStackInstances.
During stack set updates, any parameters overridden for a stack instance are not updated, but retain their overridden value.
You can only update the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances
.
", + "UpdateStackSet": "Updates the stack set, and associated stack instances in the specified accounts and Regions.
Even if the stack set operation created by updating the stack set fails (completely or partially, below or above a specified failure tolerance), the stack set is updated with your changes. Subsequent CreateStackInstances calls on the specified stack set use the updated stack set.
", "UpdateTerminationProtection": "Updates termination protection for the specified stack. If a user attempts to delete a stack with termination protection enabled, the operation fails and the stack remains unchanged. For more information, see Protecting a Stack From Being Deleted in the AWS CloudFormation User Guide.
For nested stacks, termination protection is set on the root stack and cannot be changed directly on the nested stack.
", "ValidateTemplate": "Validates a specified template. AWS CloudFormation first checks if the template is valid JSON. If it isn't, AWS CloudFormation checks if the template is valid YAML. If both these checks fail, AWS CloudFormation returns a template validation error.
" }, @@ -65,13 +65,13 @@ "AccountList$member": null, "DescribeStackInstanceInput$StackInstanceAccount": "The ID of an AWS account that's associated with this stack instance.
", "ListStackInstancesInput$StackInstanceAccount": "The name of the AWS account that you want to list stack instances for.
", - "StackInstance$Account": "[Self-managed permissions] The name of the AWS account that the stack instance is associated with.
", - "StackInstanceSummary$Account": "[Self-managed permissions] The name of the AWS account that the stack instance is associated with.
", - "StackSetOperationResultSummary$Account": "[Self-managed permissions] The name of the AWS account for this operation result.
" + "StackInstance$Account": "[Self-managed
permissions] The name of the AWS account that the stack instance is associated with.
[Self-managed
permissions] The name of the AWS account that the stack instance is associated with.
[Self-managed
permissions] The name of the AWS account for this operation result.
Structure that contains the results of the account gate function which AWS CloudFormation invokes, if present, before proceeding with a stack set operation in an account and region.
For each account and region, AWS CloudFormation lets you specify a Lambda function that encapsulates any requirements that must be met before CloudFormation can proceed with a stack set operation in that account and region. CloudFormation invokes the function each time a stack set operation is requested for that account and region; if the function returns FAILED
, CloudFormation cancels the operation in that account and region, and sets the stack set operation result status for that account and region to FAILED
.
For more information, see Configuring a target account gate.
", + "base": "Structure that contains the results of the account gate function which AWS CloudFormation invokes, if present, before proceeding with a stack set operation in an account and Region.
For each account and Region, AWS CloudFormation lets you specify a Lambda function that encapsulates any requirements that must be met before CloudFormation can proceed with a stack set operation in that account and Region. CloudFormation invokes the function each time a stack set operation is requested for that account and Region; if the function returns FAILED
, CloudFormation cancels the operation in that account and Region, and sets the stack set operation result status for that account and Region to FAILED
.
For more information, see Configuring a target account gate.
", "refs": { "StackSetOperationResultSummary$AccountGateResult": "The results of the account gate function AWS CloudFormation invokes, if present, before proceeding with stack set operations in an account
" } @@ -79,13 +79,13 @@ "AccountGateStatus": { "base": null, "refs": { - "AccountGateResult$Status": "The status of the account gate function.
SUCCEEDED: The account gate function has determined that the account and region passes any requirements for a stack set operation to occur. AWS CloudFormation proceeds with the stack operation in that account and region.
FAILED: The account gate function has determined that the account and region does not meet the requirements for a stack set operation to occur. AWS CloudFormation cancels the stack set operation in that account and region, and sets the stack set operation result status for that account and region to FAILED.
SKIPPED: AWS CloudFormation has skipped calling the account gate function for this account and region, for one of the following reasons:
An account gate function has not been specified for the account and region. AWS CloudFormation proceeds with the stack set operation in this account and region.
The AWSCloudFormationStackSetExecutionRole of the stack set administration account lacks permissions to invoke the function. AWS CloudFormation proceeds with the stack set operation in this account and region.
Either no action is necessary, or no action is possible, on the stack. AWS CloudFormation skips the stack set operation in this account and region.
", + "AccountGateResult$Status": "The status of the account gate function.
SUCCEEDED: The account gate function has determined that the account and Region passes any requirements for a stack set operation to occur. AWS CloudFormation proceeds with the stack operation in that account and Region.
FAILED: The account gate function has determined that the account and Region does not meet the requirements for a stack set operation to occur. AWS CloudFormation cancels the stack set operation in that account and Region, and sets the stack set operation result status for that account and Region to FAILED.
SKIPPED: AWS CloudFormation has skipped calling the account gate function for this account and Region, for one of the following reasons:
An account gate function has not been specified for the account and Region. AWS CloudFormation proceeds with the stack set operation in this account and Region.
The AWSCloudFormationStackSetExecutionRole of the stack set administration account lacks permissions to invoke the function. AWS CloudFormation proceeds with the stack set operation in this account and Region.
Either no action is necessary, or no action is possible, on the stack. AWS CloudFormation skips the stack set operation in this account and Region.
The reason for the account gate status assigned to this account and region for the stack set operation.
" + "AccountGateResult$StatusReason": "The reason for the account gate status assigned to this account and Region for the stack set operation.
" } }, "AccountLimit": { @@ -103,11 +103,11 @@ "AccountList": { "base": null, "refs": { - "CreateStackInstancesInput$Accounts": "[Self-managed permissions] The names of one or more AWS accounts that you want to create stack instances in the specified region(s) for.
You can specify Accounts or DeploymentTargets, but not both.
[Self-managed permissions] The names of the AWS accounts that you want to delete stack instances for.
You can specify Accounts or DeploymentTargets, but not both.
[Self-managed permissions] The names of one or more AWS accounts that you want to create stack instances in the specified Region(s) for.
You can specify Accounts or DeploymentTargets, but not both.
[Self-managed permissions] The names of the AWS accounts that you want to delete stack instances for.
You can specify Accounts or DeploymentTargets, but not both.
The names of one or more AWS accounts for which you want to deploy stack set updates.
", - "UpdateStackInstancesInput$Accounts": "[Self-managed permissions] The names of one or more AWS accounts for which you want to update parameter values for stack instances. The overridden parameter values will be applied to all stack instances in the specified accounts and regions.
You can specify Accounts or DeploymentTargets, but not both.
[Self-managed permissions] The accounts in which to update associated stack instances. If you specify accounts, you must also specify the regions in which to update stack set instances.
To update all the stack instances associated with this stack set, do not specify the Accounts or Regions properties.
If the stack set update includes changes to the template (that is, if the TemplateBody or TemplateURL properties are specified), or the Parameters property, AWS CloudFormation marks all stack instances with a status of OUTDATED prior to updating the stack instances in the specified accounts and regions. If the stack set update does not include changes to the template or parameters, AWS CloudFormation updates the stack instances in the specified accounts and regions, while leaving all other stack instances with their existing stack instance status.
[Self-managed permissions] The names of one or more AWS accounts for which you want to update parameter values for stack instances. The overridden parameter values will be applied to all stack instances in the specified accounts and Regions.
You can specify Accounts or DeploymentTargets, but not both.
[Self-managed permissions] The accounts in which to update associated stack instances. If you specify accounts, you must also specify the Regions in which to update stack set instances.
To update all the stack instances associated with this stack set, do not specify the Accounts or Regions properties.
If the stack set update includes changes to the template (that is, if the TemplateBody or TemplateURL properties are specified), or the Parameters property, AWS CloudFormation marks all stack instances with a status of OUTDATED prior to updating the stack instances in the specified accounts and Regions. If the stack set update does not include changes to the template or parameters, AWS CloudFormation updates the stack instances in the specified accounts and Regions, while leaving all other stack instances with their existing stack instance status.
[Service-managed permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).
Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to the target organization or organizational unit (OU). Specify only if PermissionModel is SERVICE_MANAGED.
If you specify AutoDeployment, do not specify DeploymentTargets or Regions.
Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to the target organization or organizational unit (OU). Specify only if PermissionModel is SERVICE_MANAGED.
[Service-managed permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).
[Service-managed permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organizational unit (OU).
[Service-managed permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).
If you specify AutoDeployment, do not specify DeploymentTargets or Regions.
[Service-managed permissions] The AWS Organizations accounts to which StackSets deploys.
For update operations, you can specify either Accounts or OrganizationalUnitIds. For create and delete operations, specify OrganizationalUnitIds.
[Service-managed permissions] The AWS Organizations accounts to which StackSets deploys. StackSets does not deploy stack instances to the organization master account, even if the master account is in your organization or in an OU in your organization.
For update operations, you can specify either Accounts or OrganizationalUnitIds. For create and delete operations, specify OrganizationalUnitIds.
[Service-managed permissions] The AWS Organizations accounts for which to create stack instances in the specified Regions.
You can specify Accounts or DeploymentTargets, but not both.
[Service-managed permissions] The AWS Organizations accounts from which to delete stack instances.
You can specify Accounts or DeploymentTargets, but not both.
The number of accounts, per region, for which this operation can fail before AWS CloudFormation stops the operation in that region. If the operation is stopped in a region, AWS CloudFormation doesn't attempt the operation in any subsequent regions.
Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage (but not both).
The number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region. If the operation is stopped in a Region, AWS CloudFormation doesn't attempt the operation in any subsequent Regions.
Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage (but not both).
The percentage of accounts, per region, for which this stack operation can fail before AWS CloudFormation stops the operation in that region. If the operation is stopped in a region, AWS CloudFormation doesn't attempt the operation in any subsequent regions.
When calculating the number of accounts based on the specified percentage, AWS CloudFormation rounds down to the next whole number.
Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage, but not both.
The percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region. If the operation is stopped in a Region, AWS CloudFormation doesn't attempt the operation in any subsequent Regions.
When calculating the number of accounts based on the specified percentage, AWS CloudFormation rounds down to the next whole number.
Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage, but not both.
[Service-managed permissions] The organization root ID or organizational unit (OU) ID that the stack instance is associated with.
[Service-managed permissions] The organization root ID or organizational unit (OU) ID that the stack instance is associated with.
[Service-managed permissions] The organization root ID or organizational unit (OU) ID for this operation result.
Reserved for internal use. No data returned.
", + "StackInstanceSummary$OrganizationalUnitId": "Reserved for internal use. No data returned.
", + "StackSetOperationResultSummary$OrganizationalUnitId": "Reserved for internal use. No data returned.
" } }, "OrganizationalUnitIdList": { "base": null, "refs": { - "DeploymentTargets$OrganizationalUnitIds": "The organization root ID or organizational unit (OUs) IDs to which StackSets deploys.
", - "StackSet$OrganizationalUnitIds": "[Service-managed permissions] The organization root ID or organizational unit (OUs) IDs to which stacks in your stack set have been deployed.
The organization root ID or organizational unit (OU) IDs to which StackSets deploys.
", + "StackSet$OrganizationalUnitIds": "Reserved for internal use. No data returned.
" } }, "Output": { @@ -1274,7 +1274,7 @@ "refs": { "CreateChangeSetInput$Parameters": "A list of Parameter
structures that specify input parameters for the change set. For more information, see the Parameter data type.
A list of Parameter
structures that specify input parameters for the stack. For more information, see the Parameter data type.
A list of stack set parameters whose values you want to override in the selected stack instances.
Any overridden parameter values will be applied to all stack instances in the specified accounts and regions. When specifying parameters and their values, be aware of how AWS CloudFormation sets parameter values during stack instance operations:
To override the current value for a parameter, include the parameter and specify its value.
To leave a parameter set to its present value, you can do one of the following:
Do not include the parameter in the list.
Include the parameter and specify UsePreviousValue as true. (You cannot specify both a value and set UsePreviousValue to true.)
To set all overridden parameter back to the values specified in the stack set, specify a parameter list but do not include any parameters.
To leave all parameters set to their present values, do not specify this property at all.
During stack set updates, any parameter values overridden for a stack instance are not updated, but retain their overridden value.
You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template.
", + "CreateStackInstancesInput$ParameterOverrides": "A list of stack set parameters whose values you want to override in the selected stack instances.
Any overridden parameter values will be applied to all stack instances in the specified accounts and Regions. When specifying parameters and their values, be aware of how AWS CloudFormation sets parameter values during stack instance operations:
To override the current value for a parameter, include the parameter and specify its value.
To leave a parameter set to its present value, you can do one of the following:
Do not include the parameter in the list.
Include the parameter and specify UsePreviousValue as true. (You cannot specify both a value and set UsePreviousValue to true.)
To set all overridden parameter back to the values specified in the stack set, specify a parameter list but do not include any parameters.
To leave all parameters set to their present values, do not specify this property at all.
During stack set updates, any parameter values overridden for a stack instance are not updated, but retain their overridden value.
You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template.
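To make the override semantics above concrete, here is a hedged Go sketch of CreateStackInstances that overrides one parameter and pins another to its previous value. The stack set name, account, Regions, and parameter keys are placeholders, and the v0.x Request/Send calling pattern is assumed.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := cloudformation.New(cfg)

	req := svc.CreateStackInstancesRequest(&cloudformation.CreateStackInstancesInput{
		StackSetName: aws.String("example-stack-set"),
		Accounts:     []string{"111111111111"},
		Regions:      []string{"us-west-2", "eu-west-1"},
		ParameterOverrides: []cloudformation.Parameter{
			// Override the current value for this parameter in the selected instances.
			{ParameterKey: aws.String("InstanceType"), ParameterValue: aws.String("t3.small")},
			// Keep this parameter at its present value; do not also set ParameterValue.
			{ParameterKey: aws.String("Environment"), UsePreviousValue: aws.Bool(true)},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```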
", "CreateStackSetInput$Parameters": "The input parameters for the stack set template.
", "DescribeChangeSetOutput$Parameters": "A list of Parameter
structures that describes the input parameters and their values used to create the change set. For more information, see the Parameter data type.
A list of Parameter
structures that specify input parameters.
A list of parameters from the stack set template whose values have been overridden in this stack instance.
", "StackSet$Parameters": "A list of input parameters for a stack set.
", "UpdateStackInput$Parameters": "A list of Parameter
structures that specify input parameters for the stack. For more information, see the Parameter data type.
A list of input parameters whose values you want to update for the specified stack instances.
Any overridden parameter values will be applied to all stack instances in the specified accounts and regions. When specifying parameters and their values, be aware of how AWS CloudFormation sets parameter values during stack instance update operations:
To override the current value for a parameter, include the parameter and specify its value.
To leave a parameter set to its present value, you can do one of the following:
Do not include the parameter in the list.
Include the parameter and specify UsePreviousValue as true. (You cannot specify both a value and set UsePreviousValue to true.)
To set all overridden parameter back to the values specified in the stack set, specify a parameter list but do not include any parameters.
To leave all parameters set to their present values, do not specify this property at all.
During stack set updates, any parameter values overridden for a stack instance are not updated, but retain their overridden value.
You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet
to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances
.
A list of input parameters whose values you want to update for the specified stack instances.
Any overridden parameter values will be applied to all stack instances in the specified accounts and Regions. When specifying parameters and their values, be aware of how AWS CloudFormation sets parameter values during stack instance update operations:
To override the current value for a parameter, include the parameter and specify its value.
To leave a parameter set to its present value, you can do one of the following:
Do not include the parameter in the list.
Include the parameter and specify UsePreviousValue as true. (You cannot specify both a value and set UsePreviousValue to true.)
To set all overridden parameter back to the values specified in the stack set, specify a parameter list but do not include any parameters.
To leave all parameters set to their present values, do not specify this property at all.
During stack set updates, any parameter values overridden for a stack instance are not updated, but retain their overridden value.
You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet
to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances
.
A list of input parameters for the stack set template.
" } }, @@ -1393,22 +1393,22 @@ "Region": { "base": null, "refs": { - "DescribeStackInstanceInput$StackInstanceRegion": "The name of a region that's associated with this stack instance.
", - "ListStackInstancesInput$StackInstanceRegion": "The name of the region where you want to list stack instances.
", + "DescribeStackInstanceInput$StackInstanceRegion": "The name of a Region that's associated with this stack instance.
", + "ListStackInstancesInput$StackInstanceRegion": "The name of the Region where you want to list stack instances.
", "RegionList$member": null, - "StackInstance$Region": "The name of the AWS region that the stack instance is associated with.
", - "StackInstanceSummary$Region": "The name of the AWS region that the stack instance is associated with.
", - "StackSetOperationResultSummary$Region": "The name of the AWS region for this operation result.
" + "StackInstance$Region": "The name of the AWS Region that the stack instance is associated with.
", + "StackInstanceSummary$Region": "The name of the AWS Region that the stack instance is associated with.
", + "StackSetOperationResultSummary$Region": "The name of the AWS Region for this operation result.
" } }, "RegionList": { "base": null, "refs": { - "CreateStackInstancesInput$Regions": "The names of one or more regions where you want to create stack instances using the specified AWS account(s).
", - "DeleteStackInstancesInput$Regions": "The regions where you want to delete stack set instances.
", - "StackSetOperationPreferences$RegionOrder": "The order of the regions in where you want to perform the stack operation.
", - "UpdateStackInstancesInput$Regions": "The names of one or more regions in which you want to update parameter values for stack instances. The overridden parameter values will be applied to all stack instances in the specified accounts and regions.
", - "UpdateStackSetInput$Regions": "The regions in which to update associated stack instances. If you specify regions, you must also specify accounts in which to update stack set instances.
To update all the stack instances associated with this stack set, do not specify the Accounts
or Regions
properties.
If the stack set update includes changes to the template (that is, if the TemplateBody
or TemplateURL
properties are specified), or the Parameters
property, AWS CloudFormation marks all stack instances with a status of OUTDATED
prior to updating the stack instances in the specified accounts and regions. If the stack set update does not include changes to the template or parameters, AWS CloudFormation updates the stack instances in the specified accounts and regions, while leaving all other stack instances with their existing stack instance status.
The names of one or more Regions where you want to create stack instances using the specified AWS account(s).
", + "DeleteStackInstancesInput$Regions": "The Regions where you want to delete stack set instances.
", + "StackSetOperationPreferences$RegionOrder": "The order of the Regions in where you want to perform the stack operation.
", + "UpdateStackInstancesInput$Regions": "The names of one or more Regions in which you want to update parameter values for stack instances. The overridden parameter values will be applied to all stack instances in the specified accounts and Regions.
", + "UpdateStackSetInput$Regions": "The Regions in which to update associated stack instances. If you specify Regions, you must also specify accounts in which to update stack set instances.
To update all the stack instances associated with this stack set, do not specify the Accounts
or Regions
properties.
If the stack set update includes changes to the template (that is, if the TemplateBody
or TemplateURL
properties are specified), or the Parameters
property, AWS CloudFormation marks all stack instances with a status of OUTDATED
prior to updating the stack instances in the specified accounts and Regions. If the stack set update does not include changes to the template or parameters, AWS CloudFormation updates the stack instances in the specified accounts and Regions, while leaving all other stack instances with their existing stack instance status.
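The Regions and Accounts behaviour described above pairs naturally with the operation preferences (failure tolerance, Region order) documented earlier. A minimal, hedged Go sketch follows; it reuses the previous template, confines the update to explicit accounts and Regions, and assumes the v0.x Request/Send calling pattern. All identifiers are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := cloudformation.New(cfg)

	req := svc.UpdateStackSetRequest(&cloudformation.UpdateStackSetInput{
		StackSetName:        aws.String("example-stack-set"),
		UsePreviousTemplate: aws.Bool(true),
		// Limit the update to these accounts and Regions; omit both to update every instance.
		Accounts: []string{"111111111111", "222222222222"},
		Regions:  []string{"us-east-1", "us-west-2"},
		OperationPreferences: &cloudformation.StackSetOperationPreferences{
			RegionOrder: []string{"us-east-1", "us-west-2"},
			// Specify either FailureToleranceCount or FailureTolerancePercentage, not both.
			FailureToleranceCount: aws.Int64(1),
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```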
An AWS CloudFormation stack, in a specific account and region, that's part of a stack set operation. A stack instance is a reference to an attempted or actual stack in a given account within a given region. A stack instance can exist without a stack—for example, if the stack couldn't be created for some reason. A stack instance is associated with only one stack set. Each stack instance contains the ID of its associated stack set, as well as the ID of the actual stack and the stack status.
", + "base": "An AWS CloudFormation stack, in a specific account and Region, that's part of a stack set operation. A stack instance is a reference to an attempted or actual stack in a given account within a given Region. A stack instance can exist without a stack—for example, if the stack couldn't be created for some reason. A stack instance is associated with only one stack set. Each stack instance contains the ID of its associated stack set, as well as the ID of the actual stack and the stack status.
", "refs": { "DescribeStackInstanceOutput$StackInstance": "The stack instance that matches the specified request parameters.
" } @@ -1853,7 +1853,7 @@ "refs": { "CancelUpdateStackInput$StackName": "The name or the unique stack ID that is associated with the stack.
", "ChangeSetSummary$StackName": "The name of the stack with which the change set is associated.
", - "CreateStackInput$StackName": "The name that is associated with the stack. The name must be unique in the region in which you are creating the stack.
A stack name can contain only alphanumeric characters (case sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
", + "CreateStackInput$StackName": "The name that is associated with the stack. The name must be unique in the Region in which you are creating the stack.
A stack name can contain only alphanumeric characters (case sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
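The naming rule above is simple enough to check client-side before calling the service. A small standard-library sketch follows; the regular expression is my own rendering of the rule quoted above, not something published by the API, and the service remains authoritative.

```go
package main

import (
	"fmt"
	"regexp"
)

// Stack names: alphanumeric characters and hyphens only, starting with an
// alphabetic character, and no longer than 128 characters in total.
var stackNameRE = regexp.MustCompile(`^[A-Za-z][A-Za-z0-9-]{0,127}$`)

func main() {
	for _, name := range []string{"prod-web-stack", "1-bad-name", "has_underscore"} {
		fmt.Printf("%-18q valid=%v\n", name, stackNameRE.MatchString(name))
	}
}
```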
The name or the unique stack ID that is associated with the stack.
", "DescribeChangeSetOutput$StackName": "The name of the stack that is associated with the change set.
", "DescribeStackEventsInput$StackName": "The name or the unique stack ID that is associated with the stack, which are not always interchangeable:
Running stacks: You can specify either the stack's name or its unique stack ID.
Deleted stacks: You must specify the unique stack ID.
Default: There is no default value.
", @@ -1908,15 +1908,15 @@ "StackPolicyDuringUpdateURL": { "base": null, "refs": { - "UpdateStackInput$StackPolicyDuringUpdateURL": "Location of a file containing the temporary overriding stack policy. The URL must point to a policy (max size: 16KB) located in an S3 bucket in the same region as the stack. You can specify either the StackPolicyDuringUpdateBody
or the StackPolicyDuringUpdateURL
parameter, but not both.
If you want to update protected resources, specify a temporary overriding stack policy during this update. If you do not specify a stack policy, the current policy that is associated with the stack will be used.
" + "UpdateStackInput$StackPolicyDuringUpdateURL": "Location of a file containing the temporary overriding stack policy. The URL must point to a policy (max size: 16KB) located in an S3 bucket in the same Region as the stack. You can specify either the StackPolicyDuringUpdateBody
or the StackPolicyDuringUpdateURL
parameter, but not both.
If you want to update protected resources, specify a temporary overriding stack policy during this update. If you do not specify a stack policy, the current policy that is associated with the stack will be used.
" } }, "StackPolicyURL": { "base": null, "refs": { - "CreateStackInput$StackPolicyURL": "Location of a file containing the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
Location of a file containing the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
Location of a file containing the updated stack policy. The URL must point to a policy (max size: 16KB) located in an S3 bucket in the same region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
You might update the stack policy, for example, in order to protect a new resource that you created during a stack update. If you do not specify a stack policy, the current policy that is associated with the stack is unchanged.
" + "CreateStackInput$StackPolicyURL": "Location of a file containing the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same Region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
Location of a file containing the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same Region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
Location of a file containing the updated stack policy. The URL must point to a policy (max size: 16KB) located in an S3 bucket in the same Region as the stack. You can specify either the StackPolicyBody
or the StackPolicyURL
parameter, but not both.
You might update the stack policy, for example, in order to protect a new resource that you created during a stack update. If you do not specify a stack policy, the current policy that is associated with the stack is unchanged.
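As an illustration of the policy-URL variants described above, here is a hedged sketch of SetStackPolicy pointing at a policy document in S3; per the text, the bucket must live in the same Region as the stack. Every name and URL is a placeholder, and the v0.x Request/Send calling pattern is assumed.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := cloudformation.New(cfg)

	// Attach an updated stack policy stored in an S3 bucket in the stack's Region.
	req := svc.SetStackPolicyRequest(&cloudformation.SetStackPolicyInput{
		StackName:      aws.String("example-stack"),
		StackPolicyURL: aws.String("https://example-bucket.s3.us-west-2.amazonaws.com/stack-policy.json"),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```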
" } }, "StackResource": { @@ -1991,7 +1991,7 @@ } }, "StackSet": { - "base": "A structure that contains information about a stack set. A stack set enables you to provision stacks into AWS accounts and across regions by using a single CloudFormation template. In the stack set, you specify the template to use, as well as any parameters and capabilities that the template requires.
", + "base": "A structure that contains information about a stack set. A stack set enables you to provision stacks into AWS accounts and across Regions by using a single CloudFormation template. In the stack set, you specify the template to use, as well as any parameters and capabilities that the template requires.
", "refs": { "DescribeStackSetOutput$StackSet": "The specified stack set.
" } @@ -2036,7 +2036,7 @@ "base": null, "refs": { "CreateStackInstancesInput$StackSetName": "The name or unique ID of the stack set that you want to create stack instances from.
", - "CreateStackSetInput$StackSetName": "The name to associate with the stack set. The name must be unique in the region where you create your stack set.
A stack name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and can't be longer than 128 characters.
", + "CreateStackSetInput$StackSetName": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.
A stack name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and can't be longer than 128 characters.
The name or unique ID of the stack set that you want to delete stack instances for.
", "DeleteStackSetInput$StackSetName": "The name or unique ID of the stack set that you're deleting. You can obtain this value by running ListStackSets.
", "DescribeStackInstanceInput$StackSetName": "The name or the unique stack ID of the stack set that you want to get stack instance information for.
", @@ -2096,17 +2096,17 @@ "StackSetOperationResultStatus": { "base": null, "refs": { - "StackSetOperationResultSummary$Status": "The result status of the stack set operation for the given account in the given region.
CANCELLED: The operation in the specified account and region has been cancelled. This is either because a user has stopped the stack set operation, or because the failure tolerance of the stack set operation has been exceeded.
FAILED: The operation in the specified account and region failed.
If the stack set operation fails in enough accounts within a region, the failure tolerance for the stack set operation as a whole might be exceeded.
RUNNING: The operation in the specified account and region is currently in progress.
PENDING: The operation in the specified account and region has yet to start.
SUCCEEDED: The operation in the specified account and region completed successfully.
", + "StackSetOperationResultSummary$Status": "The result status of the stack set operation for the given account in the given Region.
CANCELLED: The operation in the specified account and Region has been cancelled. This is either because a user has stopped the stack set operation, or because the failure tolerance of the stack set operation has been exceeded.
FAILED: The operation in the specified account and Region failed.
If the stack set operation fails in enough accounts within a Region, the failure tolerance for the stack set operation as a whole might be exceeded.
RUNNING: The operation in the specified account and Region is currently in progress.
PENDING: The operation in the specified account and Region has yet to start.
SUCCEEDED: The operation in the specified account and Region completed successfully.
A list of StackSetOperationResultSummary structures that contain information about the specified operation results, for accounts and regions that are included in the operation.
A list of StackSetOperationResultSummary structures that contain information about the specified operation results, for accounts and Regions that are included in the operation.
The structure that contains information about a specified operation's results for a given account in a given region.
", + "base": "The structure that contains information about a specified operation's results for a given account in a given Region.
", "refs": { "StackSetOperationResultSummaries$member": null } @@ -2114,8 +2114,8 @@ "StackSetOperationStatus": { "base": null, "refs": { - "StackSetOperation$Status": "The status of the operation.
FAILED
: The operation exceeded the specified failure tolerance. The failure tolerance value that you've set for an operation is applied for each region during stack create and update operations. If the number of failed stacks within a region exceeds the failure tolerance, the status of the operation in the region is set to FAILED
. This in turn sets the status of the operation as a whole to FAILED
, and AWS CloudFormation cancels the operation in any remaining regions.
QUEUED
: [Service-managed permissions] For automatic deployments that require a sequence of operations. The operation is queued to be performed. For more information, see the stack set operation status codes in the AWS CloudFormation User Guide.
RUNNING
: The operation is currently being performed.
STOPPED
: The user has cancelled the operation.
STOPPING
: The operation is in the process of stopping, at user request.
SUCCEEDED
: The operation completed creating or updating all the specified stacks without exceeding the failure tolerance for the operation.
The overall status of the operation.
FAILED
: The operation exceeded the specified failure tolerance. The failure tolerance value that you've set for an operation is applied for each region during stack create and update operations. If the number of failed stacks within a region exceeds the failure tolerance, the status of the operation in the region is set to FAILED
. This in turn sets the status of the operation as a whole to FAILED
, and AWS CloudFormation cancels the operation in any remaining regions.
QUEUED
: [Service-managed permissions] For automatic deployments that require a sequence of operations. The operation is queued to be performed. For more information, see the stack set operation status codes in the AWS CloudFormation User Guide.
RUNNING
: The operation is currently being performed.
STOPPED
: The user has cancelled the operation.
STOPPING
: The operation is in the process of stopping, at user request.
SUCCEEDED
: The operation completed creating or updating all the specified stacks without exceeding the failure tolerance for the operation.
The status of the operation.
FAILED
: The operation exceeded the specified failure tolerance. The failure tolerance value that you've set for an operation is applied for each Region during stack create and update operations. If the number of failed stacks within a Region exceeds the failure tolerance, the status of the operation in the Region is set to FAILED
. This in turn sets the status of the operation as a whole to FAILED
, and AWS CloudFormation cancels the operation in any remaining Regions.
QUEUED
: [Service-managed
permissions] For automatic deployments that require a sequence of operations, the operation is queued to be performed. For more information, see the stack set operation status codes in the AWS CloudFormation User Guide.
RUNNING
: The operation is currently being performed.
STOPPED
: The user has cancelled the operation.
STOPPING
: The operation is in the process of stopping, at user request.
SUCCEEDED
: The operation completed creating or updating all the specified stacks without exceeding the failure tolerance for the operation.
The overall status of the operation.
FAILED
: The operation exceeded the specified failure tolerance. The failure tolerance value that you've set for an operation is applied for each Region during stack create and update operations. If the number of failed stacks within a Region exceeds the failure tolerance, the status of the operation in the Region is set to FAILED
. This in turn sets the status of the operation as a whole to FAILED
, and AWS CloudFormation cancels the operation in any remaining Regions.
QUEUED
: [Service-managed
permissions] For automatic deployments that require a sequence of operations, the operation is queued to be performed. For more information, see the stack set operation status codes in the AWS CloudFormation User Guide.
RUNNING
: The operation is currently being performed.
STOPPED
: The user has cancelled the operation.
STOPPING
: The operation is in the process of stopping, at user request.
SUCCEEDED
: The operation completed creating or updating all the specified stacks without exceeding the failure tolerance for the operation.
When AWS CloudFormation last checked if the resource had drifted from its expected configuration.
", "StackResourceSummary$LastUpdatedTimestamp": "Time the status was updated.
", "StackSetDriftDetectionDetails$LastDriftCheckTimestamp": "Most recent time when CloudFormation performed a drift detection operation on the stack set. This value will be NULL
for any stack set on which drift detection has not yet been performed.
The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested regions, before actually creating the first stacks.
", - "StackSetOperation$EndTimestamp": "The time at which the stack set operation ended, across all accounts and regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or region.
", - "StackSetOperationSummary$CreationTimestamp": "The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested regions, before actually creating the first stacks.
", - "StackSetOperationSummary$EndTimestamp": "The time at which the stack set operation ended, across all accounts and regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or region.
", + "StackSetOperation$CreationTimestamp": "The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested Regions, before actually creating the first stacks.
", + "StackSetOperation$EndTimestamp": "The time at which the stack set operation ended, across all accounts and Regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or Region.
", + "StackSetOperationSummary$CreationTimestamp": "The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested Regions, before actually creating the first stacks.
", + "StackSetOperationSummary$EndTimestamp": "The time at which the stack set operation ended, across all accounts and Regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or Region.
", "StackSetSummary$LastDriftCheckTimestamp": "Most recent time when CloudFormation performed a drift detection operation on the stack set. This value will be NULL
for any stack set on which drift detection has not yet been performed.
When the current default version of the type was registered.
", "TypeVersionSummary$TimeCreated": "When the version was registered.
" diff --git a/models/apis/codeguru-reviewer/2019-09-19/api-2.json b/models/apis/codeguru-reviewer/2019-09-19/api-2.json index 031fced1665..d3c6d6c9eec 100644 --- a/models/apis/codeguru-reviewer/2019-09-19/api-2.json +++ b/models/apis/codeguru-reviewer/2019-09-19/api-2.json @@ -229,7 +229,8 @@ "Name":{ "type":"string", "max":100, - "min":1 + "min":1, + "pattern":"^\\S[\\w.-]*$" }, "Names":{ "type":"list", @@ -253,7 +254,8 @@ "Owner":{ "type":"string", "max":100, - "min":1 + "min":1, + "pattern":"^\\S(.*\\S)?$" }, "Owners":{ "type":"list", diff --git a/models/apis/codeguruprofiler/2019-07-18/api-2.json b/models/apis/codeguruprofiler/2019-07-18/api-2.json index 682d7460532..3009258858e 100644 --- a/models/apis/codeguruprofiler/2019-07-18/api-2.json +++ b/models/apis/codeguruprofiler/2019-07-18/api-2.json @@ -79,6 +79,21 @@ {"shape":"ResourceNotFoundException"} ] }, + "GetPolicy":{ + "name":"GetPolicy", + "http":{ + "method":"GET", + "requestUri":"/profilingGroups/{profilingGroupName}/policy", + "responseCode":200 + }, + "input":{"shape":"GetPolicyRequest"}, + "output":{"shape":"GetPolicyResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"ResourceNotFoundException"} + ] + }, "GetProfile":{ "name":"GetProfile", "http":{ @@ -141,6 +156,41 @@ {"shape":"ResourceNotFoundException"} ] }, + "PutPermission":{ + "name":"PutPermission", + "http":{ + "method":"PUT", + "requestUri":"/profilingGroups/{profilingGroupName}/policy/{actionGroup}", + "responseCode":200 + }, + "input":{"shape":"PutPermissionRequest"}, + "output":{"shape":"PutPermissionResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ConflictException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"ResourceNotFoundException"} + ], + "idempotent":true + }, + "RemovePermission":{ + "name":"RemovePermission", + "http":{ + "method":"DELETE", + "requestUri":"/profilingGroups/{profilingGroupName}/policy/{actionGroup}", + "responseCode":200 + }, + "input":{"shape":"RemovePermissionRequest"}, + "output":{"shape":"RemovePermissionResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ConflictException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"ResourceNotFoundException"} + ] + }, "UpdateProfilingGroup":{ "name":"UpdateProfilingGroup", "http":{ @@ -161,6 +211,10 @@ } }, "shapes":{ + "ActionGroup":{ + "type":"string", + "enum":["agentPermissions"] + }, "AgentConfiguration":{ "type":"structure", "required":[ @@ -304,6 +358,28 @@ "min":1, "pattern":"^[\\w-.:/]+$" }, + "GetPolicyRequest":{ + "type":"structure", + "required":["profilingGroupName"], + "members":{ + "profilingGroupName":{ + "shape":"ProfilingGroupName", + "location":"uri", + "locationName":"profilingGroupName" + } + } + }, + "GetPolicyResponse":{ + "type":"structure", + "required":[ + "policy", + "revisionId" + ], + "members":{ + "policy":{"shape":"String"}, + "revisionId":{"shape":"RevisionId"} + } + }, "GetProfileRequest":{ "type":"structure", "required":["profilingGroupName"], @@ -521,6 +597,13 @@ "members":{ } }, + "Principal":{"type":"string"}, + "Principals":{ + "type":"list", + "member":{"shape":"Principal"}, + "max":50, + "min":1 + }, "ProfileTime":{ "type":"structure", "members":{ @@ -565,6 +648,75 @@ "latestAggregatedProfile":{"shape":"AggregatedProfileTime"} } }, + "PutPermissionRequest":{ + "type":"structure", + "required":[ + "actionGroup", + "principals", + "profilingGroupName" + ], + "members":{ + 
"actionGroup":{ + "shape":"ActionGroup", + "location":"uri", + "locationName":"actionGroup" + }, + "principals":{"shape":"Principals"}, + "profilingGroupName":{ + "shape":"ProfilingGroupName", + "location":"uri", + "locationName":"profilingGroupName" + }, + "revisionId":{"shape":"RevisionId"} + } + }, + "PutPermissionResponse":{ + "type":"structure", + "required":[ + "policy", + "revisionId" + ], + "members":{ + "policy":{"shape":"String"}, + "revisionId":{"shape":"RevisionId"} + } + }, + "RemovePermissionRequest":{ + "type":"structure", + "required":[ + "actionGroup", + "profilingGroupName", + "revisionId" + ], + "members":{ + "actionGroup":{ + "shape":"ActionGroup", + "location":"uri", + "locationName":"actionGroup" + }, + "profilingGroupName":{ + "shape":"ProfilingGroupName", + "location":"uri", + "locationName":"profilingGroupName" + }, + "revisionId":{ + "shape":"RevisionId", + "location":"querystring", + "locationName":"revisionId" + } + } + }, + "RemovePermissionResponse":{ + "type":"structure", + "required":[ + "policy", + "revisionId" + ], + "members":{ + "policy":{"shape":"String"}, + "revisionId":{"shape":"RevisionId"} + } + }, "ResourceNotFoundException":{ "type":"structure", "required":["message"], @@ -577,6 +729,10 @@ }, "exception":true }, + "RevisionId":{ + "type":"string", + "pattern":"[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}" + }, "ServiceQuotaExceededException":{ "type":"structure", "required":["message"], diff --git a/models/apis/codeguruprofiler/2019-07-18/docs-2.json b/models/apis/codeguruprofiler/2019-07-18/docs-2.json index 14aaf74fb2f..b5353145c8e 100644 --- a/models/apis/codeguruprofiler/2019-07-18/docs-2.json +++ b/models/apis/codeguruprofiler/2019-07-18/docs-2.json @@ -6,13 +6,23 @@ "CreateProfilingGroup": "Creates a profiling group.
", "DeleteProfilingGroup": "Deletes a profiling group.
", "DescribeProfilingGroup": "Describes a profiling group.
", + "GetPolicy": "Gets the profiling group policy.
", "GetProfile": "Gets the aggregated profile of a profiling group for the specified time range. If the requested time range does not align with the available aggregated profiles, it is expanded to attain alignment. If aggregated profiles are available only for part of the period requested, the profile is returned from the earliest available to the latest within the requested time range.
For example, if the requested time range is from 00:00 to 00:20 and the available profiles are from 00:15 to 00:25, the returned profile will be from 00:15 to 00:20.
You must specify exactly two of the following parameters: startTime, period, and endTime.
", "ListProfileTimes": "List the start times of the available aggregated profiles of a profiling group for an aggregation period within the specified time range.
", "ListProfilingGroups": "Lists profiling groups.
", "PostAgentProfile": "", + "PutPermission": "Provides permission to the principals. This overwrites the existing permissions, and is not additive.
", + "RemovePermission": "Removes statement for the provided action group from the policy.
", "UpdateProfilingGroup": "Updates a profiling group.
" }, "shapes": { + "ActionGroup": { + "base": null, + "refs": { + "PutPermissionRequest$actionGroup": "The list of actions that the users and roles can perform on the profiling group.
", + "RemovePermissionRequest$actionGroup": "The list of actions that the users and roles can perform on the profiling group.
" + } + }, "AgentConfiguration": { "base": "", "refs": { @@ -118,6 +128,16 @@ "ConfigureAgentRequest$fleetInstanceId": "" } }, + "GetPolicyRequest": { + "base": "The structure representing the getPolicyRequest.
", + "refs": { + } + }, + "GetPolicyResponse": { + "base": "The structure representing the getPolicyResponse.
", + "refs": { + } + }, "GetProfileRequest": { "base": "The structure representing the getProfileRequest.
", "refs": { @@ -203,6 +223,18 @@ "refs": { } }, + "Principal": { + "base": null, + "refs": { + "Principals$member": null + } + }, + "Principals": { + "base": null, + "refs": { + "PutPermissionRequest$principals": "The list of role and user ARNs or the accountId that needs access (wildcards are not allowed).
" + } + }, "ProfileTime": { "base": "Information about the profile time.
", "refs": { @@ -243,11 +275,14 @@ "CreateProfilingGroupRequest$profilingGroupName": "The name of the profiling group.
", "DeleteProfilingGroupRequest$profilingGroupName": "The profiling group name to delete.
", "DescribeProfilingGroupRequest$profilingGroupName": "The profiling group name.
", + "GetPolicyRequest$profilingGroupName": "The name of the profiling group.
", "GetProfileRequest$profilingGroupName": "The name of the profiling group to get.
", "ListProfileTimesRequest$profilingGroupName": "The name of the profiling group.
", "PostAgentProfileRequest$profilingGroupName": "", "ProfilingGroupDescription$name": "The name of the profiling group.
", "ProfilingGroupNames$member": null, + "PutPermissionRequest$profilingGroupName": "The name of the profiling group.
", + "RemovePermissionRequest$profilingGroupName": "The name of the profiling group.
", "UpdateProfilingGroupRequest$profilingGroupName": "The name of the profiling group to update.
" } }, @@ -263,11 +298,41 @@ "ProfilingGroupDescription$profilingStatus": "The status of the profiling group.
" } }, + "PutPermissionRequest": { + "base": "The structure representing the putPermissionRequest.
", + "refs": { + } + }, + "PutPermissionResponse": { + "base": "The structure representing the putPermissionResponse.
", + "refs": { + } + }, + "RemovePermissionRequest": { + "base": "The structure representing the removePermissionRequest.
", + "refs": { + } + }, + "RemovePermissionResponse": { + "base": "The structure representing the removePermissionResponse.
", + "refs": { + } + }, "ResourceNotFoundException": { "base": "The resource specified in the request does not exist.
", "refs": { } }, + "RevisionId": { + "base": null, + "refs": { + "GetPolicyResponse$revisionId": "A unique identifier for the current revision of the policy.
", + "PutPermissionRequest$revisionId": "A unique identifier for the current revision of the policy. This is required, if a policy exists for the profiling group. This is not required when creating the policy for the first time.
", + "PutPermissionResponse$revisionId": "A unique identifier for the current revision of the policy.
", + "RemovePermissionRequest$revisionId": "A unique identifier for the current revision of the policy.
", + "RemovePermissionResponse$revisionId": "A unique identifier for the current revision of the policy.
" + } + }, "ServiceQuotaExceededException": { "base": "You have exceeded your service quota. To perform the requested action, remove some of the relevant resources, or use Service Quotas to request a service quota increase.
", "refs": { @@ -277,11 +342,14 @@ "base": null, "refs": { "ConflictException$message": null, + "GetPolicyResponse$policy": "The resource-based policy attached to the ProfilingGroup
.
The format of the profile to return. You can choose application/json
or the default application/x-amzn-ion
.
The content encoding of the profile.
", "GetProfileResponse$contentType": "The content type of the profile in the payload. It is either application/json
or the default application/x-amzn-ion
.
The resource-based policy.
", + "RemovePermissionResponse$policy": "The resource-based policy.
", "ResourceNotFoundException$message": null, "ServiceQuotaExceededException$message": null, "ThrottlingException$message": null, diff --git a/models/apis/codeguruprofiler/2019-07-18/paginators-1.json b/models/apis/codeguruprofiler/2019-07-18/paginators-1.json index afbbca8aabb..9dbcc85954c 100644 --- a/models/apis/codeguruprofiler/2019-07-18/paginators-1.json +++ b/models/apis/codeguruprofiler/2019-07-18/paginators-1.json @@ -3,7 +3,8 @@ "ListProfileTimes": { "input_token": "nextToken", "output_token": "nextToken", - "limit_key": "maxResults" + "limit_key": "maxResults", + "result_key": "profileTimes" }, "ListProfilingGroups": { "input_token": "nextToken", diff --git a/models/apis/detective/2018-10-26/api-2.json b/models/apis/detective/2018-10-26/api-2.json index ac4453e4045..e4aeeb3ca99 100644 --- a/models/apis/detective/2018-10-26/api-2.json +++ b/models/apis/detective/2018-10-26/api-2.json @@ -35,7 +35,8 @@ "output":{"shape":"CreateGraphResponse"}, "errors":[ {"shape":"ConflictException"}, - {"shape":"InternalServerException"} + {"shape":"InternalServerException"}, + {"shape":"ServiceQuotaExceededException"} ] }, "CreateMembers":{ @@ -162,6 +163,21 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ] + }, + "StartMonitoringMember":{ + "name":"StartMonitoringMember", + "http":{ + "method":"POST", + "requestUri":"/graph/member/monitoringstate" + }, + "input":{"shape":"StartMonitoringMemberRequest"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"} + ] } }, "shapes":{ @@ -371,14 +387,24 @@ "GraphArn":{"shape":"GraphArn"}, "MasterId":{"shape":"AccountId"}, "Status":{"shape":"MemberStatus"}, + "DisabledReason":{"shape":"MemberDisabledReason"}, "InvitedTime":{"shape":"Timestamp"}, - "UpdatedTime":{"shape":"Timestamp"} + "UpdatedTime":{"shape":"Timestamp"}, + "PercentOfGraphUtilization":{"shape":"Percentage"}, + "PercentOfGraphUtilizationUpdatedTime":{"shape":"Timestamp"} } }, "MemberDetailList":{ "type":"list", "member":{"shape":"MemberDetail"} }, + "MemberDisabledReason":{ + "type":"string", + "enum":[ + "VOLUME_TOO_HIGH", + "VOLUME_UNKNOWN" + ] + }, "MemberResultsLimit":{ "type":"integer", "box":true, @@ -391,7 +417,8 @@ "INVITED", "VERIFICATION_IN_PROGRESS", "VERIFICATION_FAILED", - "ENABLED" + "ENABLED", + "ACCEPTED_BUT_DISABLED" ] }, "PaginationToken":{ @@ -399,6 +426,7 @@ "max":1024, "min":1 }, + "Percentage":{"type":"double"}, "RejectInvitationRequest":{ "type":"structure", "required":["GraphArn"], @@ -422,6 +450,17 @@ "error":{"httpStatusCode":402}, "exception":true }, + "StartMonitoringMemberRequest":{ + "type":"structure", + "required":[ + "GraphArn", + "AccountId" + ], + "members":{ + "GraphArn":{"shape":"GraphArn"}, + "AccountId":{"shape":"AccountId"} + } + }, "Timestamp":{"type":"timestamp"}, "UnprocessedAccount":{ "type":"structure", diff --git a/models/apis/detective/2018-10-26/docs-2.json b/models/apis/detective/2018-10-26/docs-2.json index 01d629ffbbf..46332a9b9be 100644 --- a/models/apis/detective/2018-10-26/docs-2.json +++ b/models/apis/detective/2018-10-26/docs-2.json @@ -1,18 +1,19 @@ { "version": "2.0", - "service": "Amazon Detective is currently in preview. The Detective API can only be used by accounts that are admitted into the preview.
Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services (AWS) workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty.
The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by a master account.
Every behavior graph is specific to a Region. You can only use the API to manage graphs that belong to the Region that is associated with the currently selected endpoint.
A Detective master account can use the Detective API to do the following:
Enable and disable Detective. Enabling Detective creates a new behavior graph.
View the list of member accounts in a behavior graph.
Add member accounts to a behavior graph.
Remove member accounts from a behavior graph.
A member account can use the Detective API to do the following:
View the list of behavior graphs that they are invited to.
Accept an invitation to contribute to a behavior graph.
Decline an invitation to contribute to a behavior graph.
Remove their account from a behavior graph.
All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail.
", + "service": "Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services (AWS) workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty.
The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by a master account.
Every behavior graph is specific to a Region. You can only use the API to manage graphs that belong to the Region that is associated with the currently selected endpoint.
A Detective master account can use the Detective API to do the following:
Enable and disable Detective. Enabling Detective creates a new behavior graph.
View the list of member accounts in a behavior graph.
Add member accounts to a behavior graph.
Remove member accounts from a behavior graph.
A member account can use the Detective API to do the following:
View the list of behavior graphs that they are invited to.
Accept an invitation to contribute to a behavior graph.
Decline an invitation to contribute to a behavior graph.
Remove their account from a behavior graph.
All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail.
", "operations": { - "AcceptInvitation": "Amazon Detective is currently in preview.
Accepts an invitation for the member account to contribute data to a behavior graph. This operation can only be called by an invited member account.
The request provides the ARN of the behavior graph.
The member account status in the graph must be INVITED
.
Amazon Detective is currently in preview.
Creates a new behavior graph for the calling account, and sets that account as the master account. This operation is called by the account that is enabling Detective.
The operation also enables Detective for the calling account in the currently selected Region. It returns the ARN of the new behavior graph.
CreateGraph
triggers a process to create the corresponding data tables for the new behavior graph.
An account can only be the master account for one behavior graph within a Region. If the same account calls CreateGraph
with the same master account, it always returns the same behavior graph ARN. It does not create a new behavior graph.
Amazon Detective is currently in preview.
Sends a request to invite the specified AWS accounts to be member accounts in the behavior graph. This operation can only be called by the master account for a behavior graph.
CreateMembers
verifies the accounts and then sends invitations to the verified accounts.
The request provides the behavior graph ARN and the list of accounts to invite.
The response separates the requested accounts into two lists:
The accounts that CreateMembers
was able to start the verification for. This list includes member accounts that are being verified, that have passed verification and are being sent an invitation, and that have failed verification.
The accounts that CreateMembers
was unable to process. This list includes accounts that were already invited to be member accounts in the behavior graph.
Amazon Detective is currently in preview.
Disables the specified behavior graph and queues it to be deleted. This operation removes the graph from each member account's list of behavior graphs.
DeleteGraph
can only be called by the master account for a behavior graph.
Amazon Detective is currently in preview.
Deletes one or more member accounts from the master account behavior graph. This operation can only be called by a Detective master account. That account cannot use DeleteMembers
to delete their own account from the behavior graph. To disable a behavior graph, the master account uses the DeleteGraph
API method.
Amazon Detective is currently in preview.
Removes the member account from the specified behavior graph. This operation can only be called by a member account that has the ENABLED
status.
Amazon Detective is currently in preview.
Returns the membership details for specified member accounts for a behavior graph.
", - "ListGraphs": "Amazon Detective is currently in preview.
Returns the list of behavior graphs that the calling account is a master of. This operation can only be called by a master account.
Because an account can currently only be the master of one behavior graph within a Region, the results always contain a single graph.
", - "ListInvitations": "Amazon Detective is currently in preview.
Retrieves the list of open and accepted behavior graph invitations for the member account. This operation can only be called by a member account.
Open invitations are invitations that the member account has not responded to.
The results do not include behavior graphs for which the member account declined the invitation. The results also do not include behavior graphs that the member account resigned from or was removed from.
", - "ListMembers": "Amazon Detective is currently in preview.
Retrieves the list of member accounts for a behavior graph. Does not return member accounts that were removed from the behavior graph.
", - "RejectInvitation": "Amazon Detective is currently in preview.
Rejects an invitation to contribute the account data to a behavior graph. This operation must be called by a member account that has the INVITED
status.
Accepts an invitation for the member account to contribute data to a behavior graph. This operation can only be called by an invited member account.
The request provides the ARN of the behavior graph.
The member account status in the graph must be INVITED
.
Creates a new behavior graph for the calling account, and sets that account as the master account. This operation is called by the account that is enabling Detective.
Before you try to enable Detective, make sure that your account has been enrolled in Amazon GuardDuty for at least 48 hours. If you do not meet this requirement, you cannot enable Detective. If you do meet the GuardDuty prerequisite, then when you make the request to enable Detective, it checks whether your data volume is within the Detective quota. If it exceeds the quota, then you cannot enable Detective.
The operation also enables Detective for the calling account in the currently selected Region. It returns the ARN of the new behavior graph.
CreateGraph
triggers a process to create the corresponding data tables for the new behavior graph.
An account can only be the master account for one behavior graph within a Region. If the same account calls CreateGraph
with the same master account, it always returns the same behavior graph ARN. It does not create a new behavior graph.
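A minimal sketch of enabling Detective via the CreateGraph behavior described above. It assumes the preview SDK's `external.LoadDefaultAWSConfig` loader, the generated `detective` package names (`detective.New`, `CreateGraphRequest`, `CreateGraphInput`), and the pre-GA request/`Send(ctx)` pattern; treat these names as assumptions rather than a verbatim excerpt from the generated client.

```go
// Sketch only: assumed package paths and generated names, not a definitive example.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/detective"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := detective.New(cfg)

	// CreateGraph enables Detective for the calling account in the current Region.
	// Calling it again for the same master account returns the existing graph ARN
	// instead of creating a new behavior graph.
	resp, err := svc.CreateGraphRequest(&detective.CreateGraphInput{}).Send(context.TODO())
	if err != nil {
		log.Fatal(err) // e.g. the GuardDuty prerequisite or the data-volume quota is not met
	}
	if resp.GraphArn != nil {
		fmt.Println("behavior graph:", *resp.GraphArn)
	}
}
```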
Sends a request to invite the specified AWS accounts to be member accounts in the behavior graph. This operation can only be called by the master account for a behavior graph.
CreateMembers
verifies the accounts and then sends invitations to the verified accounts.
The request provides the behavior graph ARN and the list of accounts to invite.
The response separates the requested accounts into two lists:
The accounts that CreateMembers
was able to start the verification for. This list includes member accounts that are being verified, that have passed verification and are being sent an invitation, and that have failed verification.
The accounts that CreateMembers
was unable to process. This list includes accounts that were already invited to be member accounts in the behavior graph.
Disables the specified behavior graph and queues it to be deleted. This operation removes the graph from each member account's list of behavior graphs.
DeleteGraph
can only be called by the master account for a behavior graph.
Deletes one or more member accounts from the master account behavior graph. This operation can only be called by a Detective master account. That account cannot use DeleteMembers
to delete their own account from the behavior graph. To disable a behavior graph, the master account uses the DeleteGraph
API method.
Removes the member account from the specified behavior graph. This operation can only be called by a member account that has the ENABLED
status.
Returns the membership details for specified member accounts for a behavior graph.
", + "ListGraphs": "Returns the list of behavior graphs that the calling account is a master of. This operation can only be called by a master account.
Because an account can currently only be the master of one behavior graph within a Region, the results always contain a single graph.
", + "ListInvitations": "Retrieves the list of open and accepted behavior graph invitations for the member account. This operation can only be called by a member account.
Open invitations are invitations that the member account has not responded to.
The results do not include behavior graphs for which the member account declined the invitation. The results also do not include behavior graphs that the member account resigned from or was removed from.
", + "ListMembers": "Retrieves the list of member accounts for a behavior graph. Does not return member accounts that were removed from the behavior graph.
", + "RejectInvitation": "Rejects an invitation to contribute the account data to a behavior graph. This operation must be called by a member account that has the INVITED
status.
Sends a request to enable data ingest for a member account that has a status of ACCEPTED_BUT_DISABLED
.
For valid member accounts, the status is updated as follows.
If Detective enabled the member account, then the new status is ENABLED
.
If Detective cannot enable the member account, the status remains ACCEPTED_BUT_DISABLED
.
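The StartMonitoringMember flow described above could be driven roughly like this. The sketch assumes the generated `StartMonitoringMemberRequest`/`StartMonitoringMemberInput` names, the preview request/`Send(ctx)` pattern, and placeholder graph ARN and account ID values.

```go
// Sketch only: assumed generated names and placeholder identifiers.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/detective"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := detective.New(cfg)

	// Ask Detective to retry enabling a member that is currently ACCEPTED_BUT_DISABLED.
	_, err = svc.StartMonitoringMemberRequest(&detective.StartMonitoringMemberInput{
		GraphArn:  aws.String("arn:aws:detective:us-east-1:111122223333:graph:example"), // placeholder
		AccountId: aws.String("444455556666"),                                           // placeholder
	}).Send(context.TODO())
	if err != nil {
		// A ServiceQuotaExceededException means the member's data volume still exceeds
		// what the behavior graph can accept; the status stays ACCEPTED_BUT_DISABLED.
		log.Fatal(err)
	}
	log.Println("request accepted; the member becomes ENABLED if Detective can enable it")
}
```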
Amazon Detective is currently in preview.
An AWS account that is the master of or a member of a behavior graph.
", + "base": "An AWS account that is the master of or a member of a behavior graph.
", "refs": { "AccountList$member": null } @@ -33,6 +34,7 @@ "AccountIdList$member": null, "MemberDetail$AccountId": "The AWS account identifier for the member account.
", "MemberDetail$MasterId": "The AWS account identifier of the master account for the behavior graph.
", + "StartMonitoringMemberRequest$AccountId": "The account ID of the member account to try to enable.
The account must be an invited member account with a status of ACCEPTED_BUT_DISABLED
.
The AWS account identifier of the member account that was not processed.
" } }, @@ -124,7 +126,7 @@ } }, "Graph": { - "base": "Amazon Detective is currently in preview.
A behavior graph in Detective.
", + "base": "A behavior graph in Detective.
", "refs": { "GraphList$member": null } @@ -142,7 +144,8 @@ "Graph$Arn": "The ARN of the behavior graph.
", "ListMembersRequest$GraphArn": "The ARN of the behavior graph for which to retrieve the list of member accounts.
", "MemberDetail$GraphArn": "The ARN of the behavior graph that the member account was invited to.
", - "RejectInvitationRequest$GraphArn": "The ARN of the behavior graph to reject the invitation to.
The member account's current member status in the behavior graph must be INVITED
.
The ARN of the behavior graph to reject the invitation to.
The member account's current member status in the behavior graph must be INVITED
.
The ARN of the behavior graph.
" } }, "GraphList": { @@ -187,7 +190,7 @@ } }, "MemberDetail": { - "base": "Amazon Detective is currently in preview.
Details about a member account that was invited to contribute to a behavior graph.
", + "base": "Details about a member account that was invited to contribute to a behavior graph.
", "refs": { "MemberDetailList$member": null } @@ -201,6 +204,12 @@ "ListMembersResponse$MemberDetails": "The list of member accounts in the behavior graph.
The results include member accounts that did not pass verification and member accounts that have not yet accepted the invitation to the behavior graph. The results do not include member accounts that were removed from the behavior graph.
" } }, + "MemberDisabledReason": { + "base": null, + "refs": { + "MemberDetail$DisabledReason": "For member accounts with a status of ACCEPTED_BUT_DISABLED
, the reason that the member account is not enabled.
The reason can have one of the following values:
VOLUME_TOO_HIGH
- Indicates that adding the member account would cause the data volume for the behavior graph to be too high.
VOLUME_UNKNOWN
- Indicates that Detective is unable to verify the data volume for the member account. This is usually because the member account is not enrolled in Amazon GuardDuty.
The current membership status of the member account. The status can have one of the following values:
INVITED
- Indicates that the member was sent an invitation but has not yet responded.
VERIFICATION_IN_PROGRESS
- Indicates that Detective is verifying that the account identifier and email address provided for the member account match. If they do match, then Detective sends the invitation. If the email address and account identifier don't match, then the member cannot be added to the behavior graph.
VERIFICATION_FAILED
- Indicates that the account and email address provided for the member account do not match, and Detective did not send an invitation to the account.
ENABLED
- Indicates that the member account accepted the invitation to contribute to the behavior graph.
Member accounts that declined an invitation or that were removed from the behavior graph are not included.
" + "MemberDetail$Status": "The current membership status of the member account. The status can have one of the following values:
INVITED
- Indicates that the member was sent an invitation but has not yet responded.
VERIFICATION_IN_PROGRESS
- Indicates that Detective is verifying that the account identifier and email address provided for the member account match. If they do match, then Detective sends the invitation. If the email address and account identifier don't match, then the member cannot be added to the behavior graph.
VERIFICATION_FAILED
- Indicates that the account and email address provided for the member account do not match, and Detective did not send an invitation to the account.
ENABLED
- Indicates that the member account accepted the invitation to contribute to the behavior graph.
ACCEPTED_BUT_DISABLED
- Indicates that the member account accepted the invitation but is prevented from contributing data to the behavior graph. DisabledReason
provides the reason why the member account is not enabled.
Member accounts that declined an invitation or that were removed from the behavior graph are not included.
" } }, "PaginationToken": { @@ -226,6 +235,12 @@ "ListMembersResponse$NextToken": "If there are more member accounts remaining in the results, then this is the pagination token to use to request the next page of member accounts.
" } }, + "Percentage": { + "base": null, + "refs": { + "MemberDetail$PercentOfGraphUtilization": "The member account data volume as a percentage of the maximum allowed data volume. 0 indicates 0 percent, and 100 indicates 100 percent.
Note that this is not the percentage of the behavior graph data volume.
For example, the data volume for the behavior graph is 80 GB per day. The maximum data volume is 160 GB per day. If the data volume for the member account is 40 GB per day, then PercentOfGraphUtilization
is 25. It represents 25% of the maximum allowed data volume.
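A sketch of reading the new member fields (`Status`, `DisabledReason`, `PercentOfGraphUtilization`) from a ListMembers response. The generated type and field names, the placeholder graph ARN, and the preview request/`Send(ctx)` pattern are assumptions based on the model changes above.

```go
// Sketch only: assumed generated detective types and a placeholder graph ARN.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/detective"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := detective.New(cfg)

	resp, err := svc.ListMembersRequest(&detective.ListMembersInput{
		GraphArn: aws.String("arn:aws:detective:us-east-1:111122223333:graph:example"), // placeholder
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	for _, m := range resp.MemberDetails {
		if m.AccountId == nil {
			continue
		}
		// ACCEPTED_BUT_DISABLED members carry a DisabledReason; members also report
		// their data volume as a percentage of the maximum allowed volume.
		fmt.Printf("%s status=%s reason=%s\n", *m.AccountId, m.Status, m.DisabledReason)
		if m.PercentOfGraphUtilization != nil {
			fmt.Printf("  utilization: %.0f%% of the maximum allowed data volume\n", *m.PercentOfGraphUtilization)
		}
	}
}
```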
This request would cause the number of member accounts in the behavior graph to exceed the maximum allowed. A behavior graph cannot have more than 1000 member accounts.
", + "base": "This request cannot be completed for one of the following reasons.
The request would cause the number of member accounts in the behavior graph to exceed the maximum allowed. A behavior graph cannot have more than 1000 member accounts.
The request would cause the data rate for the behavior graph to exceed the maximum allowed.
Detective is unable to verify the data rate for the member account. This is usually because the member account is not enrolled in Amazon GuardDuty.
The date and time that the behavior graph was created. The value is in milliseconds since the epoch.
", "MemberDetail$InvitedTime": "The date and time that Detective sent the invitation to the member account. The value is in milliseconds since the epoch.
", - "MemberDetail$UpdatedTime": "The date and time that the member account was last updated. The value is in milliseconds since the epoch.
" + "MemberDetail$UpdatedTime": "The date and time that the member account was last updated. The value is in milliseconds since the epoch.
", + "MemberDetail$PercentOfGraphUtilizationUpdatedTime": "The date and time when the graph utilization percentage was last updated.
" } }, "UnprocessedAccount": { - "base": "Amazon Detective is currently in preview.
A member account that was included in a request but for which the request could not be processed.
", + "base": "A member account that was included in a request but for which the request could not be processed.
", "refs": { "UnprocessedAccountList$member": null } diff --git a/models/apis/ec2/2016-11-15/api-2.json b/models/apis/ec2/2016-11-15/api-2.json index 43677263a04..3bb35ba7b3a 100755 --- a/models/apis/ec2/2016-11-15/api-2.json +++ b/models/apis/ec2/2016-11-15/api-2.json @@ -596,7 +596,8 @@ "method":"POST", "requestUri":"/" }, - "input":{"shape":"CreatePlacementGroupRequest"} + "input":{"shape":"CreatePlacementGroupRequest"}, + "output":{"shape":"CreatePlacementGroupResult"} }, "CreateReservedInstancesListing":{ "name":"CreateReservedInstancesListing", @@ -1268,6 +1269,15 @@ }, "input":{"shape":"DeregisterImageRequest"} }, + "DeregisterInstanceEventNotificationAttributes":{ + "name":"DeregisterInstanceEventNotificationAttributes", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeregisterInstanceEventNotificationAttributesRequest"}, + "output":{"shape":"DeregisterInstanceEventNotificationAttributesResult"} + }, "DeregisterTransitGatewayMulticastGroupMembers":{ "name":"DeregisterTransitGatewayMulticastGroupMembers", "http":{ @@ -1646,6 +1656,15 @@ "input":{"shape":"DescribeInstanceCreditSpecificationsRequest"}, "output":{"shape":"DescribeInstanceCreditSpecificationsResult"} }, + "DescribeInstanceEventNotificationAttributes":{ + "name":"DescribeInstanceEventNotificationAttributes", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeInstanceEventNotificationAttributesRequest"}, + "output":{"shape":"DescribeInstanceEventNotificationAttributesResult"} + }, "DescribeInstanceStatus":{ "name":"DescribeInstanceStatus", "http":{ @@ -3167,6 +3186,15 @@ "input":{"shape":"RegisterImageRequest"}, "output":{"shape":"RegisterImageResult"} }, + "RegisterInstanceEventNotificationAttributes":{ + "name":"RegisterInstanceEventNotificationAttributes", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RegisterInstanceEventNotificationAttributesRequest"}, + "output":{"shape":"RegisterInstanceEventNotificationAttributesResult"} + }, "RegisterTransitGatewayMulticastGroupMembers":{ "name":"RegisterTransitGatewayMulticastGroupMembers", "http":{ @@ -6787,6 +6815,10 @@ "DryRun":{ "shape":"Boolean", "locationName":"dryRun" + }, + "TagSpecifications":{ + "shape":"TagSpecificationList", + "locationName":"TagSpecification" } } }, @@ -7081,7 +7113,20 @@ "shape":"PlacementStrategy", "locationName":"strategy" }, - "PartitionCount":{"shape":"Integer"} + "PartitionCount":{"shape":"Integer"}, + "TagSpecifications":{ + "shape":"TagSpecificationList", + "locationName":"TagSpecification" + } + } + }, + "CreatePlacementGroupResult":{ + "type":"structure", + "members":{ + "PlacementGroup":{ + "shape":"PlacementGroup", + "locationName":"placementGroup" + } } }, "CreateReservedInstancesListingRequest":{ @@ -8248,9 +8293,9 @@ }, "DeleteKeyPairRequest":{ "type":"structure", - "required":["KeyName"], "members":{ "KeyName":{"shape":"KeyPairName"}, + "KeyPairId":{"shape":"KeyPairId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -8983,6 +9028,32 @@ } } }, + "DeregisterInstanceEventNotificationAttributesRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"}, + "InstanceTagAttribute":{"shape":"DeregisterInstanceTagAttributeRequest"} + } + }, + "DeregisterInstanceEventNotificationAttributesResult":{ + "type":"structure", + "members":{ + "InstanceTagAttribute":{ + "shape":"InstanceTagNotificationAttribute", + "locationName":"instanceTagAttribute" + } + } + }, + "DeregisterInstanceTagAttributeRequest":{ + "type":"structure", + 
"members":{ + "IncludeAllTagsOfInstance":{"shape":"Boolean"}, + "InstanceTagKeys":{ + "shape":"InstanceTagKeySet", + "locationName":"InstanceTagKey" + } + } + }, "DeregisterTransitGatewayMulticastGroupMembersRequest":{ "type":"structure", "members":{ @@ -10306,6 +10377,21 @@ } } }, + "DescribeInstanceEventNotificationAttributesRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"} + } + }, + "DescribeInstanceEventNotificationAttributesResult":{ + "type":"structure", + "members":{ + "InstanceTagAttribute":{ + "shape":"InstanceTagNotificationAttribute", + "locationName":"instanceTagAttribute" + } + } + }, "DescribeInstanceStatusRequest":{ "type":"structure", "members":{ @@ -16472,6 +16558,10 @@ "PublicKeyMaterial":{ "shape":"Blob", "locationName":"publicKeyMaterial" + }, + "TagSpecifications":{ + "shape":"TagSpecificationList", + "locationName":"TagSpecification" } } }, @@ -16485,6 +16575,14 @@ "KeyName":{ "shape":"String", "locationName":"keyName" + }, + "KeyPairId":{ + "shape":"String", + "locationName":"keyPairId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -17638,6 +17736,26 @@ } } }, + "InstanceTagKeySet":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"item" + } + }, + "InstanceTagNotificationAttribute":{ + "type":"structure", + "members":{ + "InstanceTagKeys":{ + "shape":"InstanceTagKeySet", + "locationName":"instanceTagKeySet" + }, + "IncludeAllTagsOfInstance":{ + "shape":"Boolean", + "locationName":"includeAllTagsOfInstance" + } + } + }, "InstanceType":{ "type":"string", "enum":[ @@ -18340,6 +18458,10 @@ "KeyPairId":{ "shape":"String", "locationName":"keyPairId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -22606,6 +22728,32 @@ } } }, + "RegisterInstanceEventNotificationAttributesRequest":{ + "type":"structure", + "members":{ + "DryRun":{"shape":"Boolean"}, + "InstanceTagAttribute":{"shape":"RegisterInstanceTagAttributeRequest"} + } + }, + "RegisterInstanceEventNotificationAttributesResult":{ + "type":"structure", + "members":{ + "InstanceTagAttribute":{ + "shape":"InstanceTagNotificationAttribute", + "locationName":"instanceTagAttribute" + } + } + }, + "RegisterInstanceTagAttributeRequest":{ + "type":"structure", + "members":{ + "IncludeAllTagsOfInstance":{"shape":"Boolean"}, + "InstanceTagKeys":{ + "shape":"InstanceTagKeySet", + "locationName":"InstanceTagKey" + } + } + }, "RegisterTransitGatewayMulticastGroupMembersRequest":{ "type":"structure", "members":{ diff --git a/models/apis/ec2/2016-11-15/docs-2.json b/models/apis/ec2/2016-11-15/docs-2.json index 5a767690c6c..6e8463d11e0 100755 --- a/models/apis/ec2/2016-11-15/docs-2.json +++ b/models/apis/ec2/2016-11-15/docs-2.json @@ -145,6 +145,7 @@ "DeleteVpnGateway": "Deletes the specified virtual private gateway. You must first detach the virtual private gateway from the VPC. Note that you don't need to delete the virtual private gateway if you plan to delete and recreate the VPN connection between your VPC and your network.
", "DeprovisionByoipCidr": "Releases the specified address range that you provisioned for use with your AWS resources through bring your own IP addresses (BYOIP) and deletes the corresponding address pool.
Before you can release an address range, you must stop advertising it using WithdrawByoipCidr and you must not have any IP addresses allocated from its address range.
", "DeregisterImage": "Deregisters the specified AMI. After you deregister an AMI, it can't be used to launch new instances; however, it doesn't affect any instances that you've already launched from the AMI. You'll continue to incur usage costs for those instances until you terminate them.
When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was created for the root volume of the instance during the AMI creation process. When you deregister an instance store-backed AMI, it doesn't affect the files that you uploaded to Amazon S3 when you created the AMI.
", + "DeregisterInstanceEventNotificationAttributes": "Deregisters tag keys to prevent tags that have the specified tag keys from being included in scheduled event notifications for resources in the Region.
", "DeregisterTransitGatewayMulticastGroupMembers": "Deregisters the specified members (network interfaces) from the transit gateway multicast group.
", "DeregisterTransitGatewayMulticastGroupSources": "Deregisters the specified sources (network interfaces) from the transit gateway multicast group.
", "DescribeAccountAttributes": "Describes attributes of your AWS account. The following are the supported account attributes:
supported-platforms
: Indicates whether your account can launch instances into EC2-Classic and EC2-VPC, or only into EC2-VPC.
default-vpc
: The ID of the default VPC for your account, or none
.
max-instances
: This attribute is no longer supported. The returned value does not reflect your actual vCPU limit for running On-Demand Instances. For more information, see On-Demand Instance Limits in the Amazon Elastic Compute Cloud User Guide.
vpc-max-security-groups-per-interface
: The maximum number of security groups that you can assign to a network interface.
max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-Classic.
vpc-max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-VPC.
Describes your import snapshot tasks.
", "DescribeInstanceAttribute": "Describes the specified attribute of the specified instance. You can specify only one attribute at a time. Valid attribute values are: instanceType
| kernel
| ramdisk
| userData
| disableApiTermination
| instanceInitiatedShutdownBehavior
| rootDeviceName
| blockDeviceMapping
| productCodes
| sourceDestCheck
| groupSet
| ebsOptimized
| sriovNetSupport
Describes the credit option for CPU usage of the specified burstable performance instances. The credit options are standard
and unlimited
.
If you do not specify an instance ID, Amazon EC2 returns burstable performance instances with the unlimited
credit option, as well as instances that were previously configured as T2, T3, and T3a with the unlimited
credit option. For example, if you resize a T2 instance, while it is configured as unlimited
, to an M4 instance, Amazon EC2 returns the M4 instance.
If you specify one or more instance IDs, Amazon EC2 returns the credit option (standard
or unlimited
) of those instances. If you specify an instance ID that is not valid, such as an instance that is not a burstable performance instance, an error is returned.
Recently terminated instances might appear in the returned results. This interval is usually less than one hour.
If an Availability Zone is experiencing a service disruption and you specify instance IDs in the affected zone, or do not specify any instance IDs at all, the call fails. If you specify only instance IDs in an unaffected zone, the call works normally.
For more information, see Burstable Performance Instances in the Amazon Elastic Compute Cloud User Guide.
", + "DescribeInstanceEventNotificationAttributes": "Describes the tag keys that are registered to appear in scheduled event notifications for resources in the current Region.
", "DescribeInstanceStatus": "Describes the status of the specified instances or all of your instances. By default, only running instances are described, unless you specifically indicate to return the status of all instances.
Instance status includes the following components:
Status checks - Amazon EC2 performs status checks on running EC2 instances to identify hardware and software issues. For more information, see Status Checks for Your Instances and Troubleshooting Instances with Failed Status Checks in the Amazon Elastic Compute Cloud User Guide.
Scheduled events - Amazon EC2 can schedule events (such as reboot, stop, or terminate) for your instances related to hardware issues, software updates, or system maintenance. For more information, see Scheduled Events for Your Instances in the Amazon Elastic Compute Cloud User Guide.
Instance state - You can manage your instances from the moment you launch them through their termination. For more information, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.
Returns a list of all instance types offered. The results can be filtered by location (Region or Availability Zone). If no location is specified, the instance types offered in the current Region are returned.
", "DescribeInstanceTypes": "Returns a list of all instance types offered in your current AWS Region. The results can be filtered by the attributes of the instance types.
", @@ -228,7 +230,7 @@ "DescribeSpotFleetInstances": "Describes the running instances for the specified Spot Fleet.
", "DescribeSpotFleetRequestHistory": "Describes the events for the specified Spot Fleet request during the specified time.
Spot Fleet events are delayed by up to 30 seconds before they can be described. This ensures that you can query by the last evaluated time and not miss a recorded event. Spot Fleet events are available for 48 hours.
", "DescribeSpotFleetRequests": "Describes your Spot Fleet requests.
Spot Fleet requests are deleted 48 hours after they are canceled and their instances are terminated.
", - "DescribeSpotInstanceRequests": "Describes the specified Spot Instance requests.
You can use DescribeSpotInstanceRequests
to find a running Spot Instance by examining the response. If the status of the Spot Instance is fulfilled
, the instance ID appears in the response and contains the identifier of the instance. Alternatively, you can use DescribeInstances with a filter to look for instances where the instance lifecycle is spot
.
We recommend that you set MaxResults
to a value between 5 and 1000 to limit the number of results returned. This paginates the output, which makes the list more manageable and returns the results faster. If the list of results exceeds your MaxResults
value, then that number of results is returned along with a NextToken
value that can be passed to a subsequent DescribeSpotInstanceRequests
request to retrieve the remaining results.
Spot Instance requests are deleted four hours after they are canceled and their instances are terminated.
", + "DescribeSpotInstanceRequests": "Describes the specified Spot Instance requests.
You can use DescribeSpotInstanceRequests
to find a running Spot Instance by examining the response. If the status of the Spot Instance is fulfilled
, the instance ID appears in the response and contains the identifier of the instance. Alternatively, you can use DescribeInstances with a filter to look for instances where the instance lifecycle is spot
.
We recommend that you set MaxResults
to a value between 5 and 1000 to limit the number of results returned. This paginates the output, which makes the list more manageable and returns the results faster. If the list of results exceeds your MaxResults
value, then that number of results is returned along with a NextToken
value that can be passed to a subsequent DescribeSpotInstanceRequests
request to retrieve the remaining results.
Spot Instance requests are deleted four hours after they are canceled and their instances are terminated.
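A sketch of the MaxResults/NextToken loop recommended above for DescribeSpotInstanceRequests, assuming the preview EC2 client (`ec2.New`, `DescribeSpotInstanceRequestsRequest`), `aws.Int64` pointer helpers, and the request/`Send(ctx)` pattern; not a verbatim SDK example.

```go
// Sketch only: manual pagination over DescribeSpotInstanceRequests.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Keep MaxResults between 5 and 1000, as the documentation above recommends.
	input := &ec2.DescribeSpotInstanceRequestsInput{MaxResults: aws.Int64(100)}
	for {
		resp, err := svc.DescribeSpotInstanceRequestsRequest(input).Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, r := range resp.SpotInstanceRequests {
			// For fulfilled requests the instance ID appears in the response.
			if r.InstanceId != nil {
				fmt.Println("spot request backs instance", *r.InstanceId)
			}
		}
		if resp.NextToken == nil || *resp.NextToken == "" {
			break // no more pages
		}
		input.NextToken = resp.NextToken
	}
}
```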
", "DescribeSpotPriceHistory": "Describes the Spot price history. For more information, see Spot Instance Pricing History in the Amazon EC2 User Guide for Linux Instances.
When you specify a start and end time, this operation returns the prices of the instance types within the time range that you specified and the time when the price changed. The price is valid within the time period that you specified; the response merely indicates the last time that the price changed.
", "DescribeStaleSecurityGroups": "[VPC only] Describes the stale security group rules for security groups in a specified VPC. Rules are stale when they reference a deleted security group in a peer VPC, or a security group in a peer VPC for which the VPC peering connection has been deleted.
", "DescribeSubnets": "Describes one or more of your subnets.
For more information, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.
", @@ -358,6 +360,7 @@ "PurchaseScheduledInstances": "Purchases the Scheduled Instances with the specified schedule.
Scheduled Instances enable you to purchase Amazon EC2 compute capacity by the hour for a one-year term. Before you can purchase a Scheduled Instance, you must call DescribeScheduledInstanceAvailability to check for available schedules and obtain a purchase token. After you purchase a Scheduled Instance, you must call RunScheduledInstances during each scheduled time period.
After you purchase a Scheduled Instance, you can't cancel, modify, or resell your purchase.
", "RebootInstances": "Requests a reboot of the specified instances. This operation is asynchronous; it only queues a request to reboot the specified instances. The operation succeeds if the instances are valid and belong to you. Requests to reboot terminated instances are ignored.
If an instance does not cleanly shut down within four minutes, Amazon EC2 performs a hard reboot.
For more information about troubleshooting, see Getting Console Output and Rebooting Instances in the Amazon Elastic Compute Cloud User Guide.
", "RegisterImage": "Registers an AMI. When you're creating an AMI, this is the final step you must complete before you can launch an instance from the AMI. For more information about creating AMIs, see Creating Your Own AMIs in the Amazon Elastic Compute Cloud User Guide.
For Amazon EBS-backed instances, CreateImage creates and registers the AMI in a single request, so you don't have to register the AMI yourself.
You can also use RegisterImage
to create an Amazon EBS-backed Linux AMI from a snapshot of a root device volume. You specify the snapshot using the block device mapping. For more information, see Launching a Linux Instance from a Backup in the Amazon Elastic Compute Cloud User Guide.
You can't register an image where a secondary (non-root) snapshot has AWS Marketplace product codes.
Windows and some Linux distributions, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), use the EC2 billing product code associated with an AMI to verify the subscription status for package updates. To create a new AMI for operating systems that require a billing product code, instead of registering the AMI, do the following to preserve the billing product code association:
Launch an instance from an existing AMI with that billing product code.
Customize the instance.
Create an AMI from the instance using CreateImage.
If you purchase a Reserved Instance to apply to an On-Demand Instance that was launched from an AMI with a billing product code, make sure that the Reserved Instance has the matching billing product code. If you purchase a Reserved Instance without the matching billing product code, the Reserved Instance will not be applied to the On-Demand Instance. For information about how to obtain the platform details and billing information of an AMI, see Obtaining Billing Information in the Amazon Elastic Compute Cloud User Guide.
If needed, you can deregister an AMI at any time. Any modifications you make to an AMI backed by an instance store volume invalidates its registration. If you make changes to an image, deregister the previous image and register the new image.
", + "RegisterInstanceEventNotificationAttributes": "Registers a set of tag keys to include in scheduled event notifications for your resources.
To remove tags, use .
", "RegisterTransitGatewayMulticastGroupMembers": "Registers members (network interfaces) with the transit gateway multicast group. A member is a network interface associated with a supported EC2 instance that receives multicast traffic. For information about supported instances, see Multicast Consideration in Amazon VPC Transit Gateways.
After you add the members, use SearchTransitGatewayMulticastGroups to verify that the members were added to the transit gateway multicast group.
", "RegisterTransitGatewayMulticastGroupSources": "Registers sources (network interfaces) with the specified transit gateway multicast group.
A multicast source is a network interface attached to a supported instance that sends multicast traffic. For information about supported instances, see Multicast Considerations in Amazon VPC Transit Gateways.
After you add the source, use SearchTransitGatewayMulticastGroups to verify that the source was added to the multicast group.
", "RejectTransitGatewayPeeringAttachment": "Rejects a transit gateway peering attachment request.
", @@ -385,7 +388,7 @@ "RevokeClientVpnIngress": "Removes an ingress authorization rule from a Client VPN endpoint.
", "RevokeSecurityGroupEgress": "[VPC only] Removes the specified egress rules from a security group for EC2-VPC. This action doesn't apply to security groups for use in EC2-Classic. To remove a rule, the values that you specify (for example, ports) must match the existing rule's values exactly.
Each rule consists of the protocol and the IPv4 or IPv6 CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code. If the security group rule has a description, you do not have to specify the description to revoke the rule.
Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.
", "RevokeSecurityGroupIngress": "Removes the specified ingress rules from a security group. To remove a rule, the values that you specify (for example, ports) must match the existing rule's values exactly.
[EC2-Classic only] If the values you specify do not match the existing rule's values, no error is returned. Use DescribeSecurityGroups to verify that the rule has been removed.
Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code. If the security group rule has a description, you do not have to specify the description to revoke the rule.
Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.
", - "RunInstances": "Launches the specified number of instances using an AMI for which you have permissions.
You can specify a number of options, or leave the default options. The following rules apply:
[EC2-VPC] If you don't specify a subnet ID, we choose a default subnet from your default VPC for you. If you don't have a default VPC, you must specify a subnet ID in the request.
[EC2-Classic] If you don't specify an Availability Zone, we choose one for you.
Some instance types must be launched into a VPC. If you do not have a default VPC, or if you do not specify a subnet ID, the request fails. For more information, see Instance Types Available Only in a VPC.
[EC2-VPC] All instances have a network interface with a primary private IPv4 address. If you don't specify this address, we choose one from the IPv4 range of your subnet.
Not all instance types support IPv6 addresses. For more information, see Instance Types.
If you don't specify a security group ID, we use the default security group. For more information, see Security Groups.
If any of the AMIs have a product code attached for which the user has not subscribed, the request fails.
You can create a launch template, which is a resource that contains the parameters to launch an instance. When you launch an instance using RunInstances, you can specify the launch template instead of specifying the launch parameters.
To ensure faster instance launches, break up large requests into smaller batches. For example, create five separate launch requests for 100 instances each instead of one launch request for 500 instances.
An instance is ready for you to use when it's in the running
state. You can check the state of your instance using DescribeInstances. You can tag instances and EBS volumes during launch, after launch, or both. For more information, see CreateTags and Tagging Your Amazon EC2 Resources.
Linux instances have access to the public key of the key pair at boot. You can use this key to provide secure access to the instance. Amazon EC2 public images use this feature to provide secure access without passwords. For more information, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.
For troubleshooting, see What To Do If An Instance Immediately Terminates, and Troubleshooting Connecting to Your Instance in the Amazon Elastic Compute Cloud User Guide.
", + "RunInstances": "Launches the specified number of instances using an AMI for which you have permissions.
You can specify a number of options, or leave the default options. The following rules apply:
[EC2-VPC] If you don't specify a subnet ID, we choose a default subnet from your default VPC for you. If you don't have a default VPC, you must specify a subnet ID in the request.
[EC2-Classic] If you don't specify an Availability Zone, we choose one for you.
Some instance types must be launched into a VPC. If you do not have a default VPC, or if you do not specify a subnet ID, the request fails. For more information, see Instance Types Available Only in a VPC.
[EC2-VPC] All instances have a network interface with a primary private IPv4 address. If you don't specify this address, we choose one from the IPv4 range of your subnet.
Not all instance types support IPv6 addresses. For more information, see Instance Types.
If you don't specify a security group ID, we use the default security group. For more information, see Security Groups.
If any of the AMIs have a product code attached for which the user has not subscribed, the request fails.
You can create a launch template, which is a resource that contains the parameters to launch an instance. When you launch an instance using RunInstances, you can specify the launch template instead of specifying the launch parameters.
To ensure faster instance launches, break up large requests into smaller batches. For example, create five separate launch requests for 100 instances each instead of one launch request for 500 instances.
An instance is ready for you to use when it's in the running
state. You can check the state of your instance using DescribeInstances. You can tag instances and EBS volumes during launch, after launch, or both. For more information, see CreateTags and Tagging Your Amazon EC2 Resources.
Linux instances have access to the public key of the key pair at boot. You can use this key to provide secure access to the instance. Amazon EC2 public images use this feature to provide secure access without passwords. For more information, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.
For troubleshooting, see What To Do If An Instance Immediately Terminates, and Troubleshooting Connecting to Your Instance in the Amazon Elastic Compute Cloud User Guide.
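A minimal RunInstances sketch matching the defaults described above (no subnet or security group specified, so the account's default VPC settings apply). The AMI ID is a placeholder, and the enum constant (`ec2.InstanceTypeT2Micro`), `aws.Int64` pointer helpers, and request/`Send(ctx)` pattern reflect the pre-GA code generation as assumptions.

```go
// Sketch only: minimal launch of a single instance with default networking.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.RunInstancesRequest(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI ID
		InstanceType: ec2.InstanceTypeT2Micro,
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, inst := range resp.Instances {
		if inst.InstanceId != nil {
			fmt.Println("launched", *inst.InstanceId) // instance is usable once it reaches the running state
		}
	}
}
```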
", "RunScheduledInstances": "Launches the specified Scheduled Instances.
Before you can launch a Scheduled Instance, you must purchase it and obtain an identifier using PurchaseScheduledInstances.
You must launch a Scheduled Instance during its scheduled time period. You can't stop or reboot a Scheduled Instance, but you can terminate it as needed. If you terminate a Scheduled Instance before the current scheduled time period ends, you can launch it again after a few minutes. For more information, see Scheduled Instances in the Amazon Elastic Compute Cloud User Guide.
", "SearchLocalGatewayRoutes": "Searches for routes in the specified local gateway route table.
", "SearchTransitGatewayMulticastGroups": "Searches one or more transit gateway multicast groups and returns the group membership information.
", @@ -1226,6 +1229,8 @@ "DeleteVpnGatewayRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicates whether to deregister all tag keys in the current Region. Specify false
to deregister all tag keys.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
When true
, includes the health status for all instances. When false
, includes the health status for running instances only.
Default: false
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
If set to true
, the interface is deleted when the instance is terminated. You can specify true
only if creating a new network interface when launching an instance.
Indicates whether this IPv4 address is the primary private IP address of the network interface.
", "InstanceSpecification$ExcludeBootVolume": "Excludes the root volume from being snapshotted.
", + "InstanceTagNotificationAttribute$IncludeAllTagsOfInstance": "Indicates wheter all tag keys in the current Region are registered to appear in scheduled event notifications. true
indicates that all tag keys in the current Region are registered.
Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.
Default: false
Indicates whether the EBS volume is encrypted.
", "LaunchTemplateEbsBlockDevice$DeleteOnTermination": "Indicates whether the EBS volume is deleted on instance termination.
", @@ -1514,6 +1521,8 @@ "RebootInstancesRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Set to true
to enable enhanced networking with ENA for the AMI and any instances that you launch from the AMI.
This option is supported only for HVM AMIs. Specifying this option with a PV AMI can make instances launched from the AMI unreachable.
", + "RegisterInstanceEventNotificationAttributesRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicates whether to register all tag keys in the current Region. Specify true
to register all tag keys.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Contains the parameters for CreateReservedInstancesListing.
", "refs": { @@ -3603,6 +3617,22 @@ "refs": { } }, + "DeregisterInstanceEventNotificationAttributesRequest": { + "base": null, + "refs": { + } + }, + "DeregisterInstanceEventNotificationAttributesResult": { + "base": null, + "refs": { + } + }, + "DeregisterInstanceTagAttributeRequest": { + "base": "Information about the tag keys to deregister for the current Region. You can either specify individual tag keys or deregister all tag keys in the current Region. You must specify either IncludeAllTagsOfInstance
or InstanceTagKeys
in the request.
Information about the tag keys to deregister.
" + } + }, "DeregisterTransitGatewayMulticastGroupMembersRequest": { "base": null, "refs": { @@ -4157,6 +4187,16 @@ "refs": { } }, + "DescribeInstanceEventNotificationAttributesRequest": { + "base": null, + "refs": { + } + }, + "DescribeInstanceEventNotificationAttributesResult": { + "base": null, + "refs": { + } + }, "DescribeInstanceStatusRequest": { "base": null, "refs": { @@ -5929,7 +5969,7 @@ "DescribeHostReservationOfferingsRequest$Filter": "The filters.
instance-family
- The instance family of the offering (for example, m4
).
payment-option
- The payment option (NoUpfront
| PartialUpfront
| AllUpfront
).
The filters.
instance-family
- The instance family (for example, m4
).
payment-option
- The payment option (NoUpfront
| PartialUpfront
| AllUpfront
).
state
- The state of the reservation (payment-pending
| payment-failed
| active
| retired
).
tag
:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.
tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
auto-placement
- Whether auto-placement is enabled or disabled (on
| off
).
availability-zone
- The Availability Zone of the host.
client-token
- The idempotency token that you provided when you allocated the host.
host-reservation-id
- The ID of the reservation assigned to this host.
instance-type
- The instance type size that the Dedicated Host is configured to support.
state
- The allocation state of the Dedicated Host (available
| under-assessment
| permanent-failure
| released
| released-permanent-failure
).
tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
instance-id
- The ID of the instance.
state
- The state of the association (associating
| associated
| disassociating
| disassociated
).
The filters.
instance-id
- The ID of the instance.
state
- The state of the association (associating
| associated
| disassociating
).
The filters.
architecture
- The image architecture (i386
| x86_64
| arm64
).
block-device-mapping.delete-on-termination
- A Boolean value that indicates whether the Amazon EBS volume is deleted on instance termination.
block-device-mapping.device-name
- The device name specified in the block device mapping (for example, /dev/sdh
or xvdh
).
block-device-mapping.snapshot-id
- The ID of the snapshot used for the EBS volume.
block-device-mapping.volume-size
- The volume size of the EBS volume, in GiB.
block-device-mapping.volume-type
- The volume type of the EBS volume (gp2
| io1
| st1
| sc1
| standard
).
block-device-mapping.encrypted
- A Boolean that indicates whether the EBS volume is encrypted.
description
- The description of the image (provided during image creation).
ena-support
- A Boolean that indicates whether enhanced networking with ENA is enabled.
hypervisor
- The hypervisor type (ovm
| xen
).
image-id
- The ID of the image.
image-type
- The image type (machine
| kernel
| ramdisk
).
is-public
- A Boolean that indicates whether the image is public.
kernel-id
- The kernel ID.
manifest-location
- The location of the image manifest.
name
- The name of the AMI (provided during image creation).
owner-alias
- String value from an Amazon-maintained list (amazon
| aws-marketplace
| microsoft
) of snapshot owners. Not to be confused with the user-configured AWS account alias, which is set from the IAM console.
owner-id
- The AWS account ID of the image owner.
platform
- The platform. To only list Windows-based AMIs, use windows
.
product-code
- The product code.
product-code.type
- The type of the product code (devpay
| marketplace
).
ramdisk-id
- The RAM disk ID.
root-device-name
- The device name of the root device volume (for example, /dev/sda1
).
root-device-type
- The type of the root device volume (ebs
| instance-store
).
state
- The state of the image (available
| pending
| failed
).
state-reason-code
- The reason code for the state change.
state-reason-message
- The message for the state change.
sriov-net-support
- A value of simple
indicates that enhanced networking with the Intel 82599 VF interface is enabled.
tag
:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.
tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
virtualization-type
- The virtualization type (paravirtual
| hvm
).
Filter tasks using the task-state
filter and one of the following values: active
, completed
, deleting
, or deleted
.
The filters.
", @@ -5940,7 +5980,7 @@ "DescribeInstancesRequest$Filters": "The filters.
affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
architecture - The instance architecture (i386 | x86_64 | arm64).
availability-zone - The Availability Zone of the instance.
block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2010-09-15T17:15:20.000Z.
block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
block-device-mapping.volume-id - The volume ID of the EBS volume.
client-token - The idempotency token you provided when you launched the instance.
dns-name - The public DNS name of the instance.
group-id - The ID of the security group for the instance. EC2-Classic only.
group-name - The name of the security group for the instance. EC2-Classic only.
hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
image-id - The ID of the image used to launch the instance.
instance-id - The ID of the instance.
instance-lifecycle - Indicates whether this is a Spot Instance or a Scheduled Instance (spot | scheduled).
instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-type - The type of instance (for example, t2.micro).
instance.group-id - The ID of the security group for the instance.
instance.group-name - The name of the security group for the instance.
ip-address - The public IPv4 address of the instance.
kernel-id - The kernel ID.
key-name - The name of the key pair used when the instance was launched.
launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
launch-time - The time when the instance was launched.
metadata-options.http-tokens - The metadata request authorization state (optional | required)
metadata-options.http-put-response-hop-limit - The http metadata request put response hop limit (integer, possible values 1 to 64)
metadata-options.http-endpoint - Enable or disable metadata access on http endpoint (enabled | disabled)
monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
network-interface.attachment.attachment-id - The ID of the interface attachment.
network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
network-interface.attachment.device-index - The device index to which the network interface is attached.
network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
network-interface.availability-zone - The Availability Zone for the network interface.
network-interface.description - The description of the network interface.
network-interface.group-id - The ID of a security group associated with the network interface.
network-interface.group-name - The name of a security group associated with the network interface.
network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
network-interface.mac-address - The MAC address of the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.owner-id - The ID of the owner of the network interface.
network-interface.private-dns-name - The private DNS name of the network interface.
network-interface.requester-id - The requester ID for the network interface.
network-interface.requester-managed - Indicates whether the network interface is being managed by AWS.
network-interface.status - The status of the network interface (available | in-use).
network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
network-interface.subnet-id - The ID of the subnet for the network interface.
network-interface.vpc-id - The ID of the VPC for the network interface.
owner-id - The AWS account ID of the instance owner.
placement-group-name - The name of the placement group for the instance.
placement-partition-number - The partition in which the instance is located.
platform - The platform. To list only Windows instances, use windows.
private-dns-name - The private IPv4 DNS name of the instance.
private-ip-address - The private IPv4 address of the instance.
product-code - The product code associated with the AMI used to launch the instance.
product-code.type - The type of product code (devpay | marketplace).
ramdisk-id - The RAM disk ID.
reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
requester-id - The ID of the entity that launched the instance on your behalf (for example, AWS Management Console, Auto Scaling, and so on).
reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
root-device-name - The device name of the root device volume (for example, /dev/sda1).
root-device-type - The type of the root device volume (ebs | instance-store).
source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
spot-instance-request-id - The ID of the Spot Instance request.
state-reason-code - The reason code for the state change.
state-reason-message - A message that describes the state change.
subnet-id - The ID of the subnet for the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
tenancy - The tenancy of an instance (dedicated | default | host).
virtualization-type - The virtualization type of the instance (paravirtual | hvm).
vpc-id - The ID of the VPC that the instance is running in.
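A hedged sketch of applying the instance filters above with the same pre-GA request/Send pattern; the tag key/value and the exact field shapes are assumptions for illustration, not taken from this model file.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Find running instances tagged Owner=TeamA.
	req := svc.DescribeInstancesRequest(&ec2.DescribeInstancesInput{
		Filters: []ec2.Filter{
			{Name: aws.String("instance-state-name"), Values: []string{"running"}},
			{Name: aws.String("tag:Owner"), Values: []string{"TeamA"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, reservation := range resp.Reservations {
		for _, instance := range reservation.Instances {
			fmt.Println(*instance.InstanceId)
		}
	}
}
```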
One or more filters.
attachment.state - The current state of the attachment between the gateway and the VPC (available). Present only if a VPC is attached.
attachment.vpc-id - The ID of an attached VPC.
internet-gateway-id - The ID of the Internet gateway.
owner-id - The ID of the AWS account that owns the internet gateway.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
fingerprint - The fingerprint of the key pair.
key-name - The name of the key pair.
The filters.
key-pair-id - The ID of the key pair.
fingerprint - The fingerprint of the key pair.
key-name - The name of the key pair.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
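A minimal sketch of the key pair filters above, again assuming the pre-GA request/Send client pattern; the key pair name is hypothetical, and the newly documented key-pair-id and tag-key filters are used the same way.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Look up a key pair by name and print its fingerprint.
	req := svc.DescribeKeyPairsRequest(&ec2.DescribeKeyPairsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("key-name"), Values: []string{"my-key-pair"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, keyPair := range resp.KeyPairs {
		fmt.Println(*keyPair.KeyName, *keyPair.KeyFingerprint)
	}
}
```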
One or more filters.
create-time - The time the launch template version was created.
ebs-optimized - A boolean that indicates whether the instance is optimized for Amazon EBS I/O.
iam-instance-profile - The ARN of the IAM instance profile.
image-id - The ID of the AMI.
instance-type - The instance type.
is-default-version - A boolean that indicates whether the launch template version is the default version.
kernel-id - The kernel ID.
ram-disk-id - The RAM disk ID.
One or more filters.
create-time - The time the launch template was created.
launch-template-name - The name of the launch template.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
", @@ -5954,7 +5994,7 @@ "DescribeNetworkAclsRequest$Filters": "One or more filters.
association.association-id - The ID of an association ID for the ACL.
association.network-acl-id - The ID of the network ACL involved in the association.
association.subnet-id - The ID of the subnet involved in the association.
default - Indicates whether the ACL is the default network ACL for the VPC.
entry.cidr - The IPv4 CIDR range specified in the entry.
entry.icmp.code - The ICMP code specified in the entry, if any.
entry.icmp.type - The ICMP type specified in the entry, if any.
entry.ipv6-cidr - The IPv6 CIDR range specified in the entry.
entry.port-range.from - The start of the port range specified in the entry.
entry.port-range.to - The end of the port range specified in the entry.
entry.protocol - The protocol specified in the entry (tcp | udp | icmp or a protocol number).
entry.rule-action - Allows or denies the matching traffic (allow | deny).
entry.rule-number - The number of an entry (in other words, rule) in the set of ACL entries.
network-acl-id - The ID of the network ACL.
owner-id - The ID of the AWS account that owns the network ACL.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC for the network ACL.
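A hedged sketch of the network ACL filters above, using the same assumed request/Send pattern; the VPC ID is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Find the default network ACL of one VPC.
	req := svc.DescribeNetworkAclsRequest(&ec2.DescribeNetworkAclsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("vpc-id"), Values: []string{"vpc-0123456789abcdef0"}},
			{Name: aws.String("default"), Values: []string{"true"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, acl := range resp.NetworkAcls {
		fmt.Println(*acl.NetworkAclId)
	}
}
```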
One or more filters.
network-interface-permission.network-interface-permission-id - The ID of the permission.
network-interface-permission.network-interface-id - The ID of the network interface.
network-interface-permission.aws-account-id - The AWS account ID.
network-interface-permission.aws-service - The AWS service.
network-interface-permission.permission - The type of permission (INSTANCE-ATTACH | EIP-ASSOCIATE).
One or more filters.
addresses.private-ip-address - The private IPv4 addresses associated with the network interface.
addresses.primary - Whether the private IPv4 address is the primary IP address associated with the network interface.
addresses.association.public-ip - The association ID returned when the network interface was associated with the Elastic IP address (IPv4).
addresses.association.owner-id - The owner ID of the addresses associated with the network interface.
association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
association.public-dns-name - The public DNS name for the network interface (IPv4).
attachment.attachment-id - The ID of the interface attachment.
attachment.attach-time - The time that the network interface was attached to an instance.
attachment.delete-on-termination - Indicates whether the attachment is deleted when an instance is terminated.
attachment.device-index - The device index to which the network interface is attached.
attachment.instance-id - The ID of the instance to which the network interface is attached.
attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
attachment.nat-gateway-id - The ID of the NAT gateway to which the network interface is attached.
attachment.status - The status of the attachment (attaching | attached | detaching | detached).
availability-zone - The Availability Zone of the network interface.
description - The description of the network interface.
group-id - The ID of a security group associated with the network interface.
group-name - The name of a security group associated with the network interface.
ipv6-addresses.ipv6-address - An IPv6 address associated with the network interface.
mac-address - The MAC address of the network interface.
network-interface-id - The ID of the network interface.
owner-id - The AWS account ID of the network interface owner.
private-ip-address - The private IPv4 address or addresses of the network interface.
private-dns-name - The private DNS name of the network interface (IPv4).
requester-id - The ID of the entity that launched the instance on your behalf (for example, AWS Management Console, Auto Scaling, and so on).
requester-managed - Indicates whether the network interface is being managed by an AWS service (for example, AWS Management Console, Auto Scaling, and so on).
source-dest-check - Indicates whether the network interface performs source/destination checking. A value of true means checking is enabled, and false means checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
status - The status of the network interface. If the network interface is not attached to an instance, the status is available; if a network interface is attached to an instance the status is in-use.
subnet-id - The ID of the subnet for the network interface.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC for the network interface.
The filters.
group-name - The name of the placement group.
state - The state of the placement group (pending | available | deleting | deleted).
strategy - The strategy of the placement group (cluster | spread | partition).
The filters.
group-name - The name of the placement group.
state - The state of the placement group (pending | available | deleting | deleted).
strategy - The strategy of the placement group (cluster | spread | partition).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
One or more filters.
prefix-list-id: The ID of a prefix list.
prefix-list-name: The name of a prefix list.
One or more filters.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
endpoint - The endpoint of the Region (for example, ec2.us-east-1.amazonaws.com).
opt-in-status - The opt-in status of the Region (opt-in-not-required | opted-in | not-opted-in).
region-name - The name of the Region (for example, us-east-1).
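A hedged sketch of the Region filters above with the same assumed request/Send pattern; the opt-in-status values used are only an example.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// List only Regions that are enabled for this account.
	req := svc.DescribeRegionsRequest(&ec2.DescribeRegionsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("opt-in-status"), Values: []string{"opt-in-not-required", "opted-in"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, region := range resp.Regions {
		fmt.Println(*region.RegionName, *region.Endpoint)
	}
}
```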
One or more filters.
availability-zone-group - The Availability Zone group.
create-time - The time stamp when the Spot Instance request was created.
fault-code - The fault code related to the request.
fault-message - The fault message related to the request.
instance-id - The ID of the instance that fulfilled the request.
launch-group - The Spot Instance launch group.
launch.block-device-mapping.delete-on-termination - Indicates whether the EBS volume is deleted on instance termination.
launch.block-device-mapping.device-name - The device name for the volume in the block device mapping (for example, /dev/sdh or xvdh).
launch.block-device-mapping.snapshot-id - The ID of the snapshot for the EBS volume.
launch.block-device-mapping.volume-size - The size of the EBS volume, in GiB.
launch.block-device-mapping.volume-type - The type of EBS volume: gp2 for General Purpose SSD, io1 for Provisioned IOPS SSD, st1 for Throughput Optimized HDD, sc1 for Cold HDD, or standard for Magnetic.
launch.group-id - The ID of the security group for the instance.
launch.group-name - The name of the security group for the instance.
launch.image-id - The ID of the AMI.
launch.instance-type - The type of instance (for example, m3.medium).
launch.kernel-id - The kernel ID.
launch.key-name - The name of the key pair the instance launched with.
launch.monitoring-enabled - Whether detailed monitoring is enabled for the Spot Instance.
launch.ramdisk-id - The RAM disk ID.
launched-availability-zone - The Availability Zone in which the request is launched.
network-interface.addresses.primary - Indicates whether the IP address is the primary private IP address.
network-interface.delete-on-termination - Indicates whether the network interface is deleted when the instance is terminated.
network-interface.description - A description of the network interface.
network-interface.device-index - The index of the device for the network interface attachment on the instance.
network-interface.group-id - The ID of the security group associated with the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.private-ip-address - The primary private IP address of the network interface.
network-interface.subnet-id - The ID of the subnet for the instance.
product-description - The product description associated with the instance (Linux/UNIX | Windows).
spot-instance-request-id - The Spot Instance request ID.
spot-price - The maximum hourly price for any Spot Instance launched to fulfill the request.
state - The state of the Spot Instance request (open | active | closed | cancelled | failed). Spot request status information can help you track your Amazon EC2 Spot Instance requests. For more information, see Spot Request Status in the Amazon EC2 User Guide for Linux Instances.
status-code - The short code describing the most recent evaluation of your Spot Instance request.
status-message - The message explaining the status of the Spot Instance request.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
type - The type of Spot Instance request (one-time | persistent).
valid-from - The start date of the request.
valid-until - The end date of the request.
One or more filters.
availability-zone - The Availability Zone for which prices should be returned.
instance-type - The type of instance (for example, m3.medium).
product-description - The product description for the Spot price (Linux/UNIX | SUSE Linux | Windows | Linux/UNIX (Amazon VPC) | SUSE Linux (Amazon VPC) | Windows (Amazon VPC)).
spot-price - The Spot price. The value must match exactly (or use wildcards; greater than or less than comparison is not supported).
timestamp - The time stamp of the Spot price history, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). You can use wildcards (* and ?). Greater than or less than comparison is not supported.
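A hedged sketch of querying Spot price history with the filters above; it assumes the pre-GA request/Send pattern and the usual SpotPriceHistory result field names, and the instance type and product description are only examples.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Recent Spot prices for one instance type running Linux/UNIX in a VPC.
	req := svc.DescribeSpotPriceHistoryRequest(&ec2.DescribeSpotPriceHistoryInput{
		Filters: []ec2.Filter{
			{Name: aws.String("instance-type"), Values: []string{"m3.medium"}},
			{Name: aws.String("product-description"), Values: []string{"Linux/UNIX (Amazon VPC)"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, price := range resp.SpotPriceHistory {
		fmt.Println(*price.AvailabilityZone, *price.SpotPrice)
	}
}
```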
One or more filters.
availability-zone - The Availability Zone for the subnet. You can also use availabilityZone as the filter name.
availability-zone-id - The ID of the Availability Zone for the subnet. You can also use availabilityZoneId as the filter name.
available-ip-address-count - The number of IPv4 addresses in the subnet that are available.
cidr-block - The IPv4 CIDR block of the subnet. The CIDR block you specify must exactly match the subnet's CIDR block for information to be returned for the subnet. You can also use cidr or cidrBlock as the filter names.
default-for-az - Indicates whether this is the default subnet for the Availability Zone. You can also use defaultForAz as the filter name.
ipv6-cidr-block-association.ipv6-cidr-block - An IPv6 CIDR block associated with the subnet.
ipv6-cidr-block-association.association-id - An association ID for an IPv6 CIDR block associated with the subnet.
ipv6-cidr-block-association.state - The state of an IPv6 CIDR block associated with the subnet.
owner-id - The ID of the AWS account that owns the subnet.
state - The state of the subnet (pending | available).
subnet-arn - The Amazon Resource Name (ARN) of the subnet.
subnet-id - The ID of the subnet.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC for the subnet.
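A hedged sketch of the subnet filters above, still assuming the pre-GA request/Send client; the VPC ID is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Available subnets in one VPC.
	req := svc.DescribeSubnetsRequest(&ec2.DescribeSubnetsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("vpc-id"), Values: []string{"vpc-0123456789abcdef0"}},
			{Name: aws.String("state"), Values: []string{"available"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, subnet := range resp.Subnets {
		fmt.Println(*subnet.SubnetId, *subnet.CidrBlock)
	}
}
```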
The filters.
key - The tag key.
resource-id - The ID of the resource.
resource-type - The resource type (customer-gateway | dedicated-host | dhcp-options | elastic-ip | fleet | fpga-image | image | instance | host-reservation | internet-gateway | launch-template | natgateway | network-acl | network-interface | placement-group | reserved-instances | route-table | security-group | snapshot | spot-instances-request | subnet | volume | vpc | vpc-endpoint | vpc-endpoint-service | vpc-peering-connection | vpn-connection | vpn-gateway).
tag:<key> - The key/value combination of the tag. For example, specify \"tag:Owner\" for the filter name and \"TeamA\" for the filter value to find resources with the tag \"Owner=TeamA\".
value - The tag value.
The filters.
key - The tag key.
resource-id - The ID of the resource.
resource-type - The resource type (customer-gateway | dedicated-host | dhcp-options | elastic-ip | fleet | fpga-image | host-reservation | image | instance | internet-gateway | key-pair | launch-template | natgateway | network-acl | network-interface | placement-group | reserved-instances | route-table | security-group | snapshot | spot-instances-request | subnet | volume | vpc | vpc-endpoint | vpc-endpoint-service | vpc-peering-connection | vpn-connection | vpn-gateway).
tag:<key> - The key/value combination of the tag. For example, specify \"tag:Owner\" for the filter name and \"TeamA\" for the filter value to find resources with the tag \"Owner=TeamA\".
value - The tag value.
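A hedged sketch of the DescribeTags filters above, using the same assumed request/Send pattern; the key name is illustrative only.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// List Owner tags on instances.
	req := svc.DescribeTagsRequest(&ec2.DescribeTagsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("resource-type"), Values: []string{"instance"}},
			{Name: aws.String("key"), Values: []string{"Owner"}},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, tag := range resp.Tags {
		fmt.Println(*tag.ResourceId, *tag.Key, *tag.Value)
	}
}
```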
One or more filters. The possible values are:
description: The Traffic Mirror filter description.
traffic-mirror-filter-id: The ID of the Traffic Mirror filter.
One or more filters. The possible values are:
description: The Traffic Mirror session description.
network-interface-id: The ID of the Traffic Mirror session network interface.
owner-id: The ID of the account that owns the Traffic Mirror session.
packet-length: The assigned number of packets to mirror.
session-number: The assigned session number.
traffic-mirror-filter-id: The ID of the Traffic Mirror filter.
traffic-mirror-session-id: The ID of the Traffic Mirror session.
traffic-mirror-target-id: The ID of the Traffic Mirror target.
virtual-network-id: The virtual network ID of the Traffic Mirror session.
One or more filters. The possible values are:
description: The Traffic Mirror target description.
network-interface-id: The ID of the Traffic Mirror session network interface.
network-load-balancer-arn: The Amazon Resource Name (ARN) of the Network Load Balancer that is associated with the session.
owner-id: The ID of the account that owns the Traffic Mirror session.
traffic-mirror-target-id: The ID of the Traffic Mirror target.
Describes the disks for the instance type.
" } }, + "InstanceTagKeySet": { + "base": null, + "refs": { + "DeregisterInstanceTagAttributeRequest$InstanceTagKeys": "Information about the tag keys to deregister.
", + "InstanceTagNotificationAttribute$InstanceTagKeys": "The registered tag keys.
", + "RegisterInstanceTagAttributeRequest$InstanceTagKeys": "The tag keys to register.
" + } + }, + "InstanceTagNotificationAttribute": { + "base": "Describes the registered tag keys for the current Region.
", + "refs": { + "DeregisterInstanceEventNotificationAttributesResult$InstanceTagAttribute": "The resulting set of tag keys.
", + "DescribeInstanceEventNotificationAttributesResult$InstanceTagAttribute": "Information about the registered tag keys.
", + "RegisterInstanceEventNotificationAttributesResult$InstanceTagAttribute": "The resulting set of tag keys.
" + } + }, "InstanceType": { "base": null, "refs": { @@ -7740,7 +7796,7 @@ "Phase1DHGroupNumbersRequestListValue$Value": "The Diffie-Hellmann group number.
", "Phase2DHGroupNumbersListValue$Value": "The Diffie-Hellmann group number.
", "Phase2DHGroupNumbersRequestListValue$Value": "The Diffie-Hellmann group number.
", - "Placement$PartitionNumber": "The number of the partition the instance is in. Valid only if the placement group strategy is set to partition
.
This parameter is not supported by .
", + "Placement$PartitionNumber": "The number of the partition the instance is in. Valid only if the placement group strategy is set to partition
.
This parameter is not supported by CreateFleet.
", "PlacementGroup$PartitionCount": "The number of partitions. Valid only if strategy is set to partition
.
The first port in the range.
", "PortRange$To": "The last port in the range.
", @@ -8046,6 +8102,7 @@ "KeyPairId": { "base": null, "refs": { + "DeleteKeyPairRequest$KeyPairId": "The ID of the key pair.
", "KeyPairIdStringList$member": null } }, @@ -9938,6 +9995,7 @@ "PlacementGroup": { "base": "Describes a placement group.
", "refs": { + "CreatePlacementGroupResult$PlacementGroup": null, "PlacementGroupList$member": null } }, @@ -10432,6 +10490,22 @@ "refs": { } }, + "RegisterInstanceEventNotificationAttributesRequest": { + "base": null, + "refs": { + } + }, + "RegisterInstanceEventNotificationAttributesResult": { + "base": null, + "refs": { + } + }, + "RegisterInstanceTagAttributeRequest": { + "base": "Information about the tag keys to register for the current Region. You can either specify individual tag keys or register all tag keys in the current Region. You must specify either IncludeAllTagsOfInstance
or InstanceTagKeys
in the request
Information about the tag keys to register.
" + } + }, "RegisterTransitGatewayMulticastGroupMembersRequest": { "base": null, "refs": { @@ -11665,7 +11739,7 @@ "LaunchTemplateSpotMarketOptionsRequest$SpotInstanceType": "The Spot Instance request type.
", "RequestSpotInstancesRequest$Type": "The Spot Instance request type.
Default: one-time
The Spot Instance request type.
", - "SpotMarketOptions$SpotInstanceType": "The Spot Instance request type. For RunInstances, persistent Spot Instance requests are only supported when InstanceInterruptionBehavior is set to either hibernate
or stop
.
The Spot Instance request type. For RunInstances, persistent Spot Instance requests are only supported when InstanceInterruptionBehavior is set to either hibernate
or stop
.
A unique name for the key pair.
", "ImportKeyPairResult$KeyFingerprint": "The MD5 public key fingerprint as specified in section 4 of RFC 4716.
", "ImportKeyPairResult$KeyName": "The key pair name you provided.
", + "ImportKeyPairResult$KeyPairId": "The ID of the resulting key pair.
", "ImportSnapshotRequest$ClientToken": "Token to enable idempotency for VM import requests.
", "ImportSnapshotRequest$Description": "The description string for the import snapshot task.
", "ImportSnapshotRequest$RoleName": "The name of the role to use when not using the default role, 'vmimport'.
", @@ -12591,6 +12666,7 @@ "InstanceStatus$OutpostArn": "The Amazon Resource Name (ARN) of the Outpost.
", "InstanceStatus$InstanceId": "The ID of the instance.
", "InstanceStatusEvent$Description": "A description of the event.
After a scheduled event is completed, it can still be described for up to a week. If the event has been completed, this description starts with the following text: [Completed].
", + "InstanceTagKeySet$member": null, "InstanceUsage$AccountId": "The ID of the AWS account that is making use of the Capacity Reservation.
", "InternetGateway$InternetGatewayId": "The ID of the internet gateway.
", "InternetGateway$OwnerId": "The ID of the AWS account that owns the internet gateway.
", @@ -12788,12 +12864,12 @@ "Phase2EncryptionAlgorithmsRequestListValue$Value": "The encryption algorithm.
", "Phase2IntegrityAlgorithmsListValue$Value": "The integrity algorithm.
", "Phase2IntegrityAlgorithmsRequestListValue$Value": "The integrity algorithm.
", - "Placement$AvailabilityZone": "The Availability Zone of the instance.
If not specified, an Availability Zone will be automatically chosen for you based on the load balancing criteria for the Region.
This parameter is not supported by .
", - "Placement$Affinity": "The affinity setting for the instance on the Dedicated Host. This parameter is not supported for the ImportInstance command.
This parameter is not supported by .
", + "Placement$AvailabilityZone": "The Availability Zone of the instance.
If not specified, an Availability Zone will be automatically chosen for you based on the load balancing criteria for the Region.
This parameter is not supported by CreateFleet.
", + "Placement$Affinity": "The affinity setting for the instance on the Dedicated Host. This parameter is not supported for the ImportInstance command.
This parameter is not supported by CreateFleet.
", "Placement$GroupName": "The name of the placement group the instance is in.
", - "Placement$HostId": "The ID of the Dedicated Host on which the instance resides. This parameter is not supported for the ImportInstance command.
This parameter is not supported by .
", - "Placement$SpreadDomain": "Reserved for future use.
This parameter is not supported by .
", - "Placement$HostResourceGroupArn": "The ARN of the host resource group in which to launch the instances. If you specify a host resource group ARN, omit the Tenancy parameter or set it to host
.
This parameter is not supported by .
", + "Placement$HostId": "The ID of the Dedicated Host on which the instance resides. This parameter is not supported for the ImportInstance command.
This parameter is not supported by CreateFleet.
", + "Placement$SpreadDomain": "Reserved for future use.
This parameter is not supported by CreateFleet.
", + "Placement$HostResourceGroupArn": "The ARN of the host resource group in which to launch the instances. If you specify a host resource group ARN, omit the Tenancy parameter or set it to host
.
This parameter is not supported by CreateFleet.
", "PlacementGroup$GroupName": "The name of the placement group.
", "PlacementGroup$GroupId": "The ID of the placement group.
", "PlacementResponse$GroupName": "The name of the placement group that the instance is in.
", @@ -13038,7 +13114,7 @@ "SpotFleetLaunchSpecification$UserData": "The Base64-encoded user data that instances use when starting up.
", "SpotFleetRequestConfig$SpotFleetRequestId": "The ID of the Spot Fleet request.
", "SpotFleetRequestConfigData$ClientToken": "A unique, case-sensitive identifier that you provide to ensure the idempotency of your listings. This helps to avoid duplicate listings. For more information, see Ensuring Idempotency.
", - "SpotFleetRequestConfigData$IamFleetRole": "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see Spot Fleet Prerequisites in the Amazon EC2 User Guide for Linux Instances. Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request using CancelSpotFleetRequests or when the Spot Fleet request expires, if you set TerminateInstancesWithExpiration
.
The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that grants the Spot Fleet the permission to request, launch, terminate, and tag instances on your behalf. For more information, see Spot Fleet Prerequisites in the Amazon EC2 User Guide for Linux Instances. Spot Fleet can terminate Spot Instances on your behalf when you cancel its Spot Fleet request using CancelSpotFleetRequests or when the Spot Fleet request expires, if you set TerminateInstancesWithExpiration
.
The maximum price per unit hour that you are willing to pay for a Spot Instance. The default is the On-Demand price.
", "SpotFleetRequestConfigData$OnDemandMaxTotalPrice": "The maximum amount per hour for On-Demand Instances that you're willing to pay. You can use the onDemandMaxTotalPrice
parameter, the spotMaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, Spot Fleet will launch instances until it reaches the maximum amount you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
The maximum amount per hour for Spot Instances that you're willing to pay. You can use the spotdMaxTotalPrice
parameter, the onDemandMaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, Spot Fleet will launch instances until it reaches the maximum amount you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
Any tags assigned to the Dedicated Host Reservation.
", "Image$Tags": "Any tags assigned to the image.
", "ImportImageTask$Tags": "The tags for the import image task.
", + "ImportKeyPairResult$Tags": "The tags applied to the imported key pair.
", "ImportSnapshotTask$Tags": "The tags for the import snapshot task.
", "Instance$Tags": "Any tags assigned to the instance.
", "InternetGateway$Tags": "Any tags assigned to the internet gateway.
", "Ipv6Pool$Tags": "Any tags for the address pool.
", + "KeyPair$Tags": "Any tags applied to the key pair.
", "KeyPairInfo$Tags": "Any tags applied to the key pair.
", "LaunchTemplate$Tags": "The tags for the launch template.
", "LaunchTemplateTagSpecification$Tags": "The tags for the resource.
", @@ -13495,8 +13573,10 @@ "CreateFleetRequest$TagSpecifications": "The key-value pair for tagging the EC2 Fleet request on creation. The value for ResourceType
must be fleet
, otherwise the fleet request fails. To tag instances at launch, specify the tags in the launch template. For information about tagging after launch, see Tagging Your Resources.
The tags to apply to the flow logs.
", "CreateFpgaImageRequest$TagSpecifications": "The tags to apply to the FPGA image during creation.
", + "CreateKeyPairRequest$TagSpecifications": "The tags to apply to the new key pair.
", "CreateLaunchTemplateRequest$TagSpecifications": "The tags to apply to the launch template during creation.
", "CreateNatGatewayRequest$TagSpecifications": "The tags to assign to the NAT gateway.
", + "CreatePlacementGroupRequest$TagSpecifications": "The tags to apply to the new placement group.
", "CreateSnapshotRequest$TagSpecifications": "The tags to apply to the snapshot during creation.
", "CreateSnapshotsRequest$TagSpecifications": "Tags to apply to every snapshot specified by the instance.
", "CreateTrafficMirrorFilterRequest$TagSpecifications": "The tags to assign to a Traffic Mirror filter.
", @@ -13510,6 +13590,7 @@ "CreateVolumeRequest$TagSpecifications": "The tags to apply to the volume during creation.
", "CreateVpcEndpointRequest$TagSpecifications": "The tags to associate with the endpoint.
", "CreateVpcEndpointServiceConfigurationRequest$TagSpecifications": "The tags to associate with the service.
", + "ImportKeyPairRequest$TagSpecifications": "The tags to apply to the imported key pair.
", "RunInstancesRequest$TagSpecifications": "The tags to apply to the resources during launch. You can only tag instances and volumes on launch. The specified tags are applied to all instances or volumes that are created during launch. To tag a resource after it has been created, see CreateTags.
", "SpotFleetRequestConfigData$TagSpecifications": "The key-value pair for tagging the Spot Fleet request on creation. The value for ResourceType
must be spot-fleet-request
, otherwise the Spot Fleet request fails. To tag instances at launch, specify the tags in the launch template (valid only if you use LaunchTemplateConfigs
) or in the SpotFleetTagSpecification
(valid only if you use LaunchSpecifications
). For information about tagging after launch, see Tagging Your Resources.
The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
, or both to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and
The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
, or both to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in OnDemandOptions and SpotOptions
The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
The number of units to request. You can choose to set the target capacity as the number of instances. Or you can set the target capacity to a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and .
The number of units to request. You can choose to set the target capacity as the number of instances. Or you can set the target capacity to a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in OnDemandOptionsRequest and SpotOptionsRequest.
The number of units to request.
", "ModifyFleetRequest$TargetCapacitySpecification": "The size of the EC2 Fleet.
" @@ -13607,7 +13688,7 @@ "DescribeReservedInstancesOfferingsRequest$InstanceTenancy": "The tenancy of the instances covered by the reservation. A Reserved Instance with a tenancy of dedicated
is applied to instances that run in a VPC on single-tenant hardware (i.e., Dedicated Instances).
Important: The host
value cannot be used with this parameter. Use the default
or dedicated
values only.
Default: default
The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated
runs on single-tenant hardware.
The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated runs on single-tenant hardware.
", - "Placement$Tenancy": "The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated
runs on single-tenant hardware. The host
tenancy is not supported for the ImportInstance command.
This parameter is not supported by .
", + "Placement$Tenancy": "The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated
runs on single-tenant hardware. The host
tenancy is not supported for the ImportInstance command.
This parameter is not supported by CreateFleet.
", "ReservedInstances$InstanceTenancy": "The tenancy of the instance.
", "ReservedInstancesOffering$InstanceTenancy": "The tenancy of the instance.
", "SpotPlacement$Tenancy": "The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated
runs on single-tenant hardware. The host
tenancy is not supported for Spot Instances.
Creates a new capacity provider. Capacity providers are associated with an Amazon ECS cluster and are used in capacity provider strategies to facilitate cluster auto scaling.
Only capacity providers using an Auto Scaling group can be created. Amazon ECS tasks on AWS Fargate use the FARGATE
and FARGATE_SPOT
capacity providers which are already created and available to all accounts in Regions supported by AWS Fargate.
Creates a new Amazon ECS cluster. By default, your account receives a default
cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster
action.
When you call the CreateCluster API operation, Amazon ECS attempts to create the Amazon ECS service-linked role for your account so that required resources in other AWS services can be managed on your behalf. However, if the IAM user that makes the call does not have permissions to create the service-linked role, it is not created. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount
, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and the container instance that they're hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment is triggered by changing properties, such as the task definition or the desired count of a service, with an UpdateService operation. The default value for a replica service for minimumHealthyPercent
is 100%. The default value for a daemon service for minimumHealthyPercent
is 0%.
If a service is using the ECS
deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING
state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and they're reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the ECS
deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using either the CODE_DEPLOY
or EXTERNAL
deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, although they're currently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement in your cluster using the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy) with the placementStrategy
parameter):
Sort the valid container instances, giving priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount
, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, see the UpdateService action.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and the container instance that they're hosted on is reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA
- The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON
- The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service Scheduler Concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. The deployment is triggered by changing properties, such as the task definition or the desired count of a service, with an UpdateService operation. The default value for a replica service for minimumHealthyPercent
is 100%. The default value for a daemon service for minimumHealthyPercent
is 0%.
If a service is using the ECS
deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING
state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they're in the RUNNING
state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING
state and they're reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the ECS
deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
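A hedged sketch of the deployment configuration used in the example above (four desired tasks, 50% minimum healthy, 200% maximum); names are placeholders and the field names should be checked against the generated ecs package:

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// createWebService mirrors the example above: with a desired count of 4,
// minimumHealthyPercent 50 lets the scheduler stop two tasks before starting
// replacements, and maximumPercent 200 lets it run up to eight tasks during
// a deployment.
func createWebService(ctx context.Context, cfg aws.Config) error {
	svc := ecs.New(cfg)
	req := svc.CreateServiceRequest(&ecs.CreateServiceInput{
		Cluster:        aws.String("my-cluster"), // placeholder
		ServiceName:    aws.String("web"),        // placeholder
		TaskDefinition: aws.String("web:3"),      // placeholder
		DesiredCount:   aws.Int64(4),
		DeploymentConfiguration: &ecs.DeploymentConfiguration{
			MinimumHealthyPercent: aws.Int64(50),
			MaximumPercent:        aws.Int64(200),
		},
	})
	_, err := req.Send(ctx)
	return err
}
```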
If a service is using either the CODE_DEPLOY
or EXTERNAL
deployment controller types and tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, although they're currently visible when describing your service.
When creating a service that uses the EXTERNAL
deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet operation. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement in your cluster using the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy with the placementStrategy
parameter):
Sort the valid container instances, giving priority to instances that have the fewest number of running tasks for this service in their respective Availability Zone. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
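For reference, a small sketch of a placementStrategy that makes the default Availability Zone balancing explicit and then packs by memory within a zone. The strategy types are assumed to follow the SDK's usual enum naming:

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// azSpreadStrategy spreads service tasks across Availability Zones and then
// binpacks by memory within a zone. Pass the result as the PlacementStrategy
// field of CreateServiceInput.
func azSpreadStrategy() []ecs.PlacementStrategy {
	return []ecs.PlacementStrategy{
		{
			Type:  ecs.PlacementStrategyTypeSpread,
			Field: aws.String("attribute:ecs.availability-zone"),
		},
		{
			Type:  ecs.PlacementStrategyTypeBinpack,
			Field: aws.String("memory"),
		},
	}
}
```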
Create a task set in the specified cluster and service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
Disables an account setting for a specified IAM user, IAM role, or the root user for an account.
", "DeleteAttributes": "Deletes one or more custom attributes from an Amazon ECS resource.
", @@ -47,7 +47,7 @@ "UpdateClusterSettings": "Modifies the settings to use for a cluster.
", "UpdateContainerAgent": "Updates the Amazon ECS container agent on a specified container instance. Updating the Amazon ECS container agent does not interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with the Amazon ECS-optimized AMI or another operating system.
UpdateContainerAgent
requires the Amazon ECS-optimized AMI or Amazon Linux with the ecs-init
service installed and running. For help updating the Amazon ECS container agent on other operating systems, see Manually Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide.
Modifies the status of an Amazon ECS container instance.
Once a container instance has reached an ACTIVE
state, you can change the status of a container instance to DRAINING
to manually remove an instance from a cluster, for example to perform system updates, update the Docker daemon, or scale down the cluster size.
A container instance cannot be changed to DRAINING
until it has reached an ACTIVE
status. If the instance is in any other status, an error is returned.
When you set a container instance to DRAINING
, Amazon ECS prevents new tasks from being scheduled for placement on the container instance, and replacement service tasks are started on other container instances in the cluster if the resources are available. Service tasks on the container instance that are in the PENDING
state are stopped immediately.
Service tasks on the container instance that are in the RUNNING
state are stopped and replaced according to the service's deployment configuration parameters, minimumHealthyPercent
and maximumPercent
. You can change the deployment configuration of your service using UpdateService.
If minimumHealthyPercent
is below 100%, the scheduler can ignore desiredCount
temporarily during task replacement. For example, if desiredCount
is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. If the minimum is 100%, the service scheduler can't remove existing tasks until the replacement tasks are considered healthy. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING
state. Tasks for services that use a load balancer are considered healthy if they are in the RUNNING
state and the container instance they are hosted on is reported as healthy by the load balancer.
The maximumPercent
parameter represents an upper limit on the number of running tasks during task replacement, which enables you to define the replacement batch size. For example, if desiredCount
is four tasks, a maximum of 200% starts four new tasks before stopping the four tasks to be drained, provided that the cluster resources required to do this are available. If the maximum is 100%, then replacement tasks can't start until the draining tasks have stopped.
Any PENDING
or RUNNING
tasks that do not belong to a service are not affected. You must wait for them to finish or stop them manually.
A container instance has completed draining when it has no more RUNNING
tasks. You can verify this using ListTasks.
When a container instance has been drained, you can set it back to ACTIVE
status; once it reaches that status, the Amazon ECS scheduler can begin scheduling tasks on the instance again.
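A minimal Go sketch of the drain workflow described above, using the pre-GA request/Send pattern; the cluster and instance identifiers are placeholders, and polling/backoff is left to the caller:

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// drainInstance sets a container instance to DRAINING and then checks for
// remaining RUNNING tasks. Once ListTasks returns no task ARNs, the instance
// can be set back to ACTIVE with another UpdateContainerInstancesState call.
func drainInstance(ctx context.Context, cfg aws.Config, cluster, instanceARN string) error {
	svc := ecs.New(cfg)

	setReq := svc.UpdateContainerInstancesStateRequest(&ecs.UpdateContainerInstancesStateInput{
		Cluster:            aws.String(cluster),
		ContainerInstances: []string{instanceARN},
		Status:             ecs.ContainerInstanceStatusDraining,
	})
	if _, err := setReq.Send(ctx); err != nil {
		return err
	}

	// Draining is complete when no RUNNING tasks remain on the instance.
	listReq := svc.ListTasksRequest(&ecs.ListTasksInput{
		Cluster:           aws.String(cluster),
		ContainerInstance: aws.String(instanceARN),
		DesiredStatus:     ecs.DesiredStatusRunning,
	})
	resp, err := listReq.Send(ctx)
	if err != nil {
		return err
	}
	_ = resp.TaskArns // poll until empty, then set the status back to ACTIVE
	return nil
}
```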
Modifies the parameters of a service.
For services using the rolling update (ECS
) deployment controller, the desired count, deployment configuration, network configuration, or task definition used can be updated.
For services using the blue/green (CODE_DEPLOY
) deployment controller, only the desired count, deployment configuration, and health check grace period can be updated using this API. If the network configuration, platform version, or task definition need to be updated, a new AWS CodeDeploy deployment should be created. For more information, see CreateDeployment in the AWS CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count and health check grace period using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, you should create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount
parameter.
If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.
If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest
), you do not need to create a new revision of your task definition. You can update the service using the forceNewDeployment
option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent
and maximumPercent
, to determine the deployment strategy.
If minimumHealthyPercent
is below 100%, the scheduler can ignore desiredCount
temporarily during a deployment. For example, if desiredCount
is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING
state. Tasks for services that use a load balancer are considered healthy if they are in the RUNNING
state and the container instance they are hosted on is reported as healthy by the load balancer.
The maximumPercent
parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if desiredCount
is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop
is issued to the containers running in the task. This results in a SIGTERM
and a 30-second timeout, after which SIGKILL
is sent and the containers are forcibly stopped. If the container handles the SIGTERM
gracefully and exits within 30 seconds from receiving it, no SIGKILL
is sent.
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy):
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
Updating the task placement strategies and constraints on an Amazon ECS service remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms (\"Beta Terms\"). These Beta Terms apply to your participation in this preview.
Modifies the parameters of a service.
For services using the rolling update (ECS
) deployment controller, the desired count, deployment configuration, network configuration, task placement constraints and strategies, or task definition used can be updated.
For services using the blue/green (CODE_DEPLOY
) deployment controller, only the desired count, deployment configuration, task placement constraints and strategies, and health check grace period can be updated using this API. If the network configuration, platform version, or task definition need to be updated, a new AWS CodeDeploy deployment should be created. For more information, see CreateDeployment in the AWS CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, and health check grace period using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, you should create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount
parameter.
If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.
If your updated Docker image uses the same tag as what is in the existing task definition for your service (for example, my_image:latest
), you do not need to create a new revision of your task definition. You can update the service using the forceNewDeployment
option. The new tasks launched by the deployment pull the current image/tag combination from your repository when they start.
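A hedged sketch of the forceNewDeployment path for a service whose task definition points at a mutable tag such as my_image:latest; the cluster and service names are placeholders:

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// redeployLatest forces a new deployment without registering a new task
// definition revision; new tasks pull the current image for the tag.
func redeployLatest(ctx context.Context, cfg aws.Config) error {
	svc := ecs.New(cfg)
	req := svc.UpdateServiceRequest(&ecs.UpdateServiceInput{
		Cluster:            aws.String("my-cluster"), // placeholder
		Service:            aws.String("web"),        // placeholder
		ForceNewDeployment: aws.Bool(true),
	})
	_, err := req.Send(ctx)
	return err
}
```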
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent
and maximumPercent
, to determine the deployment strategy.
If minimumHealthyPercent
is below 100%, the scheduler can ignore desiredCount
temporarily during a deployment. For example, if desiredCount
is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING
state. Tasks for services that use a load balancer are considered healthy if they are in the RUNNING
state and the container instance they are hosted on is reported as healthy by the load balancer.
The maximumPercent
parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if desiredCount
is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop
is issued to the containers running in the task. This results in a SIGTERM
and a 30-second timeout, after which SIGKILL
is sent and the containers are forcibly stopped. If the container handles the SIGTERM
gracefully and exits within 30 seconds from receiving it, no SIGKILL
is sent.
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner (although you can choose a different placement strategy):
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
Modifies which task set in a service is the primary task set. Any parameters that are updated on the primary task set in a service will transition to the service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
Modifies a task set. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
If a service is using the rolling update (ECS
) deployment type, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING
or PENDING
state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service is using the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the maximum percent value is set to the default value and is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
If a service is using the rolling update (ECS
) deployment type, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING
state during a deployment, as a percentage of the desired number of tasks (rounded up to the nearest integer), and while any container instances are in the DRAINING
state if the service contains tasks using the EC2 launch type. This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy percent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING
state; tasks for services that do use a load balancer are considered healthy if they are in the RUNNING
state and they are reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service is using the blue/green (CODE_DEPLOY
) or EXTERNAL
deployment types and tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value and is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING
state while the container instances are in the DRAINING
state. If the tasks in the service use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
The maximum number of capacity provider results returned by DescribeCapacityProviders
in paginated output. When this parameter is used, DescribeCapacityProviders
only returns maxResults
results in a single page along with a nextToken
response element. The remaining results of the initial request can be seen by sending another DescribeCapacityProviders
request with the returned nextToken
value. This value can be between 1 and 10. If this parameter is not used, then DescribeCapacityProviders
returns up to 10 results and a nextToken
value if applicable.
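A small pagination sketch for DescribeCapacityProviders using maxResults and nextToken as described above (1 to 10 results per page); the loop structure is illustrative, not a prescribed pattern:

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// listAllCapacityProviders pages through DescribeCapacityProviders until the
// service stops returning a nextToken.
func listAllCapacityProviders(ctx context.Context, cfg aws.Config) ([]ecs.CapacityProvider, error) {
	svc := ecs.New(cfg)
	var all []ecs.CapacityProvider
	var nextToken *string
	for {
		req := svc.DescribeCapacityProvidersRequest(&ecs.DescribeCapacityProvidersInput{
			MaxResults: aws.Int64(10),
			NextToken:  nextToken,
		})
		resp, err := req.Send(ctx)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.CapacityProviders...)
		if resp.NextToken == nil {
			return all, nil
		}
		nextToken = resp.NextToken
	}
}
```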
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS Mount Helper in the Amazon Elastic File System User Guide.
", "HealthCheck$interval": "The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds.
", "HealthCheck$timeout": "The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5.
", "HealthCheck$retries": "The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3.
", @@ -257,7 +258,7 @@ "RunTaskRequest$capacityProviderStrategy": "The capacity provider strategy to use for the task.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If a capacityProviderStrategy
is specified, the launchType
parameter must be omitted. If no capacityProviderStrategy
or launchType
is specified, the defaultCapacityProviderStrategy
for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
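A hedged RunTask sketch with an explicit capacity provider strategy; launchType is omitted because the two settings are mutually exclusive, and the task definition name is a placeholder:

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

// runOnFargateSpot runs one task, keeping a base of one task on FARGATE and
// weighting additional tasks toward FARGATE_SPOT. A NetworkConfiguration is
// also required for Fargate tasks and is omitted here for brevity.
func runOnFargateSpot(ctx context.Context, cfg aws.Config) error {
	svc := ecs.New(cfg)
	req := svc.RunTaskRequest(&ecs.RunTaskInput{
		Cluster:        aws.String("my-cluster"),  // placeholder
		TaskDefinition: aws.String("batch-job:5"), // placeholder
		Count:          aws.Int64(1),
		CapacityProviderStrategy: []ecs.CapacityProviderStrategyItem{
			{CapacityProvider: aws.String("FARGATE"), Base: aws.Int64(1), Weight: aws.Int64(1)},
			{CapacityProvider: aws.String("FARGATE_SPOT"), Weight: aws.Int64(4)},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```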
", "Service$capacityProviderStrategy": "The capacity provider strategy associated with the service.
", "TaskSet$capacityProviderStrategy": "The capacity provider strategy associated with the task set.
", - "UpdateServiceRequest$capacityProviderStrategy": "The capacity provider strategy to update the service to use.
If the service is using the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers. However, when a service is using a non-default capacity provider strategy, the service cannot be updated to use the cluster's default capacity provider strategy.
" + "UpdateServiceRequest$capacityProviderStrategy": "The capacity provider strategy to update the service to use.
If the service is using the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers as opposed to the default capacity provider strategy. However, when a service is using a capacity provider strategy that is not the default capacity provider strategy, the service cannot be updated to use the cluster's default capacity provider strategy.
A capacity provider strategy consists of one or more capacity providers along with the base
and weight
to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE
or UPDATING
status can be used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use an AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
" } }, "CapacityProviderStrategyItem": { @@ -751,8 +752,26 @@ "Scale$value": "The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
The authorization configuration details for the Amazon EFS file system.
", + "refs": { + "EFSVolumeConfiguration$authorizationConfig": "The authorization configuration details for the Amazon EFS file system.
" + } + }, + "EFSAuthorizationConfigIAM": { + "base": null, + "refs": { + "EFSAuthorizationConfig$iam": "Whether or not to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration
. If this parameter is omitted, the default value of DISABLED
is used. For more information, see Using Amazon EFS Access Points in the Amazon Elastic Container Service Developer Guide.
Whether or not to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED
is used. For more information, see Encrypting Data in Transit in the Amazon Elastic File System User Guide.
This parameter is specified when you are using an Amazon Elastic File System (Amazon EFS) file storage. Amazon EFS file systems are only supported when you are using the EC2 launch type.
EFSVolumeConfiguration
remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms (\"Beta Terms\"). These Beta Terms apply to your participation in this preview of EFSVolumeConfiguration
.
This parameter is specified when you are using an Amazon Elastic File System file system for task storage. For more information, see Amazon EFS Volumes in the Amazon Elastic Container Service Developer Guide.
", "refs": { "Volume$efsVolumeConfiguration": "This parameter is specified when you are using an Amazon Elastic File System (Amazon EFS) file storage. Amazon EFS file systems are only supported when you are using the EC2 launch type.
EFSVolumeConfiguration
remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms (\"Beta Terms\"). These Beta Terms apply to your participation in this preview of EFSVolumeConfiguration
.
An object representing a container health check. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image's Dockerfile).
The following are notes about container health check support:
Container health checks require version 1.17.0 or greater of the Amazon ECS container agent. For more information, see Updating the Amazon ECS Container Agent.
Container health checks are supported for Fargate tasks if you are using platform version 1.1.0 or greater. For more information, see AWS Fargate Platform Versions.
Container health checks are not supported for tasks that are part of a service that is configured to use a Classic Load Balancer.
An object representing a container health check. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image's Dockerfile).
You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console.
The following describes the possible healthStatus
values for a container:
HEALTHY
-The container health check has passed successfully.
UNHEALTHY
-The container health check has failed.
UNKNOWN
-The container health check is being evaluated or there is no container health check defined.
The following describes the possible healthStatus
values for a task. The container health check status of nonessential containers does not affect the health status of a task.
HEALTHY
-All essential containers within the task have passed their health checks.
UNHEALTHY
-One or more essential containers have failed their health check.
UNKNOWN
-The essential containers within the task are still having their health checks evaluated or there are no container health checks defined.
If a task is run manually, and not as part of a service, the task will continue its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy then the task will be stopped and the service scheduler will replace it.
The following are notes about container health check support:
Container health checks require version 1.17.0 or greater of the Amazon ECS container agent. For more information, see Updating the Amazon ECS Container Agent.
Container health checks are supported for Fargate tasks if you are using platform version 1.1.0 or greater. For more information, see AWS Fargate Platform Versions.
Container health checks are not supported for tasks that are part of a service that is configured to use a Classic Load Balancer.
The health check command and associated configuration parameters for the container. This parameter maps to HealthCheck
in the Create a container section of the Docker Remote API and the HEALTHCHECK
parameter of docker run.
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck
in the Create a container section of the Docker Remote API and the HEALTHCHECK
parameter of docker run.
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY
or EXTERNAL
deployment controller types.
DAEMON
-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY
or EXTERNAL
deployment controller types don't support the DAEMON
scheduling strategy.
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service is using the CODE_DEPLOY
or EXTERNAL
deployment controller types.
DAEMON
-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
Tasks using the Fargate launch type or the CODE_DEPLOY
or EXTERNAL
deployment controller types don't support the DAEMON
scheduling strategy.
The scheduling strategy for services to list.
", - "Service$schedulingStrategy": "The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON
-The daemon scheduling strategy deploys exactly one task on each container instance in your cluster. When you are using this strategy, do not specify a desired number of tasks or any task placement strategies.
Fargate tasks do not support the DAEMON
scheduling strategy.
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA
-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON
-The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that do not meet the placement constraints.
Fargate tasks do not support the DAEMON
scheduling strategy.
The Amazon Resource Name (ARN) that identifies the Auto Scaling group.
", "CapacityProvider$capacityProviderArn": "The Amazon Resource Name (ARN) that identifies the capacity provider.
", "CapacityProvider$name": "The name of the capacity provider.
", - "CapacityProviderStrategyItem$capacityProvider": "The short name or full Amazon Resource Name (ARN) of the capacity provider.
", + "CapacityProviderStrategyItem$capacityProvider": "The short name of the capacity provider.
", "ClientException$message": null, "Cluster$clusterArn": "The Amazon Resource Name (ARN) that identifies the cluster. The ARN contains the arn:aws:ecs
namespace, followed by the Region of the cluster, the AWS account ID of the cluster owner, the cluster
namespace, and then the cluster name. For example, arn:aws:ecs:region:012345678910:cluster/test
.
A user-generated string that you use to identify your cluster.
", @@ -1684,8 +1703,9 @@ "DockerLabelsMap$key": null, "DockerLabelsMap$value": null, "DockerVolumeConfiguration$driver": "The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls
to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. For more information, see Docker plugin discovery. This parameter maps to Driver
in the Create a volume section of the Docker Remote API and the xxdriver
option to docker volume create.
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration
will be relative to the directory set for the access point. If an access point is used, transit encryption must be enabled in the EFSVolumeConfiguration
. For more information, see Working with Amazon EFS Access Points in the Amazon Elastic File System User Guide.
The Amazon EFS file system ID to use.
", - "EFSVolumeConfiguration$rootDirectory": "The directory within the Amazon EFS file system to mount as the root directory inside the host.
", + "EFSVolumeConfiguration$rootDirectory": "The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying /
will have the same effect as omitting this parameter.
The Amazon Resource Name (ARN) of the failed resource.
", "Failure$reason": "The reason for the failure.
", "Failure$detail": "The details of the failure.
", @@ -1833,7 +1853,7 @@ "Task$taskDefinitionArn": "The ARN of the task definition that creates the task.
", "TaskDefinition$taskDefinitionArn": "The full Amazon Resource Name (ARN) of the task definition.
", "TaskDefinition$family": "The name of a family that this task definition is registered to. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
", - "TaskDefinition$taskRoleArn": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see Amazon ECS Task Role in the Amazon Elastic Container Service Developer Guide.
IAM roles for tasks on Windows require that the -EnableTaskIAMRole
option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code in order to take advantage of the feature. For more information, see Windows IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see Amazon ECS Task Role in the Amazon Elastic Container Service Developer Guide.
IAM roles for tasks on Windows require that the -EnableTaskIAMRole
option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code in order to take advantage of the feature. For more information, see Windows IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
The Amazon Resource Name (ARN) of the task execution role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role.
", "TaskDefinition$cpu": "The number of cpu
units used by the task. If you are using the EC2 launch type, this field is optional and any value can be used. If you are using the Fargate launch type, this field is required and you must use one of the following values, which determines your range of valid values for the memory
parameter:
256 (.25 vCPU) - Available memory
values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory
values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory
values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory
values: Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory
values: Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
The amount (in MiB) of memory used by the task.
If using the EC2 launch type, this field is optional and any value can be used. If a task-level memory value is specified then the container-level memory value is optional.
If using the Fargate launch type, this field is required and you must use one of the following values, which determines your range of valid values for the cpu
parameter:
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu
values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu
values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu
values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu
values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu
values: 4096 (4 vCPU)
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch
in the Create a container section of the Docker Remote API and the --dns-search
option to docker run.
This parameter is not supported for Windows containers.
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This field is not valid for containers in tasks using the Fargate launch type.
With Windows containers, this parameter can be used to reference a credential spec file when configuring a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt
in the Create a container section of the Docker Remote API and the --security-opt
option to docker run.
The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true
or ECS_APPARMOR_CAPABLE=true
environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
", - "CreateClusterRequest$capacityProviders": "The short name or full Amazon Resource Name (ARN) of one or more capacity providers to associate with the cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created and not already associated with another cluster. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
", + "CreateClusterRequest$capacityProviders": "The short name of one or more capacity providers to associate with the cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created and not already associated with another cluster. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
", "DescribeCapacityProvidersRequest$capacityProviders": "The short name or full Amazon Resource Name (ARN) of one or more capacity providers. Up to 100
capacity providers can be described in an action.
A list of up to 100 cluster names or full cluster Amazon Resource Name (ARN) entries. If you do not specify a cluster, the default cluster is assumed.
", "DescribeContainerInstancesRequest$containerInstances": "A list of up to 100 container instance IDs or full Amazon Resource Name (ARN) entries.
", diff --git a/models/apis/eks/2017-11-01/api-2.json b/models/apis/eks/2017-11-01/api-2.json index b11295c24b8..f893de9c86f 100644 --- a/models/apis/eks/2017-11-01/api-2.json +++ b/models/apis/eks/2017-11-01/api-2.json @@ -1037,8 +1037,11 @@ "Ec2LaunchTemplateNotFound", "Ec2LaunchTemplateVersionMismatch", "Ec2SubnetNotFound", + "Ec2SubnetInvalidConfiguration", "IamInstanceProfileNotFound", + "IamLimitExceeded", "IamNodeRoleNotFound", + "NodeCreationFailure", "AsgInstanceLaunchFailures", "InstanceLimitExceeded", "InsufficientFreeAddresses", diff --git a/models/apis/eks/2017-11-01/docs-2.json b/models/apis/eks/2017-11-01/docs-2.json index 8ec48994bcc..2de0f0824d4 100644 --- a/models/apis/eks/2017-11-01/docs-2.json +++ b/models/apis/eks/2017-11-01/docs-2.json @@ -5,7 +5,7 @@ "CreateCluster": "Creates an Amazon EKS control plane.
The Amazon EKS control plane consists of control plane instances that run the Kubernetes software, such as etcd
and the API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS API server endpoint. Each Amazon EKS cluster control plane is single-tenant and unique and runs on its own set of Amazon EC2 instances.
The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the worker nodes (for example, to support kubectl exec
, logs
, and proxy
data flows).
Amazon EKS worker nodes run in your AWS account and connect to your cluster's control plane via the Kubernetes API server endpoint and a certificate file that is created for your cluster.
You can use the endpointPublicAccess
and endpointPrivateAccess
parameters to enable or disable public and private access to your cluster's Kubernetes API server endpoint. By default, public access is enabled, and private access is disabled. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
You can use the logging
parameter to enable or disable exporting the Kubernetes control plane logs for your cluster to CloudWatch Logs. By default, cluster control plane logs aren't exported to CloudWatch Logs. For more information, see Amazon EKS Cluster Control Plane Logs in the Amazon EKS User Guide .
CloudWatch Logs ingestion, archive storage, and data scanning rates apply to exported control plane logs. For more information, see Amazon CloudWatch Pricing.
Cluster creation typically takes between 10 and 15 minutes. After you create an Amazon EKS cluster, you must configure your Kubernetes tooling to communicate with the API server and launch worker nodes into your cluster. For more information, see Managing Cluster Authentication and Launching Amazon EKS Worker Nodes in the Amazon EKS User Guide.
", "CreateFargateProfile": "Creates an AWS Fargate profile for your Amazon EKS cluster. You must have at least one Fargate profile in a cluster to be able to run pods on Fargate.
The Fargate profile allows an administrator to declare which pods run on Fargate and specify which pods run on which Fargate profile. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and labels. A namespace is required for every selector. The label field consists of multiple optional key-value pairs. Pods that match the selectors are scheduled on Fargate. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is run on Fargate.
When you create a Fargate profile, you must specify a pod execution role to use with the pods that are scheduled with the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization so that the kubelet
that is running on the Fargate infrastructure can register with your Amazon EKS cluster so that it can appear in your cluster as a node. The pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see Pod Execution Role in the Amazon EKS User Guide.
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.
If any Fargate profiles in a cluster are in the DELETING
status, you must wait for that Fargate profile to finish deleting before you can create any other profiles in that cluster.
For more information, see AWS Fargate Profile in the Amazon EKS User Guide.
", "CreateNodegroup": "Creates a managed worker node group for an Amazon EKS cluster. You can only create a node group for your cluster that is equal to the current Kubernetes version for the cluster. All node groups are created with the latest AMI release version for the respective minor Kubernetes version of the cluster.
An Amazon EKS managed node group is an Amazon EC2 Auto Scaling group and associated Amazon EC2 instances that are managed by AWS for an Amazon EKS cluster. Each node group uses a version of the Amazon EKS-optimized Amazon Linux 2 AMI. For more information, see Managed Node Groups in the Amazon EKS User Guide.
", - "DeleteCluster": "Deletes the Amazon EKS cluster control plane.
If you have active services in your cluster that are associated with a load balancer, you must delete those services before deleting the cluster so that the load balancers are deleted properly. Otherwise, you can have orphaned resources in your VPC that prevent you from being able to delete the VPC. For more information, see Deleting a Cluster in the Amazon EKS User Guide.
If you have managed node groups or Fargate profiles attached to the cluster, you must delete them first. For more information, see DeleteNodegroup andDeleteFargateProfile.
", + "DeleteCluster": "Deletes the Amazon EKS cluster control plane.
If you have active services in your cluster that are associated with a load balancer, you must delete those services before deleting the cluster so that the load balancers are deleted properly. Otherwise, you can have orphaned resources in your VPC that prevent you from being able to delete the VPC. For more information, see Deleting a Cluster in the Amazon EKS User Guide.
If you have managed node groups or Fargate profiles attached to the cluster, you must delete them first. For more information, see DeleteNodegroup and DeleteFargateProfile.
", "DeleteFargateProfile": "Deletes an AWS Fargate profile.
When you delete a Fargate profile, any pods running on Fargate that were created with the profile are deleted. If those pods match another Fargate profile, then they are scheduled on Fargate with that profile. If they no longer match any Fargate profiles, then they are not scheduled on Fargate and they may remain in a pending state.
Only one Fargate profile in a cluster can be in the DELETING
status at a time. You must wait for a Fargate profile to finish deleting before you can delete any other profiles in that cluster.
Deletes an Amazon EKS node group for a cluster.
", "DescribeCluster": "Returns descriptive information about an Amazon EKS cluster.
The API server endpoint and certificate authority data returned by this operation are required for kubelet
and kubectl
to communicate with your Kubernetes API server. For more information, see Create a kubeconfig for Amazon EKS.
The API server endpoint and certificate authority data aren't available until the cluster reaches the ACTIVE
state.
Returns descriptive information about an update against your Amazon EKS cluster or associated managed node group.
When the status of the update is Succeeded
, the update is complete. If an update fails, the status is Failed
, and an error detail explains the reason for the failure.
Lists the Amazon EKS clusters in your AWS account in the specified Region.
", "ListFargateProfiles": "Lists the AWS Fargate profiles associated with the specified cluster in your AWS account in the specified Region.
", - "ListNodegroups": "Lists the Amazon EKS node groups associated with the specified cluster in your AWS account in the specified Region.
", + "ListNodegroups": "Lists the Amazon EKS managed node groups associated with the specified cluster in your AWS account in the specified Region. Self-managed node groups are not listed.
", "ListTagsForResource": "List the tags for an Amazon EKS resource.
", "ListUpdates": "Lists the updates associated with an Amazon EKS cluster or managed node group in your AWS account, in the specified Region.
", "TagResource": "Associates the specified tags to a resource with the specified resourceArn
. If existing tags on a resource are not specified in the request parameters, they are not changed. When a resource is deleted, the tags associated with that resource are deleted as well. Tags that you create for Amazon EKS resources do not propagate to any other resources associated with the cluster. For example, if you tag a cluster with this operation, that tag does not automatically propagate to the subnets and worker nodes associated with the cluster.
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "CreateNodegroupRequest$clusterName": "The name of the cluster to create the node group in.
", "CreateNodegroupRequest$nodegroupName": "The unique name to give your node group.
", - "CreateNodegroupRequest$nodeRole": "The IAM role associated with your node group. The Amazon EKS worker node kubelet
daemon makes calls to AWS APIs on your behalf. Worker nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch worker nodes and register them into a cluster, you must create an IAM role for those worker nodes to use when they are launched. For more information, see Amazon EKS Worker Node IAM Role in the Amazon EKS User Guide .
The Amazon Resource Name (ARN) of the IAM role to associate with your node group. The Amazon EKS worker node kubelet
daemon makes calls to AWS APIs on your behalf. Worker nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch worker nodes and register them into a cluster, you must create an IAM role for those worker nodes to use when they are launched. For more information, see Amazon EKS Worker Node IAM Role in the Amazon EKS User Guide .
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "CreateNodegroupRequest$version": "The Kubernetes version to use for your managed nodes. By default, the Kubernetes version of the cluster is used, and this is the only accepted specified value.
", "CreateNodegroupRequest$releaseVersion": "The AMI version of the Amazon EKS-optimized AMI to use with your node group. By default, the latest available AMI version for the node group's current Kubernetes version is used. For more information, see Amazon EKS-Optimized Linux AMI Versions in the Amazon EKS User Guide.
", diff --git a/models/apis/elastic-inference/2017-07-25/api-2.json b/models/apis/elastic-inference/2017-07-25/api-2.json index 8d093228a39..6b942c3a2ae 100644 --- a/models/apis/elastic-inference/2017-07-25/api-2.json +++ b/models/apis/elastic-inference/2017-07-25/api-2.json @@ -2,7 +2,7 @@ "version":"2.0", "metadata":{ "apiVersion":"2017-07-25", - "endpointPrefix":"api.elastic-inference", + "endpointPrefix":"elastic-inference", "jsonVersion":"1.1", "protocol":"rest-json", "serviceAbbreviation":"Amazon Elastic Inference", diff --git a/models/apis/elasticbeanstalk/2010-12-01/api-2.json b/models/apis/elasticbeanstalk/2010-12-01/api-2.json index ffe9df97158..121c3714785 100644 --- a/models/apis/elasticbeanstalk/2010-12-01/api-2.json +++ b/models/apis/elasticbeanstalk/2010-12-01/api-2.json @@ -425,6 +425,18 @@ "resultWrapper":"ListAvailableSolutionStacksResult" } }, + "ListPlatformBranches":{ + "name":"ListPlatformBranches", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListPlatformBranchesRequest"}, + "output":{ + "shape":"ListPlatformBranchesResult", + "resultWrapper":"ListPlatformBranchesResult" + } + }, "ListPlatformVersions":{ "name":"ListPlatformVersions", "http":{ @@ -810,6 +822,8 @@ }, "BoxedBoolean":{"type":"boolean"}, "BoxedInt":{"type":"integer"}, + "BranchName":{"type":"string"}, + "BranchOrder":{"type":"integer"}, "BuildConfiguration":{ "type":"structure", "required":[ @@ -1697,6 +1711,21 @@ "SolutionStackDetails":{"shape":"AvailableSolutionStackDetailsList"} } }, + "ListPlatformBranchesRequest":{ + "type":"structure", + "members":{ + "Filters":{"shape":"SearchFilters"}, + "MaxRecords":{"shape":"PlatformBranchMaxRecords"}, + "NextToken":{"shape":"Token"} + } + }, + "ListPlatformBranchesResult":{ + "type":"structure", + "members":{ + "PlatformBranchSummaryList":{"shape":"PlatformBranchSummaryList"}, + "NextToken":{"shape":"Token"} + } + }, "ListPlatformVersionsRequest":{ "type":"structure", "members":{ @@ -1873,6 +1902,25 @@ "member":{"shape":"OptionSpecification"} }, "PlatformArn":{"type":"string"}, + "PlatformBranchLifecycleState":{"type":"string"}, + "PlatformBranchMaxRecords":{ + "type":"integer", + "min":1 + }, + "PlatformBranchSummary":{ + "type":"structure", + "members":{ + "PlatformName":{"shape":"PlatformName"}, + "BranchName":{"shape":"BranchName"}, + "LifecycleState":{"shape":"PlatformBranchLifecycleState"}, + "BranchOrder":{"shape":"BranchOrder"}, + "SupportedTierList":{"shape":"SupportedTierList"} + } + }, + "PlatformBranchSummaryList":{ + "type":"list", + "member":{"shape":"PlatformBranchSummary"} + }, "PlatformCategory":{"type":"string"}, "PlatformDescription":{ "type":"structure", @@ -1894,7 +1942,10 @@ "Frameworks":{"shape":"PlatformFrameworks"}, "CustomAmiList":{"shape":"CustomAmiList"}, "SupportedTierList":{"shape":"SupportedTierList"}, - "SupportedAddonList":{"shape":"SupportedAddonList"} + "SupportedAddonList":{"shape":"SupportedAddonList"}, + "PlatformLifecycleState":{"shape":"PlatformLifecycleState"}, + "PlatformBranchName":{"shape":"BranchName"}, + "PlatformBranchLifecycleState":{"shape":"PlatformBranchLifecycleState"} } }, "PlatformFilter":{ @@ -1927,6 +1978,7 @@ "type":"list", "member":{"shape":"PlatformFramework"} }, + "PlatformLifecycleState":{"type":"string"}, "PlatformMaxRecords":{ "type":"integer", "min":1 @@ -1964,7 +2016,11 @@ "OperatingSystemName":{"shape":"OperatingSystemName"}, "OperatingSystemVersion":{"shape":"OperatingSystemVersion"}, "SupportedTierList":{"shape":"SupportedTierList"}, - 
"SupportedAddonList":{"shape":"SupportedAddonList"} + "SupportedAddonList":{"shape":"SupportedAddonList"}, + "PlatformLifecycleState":{"shape":"PlatformLifecycleState"}, + "PlatformVersion":{"shape":"PlatformVersion"}, + "PlatformBranchName":{"shape":"BranchName"}, + "PlatformBranchLifecycleState":{"shape":"PlatformBranchLifecycleState"} } }, "PlatformSummaryList":{ @@ -2127,6 +2183,25 @@ "exception":true }, "SampleTimestamp":{"type":"timestamp"}, + "SearchFilter":{ + "type":"structure", + "members":{ + "Attribute":{"shape":"SearchFilterAttribute"}, + "Operator":{"shape":"SearchFilterOperator"}, + "Values":{"shape":"SearchFilterValues"} + } + }, + "SearchFilterAttribute":{"type":"string"}, + "SearchFilterOperator":{"type":"string"}, + "SearchFilterValue":{"type":"string"}, + "SearchFilterValues":{ + "type":"list", + "member":{"shape":"SearchFilterValue"} + }, + "SearchFilters":{ + "type":"list", + "member":{"shape":"SearchFilter"} + }, "SingleInstanceHealth":{ "type":"structure", "members":{ diff --git a/models/apis/elasticbeanstalk/2010-12-01/docs-2.json b/models/apis/elasticbeanstalk/2010-12-01/docs-2.json index 1a57f08c1bc..b5cf8d2d563 100644 --- a/models/apis/elasticbeanstalk/2010-12-01/docs-2.json +++ b/models/apis/elasticbeanstalk/2010-12-01/docs-2.json @@ -6,10 +6,10 @@ "ApplyEnvironmentManagedAction": "Applies a scheduled managed action immediately. A managed action can be applied only if its status is Scheduled
. Get the status and action ID of a managed action with DescribeEnvironmentManagedActions.
", "CheckDNSAvailability": "Checks if the specified CNAME is available.
", "ComposeEnvironments": "Create or update a group of environments that each run a separate component of a single application. Takes a list of version labels that specify application source bundles for each of the environments to create or update. The name of each environment and other required information must be included in the source bundles in an environment manifest named env.yaml
. See Compose Environments for details.
Creates an application that has one configuration template named default
and no application versions.
Creates an application version for the specified application. You can create an application version from a source bundle in Amazon S3, a commit in AWS CodeCommit, or the output of an AWS CodeBuild build as follows:
Specify a commit in an AWS CodeCommit repository with SourceBuildInformation
.
Specify a build in an AWS CodeBuild with SourceBuildInformation
and BuildConfiguration
.
Specify a source bundle in S3 with SourceBundle
Omit both SourceBuildInformation
and SourceBundle
to use the default sample application.
Once you create an application version with a specified Amazon S3 bucket and key location, you cannot change that Amazon S3 location. If you change the Amazon S3 location, you receive an exception when you attempt to launch an environment from the application version.
Creates a configuration template. Templates are associated with a specific application and are used to deploy different versions of the application with the same configuration settings.
Templates aren't associated with any environment. The EnvironmentName
response element is always null
.
Related Topics
", - "CreateEnvironment": "Launches an environment for the specified application using the specified configuration.
", + "CreateApplication": "Creates an application that has one configuration template named default
and no application versions.
Creates an application version for the specified application. You can create an application version from a source bundle in Amazon S3, a commit in AWS CodeCommit, or the output of an AWS CodeBuild build as follows:
Specify a commit in an AWS CodeCommit repository with SourceBuildInformation
.
Specify a build in AWS CodeBuild with SourceBuildInformation
and BuildConfiguration
.
Specify a source bundle in S3 with SourceBundle
Omit both SourceBuildInformation
and SourceBundle
to use the default sample application.
After you create an application version with a specified Amazon S3 bucket and key location, you can't change that Amazon S3 location. If you change the Amazon S3 location, you receive an exception when you attempt to launch an environment from the application version.
Creates an AWS Elastic Beanstalk configuration template, associated with a specific Elastic Beanstalk application. You define application configuration settings in a configuration template. You can then use the configuration template to deploy different versions of the application with the same configuration settings.
Templates aren't associated with any environment. The EnvironmentName
response element is always null
.
Related Topics
", + "CreateEnvironment": "Launches an AWS Elastic Beanstalk environment for the specified application using the specified configuration.
", "CreatePlatformVersion": "Create a new version of your custom platform.
", "CreateStorageLocation": "Creates a bucket in Amazon S3 to store application versions, logs, and other files used by Elastic Beanstalk environments. The Elastic Beanstalk console and EB CLI call this API the first time you create an environment in a region. If the storage location already exists, CreateStorageLocation
still returns the bucket name but does not create a new bucket.
Deletes the specified application along with all associated versions and configurations. The application versions will not be deleted from your Amazon S3 bucket.
You cannot delete an application that has a running environment.
", "DescribeEnvironments": "Returns descriptions for existing environments.
", "DescribeEvents": "Returns list of event descriptions matching criteria up to the last 6 weeks.
This action returns the most recent 1,000 events from the specified NextToken
.
", "DescribeInstancesHealth": "Retrieves detailed information about the health of instances in your AWS Elastic Beanstalk. This operation requires enhanced health reporting.
", - "DescribePlatformVersion": "Describes the version of the platform.
", + "DescribePlatformVersion": "Describes a platform version. Provides full details. Compare to ListPlatformVersions, which provides summary information about a list of platform versions.
For definitions of platform version and other platform-related terms, see AWS Elastic Beanstalk Platforms Glossary.
", "ListAvailableSolutionStacks": "Returns a list of the available solution stack names, with the public version first and then in reverse chronological order.
", - "ListPlatformVersions": "Lists the available platforms.
", - "ListTagsForResource": "Returns the tags applied to an AWS Elastic Beanstalk resource. The response contains a list of tag key-value pairs.
Currently, Elastic Beanstalk only supports tagging of Elastic Beanstalk environments. For details about environment tagging, see Tagging Resources in Your Elastic Beanstalk Environment.
", + "ListPlatformBranches": "Lists the platform branches available for your account in an AWS Region. Provides summary information about each platform branch.
For definitions of platform branch and other platform-related terms, see AWS Elastic Beanstalk Platforms Glossary.
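The new ListPlatformBranches operation pairs with the SearchFilters, MaxRecords, and NextToken members modeled later in this patch. Below is a minimal, hedged sketch of calling it with the v0.21.0 Request/Send client pattern; the ListPlatformBranchesRequest method, ListPlatformBranchesInput type, and aws.StringValue/aws.Int64 helpers are assumed to be what the code generator derives from this model, not confirmed API.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/elasticbeanstalk"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := elasticbeanstalk.New(cfg)

	var nextToken *string
	for {
		// Field names assumed to mirror the ListPlatformBranchesRequest shape in this model.
		req := svc.ListPlatformBranchesRequest(&elasticbeanstalk.ListPlatformBranchesInput{
			MaxRecords: aws.Int64(10),
			NextToken:  nextToken,
		})
		resp, err := req.Send(context.TODO())
		if err != nil {
			log.Fatalf("ListPlatformBranches: %v", err)
		}
		for _, b := range resp.PlatformBranchSummaryList {
			fmt.Printf("%s / %s: %s\n",
				aws.StringValue(b.PlatformName),
				aws.StringValue(b.BranchName),
				aws.StringValue(b.LifecycleState))
		}
		// NextToken is returned while more pages remain, per the paginator definition added below.
		if resp.NextToken == nil {
			break
		}
		nextToken = resp.NextToken
	}
}
```

The loop mirrors the ListPlatformBranches paginator entry added to paginators-1.json (input_token/output_token NextToken, limit_key MaxRecords).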
", + "ListPlatformVersions": "Lists the platform versions available for your account in an AWS Region. Provides summary information about each platform version. Compare to DescribePlatformVersion, which provides full details about a single platform version.
For definitions of platform version and other platform-related terms, see AWS Elastic Beanstalk Platforms Glossary.
", + "ListTagsForResource": "Return the tags applied to an AWS Elastic Beanstalk resource. The response contains a list of tag key-value pairs.
Elastic Beanstalk supports tagging of all of its resources. For details about resource tagging, see Tagging Application Resources.
", "RebuildEnvironment": "Deletes and recreates all of the AWS resources (for example: the Auto Scaling group, load balancer, etc.) for a specified environment and forces a restart.
", "RequestEnvironmentInfo": "Initiates a request to compile the specified type of information of the deployed environment.
Setting the InfoType
to tail
compiles the last lines from the application server log files of every Amazon EC2 instance in your environment.
Setting the InfoType
to bundle
compresses the application server log files for every Amazon EC2 instance into a .zip
file. Legacy and .NET containers do not support bundle logs.
Use RetrieveEnvironmentInfo to obtain the set of logs.
Related Topics
", "RestartAppServer": "Causes the environment to restart the application container server running on each Amazon EC2 instance.
", @@ -44,7 +45,7 @@ "UpdateApplicationVersion": "Updates the specified application version to have the specified properties.
If a property (for example, description
) is not provided, the value remains unchanged. To clear properties, specify an empty string.
Updates the specified configuration template to have the specified properties or configuration option values.
If a property (for example, ApplicationName
) is not provided, its value remains unchanged. To clear such properties, specify an empty string.
Related Topics
", "UpdateEnvironment": "Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment.
Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination
error.
When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus
values.
Update the list of tags applied to an AWS Elastic Beanstalk resource. Two lists can be passed: TagsToAdd
for tags to add or update, and TagsToRemove
.
Currently, Elastic Beanstalk only supports tagging of Elastic Beanstalk environments. For details about environment tagging, see Tagging Resources in Your Elastic Beanstalk Environment.
If you create a custom IAM user policy to control permission to this operation, specify one of the following two virtual actions (or both) instead of the API operation name:
Controls permission to call UpdateTagsForResource
and pass a list of tags to add in the TagsToAdd
parameter.
Controls permission to call UpdateTagsForResource
and pass a list of tag keys to remove in the TagsToRemove
parameter.
For details about creating a custom user policy, see Creating a Custom User Policy.
", + "UpdateTagsForResource": "Update the list of tags applied to an AWS Elastic Beanstalk resource. Two lists can be passed: TagsToAdd
for tags to add or update, and TagsToRemove
.
Elastic Beanstalk supports tagging of all of its resources. For details about resource tagging, see Tagging Application Resources.
If you create a custom IAM user policy to control permission to this operation, specify one of the following two virtual actions (or both) instead of the API operation name:
Controls permission to call UpdateTagsForResource
and pass a list of tags to add in the TagsToAdd
parameter.
Controls permission to call UpdateTagsForResource
and pass a list of tag keys to remove in the TagsToRemove
parameter.
For details about creating a custom user policy, see Creating a Custom User Policy.
", "ValidateConfigurationSettings": "Takes a set of configuration settings and either a configuration template or environment, and determines whether those values are valid.
This action returns a list of messages indicating any errors or warnings associated with the selection of option values.
" }, "shapes": { @@ -131,10 +132,10 @@ "ApplicationVersionDescription$ApplicationName": "The name of the application to which the application version belongs.
", "ComposeEnvironmentsMessage$ApplicationName": "The name of the application to which the specified source bundles belong.
", "ConfigurationSettingsDescription$ApplicationName": "The name of the application associated with this configuration set.
", - "CreateApplicationMessage$ApplicationName": "The name of the application.
Constraint: This name must be unique within your account. If the specified name already exists, the action returns an InvalidParameterValue
error.
The name of the application. Must be unique within your account.
", "CreateApplicationVersionMessage$ApplicationName": " The name of the application. If no application is found with this name, and AutoCreateApplication
is false
, returns an InvalidParameterValue
error.
The name of the application to associate with this configuration template. If no application is found with this name, AWS Elastic Beanstalk returns an InvalidParameterValue
error.
The name of the application that contains the version to be deployed.
If no application is found with this name, CreateEnvironment
returns an InvalidParameterValue
error.
The name of the Elastic Beanstalk application to associate with this configuration template.
", + "CreateEnvironmentMessage$ApplicationName": "The name of the application that is associated with this environment.
", "DeleteApplicationMessage$ApplicationName": "The name of the application to delete.
", "DeleteApplicationVersionMessage$ApplicationName": "The name of the application to which the version belongs.
", "DeleteConfigurationTemplateMessage$ApplicationName": "The name of the application to delete the configuration template from.
", @@ -162,11 +163,11 @@ } }, "ApplicationResourceLifecycleConfig": { - "base": "The resource lifecycle configuration for an application. Defines lifecycle settings for resources that belong to the application, and the service role that Elastic Beanstalk assumes in order to apply lifecycle settings. The version lifecycle configuration defines lifecycle settings for application versions.
", + "base": "The resource lifecycle configuration for an application. Defines lifecycle settings for resources that belong to the application, and the service role that AWS Elastic Beanstalk assumes in order to apply lifecycle settings. The version lifecycle configuration defines lifecycle settings for application versions.
", "refs": { "ApplicationDescription$ResourceLifecycleConfig": "The lifecycle settings for the application.
", "ApplicationResourceLifecycleDescriptionMessage$ResourceLifecycleConfig": "The lifecycle configuration.
", - "CreateApplicationMessage$ResourceLifecycleConfig": "Specify an application resource lifecycle configuration to prevent your application from accumulating too many versions.
", + "CreateApplicationMessage$ResourceLifecycleConfig": "Specifies an application resource lifecycle configuration to prevent your application from accumulating too many versions.
", "UpdateApplicationResourceLifecycleMessage$ResourceLifecycleConfig": "The lifecycle configuration.
" } }, @@ -207,7 +208,7 @@ "ApplicationVersionLifecycleConfig": { "base": "The application version lifecycle settings for an application. Defines the rules that Elastic Beanstalk applies to an application's versions in order to avoid hitting the per-region limit for application versions.
When Elastic Beanstalk deletes an application version from its database, you can no longer deploy that version to an environment. The source bundle remains in S3 unless you configure the rule to delete it.
", "refs": { - "ApplicationResourceLifecycleConfig$VersionLifecycleConfig": "The application version lifecycle configuration.
" + "ApplicationResourceLifecycleConfig$VersionLifecycleConfig": "Defines lifecycle settings for application versions.
" } }, "ApplicationVersionProccess": { @@ -280,6 +281,20 @@ "ResourceQuota$Maximum": "The maximum number of instances of this Elastic Beanstalk resource type that an AWS account can use.
" } }, + "BranchName": { + "base": null, + "refs": { + "PlatformBranchSummary$BranchName": "The name of the platform branch.
", + "PlatformDescription$PlatformBranchName": "The platform branch to which the platform version belongs.
", + "PlatformSummary$PlatformBranchName": "The platform branch to which the platform version belongs.
" + } + }, + "BranchOrder": { + "base": null, + "refs": { + "PlatformBranchSummary$BranchOrder": "An ordinal number that designates the order in which platform branches have been added to a platform. This can be helpful, for example, if your code calls the ListPlatformBranches
action and then displays a list of platform branches.
A larger BranchOrder
value designates a newer platform branch within the platform.
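As a hedged illustration of the BranchOrder semantics described here (a larger value means a newer branch), a small sort helper might look like the following; the PlatformBranchSummary Go type and the aws.Int64Value helper are assumed generator output rather than confirmed API.

```go
package ebbranches

import (
	"sort"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/elasticbeanstalk"
)

// NewestFirst sorts branch summaries in place so the branch with the largest
// BranchOrder (the newest branch within its platform) comes first.
func NewestFirst(branches []elasticbeanstalk.PlatformBranchSummary) {
	sort.SliceStable(branches, func(i, j int) bool {
		return aws.Int64Value(branches[i].BranchOrder) > aws.Int64Value(branches[j].BranchOrder)
	})
}
```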
Settings for an AWS CodeBuild build.
", "refs": { @@ -389,7 +404,7 @@ } }, "ConfigurationOptionSetting": { - "base": "A specification identifying an individual configuration option along with its current value. For a list of possible option values, go to Option Values in the AWS Elastic Beanstalk Developer Guide.
", + "base": "A specification identifying an individual configuration option along with its current value. For a list of possible namespaces and option values, see Option Values in the AWS Elastic Beanstalk Developer Guide.
", "refs": { "ConfigurationOptionSettingsList$member": null } @@ -398,7 +413,7 @@ "base": null, "refs": { "ConfigurationSettingsDescription$OptionSettings": "A list of the configuration options and their values in this configuration set.
", - "CreateConfigurationTemplateMessage$OptionSettings": "If specified, AWS Elastic Beanstalk sets the specified configuration option to the requested value. The new value overrides the value obtained from the solution stack or the source configuration template.
", + "CreateConfigurationTemplateMessage$OptionSettings": "Option values for the Elastic Beanstalk configuration, such as the instance type. If specified, these values override the values obtained from the solution stack or the source configuration template. For a complete list of Elastic Beanstalk configuration options, see Option Values in the AWS Elastic Beanstalk Developer Guide.
", "CreateEnvironmentMessage$OptionSettings": "If specified, AWS Elastic Beanstalk sets the specified configuration options to the requested value in the configuration set for the new environment. These override the values obtained from the solution stack or the configuration template.
", "CreatePlatformVersionRequest$OptionSettings": "The configuration option settings to apply to the builder environment.
", "UpdateConfigurationTemplateMessage$OptionSettings": "A list of configuration option settings to update with the new specified option value.
", @@ -456,8 +471,8 @@ "refs": { "ConfigurationSettingsDescription$TemplateName": " If not null
, the name of the configuration template for this configuration set.
The name of the configuration template.
Constraint: This name must be unique per application.
Default: If a configuration template already exists with this name, AWS Elastic Beanstalk returns an InvalidParameterValue
error.
The name of the configuration template to use in deployment. If no configuration template is found with this name, AWS Elastic Beanstalk returns an InvalidParameterValue
error.
The name of the configuration template.
Constraint: This name must be unique per application.
", + "CreateEnvironmentMessage$TemplateName": "The name of the Elastic Beanstalk configuration template to use with the environment.
If you specify TemplateName
, then don't specify SolutionStackName
.
The name of the configuration template to delete.
", "DescribeConfigurationOptionsMessage$TemplateName": "The name of the configuration template whose configuration options you want to describe.
", "DescribeConfigurationSettingsMessage$TemplateName": "The name of the configuration template to describe.
Conditional: You must specify either this parameter or an EnvironmentName, but not both. If you specify both, AWS Elastic Beanstalk returns an InvalidParameterCombination
error. If you do not specify either, AWS Elastic Beanstalk returns a MissingRequiredParameter
error.
The creation date of the application version.
", "ConfigurationSettingsDescription$DateCreated": "The date (in UTC time) when this configuration set was created.
", "EnvironmentDescription$DateCreated": "The creation date for this environment.
", - "PlatformDescription$DateCreated": "The date when the platform was created.
" + "PlatformDescription$DateCreated": "The date when the platform version was created.
" } }, "CustomAmi": { @@ -530,7 +545,7 @@ "CustomAmiList": { "base": null, "refs": { - "PlatformDescription$CustomAmiList": "The custom AMIs supported by the platform.
" + "PlatformDescription$CustomAmiList": "The custom AMIs supported by the platform version.
" } }, "DNSCname": { @@ -544,7 +559,7 @@ "base": null, "refs": { "CheckDNSAvailabilityMessage$CNAMEPrefix": "The prefix used when this CNAME is reserved.
", - "CreateEnvironmentMessage$CNAMEPrefix": "If specified, the environment attempts to use this value as the prefix for the CNAME. If not specified, the CNAME is generated automatically by appending a random alphanumeric string to the environment name.
" + "CreateEnvironmentMessage$CNAMEPrefix": "If specified, the environment attempts to use this value as the prefix for the CNAME in your Elastic Beanstalk environment URL. If not specified, the CNAME is generated automatically by appending a random alphanumeric string to the environment name.
" } }, "DeleteApplicationMessage": { @@ -691,12 +706,12 @@ "ApplicationDescription$Description": "User-defined description of the application.
", "ApplicationVersionDescription$Description": "The description of the application version.
", "ConfigurationSettingsDescription$Description": "Describes this configuration set.
", - "CreateApplicationMessage$Description": "Describes the application.
", - "CreateApplicationVersionMessage$Description": "Describes this version.
", - "CreateConfigurationTemplateMessage$Description": "Describes this configuration.
", - "CreateEnvironmentMessage$Description": "Describes this environment.
", + "CreateApplicationMessage$Description": "Your description of the application.
", + "CreateApplicationVersionMessage$Description": "A description of this application version.
", + "CreateConfigurationTemplateMessage$Description": "An optional description for this configuration.
", + "CreateEnvironmentMessage$Description": "Your description for this environment.
", "EnvironmentDescription$Description": "Describes this environment.
", - "PlatformDescription$Description": "The description of the platform.
", + "PlatformDescription$Description": "The description of the platform version.
", "UpdateApplicationMessage$Description": "A new description for the application.
Default: If not specified, AWS Elastic Beanstalk does not update the description.
", "UpdateApplicationVersionMessage$Description": "A new description for this version.
", "UpdateConfigurationTemplateMessage$Description": "A new description for the configuration.
", @@ -772,7 +787,7 @@ "base": null, "refs": { "AbortEnvironmentUpdateMessage$EnvironmentId": "This specifies the ID of the environment with the in-progress update that you want to cancel.
", - "CreateConfigurationTemplateMessage$EnvironmentId": "The ID of the environment used with this configuration template.
", + "CreateConfigurationTemplateMessage$EnvironmentId": "The ID of an environment whose settings you want to use to create the configuration template. You must specify EnvironmentId
if you don't specify PlatformArn
, SolutionStackName
, or SourceConfiguration
.
Specify the environment by ID.
You must specify either this or an EnvironmentName, or both.
", "DescribeEnvironmentManagedActionHistoryRequest$EnvironmentId": "The environment ID of the target environment.
", "DescribeEnvironmentResourcesMessage$EnvironmentId": "The ID of the environment to retrieve AWS resource usage data.
Condition: You must specify either this or an EnvironmentName, or both. If you do not specify either, AWS Elastic Beanstalk returns MissingRequiredParameter
error.
This specifies the name of the environment with the in-progress update that you want to cancel.
", "ConfigurationSettingsDescription$EnvironmentName": " If not null
, the name of the environment for this configuration set.
A unique name for the deployment environment. Used in the application URL.
Constraint: Must be from 4 to 40 characters in length. The name can contain only letters, numbers, and hyphens. It cannot start or end with a hyphen. This name must be unique within a region in your account. If the specified name already exists in the region, AWS Elastic Beanstalk returns an InvalidParameterValue
error.
Default: If the CNAME parameter is not specified, the environment name becomes part of the CNAME, and therefore part of the visible URL for your application.
", + "CreateEnvironmentMessage$EnvironmentName": "A unique name for the environment.
Constraint: Must be from 4 to 40 characters in length. The name can contain only letters, numbers, and hyphens. It can't start or end with a hyphen. This name must be unique within a region in your account. If the specified name already exists in the region, Elastic Beanstalk returns an InvalidParameterValue
error.
If you don't specify the CNAMEPrefix
parameter, the environment name becomes part of the CNAME, and therefore part of the visible URL for your application.
The name of the builder environment.
", "DeleteEnvironmentConfigurationMessage$EnvironmentName": "The name of the environment to delete the draft configuration from.
", "DescribeConfigurationOptionsMessage$EnvironmentName": "The name of the environment whose configuration options you want to describe.
", @@ -891,7 +906,7 @@ "EnvironmentTier": { "base": "Describes the properties of an environment tier
", "refs": { - "CreateEnvironmentMessage$Tier": "This specifies the tier to use for creating this environment.
", + "CreateEnvironmentMessage$Tier": "Specifies the tier to use in creating this environment. The environment tier that you choose determines whether Elastic Beanstalk provisions resources to support a web application that handles HTTP(S) requests or a web application that handles background-processing tasks.
", "EnvironmentDescription$Tier": "Describes the current tier of this environment.
", "UpdateEnvironmentMessage$Tier": "This specifies the tier to use to update the environment.
Condition: At this time, if you change the tier version, name, or type, AWS Elastic Beanstalk returns InvalidParameterValue
error.
Information about the maintainer of the platform.
" + "PlatformDescription$Maintainer": "Information about the maintainer of the platform version.
" } }, "ManagedAction": { @@ -1262,15 +1287,15 @@ "OperatingSystemName": { "base": null, "refs": { - "PlatformDescription$OperatingSystemName": "The operating system used by the platform.
", - "PlatformSummary$OperatingSystemName": "The operating system used by the platform.
" + "PlatformDescription$OperatingSystemName": "The operating system used by the platform version.
", + "PlatformSummary$OperatingSystemName": "The operating system used by the platform version.
" } }, "OperatingSystemVersion": { "base": null, "refs": { - "PlatformDescription$OperatingSystemVersion": "The version of the operating system used by the platform.
", - "PlatformSummary$OperatingSystemVersion": "The version of the operating system used by the platform.
" + "PlatformDescription$OperatingSystemVersion": "The version of the operating system used by the platform version.
", + "PlatformSummary$OperatingSystemVersion": "The version of the operating system used by the platform version.
" } }, "OperationInProgressException": { @@ -1282,7 +1307,7 @@ "base": null, "refs": { "ConfigurationOptionDescription$Namespace": "A unique namespace identifying the option's associated AWS resource.
", - "ConfigurationOptionSetting$Namespace": "A unique namespace identifying the option's associated AWS resource.
", + "ConfigurationOptionSetting$Namespace": "A unique namespace that identifies the option's associated AWS resource.
", "OptionSpecification$Namespace": "A unique namespace identifying the option's associated AWS resource.
", "ValidationMessage$Namespace": "The namespace to which the option belongs.
" } @@ -1329,36 +1354,62 @@ "PlatformArn": { "base": null, "refs": { - "ConfigurationOptionsDescription$PlatformArn": "The ARN of the platform.
", - "ConfigurationSettingsDescription$PlatformArn": "The ARN of the platform.
", - "CreateConfigurationTemplateMessage$PlatformArn": "The ARN of the custom platform.
", - "CreateEnvironmentMessage$PlatformArn": "The ARN of the platform.
", + "ConfigurationOptionsDescription$PlatformArn": "The ARN of the platform version.
", + "ConfigurationSettingsDescription$PlatformArn": "The ARN of the platform version.
", + "CreateConfigurationTemplateMessage$PlatformArn": "The Amazon Resource Name (ARN) of the custom platform. For more information, see Custom Platforms in the AWS Elastic Beanstalk Developer Guide.
If you specify PlatformArn
, then don't specify SolutionStackName
.
The Amazon Resource Name (ARN) of the custom platform to use with the environment. For more information, see Custom Platforms in the AWS Elastic Beanstalk Developer Guide.
If you specify PlatformArn
, don't specify SolutionStackName
.
The ARN of the version of the custom platform.
", "DescribeConfigurationOptionsMessage$PlatformArn": "The ARN of the custom platform.
", - "DescribeEventsMessage$PlatformArn": "The ARN of the version of the custom platform.
", - "DescribePlatformVersionRequest$PlatformArn": "The ARN of the version of the platform.
", - "EnvironmentDescription$PlatformArn": "The ARN of the platform.
", - "EventDescription$PlatformArn": "The ARN of the platform.
", - "PlatformDescription$PlatformArn": "The ARN of the platform.
", - "PlatformSummary$PlatformArn": "The ARN of the platform.
", + "DescribeEventsMessage$PlatformArn": "The ARN of a custom platform version. If specified, AWS Elastic Beanstalk restricts the returned descriptions to those associated with this custom platform version.
", + "DescribePlatformVersionRequest$PlatformArn": "The ARN of the platform version.
", + "EnvironmentDescription$PlatformArn": "The ARN of the platform version.
", + "EventDescription$PlatformArn": "The ARN of the platform version.
", + "PlatformDescription$PlatformArn": "The ARN of the platform version.
", + "PlatformSummary$PlatformArn": "The ARN of the platform version.
", "UpdateEnvironmentMessage$PlatformArn": "The ARN of the platform, if used.
" } }, + "PlatformBranchLifecycleState": { + "base": null, + "refs": { + "PlatformBranchSummary$LifecycleState": "The support life cycle state of the platform branch.
Possible values: beta
| supported
| deprecated
| retired
The state of the platform version's branch in its lifecycle.
Possible values: Beta
| Supported
| Deprecated
| Retired
The state of the platform version's branch in its lifecycle.
Possible values: beta
| supported
| deprecated
| retired
The maximum number of platform branch values returned in one call.
" + } + }, + "PlatformBranchSummary": { + "base": "Summary information about a platform branch.
", + "refs": { + "PlatformBranchSummaryList$member": null + } + }, + "PlatformBranchSummaryList": { + "base": null, + "refs": { + "ListPlatformBranchesResult$PlatformBranchSummaryList": "Summary information about the platform branches.
" + } + }, "PlatformCategory": { "base": null, "refs": { - "PlatformDescription$PlatformCategory": "The category of the platform.
", - "PlatformSummary$PlatformCategory": "The category of platform.
" + "PlatformDescription$PlatformCategory": "The category of the platform version.
", + "PlatformSummary$PlatformCategory": "The category of platform version.
" } }, "PlatformDescription": { - "base": "Detailed information about a platform.
", + "base": "Detailed information about a platform version.
", "refs": { - "DescribePlatformVersionResult$PlatformDescription": "Detailed information about the version of the platform.
" + "DescribePlatformVersionResult$PlatformDescription": "Detailed information about the platform version.
" } }, "PlatformFilter": { - "base": "Specify criteria to restrict the results when listing custom platforms.
The filter is evaluated as the expression:
Type
Operator
Values[i]
Describes criteria to restrict the results when listing platform versions.
The filter is evaluated as follows: Type Operator Values[1]
The operator to apply to the Type
with each of the Values
.
Valid Values: =
(equal to) | !=
(not equal to) | <
(less than) | <=
(less than or equal to) | >
(greater than) | >=
(greater than or equal to) | contains
| begins_with
| ends_with
The operator to apply to the Type
with each of the Values
.
Valid values: =
| !=
| <
| <=
| >
| >=
| contains
| begins_with
| ends_with
The custom platform attribute to which the filter values are applied.
Valid Values: PlatformName
| PlatformVersion
| PlatformStatus
| PlatformOwner
The platform version attribute to which the filter values are applied.
Valid values: PlatformName
| PlatformVersion
| PlatformStatus
| PlatformBranchName
| PlatformLifecycleState
| PlatformOwner
| SupportedTier
| SupportedAddon
| ProgrammingLanguageName
| OperatingSystemName
The list of values applied to the custom platform attribute.
" + "PlatformFilter$Values": "The list of values applied to the filtering platform version attribute. Only one value is supported for all current operators.
The following list shows valid filter values for some filter attributes.
PlatformStatus
: Creating
| Failed
| Ready
| Deleting
| Deleted
PlatformLifecycleState
: recommended
SupportedTier
: WebServer/Standard
| Worker/SQS/HTTP
SupportedAddon
: Log/S3
| Monitoring/Healthd
| WorkerDaemon/SQSD
List only the platforms where the platform member value relates to one of the supplied values.
" + "ListPlatformVersionsRequest$Filters": "Criteria for restricting the resulting list of platform versions. The filter is interpreted as a logical conjunction (AND) of the separate PlatformFilter
terms.
A framework supported by the custom platform.
", + "base": "A framework supported by the platform.
", "refs": { "PlatformFrameworks$member": null } @@ -1402,27 +1453,35 @@ "PlatformFrameworks": { "base": null, "refs": { - "PlatformDescription$Frameworks": "The frameworks supported by the platform.
" + "PlatformDescription$Frameworks": "The frameworks supported by the platform version.
" + } + }, + "PlatformLifecycleState": { + "base": null, + "refs": { + "PlatformDescription$PlatformLifecycleState": "The state of the platform version in its lifecycle.
Possible values: Recommended
| null
If a null value is returned, the platform version isn't the recommended one for its branch. Each platform branch has a single recommended platform version, typically the most recent one.
", + "PlatformSummary$PlatformLifecycleState": "The state of the platform version in its lifecycle.
Possible values: recommended
| empty
If an empty value is returned, the platform version is supported but isn't the recommended one for its branch.
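Because each platform branch has at most one recommended platform version, a filtered ListPlatformVersions call is the natural way to find it. The sketch below is assumption-laden: the elasticbeanstalk.Client type, the PlatformFilter field names, and the branch name used are illustrative only.

```go
package ebplatforms

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/elasticbeanstalk"
)

// RecommendedVersion returns the recommended platform version of one platform
// branch by filtering on PlatformBranchName and PlatformLifecycleState, or nil
// if the branch currently has no recommended version.
func RecommendedVersion(ctx context.Context, svc *elasticbeanstalk.Client, branch string) (*elasticbeanstalk.PlatformSummary, error) {
	req := svc.ListPlatformVersionsRequest(&elasticbeanstalk.ListPlatformVersionsInput{
		Filters: []elasticbeanstalk.PlatformFilter{
			{Type: aws.String("PlatformBranchName"), Operator: aws.String("="), Values: []string{branch}},
			{Type: aws.String("PlatformLifecycleState"), Operator: aws.String("="), Values: []string{"recommended"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	if len(resp.PlatformSummaryList) == 0 {
		return nil, nil
	}
	return &resp.PlatformSummaryList[0], nil
}
```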
" } }, "PlatformMaxRecords": { "base": null, "refs": { - "ListPlatformVersionsRequest$MaxRecords": "The maximum number of platform values returned in one call.
" + "ListPlatformVersionsRequest$MaxRecords": "The maximum number of platform version values returned in one call.
" } }, "PlatformName": { "base": null, "refs": { "CreatePlatformVersionRequest$PlatformName": "The name of your custom platform.
", - "PlatformDescription$PlatformName": "The name of the platform.
" + "PlatformBranchSummary$PlatformName": "The name of the platform to which this platform branch belongs.
", + "PlatformDescription$PlatformName": "The name of the platform version.
" } }, "PlatformOwner": { "base": null, "refs": { - "PlatformDescription$PlatformOwner": "The AWS account ID of the person who created the platform.
", - "PlatformSummary$PlatformOwner": "The AWS account ID of the person who created the platform.
" + "PlatformDescription$PlatformOwner": "The AWS account ID of the person who created the platform version.
", + "PlatformSummary$PlatformOwner": "The AWS account ID of the person who created the platform version.
" } }, "PlatformProgrammingLanguage": { @@ -1434,18 +1493,18 @@ "PlatformProgrammingLanguages": { "base": null, "refs": { - "PlatformDescription$ProgrammingLanguages": "The programming languages supported by the platform.
" + "PlatformDescription$ProgrammingLanguages": "The programming languages supported by the platform version.
" } }, "PlatformStatus": { "base": null, "refs": { - "PlatformDescription$PlatformStatus": "The status of the platform.
", - "PlatformSummary$PlatformStatus": "The status of the platform. You can create an environment from the platform once it is ready.
" + "PlatformDescription$PlatformStatus": "The status of the platform version.
", + "PlatformSummary$PlatformStatus": "The status of the platform version. You can create an environment from the platform version once it is ready.
" } }, "PlatformSummary": { - "base": "Detailed information about a platform.
", + "base": "Summary information about a platform version.
", "refs": { "CreatePlatformVersionResult$PlatformSummary": "Detailed information about the new version of the custom platform.
", "DeletePlatformVersionResult$PlatformSummary": "Detailed information about the version of the custom platform.
", @@ -1455,14 +1514,15 @@ "PlatformSummaryList": { "base": null, "refs": { - "ListPlatformVersionsResult$PlatformSummaryList": "Detailed information about the platforms.
" + "ListPlatformVersionsResult$PlatformSummaryList": "Summary information about the platform versions.
" } }, "PlatformVersion": { "base": null, "refs": { "CreatePlatformVersionRequest$PlatformVersion": "The number, such as 1.0.2, for the new platform version.
", - "PlatformDescription$PlatformVersion": "The version of the platform.
" + "PlatformDescription$PlatformVersion": "The version of the platform version.
", + "PlatformSummary$PlatformVersion": "The version string of the platform version.
" } }, "PlatformVersionStillReferencedException": { @@ -1527,9 +1587,9 @@ "ResourceArn": { "base": null, "refs": { - "ListTagsForResourceMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resouce for which a tag list is requested.
Must be the ARN of an Elastic Beanstalk environment.
", - "ResourceTagsDescriptionMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resouce for which a tag list was requested.
", - "UpdateTagsForResourceMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resouce to be updated.
Must be the ARN of an Elastic Beanstalk environment.
" + "ListTagsForResourceMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resouce for which a tag list is requested.
Must be the ARN of an Elastic Beanstalk resource.
", + "ResourceTagsDescriptionMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resource for which a tag list was requested.
", + "UpdateTagsForResourceMessage$ResourceArn": "The Amazon Resource Name (ARN) of the resouce to be updated.
Must be the ARN of an Elastic Beanstalk resource.
" } }, "ResourceId": { @@ -1546,7 +1606,7 @@ "ResourceName": { "base": null, "refs": { - "ConfigurationOptionSetting$ResourceName": "A unique resource name for a time-based scaling configuration option.
", + "ConfigurationOptionSetting$ResourceName": "A unique resource name for the option setting. Use it for a time–based scaling configuration option.
", "OptionSpecification$ResourceName": "A unique resource name for a time-based scaling configuration option.
" } }, @@ -1633,6 +1693,42 @@ "EnvironmentInfoDescription$SampleTimestamp": "The time stamp when this information was retrieved.
" } }, + "SearchFilter": { + "base": "Describes criteria to restrict a list of results.
For operators that apply a single value to the attribute, the filter is evaluated as follows: Attribute Operator Values[1]
Some operators, e.g. in
, can apply multiple values. In this case, the filter is evaluated as a logical union (OR) of applications of the operator to the attribute with each one of the values: (Attribute Operator Values[1]) OR (Attribute Operator Values[2]) OR ...
The valid values for attributes of SearchFilter
depend on the API action. For valid values, see the reference page for the API action you're calling that takes a SearchFilter
parameter.
The result attribute to which the filter values are applied. Valid values vary by API action.
" + } + }, + "SearchFilterOperator": { + "base": null, + "refs": { + "SearchFilter$Operator": "The operator to apply to the Attribute
with each of the Values
. Valid values vary by Attribute
.
The list of values applied to the Attribute
and Operator
attributes. Number of values and valid values vary by Attribute
.
Criteria for restricting the resulting list of platform branches. The filter is evaluated as a logical conjunction (AND) of the separate SearchFilter
terms.
The following list shows valid attribute values for each of the SearchFilter
terms. Most operators take a single value. The in
and not_in
operators can take multiple values.
Attribute = BranchName
:
Operator
: =
| !=
| begins_with
| ends_with
| contains
| in
| not_in
Attribute = LifecycleState
:
Operator
: =
| !=
| in
| not_in
Values
: beta
| supported
| deprecated
| retired
Attribute = PlatformName
:
Operator
: =
| !=
| begins_with
| ends_with
| contains
| in
| not_in
Attribute = TierType
:
Operator
: =
| !=
Values
: WebServer/Standard
| Worker/SQS/HTTP
Array size: limited to 10 SearchFilter
objects.
Within each SearchFilter
item, the Values
array is limited to 10 items.
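To make the ListPlatformBranches filter grammar above concrete, here is a hedged sketch of building a Filters value that combines two SearchFilter terms: the terms are ANDed together, while the in operator ORs its Values. The SearchFilter Go struct and its []string Values type are assumed to follow this model.

```go
package ebfilters

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/elasticbeanstalk"
)

// SupportedBranchesFilter keeps branches whose LifecycleState is beta or supported
// (one term with multiple values) and whose BranchName begins with the given
// prefix (a second term, ANDed with the first).
func SupportedBranchesFilter(namePrefix string) []elasticbeanstalk.SearchFilter {
	return []elasticbeanstalk.SearchFilter{
		{
			Attribute: aws.String("LifecycleState"),
			Operator:  aws.String("in"),
			Values:    []string{"beta", "supported"},
		},
		{
			Attribute: aws.String("BranchName"),
			Operator:  aws.String("begins_with"),
			Values:    []string{namePrefix},
		},
	}
}
```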
Detailed health information about an Amazon EC2 instance in your Elastic Beanstalk environment.
", "refs": { @@ -1657,11 +1753,11 @@ "AvailableSolutionStackNamesList$member": null, "ConfigurationOptionsDescription$SolutionStackName": "The name of the solution stack these configuration options belong to.
", "ConfigurationSettingsDescription$SolutionStackName": "The name of the solution stack this configuration set uses.
", - "CreateConfigurationTemplateMessage$SolutionStackName": "The name of the solution stack used by this configuration. The solution stack specifies the operating system, architecture, and application server for a configuration template. It determines the set of configuration options as well as the possible and default values.
Use ListAvailableSolutionStacks to obtain a list of available solution stacks.
A solution stack name or a source configuration parameter must be specified, otherwise AWS Elastic Beanstalk returns an InvalidParameterValue
error.
If a solution stack name is not specified and the source configuration parameter is specified, AWS Elastic Beanstalk uses the same solution stack as the source configuration template.
", - "CreateEnvironmentMessage$SolutionStackName": "This is an alternative to specifying a template name. If specified, AWS Elastic Beanstalk sets the configuration values to the default values associated with the specified solution stack.
For a list of current solution stacks, see Elastic Beanstalk Supported Platforms.
", + "CreateConfigurationTemplateMessage$SolutionStackName": "The name of an Elastic Beanstalk solution stack (platform version) that this configuration uses. For example, 64bit Amazon Linux 2013.09 running Tomcat 7 Java 7
. A solution stack specifies the operating system, runtime, and application server for a configuration template. It also determines the set of configuration options as well as the possible and default values. For more information, see Supported Platforms in the AWS Elastic Beanstalk Developer Guide.
You must specify SolutionStackName
if you don't specify PlatformArn
, EnvironmentId
, or SourceConfiguration
.
Use the ListAvailableSolutionStacks
API to obtain a list of available solution stacks.
The name of an Elastic Beanstalk solution stack (platform version) to use with the environment. If specified, Elastic Beanstalk sets the configuration values to the default values associated with the specified solution stack. For a list of current solution stacks, see Elastic Beanstalk Supported Platforms in the AWS Elastic Beanstalk Platforms guide.
If you specify SolutionStackName
, don't specify PlatformArn
or TemplateName
.
The name of the solution stack whose configuration options you want to describe.
", "EnvironmentDescription$SolutionStackName": " The name of the SolutionStack
deployed with this environment.
The name of the solution stack used by the platform.
", + "PlatformDescription$SolutionStackName": "The name of the solution stack used by the platform version.
", "SolutionStackDescription$SolutionStackName": "The name of the solution stack.
", "UpdateEnvironmentMessage$SolutionStackName": "This specifies the platform version that the environment will run after the environment is updated.
" } @@ -1679,9 +1775,9 @@ } }, "SourceConfiguration": { - "base": "A specification for an environment configuration
", + "base": "A specification for an environment configuration.
", "refs": { - "CreateConfigurationTemplateMessage$SourceConfiguration": "If specified, AWS Elastic Beanstalk uses the configuration values from the specified configuration template to create a new configuration.
Values specified in the OptionSettings
parameter of this call overrides any values obtained from the SourceConfiguration
.
If no configuration template is found, returns an InvalidParameterValue
error.
Constraint: If both the solution stack name parameter and the source configuration parameters are specified, the solution stack of the source configuration template must match the specified solution stack name or else AWS Elastic Beanstalk returns an InvalidParameterCombination
error.
An Elastic Beanstalk configuration template to base this one on. If specified, Elastic Beanstalk uses the configuration values from the specified configuration template to create a new configuration.
Values specified in OptionSettings
override any values obtained from the SourceConfiguration
.
You must specify SourceConfiguration
if you don't specify PlatformArn
, EnvironmentId
, or SolutionStackName
.
Constraint: If both solution stack name and source configuration are specified, the solution stack of the source configuration template must match the specified solution stack name.
" } }, "SourceLocation": { @@ -1762,8 +1858,8 @@ "SupportedAddonList": { "base": null, "refs": { - "PlatformDescription$SupportedAddonList": "The additions supported by the platform.
", - "PlatformSummary$SupportedAddonList": "The additions associated with the platform.
" + "PlatformDescription$SupportedAddonList": "The additions supported by the platform version.
", + "PlatformSummary$SupportedAddonList": "The additions associated with the platform version.
" } }, "SupportedTier": { @@ -1775,8 +1871,9 @@ "SupportedTierList": { "base": null, "refs": { - "PlatformDescription$SupportedTierList": "The tiers supported by the platform.
", - "PlatformSummary$SupportedTierList": "The tiers in which the platform runs.
" + "PlatformBranchSummary$SupportedTierList": "The environment tiers that platform versions in this branch support.
Possible values: WebServer/Standard
| Worker/SQS/HTTP
The tiers supported by the platform version.
", + "PlatformSummary$SupportedTierList": "The tiers in which the platform version runs.
" } }, "SwapEnvironmentCNAMEsMessage": { @@ -1879,8 +1976,10 @@ "DescribeEventsMessage$NextToken": "Pagination token. If specified, the events return the next batch of results.
", "EnvironmentDescriptionsMessage$NextToken": "In a paginated request, the token that you can pass in a subsequent request to get the next response page.
", "EventDescriptionsMessage$NextToken": "If returned, this indicates that there are more results to obtain. Use this token in the next DescribeEvents call to get the next batch of events.
", - "ListPlatformVersionsRequest$NextToken": "The starting index into the remaining list of platforms. Use the NextToken
value from a previous ListPlatformVersion
call.
The starting index into the remaining list of platforms. if this value is not null
, you can use it in a subsequent ListPlatformVersion
call.
For a paginated request. Specify a token from a previous response page to retrieve the next response page. All other parameter values must be identical to the ones specified in the initial request.
If no NextToken
is specified, the first page is retrieved.
In a paginated request, if this value isn't null
, it's the token that you can pass in a subsequent request to get the next response page.
For a paginated request. Specify a token from a previous response page to retrieve the next response page. All other parameter values must be identical to the ones specified in the initial request.
If no NextToken
is specified, the first page is retrieved.
In a paginated request, if this value isn't null
, it's the token that you can pass in a subsequent request to get the next response page.
The last modified date of the application version.
", "ConfigurationSettingsDescription$DateUpdated": "The date (in UTC time) when this configuration set was last modified.
", "EnvironmentDescription$DateUpdated": "The last modified date for this environment.
", - "PlatformDescription$DateUpdated": "The date when the platform was last updated.
" + "PlatformDescription$DateUpdated": "The date when the platform version was last updated.
" } }, "UpdateEnvironmentMessage": { @@ -2010,7 +2109,7 @@ "refs": { "ApplicationVersionDescription$VersionLabel": "A unique identifier for the application version.
", "CreateApplicationVersionMessage$VersionLabel": "A label identifying this version.
Constraint: Must be unique per application. If an application version already exists with this label for the specified application, AWS Elastic Beanstalk returns an InvalidParameterValue
error.
The name of the application version to deploy.
If the specified application has no associated application versions, AWS Elastic Beanstalk UpdateEnvironment
returns an InvalidParameterValue
error.
Default: If not specified, AWS Elastic Beanstalk attempts to launch the sample application in the container.
", + "CreateEnvironmentMessage$VersionLabel": "The name of the application version to deploy.
Default: If not specified, Elastic Beanstalk attempts to deploy the sample application.
", "DeleteApplicationVersionMessage$VersionLabel": "The label of the version to delete.
", "DescribeEnvironmentsMessage$VersionLabel": "If specified, AWS Elastic Beanstalk restricts the returned descriptions to include only those that are associated with this application version.
", "DescribeEventsMessage$VersionLabel": "If specified, AWS Elastic Beanstalk restricts the returned descriptions to those associated with this application version.
", diff --git a/models/apis/elasticbeanstalk/2010-12-01/paginators-1.json b/models/apis/elasticbeanstalk/2010-12-01/paginators-1.json index b4e93b3d8cb..874292e01c6 100644 --- a/models/apis/elasticbeanstalk/2010-12-01/paginators-1.json +++ b/models/apis/elasticbeanstalk/2010-12-01/paginators-1.json @@ -20,6 +20,11 @@ }, "ListAvailableSolutionStacks": { "result_key": "SolutionStacks" + }, + "ListPlatformBranches": { + "input_token": "NextToken", + "limit_key": "MaxRecords", + "output_token": "NextToken" } } } \ No newline at end of file diff --git a/models/apis/elasticmapreduce/2009-03-31/api-2.json b/models/apis/elasticmapreduce/2009-03-31/api-2.json index 77e10fa51ff..ea85d09472b 100644 --- a/models/apis/elasticmapreduce/2009-03-31/api-2.json +++ b/models/apis/elasticmapreduce/2009-03-31/api-2.json @@ -167,6 +167,15 @@ {"shape":"InvalidRequestException"} ] }, + "GetManagedScalingPolicy":{ + "name":"GetManagedScalingPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetManagedScalingPolicyInput"}, + "output":{"shape":"GetManagedScalingPolicyOutput"} + }, "ListBootstrapActions":{ "name":"ListBootstrapActions", "http":{ @@ -316,6 +325,15 @@ {"shape":"InvalidRequestException"} ] }, + "PutManagedScalingPolicy":{ + "name":"PutManagedScalingPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutManagedScalingPolicyInput"}, + "output":{"shape":"PutManagedScalingPolicyOutput"} + }, "RemoveAutoScalingPolicy":{ "name":"RemoveAutoScalingPolicy", "http":{ @@ -325,6 +343,15 @@ "input":{"shape":"RemoveAutoScalingPolicyInput"}, "output":{"shape":"RemoveAutoScalingPolicyOutput"} }, + "RemoveManagedScalingPolicy":{ + "name":"RemoveManagedScalingPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RemoveManagedScalingPolicyInput"}, + "output":{"shape":"RemoveManagedScalingPolicyOutput"} + }, "RemoveTags":{ "name":"RemoveTags", "http":{ @@ -675,8 +702,8 @@ "RepoUpgradeOnBoot":{"shape":"RepoUpgradeOnBoot"}, "KerberosAttributes":{"shape":"KerberosAttributes"}, "ClusterArn":{"shape":"ArnType"}, - "StepConcurrencyLevel":{"shape":"Integer"}, - "OutpostArn":{"shape":"OptionalArnType"} + "OutpostArn":{"shape":"OptionalArnType"}, + "StepConcurrencyLevel":{"shape":"Integer"} } }, "ClusterId":{"type":"string"}, @@ -768,6 +795,28 @@ "LESS_THAN_OR_EQUAL" ] }, + "ComputeLimits":{ + "type":"structure", + "required":[ + "UnitType", + "MinimumCapacityUnits", + "MaximumCapacityUnits" + ], + "members":{ + "UnitType":{"shape":"ComputeLimitsUnitType"}, + "MinimumCapacityUnits":{"shape":"Integer"}, + "MaximumCapacityUnits":{"shape":"Integer"}, + "MaximumOnDemandCapacityUnits":{"shape":"Integer"} + } + }, + "ComputeLimitsUnitType":{ + "type":"string", + "enum":[ + "InstanceFleetUnits", + "Instances", + "VCPU" + ] + }, "Configuration":{ "type":"structure", "members":{ @@ -970,6 +1019,19 @@ "BlockPublicAccessConfigurationMetadata":{"shape":"BlockPublicAccessConfigurationMetadata"} } }, + "GetManagedScalingPolicyInput":{ + "type":"structure", + "required":["ClusterId"], + "members":{ + "ClusterId":{"shape":"ClusterId"} + } + }, + "GetManagedScalingPolicyOutput":{ + "type":"structure", + "members":{ + "ManagedScalingPolicy":{"shape":"ManagedScalingPolicy"} + } + }, "HadoopJarStepConfig":{ "type":"structure", "required":["Jar"], @@ -1650,6 +1712,12 @@ } }, "Long":{"type":"long"}, + "ManagedScalingPolicy":{ + "type":"structure", + "members":{ + "ComputeLimits":{"shape":"ComputeLimits"} + } + }, "Marker":{"type":"string"}, "MarketType":{ 
"type":"string", @@ -1772,6 +1840,22 @@ "members":{ } }, + "PutManagedScalingPolicyInput":{ + "type":"structure", + "required":[ + "ClusterId", + "ManagedScalingPolicy" + ], + "members":{ + "ClusterId":{"shape":"ClusterId"}, + "ManagedScalingPolicy":{"shape":"ManagedScalingPolicy"} + } + }, + "PutManagedScalingPolicyOutput":{ + "type":"structure", + "members":{ + } + }, "RemoveAutoScalingPolicyInput":{ "type":"structure", "required":[ @@ -1788,6 +1872,18 @@ "members":{ } }, + "RemoveManagedScalingPolicyInput":{ + "type":"structure", + "required":["ClusterId"], + "members":{ + "ClusterId":{"shape":"ClusterId"} + } + }, + "RemoveManagedScalingPolicyOutput":{ + "type":"structure", + "members":{ + } + }, "RemoveTagsInput":{ "type":"structure", "required":[ @@ -1842,7 +1938,8 @@ "EbsRootVolumeSize":{"shape":"Integer"}, "RepoUpgradeOnBoot":{"shape":"RepoUpgradeOnBoot"}, "KerberosAttributes":{"shape":"KerberosAttributes"}, - "StepConcurrencyLevel":{"shape":"Integer"} + "StepConcurrencyLevel":{"shape":"Integer"}, + "ManagedScalingPolicy":{"shape":"ManagedScalingPolicy"} } }, "RunJobFlowOutput":{ diff --git a/models/apis/elasticmapreduce/2009-03-31/docs-2.json b/models/apis/elasticmapreduce/2009-03-31/docs-2.json index e5c0b4f1788..7ba9f3cdd08 100644 --- a/models/apis/elasticmapreduce/2009-03-31/docs-2.json +++ b/models/apis/elasticmapreduce/2009-03-31/docs-2.json @@ -14,6 +14,7 @@ "DescribeSecurityConfiguration": "Provides the details of a security configuration by returning the configuration JSON.
", "DescribeStep": "Provides more detail about the cluster step.
", "GetBlockPublicAccessConfiguration": "Returns the Amazon EMR block public access configuration for your AWS account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
", + "GetManagedScalingPolicy": "Fetches the attached managed scaling policy for an Amazon EMR cluster.
", "ListBootstrapActions": "Provides information about the bootstrap actions associated with a cluster.
", "ListClusters": "Provides the status of all clusters visible to this AWS account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.
", "ListInstanceFleets": "Lists all available details about the instance fleets in a cluster.
The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.
ModifyInstanceGroups modifies the number of nodes and configuration settings of an instance group. The input parameters include the new target instance count for the group and the instance group ID. The call will either succeed or fail atomically.
", "PutAutoScalingPolicy": "Creates or updates an automatic scaling policy for a core instance group or task instance group in an Amazon EMR cluster. The automatic scaling policy defines how an instance group dynamically adds and terminates EC2 instances in response to the value of a CloudWatch metric.
", "PutBlockPublicAccessConfiguration": "Creates or updates an Amazon EMR block public access configuration for your AWS account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
", + "PutManagedScalingPolicy": "Creates or updates a managed scaling policy for an Amazon EMR cluster. The managed scaling policy defines the limits for resources, such as EC2 instances that can be added or terminated from a cluster. The policy only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", "RemoveAutoScalingPolicy": "Removes an automatic scaling policy from a specified instance group within an EMR cluster.
", + "RemoveManagedScalingPolicy": "Removes a managed scaling policy from a specified EMR cluster.
", "RemoveTags": "Removes tags from an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tag Clusters.
The following example removes the stack tag with value Prod from a cluster:
", "RunJobFlow": "RunJobFlow creates and starts running a new cluster (job flow). The cluster runs the steps specified. After the steps complete, the cluster stops and the HDFS partition is lost. To prevent loss of data, configure the last step of the job flow to store results in Amazon S3. If the JobFlowInstancesConfig KeepJobFlowAliveWhenNoSteps
parameter is set to TRUE
, the cluster transitions to the WAITING state rather than shutting down after the steps have completed.
For additional protection, you can set the JobFlowInstancesConfig TerminationProtected
parameter to TRUE
to lock the cluster and prevent it from being terminated by API call, user intervention, or in the event of a job flow error.
A maximum of 256 steps are allowed in each job flow.
If your cluster is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, see Add More than 256 Steps to a Cluster in the Amazon EMR Management Guide.
For long running clusters, we recommend that you periodically store your results.
The instance fleets configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions. The RunJobFlow request can contain InstanceFleets parameters or InstanceGroups parameters, but not both.
SetTerminationProtection locks a cluster (job flow) so the EC2 instances in the cluster cannot be terminated by user intervention, an API call, or in the event of a job-flow error. The cluster still terminates upon successful completion of the job flow. Calling SetTerminationProtection
on a cluster is similar to calling the Amazon EC2 DisableAPITermination
API on all EC2 instances in a cluster.
SetTerminationProtection
is used to prevent accidental termination of a cluster and to ensure that in the event of an error, the instances persist so that you can recover any data stored in their ephemeral instance storage.
To terminate a cluster that has been locked by setting SetTerminationProtection
to true
, you must first unlock the job flow by a subsequent call to SetTerminationProtection
in which you set the value to false
.
For more information, see Managing Cluster Termination in the Amazon EMR Management Guide.
", @@ -154,8 +157,8 @@ "BlockPublicAccessConfiguration": { "base": "A configuration for Amazon EMR block public access. When BlockPublicSecurityGroupRules
is set to true
, Amazon EMR prevents cluster creation if one of the cluster's security groups has a rule that allows inbound traffic from 0.0.0.0/0 or ::/0 on a port, unless the port is specified as an exception using PermittedPublicSecurityGroupRuleRanges
.
A configuration for Amazon EMR block public access. The configuration applies to all clusters created in your account for the current Region. The configuration specifies whether block public access is enabled. If block public access is enabled, security groups associated with the cluster cannot have rules that allow inbound traffic from 0.0.0.0/0 or ::/0 on a port, unless the port is specified as an exception using PermittedPublicSecurityGroupRuleRanges
in the BlockPublicAccessConfiguration
. By default, Port 22 (SSH) is an exception, and public access is allowed on this port. You can change this by updating the block public access configuration to remove the exception.
A configuration for Amazon EMR block public access. The configuration applies to all clusters created in your account for the current Region. The configuration specifies whether block public access is enabled. If block public access is enabled, security groups associated with the cluster cannot have rules that allow inbound traffic from 0.0.0.0/0 or ::/0 on a port, unless the port is specified as an exception using PermittedPublicSecurityGroupRuleRanges
in the BlockPublicAccessConfiguration
. By default, Port 22 (SSH) is an exception, and public access is allowed on this port. You can change this by updating BlockPublicSecurityGroupRules
to remove the exception.
A configuration for Amazon EMR block public access. The configuration applies to all clusters created in your account for the current Region. The configuration specifies whether block public access is enabled. If block public access is enabled, security groups associated with the cluster cannot have rules that allow inbound traffic from 0.0.0.0/0 or ::/0 on a port, unless the port is specified as an exception using PermittedPublicSecurityGroupRuleRanges
in the BlockPublicAccessConfiguration
. By default, Port 22 (SSH) is an exception, and public access is allowed on this port. You can change this by updating the block public access configuration to remove the exception.
For accounts that created clusters in a Region before November 25, 2019, block public access is disabled by default in that Region. To use this feature, you must manually enable and configure it. For accounts that did not create an EMR cluster in a Region before this date, block public access is enabled by default in that Region.
A configuration for Amazon EMR block public access. The configuration applies to all clusters created in your account for the current Region. The configuration specifies whether block public access is enabled. If block public access is enabled, security groups associated with the cluster cannot have rules that allow inbound traffic from 0.0.0.0/0 or ::/0 on a port, unless the port is specified as an exception using PermittedPublicSecurityGroupRuleRanges
in the BlockPublicAccessConfiguration
. By default, Port 22 (SSH) is an exception, and public access is allowed on this port. You can change this by updating BlockPublicSecurityGroupRules
to remove the exception.
For accounts that created clusters in a Region before November 25, 2019, block public access is disabled by default in that Region. To use this feature, you must manually enable and configure it. For accounts that did not create an EMR cluster in a Region before this date, block public access is enabled by default in that Region.
The unique identifier for the cluster.
", "DescribeClusterInput$ClusterId": "The identifier of the cluster to describe.
", "DescribeStepInput$ClusterId": "The identifier of the cluster with steps to describe.
", + "GetManagedScalingPolicyInput$ClusterId": "Specifies the ID of the cluster for which the managed scaling policy will be fetched.
", "ListBootstrapActionsInput$ClusterId": "The cluster identifier for the bootstrap actions to list.
", "ListInstanceFleetsInput$ClusterId": "The unique identifier of the cluster.
", "ListInstanceGroupsInput$ClusterId": "The identifier of the cluster for which to list the instance groups.
", @@ -270,7 +274,9 @@ "ModifyInstanceGroupsInput$ClusterId": "The ID of the cluster to which the instance group belongs.
", "PutAutoScalingPolicyInput$ClusterId": "Specifies the ID of a cluster. The instance group to which the automatic scaling policy is applied is within this cluster.
", "PutAutoScalingPolicyOutput$ClusterId": "Specifies the ID of a cluster. The instance group to which the automatic scaling policy is applied is within this cluster.
", - "RemoveAutoScalingPolicyInput$ClusterId": "Specifies the ID of a cluster. The instance group to which the automatic scaling policy is applied is within this cluster.
" + "PutManagedScalingPolicyInput$ClusterId": "Specifies the ID of an EMR cluster where the managed scaling policy is attached.
", + "RemoveAutoScalingPolicyInput$ClusterId": "Specifies the ID of a cluster. The instance group to which the automatic scaling policy is applied is within this cluster.
", + "RemoveManagedScalingPolicyInput$ClusterId": "Specifies the ID of the cluster from which the managed scaling policy will be removed.
" } }, "ClusterState": { @@ -341,6 +347,18 @@ "CloudWatchAlarmDefinition$ComparisonOperator": "Determines how the metric specified by MetricName
is compared to the value specified by Threshold
.
The EC2 unit limits for a managed scaling policy. The managed scaling activity of a cluster cannot be above or below these limits. The limit only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", + "refs": { + "ManagedScalingPolicy$ComputeLimits": "The EC2 unit limits for a managed scaling policy. The managed scaling activity of a cluster is not allowed to go above or below these limits. The limit only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
" + } + }, + "ComputeLimitsUnitType": { + "base": null, + "refs": { + "ComputeLimits$UnitType": "The unit type used for specifying a managed scaling policy.
" + } + }, "Configuration": { "base": "Amazon EMR releases 4.x or later.
An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file. For more information, see Configuring Applications.
", "refs": { @@ -552,6 +570,16 @@ "refs": { } }, + "GetManagedScalingPolicyInput": { + "base": null, + "refs": { + } + }, + "GetManagedScalingPolicyOutput": { + "base": null, + "refs": { + } + }, "HadoopJarStepConfig": { "base": "A job flow step consisting of a JAR file whose main function will be executed. The main function submits a job for Hadoop to execute and waits for the job to finish or fail.
", "refs": { @@ -885,6 +913,9 @@ "Cluster$EbsRootVolumeSize": "The size, in GiB, of the EBS root device volume of the Linux AMI that is used for each EC2 instance. Available in Amazon EMR version 4.x and later.
", "Cluster$StepConcurrencyLevel": "Specifies the number of steps that can be executed concurrently.
", "ClusterSummary$NormalizedInstanceHours": "An approximation of the cost of the cluster, represented in m1.small/hours. This value is incremented one time for every hour an m1.small instance runs. Larger instances are weighted more, so an EC2 instance that is roughly four times more expensive would result in the normalized instance hours being incremented by four. This result is only an approximation and does not reflect the actual billing rate.
", + "ComputeLimits$MinimumCapacityUnits": "The lower boundary of EC2 units. It is measured through VCPU cores or instances for instance groups and measured through units for instance fleets. Managed scaling activities are not allowed beyond this boundary. The limit only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", + "ComputeLimits$MaximumCapacityUnits": "The upper boundary of EC2 units. It is measured through VCPU cores or instances for instance groups and measured through units for instance fleets. Managed scaling activities are not allowed beyond this boundary. The limit only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", + "ComputeLimits$MaximumOnDemandCapacityUnits": "The upper boundary of on-demand EC2 units. It is measured through VCPU cores or instances for instance groups and measured through units for instance fleets. The on-demand units are not allowed to scale beyond this boundary. The limit only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", "EbsBlockDeviceConfig$VolumesPerInstance": "Number of EBS volumes with a specific volume configuration that will be associated with every instance in the instance group
", "InstanceGroup$RequestedInstanceCount": "The target number of instances for the instance group.
", "InstanceGroup$RunningInstanceCount": "The number of instances currently running in this instance group.
", @@ -1063,6 +1094,14 @@ "InstanceGroup$LastSuccessfullyAppliedConfigurationsVersion": "The version number of a configuration specification that was successfully applied for an instance group last time.
" } }, + "ManagedScalingPolicy": { + "base": "Managed scaling policy for an Amazon EMR cluster. The policy specifies the limits for resources that can be added or terminated from a cluster. The policy only applies to the core and task nodes. The master node cannot be scaled after initial configuration.
", + "refs": { + "GetManagedScalingPolicyOutput$ManagedScalingPolicy": "Specifies the managed scaling policy that is attached to an Amazon EMR cluster.
", + "PutManagedScalingPolicyInput$ManagedScalingPolicy": "Specifies the constraints for the managed scaling policy.
", + "RunJobFlowInput$ManagedScalingPolicy": "The specified managed scaling policy for an Amazon EMR cluster.
" + } + }, "Marker": { "base": null, "refs": { @@ -1191,6 +1230,16 @@ "refs": { } }, + "PutManagedScalingPolicyInput": { + "base": null, + "refs": { + } + }, + "PutManagedScalingPolicyOutput": { + "base": null, + "refs": { + } + }, "RemoveAutoScalingPolicyInput": { "base": null, "refs": { @@ -1201,6 +1250,16 @@ "refs": { } }, + "RemoveManagedScalingPolicyInput": { + "base": null, + "refs": { + } + }, + "RemoveManagedScalingPolicyOutput": { + "base": null, + "refs": { + } + }, "RemoveTagsInput": { "base": "This input identifies a cluster and a list of tags to remove.
", "refs": { diff --git a/models/apis/es/2015-01-01/api-2.json b/models/apis/es/2015-01-01/api-2.json index 9d927494844..855eb8bb9e4 100644 --- a/models/apis/es/2015-01-01/api-2.json +++ b/models/apis/es/2015-01-01/api-2.json @@ -24,6 +24,23 @@ {"shape":"InternalException"} ] }, + "AssociatePackage":{ + "name":"AssociatePackage", + "http":{ + "method":"POST", + "requestUri":"/2015-01-01/packages/associate/{PackageID}/{DomainName}" + }, + "input":{"shape":"AssociatePackageRequest"}, + "output":{"shape":"AssociatePackageResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ] + }, "CancelElasticsearchServiceSoftwareUpdate":{ "name":"CancelElasticsearchServiceSoftwareUpdate", "http":{ @@ -57,6 +74,24 @@ {"shape":"ValidationException"} ] }, + "CreatePackage":{ + "name":"CreatePackage", + "http":{ + "method":"POST", + "requestUri":"/2015-01-01/packages" + }, + "input":{"shape":"CreatePackageRequest"}, + "output":{"shape":"CreatePackageResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"LimitExceededException"}, + {"shape":"InvalidTypeException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"} + ] + }, "DeleteElasticsearchDomain":{ "name":"DeleteElasticsearchDomain", "http":{ @@ -84,6 +119,23 @@ {"shape":"ValidationException"} ] }, + "DeletePackage":{ + "name":"DeletePackage", + "http":{ + "method":"DELETE", + "requestUri":"/2015-01-01/packages/{PackageID}" + }, + "input":{"shape":"DeletePackageRequest"}, + "output":{"shape":"DeletePackageResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ] + }, "DescribeElasticsearchDomain":{ "name":"DescribeElasticsearchDomain", "http":{ @@ -145,6 +197,22 @@ {"shape":"ValidationException"} ] }, + "DescribePackages":{ + "name":"DescribePackages", + "http":{ + "method":"POST", + "requestUri":"/2015-01-01/packages/describe" + }, + "input":{"shape":"DescribePackagesRequest"}, + "output":{"shape":"DescribePackagesResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"} + ] + }, "DescribeReservedElasticsearchInstanceOfferings":{ "name":"DescribeReservedElasticsearchInstanceOfferings", "http":{ @@ -175,6 +243,23 @@ {"shape":"DisabledOperationException"} ] }, + "DissociatePackage":{ + "name":"DissociatePackage", + "http":{ + "method":"POST", + "requestUri":"/2015-01-01/packages/dissociate/{PackageID}/{DomainName}" + }, + "input":{"shape":"DissociatePackageRequest"}, + "output":{"shape":"DissociatePackageResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ] + }, "GetCompatibleElasticsearchVersions":{ "name":"GetCompatibleElasticsearchVersions", "http":{ @@ -235,6 +320,22 @@ {"shape":"ValidationException"} ] }, + "ListDomainsForPackage":{ + "name":"ListDomainsForPackage", + "http":{ + "method":"GET", + "requestUri":"/2015-01-01/packages/{PackageID}/domains" + }, + 
"input":{"shape":"ListDomainsForPackageRequest"}, + "output":{"shape":"ListDomainsForPackageResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"} + ] + }, "ListElasticsearchInstanceTypes":{ "name":"ListElasticsearchInstanceTypes", "http":{ @@ -265,6 +366,22 @@ {"shape":"ValidationException"} ] }, + "ListPackagesForDomain":{ + "name":"ListPackagesForDomain", + "http":{ + "method":"GET", + "requestUri":"/2015-01-01/domain/{DomainName}/packages" + }, + "input":{"shape":"ListPackagesForDomainRequest"}, + "output":{"shape":"ListPackagesForDomainResponse"}, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"} + ] + }, "ListTags":{ "name":"ListTags", "http":{ @@ -362,6 +479,13 @@ }, "shapes":{ "ARN":{"type":"string"}, + "AccessDeniedException":{ + "type":"structure", + "members":{ + }, + "error":{"httpStatusCode":403}, + "exception":true + }, "AccessPoliciesStatus":{ "type":"structure", "required":[ @@ -437,6 +561,31 @@ "Status":{"shape":"OptionStatus"} } }, + "AssociatePackageRequest":{ + "type":"structure", + "required":[ + "PackageID", + "DomainName" + ], + "members":{ + "PackageID":{ + "shape":"PackageID", + "location":"uri", + "locationName":"PackageID" + }, + "DomainName":{ + "shape":"DomainName", + "location":"uri", + "locationName":"DomainName" + } + } + }, + "AssociatePackageResponse":{ + "type":"structure", + "members":{ + "DomainPackageDetails":{"shape":"DomainPackageDetails"} + } + }, "BaseException":{ "type":"structure", "members":{ @@ -490,6 +639,13 @@ "TargetVersions":{"shape":"ElasticsearchVersionList"} } }, + "ConflictException":{ + "type":"structure", + "members":{ + }, + "error":{"httpStatusCode":409}, + "exception":true + }, "CreateElasticsearchDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -516,6 +672,27 @@ "DomainStatus":{"shape":"ElasticsearchDomainStatus"} } }, + "CreatePackageRequest":{ + "type":"structure", + "required":[ + "PackageName", + "PackageType", + "PackageSource" + ], + "members":{ + "PackageName":{"shape":"PackageName"}, + "PackageType":{"shape":"PackageType"}, + "PackageDescription":{"shape":"PackageDescription"}, + "PackageSource":{"shape":"PackageSource"} + } + }, + "CreatePackageResponse":{ + "type":"structure", + "members":{ + "PackageDetails":{"shape":"PackageDetails"} + } + }, + "CreatedAt":{"type":"timestamp"}, "DeleteElasticsearchDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -533,6 +710,23 @@ "DomainStatus":{"shape":"ElasticsearchDomainStatus"} } }, + "DeletePackageRequest":{ + "type":"structure", + "required":["PackageID"], + "members":{ + "PackageID":{ + "shape":"PackageID", + "location":"uri", + "locationName":"PackageID" + } + } + }, + "DeletePackageResponse":{ + "type":"structure", + "members":{ + "PackageDetails":{"shape":"PackageDetails"} + } + }, "DeploymentCloseDateTimeStamp":{"type":"timestamp"}, "DeploymentStatus":{ "type":"string", @@ -624,6 +818,48 @@ "LimitsByRole":{"shape":"LimitsByRole"} } }, + "DescribePackagesFilter":{ + "type":"structure", + "members":{ + "Name":{"shape":"DescribePackagesFilterName"}, + "Value":{"shape":"DescribePackagesFilterValues"} + } + }, + "DescribePackagesFilterList":{ + "type":"list", + "member":{"shape":"DescribePackagesFilter"} + }, + "DescribePackagesFilterName":{ + "type":"string", + 
"enum":[ + "PackageID", + "PackageName", + "PackageStatus" + ] + }, + "DescribePackagesFilterValue":{ + "type":"string", + "pattern":"^[0-9a-zA-Z\\*\\.\\\\/\\?-]*$" + }, + "DescribePackagesFilterValues":{ + "type":"list", + "member":{"shape":"DescribePackagesFilterValue"} + }, + "DescribePackagesRequest":{ + "type":"structure", + "members":{ + "Filters":{"shape":"DescribePackagesFilterList"}, + "MaxResults":{"shape":"MaxResults"}, + "NextToken":{"shape":"NextToken"} + } + }, + "DescribePackagesResponse":{ + "type":"structure", + "members":{ + "PackageDetailsList":{"shape":"PackageDetailsList"}, + "NextToken":{"shape":"String"} + } + }, "DescribeReservedElasticsearchInstanceOfferingsRequest":{ "type":"structure", "members":{ @@ -685,6 +921,31 @@ "error":{"httpStatusCode":409}, "exception":true }, + "DissociatePackageRequest":{ + "type":"structure", + "required":[ + "PackageID", + "DomainName" + ], + "members":{ + "PackageID":{ + "shape":"PackageID", + "location":"uri", + "locationName":"PackageID" + }, + "DomainName":{ + "shape":"DomainName", + "location":"uri", + "locationName":"DomainName" + } + } + }, + "DissociatePackageResponse":{ + "type":"structure", + "members":{ + "DomainPackageDetails":{"shape":"DomainPackageDetails"} + } + }, "DomainEndpointOptions":{ "type":"structure", "members":{ @@ -728,6 +989,33 @@ "type":"list", "member":{"shape":"DomainName"} }, + "DomainPackageDetails":{ + "type":"structure", + "members":{ + "PackageID":{"shape":"PackageID"}, + "PackageName":{"shape":"PackageName"}, + "PackageType":{"shape":"PackageType"}, + "LastUpdated":{"shape":"LastUpdated"}, + "DomainName":{"shape":"DomainName"}, + "DomainPackageStatus":{"shape":"DomainPackageStatus"}, + "ReferencePath":{"shape":"ReferencePath"}, + "ErrorDetails":{"shape":"ErrorDetails"} + } + }, + "DomainPackageDetailsList":{ + "type":"list", + "member":{"shape":"DomainPackageDetails"} + }, + "DomainPackageStatus":{ + "type":"string", + "enum":[ + "ASSOCIATING", + "ASSOCIATION_FAILED", + "ACTIVE", + "DISSOCIATING", + "DISSOCIATION_FAILED" + ] + }, "Double":{"type":"double"}, "EBSOptions":{ "type":"structure", @@ -944,7 +1232,15 @@ "key":{"shape":"String"}, "value":{"shape":"ServiceUrl"} }, + "ErrorDetails":{ + "type":"structure", + "members":{ + "ErrorType":{"shape":"ErrorType"}, + "ErrorMessage":{"shape":"ErrorMessage"} + } + }, "ErrorMessage":{"type":"string"}, + "ErrorType":{"type":"string"}, "GUID":{ "type":"string", "pattern":"\\p{XDigit}{8}-\\p{XDigit}{4}-\\p{XDigit}{4}-\\p{XDigit}{4}-\\p{XDigit}{12}" @@ -1062,6 +1358,7 @@ "max":500, "min":1 }, + "LastUpdated":{"type":"timestamp"}, "LimitExceededException":{ "type":"structure", "members":{ @@ -1094,6 +1391,34 @@ "DomainNames":{"shape":"DomainInfoList"} } }, + "ListDomainsForPackageRequest":{ + "type":"structure", + "required":["PackageID"], + "members":{ + "PackageID":{ + "shape":"PackageID", + "location":"uri", + "locationName":"PackageID" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"maxResults" + }, + "NextToken":{ + "shape":"NextToken", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListDomainsForPackageResponse":{ + "type":"structure", + "members":{ + "DomainPackageDetailsList":{"shape":"DomainPackageDetailsList"}, + "NextToken":{"shape":"String"} + } + }, "ListElasticsearchInstanceTypesRequest":{ "type":"structure", "required":["ElasticsearchVersion"], @@ -1149,6 +1474,34 @@ "NextToken":{"shape":"NextToken"} } }, + "ListPackagesForDomainRequest":{ + "type":"structure", + 
"required":["DomainName"], + "members":{ + "DomainName":{ + "shape":"DomainName", + "location":"uri", + "locationName":"DomainName" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"maxResults" + }, + "NextToken":{ + "shape":"NextToken", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListPackagesForDomainResponse":{ + "type":"structure", + "members":{ + "DomainPackageDetailsList":{"shape":"DomainPackageDetailsList"}, + "NextToken":{"shape":"String"} + } + }, "ListTagsRequest":{ "type":"structure", "required":["ARN"], @@ -1248,6 +1601,57 @@ "PendingDeletion":{"shape":"Boolean"} } }, + "PackageDescription":{ + "type":"string", + "max":1024 + }, + "PackageDetails":{ + "type":"structure", + "members":{ + "PackageID":{"shape":"PackageID"}, + "PackageName":{"shape":"PackageName"}, + "PackageType":{"shape":"PackageType"}, + "PackageDescription":{"shape":"PackageDescription"}, + "PackageStatus":{"shape":"PackageStatus"}, + "CreatedAt":{"shape":"CreatedAt"}, + "ErrorDetails":{"shape":"ErrorDetails"} + } + }, + "PackageDetailsList":{ + "type":"list", + "member":{"shape":"PackageDetails"} + }, + "PackageID":{"type":"string"}, + "PackageName":{ + "type":"string", + "max":28, + "min":3, + "pattern":"[a-z][a-z0-9\\-]+" + }, + "PackageSource":{ + "type":"structure", + "members":{ + "S3BucketName":{"shape":"S3BucketName"}, + "S3Key":{"shape":"S3Key"} + } + }, + "PackageStatus":{ + "type":"string", + "enum":[ + "COPYING", + "COPY_FAILED", + "VALIDATING", + "VALIDATION_FAILED", + "AVAILABLE", + "DELETING", + "DELETED", + "DELETE_FAILED" + ] + }, + "PackageType":{ + "type":"string", + "enum":["TXT-DICTIONARY"] + }, "Password":{ "type":"string", "min":8, @@ -1284,6 +1688,7 @@ "type":"list", "member":{"shape":"RecurringCharge"} }, + "ReferencePath":{"type":"string"}, "RemoveTagsRequest":{ "type":"structure", "required":[ @@ -1366,6 +1771,12 @@ "max":2048, "min":20 }, + "S3BucketName":{ + "type":"string", + "max":63, + "min":3 + }, + "S3Key":{"type":"string"}, "ServiceSoftwareOptions":{ "type":"structure", "members":{ diff --git a/models/apis/es/2015-01-01/docs-2.json b/models/apis/es/2015-01-01/docs-2.json index 9c135101ffa..f57214672ef 100644 --- a/models/apis/es/2015-01-01/docs-2.json +++ b/models/apis/es/2015-01-01/docs-2.json @@ -3,22 +3,29 @@ "service": "Use the Amazon Elasticsearch Configuration API to create, configure, and manage Elasticsearch domains.
For sample code that uses the Configuration API, see the Amazon Elasticsearch Service Developer Guide. The guide also contains sample code for sending signed HTTP requests to the Elasticsearch APIs.
The endpoint for configuration service requests is region-specific: es.region.amazonaws.com. For example, es.us-east-1.amazonaws.com. For a current list of supported regions and endpoints, see Regions and Endpoints.
", "operations": { "AddTags": "Attaches tags to an existing Elasticsearch domain. Tags are a set of case-sensitive key value pairs. An Elasticsearch domain may have up to 10 tags. See Tagging Amazon Elasticsearch Service Domains for more information.
", + "AssociatePackage": "Associates a package with an Amazon ES domain.
", "CancelElasticsearchServiceSoftwareUpdate": "Cancels a scheduled service software update for an Amazon ES domain. You can only perform this operation before the AutomatedUpdateDate
and when the UpdateStatus
is in the PENDING_UPDATE
state.
Creates a new Elasticsearch domain. For more information, see Creating Elasticsearch Domains in the Amazon Elasticsearch Service Developer Guide.
", + "CreatePackage": "Create a package for use with Amazon ES domains.
", "DeleteElasticsearchDomain": "Permanently deletes the specified Elasticsearch domain and all of its data. Once a domain is deleted, it cannot be recovered.
", "DeleteElasticsearchServiceRole": "Deletes the service-linked role that Elasticsearch Service uses to manage and maintain VPC domains. Role deletion will fail if any existing VPC domains use the role. You must delete any such Elasticsearch domains before deleting the role. See Deleting Elasticsearch Service Role in VPC Endpoints for Amazon Elasticsearch Service Domains.
", + "DeletePackage": "Delete the package.
", "DescribeElasticsearchDomain": "Returns domain configuration information about the specified Elasticsearch domain, including the domain ID, domain endpoint, and domain ARN.
", "DescribeElasticsearchDomainConfig": "Provides cluster configuration information about the specified Elasticsearch domain, such as the state, creation date, update version, and update date for cluster options.
", "DescribeElasticsearchDomains": "Returns domain configuration information about the specified Elasticsearch domains, including the domain ID, domain endpoint, and domain ARN.
", "DescribeElasticsearchInstanceTypeLimits": " Describe Elasticsearch Limits for a given InstanceType and ElasticsearchVersion. When modifying existing Domain, specify the DomainName
to know what Limits are supported for modifying.
Describes all packages available to Amazon ES. Includes options for filtering, limiting the number of results, and pagination.
", "DescribeReservedElasticsearchInstanceOfferings": "Lists available reserved Elasticsearch instance offerings.
", "DescribeReservedElasticsearchInstances": "Returns information about reserved Elasticsearch instances for this account.
", + "DissociatePackage": "Dissociates a package from the Amazon ES domain.
", "GetCompatibleElasticsearchVersions": " Returns a list of upgrade compatible Elastisearch versions. You can optionally pass a DomainName
to get all upgrade compatible Elasticsearch versions for that specific domain.
Retrieves the complete history of the last 10 upgrades that were performed on the domain.
", "GetUpgradeStatus": "Retrieves the latest status of the last upgrade or upgrade eligibility check that was performed on the domain.
", "ListDomainNames": "Returns the name of all Elasticsearch domains owned by the current user's account.
", + "ListDomainsForPackage": "Lists all Amazon ES domains associated with the package.
", "ListElasticsearchInstanceTypes": "List all Elasticsearch instance types that are supported for given ElasticsearchVersion
", "ListElasticsearchVersions": "List all supported Elasticsearch versions
", + "ListPackagesForDomain": "Lists all packages associated with the Amazon ES domain.
", "ListTags": "Returns all tags for the given Elasticsearch domain.
", "PurchaseReservedElasticsearchInstanceOffering": "Allows you to purchase reserved Elasticsearch instances.
", "RemoveTags": "Removes the specified set of tags from the specified Elasticsearch domain.
", @@ -37,6 +44,11 @@ "RemoveTagsRequest$ARN": "Specifies the ARN
for the Elasticsearch domain from which you want to delete the specified tags.
An error occurred because the user does not have permissions to access the resource. Returns HTTP status code 403.
", + "refs": { + } + }, "AccessPoliciesStatus": { "base": "The configured access rules for the domain's document and search endpoints, and the current status of those rules.
", "refs": { @@ -95,6 +107,16 @@ "ElasticsearchDomainConfig$AdvancedSecurityOptions": "Specifies AdvancedSecurityOptions
for the domain.
Container for request parameters to AssociatePackage
operation.
Container for response returned by AssociatePackage
operation.
An error occurred while processing the request.
", "refs": { @@ -170,6 +192,11 @@ "CompatibleElasticsearchVersionsList$member": null } }, + "ConflictException": { + "base": "An error occurred because the client attempts to remove a resource that is currently in use. Returns HTTP status code 409.
", + "refs": { + } + }, "CreateElasticsearchDomainRequest": { "base": null, "refs": { @@ -180,6 +207,22 @@ "refs": { } }, + "CreatePackageRequest": { + "base": " Container for request parameters to CreatePackage
operation.
Container for response returned by CreatePackage
operation.
Timestamp indicating the creation date of the package.
" + } + }, "DeleteElasticsearchDomainRequest": { "base": "Container for the parameters to the DeleteElasticsearchDomain
operation. Specifies the name of the Elasticsearch domain that you want to delete.
Container for request parameters to DeletePackage
operation.
Container for response parameters to DeletePackage
operation.
Filter to apply in DescribePackage
response.
A list of DescribePackagesFilter
to filter the packages included in a DescribePackages
response.
Only returns packages that match the DescribePackagesFilterList
values.
Any field from PackageDetails
.
A list of values for the specified field.
" + } + }, + "DescribePackagesRequest": { + "base": " Container for request parameters to DescribePackage
operation.
Container for response returned by DescribePackages
operation.
Container for parameters to DescribeReservedElasticsearchInstanceOfferings
Container for request parameters to DissociatePackage
operation.
Container for response returned by DissociatePackage
operation.
Options to configure endpoint for the Elasticsearch domain.
", "refs": { @@ -303,19 +406,23 @@ "DomainName": { "base": "The name of an Elasticsearch domain. Domain names are unique across the domains owned by an account within an AWS region. Domain names start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen).
", "refs": { + "AssociatePackageRequest$DomainName": "Name of the domain that you want to associate the package with.
", "CancelElasticsearchServiceSoftwareUpdateRequest$DomainName": "The name of the domain that you want to stop the latest service software update on.
", "CreateElasticsearchDomainRequest$DomainName": "The name of the Elasticsearch domain that you are creating. Domain names are unique across the domains owned by an account within an AWS region. Domain names must start with a lowercase letter and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen).
", "DeleteElasticsearchDomainRequest$DomainName": "The name of the Elasticsearch domain that you want to permanently delete.
", "DescribeElasticsearchDomainConfigRequest$DomainName": "The Elasticsearch domain that you want to get information about.
", "DescribeElasticsearchDomainRequest$DomainName": "The name of the Elasticsearch domain for which you want information.
", "DescribeElasticsearchInstanceTypeLimitsRequest$DomainName": " DomainName represents the name of the Domain that we are trying to modify. This should be present only if we are querying for Elasticsearch Limits
for existing domain.
Name of the domain that you want to associate the package with.
", "DomainInfo$DomainName": " Specifies the DomainName
.
Name of the domain you've associated a package with.
", "ElasticsearchDomainStatus$DomainName": "The name of an Elasticsearch domain. Domain names are unique across the domains owned by an account within an AWS region. Domain names start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen).
", "GetCompatibleElasticsearchVersionsRequest$DomainName": null, "GetUpgradeHistoryRequest$DomainName": null, "GetUpgradeStatusRequest$DomainName": null, "ListElasticsearchInstanceTypesRequest$DomainName": "DomainName represents the name of the Domain that we are trying to modify. This should be present only if we are querying for list of available Elasticsearch instance types when modifying existing domain.
", + "ListPackagesForDomainRequest$DomainName": "The name of the domain for which you want to list associated packages.
", "StartElasticsearchServiceSoftwareUpdateRequest$DomainName": "The name of the domain that you want to update to the latest service software.
", "UpdateElasticsearchDomainConfigRequest$DomainName": "The name of the Elasticsearch domain that you are updating.
", "UpgradeElasticsearchDomainRequest$DomainName": null, @@ -328,6 +435,27 @@ "DescribeElasticsearchDomainsRequest$DomainNames": "The Elasticsearch domains for which you want information.
" } }, + "DomainPackageDetails": { + "base": "Information on a package that is associated with a domain.
", + "refs": { + "AssociatePackageResponse$DomainPackageDetails": "DomainPackageDetails
DomainPackageDetails
List of DomainPackageDetails
objects.
List of DomainPackageDetails
objects.
State of the association. Values are ASSOCIATING/ASSOCIATION_FAILED/ACTIVE/DISSOCIATING/DISSOCIATION_FAILED.
" + } + }, "Double": { "base": null, "refs": { @@ -461,10 +589,24 @@ "ElasticsearchDomainStatus$Endpoints": "Map containing the Elasticsearch domain endpoints used to submit index and search requests. Example key, value
: 'vpc','vpc-endpoint-h2dsd34efgyghrtguk5gt6j2foh4.us-east-1.es.amazonaws.com'
.
Additional information if the package is in an error state. Null otherwise.
", + "PackageDetails$ErrorDetails": "Additional information if the package is in an error state. Null otherwise.
" + } + }, "ErrorMessage": { "base": null, "refs": { - "BaseException$message": "A description of the error.
" + "BaseException$message": "A description of the error.
", + "ErrorDetails$ErrorMessage": null + } + }, + "ErrorType": { + "base": null, + "refs": { + "ErrorDetails$ErrorType": null } }, "GUID": { @@ -586,6 +728,12 @@ "EncryptionAtRestOptions$KmsKeyId": "Specifies the KMS Key ID for Encryption At Rest options.
" } }, + "LastUpdated": { + "base": null, + "refs": { + "DomainPackageDetails$LastUpdated": "Timestamp of the most-recent update to the association status.
" + } + }, "LimitExceededException": { "base": "An exception for trying to create more than allowed resources or sub-resources. Gives http status code of 409.
", "refs": { @@ -628,6 +776,16 @@ "refs": { } }, + "ListDomainsForPackageRequest": { + "base": " Container for request parameters to ListDomainsForPackage
operation.
Container for response parameters to ListDomainsForPackage
operation.
Container for the parameters to the ListElasticsearchInstanceTypes
operation.
Container for request parameters to ListPackagesForDomain
operation.
Container for response parameters to ListPackagesForDomain
operation.
Container for the parameters to the ListTags
operation. Specify the ARN
for the Elasticsearch domain to which the tags that you want to view are attached.
Set this value to limit the number of results returned.
", "refs": { + "DescribePackagesRequest$MaxResults": "Limits results to a maximum number of packages.
", "DescribeReservedElasticsearchInstanceOfferingsRequest$MaxResults": "Set this value to limit the number of results returned. If not specified, defaults to 100.
", "DescribeReservedElasticsearchInstancesRequest$MaxResults": "Set this value to limit the number of results returned. If not specified, defaults to 100.
", "GetUpgradeHistoryRequest$MaxResults": null, + "ListDomainsForPackageRequest$MaxResults": "Limits results to a maximum number of domains.
", "ListElasticsearchInstanceTypesRequest$MaxResults": "Set this value to limit the number of results returned. Value provided must be greater than 30 else it wont be honored.
", - "ListElasticsearchVersionsRequest$MaxResults": "Set this value to limit the number of results returned. Value provided must be greater than 10 else it wont be honored.
" + "ListElasticsearchVersionsRequest$MaxResults": "Set this value to limit the number of results returned. Value provided must be greater than 10 else it wont be honored.
", + "ListPackagesForDomainRequest$MaxResults": "Limits results to a maximum number of packages.
" } }, "MaximumInstanceCount": { @@ -716,14 +887,17 @@ "NextToken": { "base": "Paginated APIs accepts NextToken input to returns next page results and provides a NextToken output in the response which can be used by the client to retrieve more results.
", "refs": { + "DescribePackagesRequest$NextToken": "Used for pagination. Only necessary if a previous API call includes a non-null NextToken value. If provided, returns results for the next page.
", "DescribeReservedElasticsearchInstanceOfferingsRequest$NextToken": "NextToken should be sent in case if earlier API call produced result containing NextToken. It is used for pagination.
", "DescribeReservedElasticsearchInstanceOfferingsResponse$NextToken": "Provides an identifier to allow retrieval of paginated results.
", "DescribeReservedElasticsearchInstancesRequest$NextToken": "NextToken should be sent in case if earlier API call produced result containing NextToken. It is used for pagination.
", "GetUpgradeHistoryRequest$NextToken": null, + "ListDomainsForPackageRequest$NextToken": "Used for pagination. Only necessary if a previous API call includes a non-null NextToken value. If provided, returns results for the next page.
", "ListElasticsearchInstanceTypesRequest$NextToken": "NextToken should be sent in case if earlier API call produced result containing NextToken. It is used for pagination.
", "ListElasticsearchInstanceTypesResponse$NextToken": "In case if there are more results available NextToken would be present, make further request to the same API with received NextToken to paginate remaining results.
", "ListElasticsearchVersionsRequest$NextToken": null, - "ListElasticsearchVersionsResponse$NextToken": null + "ListElasticsearchVersionsResponse$NextToken": null, + "ListPackagesForDomainRequest$NextToken": "Used for pagination. Only necessary if a previous API call includes a non-null NextToken value. If provided, returns results for the next page.
" } }, "NodeToNodeEncryptionOptions": { @@ -764,6 +938,66 @@ "VPCDerivedInfoStatus$Status": "Specifies the status of the VPC options for the specified Elasticsearch domain.
" } }, + "PackageDescription": { + "base": null, + "refs": { + "CreatePackageRequest$PackageDescription": "Description of the package.
", + "PackageDetails$PackageDescription": "User-specified description of the package.
" + } + }, + "PackageDetails": { + "base": "Basic information about a package.
", + "refs": { + "CreatePackageResponse$PackageDetails": "Information about the package PackageDetails
.
PackageDetails
List of PackageDetails
objects.
Internal ID of the package that you want to associate with a domain. Use DescribePackages
to find this value.
Internal ID of the package that you want to delete. Use DescribePackages
to find this value.
Internal ID of the package that you want to associate with a domain. Use DescribePackages
to find this value.
Internal ID of the package.
", + "ListDomainsForPackageRequest$PackageID": "The package for which to list domains.
", + "PackageDetails$PackageID": "Internal ID of the package.
" + } + }, + "PackageName": { + "base": null, + "refs": { + "CreatePackageRequest$PackageName": "Unique identifier for the package.
", + "DomainPackageDetails$PackageName": "User specified name of the package.
", + "PackageDetails$PackageName": "User specified name of the package.
" + } + }, + "PackageSource": { + "base": "The S3 location for importing the package specified as S3BucketName
and S3Key
The customer S3 location PackageSource
for importing the package.
Current state of the package. Values are COPYING/COPY_FAILED/AVAILABLE/DELETING/DELETE_FAILED
" + } + }, + "PackageType": { + "base": null, + "refs": { + "CreatePackageRequest$PackageType": "Type of package. Currently supports only TXT-DICTIONARY.
", + "DomainPackageDetails$PackageType": "Currently supports only TXT-DICTIONARY.
", + "PackageDetails$PackageType": "Currently supports only TXT-DICTIONARY.
" + } + }, "Password": { "base": null, "refs": { @@ -802,6 +1036,12 @@ "ReservedElasticsearchInstanceOffering$RecurringCharges": "The charge to your account regardless of whether you are creating any domains using the instance offering.
" } }, + "ReferencePath": { + "base": null, + "refs": { + "DomainPackageDetails$ReferencePath": "The relative path on Amazon ES nodes, which can be used as synonym_path when the package is synonym file.
" + } + }, "RemoveTagsRequest": { "base": "Container for the parameters to the RemoveTags
operation. Specify the ARN
for the Elasticsearch domain from which you want to remove the specified TagKey
.
Specifies the role ARN that provides Elasticsearch permissions for accessing Cognito resources.
" } }, + "S3BucketName": { + "base": null, + "refs": { + "PackageSource$S3BucketName": "Name of the bucket containing the package.
" + } + }, + "S3Key": { + "base": null, + "refs": { + "PackageSource$S3Key": "Key (file name) of the package.
" + } + }, "ServiceSoftwareOptions": { "base": "The current options of an Elasticsearch domain service software options.
", "refs": { @@ -949,9 +1201,12 @@ "refs": { "AdvancedOptions$key": null, "AdvancedOptions$value": null, + "DescribePackagesResponse$NextToken": null, "DescribeReservedElasticsearchInstancesResponse$NextToken": "Provides an identifier to allow retrieval of paginated results.
", "EndpointsMap$key": null, "GetUpgradeHistoryResponse$NextToken": "Pagination token that needs to be supplied to the next call to get the next page of results
", + "ListDomainsForPackageResponse$NextToken": null, + "ListPackagesForDomainResponse$NextToken": "Pagination token that needs to be supplied to the next call to get the next page of results.
", "RecurringCharge$RecurringChargeFrequency": "The frequency of the recurring charge.
", "ReservedElasticsearchInstance$ReservedElasticsearchInstanceOfferingId": "The offering identifier.
", "ReservedElasticsearchInstance$CurrencyCode": "The currency code for the reserved Elasticsearch instance offering.
", diff --git a/models/apis/es/2015-01-01/paginators-1.json b/models/apis/es/2015-01-01/paginators-1.json index a8442e1eb5f..e360c36eb9a 100644 --- a/models/apis/es/2015-01-01/paginators-1.json +++ b/models/apis/es/2015-01-01/paginators-1.json @@ -1,5 +1,10 @@ { "pagination": { + "DescribePackages": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "DescribeReservedElasticsearchInstanceOfferings": { "input_token": "NextToken", "output_token": "NextToken", @@ -15,6 +20,11 @@ "output_token": "NextToken", "limit_key": "MaxResults" }, + "ListDomainsForPackage": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListElasticsearchInstanceTypes": { "input_token": "NextToken", "output_token": "NextToken", @@ -24,6 +34,11 @@ "input_token": "NextToken", "output_token": "NextToken", "limit_key": "MaxResults" + }, + "ListPackagesForDomain": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" } } } diff --git a/models/apis/fms/2018-01-01/api-2.json b/models/apis/fms/2018-01-01/api-2.json index 724166c7a50..3c7b711245f 100644 --- a/models/apis/fms/2018-01-01/api-2.json +++ b/models/apis/fms/2018-01-01/api-2.json @@ -531,8 +531,9 @@ }, "ManagedServiceData":{ "type":"string", - "max":1024, - "min":1 + "max":4096, + "min":1, + "pattern":".*" }, "MemberAccounts":{ "type":"list", @@ -737,6 +738,7 @@ "type":"string", "enum":[ "WAF", + "WAFV2", "SHIELD_ADVANCED", "SECURITY_GROUPS_COMMON", "SECURITY_GROUPS_CONTENT_AUDIT", diff --git a/models/apis/fms/2018-01-01/docs-2.json b/models/apis/fms/2018-01-01/docs-2.json index e9964821b6c..19fb2bf51bf 100644 --- a/models/apis/fms/2018-01-01/docs-2.json +++ b/models/apis/fms/2018-01-01/docs-2.json @@ -24,7 +24,7 @@ "AWSAccountId": { "base": null, "refs": { - "AssociateAdminAccountRequest$AdminAccount": "The AWS account ID to associate with AWS Firewall Manager as the AWS Firewall Manager administrator account. This can be an AWS Organizations master account or a member account. For more information about AWS Organizations and master accounts, see Managing the AWS Accounts in Your Organization.
", + "AssociateAdminAccountRequest$AdminAccount": "The AWS account ID to associate with AWS Firewall Manager as the AWS Firewall Manager administrator account. This can be an AWS Organizations master account or a member account. For more information about AWS Organizations and master accounts, see Managing the AWS Accounts in Your Organization.
", "GetAdminAccountResponse$AdminAccount": "The AWS account that is set as the AWS Firewall Manager administrator.
", "GetComplianceDetailRequest$MemberAccount": "The AWS account that owns the resources that you want to get the details for.
", "GetProtectionStatusRequest$MemberAccountId": "The AWS account that is in scope of the policy that you want to get the details for.
", @@ -270,7 +270,7 @@ "ManagedServiceData": { "base": null, "refs": { - "SecurityServicePolicyData$ManagedServiceData": "Details about the service that are specific to the service type, in JSON format. For service type SHIELD_ADVANCED
, this is an empty string.
Example: WAF
ManagedServiceData\": \"{\\\"type\\\": \\\"WAF\\\", \\\"ruleGroups\\\": [{\\\"id\\\": \\\"12345678-1bcd-9012-efga-0987654321ab\\\", \\\"overrideAction\\\" : {\\\"type\\\": \\\"COUNT\\\"}}], \\\"defaultAction\\\": {\\\"type\\\": \\\"BLOCK\\\"}}
Example: SECURITY_GROUPS_COMMON
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_COMMON\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_COMMON\\\",\\\"revertManualSecurityGroupChanges\\\":false,\\\"exclusiveResourceSecurityGroupManagement\\\":false,\\\"securityGroups\\\":[{\\\"id\\\":\\\" sg-000e55995d61a06bd\\\"}]}\"},\"RemediationEnabled\":false,\"ResourceType\":\"AWS::EC2::NetworkInterface\"}
Example: SECURITY_GROUPS_CONTENT_AUDIT
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_CONTENT_AUDIT\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_CONTENT_AUDIT\\\",\\\"securityGroups\\\":[{\\\"id\\\":\\\" sg-000e55995d61a06bd \\\"}],\\\"securityGroupAction\\\":{\\\"type\\\":\\\"ALLOW\\\"}}\"},\"RemediationEnabled\":false,\"ResourceType\":\"AWS::EC2::NetworkInterface\"}
The security group action for content audit can be ALLOW
or DENY
. For ALLOW
, all in-scope security group rules must be within the allowed range of the policy's security group rules. For DENY
, all in-scope security group rules must not contain a value or a range that matches a rule value or range in the policy security group.
Example: SECURITY_GROUPS_USAGE_AUDIT
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_USAGE_AUDIT\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_USAGE_AUDIT\\\",\\\"deleteUnusedSecurityGroups\\\":true,\\\"coalesceRedundantSecurityGroups\\\":true}\"},\"RemediationEnabled\":false,\"Resou rceType\":\"AWS::EC2::SecurityGroup\"}
Details about the service that are specific to the service type, in JSON format. For service type SHIELD_ADVANCED
, this is an empty string.
Example: WAFV2
\"SecurityServicePolicyData\": \"{ \\\"type\\\": \\\"WAFV2\\\", \\\"postProcessRuleGroups\\\": [ { \\\"managedRuleGroupIdentifier\\\": { \\\"managedRuleGroupName\\\": \\\"AWSManagedRulesAdminProtectionRuleSet\\\", \\\"vendor\\\": \\\"AWS\\\" } \\\"ruleGroupARN\\\": \\\"rule group arn\", \\\"overrideAction\\\": { \\\"type\\\": \\\"COUNT|\\\" }, \\\"excludedRules\\\": [ { \\\"name\\\" : \\\"EntityName\\\" } ], \\\"type\\\": \\\"ManagedRuleGroup|RuleGroup\\\" } ], \\\"preProcessRuleGroups\\\": [ { \\\"managedRuleGroupIdentifier\\\": { \\\"managedRuleGroupName\\\": \\\"AWSManagedRulesAdminProtectionRuleSet\\\", \\\"vendor\\\": \\\"AWS\\\" } \\\"ruleGroupARN\\\": \\\"rule group arn\\\", \\\"overrideAction\\\": { \\\"type\\\": \\\"COUNT\\\" }, \\\"excludedRules\\\": [ { \\\"name\\\" : \\\"EntityName\\\" } ], \\\"type\\\": \\\"ManagedRuleGroup|RuleGroup\\\" } ], \\\"defaultAction\\\": { \\\"type\\\": \\\"BLOCK\\\" }}\"
Example: WAF
\"ManagedServiceData\": \"{\\\"type\\\": \\\"WAF\\\", \\\"ruleGroups\\\": [{\\\"id\\\": \\\"12345678-1bcd-9012-efga-0987654321ab\\\", \\\"overrideAction\\\" : {\\\"type\\\": \\\"COUNT\\\"}}], \\\"defaultAction\\\": {\\\"type\\\": \\\"BLOCK\\\"}}
Example: SECURITY_GROUPS_COMMON
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_COMMON\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_COMMON\\\",\\\"revertManualSecurityGroupChanges\\\":false,\\\"exclusiveResourceSecurityGroupManagement\\\":false,\\\"securityGroups\\\":[{\\\"id\\\":\\\" sg-000e55995d61a06bd\\\"}]}\"},\"RemediationEnabled\":false,\"ResourceType\":\"AWS::EC2::NetworkInterface\"}
Example: SECURITY_GROUPS_CONTENT_AUDIT
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_CONTENT_AUDIT\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_CONTENT_AUDIT\\\",\\\"securityGroups\\\":[{\\\"id\\\":\\\" sg-000e55995d61a06bd \\\"}],\\\"securityGroupAction\\\":{\\\"type\\\":\\\"ALLOW\\\"}}\"},\"RemediationEnabled\":false,\"ResourceType\":\"AWS::EC2::NetworkInterface\"}
The security group action for content audit can be ALLOW
or DENY
. For ALLOW
, all in-scope security group rules must be within the allowed range of the policy's security group rules. For DENY
, all in-scope security group rules must not contain a value or a range that matches a rule value or range in the policy security group.
Example: SECURITY_GROUPS_USAGE_AUDIT
\"SecurityServicePolicyData\":{\"Type\":\"SECURITY_GROUPS_USAGE_AUDIT\",\"ManagedServiceData\":\"{\\\"type\\\":\\\"SECURITY_GROUPS_USAGE_AUDIT\\\",\\\"deleteUnusedSecurityGroups\\\":true,\\\"coalesceRedundantSecurityGroups\\\":true}\"},\"RemediationEnabled\":false,\"Resou rceType\":\"AWS::EC2::SecurityGroup\"}
Creates a version of the model using the specified model type.
", "CreateRule": "Creates a rule for use with the specified detector.
", "CreateVariable": "Creates a variable.
", - "DeleteDetectorVersion": "Deletes the detector version.
", + "DeleteDetector": "Deletes the detector. Before deleting a detector, you must first delete all detector versions and rule versions associated with the detector.
", + "DeleteDetectorVersion": "Deletes the detector version. You cannot delete detector versions that are in ACTIVE
status.
Deletes the specified event.
", + "DeleteRuleVersion": "Deletes the rule version. You cannot delete a rule version if it is used by an ACTIVE
or INACTIVE
detector version.
Gets all versions for a specified detector.
", "DescribeModelVersions": "Gets all of the model versions for the specified model type or for the specified model type and model ID. You can also get details for a single, specified model version.
", "GetDetectorVersion": "Gets a particular detector version.
", @@ -78,6 +80,11 @@ "refs": { } }, + "ConflictException": { + "base": "An exception indicating there was a conflict during a delete operation. The following delete operations can cause a conflict exception:
DeleteDetector: A conflict exception will occur if the detector has associated Rules
or DetectorVersions
. You can only delete a detector if it has no Rules
or DetectorVersions
.
DeleteDetectorVersion: A conflict exception will occur if the DetectorVersion
status is ACTIVE
.
DeleteRuleVersion: A conflict exception will occur if the RuleVersion
is in use by an associated ACTIVE
or INACTIVE DetectorVersion
.
The data type of the variable.
" } }, + "DeleteDetectorRequest": { + "base": null, + "refs": { + } + }, + "DeleteDetectorResult": { + "base": null, + "refs": { + } + }, "DeleteDetectorVersionRequest": { "base": null, "refs": { @@ -158,6 +175,16 @@ "refs": { } }, + "DeleteRuleVersionRequest": { + "base": null, + "refs": { + } + }, + "DeleteRuleVersionResult": { + "base": null, + "refs": { + } + }, "DescribeDetectorRequest": { "base": null, "refs": { @@ -396,6 +423,12 @@ "UpdateDetectorVersionRequest$modelVersions": "The model versions to include in the detector version.
" } }, + "ListOfRuleResults": { + "base": null, + "refs": { + "GetPredictionResult$ruleResults": "The rule results in the prediction.
" + } + }, "ListOfStrings": { "base": null, "refs": { @@ -403,6 +436,7 @@ "GetDetectorVersionResult$externalModelEndpoints": "The Amazon SageMaker model endpoints included in the detector version.
", "GetPredictionResult$outcomes": "The prediction outcomes.
", "LabelMapper$value": null, + "RuleResult$outcomes": "The outcomes of the matched rule, based on the rule execution mode.
", "UpdateDetectorVersionRequest$externalModelEndpoints": "The Amazon SageMaker model endpoints to include in the detector version.
" } }, @@ -657,6 +691,14 @@ "GetRulesResult$ruleDetails": "The details of the requested rule.
" } }, + "RuleExecutionMode": { + "base": null, + "refs": { + "CreateDetectorVersionRequest$ruleExecutionMode": "The rule execution mode for the rules included in the detector version.
You can define and edit the rule mode at the detector version level, when it is in draft status.
If you specify FIRST_MATCHED, Amazon Fraud Detector evaluates rules sequentially, first to last, stopping at the first matched rule. Amazon Fraud Detector then provides the outcomes for that single rule.
If you specify ALL_MATCHED, Amazon Fraud Detector evaluates all rules and returns the outcomes for all matched rules.
The default behavior is FIRST_MATCHED.
The execution mode of the rule in the detector.
FIRST_MATCHED indicates that Amazon Fraud Detector evaluates rules sequentially, first to last, stopping at the first matched rule. Amazon Fraud Detector then provides the outcomes for that single rule.
ALL_MATCHED indicates that Amazon Fraud Detector evaluates all rules and returns the outcomes for all matched rules. You can define and edit the rule mode at the detector version level, when it is in draft status.
The rule execution mode to add to the detector.
If you specify FIRST_MATCHED, Amazon Fraud Detector evaluates rules sequentially, first to last, stopping at the first matched rule. Amazon Fraud Detector then provides the outcomes for that single rule.
If you specify ALL_MATCHED, Amazon Fraud Detector evaluates all rules and returns the outcomes for all matched rules. You can define and edit the rule mode at the detector version level, when it is in draft status.
The default behavior is FIRST_MATCHED.
The rules to include in the detector version.
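As a rough illustration of the FIRST_MATCHED and ALL_MATCHED behavior described above, here is a small Go sketch; the Rule type and matching helper are assumptions made for the example and are not part of the Amazon Fraud Detector API.

```go
// Illustrative rule evaluation, not the service's internals.
package ruleexec

type Rule struct {
	ID       string
	Matches  func(event map[string]string) bool
	Outcomes []string
}

// firstMatched stops at the first rule that matches (FIRST_MATCHED).
func firstMatched(rules []Rule, event map[string]string) []string {
	for _, r := range rules {
		if r.Matches(event) {
			return r.Outcomes
		}
	}
	return nil
}

// allMatched evaluates every rule and collects outcomes from each match (ALL_MATCHED).
func allMatched(rules []Rule, event map[string]string) []string {
	var outcomes []string
	for _, r := range rules {
		if r.Matches(event) {
			outcomes = append(outcomes, r.Outcomes...)
		}
	}
	return outcomes
}
```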
" } }, + "RuleResult": { + "base": "The rule results.
", + "refs": { + "ListOfRuleResults$member": null + } + }, "RulesMaxResults": { "base": null, "refs": { @@ -859,7 +907,10 @@ "CreateModelVersionResult$modelId": "The model ID.
", "CreateRuleRequest$ruleId": "The rule ID.
", "CreateRuleRequest$detectorId": "The detector ID for the rule's parent detector.
", + "DeleteDetectorRequest$detectorId": "The ID of the detector to delete.
", "DeleteDetectorVersionRequest$detectorId": "The ID of the parent detector for the detector version to delete.
", + "DeleteRuleVersionRequest$detectorId": "The ID of the detector that includes the rule version to delete.
", + "DeleteRuleVersionRequest$ruleId": "The rule ID of the rule version to delete.
", "DescribeDetectorRequest$detectorId": "The detector ID.
", "DescribeDetectorResult$detectorId": "The detector ID.
", "DescribeModelVersionsRequest$modelId": "The model ID.
", @@ -903,6 +954,7 @@ "CreateDetectorVersionResult$detectorVersionId": "The ID for the created detector.
", "CreateModelVersionResult$modelVersionNumber": "The version of the model.
", "DeleteDetectorVersionRequest$detectorVersionId": "The ID of the detector version to delete.
", + "DeleteRuleVersionRequest$ruleVersion": "The rule version to delete.
", "DescribeModelVersionsRequest$modelVersionNumber": "The model version.
", "DetectorVersionSummary$detectorVersionId": "The detector version ID.
", "GetDetectorVersionRequest$detectorVersionId": "The detector version ID.
", @@ -941,6 +993,7 @@ "BatchCreateVariableError$message": "The error message.
", "BatchGetVariableError$name": "The error name.
", "BatchGetVariableError$message": "The error message.
", + "ConflictException$message": null, "CreateModelVersionResult$status": "The model version status.
", "CreateVariableRequest$name": "The name of the variable.
", "CreateVariableRequest$defaultValue": "The default value for the variable when no value is received.
", @@ -992,6 +1045,7 @@ "ResourceNotFoundException$message": null, "Role$arn": "The role ARN.
", "Role$name": "The role name.
", + "RuleResult$ruleId": "The rule ID that was matched, based on the rule execution mode.
", "ThrottlingException$message": null, "UpdateVariableRequest$name": "The name of the variable.
", "UpdateVariableRequest$defaultValue": "The new default value of the variable.
", diff --git a/models/apis/fsx/2018-03-01/api-2.json b/models/apis/fsx/2018-03-01/api-2.json index cd12f0d0a73..96255517704 100644 --- a/models/apis/fsx/2018-03-01/api-2.json +++ b/models/apis/fsx/2018-03-01/api-2.json @@ -8,6 +8,7 @@ "serviceFullName":"Amazon FSx", "serviceId":"FSx", "signatureVersion":"v4", + "signingName":"fsx", "targetPrefix":"AWSSimbaAPIService_v20180301", "uid":"fsx-2018-03-01" }, @@ -482,7 +483,8 @@ "SubnetIds":{"shape":"SubnetIds"}, "SecurityGroupIds":{"shape":"SecurityGroupIds"}, "Tags":{"shape":"Tags"}, - "WindowsConfiguration":{"shape":"CreateFileSystemWindowsConfiguration"} + "WindowsConfiguration":{"shape":"CreateFileSystemWindowsConfiguration"}, + "StorageType":{"shape":"StorageType"} } }, "CreateFileSystemFromBackupResponse":{ @@ -516,6 +518,7 @@ }, "FileSystemType":{"shape":"FileSystemType"}, "StorageCapacity":{"shape":"StorageCapacity"}, + "StorageType":{"shape":"StorageType"}, "SubnetIds":{"shape":"SubnetIds"}, "SecurityGroupIds":{"shape":"SecurityGroupIds"}, "Tags":{"shape":"Tags"}, @@ -829,6 +832,7 @@ "Lifecycle":{"shape":"FileSystemLifecycle"}, "FailureDetails":{"shape":"FileSystemFailureDetails"}, "StorageCapacity":{"shape":"StorageCapacity"}, + "StorageType":{"shape":"StorageType"}, "VpcId":{"shape":"VpcId"}, "SubnetIds":{"shape":"SubnetIds"}, "NetworkInterfaceIds":{"shape":"NetworkInterfaceIds"}, @@ -993,7 +997,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-4[a-fA-F0-9]{3}-[89aAbB][a-fA-F0-9]{3}-[a-fA-F0-9]{12}|arn:aws[a-z-]{0,7}:kms:[a-z]{2}-[a-z-]{4,}-\\d+:\\d{12}:(key|alias)\\/([a-fA-F0-9]{8}-[a-fA-F0-9]{4}-4[a-fA-F0-9]{3}-[89aAbB][a-fA-F0-9]{3}-[a-fA-F0-9]{12}|[a-zA-Z0-9:\\/_-]+)|alias\\/[a-zA-Z0-9:\\/_-]+$" + "pattern":"^.{1,2048}$" }, "LastUpdatedTime":{"type":"timestamp"}, "ListTagsForResourceRequest":{ @@ -1115,7 +1119,7 @@ "type":"string", "max":512, "min":8, - "pattern":"^arn:aws[a-z-]{0,7}:[A-Za-z0-9][A-za-z0-9_/.-]{0,62}:[A-za-z0-9_/.-]{0,63}:[A-za-z0-9_/.-]{0,63}:[A-Za-z0-9][A-za-z0-9_/.-]{0,127}$" + "pattern":"^arn:(?=[^:]+:fsx:[^:]+:\\d{12}:)((|(?=[a-z0-9-.]{1,63})(?!\\d{1,3}(\\.\\d{1,3}){3})(?![^:]*-{2})(?![^:]*-\\.)(?![^:]*\\.-)[a-z0-9].*(?Cancels an existing Amazon FSx for Lustre data repository task if that task is in either thePENDING
or EXECUTING
state. When you cancel a task, Amazon FSx does the following. Any files that FSx has already exported are not reverted.
FSx continues to export any files that are \"in-flight\" when the cancel operation is received.
FSx does not export any files that have not yet been exported.
Creates a backup of an existing Amazon FSx for Windows File Server file system. Creating regular backups for your file system is a best practice that complements the replication that Amazon FSx for Windows File Server performs for your file system. It also enables you to restore from user modification of data.
If a backup with the specified client request token exists, and the parameters match, this operation returns the description of the existing backup. If a backup with the specified client request token exists, and the parameters don't match, this operation returns IncompatibleParameterError
. If a backup with the specified client request token doesn't exist, CreateBackup
does the following:
Creates a new Amazon FSx backup with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the backup.
By using the idempotent operation, you can retry a CreateBackup
operation without the risk of creating an extra backup. This approach can be useful when an initial call fails in a way that makes it unclear whether a backup was created. If you use the same client request token and the initial call created a backup, the operation returns a successful result because all the parameters are the same.
The CreateBackup
operation returns while the backup's lifecycle state is still CREATING
. You can check the backup creation status by calling the DescribeBackups operation, which returns the backup state along with other information.
Creates an Amazon FSx for Lustre data repository task. You use data repository tasks to perform bulk operations between your Amazon FSx file system and its linked data repository. An example of a data repository task is exporting any data and metadata changes, including POSIX metadata, to files, directories, and symbolic links (symlinks) from your FSx file system to its linked data repository. A CreateDataRepositoryTask
operation will fail if a data repository is not linked to the FSx file system. To learn more about data repository tasks, see Using Data Repository Tasks. To learn more about linking a data repository to your file system, see Step 1: Create Your Amazon FSx for Lustre File System.
Creates an Amazon FSx for Lustre data repository task. You use data repository tasks to perform bulk operations between your Amazon FSx file system and its linked data repository. An example of a data repository task is exporting any data and metadata changes, including POSIX metadata, to files, directories, and symbolic links (symlinks) from your FSx file system to its linked data repository. A CreateDataRepositoryTask
operation will fail if a data repository is not linked to the FSx file system. To learn more about data repository tasks, see Using Data Repository Tasks. To learn more about linking a data repository to your file system, see Setting the Export Prefix.
Creates a new, empty Amazon FSx file system.
If a file system with the specified client request token exists and the parameters match, CreateFileSystem
returns the description of the existing file system. If a file system with the specified client request token exists and the parameters don't match, this call returns IncompatibleParameterError
. If a file system with the specified client request token doesn't exist, CreateFileSystem
does the following:
Creates a new, empty Amazon FSx file system with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the file system.
This operation requires a client request token in the request that Amazon FSx uses to ensure idempotent creation. This means that calling the operation multiple times with the same client request token has no effect. By using the idempotent operation, you can retry a CreateFileSystem
operation without the risk of creating an extra file system. This approach can be useful when an initial call fails in a way that makes it unclear whether a file system was created. Examples are if a transport level timeout occurred, or your connection was reset. If you use the same client request token and the initial call created a file system, the client receives success as long as the parameters are the same.
The CreateFileSystem
call returns while the file system's lifecycle state is still CREATING
. You can check the file-system creation status by calling the DescribeFileSystems operation, which returns the file system state along with other information.
Creates a new Amazon FSx file system from an existing Amazon FSx for Windows File Server backup.
If a file system with the specified client request token exists and the parameters match, this operation returns the description of the file system. If a file system with the specified client request token exists and the parameters don't match, this call returns IncompatibleParameterError
. If a file system with the specified client request token doesn't exist, this operation does the following:
Creates a new Amazon FSx file system from backup with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the file system.
Parameters like Active Directory, default share name, automatic backup, and backup settings default to the parameters of the file system that was backed up, unless overridden. You can explicitly supply other settings.
By using the idempotent operation, you can retry a CreateFileSystemFromBackup
call without the risk of creating an extra file system. This approach can be useful when an initial call fails in a way that makes it unclear whether a file system was created. Examples are if a transport level timeout occurred, or your connection was reset. If you use the same client request token and the initial call created a file system, the client receives success as long as the parameters are the same.
The CreateFileSystemFromBackup
call returns while the file system's lifecycle state is still CREATING
. You can check the file-system creation status by calling the DescribeFileSystems operation, which returns the file system state along with other information.
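The CreateFileSystem and CreateFileSystemFromBackup descriptions above both rely on reusing one client request token across retries, so an ambiguous failure can be retried without creating a second resource. A minimal Go sketch of that pattern follows, assuming a hypothetical fsxClient interface and input struct rather than the SDK's real types.

```go
// Sketch of retry-with-the-same-token idempotency; types are illustrative.
package fsxidempotency

import (
	"context"
	"crypto/rand"
	"fmt"
)

type createFileSystemInput struct {
	ClientRequestToken string
	FileSystemType     string
	StorageCapacity    int64
}

type fsxClient interface {
	CreateFileSystem(ctx context.Context, in createFileSystemInput) (fileSystemID string, err error)
}

// createWithRetry reuses one client request token across attempts, so a retry
// after an unclear failure returns the already-created file system instead of
// creating an extra one.
func createWithRetry(ctx context.Context, c fsxClient, attempts int) (string, error) {
	token, err := newToken()
	if err != nil {
		return "", err
	}
	in := createFileSystemInput{ClientRequestToken: token, FileSystemType: "LUSTRE", StorageCapacity: 1200}
	var lastErr error
	for i := 0; i < attempts; i++ {
		id, err := c.CreateFileSystem(ctx, in) // same token on every attempt
		if err == nil {
			return id, nil
		}
		lastErr = err
	}
	return "", lastErr
}

func newToken() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", b), nil
}
```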
Deletes an Amazon FSx for Windows File Server backup, deleting its contents. After deletion, the backup no longer exists, and its data is gone.
The DeleteBackup
call returns instantly. The backup will not show up in later DescribeBackups
calls.
The data in a deleted backup is also deleted and can't be recovered by any means.
Provides a report detailing the data repository task results of the files processed that match the criteria specified in the report Scope
parameter. FSx delivers the report to the file system's linked data repository in Amazon S3, using the path specified in the report Path
parameter. You can specify whether or not a report gets generated for a task using the Enabled
parameter.
Defines whether or not Amazon FSx provides a CompletionReport once the task has completed. A CompletionReport provides a detailed report on the files that Amazon FSx processed that meet the criteria specified by the Scope
parameter.
Defines whether or not Amazon FSx provides a CompletionReport once the task has completed. A CompletionReport provides a detailed report on the files that Amazon FSx processed that meet the criteria specified by the Scope
parameter. For more information, see Working with Task Completion Reports.
The Lustre configuration for the file system being created. This value is required if FileSystemType
is set to LUSTRE
.
The Lustre configuration for the file system being created.
", "refs": { "CreateFileSystemRequest$LustreConfiguration": null } @@ -215,7 +215,7 @@ "base": "The configuration object for the Microsoft Windows file system used in CreateFileSystem
and CreateFileSystemFromBackup
operations.
The configuration for this Microsoft Windows file system.
", - "CreateFileSystemRequest$WindowsConfiguration": "The Microsoft Windows configuration for the file system being created. This value is required if FileSystemType
is set to WINDOWS
.
The Microsoft Windows configuration for the file system being created.
" } }, "CreationTime": { @@ -230,7 +230,7 @@ "base": "The Domain Name Service (DNS) name for the file system. You can mount your file system using its DNS name.
", "refs": { "FileSystem$DNSName": "The DNS name for the file system.
", - "WindowsFileSystemConfiguration$RemoteAdministrationEndpoint": "For MULTI_AZ_1
deployment types, use this endpoint when performing administrative tasks on the file system using Amazon FSx Remote PowerShell.
For SINGLE_AZ_1
deployment types, this is the DNS name of the file system.
This endpoint is temporarily unavailable when the file system is undergoing maintenance.
" + "WindowsFileSystemConfiguration$RemoteAdministrationEndpoint": "For MULTI_AZ_1
deployment types, use this endpoint when performing administrative tasks on the file system using Amazon FSx Remote PowerShell.
For SINGLE_AZ_1
and SINGLE_AZ_2
deployment types, this is the DNS name of the file system.
This endpoint is temporarily unavailable when the file system is undergoing maintenance.
" } }, "DailyTime": { @@ -321,7 +321,7 @@ "DataRepositoryTaskPaths": { "base": null, "refs": { - "CreateDataRepositoryTaskRequest$Paths": "(Optional) The path or paths on the Amazon FSx file system to use when the data repository task is processed. The default path is the file system root directory.
", + "CreateDataRepositoryTaskRequest$Paths": "(Optional) The path or paths on the Amazon FSx file system to use when the data repository task is processed. The default path is the file system root directory. The paths you provide need to be relative to the mount point of the file system. If the mount point is /mnt/fsx
and /mnt/fsx/path1
is a directory or file on the file system you want to export, then the path to provide is path1
. If a path that you provide isn't valid, the task fails.
An array of paths on the Amazon FSx for Lustre file system that specify the data for the data repository task to process. For example, in an EXPORT_TO_REPOSITORY task, the paths specify which data to export to the linked data repository.
(Default) If Paths
is not specified, Amazon FSx uses the file system root directory.
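The Paths documentation above expects paths relative to the client's mount point (path1 for /mnt/fsx/path1 when the mount point is /mnt/fsx). Below is a small local Go helper sketch for deriving that form; it is a convenience for callers, not part of the FSx API.

```go
// Convert an absolute path under the mount point into the relative form the
// Paths parameter expects.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func taskPath(mountPoint, absPath string) (string, error) {
	rel, err := filepath.Rel(mountPoint, absPath)
	if err != nil {
		return "", err
	}
	if rel == ".." || strings.HasPrefix(rel, "../") {
		return "", fmt.Errorf("%s is not under mount point %s", absPath, mountPoint)
	}
	return rel, nil
}

func main() {
	p, err := taskPath("/mnt/fsx", "/mnt/fsx/path1")
	fmt.Println(p, err) // prints "path1 <nil>"
}
```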
For MULTI_AZ_1
deployment types, the IP address of the primary, or preferred, file server.
Use this IP address when mounting the file system on Linux SMB clients or Windows SMB clients that are not joined to a Microsoft Active Directory. Applicable for both SINGLE_AZ_1
and MULTI_AZ_1
deployment types. This IP address is temporarily unavailable when the file system is undergoing maintenance. For Linux and Windows SMB clients that are joined to an Active Directory, use the file system's DNSName instead. For more information and instruction on mapping and mounting file shares, see https://docs.aws.amazon.com/fsx/latest/WindowsGuide/accessing-file-shares.html.
For MULTI_AZ_1
deployment types, the IP address of the primary, or preferred, file server.
Use this IP address when mounting the file system on Linux SMB clients or Windows SMB clients that are not joined to a Microsoft Active Directory. Applicable for all Windows file system deployment types. This IP address is temporarily unavailable when the file system is undergoing maintenance. For Linux and Windows SMB clients that are joined to an Active Directory, use the file system's DNSName instead. For more information on mapping and mounting file shares, see Accessing File Shares.
" } }, "KmsKeyId": { @@ -752,7 +752,7 @@ "PerUnitStorageThroughput": { "base": null, "refs": { - "CreateFileSystemLustreConfiguration$PerUnitStorageThroughput": " (Optional) For the PERSISTENT_1
deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 120 MB/s of file system throughput. You pay for the amount of throughput that you provision. (Default = 200 MB/s/TiB)
Valid values are 50, 100, 200.
", + "CreateFileSystemLustreConfiguration$PerUnitStorageThroughput": " Required for the PERSISTENT_1
deployment type, describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying file system storage capacity (TiB) by the PerUnitStorageThroughput (MB/s/TiB). For a 2.4 TiB file system, provisioning 50 MB/s/TiB of PerUnitStorageThroughput yields 117 MB/s of file system throughput. You pay for the amount of throughput that you provision.
Valid values are 50, 100, 200.
", "LustreFileSystemConfiguration$PerUnitStorageThroughput": " Per unit storage throughput represents the megabytes per second of read or write throughput per 1 tebibyte of storage provisioned. File system throughput capacity is equal to Storage capacity (TiB) * PerUnitStorageThroughput (MB/s/TiB). This option is only valid for PERSISTENT_1
deployment types. Valid values are 50, 100, 200.
A list of security group IDs.
", "refs": { - "CreateFileSystemFromBackupRequest$SecurityGroupIds": "A list of IDs for the security groups that apply to the specified network interfaces created for file system access. These security groups apply to all network interfaces. This value isn't returned in later describe requests.
", + "CreateFileSystemFromBackupRequest$SecurityGroupIds": "A list of IDs for the security groups that apply to the specified network interfaces created for file system access. These security groups apply to all network interfaces. This value isn't returned in later DescribeFileSystem requests.
", "CreateFileSystemRequest$SecurityGroupIds": "A list of IDs specifying the security groups to apply to all network interfaces created for file system access. This list isn't returned in later requests to describe the file system.
" } }, @@ -850,25 +850,33 @@ "StorageCapacity": { "base": "The storage capacity for your Amazon FSx file system, in gibibytes.
", "refs": { - "CreateFileSystemRequest$StorageCapacity": "The storage capacity of the file system being created.
For Windows file systems, valid values are 32 GiB - 65,536 GiB.
For SCRATCH_1
Lustre file systems, valid values are 1,200, 2,400, 3,600, then continuing in increments of 3600 GiB. For SCRATCH_2
and PERSISTENT_1
file systems, valid values are 1200, 2400, then continuing in increments of 2400 GiB.
Sets the storage capacity of the file system that you're creating.
For Lustre file systems:
For SCRATCH_2
and PERSISTENT_1
deployment types, valid values are 1.2, 2.4, and increments of 2.4 TiB.
For SCRATCH_1
deployment type, valid values are 1.2, 2.4, and increments of 3.6 TiB.
For Windows file systems:
If StorageType=SSD
, valid values are 32 GiB - 65,536 GiB (64 TiB).
If StorageType=HDD
, valid values are 2000 GiB - 65,536 GiB (64 TiB).
The storage capacity of the file system in gigabytes (GB).
" } }, + "StorageType": { + "base": "The storage type for your Amazon FSx file system.
", + "refs": { + "CreateFileSystemFromBackupRequest$StorageType": "Sets the storage type for the Windows file system you're creating from a backup. Valid values are SSD
and HDD
.
Set to SSD
to use solid state drive storage. Supported on all Windows deployment types.
Set to HDD
to use hard disk drive storage. Supported on SINGLE_AZ_2
and MULTI_AZ_1
Windows file system deployment types.
Default value is SSD
.
HDD and SSD storage types have different minimum storage capacity requirements. A restored file system's storage capacity is tied to the file system that was backed up. You can create a file system that uses HDD storage from a backup of a file system that used SSD storage only if the original SSD file system had a storage capacity of at least 2000 GiB.
Sets the storage type for the Amazon FSx for Windows file system you're creating. Valid values are SSD
and HDD
.
Set to SSD
to use solid state drive storage. SSD is supported on all Windows deployment types.
Set to HDD
to use hard disk drive storage. HDD is supported on SINGLE_AZ_2
and MULTI_AZ_1
Windows file system deployment types.
Default value is SSD
. For more information, see Storage Type Options in the Amazon FSx for Windows User Guide.
The storage type of the file system. Valid values are SSD
and HDD
. If set to SSD
, the file system uses solid state drive storage. If set to HDD
, the file system uses hard disk drive storage.
The ID for a subnet. A subnet is a range of IP addresses in your virtual private cloud (VPC). For more information, see VPC and Subnets in the Amazon VPC User Guide.
", "refs": { "CreateFileSystemWindowsConfiguration$PreferredSubnetId": "Required when DeploymentType
is set to MULTI_AZ_1
. This specifies the subnet in which you want the preferred file server to be located. For in-AWS applications, we recommend that you launch your clients in the same Availability Zone (AZ) as your preferred file server to reduce cross-AZ data transfer costs and minimize latency.
For MULTI_AZ_1
deployment types, it specifies the ID of the subnet where the preferred file server is located. Must be one of the two subnet IDs specified in SubnetIds
property. Amazon FSx serves traffic from this subnet except in the event of a failover to the secondary file server.
For SINGLE_AZ_1
deployment types, this value is the same as that for SubnetIDs
.
For MULTI_AZ_1
deployment types, it specifies the ID of the subnet where the preferred file server is located. Must be one of the two subnet IDs specified in SubnetIds
property. Amazon FSx serves traffic from this subnet except in the event of a failover to the secondary file server.
For SINGLE_AZ_1
and SINGLE_AZ_2
deployment types, this value is the same as that for SubnetIDs
. For more information, see Availability and Durability: Single-AZ and Multi-AZ File Systems
A list of subnet IDs. Currently, you can specify only one subnet ID in a call to the CreateFileSystem
operation.
A list of IDs for the subnets that the file system will be accessible from. Currently, you can specify only one subnet. The file server is also launched in that subnet's Availability Zone.
", - "CreateFileSystemRequest$SubnetIds": "Specifies the IDs of the subnets that the file system will be accessible from. For Windows MULTI_AZ_1
file system deployment types, provide exactly two subnet IDs, one for the preferred file server and one for the standby file server. You specify one of these subnets as the preferred subnet using the WindowsConfiguration > PreferredSubnetID
property.
For Windows SINGLE_AZ_1
file system deployment types and Lustre file systems, provide exactly one subnet ID. The file server is launched in that subnet's Availability Zone.
The ID of the subnet to contain the endpoint for the file system. One and only one is supported. The file system is launched in the Availability Zone associated with this subnet.
" + "CreateFileSystemFromBackupRequest$SubnetIds": "Specifies the IDs of the subnets that the file system will be accessible from. For Windows MULTI_AZ_1
file system deployment types, provide exactly two subnet IDs, one for the preferred file server and one for the standby file server. You specify one of these subnets as the preferred subnet using the WindowsConfiguration > PreferredSubnetID
property.
For Windows SINGLE_AZ_1
and SINGLE_AZ_2
deployment types and Lustre file systems, provide exactly one subnet ID. The file server is launched in that subnet's Availability Zone.
Specifies the IDs of the subnets that the file system will be accessible from. For Windows MULTI_AZ_1
file system deployment types, provide exactly two subnet IDs, one for the preferred file server and one for the standby file server. You specify one of these subnets as the preferred subnet using the WindowsConfiguration > PreferredSubnetID
property.
For Windows SINGLE_AZ_1
and SINGLE_AZ_2
file system deployment types and Lustre file systems, provide exactly one subnet ID. The file server is launched in that subnet's Availability Zone.
Specifies the IDs of the subnets that the file system is accessible from. For Windows MULTI_AZ_1
file system deployment type, there are two subnet IDs, one for the preferred file server and one for the standby file server. The preferred file server subnet identified in the PreferredSubnetID
property. All other file systems have only one subnet ID.
For Lustre file systems, and Single-AZ Windows file systems, this is the ID of the subnet that contains the endpoint for the file system. For MULTI_AZ_1
Windows file systems, the endpoint for the file system is available in the PreferredSubnetID
.
Specifies the file system deployment type, valid values are the following:
MULTI_AZ_1 - Deploys a high availability file system that is configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability. You can only deploy a Multi-AZ file system in AWS Regions that have a minimum of three Availability Zones.
SINGLE_AZ_1 - (Default) Choose to deploy a file system that is configured for single AZ redundancy.
To learn more about high availability Multi-AZ file systems, see High Availability for Amazon FSx for Windows File Server.
", - "WindowsFileSystemConfiguration$DeploymentType": "Specifies the file system deployment type, valid values are the following:
MULTI_AZ_1
- Specifies a high availability file system that is configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability.
SINGLE_AZ_1
- (Default) Specifies a file system that is configured for single AZ redundancy.
Specifies the file system deployment type, valid values are the following:
MULTI_AZ_1
- Deploys a high availability file system that is configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability. You can only deploy a Multi-AZ file system in AWS Regions that have a minimum of three Availability Zones. Also supports HDD storage type
SINGLE_AZ_1
- (Default) Choose to deploy a file system that is configured for single AZ redundancy.
SINGLE_AZ_2
- The latest generation Single AZ file system. Specifies a file system that is configured for single AZ redundancy and supports HDD storage type.
For more information, see Availability and Durability: Single-AZ and Multi-AZ File Systems.
", + "WindowsFileSystemConfiguration$DeploymentType": "Specifies the file system deployment type, valid values are the following:
MULTI_AZ_1
- Specifies a high availability file system that is configured for Multi-AZ redundancy to tolerate temporary Availability Zone (AZ) unavailability, and supports SSD and HDD storage.
SINGLE_AZ_1
- (Default) Specifies a file system that is configured for single AZ redundancy, only supports SSD storage.
SINGLE_AZ_2
- Latest generation Single AZ file system. Specifies a file system that is configured for single AZ redundancy and supports SSD and HDD storage.
For more information, see Single-AZ and Multi-AZ File Systems.
" } }, "WindowsFileSystemConfiguration": { diff --git a/models/apis/gamelift/2015-10-01/api-2.json b/models/apis/gamelift/2015-10-01/api-2.json index f4f73db5fa6..5bcea695450 100644 --- a/models/apis/gamelift/2015-10-01/api-2.json +++ b/models/apis/gamelift/2015-10-01/api-2.json @@ -27,6 +27,23 @@ {"shape":"UnsupportedRegionException"} ] }, + "ClaimGameServer":{ + "name":"ClaimGameServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ClaimGameServerInput"}, + "output":{"shape":"ClaimGameServerOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"ConflictException"}, + {"shape":"OutOfCapacityException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "CreateAlias":{ "name":"CreateAlias", "http":{ @@ -78,6 +95,22 @@ {"shape":"TaggingFailedException"} ] }, + "CreateGameServerGroup":{ + "name":"CreateGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateGameServerGroupInput"}, + "output":{"shape":"CreateGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"ConflictException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"}, + {"shape":"LimitExceededException"} + ] + }, "CreateGameSession":{ "name":"CreateGameSession", "http":{ @@ -275,6 +308,21 @@ {"shape":"TaggingFailedException"} ] }, + "DeleteGameServerGroup":{ + "name":"DeleteGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteGameServerGroupInput"}, + "output":{"shape":"DeleteGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "DeleteGameSessionQueue":{ "name":"DeleteGameSessionQueue", "http":{ @@ -382,6 +430,20 @@ {"shape":"InternalServiceException"} ] }, + "DeregisterGameServer":{ + "name":"DeregisterGameServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeregisterGameServerInput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "DescribeAlias":{ "name":"DescribeAlias", "http":{ @@ -501,6 +563,36 @@ {"shape":"UnauthorizedException"} ] }, + "DescribeGameServer":{ + "name":"DescribeGameServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeGameServerInput"}, + "output":{"shape":"DescribeGameServerOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, + "DescribeGameServerGroup":{ + "name":"DescribeGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeGameServerGroupInput"}, + "output":{"shape":"DescribeGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "DescribeGameSessionDetails":{ "name":"DescribeGameSessionDetails", "http":{ @@ -783,6 +875,34 @@ {"shape":"UnauthorizedException"} ] }, + "ListGameServerGroups":{ + "name":"ListGameServerGroups", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListGameServerGroupsInput"}, + "output":{"shape":"ListGameServerGroupsOutput"}, + "errors":[ + 
{"shape":"InvalidRequestException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, + "ListGameServers":{ + "name":"ListGameServers", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListGameServersInput"}, + "output":{"shape":"ListGameServersOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "ListScripts":{ "name":"ListScripts", "http":{ @@ -827,6 +947,22 @@ {"shape":"NotFoundException"} ] }, + "RegisterGameServer":{ + "name":"RegisterGameServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RegisterGameServerInput"}, + "output":{"shape":"RegisterGameServerOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"ConflictException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"}, + {"shape":"LimitExceededException"} + ] + }, "RequestUploadCredentials":{ "name":"RequestUploadCredentials", "http":{ @@ -858,6 +994,21 @@ {"shape":"InternalServiceException"} ] }, + "ResumeGameServerGroup":{ + "name":"ResumeGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ResumeGameServerGroupInput"}, + "output":{"shape":"ResumeGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "SearchGameSessions":{ "name":"SearchGameSessions", "http":{ @@ -979,6 +1130,21 @@ {"shape":"UnsupportedRegionException"} ] }, + "SuspendGameServerGroup":{ + "name":"SuspendGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SuspendGameServerGroupInput"}, + "output":{"shape":"SuspendGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "TagResource":{ "name":"TagResource", "http":{ @@ -1093,6 +1259,36 @@ {"shape":"UnauthorizedException"} ] }, + "UpdateGameServer":{ + "name":"UpdateGameServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateGameServerInput"}, + "output":{"shape":"UpdateGameServerOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, + "UpdateGameServerGroup":{ + "name":"UpdateGameServerGroup", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateGameServerGroupInput"}, + "output":{"shape":"UpdateGameServerGroupOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"UnauthorizedException"}, + {"shape":"InternalServiceException"} + ] + }, "UpdateGameSession":{ "name":"UpdateGameSession", "http":{ @@ -1217,14 +1413,22 @@ "members":{ "AliasId":{"shape":"AliasId"}, "Name":{"shape":"NonBlankAndLengthConstraintString"}, - "AliasArn":{"shape":"ArnStringModel"}, + "AliasArn":{"shape":"AliasArn"}, "Description":{"shape":"FreeText"}, "RoutingStrategy":{"shape":"RoutingStrategy"}, "CreationTime":{"shape":"Timestamp"}, "LastUpdatedTime":{"shape":"Timestamp"} } }, + "AliasArn":{ + "type":"string", + "pattern":"^arn:.*:alias\\/alias-\\S+" + }, "AliasId":{ + "type":"string", + "pattern":"^alias-\\S+" + }, + "AliasIdOrArn":{ "type":"string", "pattern":"^alias-\\S+|^arn:.*:alias\\/alias-\\S+" }, @@ -1252,6 +1456,12 @@ 
"SDM":{"shape":"StringDoubleMap"} } }, + "AutoScalingGroupArn":{ + "type":"string", + "max":256, + "min":0, + "pattern":"[\\u0020-\\uD7FF\\uE000-\\uFFFD\\uD800\\uDC00-\\uDBFF\\uDFFF\\r\\n\\t]*" + }, "AwsCredentials":{ "type":"structure", "members":{ @@ -1268,6 +1478,13 @@ "MANUAL" ] }, + "BalancingStrategy":{ + "type":"string", + "enum":[ + "SPOT_ONLY", + "SPOT_PREFERRED" + ] + }, "BooleanModel":{"type":"boolean"}, "Build":{ "type":"structure", @@ -1287,6 +1504,10 @@ "pattern":"^arn:.*:build\\/build-\\S+" }, "BuildId":{ + "type":"string", + "pattern":"^build-\\S+" + }, + "BuildIdOrArn":{ "type":"string", "pattern":"^build-\\S+|^arn:.*:build\\/build-\\S+" }, @@ -1316,6 +1537,21 @@ "GENERATED" ] }, + "ClaimGameServerInput":{ + "type":"structure", + "required":["GameServerGroupName"], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "GameServerId":{"shape":"GameServerId"}, + "GameServerData":{"shape":"GameServerData"} + } + }, + "ClaimGameServerOutput":{ + "type":"structure", + "members":{ + "GameServer":{"shape":"GameServer"} + } + }, "ComparisonOperatorType":{ "type":"string", "enum":[ @@ -1378,8 +1614,8 @@ "members":{ "Name":{"shape":"NonZeroAndMaxString"}, "Description":{"shape":"NonZeroAndMaxString"}, - "BuildId":{"shape":"BuildId"}, - "ScriptId":{"shape":"ScriptId"}, + "BuildId":{"shape":"BuildIdOrArn"}, + "ScriptId":{"shape":"ScriptIdOrArn"}, "ServerLaunchPath":{"shape":"NonZeroAndMaxString"}, "ServerLaunchParameters":{"shape":"NonZeroAndMaxString"}, "LogPaths":{"shape":"StringList"}, @@ -1403,12 +1639,42 @@ "FleetAttributes":{"shape":"FleetAttributes"} } }, + "CreateGameServerGroupInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "RoleArn", + "MinSize", + "MaxSize", + "LaunchTemplate", + "InstanceDefinitions" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupName"}, + "RoleArn":{"shape":"IamRoleArn"}, + "MinSize":{"shape":"WholeNumber"}, + "MaxSize":{"shape":"PositiveInteger"}, + "LaunchTemplate":{"shape":"LaunchTemplateSpecification"}, + "InstanceDefinitions":{"shape":"InstanceDefinitions"}, + "AutoScalingPolicy":{"shape":"GameServerGroupAutoScalingPolicy"}, + "BalancingStrategy":{"shape":"BalancingStrategy"}, + "GameServerProtectionPolicy":{"shape":"GameServerProtectionPolicy"}, + "VpcSubnets":{"shape":"VpcSubnets"}, + "Tags":{"shape":"TagList"} + } + }, + "CreateGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} + } + }, "CreateGameSessionInput":{ "type":"structure", "required":["MaximumPlayerSessionCount"], "members":{ - "FleetId":{"shape":"FleetId"}, - "AliasId":{"shape":"AliasId"}, + "FleetId":{"shape":"FleetIdOrArn"}, + "AliasId":{"shape":"AliasIdOrArn"}, "MaximumPlayerSessionCount":{"shape":"WholeNumber"}, "Name":{"shape":"NonZeroAndMaxString"}, "GameProperties":{"shape":"GamePropertyList"}, @@ -1588,28 +1854,42 @@ "type":"structure", "required":["AliasId"], "members":{ - "AliasId":{"shape":"AliasId"} + "AliasId":{"shape":"AliasIdOrArn"} } }, "DeleteBuildInput":{ "type":"structure", "required":["BuildId"], "members":{ - "BuildId":{"shape":"BuildId"} + "BuildId":{"shape":"BuildIdOrArn"} } }, "DeleteFleetInput":{ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"} + "FleetId":{"shape":"FleetIdOrArn"} + } + }, + "DeleteGameServerGroupInput":{ + "type":"structure", + "required":["GameServerGroupName"], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + 
"DeleteOption":{"shape":"GameServerGroupDeleteOption"} + } + }, + "DeleteGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} } }, "DeleteGameSessionQueueInput":{ "type":"structure", "required":["Name"], "members":{ - "Name":{"shape":"GameSessionQueueName"} + "Name":{"shape":"GameSessionQueueNameOrArn"} } }, "DeleteGameSessionQueueOutput":{ @@ -1649,14 +1929,14 @@ ], "members":{ "Name":{"shape":"NonZeroAndMaxString"}, - "FleetId":{"shape":"FleetId"} + "FleetId":{"shape":"FleetIdOrArn"} } }, "DeleteScriptInput":{ "type":"structure", "required":["ScriptId"], "members":{ - "ScriptId":{"shape":"ScriptId"} + "ScriptId":{"shape":"ScriptIdOrArn"} } }, "DeleteVpcPeeringAuthorizationInput":{ @@ -1691,11 +1971,22 @@ "members":{ } }, + "DeregisterGameServerInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "GameServerId" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "GameServerId":{"shape":"GameServerId"} + } + }, "DescribeAliasInput":{ "type":"structure", "required":["AliasId"], "members":{ - "AliasId":{"shape":"AliasId"} + "AliasId":{"shape":"AliasIdOrArn"} } }, "DescribeAliasOutput":{ @@ -1708,7 +1999,7 @@ "type":"structure", "required":["BuildId"], "members":{ - "BuildId":{"shape":"BuildId"} + "BuildId":{"shape":"BuildIdOrArn"} } }, "DescribeBuildOutput":{ @@ -1732,7 +2023,7 @@ "DescribeFleetAttributesInput":{ "type":"structure", "members":{ - "FleetIds":{"shape":"FleetIdList"}, + "FleetIds":{"shape":"FleetIdOrArnList"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -1747,7 +2038,7 @@ "DescribeFleetCapacityInput":{ "type":"structure", "members":{ - "FleetIds":{"shape":"FleetIdList"}, + "FleetIds":{"shape":"FleetIdOrArnList"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -1763,7 +2054,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "StartTime":{"shape":"Timestamp"}, "EndTime":{"shape":"Timestamp"}, "Limit":{"shape":"PositiveInteger"}, @@ -1781,7 +2072,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"} + "FleetId":{"shape":"FleetIdOrArn"} } }, "DescribeFleetPortSettingsOutput":{ @@ -1793,7 +2084,7 @@ "DescribeFleetUtilizationInput":{ "type":"structure", "members":{ - "FleetIds":{"shape":"FleetIdList"}, + "FleetIds":{"shape":"FleetIdOrArnList"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -1805,12 +2096,42 @@ "NextToken":{"shape":"NonZeroAndMaxString"} } }, + "DescribeGameServerGroupInput":{ + "type":"structure", + "required":["GameServerGroupName"], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"} + } + }, + "DescribeGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} + } + }, + "DescribeGameServerInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "GameServerId" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "GameServerId":{"shape":"GameServerId"} + } + }, + "DescribeGameServerOutput":{ + "type":"structure", + "members":{ + "GameServer":{"shape":"GameServer"} + } + }, "DescribeGameSessionDetailsInput":{ "type":"structure", "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "GameSessionId":{"shape":"ArnStringModel"}, - "AliasId":{"shape":"AliasId"}, + 
"AliasId":{"shape":"AliasIdOrArn"}, "StatusFilter":{"shape":"NonZeroAndMaxString"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} @@ -1839,7 +2160,7 @@ "DescribeGameSessionQueuesInput":{ "type":"structure", "members":{ - "Names":{"shape":"GameSessionQueueNameList"}, + "Names":{"shape":"GameSessionQueueNameOrArnList"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -1854,9 +2175,9 @@ "DescribeGameSessionsInput":{ "type":"structure", "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "GameSessionId":{"shape":"ArnStringModel"}, - "AliasId":{"shape":"AliasId"}, + "AliasId":{"shape":"AliasIdOrArn"}, "StatusFilter":{"shape":"NonZeroAndMaxString"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} @@ -1873,7 +2194,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "InstanceId":{"shape":"InstanceId"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} @@ -1953,7 +2274,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"} + "FleetId":{"shape":"FleetIdOrArn"} } }, "DescribeRuntimeConfigurationOutput":{ @@ -1966,7 +2287,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "StatusFilter":{"shape":"ScalingStatusType"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} @@ -1983,7 +2304,7 @@ "type":"structure", "required":["ScriptId"], "members":{ - "ScriptId":{"shape":"ScriptId"} + "ScriptId":{"shape":"ScriptIdOrArn"} } }, "DescribeScriptOutput":{ @@ -2179,11 +2500,15 @@ "max":1, "min":1 }, + "FleetArn":{ + "type":"string", + "pattern":"^arn:.*:fleet\\/fleet-\\S+" + }, "FleetAttributes":{ "type":"structure", "members":{ "FleetId":{"shape":"FleetId"}, - "FleetArn":{"shape":"ArnStringModel"}, + "FleetArn":{"shape":"FleetArn"}, "FleetType":{"shape":"FleetType"}, "InstanceType":{"shape":"EC2InstanceType"}, "Description":{"shape":"NonZeroAndMaxString"}, @@ -2232,13 +2557,22 @@ }, "FleetId":{ "type":"string", - "pattern":"^fleet-\\S+|^arn:.*:fleet\\/fleet-\\S+" + "pattern":"^fleet-\\S+" }, "FleetIdList":{ "type":"list", "member":{"shape":"FleetId"}, "min":1 }, + "FleetIdOrArn":{ + "type":"string", + "pattern":"^fleet-\\S+|^arn:.*:fleet\\/fleet-\\S+" + }, + "FleetIdOrArnList":{ + "type":"list", + "member":{"shape":"FleetIdOrArn"}, + "min":1 + }, "FleetStatus":{ "type":"string", "enum":[ @@ -2300,13 +2634,208 @@ "type":"string", "max":96 }, + "GameServer":{ + "type":"structure", + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupName"}, + "GameServerGroupArn":{"shape":"GameServerGroupArn"}, + "GameServerId":{"shape":"GameServerId"}, + "InstanceId":{"shape":"GameServerInstanceId"}, + "ConnectionInfo":{"shape":"GameServerConnectionInfo"}, + "GameServerData":{"shape":"GameServerData"}, + "CustomSortKey":{"shape":"GameServerSortKey"}, + "ClaimStatus":{"shape":"GameServerClaimStatus"}, + "UtilizationStatus":{"shape":"GameServerUtilizationStatus"}, + "RegistrationTime":{"shape":"Timestamp"}, + "LastClaimTime":{"shape":"Timestamp"}, + "LastHealthCheckTime":{"shape":"Timestamp"} + } + }, + "GameServerClaimStatus":{ + "type":"string", + "enum":["CLAIMED"] + }, + "GameServerConnectionInfo":{ + "type":"string", + "max":512, + "min":1, + "pattern":".*\\S.*" + }, + "GameServerData":{ + "type":"string", + "max":1024, + 
"min":1, + "pattern":".*\\S.*" + }, + "GameServerGroup":{ + "type":"structure", + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupName"}, + "GameServerGroupArn":{"shape":"GameServerGroupArn"}, + "RoleArn":{"shape":"IamRoleArn"}, + "InstanceDefinitions":{"shape":"InstanceDefinitions"}, + "BalancingStrategy":{"shape":"BalancingStrategy"}, + "GameServerProtectionPolicy":{"shape":"GameServerProtectionPolicy"}, + "AutoScalingGroupArn":{"shape":"AutoScalingGroupArn"}, + "Status":{"shape":"GameServerGroupStatus"}, + "StatusReason":{"shape":"NonZeroAndMaxString"}, + "SuspendedActions":{"shape":"GameServerGroupActions"}, + "CreationTime":{"shape":"Timestamp"}, + "LastUpdatedTime":{"shape":"Timestamp"} + } + }, + "GameServerGroupAction":{ + "type":"string", + "enum":["REPLACE_INSTANCE_TYPES"] + }, + "GameServerGroupActions":{ + "type":"list", + "member":{"shape":"GameServerGroupAction"}, + "max":1, + "min":1 + }, + "GameServerGroupArn":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^arn:.*:gameservergroup\\/[a-zA-Z0-9-\\.]*" + }, + "GameServerGroupAutoScalingPolicy":{ + "type":"structure", + "required":["TargetTrackingConfiguration"], + "members":{ + "EstimatedInstanceWarmup":{"shape":"PositiveInteger"}, + "TargetTrackingConfiguration":{"shape":"TargetTrackingConfiguration"} + } + }, + "GameServerGroupDeleteOption":{ + "type":"string", + "enum":[ + "SAFE_DELETE", + "FORCE_DELETE", + "RETAIN" + ] + }, + "GameServerGroupInstanceType":{ + "type":"string", + "enum":[ + "c4.large", + "c4.xlarge", + "c4.2xlarge", + "c4.4xlarge", + "c4.8xlarge", + "c5.large", + "c5.xlarge", + "c5.2xlarge", + "c5.4xlarge", + "c5.9xlarge", + "c5.12xlarge", + "c5.18xlarge", + "c5.24xlarge", + "r4.large", + "r4.xlarge", + "r4.2xlarge", + "r4.4xlarge", + "r4.8xlarge", + "r4.16xlarge", + "r5.large", + "r5.xlarge", + "r5.2xlarge", + "r5.4xlarge", + "r5.8xlarge", + "r5.12xlarge", + "r5.16xlarge", + "r5.24xlarge", + "m4.large", + "m4.xlarge", + "m4.2xlarge", + "m4.4xlarge", + "m4.10xlarge", + "m5.large", + "m5.xlarge", + "m5.2xlarge", + "m5.4xlarge", + "m5.8xlarge", + "m5.12xlarge", + "m5.16xlarge", + "m5.24xlarge" + ] + }, + "GameServerGroupName":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]+" + }, + "GameServerGroupNameOrArn":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]+|^arn:.*:gameservergroup\\/[a-zA-Z0-9-\\.]+" + }, + "GameServerGroupStatus":{ + "type":"string", + "enum":[ + "NEW", + "ACTIVATING", + "ACTIVE", + "DELETE_SCHEDULED", + "DELETING", + "DELETED", + "ERROR" + ] + }, + "GameServerGroups":{ + "type":"list", + "member":{"shape":"GameServerGroup"} + }, + "GameServerHealthCheck":{ + "type":"string", + "enum":["HEALTHY"] + }, + "GameServerId":{ + "type":"string", + "max":128, + "min":3, + "pattern":"[a-zA-Z0-9-\\.]+" + }, + "GameServerInstanceId":{ + "type":"string", + "max":19, + "min":19, + "pattern":"^i-[0-9a-zA-Z]{17}$" + }, + "GameServerProtectionPolicy":{ + "type":"string", + "enum":[ + "NO_PROTECTION", + "FULL_PROTECTION" + ] + }, + "GameServerSortKey":{ + "type":"string", + "max":64, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]+" + }, + "GameServerUtilizationStatus":{ + "type":"string", + "enum":[ + "AVAILABLE", + "UTILIZED" + ] + }, + "GameServers":{ + "type":"list", + "member":{"shape":"GameServer"} + }, "GameSession":{ "type":"structure", "members":{ "GameSessionId":{"shape":"NonZeroAndMaxString"}, "Name":{"shape":"NonZeroAndMaxString"}, "FleetId":{"shape":"FleetId"}, - "FleetArn":{"shape":"ArnStringModel"}, + 
"FleetArn":{"shape":"FleetArn"}, "CreationTime":{"shape":"Timestamp"}, "TerminationTime":{"shape":"Timestamp"}, "CurrentPlayerSessionCount":{"shape":"WholeNumber"}, @@ -2402,12 +2931,18 @@ "type":"structure", "members":{ "Name":{"shape":"GameSessionQueueName"}, - "GameSessionQueueArn":{"shape":"ArnStringModel"}, + "GameSessionQueueArn":{"shape":"GameSessionQueueArn"}, "TimeoutInSeconds":{"shape":"WholeNumber"}, "PlayerLatencyPolicies":{"shape":"PlayerLatencyPolicyList"}, "Destinations":{"shape":"GameSessionQueueDestinationList"} } }, + "GameSessionQueueArn":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^arn:.*:gamesessionqueue\\/[a-zA-Z0-9-]+" + }, "GameSessionQueueDestination":{ "type":"structure", "members":{ @@ -2423,14 +2958,20 @@ "member":{"shape":"GameSessionQueue"} }, "GameSessionQueueName":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9-]+" + }, + "GameSessionQueueNameOrArn":{ "type":"string", "max":256, "min":1, "pattern":"[a-zA-Z0-9-]+|^arn:.*:gamesessionqueue\\/[a-zA-Z0-9-]+" }, - "GameSessionQueueNameList":{ + "GameSessionQueueNameOrArnList":{ "type":"list", - "member":{"shape":"GameSessionQueueName"} + "member":{"shape":"GameSessionQueueNameOrArn"} }, "GameSessionStatus":{ "type":"string", @@ -2466,7 +3007,7 @@ "InstanceId" ], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "InstanceId":{"shape":"InstanceId"} } }, @@ -2476,6 +3017,12 @@ "InstanceAccess":{"shape":"InstanceAccess"} } }, + "IamRoleArn":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^arn:.*:role\\/[\\w+=,.@-]+" + }, "IdStringModel":{ "type":"string", "max":48, @@ -2520,6 +3067,20 @@ }, "sensitive":true }, + "InstanceDefinition":{ + "type":"structure", + "required":["InstanceType"], + "members":{ + "InstanceType":{"shape":"GameServerGroupInstanceType"}, + "WeightedCapacity":{"shape":"WeightedCapacity"} + } + }, + "InstanceDefinitions":{ + "type":"list", + "member":{"shape":"InstanceDefinition"}, + "max":20, + "min":2 + }, "InstanceId":{ "type":"string", "pattern":"[a-zA-Z0-9\\.-]+" @@ -2599,6 +3160,32 @@ "key":{"shape":"NonEmptyString"}, "value":{"shape":"PositiveInteger"} }, + "LaunchTemplateId":{ + "type":"string", + "max":255, + "min":1, + "pattern":"[\\u0020-\\uD7FF\\uE000-\\uFFFD\\uD800\\uDC00-\\uDBFF\\uDFFF\\r\\n\\t]+" + }, + "LaunchTemplateName":{ + "type":"string", + "max":128, + "min":3, + "pattern":"[a-zA-Z0-9\\(\\)\\.\\-/_]+" + }, + "LaunchTemplateSpecification":{ + "type":"structure", + "members":{ + "LaunchTemplateId":{"shape":"LaunchTemplateId"}, + "LaunchTemplateName":{"shape":"LaunchTemplateName"}, + "Version":{"shape":"LaunchTemplateVersion"} + } + }, + "LaunchTemplateVersion":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[\\u0020-\\uD7FF\\uE000-\\uFFFD\\uD800\\uDC00-\\uDBFF\\uDFFF\\r\\n\\t]+" + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -2640,8 +3227,8 @@ "ListFleetsInput":{ "type":"structure", "members":{ - "BuildId":{"shape":"BuildId"}, - "ScriptId":{"shape":"ScriptId"}, + "BuildId":{"shape":"BuildIdOrArn"}, + "ScriptId":{"shape":"ScriptIdOrArn"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -2653,6 +3240,37 @@ "NextToken":{"shape":"NonZeroAndMaxString"} } }, + "ListGameServerGroupsInput":{ + "type":"structure", + "members":{ + "Limit":{"shape":"PositiveInteger"}, + "NextToken":{"shape":"NonZeroAndMaxString"} + } + }, + "ListGameServerGroupsOutput":{ + "type":"structure", + "members":{ + "GameServerGroups":{"shape":"GameServerGroups"}, + 
"NextToken":{"shape":"NonZeroAndMaxString"} + } + }, + "ListGameServersInput":{ + "type":"structure", + "required":["GameServerGroupName"], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "SortOrder":{"shape":"SortOrder"}, + "Limit":{"shape":"PositiveInteger"}, + "NextToken":{"shape":"NonZeroAndMaxString"} + } + }, + "ListGameServersOutput":{ + "type":"structure", + "members":{ + "GameServers":{"shape":"GameServers"}, + "NextToken":{"shape":"NonZeroAndMaxString"} + } + }, "ListScriptsInput":{ "type":"structure", "members":{ @@ -2862,6 +3480,10 @@ "type":"string", "min":1 }, + "NonNegativeDouble":{ + "type":"double", + "min":0 + }, "NonZeroAndMaxString":{ "type":"string", "max":1024, @@ -2882,6 +3504,13 @@ "AMAZON_LINUX_2" ] }, + "OutOfCapacityException":{ + "type":"structure", + "members":{ + "Message":{"shape":"NonEmptyString"} + }, + "exception":true + }, "PlacedPlayerSession":{ "type":"structure", "members":{ @@ -2957,7 +3586,7 @@ "PlayerId":{"shape":"NonZeroAndMaxString"}, "GameSessionId":{"shape":"NonZeroAndMaxString"}, "FleetId":{"shape":"FleetId"}, - "FleetArn":{"shape":"ArnStringModel"}, + "FleetArn":{"shape":"FleetArn"}, "CreationTime":{"shape":"Timestamp"}, "TerminationTime":{"shape":"Timestamp"}, "Status":{"shape":"PlayerSessionStatus"}, @@ -3027,7 +3656,7 @@ ], "members":{ "Name":{"shape":"NonZeroAndMaxString"}, - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "ScalingAdjustment":{"shape":"Integer"}, "ScalingAdjustmentType":{"shape":"ScalingAdjustmentType"}, "Threshold":{"shape":"Double"}, @@ -3048,11 +3677,34 @@ "type":"list", "member":{"shape":"ArnStringModel"} }, + "RegisterGameServerInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "GameServerId", + "InstanceId" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "GameServerId":{"shape":"GameServerId"}, + "InstanceId":{"shape":"GameServerInstanceId"}, + "ConnectionInfo":{"shape":"GameServerConnectionInfo"}, + "GameServerData":{"shape":"GameServerData"}, + "CustomSortKey":{"shape":"GameServerSortKey"}, + "Tags":{"shape":"TagList"} + } + }, + "RegisterGameServerOutput":{ + "type":"structure", + "members":{ + "GameServer":{"shape":"GameServer"} + } + }, "RequestUploadCredentialsInput":{ "type":"structure", "required":["BuildId"], "members":{ - "BuildId":{"shape":"BuildId"} + "BuildId":{"shape":"BuildIdOrArn"} } }, "RequestUploadCredentialsOutput":{ @@ -3066,14 +3718,14 @@ "type":"structure", "required":["AliasId"], "members":{ - "AliasId":{"shape":"AliasId"} + "AliasId":{"shape":"AliasIdOrArn"} } }, "ResolveAliasOutput":{ "type":"structure", "members":{ "FleetId":{"shape":"FleetId"}, - "FleetArn":{"shape":"ArnStringModel"} + "FleetArn":{"shape":"FleetArn"} } }, "ResourceCreationLimitPolicy":{ @@ -3083,6 +3735,23 @@ "PolicyPeriodInMinutes":{"shape":"WholeNumber"} } }, + "ResumeGameServerGroupInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "ResumeActions" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "ResumeActions":{"shape":"GameServerGroupActions"} + } + }, + "ResumeGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} + } + }, "RoutingStrategy":{ "type":"structure", "members":{ @@ -3182,6 +3851,10 @@ "pattern":"^arn:.*:script\\/script-\\S+" }, "ScriptId":{ + "type":"string", + "pattern":"^script-\\S+" + }, + "ScriptIdOrArn":{ "type":"string", "pattern":"^script-\\S+|^arn:.*:script\\/script-\\S+" }, @@ 
-3192,8 +3865,8 @@ "SearchGameSessionsInput":{ "type":"structure", "members":{ - "FleetId":{"shape":"FleetId"}, - "AliasId":{"shape":"AliasId"}, + "FleetId":{"shape":"FleetIdOrArn"}, + "AliasId":{"shape":"AliasIdOrArn"}, "FilterExpression":{"shape":"NonZeroAndMaxString"}, "SortExpression":{"shape":"NonZeroAndMaxString"}, "Limit":{"shape":"PositiveInteger"}, @@ -3231,6 +3904,13 @@ "min":0, "pattern":"[a-zA-Z0-9:_/-]*" }, + "SortOrder":{ + "type":"string", + "enum":[ + "ASCENDING", + "DESCENDING" + ] + }, "StartFleetActionsInput":{ "type":"structure", "required":[ @@ -3238,7 +3918,7 @@ "Actions" ], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "Actions":{"shape":"FleetActionList"} } }, @@ -3256,7 +3936,7 @@ ], "members":{ "PlacementId":{"shape":"IdStringModel"}, - "GameSessionQueueName":{"shape":"GameSessionQueueName"}, + "GameSessionQueueName":{"shape":"GameSessionQueueNameOrArn"}, "GameProperties":{"shape":"GamePropertyList"}, "MaximumPlayerSessionCount":{"shape":"WholeNumber"}, "GameSessionName":{"shape":"NonZeroAndMaxString"}, @@ -3316,7 +3996,7 @@ "Actions" ], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "Actions":{"shape":"FleetActionList"} } }, @@ -3360,6 +4040,23 @@ "member":{"shape":"NonZeroAndMaxString"} }, "StringModel":{"type":"string"}, + "SuspendGameServerGroupInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "SuspendActions" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "SuspendActions":{"shape":"GameServerGroupActions"} + } + }, + "SuspendGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} + } + }, "Tag":{ "type":"structure", "required":[ @@ -3423,6 +4120,13 @@ "TargetValue":{"shape":"Double"} } }, + "TargetTrackingConfiguration":{ + "type":"structure", + "required":["TargetValue"], + "members":{ + "TargetValue":{"shape":"NonNegativeDouble"} + } + }, "TerminalRoutingStrategyException":{ "type":"structure", "members":{ @@ -3465,7 +4169,7 @@ "type":"structure", "required":["AliasId"], "members":{ - "AliasId":{"shape":"AliasId"}, + "AliasId":{"shape":"AliasIdOrArn"}, "Name":{"shape":"NonBlankAndLengthConstraintString"}, "Description":{"shape":"NonZeroAndMaxString"}, "RoutingStrategy":{"shape":"RoutingStrategy"} @@ -3481,7 +4185,7 @@ "type":"structure", "required":["BuildId"], "members":{ - "BuildId":{"shape":"BuildId"}, + "BuildId":{"shape":"BuildIdOrArn"}, "Name":{"shape":"NonZeroAndMaxString"}, "Version":{"shape":"NonZeroAndMaxString"} } @@ -3496,7 +4200,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "Name":{"shape":"NonZeroAndMaxString"}, "Description":{"shape":"NonZeroAndMaxString"}, "NewGameSessionProtectionPolicy":{"shape":"ProtectionPolicy"}, @@ -3514,7 +4218,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "DesiredInstances":{"shape":"WholeNumber"}, "MinSize":{"shape":"WholeNumber"}, "MaxSize":{"shape":"WholeNumber"} @@ -3530,7 +4234,7 @@ "type":"structure", "required":["FleetId"], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "InboundPermissionAuthorizations":{"shape":"IpPermissionsList"}, "InboundPermissionRevocations":{"shape":"IpPermissionsList"} } @@ -3541,6 +4245,44 @@ "FleetId":{"shape":"FleetId"} } }, + "UpdateGameServerGroupInput":{ + "type":"structure", + 
"required":["GameServerGroupName"], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "RoleArn":{"shape":"IamRoleArn"}, + "InstanceDefinitions":{"shape":"InstanceDefinitions"}, + "GameServerProtectionPolicy":{"shape":"GameServerProtectionPolicy"}, + "BalancingStrategy":{"shape":"BalancingStrategy"} + } + }, + "UpdateGameServerGroupOutput":{ + "type":"structure", + "members":{ + "GameServerGroup":{"shape":"GameServerGroup"} + } + }, + "UpdateGameServerInput":{ + "type":"structure", + "required":[ + "GameServerGroupName", + "GameServerId" + ], + "members":{ + "GameServerGroupName":{"shape":"GameServerGroupNameOrArn"}, + "GameServerId":{"shape":"GameServerId"}, + "GameServerData":{"shape":"GameServerData"}, + "CustomSortKey":{"shape":"GameServerSortKey"}, + "UtilizationStatus":{"shape":"GameServerUtilizationStatus"}, + "HealthCheck":{"shape":"GameServerHealthCheck"} + } + }, + "UpdateGameServerOutput":{ + "type":"structure", + "members":{ + "GameServer":{"shape":"GameServer"} + } + }, "UpdateGameSessionInput":{ "type":"structure", "required":["GameSessionId"], @@ -3562,7 +4304,7 @@ "type":"structure", "required":["Name"], "members":{ - "Name":{"shape":"GameSessionQueueName"}, + "Name":{"shape":"GameSessionQueueNameOrArn"}, "TimeoutInSeconds":{"shape":"WholeNumber"}, "PlayerLatencyPolicies":{"shape":"PlayerLatencyPolicyList"}, "Destinations":{"shape":"GameSessionQueueDestinationList"} @@ -3606,7 +4348,7 @@ "RuntimeConfiguration" ], "members":{ - "FleetId":{"shape":"FleetId"}, + "FleetId":{"shape":"FleetIdOrArn"}, "RuntimeConfiguration":{"shape":"RuntimeConfiguration"} } }, @@ -3620,7 +4362,7 @@ "type":"structure", "required":["ScriptId"], "members":{ - "ScriptId":{"shape":"ScriptId"}, + "ScriptId":{"shape":"ScriptIdOrArn"}, "Name":{"shape":"NonZeroAndMaxString"}, "Version":{"shape":"NonZeroAndMaxString"}, "StorageLocation":{"shape":"S3Location"}, @@ -3664,7 +4406,7 @@ "type":"structure", "members":{ "FleetId":{"shape":"FleetId"}, - "FleetArn":{"shape":"ArnStringModel"}, + "FleetArn":{"shape":"FleetArn"}, "IpV4CidrBlock":{"shape":"NonZeroAndMaxString"}, "VpcPeeringConnectionId":{"shape":"NonZeroAndMaxString"}, "Status":{"shape":"VpcPeeringConnectionStatus"}, @@ -3683,6 +4425,24 @@ "Message":{"shape":"NonZeroAndMaxString"} } }, + "VpcSubnet":{ + "type":"string", + "max":15, + "min":15, + "pattern":"^subnet-[0-9a-z]{8}$" + }, + "VpcSubnets":{ + "type":"list", + "member":{"shape":"VpcSubnet"}, + "max":20, + "min":1 + }, + "WeightedCapacity":{ + "type":"string", + "max":3, + "min":1, + "pattern":"^[\\u0031-\\u0039][\\u0030-\\u0039]{0,2}$" + }, "WholeNumber":{ "type":"integer", "min":0 diff --git a/models/apis/gamelift/2015-10-01/docs-2.json b/models/apis/gamelift/2015-10-01/docs-2.json index 8ee9551ddd4..fad16b89945 100644 --- a/models/apis/gamelift/2015-10-01/docs-2.json +++ b/models/apis/gamelift/2015-10-01/docs-2.json @@ -1,13 +1,15 @@ { "version": "2.0", - "service": "Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage.
When setting up hosting resources, you can deploy your custom game server or use the Amazon GameLift Realtime Servers. Realtime Servers gives you the ability to quickly stand up lightweight, efficient game servers with the core Amazon GameLift infrastructure already built in.
Get Amazon GameLift Tools and Resources
This reference guide describes the low-level service API for Amazon GameLift and provides links to language-specific SDK reference topics. See also Amazon GameLift Tools and Resources.
API Summary
The Amazon GameLift service API includes two key sets of actions:
Manage game sessions and player access -- Integrate this functionality into game client services in order to create new game sessions, retrieve information on existing game sessions; reserve a player slot in a game session, request matchmaking, etc.
Configure and manage game server resources -- Manage your Amazon GameLift hosting resources, including builds, scripts, fleets, queues, and aliases. Set up matchmakers, configure auto-scaling, retrieve game logs, and get hosting and game metrics.
Task-based list of API actions
", + "service": "Amazon GameLift provides a range of multiplayer game hosting solutions. As a fully managed service, GameLift helps you:
Set up EC2-based computing resources and use GameLift FleetIQ to deploy your game servers on low-cost, reliable Spot instances.
Track game server availability and route players into game sessions to minimize latency.
Automatically scale your resources to meet player demand and manage costs.
Optionally add FlexMatch matchmaking.
With GameLift as a managed service, you have the option to deploy your custom game server or use Amazon GameLift Realtime Servers to quickly stand up lightweight game servers for your game. Realtime Servers provides an efficient game server framework with core Amazon GameLift infrastructure already built in.
Now in Public Preview:
Use GameLift FleetIQ as a standalone feature with EC2 instances and Auto Scaling groups. GameLift FleetIQ provides optimizations that make low-cost Spot instances viable for game hosting. This extension of GameLift FleetIQ gives you access to these optimizations while managing your EC2 instances and Auto Scaling groups within your own AWS account.
Get Amazon GameLift Tools and Resources
This reference guide describes the low-level service API for Amazon GameLift and provides links to language-specific SDK reference topics. See also Amazon GameLift Tools and Resources.
API Summary
The Amazon GameLift service API includes two key sets of actions:
Manage game sessions and player access -- Integrate this functionality into game client services in order to create new game sessions, retrieve information on existing game sessions; reserve a player slot in a game session, request matchmaking, etc.
Configure and manage game server resources -- Manage your Amazon GameLift hosting resources, including builds, scripts, fleets, queues, and aliases. Set up matchmakers, configure auto-scaling, retrieve game logs, and get hosting and game metrics.
Task-based list of API actions
", "operations": { "AcceptMatch": "Registers a player's acceptance or rejection of a proposed FlexMatch match. A matchmaking configuration may require player acceptance; if so, then matches built with that configuration cannot be completed unless all players accept the proposed match within a specified time limit.
When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE
. This is a trigger for your game to get acceptance from all players in the ticket. Acceptances are only valid for tickets when they are in this status; all other acceptances result in an error.
To register acceptance, specify the ticket ID, a response, and one or more players. Once all players have registered acceptance, the matchmaking tickets advance to status PLACING
, where a new game session is created for the match.
If any player rejects the match, or if acceptances are not received before a specified timeout, the proposed match is dropped. The matchmaking tickets are then handled in one of two ways: For tickets where one or more players rejected the match, the ticket status is returned to SEARCHING
to find a new match. For tickets where one or more players failed to respond, the ticket status is set to CANCELLED
, and processing is terminated. A new matchmaking request for these players can be submitted as needed.
Learn more
Add FlexMatch to a Game Client
Related operations
", + "ClaimGameServer": "This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Locates an available game server and temporarily reserves it to host gameplay and players. This action is called by a game client or client service (such as a matchmaker) to request hosting resources for a new game session. In response, GameLift FleetIQ searches for an available game server in the specified game server group, places the game server in \"claimed\" status for 60 seconds, and returns connection information back to the requester so that players can connect to the game server.
There are two ways you can claim a game server. For the first option, you provide a game server group ID only, which prompts GameLift FleetIQ to search for an available game server in the specified group and claim it. With this option, GameLift FleetIQ attempts to consolidate gameplay on as few instances as possible to minimize hosting costs. For the second option, you request a specific game server by its ID. This option results in a less efficient claiming process because it does not take advantage of consolidation and may fail if the requested game server is unavailable.
To claim a game server, identify a game server group and (optionally) a game server ID. If your game requires that game data be provided to the game server at the start of a game, such as a game map or player information, you can provide it in your claim request.
When a game server is successfully claimed, connection information is returned. A claimed game server's utilization status remains AVAILABLE, while the claim status is set to CLAIMED for up to 60 seconds. This time period allows the game server to be prompted to update its status to UTILIZED (using UpdateGameServer). If the game server's status is not updated within 60 seconds, the game server reverts to unclaimed status and is available to be claimed by another request.
If you try to claim a specific game server, this request will fail in the following cases: (1) if the game server utilization status is UTILIZED, (2) if the game server claim status is CLAIMED, or (3) if the instance that the game server is running on is flagged as draining.
Learn more
Related operations
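The claim-then-utilize flow described for ClaimGameServer maps onto a short SDK call sequence. Below is a minimal Go sketch using this release's Request/Send client pattern; the game server group name and game data are placeholders, the ClaimGameServerInput and GameServer field names are assumed to mirror the model, and the UtilizationStatus value is passed as a typed string rather than the generated constant.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)
	ctx := context.TODO()

	// Claim any available game server in the group; GameLift FleetIQ picks one
	// and holds the claim for roughly 60 seconds.
	claim, err := svc.ClaimGameServerRequest(&gamelift.ClaimGameServerInput{
		GameServerGroupName: aws.String("my-game-server-group"), // placeholder name or ARN
		GameServerData:      aws.String(`{"map":"arena-2"}`),    // optional start-up data for the server
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("claimed game server:", claim.GameServer)

	// Once players are connected, report the server as UTILIZED so it is not
	// handed out again (see UpdateGameServerInput in the model shapes above).
	_, err = svc.UpdateGameServerRequest(&gamelift.UpdateGameServerInput{
		GameServerGroupName: claim.GameServer.GameServerGroupName,
		GameServerId:        claim.GameServer.GameServerId,
		UtilizationStatus:   gamelift.GameServerUtilizationStatus("UTILIZED"),
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
}
```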
Creates an alias for a fleet. In most situations, you can use an alias ID in place of a fleet ID. An alias provides a level of abstraction for a fleet that is useful when redirecting player traffic from one fleet to another, such as when updating your game build.
Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. A simple alias points to an active fleet. A terminal alias is used to display messaging or link to a URL instead of routing players to an active fleet. For example, you might use a terminal alias when a game version is no longer supported and you want to direct players to an upgrade site.
To create a fleet alias, specify an alias name, routing strategy, and optional description. Each simple alias can point to only one fleet, but a fleet can have multiple aliases. If successful, a new alias record is returned, including an alias ID and an ARN. You can reassign an alias to another fleet by calling UpdateAlias
.
Creates a new Amazon GameLift build record for your game server binary files and points to the location of your game server build files in an Amazon Simple Storage Service (Amazon S3) location.
Game server binaries must be combined into a zip file for use with Amazon GameLift.
To create new builds directly from a file directory, use the AWS CLI command upload-build . This helper command uploads build files and creates a new build record in one step, and automatically handles the necessary permissions.
The CreateBuild
operation should be used only in the following scenarios:
To create a new game build with build files that are in an Amazon S3 bucket under your own AWS account. To use this option, you must first give Amazon GameLift access to that Amazon S3 bucket. Then call CreateBuild
and specify a build name, operating system, and the Amazon S3 storage location of your game build.
To upload build files directly to Amazon GameLift's Amazon S3 account. To use this option, first call CreateBuild
and specify a build name and operating system. This action creates a new build record and returns an Amazon S3 storage location (bucket and key only) and temporary access credentials. Use the credentials to manually upload your build file to the provided storage location (see the Amazon S3 topic Uploading Objects). You can upload build files to the GameLift Amazon S3 location only once.
If successful, this operation creates a new build record with a unique build ID and places it in INITIALIZED
status. You can use DescribeBuild to check the status of your build. A build must be in READY
status before it can be used to create fleets.
Learn more
Uploading Your Game https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
Create a Build with Files in Amazon S3
Related operations
", - "CreateFleet": "Creates a new fleet to run your game servers. whether they are custom game builds or Realtime Servers with game-specific script. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can host multiple game sessions. When creating a fleet, you choose the hardware specifications, set some configuration options, and specify the game server to deploy on the new fleet.
To create a new fleet, you must provide the following: (1) a fleet name, (2) an EC2 instance type and fleet type (spot or on-demand), (3) the build ID for your game build or script ID if using Realtime Servers, and (4) a runtime configuration, which determines how game servers will run on each instance in the fleet.
If the CreateFleet
call is successful, Amazon GameLift performs the following tasks. You can track the process of a fleet by checking the fleet status or by monitoring fleet creation events:
Creates a fleet record. Status: NEW
.
Begins writing events to the fleet event log, which can be accessed in the Amazon GameLift console.
Sets the fleet's target capacity to 1 (desired instances), which triggers Amazon GameLift to start one new EC2 instance.
Downloads the game build or Realtime script to the new instance and installs it. Statuses: DOWNLOADING
, VALIDATING
, BUILDING
.
Starts launching server processes on the instance. If the fleet is configured to run multiple server processes per instance, Amazon GameLift staggers each process launch by a few seconds. Status: ACTIVATING
.
Sets the fleet's status to ACTIVE
as soon as one server process is ready to host a game session.
Learn more
Related operations
Manage fleet actions:
Creates a new Amazon GameLift build resource for your game server binary files. Game server binaries must be combined into a zip file for use with Amazon GameLift.
When setting up a new game build for GameLift, we recommend using the AWS CLI command upload-build . This helper command combines two tasks: (1) it uploads your build files from a file directory to a GameLift Amazon S3 location, and (2) it creates a new build resource.
The CreateBuild
operation can be used in the following scenarios:
To create a new game build with build files that are in an S3 location under an AWS account that you control. To use this option, you must first give Amazon GameLift access to the S3 bucket. With permissions in place, call CreateBuild
and specify a build name, operating system, and the S3 storage location of your game build.
To directly upload your build files to a GameLift S3 location. To use this option, first call CreateBuild
and specify a build name and operating system. This action creates a new build resource and also returns an S3 location with temporary access credentials. Use the credentials to manually upload your build files to the specified S3 location. For more information, see Uploading Objects in the Amazon S3 Developer Guide. Build files can be uploaded to the GameLift S3 location once only; they can't be updated afterward.
If successful, this operation creates a new build resource with a unique build ID and places it in INITIALIZED
status. A build must be in READY
status before you can create fleets with it.
Learn more
Create a Build with Files in Amazon S3
Related operations
", + "CreateFleet": "Creates a new fleet to run your game servers. whether they are custom game builds or Realtime Servers with game-specific script. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can host multiple game sessions. When creating a fleet, you choose the hardware specifications, set some configuration options, and specify the game server to deploy on the new fleet.
To create a new fleet, provide the following: (1) a fleet name, (2) an EC2 instance type and fleet type (spot or on-demand), (3) the build ID for your game build or script ID if using Realtime Servers, and (4) a runtime configuration, which determines how game servers will run on each instance in the fleet.
If the CreateFleet
call is successful, Amazon GameLift performs the following tasks. You can track the process of a fleet by checking the fleet status or by monitoring fleet creation events:
Creates a fleet resource. Status: NEW
.
Begins writing events to the fleet event log, which can be accessed in the Amazon GameLift console.
Sets the fleet's target capacity to 1 (desired instances), which triggers Amazon GameLift to start one new EC2 instance.
Downloads the game build or Realtime script to the new instance and installs it. Statuses: DOWNLOADING
, VALIDATING
, BUILDING
.
Starts launching server processes on the instance. If the fleet is configured to run multiple server processes per instance, Amazon GameLift staggers each process launch by a few seconds. Status: ACTIVATING
.
Sets the fleet's status to ACTIVE
as soon as one server process is ready to host a game session.
Learn more
Related operations
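For reference, the four required inputs listed for CreateFleet translate into a call along the following lines. This is a minimal Go sketch against this release's Request/Send pattern; the build ID, launch path, and instance type are placeholders, and enum-typed fields are shown as typed string conversions (swap in the generated constants if preferred).

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)

	// (1) fleet name, (2) instance/fleet type, (3) build ID, (4) runtime configuration.
	resp, err := svc.CreateFleetRequest(&gamelift.CreateFleetInput{
		Name:            aws.String("sample-fleet"),
		BuildId:         aws.String("build-1111aaaa-22bb-33cc-44dd-5555eeee66ff"), // placeholder build ID
		EC2InstanceType: gamelift.EC2InstanceType("c5.large"),
		FleetType:       gamelift.FleetType("ON_DEMAND"),
		RuntimeConfiguration: &gamelift.RuntimeConfiguration{
			ServerProcesses: []gamelift.ServerProcess{{
				LaunchPath:           aws.String("/local/game/MyGameServer"), // placeholder launch path
				ConcurrentExecutions: aws.Int64(1),
			}},
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// The new fleet starts in NEW status and moves through DOWNLOADING, VALIDATING,
	// BUILDING, and ACTIVATING as described above.
	fmt.Println("created fleet:", resp.FleetAttributes)
}
```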
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Creates a GameLift FleetIQ game server group to manage a collection of EC2 instances for game hosting. In addition to creating the game server group, this action also creates an Auto Scaling group in your AWS account and establishes a link between the two groups. You have full control over configuration of the Auto Scaling group, but GameLift FleetIQ routinely updates certain Auto Scaling group properties in order to optimize the group's instances for low-cost game hosting. You can view the status of your game server groups in the GameLift Console. Game server group metrics and events are emitted to Amazon CloudWatch.
Before creating a new game server group, you must set up the following:
An EC2 launch template. The template provides configuration settings for a set of EC2 instances and includes the game server build that you want to deploy and run on each instance. For more information on creating a launch template, see Launching an Instance from a Launch Template in the Amazon EC2 User Guide.
An IAM role. The role sets up limited access to your AWS account, allowing GameLift FleetIQ to create and manage the EC2 Auto Scaling group, get instance data, and emit metrics and events to CloudWatch. For more information on setting up an IAM permissions policy with principal access for GameLift, see Specifying a Principal in a Policy in the Amazon S3 Developer Guide.
To create a new game server group, provide a name and specify the IAM role and EC2 launch template. You also need to provide a list of instance types to be used in the group and set initial maximum and minimum limits on the group's instance count. You can optionally set an autoscaling policy with target tracking based on a GameLift FleetIQ metric.
Once the game server group and corresponding Auto Scaling group are created, you have full access to change the Auto Scaling group's configuration as needed. Keep in mind, however, that some properties are periodically updated by GameLift FleetIQ as it balances the group's instances based on availability and cost.
Learn more
Updating a GameLift FleetIQ-Linked Auto Scaling Group
Related operations
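A minimal Go sketch of the CreateGameServerGroup call described above. The LaunchTemplateSpecification and InstanceDefinition shapes come from this model update; the remaining input field names (LaunchTemplate, MinSize, MaxSize) and all identifiers are assumptions or placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)

	resp, err := svc.CreateGameServerGroupRequest(&gamelift.CreateGameServerGroupInput{
		GameServerGroupName: aws.String("my-game-server-group"),
		RoleArn:             aws.String("arn:aws:iam::123456789012:role/GameLiftFleetIQ"), // placeholder IAM role
		// LaunchTemplateSpecification and InstanceDefinition follow the shapes added in this model update.
		LaunchTemplate: &gamelift.LaunchTemplateSpecification{
			LaunchTemplateName: aws.String("my-game-server-template"), // placeholder template name
			Version:            aws.String("1"),
		},
		InstanceDefinitions: []gamelift.InstanceDefinition{ // the model allows 2-20 instance types
			{InstanceType: gamelift.GameServerGroupInstanceType("c5.large")},
			{InstanceType: gamelift.GameServerGroupInstanceType("c5.xlarge"), WeightedCapacity: aws.String("2")},
		},
		MinSize: aws.Int64(1),  // assumed field names for the group's instance count limits
		MaxSize: aws.Int64(10),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("game server group:", resp.GameServerGroup)
}
```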
Creates a multiplayer game session for players. This action creates a game session record and assigns an available server process in the specified fleet to host the game session. A fleet must have an ACTIVE
status before a game session can be created in it.
To create a game session, specify either fleet ID or alias ID and indicate a maximum number of players to allow in the game session. You can also provide a name and game-specific properties for this game session. If successful, a GameSession object is returned containing the game session properties and other settings you specified.
Idempotency tokens. You can add a token that uniquely identifies game session requests. This is useful for ensuring that game session requests are idempotent. Multiple requests with the same idempotency token are processed only once; subsequent requests return the original result. All response values are the same with the exception of game session status, which may change.
Resource creation limits. If you are creating a game session on a fleet with a resource creation limit policy in force, then you must specify a creator ID. Without this ID, Amazon GameLift has no way to evaluate the policy for this new game session request.
Player acceptance policy. By default, newly created game sessions are open to new players. You can restrict new player access by using UpdateGameSession to change the game session's player session creation policy.
Game session logs. Logs are retained for all active game sessions for 14 days. To access the logs, call GetGameSessionLogUrl to download the log files.
Available in Amazon GameLift Local.
Game session placements
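A minimal Go sketch of the CreateGameSession request described above, including an idempotency token; the fleet ID, property values, and token are placeholders, and the field names are assumed to mirror the model.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)

	resp, err := svc.CreateGameSessionRequest(&gamelift.CreateGameSessionInput{
		FleetId:                   aws.String("fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff"), // placeholder; an alias ID works too
		Name:                      aws.String("friday-night-match"),
		MaximumPlayerSessionCount: aws.Int64(8),
		GameProperties: []gamelift.GameProperty{
			{Key: aws.String("map"), Value: aws.String("arena-2")},
		},
		// Repeating the same token returns the original game session instead of creating a new one.
		IdempotencyToken: aws.String("friday-night-match-001"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("game session:", resp.GameSession)
}
```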
Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple Regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.
Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.
Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.
To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.
", + "CreateGameSessionQueue": "Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple Regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.
Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.
Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.
To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.
Learn more
Related operations
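A minimal Go sketch of the CreateGameSessionQueue call described above, with one destination and a pair of progressively relaxed latency policies; the destination ARN is a placeholder and the PlayerLatencyPolicy field names are assumed to mirror the model.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)

	resp, err := svc.CreateGameSessionQueueRequest(&gamelift.CreateGameSessionQueueInput{
		Name:             aws.String("my-queue"),
		TimeoutInSeconds: aws.Int64(600),
		// Destinations are fleet or alias ARNs, tried in order unless latency data reorders them.
		Destinations: []gamelift.GameSessionQueueDestination{
			{DestinationArn: aws.String("arn:aws:gamelift:us-west-2:123456789012:fleet/fleet-1111aaaa-22bb-33cc-44dd-5555eeee66ff")}, // placeholder ARN
		},
		// Cap individual player latency at 100 ms for the first 60 seconds, then relax to 200 ms.
		PlayerLatencyPolicies: []gamelift.PlayerLatencyPolicy{
			{MaximumIndividualPlayerLatencyMilliseconds: aws.Int64(100), PolicyDurationSeconds: aws.Int64(60)},
			{MaximumIndividualPlayerLatencyMilliseconds: aws.Int64(200)},
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("queue:", resp.GameSessionQueue)
}
```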
", "CreateMatchmakingConfiguration": "Defines a new matchmaking configuration for use with FlexMatch. A matchmaking configuration sets out guidelines for matching players and getting the matches into games. You can set up multiple matchmaking configurations to handle the scenarios needed for your game. Each matchmaking ticket (StartMatchmaking or StartMatchBackfill) specifies a configuration for the match and provides player attributes to support the configuration being used.
To create a matchmaking configuration, at a minimum you must specify the following: configuration name; a rule set that governs how to evaluate players and find acceptable matches; a game session queue to use when placing a new game session for the match; and the maximum time allowed for a matchmaking attempt.
There are two ways to track the progress of matchmaking tickets: (1) polling ticket status with DescribeMatchmaking; or (2) receiving notifications with Amazon Simple Notification Service (SNS). To use notifications, you first need to set up an SNS topic to receive the notifications, and provide the topic ARN in the matchmaking configuration. Since notifications promise only \"best effort\" delivery, we recommend calling DescribeMatchmaking
if no notifications are received within 30 seconds.
Learn more
Setting up Notifications for Matchmaking
Related operations
Creates a new rule set for FlexMatch matchmaking. A rule set describes the type of match to create, such as the number and size of teams. It also sets the parameters for acceptable player matches, such as minimum skill level or character type. A rule set is used by a MatchmakingConfiguration.
To create a matchmaking rule set, provide a unique rule set name and the rule set body in JSON format. Rule sets must be defined in the same Region as the matchmaking configuration they are used with.
Since matchmaking rule sets cannot be edited, it is a good idea to check the rule set syntax using ValidateMatchmakingRuleSet before creating a new rule set.
Learn more
Related operations
Reserves an open player slot in an active game session. Before a player can be added, a game session must have an ACTIVE
status, have a creation policy of ALLOW_ALL
, and have an open player slot. To add a group of players to a game session, use CreatePlayerSessions. When the player connects to the game server and references a player session ID, the game server contacts the Amazon GameLift service to validate the player reservation and accept the player.
To create a player session, specify a game session ID, player ID, and optionally a string of player data. If successful, a slot is reserved in the game session for the player and a new PlayerSession object is returned. Player sessions cannot be updated.
Available in Amazon GameLift Local.
Game session placements
Requests authorization to create or delete a peer connection between the VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in your AWS account. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. Once you've received authorization, call CreateVpcPeeringConnection to establish the peering connection. For more information, see VPC Peering with Amazon GameLift Fleets.
You can peer with VPCs that are owned by any AWS account you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions.
To request authorization to create a connection, call this operation from the AWS account with the VPC that you want to peer to your Amazon GameLift fleet. For example, to enable your game servers to retrieve data from a DynamoDB table, use the account that manages that DynamoDB resource. Identify the following values: (1) The ID of the VPC that you want to peer with, and (2) the ID of the AWS account that you use to manage Amazon GameLift. If successful, VPC peering is authorized for the specified VPC.
To request authorization to delete a connection, call this operation from the AWS account with the VPC that is peered with your Amazon GameLift fleet. Identify the following values: (1) VPC ID that you want to delete the peering connection for, and (2) ID of the AWS account that you use to manage Amazon GameLift.
The authorization remains valid for 24 hours unless it is canceled by a call to DeleteVpcPeeringAuthorization. You must create or delete the peering connection while the authorization is valid.
Establishes a VPC peering connection between a virtual private cloud (VPC) in an AWS account and the VPC for your Amazon GameLift fleet. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. You can peer with VPCs in any AWS account that you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions. For more information, see VPC Peering with Amazon GameLift Fleets.
Before calling this operation to establish the peering connection, you first need to call CreateVpcPeeringAuthorization and identify the VPC you want to peer with. Once the authorization for the specified VPC is issued, you have 24 hours to establish the connection. These two operations handle all tasks necessary to peer the two VPCs, including acceptance, updating routing tables, etc.
To establish the connection, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the following values: (1) The ID of the fleet you want to enable a VPC peering connection for; (2) The AWS account with the VPC that you want to peer with; and (3) The ID of the VPC you want to peer with. This operation is asynchronous. If successful, a VpcPeeringConnection request is created. You can use continuous polling to track the request's status using DescribeVpcPeeringConnections, or by monitoring fleet events for success or failure using DescribeFleetEvents.
Deletes an alias. This action removes all record of the alias. Game clients attempting to access a server process using the deleted alias receive an error. To delete an alias, specify the alias ID to be deleted.
", - "DeleteBuild": "Deletes a build. This action permanently deletes the build record and any uploaded build files.
To delete a build, specify its ID. Deleting a build does not affect the status of any active fleets using the build, but you can no longer create new fleets with the deleted build.
Learn more
Related operations
", - "DeleteFleet": "Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.
If the fleet being deleted has a VPC peering connection, you first need to get a valid authorization (good for 24 hours) by calling CreateVpcPeeringAuthorization. You do not need to explicitly delete the VPC peering connection--this is done as part of the delete fleet process.
This action removes the fleet's resources and the fleet record. Once a fleet is deleted, you can no longer use that fleet.
Learn more
Related operations
Manage fleet actions:
Deletes a game session queue. This action means that any StartGameSessionPlacement requests that reference this queue will fail. To delete a queue, specify the queue name.
", + "DeleteBuild": "Deletes a build. This action permanently deletes the build resource and any uploaded build files. Deleting a build does not affect the status of any active fleets using the build, but you can no longer create new fleets with the deleted build.
To delete a build, specify the build ID.
Learn more
Related operations
", + "DeleteFleet": "Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.
If the fleet being deleted has a VPC peering connection, you first need to get a valid authorization (good for 24 hours) by calling CreateVpcPeeringAuthorization. You do not need to explicitly delete the VPC peering connection--this is done as part of the delete fleet process.
This action removes the fleet and its resources. Once a fleet is deleted, you can no longer use any of the resources in that fleet.
Learn more
Related operations
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Terminates a game server group and permanently deletes the game server group record. You have several options for how these resources are impacted when deleting the game server group. Depending on the type of delete action selected, this action may affect three types of resources: the game server group, the corresponding Auto Scaling group, and all game servers currently running in the group.
To delete a game server group, identify the game server group to delete and specify the type of delete action to initiate. Game server groups can only be deleted if they are in ACTIVE or ERROR status.
If the delete request is successful, a series of actions are kicked off. The game server group status is changed to DELETE_SCHEDULED, which prevents new game servers from being registered and stops autoscaling activity. Once all game servers in the game server group are de-registered, GameLift FleetIQ can begin deleting resources. If any of the delete actions fail, the game server group is placed in ERROR status.
GameLift FleetIQ emits delete events to Amazon CloudWatch.
Learn more
Related operations
Deletes a game session queue. This action means that any StartGameSessionPlacement requests that reference this queue will fail. To delete a queue, specify the queue name.
Learn more
Related operations
", "DeleteMatchmakingConfiguration": "Permanently removes a FlexMatch matchmaking configuration. To delete, specify the configuration name. A matchmaking configuration cannot be deleted if it is being used in any active matchmaking tickets.
Related operations
Deletes an existing matchmaking rule set. To delete the rule set, provide the rule set name. Rule sets cannot be deleted if they are currently being used by a matchmaking configuration.
Learn more
Related operations
Deletes a fleet scaling policy. This action means that the policy is no longer in force and removes all record of it. To delete a scaling policy, specify both the scaling policy name and the fleet ID it is associated with.
To temporarily suspend scaling policies, call StopFleetActions. This operation suspends all policies for the fleet.
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Deletes a Realtime script. This action permanently deletes the script record. If script files were uploaded, they are also deleted (files stored in an S3 bucket are not deleted).
To delete a script, specify the script ID. Before deleting a script, be sure to terminate all fleets that are deployed with the script being deleted. Fleet instances periodically check for script updates, and if the script record no longer exists, the instance will go into an error state and be unable to host game sessions.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "DeleteVpcPeeringAuthorization": "Cancels a pending VPC peering authorization for the specified VPC. If you need to delete an existing VPC peering connection, call DeleteVpcPeeringConnection.
Removes a VPC peering connection. To delete the connection, you must have a valid authorization for the VPC peering connection that you want to delete. You can check for an authorization by calling DescribeVpcPeeringAuthorizations or request a new one using CreateVpcPeeringAuthorization.
Once a valid authorization exists, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the connection to delete by the connection ID and fleet ID. If successful, the connection is removed.
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Removes the game server resource from the game server group. As a result of this action, the de-registered game server can no longer be claimed and will not be returned in a list of active game servers.
To de-register a game server, specify the game server group and game server ID. If successful, this action emits a CloudWatch event with a termination timestamp and reason.
Learn more
Related operations
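The register/de-register lifecycle around DeregisterGameServer can be sketched in Go as follows. The RegisterGameServerInput field names come from the model shapes added in this update; the DeregisterGameServerInput fields and all identifiers are assumptions or placeholders, and both calls would normally run on the game server's instance at process start-up and shutdown.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)
	ctx := context.TODO()

	// On server process start-up: make this process claimable by GameLift FleetIQ.
	// Field names follow RegisterGameServerInput in the model above.
	_, err = svc.RegisterGameServerRequest(&gamelift.RegisterGameServerInput{
		GameServerGroupName: aws.String("my-game-server-group"),
		GameServerId:        aws.String("server-process-1234"), // unique per game server
		InstanceId:          aws.String("i-0123456789abcdef0"), // EC2 instance hosting the process
		ConnectionInfo:      aws.String("10.0.1.15:7777"),      // how clients reach this server
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// On shutdown: remove the record so the server can no longer be claimed.
	_, err = svc.DeregisterGameServerRequest(&gamelift.DeregisterGameServerInput{
		GameServerGroupName: aws.String("my-game-server-group"),
		GameServerId:        aws.String("server-process-1234"),
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
}
```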
Retrieves properties for an alias. This operation returns all alias metadata and settings. To get an alias's target fleet ID only, use ResolveAlias
.
To get alias properties, specify the alias ID. If successful, the requested alias record is returned.
", - "DescribeBuild": "Retrieves properties for a build. To request a build record, specify a build ID. If successful, an object containing the build properties is returned.
Learn more
Related operations
", - "DescribeEC2InstanceLimits": "Retrieves the following information for the specified EC2 instance type:
maximum number of instances allowed per AWS account (service limit)
current usage level for the AWS account
Service limits vary depending on Region. Available Regions for Amazon GameLift can be found in the AWS Management Console for Amazon GameLift (see the drop-down list in the upper right corner).
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves fleet properties, including metadata, status, and configuration, for one or more fleets. You can request attributes for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves the current status of fleet capacity for one or more fleets. This information includes the number of instances that have been requested for the fleet and the number currently active. You can request capacity for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves the inbound connection permissions for a fleet. Connection permissions include a range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. To get a fleet's inbound connection permissions, specify a fleet ID. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves utilization statistics for one or more fleets. You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID. When specifying a list of fleet IDs, utilization objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves properties for a custom game build. To request a build resource, specify a build ID. If successful, an object containing the build properties is returned.
Learn more
Related operations
", + "DescribeEC2InstanceLimits": "Retrieves the following information for the specified EC2 instance type:
Maximum number of instances allowed per AWS account (service limit).
Current usage for the AWS account.
To learn more about the capabilities of each instance type, see Amazon EC2 Instance Types. Note that the instance types offered may vary depending on the region.
Learn more
Related operations
Retrieves core properties, including configuration, status, and metadata, for a fleet.
To get attributes for one or more fleets, provide a list of fleet IDs or fleet ARNs. To get attributes for all fleets, do not specify a fleet identifier. When requesting attributes for multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each fleet requested, unless the fleet identifier is not found.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed number.
Learn more
Related operations
Describe fleets:
Retrieves the current capacity statistics for one or more fleets. These statistics present a snapshot of the fleet's instances and provide insight on current or imminent scaling activity. To get statistics on game hosting activity in the fleet, see DescribeFleetUtilization.
You can request capacity for all fleets or specify a list of one or more fleet identifiers. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When a list of fleet IDs is provided, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.
Learn more
Related operations
Describe fleets:
Retrieves a fleet's inbound connection permissions. Connection permissions specify the range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. Game sessions that are running on instances in the fleet use connections that fall in this range.
To get a fleet's inbound connection permissions, specify the fleet's unique identifier. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.
Learn more
Related operations
Describe fleets:
Retrieves utilization statistics for one or more fleets. These statistics provide insight into how available hosting resources are currently being used. To get statistics on available hosting resources, see DescribeFleetCapacity.
You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID, unless the fleet identifier is not found.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Retrieves information for a game server resource. Information includes the game server statuses, health check info, and the instance the game server is running on.
To retrieve game server information, specify the game server ID. If successful, the requested game server object is returned.
Learn more
Related operations
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Retrieves information on a game server group.
To get attributes for a game server group, provide a group name or ARN value. If successful, a GameServerGroup object is returned.
Learn more
Related operations
Retrieves properties, including the protection policy in force, for one or more game sessions. This action can be used in several ways: (1) provide a GameSessionId
or GameSessionArn
to request details for a specific game session; (2) provide either a FleetId
or an AliasId
to request properties for all game sessions running on a fleet.
To get game session record(s), specify just one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionDetail object is returned for each session matching the request.
Game session placements
Retrieves properties and current status of a game session placement request. To get game session placement details, specify the placement ID. If successful, a GameSessionPlacement object is returned.
Game session placements
Retrieves the properties for one or more game session queues. When requesting multiple queues, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionQueue object is returned for each requested queue. When specifying a list of queues, objects are returned only for queues that currently exist in the Region.
", + "DescribeGameSessionQueues": "Retrieves the properties for one or more game session queues. When requesting multiple queues, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionQueue object is returned for each requested queue. When specifying a list of queues, objects are returned only for queues that currently exist in the Region.
Learn more
Related operations
", "DescribeGameSessions": "Retrieves a set of one or more game sessions. Request a specific game session or request all game sessions on a fleet. Alternatively, use SearchGameSessions to request a set of active game sessions that are filtered by certain criteria. To retrieve protection policy settings for game sessions, use DescribeGameSessionDetails.
To get game sessions, specify one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSession object is returned for each game session matching the request.
Available in Amazon GameLift Local.
Game session placements
Retrieves information about a fleet's instances, including instance IDs. Use this action to get details on all instances in the fleet or get details on one specific instance.
To get a specific instance, specify fleet ID and instance ID. To get all instances in a fleet, specify a fleet ID only. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, an Instance object is returned for each result.
", + "DescribeInstances": "Retrieves information about a fleet's instances, including instance IDs. Use this action to get details on all instances in the fleet or get details on one specific instance.
To get a specific instance, specify fleet ID and instance ID. To get all instances in a fleet, specify a fleet ID only. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, an Instance object is returned for each result.
Learn more
Remotely Access Fleet Instances
Related operations
", "DescribeMatchmaking": "Retrieves one or more matchmaking tickets. Use this operation to retrieve ticket information, including status and--once a successful match is made--acquire connection information for the resulting new game session.
You can use this operation to track the progress of matchmaking requests (through polling) as an alternative to using event notifications. See more details on tracking matchmaking requests through polling or notifications in StartMatchmaking.
To request matchmaking tickets, provide a list of up to 10 ticket IDs. If the request is successful, a ticket object is returned for each requested ID that currently exists.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
", "DescribeMatchmakingConfigurations": "Retrieves the details of FlexMatch matchmaking configurations. With this operation, you have the following options: (1) retrieve all existing configurations, (2) provide the names of one or more configurations to retrieve, or (3) retrieve all configurations that use a specified rule set name. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a configuration is returned for each requested name. When specifying a list of names, only configurations that currently exist are returned.
Learn more
Setting Up FlexMatch Matchmakers
Related operations
Retrieves the details for FlexMatch matchmaking rule sets. You can request all existing rule sets for the Region, or provide a list of one or more rule set names. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a rule set is returned for each requested name.
Learn more
Related operations
Retrieves properties for one or more player sessions. This action can be used in several ways: (1) provide a PlayerSessionId
to request properties for a specific player session; (2) provide a GameSessionId
to request properties for all player sessions in the specified game session; (3) provide a PlayerId
to request properties for all player sessions of a specified player.
To get game session record(s), specify only one of the following: a player session ID, a game session ID, or a player ID. You can filter this request by player session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a PlayerSession object is returned for each session matching the request.
Available in Amazon GameLift Local.
Game session placements
Retrieves the current runtime configuration for the specified fleet. The runtime configuration tells Amazon GameLift how to launch server processes on instances in the fleet.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves a fleet's runtime configuration settings. The runtime configuration tells Amazon GameLift which server processes to run (and how) on each instance in the fleet.
To get a runtime configuration, specify the fleet's unique identifier. If successful, a RuntimeConfiguration object is returned for the requested fleet. If the requested fleet has been deleted, the result set is empty.
Learn more
Running Multiple Processes on a Fleet
Related operations
Describe fleets:
Retrieves all scaling policies applied to a fleet.
To get a fleet's scaling policies, specify the fleet ID. You can filter this request by policy status, such as to retrieve only active scaling policies. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a set of ScalingPolicy objects is returned for the fleet.
A fleet may have all of its scaling policies suspended (StopFleetActions). This action does not affect the status of the scaling policies, which remains ACTIVE. To see whether a fleet's scaling policies are in force or suspended, call DescribeFleetAttributes and check the stopped actions.
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Retrieves properties for a Realtime script.
To request a script record, specify the script ID. If successful, an object containing the script properties is returned.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "DescribeVpcPeeringAuthorizations": "Retrieves valid VPC peering authorizations that are pending for the AWS account. This operation returns all VPC peering authorizations and requests for peering. This includes those initiated and received by this account.
Retrieves information on VPC peering connections. Use this operation to get peering information for all fleets or for one specific fleet ID.
To retrieve connection information, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Specify a fleet ID or leave the parameter empty to retrieve all connection records. If successful, the retrieved information includes both active and pending connections. Active connections identify the IpV4 CIDR block that the VPC uses to connect.
Retrieves the location of stored game session logs for a specified game session. When a game session is terminated, Amazon GameLift automatically stores the logs in Amazon S3 and retains them for 14 days. Use this URL to download the logs.
See the AWS Service Limits page for maximum log file sizes. Log files that exceed this limit are not saved.
Game session placements
Requests remote access to a fleet instance. Remote access is useful for debugging, gathering benchmarking data, or watching activity in real time.
Access requires credentials that match the operating system of the instance. For a Windows instance, Amazon GameLift returns a user name and password as strings for use with a Windows Remote Desktop client. For a Linux instance, Amazon GameLift returns a user name and RSA private key, also as strings, for use with an SSH client. The private key must be saved in the proper format to a .pem
file before using. If you're making this request using the AWS CLI, saving the secret can be handled as part of the GetInstanceAccess request. (See the example later in this topic). For more information on remote access, see Remotely Accessing an Instance.
To request access to a specific instance, specify the IDs of both the instance and the fleet it belongs to. You can retrieve a fleet's instance IDs by calling DescribeInstances. If successful, an InstanceAccess object is returned containing the instance's IP address and a set of credentials.
", + "GetInstanceAccess": "Requests remote access to a fleet instance. Remote access is useful for debugging, gathering benchmarking data, or observing activity in real time.
To remotely access an instance, you need credentials that match the operating system of the instance. For a Windows instance, Amazon GameLift returns a user name and password as strings for use with a Windows Remote Desktop client. For a Linux instance, Amazon GameLift returns a user name and RSA private key, also as strings, for use with an SSH client. The private key must be saved in the proper format to a .pem
file before using. If you're making this request using the AWS CLI, saving the secret can be handled as part of the GetInstanceAccess request, as shown in one of the examples for this action.
To request access to a specific instance, specify the IDs of both the instance and the fleet it belongs to. You can retrieve a fleet's instance IDs by calling DescribeInstances. If successful, an InstanceAccess object is returned that contains the instance's IP address and a set of credentials.
Learn more
Remotely Access Fleet Instances
Related operations
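A minimal Go sketch of requesting access and saving the returned secret as a .pem file for a Linux instance, assuming this release's request/Send pattern; the fleet and instance IDs are placeholders, and the InstanceAccess field names shown are assumptions to check against the generated gamelift package.

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	resp, err := client.GetInstanceAccessRequest(&gamelift.GetInstanceAccessInput{
		FleetId:    aws.String("fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"), // placeholders
		InstanceId: aws.String("i-0abc123def4567890"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	access := resp.InstanceAccess
	fmt.Printf("connect to %s as %s\n", *access.IpAddress, *access.Credentials.UserName)
	// For a Linux instance the secret is an RSA private key; save it for use with SSH.
	if err := ioutil.WriteFile("instance.pem", []byte(*access.Credentials.Secret), 0600); err != nil {
		log.Fatal(err)
	}
}
```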
", "ListAliases": "Retrieves all aliases for this AWS account. You can filter the result set by alias name and/or routing strategy type. Use the pagination parameters to retrieve results in sequential pages.
Returned aliases are not listed in any particular order.
Retrieves build records for all builds associated with the AWS account in use. You can limit results to builds that are in a specific status by using the Status
parameter. Use the pagination parameters to retrieve results in a set of sequential pages.
Build records are not listed in any particular order.
Learn more
Related operations
", - "ListFleets": "Retrieves a collection of fleet records for this AWS account. You can filter the result set to find only those fleets that are deployed with a specific build or script. Use the pagination parameters to retrieve results in sequential pages.
Fleet records are not listed in a particular order.
Learn more
Related operations
Manage fleet actions:
Retrieves build resources for all builds associated with the AWS account in use. You can limit results to builds that are in a specific status by using the Status
parameter. Use the pagination parameters to retrieve results in a set of sequential pages.
Build resources are not listed in any particular order.
Learn more
Related operations
", + "ListFleets": "Retrieves a collection of fleet resources for this AWS account. You can filter the result set to find only those fleets that are deployed with a specific build or script. Use the pagination parameters to retrieve results in sequential pages.
Fleet resources are not listed in a particular order.
Learn more
Related operations
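For context, a short Go sketch of paging through fleets filtered by build, assuming the request/Send pattern of this release; the build ID is a placeholder, and dropping it would list every fleet in the account.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	// Page through all fleets deployed with one build.
	var next *string
	for {
		resp, err := client.ListFleetsRequest(&gamelift.ListFleetsInput{
			BuildId:   aws.String("build-1111aaaa-22bb-33cc-44dd-5555eeee6666"), // placeholder
			Limit:     aws.Int64(20),
			NextToken: next,
		}).Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, id := range resp.FleetIds {
			fmt.Println(id)
		}
		if resp.NextToken == nil {
			break
		}
		next = resp.NextToken
	}
}
```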
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Retrieves information on all game server groups that exist in the current AWS account for the selected Region. Use the pagination parameters to retrieve results in a set of sequential pages.
Learn more
Related operations
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Retrieves information on all game servers that are currently running in a specified game server group. If there are custom sort key values for your game servers, you can opt to have the returned list sorted based on these values. Use the pagination parameters to retrieve results in a set of sequential pages.
Learn more
Related operations
Retrieves script records for all Realtime scripts that are associated with the AWS account in use.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "ListTagsForResource": "Retrieves all tags that are assigned to a GameLift resource. Resource tags are used to organize AWS resources for a range of purposes. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To list tags for a resource, specify the unique ARN value for the resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
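A minimal Go sketch of listing tags by ARN, assuming this release's request/Send pattern; the fleet ARN is a placeholder and any taggable GameLift resource ARN could be used instead.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	resp, err := client.ListTagsForResourceRequest(&gamelift.ListTagsForResourceInput{
		// Placeholder fleet ARN.
		ResourceARN: aws.String("arn:aws:gamelift:us-west-2:123456789012:fleet/fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, tag := range resp.Tags {
		fmt.Printf("%s=%s\n", *tag.Key, *tag.Value)
	}
}
```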
", "PutScalingPolicy": "Creates or updates a scaling policy for a fleet. Scaling policies are used to automatically scale a fleet's hosting capacity to meet player demand. An active scaling policy instructs Amazon GameLift to track a fleet metric and automatically change the fleet's capacity when a certain threshold is reached. There are two types of scaling policies: target-based and rule-based. Use a target-based policy to quickly and efficiently manage fleet scaling; this option is the most commonly used. Use rule-based policies when you need to exert fine-grained control over auto-scaling.
Fleets can have multiple scaling policies of each type in force at the same time; you can have one target-based policy, one or multiple rule-based scaling policies, or both. We recommend caution, however, because multiple auto-scaling policies can have unintended consequences.
You can temporarily suspend all scaling policies for a fleet by calling StopFleetActions with the fleet action AUTO_SCALING. To resume scaling policies, call StartFleetActions with the same fleet action. To stop just one scaling policy, or to permanently remove it, you must delete the policy with DeleteScalingPolicy.
Learn more about how to work with auto-scaling in Set Up Fleet Automatic Scaling.
Target-based policy
A target-based policy tracks a single metric: PercentAvailableGameSessions. This metric tells us how much of a fleet's hosting capacity is ready to host game sessions but is not currently in use. This is the fleet's buffer; it measures the additional player demand that the fleet could handle at current capacity. With a target-based policy, you set your ideal buffer size and leave it to Amazon GameLift to take whatever action is needed to maintain that target.
For example, you might choose to maintain a 10% buffer for a fleet that has the capacity to host 100 simultaneous game sessions. This policy tells Amazon GameLift to take action whenever the fleet's available capacity falls below or rises above 10 game sessions. Amazon GameLift will start new instances or stop unused instances in order to return to the 10% buffer.
To create or update a target-based policy, specify a fleet ID and name, and set the policy type to \"TargetBased\". Specify the metric to track (PercentAvailableGameSessions) and reference a TargetConfiguration object with your desired buffer value. Exclude all other parameters. On a successful request, the policy name is returned. The scaling policy is automatically in force as soon as it's successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.
Rule-based policy
A rule-based policy tracks a specified fleet metric, sets a threshold value, and specifies the type of action to initiate when triggered. With a rule-based policy, you can select from several available fleet metrics. Each policy specifies whether to scale up or scale down (and by how much), so you need one policy for each type of action.
For example, a policy may make the following statement: \"If the percentage of idle instances is greater than 20% for more than 15 minutes, then reduce the fleet capacity by 10%.\"
A policy's rule statement has the following structure:
If [MetricName] is [ComparisonOperator] [Threshold] for [EvaluationPeriods] minutes, then [ScalingAdjustmentType] to/by [ScalingAdjustment].
To implement the example, the rule statement would look like this:
If [PercentIdleInstances] is [GreaterThanThreshold] [20] for [15] minutes, then [PercentChangeInCapacity] to/by [10].
To create or update a scaling policy, specify a unique combination of name and fleet ID, and set the policy type to \"RuleBased\". Specify the parameter values for a policy rule statement. On a successful request, the policy name is returned. Scaling policies are automatically in force as soon as they're successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.
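A hedged Go sketch of the target-based variant described above, using this release's request/Send pattern; the fleet ID is a placeholder, and the Go type names for PolicyType, MetricName, and TargetConfiguration are assumptions based on the API shapes (the generated package also defines named constants for the enum values written as string conversions here).

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	// Target-based policy: keep a 10 percent buffer of available game session capacity.
	resp, err := client.PutScalingPolicyRequest(&gamelift.PutScalingPolicyInput{
		Name:       aws.String("keep-10-percent-buffer"),
		FleetId:    aws.String("fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"), // placeholder
		PolicyType: gamelift.PolicyType("TargetBased"),
		MetricName: gamelift.MetricName("PercentAvailableGameSessions"),
		TargetConfiguration: &gamelift.TargetConfiguration{
			TargetValue: aws.Float64(10.0),
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("policy in force:", *resp.Name)
}
```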
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Retrieves a fresh set of credentials for use when uploading a new set of game build files to Amazon GameLift's Amazon S3. This is done as part of the build creation process; see CreateBuild.
To request new credentials, specify the build ID as returned with an initial CreateBuild
request. If successful, a new set of credentials are returned, along with the S3 storage location associated with the build ID.
Learn more
Related operations
", + "RegisterGameServer": "This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Creates a new game server resource and notifies GameLift FleetIQ that the game server is ready to host gameplay and players. This action is called by a game server process that is running on an instance in a game server group. Registering game servers enables GameLift FleetIQ to track available game servers and enables game clients and services to claim a game server for a new game session.
To register a game server, identify the game server group and instance where the game server is running, and provide a unique identifier for the game server. You can also include connection and game server data; when a game client or service requests a game server by calling ClaimGameServer, this information is returned in response.
Once a game server is successfully registered, it is put in status AVAILABLE. A request to register a game server may fail if the instance it is running on is in the process of shutting down as part of instance rebalancing or scale-down activity.
Learn more
Related operations
Retrieves a fresh set of credentials for use when uploading a new set of game build files to Amazon GameLift's Amazon S3. This is done as part of the build creation process; see CreateBuild.
To request new credentials, specify the build ID as returned with an initial CreateBuild
request. If successful, a new set of credentials are returned, along with the S3 storage location associated with the build ID.
Learn more
Create a Build with Files in S3
Related operations
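A short Go sketch of refreshing upload credentials for an existing build, assuming the request/Send pattern of this release; the build ID is a placeholder, and the StorageLocation and UploadCredentials field names are assumptions to check against the generated gamelift package.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	resp, err := client.RequestUploadCredentialsRequest(&gamelift.RequestUploadCredentialsInput{
		BuildId: aws.String("build-1111aaaa-22bb-33cc-44dd-5555eeee6666"), // placeholder
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// Temporary credentials for uploading the build archive to the returned S3 location.
	fmt.Println("bucket:", *resp.StorageLocation.Bucket, "key:", *resp.StorageLocation.Key)
	fmt.Println("access key id:", *resp.UploadCredentials.AccessKeyId)
}
```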
", "ResolveAlias": "Retrieves the fleet ID that an alias is currently pointing to.
", + "ResumeGameServerGroup": "This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Reinstates activity on a game server group after it has been suspended. A game server group may be suspended by calling SuspendGameServerGroup, or it may have been involuntarily suspended due to a configuration problem. You can manually resume activity on the group once the configuration problem has been resolved. Refer to the game server group status and status reason for more information on why group activity is suspended.
To resume activity, specify a game server group ARN and the type of activity to be resumed.
Learn more
Related operations
Retrieves all active game sessions that match a set of search criteria and sorts them in a specified order. You can search or sort by the following game session attributes:
gameSessionId -- A unique identifier for the game session. You can use either a GameSessionId or GameSessionArn value.
gameSessionName -- Name assigned to a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession. Game session names do not need to be unique to a game session.
gameSessionProperties -- Custom data defined in a game session's GameProperty parameter. GameProperty values are stored as key:value pairs; the filter expression must indicate the key and a string to search the data values for. For example, to search for game sessions with custom data containing the key:value pair \"gameMode:brawl\", specify the following: gameSessionProperties.gameMode = \"brawl\". All custom data values are searched as strings.
maximumSessions -- Maximum number of player sessions allowed for a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession.
creationTimeMillis -- Value indicating when a game session was created. It is expressed in Unix time as milliseconds.
playerSessionCount -- Number of players currently connected to a game session. This value changes rapidly as players join the session or drop out.
hasAvailablePlayerSessions -- Boolean value indicating whether a game session has reached its maximum number of players. It is highly recommended that all search requests include this filter attribute to optimize search performance and return only sessions that players can join.
Returned values for playerSessionCount and hasAvailablePlayerSessions change quickly as players join sessions and others drop out. Results should be considered a snapshot in time. Be sure to refresh search results often, and handle sessions that fill up before a player can join.
To search or sort, specify either a fleet ID or an alias ID, and provide a search filter expression, a sort expression, or both. If successful, a collection of GameSession objects matching the request is returned. Use the pagination parameters to retrieve results as a set of sequential pages.
You can search for game sessions one fleet at a time only. To find game sessions across multiple fleets, you must search each fleet separately and combine the results. This search feature finds only game sessions that are in ACTIVE
status. To locate games in statuses other than active, use DescribeGameSessionDetails.
Game session placements
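For context, a minimal Go sketch of a search on one fleet, assuming this release's request/Send pattern; the fleet ID is a placeholder and the combined filter and sort expressions are illustrative uses of the attribute syntax described above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	// Find joinable "brawl" sessions on one fleet, newest first.
	resp, err := client.SearchGameSessionsRequest(&gamelift.SearchGameSessionsInput{
		FleetId:          aws.String("fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"), // placeholder
		FilterExpression: aws.String(`gameSessionProperties.gameMode = "brawl" AND hasAvailablePlayerSessions=true`),
		SortExpression:   aws.String("creationTimeMillis DESC"),
		Limit:            aws.Int64(10),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, gs := range resp.GameSessions {
		fmt.Println(*gs.GameSessionId)
	}
}
```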
Resumes activity on a fleet that was suspended with StopFleetActions. Currently, this operation is used to restart a fleet's auto-scaling activity.
To start fleet actions, specify the fleet ID and the type of actions to restart. When auto-scaling fleet actions are restarted, Amazon GameLift once again initiates scaling events as triggered by the fleet's scaling policies. If actions on the fleet were never stopped, this operation will have no effect. You can view a fleet's stopped actions using DescribeFleetAttributes.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Resumes activity on a fleet that was suspended with StopFleetActions. Currently, this operation is used to restart a fleet's auto-scaling activity.
To start fleet actions, specify the fleet ID and the type of actions to restart. When auto-scaling fleet actions are restarted, Amazon GameLift once again initiates scaling events as triggered by the fleet's scaling policies. If actions on the fleet were never stopped, this operation will have no effect. You can view a fleet's stopped actions using DescribeFleetAttributes.
Learn more
Related operations
Places a request for a new game session in a queue (see CreateGameSessionQueue). When processing a placement request, Amazon GameLift searches for available resources on the queue's destinations, scanning each until it finds resources or the placement request times out.
A game session placement request can also request player sessions. When a new game session is successfully created, Amazon GameLift creates a player session for each player included in the request.
When placing a game session, by default Amazon GameLift tries each fleet in the order they are listed in the queue configuration. Ideally, a queue's destinations are listed in preference order.
Alternatively, when requesting a game session with players, you can also provide latency data for each player in relevant Regions. Latency data indicates the performance lag a player experiences when connected to a fleet in the Region. Amazon GameLift uses latency data to reorder the list of destinations to place the game session in a Region with minimal lag. If latency data is provided for multiple players, Amazon GameLift calculates each Region's average lag for all players and reorders to get the best game play across all players.
To place a new game session request, specify the following:
The queue name and a set of game session properties and settings
A unique ID (such as a UUID) for the placement. You use this ID to track the status of the placement request
(Optional) A set of player data and a unique player ID for each player that you are joining to the new game session (player data is optional, but if you include it, you must also provide a unique ID for each player)
Latency data for all players (if you want to optimize game play for the players)
If successful, a new game session placement is created.
To track the status of a placement request, call DescribeGameSessionPlacement and check the request's status. If the status is FULFILLED, a new game session has been created and a game session ARN and Region are referenced. If the placement request times out, you can resubmit the request or retry it with a different queue.
Game session placements
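A hedged Go sketch of placing a game session and then polling the placement status, assuming this release's request/Send pattern; the queue name and placement ID are placeholders, and latency data or desired player sessions could be added to the request as described above.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	placementID := "placement-0001" // use a unique ID (for example a UUID) per request

	_, err = client.StartGameSessionPlacementRequest(&gamelift.StartGameSessionPlacementInput{
		PlacementId:               aws.String(placementID),
		GameSessionQueueName:      aws.String("my-queue"), // placeholder queue name
		MaximumPlayerSessionCount: aws.Int64(8),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the placement leaves PENDING (FULFILLED, CANCELLED, or TIMED_OUT).
	for {
		resp, err := client.DescribeGameSessionPlacementRequest(&gamelift.DescribeGameSessionPlacementInput{
			PlacementId: aws.String(placementID),
		}).Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		status := resp.GameSessionPlacement.Status
		fmt.Println("placement status:", status)
		if status != "PENDING" {
			break
		}
		time.Sleep(5 * time.Second)
	}
}
```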
Finds new players to fill open slots in an existing game session. This operation can be used to add players to matched games that start with fewer than the maximum number of players or to replace players when they drop out. By backfilling with the same matchmaker used to create the original match, you ensure that new players meet the match criteria and maintain a consistent experience throughout the game session. You can backfill a match anytime after a game session has been created.
To request a match backfill, specify a unique ticket ID, the existing game session's ARN, a matchmaking configuration, and a set of data that describes all current players in the game session. If successful, a match backfill ticket is created and returned with status set to QUEUED. The ticket is placed in the matchmaker's ticket pool and processed. Track the status of the ticket to respond as needed.
The process of finding backfill matches is essentially identical to the initial matchmaking process. The matchmaker searches the pool and groups tickets together to form potential matches, allowing only one backfill ticket per potential match. Once a match is formed, the matchmaker creates player sessions for the new players. All tickets in the match are updated with the game session's connection information, and the GameSession object is updated to include matchmaker data on the new players. For more detail on how match backfill requests are processed, see How Amazon GameLift FlexMatch Works.
Learn more
Backfill Existing Games with FlexMatch
Related operations
", "StartMatchmaking": "Uses FlexMatch to create a game match for a group of players based on custom matchmaking rules, and starts a new game for the matched players. Each matchmaking request specifies the type of match to build (team configuration, rules for an acceptable match, etc.). The request also specifies the players to find a match for and where to host the new game session for optimal performance. A matchmaking request might start with a single player or a group of players who want to play together. FlexMatch finds additional players as needed to fill the match. Match type, rules, and the queue used to place a new game session are defined in a MatchmakingConfiguration
.
To start matchmaking, provide a unique ticket ID, specify a matchmaking configuration, and include the players to be matched. You must also include a set of player attributes relevant for the matchmaking configuration. If successful, a matchmaking ticket is returned with status set to QUEUED. Track the status of the ticket to respond as needed and acquire game session connection information for successfully completed matches.
Tracking ticket status -- A couple of options are available for tracking the status of matchmaking requests:
Polling -- Call DescribeMatchmaking. This operation returns the full ticket object, including current status and (for completed tickets) game session connection info. We recommend polling no more than once every 10 seconds.
Notifications -- Get event notifications for changes in ticket status using Amazon Simple Notification Service (SNS). Notifications are easy to set up (see CreateMatchmakingConfiguration) and typically deliver match status changes faster and more efficiently than polling. We recommend that you use polling to back up to notifications (since delivery is not guaranteed) and call DescribeMatchmaking only when notifications are not received within 30 seconds.
Processing a matchmaking request -- FlexMatch handles a matchmaking request as follows:
Your client code submits a StartMatchmaking request for one or more players and tracks the status of the request ticket.
FlexMatch uses this ticket and others in process to build an acceptable match. When a potential match is identified, all tickets in the proposed match are advanced to the next status.
If the match requires player acceptance (set in the matchmaking configuration), the tickets move into status REQUIRES_ACCEPTANCE. This status triggers your client code to solicit acceptance from all players in every ticket involved in the match, and then call AcceptMatch for each player. If any player rejects or fails to accept the match before a specified timeout, the proposed match is dropped (see AcceptMatch for more details).
Once a match is proposed and accepted, the matchmaking tickets move into status PLACING. FlexMatch locates resources for a new game session using the game session queue (set in the matchmaking configuration) and creates the game session based on the match data.
When the match is successfully placed, the matchmaking tickets move into COMPLETED status. Connection information (including game session endpoint and player session) is added to the matchmaking tickets. Matched players can use the connection information to join the game.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
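A minimal single-player sketch of submitting a matchmaking ticket in Go, assuming the request/Send pattern of this release; the ticket ID, configuration name, and player ID are placeholders, and player attributes would be added when the rule set requires them (for example a skill rating).

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	// One-player request against a placeholder matchmaking configuration.
	resp, err := client.StartMatchmakingRequest(&gamelift.StartMatchmakingInput{
		TicketId:          aws.String("my-ticket-0001"),        // must be unique
		ConfigurationName: aws.String("my-matchmaking-config"), // placeholder
		Players: []gamelift.Player{
			{PlayerId: aws.String("player-1")},
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ticket status:", resp.MatchmakingTicket.Status)
}
```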
", - "StopFleetActions": "Suspends activity on a fleet. Currently, this operation is used to stop a fleet's auto-scaling activity. It is used to temporarily stop scaling events triggered by the fleet's scaling policies. The policies can be retained and auto-scaling activity can be restarted using StartFleetActions. You can view a fleet's stopped actions using DescribeFleetAttributes.
To stop fleet actions, specify the fleet ID and the type of actions to suspend. When auto-scaling fleet actions are stopped, Amazon GameLift no longer initiates scaling events except to maintain the fleet's desired instances setting (FleetCapacity. Changes to the fleet's capacity must be done manually using UpdateFleetCapacity.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Suspends activity on a fleet. Currently, this operation is used to stop a fleet's auto-scaling activity. It is used to temporarily stop triggering scaling events. The policies can be retained and auto-scaling activity can be restarted using StartFleetActions. You can view a fleet's stopped actions using DescribeFleetAttributes.
To stop fleet actions, specify the fleet ID and the type of actions to suspend. When auto-scaling fleet actions are stopped, Amazon GameLift no longer initiates scaling events except in response to manual changes using UpdateFleetCapacity.
Learn more
Related operations
Cancels a game session placement that is in PENDING
status. To stop a placement, provide the placement ID values. If successful, the placement is moved to CANCELLED
status.
Game session placements
Cancels a matchmaking ticket or match backfill ticket that is currently being processed. To stop the matchmaking operation, specify the ticket ID. If successful, work on the ticket is stopped, and the ticket status is changed to CANCELLED
.
This call is also used to turn off automatic backfill for an individual game session. This is for game sessions that are created with a matchmaking configuration that has automatic backfill enabled. The ticket ID is included in the MatchmakerData
of an updated game session object, which is provided to the game server.
If the action is successful, the service sends back an empty JSON struct with the HTTP 200 response (not an empty HTTP body).
Learn more
Add FlexMatch to a Game Client
Related operations
", - "TagResource": "Assigns a tag to a GameLift resource. AWS resource tags provide an additional management tool set. You can use tags to organize resources, create IAM permissions policies to manage access to groups of resources, customize AWS cost breakdowns, etc. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To add a tag to a resource, specify the unique ARN value for the resource and provide a trig list containing one or more tags. The operation succeeds even if the list includes tags that are already assigned to the specified resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
", + "SuspendGameServerGroup": "This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Temporarily stops activity on a game server group without terminating instances or the game server group. Activity can be restarted by calling ResumeGameServerGroup. Activities that can be suspended are:
Instance type replacement. This activity evaluates the current Spot viability of all instance types that are defined for the game server group. It updates the Auto Scaling group to remove nonviable Spot instance types (which have a higher chance of game server interruptions) and rebalances capacity across the remaining viable Spot instance types. When this activity is suspended, the Auto Scaling group continues with its current balance, regardless of viability. Instance protection, utilization metrics, and capacity autoscaling activities continue to be active.
To suspend activity, specify a game server group ARN and the type of activity to be suspended.
Learn more
Related operations
Assigns a tag to a GameLift resource. AWS resource tags provide an additional management tool set. You can use tags to organize resources, create IAM permissions policies to manage access to groups of resources, customize AWS cost breakdowns, etc. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To add a tag to a resource, specify the unique ARN value for the resource and provide a tag list containing one or more tags. The operation succeeds even if the list includes tags that are already assigned to the specified resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
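A short Go sketch of tagging a resource by ARN, assuming this release's request/Send pattern; the fleet ARN and tag keys are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	_, err = client.TagResourceRequest(&gamelift.TagResourceInput{
		// Placeholder fleet ARN; any taggable GameLift resource ARN works here.
		ResourceARN: aws.String("arn:aws:gamelift:us-west-2:123456789012:fleet/fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"),
		Tags: []gamelift.Tag{
			{Key: aws.String("team"), Value: aws.String("game-services")},
			{Key: aws.String("stage"), Value: aws.String("prod")},
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Println("tags applied")
}
```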
", "UntagResource": "Removes a tag that is assigned to a GameLift resource. Resource tags are used to organize AWS resources for a range of purposes. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To remove a tag from a resource, specify the unique ARN value for the resource and provide a string list containing one or more tags to be removed. This action succeeds even if the list includes tags that are not currently assigned to the specified resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
", "UpdateAlias": "Updates properties for an alias. To update properties, specify the alias ID to be updated and provide the information to be changed. To reassign an alias to another fleet, provide an updated routing strategy. If successful, the updated alias record is returned.
", - "UpdateBuild": "Updates metadata in a build record, including the build name and version. To update the metadata, specify the build ID to update and provide the new values. If successful, a build object containing the updated metadata is returned.
Learn more
Related operations
", - "UpdateFleetAttributes": "Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values that you want to change. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
Manage fleet actions:
Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.
Specify minimum and maximum number of instances. Amazon GameLift will not change fleet capacity to values fall outside of this range. This is particularly important when using auto-scaling (see PutScalingPolicy) to allow capacity to adjust based on player demand while imposing limits on automatic adjustments.
To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.
Learn more
Related operations
Update fleets:
Manage fleet actions:
Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations
, and permissions you want to remove in InboundPermissionRevocations
. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
Manage fleet actions:
Updates metadata in a build resource, including the build name and version. To update the metadata, specify the build ID to update and provide the new values. If successful, a build object containing the updated metadata is returned.
Learn more
Related operations
", + "UpdateFleetAttributes": "Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values that you want to change. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.
Specify the minimum and maximum number of instances. Amazon GameLift will not change fleet capacity to values that fall outside of this range. This is particularly important when using auto-scaling (see PutScalingPolicy) to allow capacity to adjust based on player demand while imposing limits on automatic adjustments.
To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.
Learn more
Related operations
Update fleets:
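A hedged Go sketch of the capacity update described above, using this release's request/Send pattern; the fleet ID and instance counts are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := gamelift.New(cfg)

	// Ask for 10 instances and keep automatic adjustments between 5 and 20.
	resp, err := client.UpdateFleetCapacityRequest(&gamelift.UpdateFleetCapacityInput{
		FleetId:          aws.String("fleet-2222bbbb-33cc-44dd-55ee-6666ffff7777"), // placeholder
		DesiredInstances: aws.Int64(10),
		MinSize:          aws.Int64(5),
		MaxSize:          aws.Int64(20),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated fleet:", *resp.FleetId)
}
```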
Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations
, and permissions you want to remove in InboundPermissionRevocations
. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Updates information about a registered game server. This action is called by a game server process that is running on an instance in a game server group. There are three reasons to update game server information: (1) to change the utilization status of the game server, (2) to report game server health status, and (3) to change game server metadata. A registered game server should regularly report health and should update utilization status when it is supporting gameplay so that GameLift FleetIQ can accurately track game server availability. You can make all three types of updates in the same request.
To update the game server's utilization status, identify the game server and game server group and specify the current utilization status. Use this status to identify when game servers are currently hosting games and when they are available to be claimed.
To report health status, identify the game server and game server group and set health check to HEALTHY. If a game server does not report health status for a certain length of time, the game server is no longer considered healthy and will eventually be de-registered from the game server group to avoid affecting utilization metrics. The best practice is to report health every 60 seconds.
To change game server metadata, provide updated game server data and custom sort key values.
Once a game server is successfully updated, the relevant statuses and timestamps are updated.
Learn more
Related operations
This action is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Updates GameLift FleetIQ-specific properties for a game server group. These properties include instance rebalancing and game server protection. Many Auto Scaling group properties are updated directly. These include autoscaling policies, minimum/maximum/desired instance counts, and the launch template.
To update the game server group, specify the game server group ID and provide the updated values.
Updated properties are validated to ensure that GameLift FleetIQ can continue to perform its core instance rebalancing activity. When you change Auto Scaling group properties directly and the changes cause errors with GameLift FleetIQ activities, an alert is sent.
Learn more
Updating a GameLift FleetIQ-Linked Auto Scaling Group
Related operations
Updates game session properties. This includes the session name, the maximum player count, the protection policy (which controls whether an active game session can be terminated during a scale-down event), and the player session creation policy (which controls whether new players can join the session). To update a game session, specify the game session ID and the values you want to change. If successful, an updated GameSession object is returned.
Game session placements
Updates settings for a game session queue, which determines how new game session requests in the queue are processed. To update settings, specify the queue name to be updated and provide the new settings. When updating destinations, provide a complete list of destinations.
", + "UpdateGameSessionQueue": "Updates settings for a game session queue, which determines how new game session requests in the queue are processed. To update settings, specify the queue name to be updated and provide the new settings. When updating destinations, provide a complete list of destinations.
Learn more
Related operations
", "UpdateMatchmakingConfiguration": "Updates settings for a FlexMatch matchmaking configuration. These changes affect all matches and game sessions that are created after the update. To update settings, specify the configuration name to be updated and provide the new settings.
Learn more
Related operations
Updates the current runtime configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's runtime configuration at any time after the fleet is created; it does not need to be in an ACTIVE
status.
To update runtime configuration, specify the fleet ID and provide a RuntimeConfiguration
object with an updated set of server process configurations.
Each instance in a Amazon GameLift fleet checks regularly for an updated runtime configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; runtime configuration changes are applied gradually as existing processes shut down and new processes are launched during Amazon GameLift's normal process recycling activity.
Learn more
Related operations
Update fleets:
Manage fleet actions:
Updates the current runtime configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's runtime configuration at any time after the fleet is created; it does not need to be in an ACTIVE
status.
To update runtime configuration, specify the fleet ID and provide a RuntimeConfiguration
object with an updated set of server process configurations.
Each instance in an Amazon GameLift fleet checks regularly for an updated runtime configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; runtime configuration changes are applied gradually as existing processes shut down and new processes are launched during Amazon GameLift's normal process recycling activity.
Learn more
Related operations
Update fleets:
Updates Realtime script metadata and content.
To update script metadata, specify the script ID and provide updated name and/or version values.
To update script content, provide an updated zip file by pointing to either a local file or an Amazon S3 bucket location. You can use either method regardless of how the original script was uploaded. Use the Version parameter to track updates to the script.
If the call is successful, the updated metadata is stored in the script record and a revised script is uploaded to the Amazon GameLift service. Once the script is updated and acquired by a fleet instance, the new version is used for all new game sessions.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "ValidateMatchmakingRuleSet": "Validates the syntax of a matchmaking rule or rule set. This operation checks that the rule set is using syntactically correct JSON and that it conforms to allowed property expressions. To validate syntax, provide a rule set JSON string.
Learn more
Related operations
The updated alias resource.
" } }, + "AliasArn": { + "base": null, + "refs": { + "Alias$AliasArn": "Amazon Resource Name (ARN) that is assigned to a GameLift alias resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift alias ARN, the resource ID matches the alias ID value.
" + } + }, "AliasId": { "base": null, "refs": { - "Alias$AliasId": "A unique identifier for an alias. Alias IDs are unique within a Region.
", + "Alias$AliasId": "A unique identifier for an alias. Alias IDs are unique within a Region.
" + } + }, + "AliasIdOrArn": { + "base": null, + "refs": { "CreateGameSessionInput$AliasId": "A unique identifier for an alias associated with the fleet to create a game session in. You can use either the alias ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", "DeleteAliasInput$AliasId": "A unique identifier of the alias that you want to delete. You can use either the alias ID or ARN value.
", "DescribeAliasInput$AliasId": "The unique identifier for the fleet alias that you want to retrieve. You can use either the alias ID or ARN value.
", @@ -130,30 +154,23 @@ "refs": { "ListTagsForResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to retrieve tags for. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
", "TagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to assign tags to. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
", - "UntagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to remove tags from. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
" + "UntagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to remove tags from. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
" } }, "ArnStringModel": { "base": null, "refs": { - "Alias$AliasArn": "Amazon Resource Name (ARN) that is assigned to a GameLift alias resource and uniquely identifies it. ARNs are unique across all Regions.. In a GameLift alias ARN, the resource ID matches the alias ID value.
", "CreatePlayerSessionInput$GameSessionId": "A unique identifier for the game session to add a player to.
", "CreatePlayerSessionsInput$GameSessionId": "A unique identifier for the game session to add players to.
", "DescribeGameSessionDetailsInput$GameSessionId": "A unique identifier for the game session to retrieve.
", "DescribeGameSessionsInput$GameSessionId": "A unique identifier for the game session to retrieve.
", "DescribePlayerSessionsInput$GameSessionId": "A unique identifier for the game session to retrieve player sessions for.
", - "FleetAttributes$FleetArn": "The Amazon Resource Name (ARN) that is assigned to a GameLift fleet resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift fleet ARN, the resource ID matches the FleetId value.
", - "GameSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that this game session is running on.
", "GameSessionConnectionInfo$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it.
", - "GameSessionQueue$GameSessionQueueArn": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift game session queue ARN, the resource ID matches the Name value.
", "GameSessionQueueDestination$DestinationArn": "The Amazon Resource Name (ARN) that is assigned to fleet or fleet alias. ARNs, which include a fleet ID or alias ID and a Region name, provide a unique identifier across all Regions.
", "GetGameSessionLogUrlInput$GameSessionId": "A unique identifier for the game session to get logs for.
", - "PlayerSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that the player's game session is running on.
", "QueueArnsList$member": null, - "ResolveAliasOutput$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource that this alias points to.
", "StartMatchBackfillInput$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it. This is the same as the game session ID.
", - "UpdateGameSessionInput$GameSessionId": "A unique identifier for the game session to update.
", - "VpcPeeringConnection$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource for this connection.
" + "UpdateGameSessionInput$GameSessionId": "A unique identifier for the game session to update.
" } }, "AttributeValue": { @@ -162,10 +179,16 @@ "PlayerAttributeMap$value": null } }, + "AutoScalingGroupArn": { + "base": null, + "refs": { + "GameServerGroup$AutoScalingGroupArn": "A generated unique ID for the EC2 Auto Scaling group with is associated with this game server group.
" + } + }, "AwsCredentials": { "base": "Temporary access credentials used for uploading game build files to Amazon GameLift. They are valid for a limited time. If they expire before you upload your game build, get a new set by calling RequestUploadCredentials.
", "refs": { - "CreateBuildOutput$UploadCredentials": "This element is returned only when the operation is called without a storage location. It contains credentials to use when you are uploading a build file to an Amazon S3 bucket that is owned by Amazon GameLift. Credentials have a limited life span. To refresh these credentials, call RequestUploadCredentials.
", + "CreateBuildOutput$UploadCredentials": "This element is returned only when the operation is called without a storage location. It contains credentials to use when you are uploading a build file to an S3 bucket that is owned by Amazon GameLift. Credentials have a limited life span. To refresh these credentials, call RequestUploadCredentials.
", "RequestUploadCredentialsOutput$UploadCredentials": "AWS credentials required when uploading a game build to the storage location. These credentials have a limited lifespan and are valid only for the build they were issued for.
" } }, @@ -177,6 +200,14 @@ "UpdateMatchmakingConfigurationInput$BackfillMode": "The method that is used to backfill game sessions created with this matchmaking configuration. Specify MANUAL when your game manages backfill requests manually or does not use the match backfill feature. Specify AUTOMATIC to have GameLift create a StartMatchBackfill request whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
" } }, + "BalancingStrategy": { + "base": null, + "refs": { + "CreateGameServerGroupInput$BalancingStrategy": "The fallback balancing method to use for the game server group when Spot instances in a Region become unavailable or are not viable for game hosting. Once triggered, this method remains active until Spot instances can once again be used. Method options include:
SPOT_ONLY -- If Spot instances are unavailable, the game server group provides no hosting capacity. No new instances are started, and the existing nonviable Spot instances are terminated (once current gameplay ends) and not replaced.
SPOT_PREFERRED -- If Spot instances are unavailable, the game server group continues to provide hosting capacity by using On-Demand instances. Existing nonviable Spot instances are terminated (once current gameplay ends) and replaced with new On-Demand instances.
The fallback balancing method to use for the game server group when Spot instances in a Region become unavailable or are not viable for game hosting. Once triggered, this method remains active until Spot instances can once again be used. Method options include:
SPOT_ONLY -- If Spot instances are unavailable, the game server group provides no hosting capacity. No new instances are started, and the existing nonviable Spot instances are terminated (once current gameplay ends) and not replaced.
SPOT_PREFERRED -- If Spot instances are unavailable, the game server group continues to provide hosting capacity by using On-Demand instances. Existing nonviable Spot instances are terminated (once current gameplay ends) and replaced with new On-Demand instances.
The fallback balancing method to use for the game server group when Spot instances in a Region become unavailable or are not viable for game hosting. Once triggered, this method remains active until Spot instances can once again be used. Method options include:
SPOT_ONLY -- If Spot instances are unavailable, the game server group provides no hosting capacity. No new instances are started, and the existing nonviable Spot instances are terminated (once current gameplay ends) and not replaced.
SPOT_PREFERRED -- If Spot instances are unavailable, the game server group continues to provide hosting capacity by using On-Demand instances. Existing nonviable Spot instances are terminated (once current gameplay ends) and replaced with new On-Demand instances.
Properties describing a custom game build.
Related operations
", "refs": { "BuildList$member": null, - "CreateBuildOutput$Build": "The newly created build record, including a unique build IDs and status.
", + "CreateBuildOutput$Build": "The newly created build resource, including a unique build IDs and status.
", "DescribeBuildOutput$Build": "Set of properties describing the requested build.
", - "UpdateBuildOutput$Build": "The updated build record.
" + "UpdateBuildOutput$Build": "The updated build resource.
" } }, "BuildArn": { @@ -206,11 +237,16 @@ "base": null, "refs": { "Build$BuildId": "A unique identifier for a build.
", + "FleetAttributes$BuildId": "A unique identifier for a build.
" + } + }, + "BuildIdOrArn": { + "base": null, + "refs": { "CreateFleetInput$BuildId": "A unique identifier for a build to be deployed on the new fleet. You can use either the build ID or ARN value. The custom game server build must have been successfully uploaded to Amazon GameLift and be in a READY
status. This fleet setting cannot be changed once the fleet is created.
A unique identifier for a build to delete. You can use either the build ID or ARN value.
", "DescribeBuildInput$BuildId": "A unique identifier for a build to retrieve properties for. You can use either the build ID or ARN value.
", - "FleetAttributes$BuildId": "A unique identifier for a build.
", - "ListFleetsInput$BuildId": "A unique identifier for a build to return fleets for. Use this parameter to return only fleets using the specified build. Use either the build ID or ARN value.To retrieve all fleets, leave this parameter empty.
", + "ListFleetsInput$BuildId": "A unique identifier for a build to return fleets for. Use this parameter to return only fleets using a specified build. Use either the build ID or ARN value. To retrieve all fleets, do not include either a BuildId and ScriptID parameter.
", "RequestUploadCredentialsInput$BuildId": "A unique identifier for a build to get credentials for. You can use either the build ID or ARN value.
", "UpdateBuildInput$BuildId": "A unique identifier for a build to update. You can use either the build ID or ARN value.
" } @@ -218,7 +254,7 @@ "BuildList": { "base": null, "refs": { - "ListBuildsOutput$Builds": "A collection of build records that match the request.
" + "ListBuildsOutput$Builds": "A collection of build resources that match the request.
" } }, "BuildStatus": { @@ -241,6 +277,16 @@ "CertificateConfiguration$CertificateType": "Indicates whether a TLS/SSL certificate was generated for a fleet.
" } }, + "ClaimGameServerInput": { + "base": null, + "refs": { + } + }, + "ClaimGameServerOutput": { + "base": null, + "refs": { + } + }, "ComparisonOperatorType": { "base": null, "refs": { @@ -283,6 +329,16 @@ "refs": { } }, + "CreateGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "CreateGameServerGroupOutput": { + "base": null, + "refs": { + } + }, "CreateGameSessionInput": { "base": "
Represents the input for a request action.
", "refs": { @@ -396,6 +452,16 @@ "refs": { } }, + "DeleteGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "DeleteGameServerGroupOutput": { + "base": null, + "refs": { + } + }, "DeleteGameSessionQueueInput": { "base": "Represents the input for a request action.
", "refs": { @@ -456,6 +522,11 @@ "refs": { } }, + "DeregisterGameServerInput": { + "base": null, + "refs": { + } + }, "DescribeAliasInput": { "base": "Represents the input for a request action.
", "refs": { @@ -536,6 +607,26 @@ "refs": { } }, + "DescribeGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "DescribeGameServerGroupOutput": { + "base": null, + "refs": { + } + }, + "DescribeGameServerInput": { + "base": null, + "refs": { + } + }, + "DescribeGameServerOutput": { + "base": null, + "refs": { + } + }, "DescribeGameSessionDetailsInput": { "base": "Represents the input for a request action.
", "refs": { @@ -714,7 +805,7 @@ } }, "EC2InstanceCounts": { - "base": "Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.
Manage fleet actions:
Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.
Current status of fleet capacity.
" } @@ -751,7 +842,7 @@ "EventCode": { "base": null, "refs": { - "Event$EventCode": "The type of event being logged.
Fleet creation events (ordered by fleet creation activity):
FLEET_CREATED -- A fleet record was successfully created with a status of NEW
. Event messaging includes the fleet ID.
FLEET_STATE_DOWNLOADING -- Fleet status changed from NEW
to DOWNLOADING
. The compressed build has started downloading to a fleet instance for installation.
FLEET_BINARY_DOWNLOAD_FAILED -- The build failed to download to the fleet instance.
FLEET_CREATION_EXTRACTING_BUILD – The game server build was successfully downloaded to an instance, and the build files are now being extracted from the uploaded build and saved to an instance. Failure at this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage display a list of the files that are extracted and saved on the instance. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_RUNNING_INSTALLER – The game server build files were successfully extracted, and the Amazon GameLift is now running the build's install script (if one is included). Failure in this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage list the installation steps and whether or not the install completed successfully. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_VALIDATING_RUNTIME_CONFIG -- The build process was successful, and the Amazon GameLift is now verifying that the game server launch paths, which are specified in the fleet's runtime configuration, exist. If any listed launch path exists, Amazon GameLift tries to launch a game server process and waits for the process to report ready. Failures in this stage prevent a fleet from moving to ACTIVE
status. Logs for this stage list the launch paths in the runtime configuration and indicate whether each is found. Access the logs by using the URL in PreSignedLogUrl.
FLEET_STATE_VALIDATING -- Fleet status changed from DOWNLOADING
to VALIDATING
.
FLEET_VALIDATION_LAUNCH_PATH_NOT_FOUND -- Validation of the runtime configuration failed because the executable specified in a launch path does not exist on the instance.
FLEET_STATE_BUILDING -- Fleet status changed from VALIDATING
to BUILDING
.
FLEET_VALIDATION_EXECUTABLE_RUNTIME_FAILURE -- Validation of the runtime configuration failed because the executable specified in a launch path failed to run on the fleet instance.
FLEET_STATE_ACTIVATING -- Fleet status changed from BUILDING
to ACTIVATING
.
FLEET_ACTIVATION_FAILED - The fleet failed to successfully complete one of the steps in the fleet activation process. This event code indicates that the game build was successfully downloaded to a fleet instance, built, and validated, but was not able to start a server process. Learn more at Debug Fleet Creation Issues
FLEET_STATE_ACTIVE -- The fleet's status changed from ACTIVATING
to ACTIVE
. The fleet is now ready to host game sessions.
VPC peering events:
FLEET_VPC_PEERING_SUCCEEDED -- A VPC peering connection has been established between the VPC for an Amazon GameLift fleet and a VPC in your AWS account.
FLEET_VPC_PEERING_FAILED -- A requested VPC peering connection has failed. Event details and status information (see DescribeVpcPeeringConnections) provide additional detail. A common reason for peering failure is that the two VPCs have overlapping CIDR blocks of IPv4 addresses. To resolve this, change the CIDR block for the VPC in your AWS account. For more information on VPC peering failures, see https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
FLEET_VPC_PEERING_DELETED -- A VPC peering connection has been successfully deleted.
Spot instance events:
INSTANCE_INTERRUPTED -- A spot instance was interrupted by EC2 with a two-minute notification.
Other fleet events:
FLEET_SCALING_EVENT -- A change was made to the fleet's capacity settings (desired instances, minimum/maximum scaling limits). Event messaging includes the new capacity settings.
FLEET_NEW_GAME_SESSION_PROTECTION_POLICY_UPDATED -- A change was made to the fleet's game session protection policy setting. Event messaging includes both the old and new policy setting.
FLEET_DELETED -- A request to delete a fleet was initiated.
GENERIC_EVENT -- An unspecified event has occurred.
The type of event being logged.
Fleet creation events (ordered by fleet creation activity):
FLEET_CREATED -- A fleet resource was successfully created with a status of NEW
. Event messaging includes the fleet ID.
FLEET_STATE_DOWNLOADING -- Fleet status changed from NEW
to DOWNLOADING
. The compressed build has started downloading to a fleet instance for installation.
FLEET_BINARY_DOWNLOAD_FAILED -- The build failed to download to the fleet instance.
FLEET_CREATION_EXTRACTING_BUILD – The game server build was successfully downloaded to an instance, and the build files are now being extracted from the uploaded build and saved to an instance. Failure at this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage display a list of the files that are extracted and saved on the instance. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_RUNNING_INSTALLER – The game server build files were successfully extracted, and Amazon GameLift is now running the build's install script (if one is included). Failure in this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage list the installation steps and whether or not the install completed successfully. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_VALIDATING_RUNTIME_CONFIG -- The build process was successful, and Amazon GameLift is now verifying that the game server launch paths, which are specified in the fleet's runtime configuration, exist. If any listed launch path exists, Amazon GameLift tries to launch a game server process and waits for the process to report ready. Failures in this stage prevent a fleet from moving to ACTIVE
status. Logs for this stage list the launch paths in the runtime configuration and indicate whether each is found. Access the logs by using the URL in PreSignedLogUrl.
FLEET_STATE_VALIDATING -- Fleet status changed from DOWNLOADING
to VALIDATING
.
FLEET_VALIDATION_LAUNCH_PATH_NOT_FOUND -- Validation of the runtime configuration failed because the executable specified in a launch path does not exist on the instance.
FLEET_STATE_BUILDING -- Fleet status changed from VALIDATING
to BUILDING
.
FLEET_VALIDATION_EXECUTABLE_RUNTIME_FAILURE -- Validation of the runtime configuration failed because the executable specified in a launch path failed to run on the fleet instance.
FLEET_STATE_ACTIVATING -- Fleet status changed from BUILDING
to ACTIVATING
.
FLEET_ACTIVATION_FAILED - The fleet failed to successfully complete one of the steps in the fleet activation process. This event code indicates that the game build was successfully downloaded to a fleet instance, built, and validated, but was not able to start a server process. Learn more at Debug Fleet Creation Issues
FLEET_STATE_ACTIVE -- The fleet's status changed from ACTIVATING
to ACTIVE
. The fleet is now ready to host game sessions.
VPC peering events:
FLEET_VPC_PEERING_SUCCEEDED -- A VPC peering connection has been established between the VPC for an Amazon GameLift fleet and a VPC in your AWS account.
FLEET_VPC_PEERING_FAILED -- A requested VPC peering connection has failed. Event details and status information (see DescribeVpcPeeringConnections) provide additional detail. A common reason for peering failure is that the two VPCs have overlapping CIDR blocks of IPv4 addresses. To resolve this, change the CIDR block for the VPC in your AWS account. For more information on VPC peering failures, see https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
FLEET_VPC_PEERING_DELETED -- A VPC peering connection has been successfully deleted.
Spot instance events:
INSTANCE_INTERRUPTED -- A spot instance was interrupted by EC2 with a two-minute notification.
Other fleet events:
FLEET_SCALING_EVENT -- A change was made to the fleet's capacity settings (desired instances, minimum/maximum scaling limits). Event messaging includes the new capacity settings.
FLEET_NEW_GAME_SESSION_PROTECTION_POLICY_UPDATED -- A change was made to the fleet's game session protection policy setting. Event messaging includes both the old and new policy setting.
FLEET_DELETED -- A request to delete a fleet was initiated.
GENERIC_EVENT -- An unspecified event has occurred.
List of actions to suspend on the fleet.
" } }, + "FleetArn": { + "base": null, + "refs": { + "FleetAttributes$FleetArn": "The Amazon Resource Name (ARN) that is assigned to a GameLift fleet resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift fleet ARN, the resource ID matches the FleetId value.
", + "GameSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that this game session is running on.
", + "PlayerSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that the player's game session is running on.
", + "ResolveAliasOutput$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource that this alias points to.
", + "VpcPeeringConnection$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource for this connection.
" + } + }, "FleetAttributes": { - "base": "General properties describing a fleet.
Manage fleet actions:
General properties describing a fleet.
Properties for the newly created fleet.
", "FleetAttributesList$member": null @@ -784,11 +885,11 @@ "FleetAttributesList": { "base": null, "refs": { - "DescribeFleetAttributesOutput$FleetAttributes": "A collection of objects containing attribute metadata for each requested fleet ID.
" + "DescribeFleetAttributesOutput$FleetAttributes": "A collection of objects containing attribute metadata for each requested fleet ID. Attribute objects are returned only for fleets that currently exist.
" } }, "FleetCapacity": { - "base": "Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.
Manage fleet actions:
Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.
A unique identifier for a fleet to create a game session in. You can use either the fleet ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", "CreateVpcPeeringConnectionInput$FleetId": "A unique identifier for a fleet. You can use either the fleet ID or ARN value. This tells Amazon GameLift which GameLift VPC to peer with.
", - "DeleteFleetInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", - "DeleteScalingPolicyInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", "DeleteVpcPeeringConnectionInput$FleetId": "A unique identifier for a fleet. This fleet specified must match the fleet referenced in the VPC peering connection record. You can use either the fleet ID or ARN value.
", - "DescribeFleetEventsInput$FleetId": "A unique identifier for a fleet to get event logs for. You can use either the fleet ID or ARN value.
", - "DescribeFleetPortSettingsInput$FleetId": "A unique identifier for a fleet to retrieve port settings for. You can use either the fleet ID or ARN value.
", - "DescribeGameSessionDetailsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions active on the fleet. You can use either the fleet ID or ARN value.
", - "DescribeGameSessionsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions for. You can use either the fleet ID or ARN value.
", - "DescribeInstancesInput$FleetId": "A unique identifier for a fleet to retrieve instance information for. You can use either the fleet ID or ARN value.
", - "DescribeRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to get the runtime configuration for. You can use either the fleet ID or ARN value.
", - "DescribeScalingPoliciesInput$FleetId": "A unique identifier for a fleet to retrieve scaling policies for. You can use either the fleet ID or ARN value.
", "DescribeVpcPeeringConnectionsInput$FleetId": "A unique identifier for a fleet. You can use either the fleet ID or ARN value.
", "FleetAttributes$FleetId": "A unique identifier for a fleet.
", "FleetCapacity$FleetId": "A unique identifier for a fleet.
", "FleetIdList$member": null, "FleetUtilization$FleetId": "A unique identifier for a fleet.
", "GameSession$FleetId": "A unique identifier for a fleet that the game session is running on.
", - "GetInstanceAccessInput$FleetId": "A unique identifier for a fleet that contains the instance you want access to. You can use either the fleet ID or ARN value. The fleet can be in any of the following statuses: ACTIVATING
, ACTIVE
, or ERROR
. Fleets with an ERROR
status may be accessible for a short time before they are deleted.
A unique identifier for a fleet that the instance is in.
", "InstanceAccess$FleetId": "A unique identifier for a fleet containing the instance being accessed.
", "PlayerSession$FleetId": "A unique identifier for a fleet that the player's game session is running on.
", - "PutScalingPolicyInput$FleetId": "A unique identifier for a fleet to apply this policy to. You can use either the fleet ID or ARN value. The fleet cannot be in any of the following statuses: ERROR or DELETING.
", "ResolveAliasOutput$FleetId": "The fleet identifier that the alias is pointing to.
", "RoutingStrategy$FleetId": "The unique identifier for a fleet that the alias points to. This value is the fleet ID, not the fleet ARN.
", "ScalingPolicy$FleetId": "A unique identifier for a fleet that is associated with this scaling policy.
", + "UpdateFleetAttributesOutput$FleetId": "A unique identifier for a fleet that was updated. Use either the fleet ID or ARN value.
", + "UpdateFleetCapacityOutput$FleetId": "A unique identifier for a fleet that was updated.
", + "UpdateFleetPortSettingsOutput$FleetId": "A unique identifier for a fleet that was updated.
", + "VpcPeeringConnection$FleetId": "A unique identifier for a fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.
" + } + }, + "FleetIdList": { + "base": null, + "refs": { + "ListFleetsOutput$FleetIds": "Set of fleet IDs matching the list request. You can retrieve additional information about all returned fleets by passing this result set to a call to DescribeFleetAttributes, DescribeFleetCapacity, or DescribeFleetUtilization.
" + } + }, + "FleetIdOrArn": { + "base": null, + "refs": { + "CreateGameSessionInput$FleetId": "A unique identifier for a fleet to create a game session in. You can use either the fleet ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", + "DeleteFleetInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", + "DeleteScalingPolicyInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", + "DescribeFleetEventsInput$FleetId": "A unique identifier for a fleet to get event logs for. You can use either the fleet ID or ARN value.
", + "DescribeFleetPortSettingsInput$FleetId": "A unique identifier for a fleet to retrieve port settings for. You can use either the fleet ID or ARN value.
", + "DescribeGameSessionDetailsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions active on the fleet. You can use either the fleet ID or ARN value.
", + "DescribeGameSessionsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions for. You can use either the fleet ID or ARN value.
", + "DescribeInstancesInput$FleetId": "A unique identifier for a fleet to retrieve instance information for. You can use either the fleet ID or ARN value.
", + "DescribeRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to get the runtime configuration for. You can use either the fleet ID or ARN value.
", + "DescribeScalingPoliciesInput$FleetId": "A unique identifier for a fleet to retrieve scaling policies for. You can use either the fleet ID or ARN value.
", + "FleetIdOrArnList$member": null, + "GetInstanceAccessInput$FleetId": "A unique identifier for a fleet that contains the instance you want access to. You can use either the fleet ID or ARN value. The fleet can be in any of the following statuses: ACTIVATING
, ACTIVE
, or ERROR
. Fleets with an ERROR
status may be accessible for a short time before they are deleted.
A unique identifier for a fleet to apply this policy to. You can use either the fleet ID or ARN value. The fleet cannot be in any of the following statuses: ERROR or DELETING.
", "SearchGameSessionsInput$FleetId": "A unique identifier for a fleet to search for active game sessions. You can use either the fleet ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", "StartFleetActionsInput$FleetId": "A unique identifier for a fleet to start actions on. You can use either the fleet ID or ARN value.
", "StopFleetActionsInput$FleetId": "A unique identifier for a fleet to stop actions on. You can use either the fleet ID or ARN value.
", "UpdateFleetAttributesInput$FleetId": "A unique identifier for a fleet to update attribute metadata for. You can use either the fleet ID or ARN value.
", - "UpdateFleetAttributesOutput$FleetId": "A unique identifier for a fleet that was updated. Use either the fleet ID or ARN value.
", "UpdateFleetCapacityInput$FleetId": "A unique identifier for a fleet to update capacity for. You can use either the fleet ID or ARN value.
", - "UpdateFleetCapacityOutput$FleetId": "A unique identifier for a fleet that was updated.
", "UpdateFleetPortSettingsInput$FleetId": "A unique identifier for a fleet to update port settings for. You can use either the fleet ID or ARN value.
", - "UpdateFleetPortSettingsOutput$FleetId": "A unique identifier for a fleet that was updated.
", - "UpdateRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to update runtime configuration for. You can use either the fleet ID or ARN value.
", - "VpcPeeringConnection$FleetId": "A unique identifier for a fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.
" + "UpdateRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to update runtime configuration for. You can use either the fleet ID or ARN value.
" } }, - "FleetIdList": { + "FleetIdOrArnList": { "base": null, "refs": { - "DescribeFleetAttributesInput$FleetIds": "A unique identifier for a fleet(s) to retrieve attributes for. You can use either the fleet ID or ARN value.
", + "DescribeFleetAttributesInput$FleetIds": "A list of unique fleet identifiers to retrieve attributes for. You can use either the fleet ID or ARN value. To retrieve attributes for all current fleets, do not include this parameter. If the list of fleet identifiers includes fleets that don't currently exist, the request succeeds but no attributes for that fleet are returned.
", "DescribeFleetCapacityInput$FleetIds": "A unique identifier for a fleet(s) to retrieve capacity information for. You can use either the fleet ID or ARN value.
", - "DescribeFleetUtilizationInput$FleetIds": "A unique identifier for a fleet(s) to retrieve utilization data for. You can use either the fleet ID or ARN value.
", - "ListFleetsOutput$FleetIds": "Set of fleet IDs matching the list request. You can retrieve additional information about all returned fleets by passing this result set to a call to DescribeFleetAttributes, DescribeFleetCapacity, or DescribeFleetUtilization.
" + "DescribeFleetUtilizationInput$FleetIds": "A unique identifier for a fleet(s) to retrieve utilization data for. You can use either the fleet ID or ARN value. To retrieve attributes for all current fleets, do not include this parameter. If the list of fleet identifiers includes fleets that don't currently exist, the request succeeds but no attributes for that fleet are returned.
" } }, "FleetStatus": { @@ -869,7 +981,7 @@ } }, "FleetUtilization": { - "base": "Current status of fleet utilization, including the number of game and player sessions being hosted.
Manage fleet actions:
Current status of fleet utilization, including the number of game and player sessions being hosted.
The game property value.
" } }, + "GameServer": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Properties describing a game server resource.
A game server resource is created by a successful call to RegisterGameServer and deleted by calling DeregisterGameServer.
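A hedged Go sketch of that lifecycle, reusing `svc` and `ctx` from the ListFleets sketch above. The operation and member names come from the model entries added here; the generated Go signatures and all identifiers (group name, server ID, instance ID, address) are assumptions or placeholders:

```go
// Register a game server process that has just started on an instance in the group.
regReq := svc.RegisterGameServerRequest(&gamelift.RegisterGameServerInput{
	GameServerGroupName: aws.String("my-gsg"),              // placeholder group name
	GameServerId:        aws.String("gs-process-0001"),     // developer-defined, unique per account
	InstanceId:          aws.String("i-0123456789abcdef0"), // taken from instance metadata
	ConnectionInfo:      aws.String("10.0.0.12:7777"),      // how clients reach this process
})
if _, err := regReq.Send(ctx); err != nil {
	log.Fatal(err)
}

// When the process exits, remove the game server resource again.
deregReq := svc.DeregisterGameServerRequest(&gamelift.DeregisterGameServerInput{
	GameServerGroupName: aws.String("my-gsg"),
	GameServerId:        aws.String("gs-process-0001"),
})
if _, err := deregReq.Send(ctx); err != nil {
	log.Fatal(err)
}
```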
", + "refs": { + "ClaimGameServerOutput$GameServer": "Object that describes the newly claimed game server resource.
", + "DescribeGameServerOutput$GameServer": "Object that describes the requested game server resource.
", + "GameServers$member": null, + "RegisterGameServerOutput$GameServer": "Object that describes the newly created game server resource.
", + "UpdateGameServerOutput$GameServer": "Object that describes the newly updated game server resource.
" + } + }, + "GameServerClaimStatus": { + "base": null, + "refs": { + "GameServer$ClaimStatus": "Indicates when an available game server has been reserved but has not yet started hosting a game. Once it is claimed, game server remains in CLAIMED status for a maximum of one minute. During this time, game clients must connect to the game server and start the game, which triggers the game server to update its utilization status. After one minute, the game server claim status reverts to null.
" + } + }, + "GameServerConnectionInfo": { + "base": null, + "refs": { + "GameServer$ConnectionInfo": "The port and IP address that must be used to establish a client connection to the game server.
", + "RegisterGameServerInput$ConnectionInfo": "Information needed to make inbound client connections to the game server. This might include IP address and port, DNS name, etc.
" + } + }, + "GameServerData": { + "base": null, + "refs": { + "ClaimGameServerInput$GameServerData": "A set of custom game server properties, formatted as a single string value, to be passed to the claimed game server.
", + "GameServer$GameServerData": "A set of custom game server properties, formatted as a single string value. This data is passed to a game client or service in response to requests ListGameServers or ClaimGameServer. This property can be updated using UpdateGameServer.
", + "RegisterGameServerInput$GameServerData": "A set of custom game server properties, formatted as a single string value. This data is passed to a game client or service when it requests information on a game servers using ListGameServers or ClaimGameServer.
", + "UpdateGameServerInput$GameServerData": "A set of custom game server properties, formatted as a single string value. This data is passed to a game client or service when it requests information on a game servers using DescribeGameServer or ClaimGameServer.
" + } + }, + "GameServerGroup": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Properties describing a game server group resource. A game server group manages certain properties of a corresponding EC2 Auto Scaling group.
A game server group is created by a successful call to CreateGameServerGroup and deleted by calling DeleteGameServerGroup. Game server group activity can be temporarily suspended and resumed by calling SuspendGameServerGroup and ResumeGameServerGroup.
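A sketch of creating such a group, again reusing `svc` and `ctx` from the earlier ListFleets example. The member names follow this model; the IAM role ARN, launch template name, instance types, and group sizes are placeholders, and the exact generated Go types are assumed:

```go
createReq := svc.CreateGameServerGroupRequest(&gamelift.CreateGameServerGroupInput{
	GameServerGroupName: aws.String("my-gsg"),                                          // unique per Region per account
	RoleArn:             aws.String("arn:aws:iam::123456789012:role/gamelift-fleetiq"), // placeholder role
	MinSize:             aws.Int64(1),  // assumed member; lower bound for the Auto Scaling group
	MaxSize:             aws.Int64(10), // upper bound for the Auto Scaling group
	LaunchTemplate: &gamelift.LaunchTemplateSpecification{
		LaunchTemplateName: aws.String("my-game-server-template"), // placeholder template
	},
	// At least two instance types that GameLift FleetIQ supports.
	InstanceDefinitions: []gamelift.InstanceDefinition{
		{InstanceType: gamelift.GameServerGroupInstanceType("c5.xlarge")},
		{InstanceType: gamelift.GameServerGroupInstanceType("c5.2xlarge")},
	},
})
out, err := createReq.Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println("group status:", out.GameServerGroup.Status) // NEW until activation completes
```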
", + "refs": { + "CreateGameServerGroupOutput$GameServerGroup": "The newly created game server group object, including the new ARN value for the GameLift FleetIQ game server group and the object's status. The EC2 Auto Scaling group ARN is initially null, since the group has not yet been created. This value is added once the game server group status reaches ACTIVE.
", + "DeleteGameServerGroupOutput$GameServerGroup": "An object that describes the deleted game server group resource, with status updated to DELETE_SCHEDULED.
", + "DescribeGameServerGroupOutput$GameServerGroup": "An object that describes the requested game server group resource.
", + "GameServerGroups$member": null, + "ResumeGameServerGroupOutput$GameServerGroup": "An object that describes the game server group resource, with the SuspendedActions property updated to reflect the resumed activity.
", + "SuspendGameServerGroupOutput$GameServerGroup": "An object that describes the game server group resource, with the SuspendedActions property updated to reflect the suspended activity.
", + "UpdateGameServerGroupOutput$GameServerGroup": "An object that describes the game server group resource with updated properties.
" + } + }, + "GameServerGroupAction": { + "base": null, + "refs": { + "GameServerGroupActions$member": null + } + }, + "GameServerGroupActions": { + "base": null, + "refs": { + "GameServerGroup$SuspendedActions": "A list of activities that are currently suspended for this game server group. If this property is empty, all activities are occurring.
", + "ResumeGameServerGroupInput$ResumeActions": "The action to resume for this game server group.
", + "SuspendGameServerGroupInput$SuspendActions": "The action to suspend for this game server group.
" + } + }, + "GameServerGroupArn": { + "base": null, + "refs": { + "GameServer$GameServerGroupArn": "The ARN identifier for the game server group where the game server is located.
", + "GameServerGroup$GameServerGroupArn": "A generated unique ID for the game server group.
" + } + }, + "GameServerGroupAutoScalingPolicy": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Configuration settings for intelligent autoscaling that uses target tracking. An autoscaling policy can be specified when a new game server group is created with CreateGameServerGroup. If a group has an autoscaling policy, the Auto Scaling group takes action based on this policy, in addition to (and potentially in conflict with) any other autoscaling policies that are separately applied to the Auto Scaling group.
", + "refs": { + "CreateGameServerGroupInput$AutoScalingPolicy": "Configuration settings to define a scaling policy for the Auto Scaling group that is optimized for game hosting. The scaling policy uses the metric \"PercentUtilizedGameServers\" to maintain a buffer of idle game servers that can immediately accommodate new games and players. Once the game server and Auto Scaling groups are created, you can update the scaling policy settings directly in Auto Scaling Groups.
" + } + }, + "GameServerGroupDeleteOption": { + "base": null, + "refs": { + "DeleteGameServerGroupInput$DeleteOption": "The type of delete to perform. Options include:
SAFE_DELETE – Terminates the game server group and EC2 Auto Scaling group only when it has no game servers that are in IN_USE status.
FORCE_DELETE – Terminates the game server group, including all active game servers regardless of their utilization status, and the EC2 Auto Scaling group.
RETAIN – Does a safe delete of the game server group but retains the EC2 Auto Scaling group as is.
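A sketch of a delete that keeps the EC2 Auto Scaling group itself (the RETAIN option above), reusing `svc` and `ctx` from the earlier sketches; the enum is written via its string form rather than a guessed generated constant name:

```go
delReq := svc.DeleteGameServerGroupRequest(&gamelift.DeleteGameServerGroupInput{
	GameServerGroupName: aws.String("my-gsg"), // placeholder
	DeleteOption:        gamelift.GameServerGroupDeleteOption("RETAIN"),
})
out, err := delReq.Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println(out.GameServerGroup.Status) // DELETE_SCHEDULED once the request is accepted
```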
An EC2 instance type designation.
" + } + }, + "GameServerGroupName": { + "base": null, + "refs": { + "CreateGameServerGroupInput$GameServerGroupName": "An identifier for the new game server group. This value is used to generate unique ARN identifiers for the EC2 Auto Scaling group and the GameLift FleetIQ game server group. The name must be unique per Region per AWS account.
", + "GameServer$GameServerGroupName": "The name identifier for the game server group where the game server is located.
", + "GameServerGroup$GameServerGroupName": "A developer-defined identifier for the game server group. The name is unique per Region per AWS account.
" + } + }, + "GameServerGroupNameOrArn": { + "base": null, + "refs": { + "ClaimGameServerInput$GameServerGroupName": "An identifier for the game server group. When claiming a specific game server, this is the game server group whether the game server is located. When requesting that GameLift FleetIQ locate an available game server, this is the game server group to search on. You can use either the GameServerGroup name or ARN value.
", + "DeleteGameServerGroupInput$GameServerGroupName": "The unique identifier of the game server group to delete. Use either the GameServerGroup name or ARN value.
", + "DeregisterGameServerInput$GameServerGroupName": "An identifier for the game server group where the game server to be de-registered is running. Use either the GameServerGroup name or ARN value.
", + "DescribeGameServerGroupInput$GameServerGroupName": "The unique identifier for the game server group being requested. Use either the GameServerGroup name or ARN value.
", + "DescribeGameServerInput$GameServerGroupName": "An identifier for the game server group where the game server is running. Use either the GameServerGroup name or ARN value.
", + "ListGameServersInput$GameServerGroupName": "An identifier for the game server group for the game server you want to list. Use either the GameServerGroup name or ARN value.
", + "RegisterGameServerInput$GameServerGroupName": "An identifier for the game server group where the game server is running. You can use either the GameServerGroup name or ARN value.
", + "ResumeGameServerGroupInput$GameServerGroupName": "The unique identifier of the game server group to resume activity on. Use either the GameServerGroup name or ARN value.
", + "SuspendGameServerGroupInput$GameServerGroupName": "The unique identifier of the game server group to stop activity on. Use either the GameServerGroup name or ARN value.
", + "UpdateGameServerGroupInput$GameServerGroupName": "The unique identifier of the game server group to update. Use either the GameServerGroup name or ARN value.
", + "UpdateGameServerInput$GameServerGroupName": "An identifier for the game server group where the game server is running. Use either the GameServerGroup name or ARN value.
" + } + }, + "GameServerGroupStatus": { + "base": null, + "refs": { + "GameServerGroup$Status": "The current status of the game server group. Possible statuses include:
NEW - GameLift FleetIQ has validated the CreateGameServerGroup()
request.
ACTIVATING - GameLift FleetIQ is setting up a game server group, which includes creating an autoscaling group in your AWS account.
ACTIVE - The game server group has been successfully created.
DELETE_SCHEDULED - A request to delete the game server group has been received.
DELETING - GameLift FleetIQ has received a valid DeleteGameServerGroup()
request and is processing it. GameLift FleetIQ must first complete and release hosts before it deletes the autoscaling group and the game server group.
DELETED - The game server group has been successfully deleted.
ERROR - The asynchronous processes of activating or deleting a game server group have failed, resulting in an error state.
A collection of game server group objects that match the request.
" + } + }, + "GameServerHealthCheck": { + "base": null, + "refs": { + "UpdateGameServerInput$HealthCheck": "Indicates health status of the game server. An update that explicitly includes this parameter updates the game server's LastHealthCheckTime time stamp.
" + } + }, + "GameServerId": { + "base": null, + "refs": { + "ClaimGameServerInput$GameServerId": "A custom string that uniquely identifies the game server to claim. If this parameter is left empty, GameLift FleetIQ searches for an available game server in the specified game server group.
", + "DeregisterGameServerInput$GameServerId": "The identifier for the game server to be de-registered.
", + "DescribeGameServerInput$GameServerId": "The identifier for the game server to be retrieved.
", + "GameServer$GameServerId": "A custom string that uniquely identifies the game server. Game server IDs are developer-defined and are unique across all game server groups in an AWS account.
", + "RegisterGameServerInput$GameServerId": "A custom string that uniquely identifies the new game server. Game server IDs are developer-defined and must be unique across all game server groups in your AWS account.
", + "UpdateGameServerInput$GameServerId": "The identifier for the game server to be updated.
" + } + }, + "GameServerInstanceId": { + "base": null, + "refs": { + "GameServer$InstanceId": "The unique identifier for the instance where the game server is located.
", + "RegisterGameServerInput$InstanceId": "The unique identifier for the instance where the game server is running. This ID is available in the instance metadata.
" + } + }, + "GameServerProtectionPolicy": { + "base": null, + "refs": { + "CreateGameServerGroupInput$GameServerProtectionPolicy": "A flag that indicates whether instances in the game server group are protected from early termination. Unprotected instances that have active game servers running may by terminated during a scale-down event, causing players to be dropped from the game. Protected instances cannot be terminated while there are active game servers running. An exception to this is Spot Instances, which may be terminated by AWS regardless of protection status. This property is set to NO_PROTECTION by default.
", + "GameServerGroup$GameServerProtectionPolicy": "A flag that indicates whether instances in the game server group are protected from early termination. Unprotected instances that have active game servers running may be terminated during a scale-down event, causing players to be dropped from the game. Protected instances cannot be terminated while there are active game servers running except in the event of a forced game server group deletion (see DeleteGameServerGroup). An exception to this is Spot Instances, which may be terminated by AWS regardless of protection status.
", + "UpdateGameServerGroupInput$GameServerProtectionPolicy": "A flag that indicates whether instances in the game server group are protected from early termination. Unprotected instances that have active game servers running may by terminated during a scale-down event, causing players to be dropped from the game. Protected instances cannot be terminated while there are active game servers running. An exception to this is Spot Instances, which may be terminated by AWS regardless of protection status. This property is set to NO_PROTECTION by default.
" + } + }, + "GameServerSortKey": { + "base": null, + "refs": { + "GameServer$CustomSortKey": "A game server tag that can be used to request sorted lists of game servers when calling ListGameServers. Custom sort keys are developer-defined. This property can be updated using UpdateGameServer.
", + "RegisterGameServerInput$CustomSortKey": "A game server tag that can be used to request sorted lists of game servers using ListGameServers. Custom sort keys are developer-defined based on how you want to organize the retrieved game server information.
", + "UpdateGameServerInput$CustomSortKey": "A game server tag that can be used to request sorted lists of game servers using ListGameServers. Custom sort keys are developer-defined based on how you want to organize the retrieved game server information.
" + } + }, + "GameServerUtilizationStatus": { + "base": null, + "refs": { + "GameServer$UtilizationStatus": "Indicates whether the game server is currently available for new games or is busy. Possible statuses include:
AVAILABLE - The game server is available to be claimed. A game server that has been claimed remains in this status until it reports game hosting activity.
IN_USE - The game server is currently hosting a game session with players.
Indicates whether the game server is available or is currently hosting gameplay.
" + } + }, + "GameServers": { + "base": null, + "refs": { + "ListGameServersOutput$GameServers": "A collection of game server objects that match the request.
" + } + }, "GameSession": { "base": "Properties describing a game session.
A game session in ACTIVE status can host players. When a game session ends, its status is set to TERMINATED
.
Once the session ends, the game session object is retained for 30 days. This means you can reuse idempotency token values after this time. Game session logs are retained for 14 days.
Game session placements
An object that describes the newly updated game session queue.
" } }, + "GameSessionQueueArn": { + "base": null, + "refs": { + "GameSessionQueue$GameSessionQueueArn": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift game session queue ARN, the resource ID matches the Name value.
" + } + }, "GameSessionQueueDestination": { "base": "Fleet designated in a game session queue. Requests for new game sessions in the queue are fulfilled by starting a new game session on any destination that is configured for a queue.
", "refs": { @@ -1028,15 +1318,20 @@ "base": null, "refs": { "CreateGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", - "DeleteGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region. You can use either the queue ID or ARN value.
", "GameSessionPlacement$GameSessionQueueName": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", - "GameSessionQueue$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", - "GameSessionQueueNameList$member": null, - "StartGameSessionPlacementInput$GameSessionQueueName": "Name of the queue to use to place the new game session. You can use either the qieue name or ARN value.
", + "GameSessionQueue$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
" + } + }, + "GameSessionQueueNameOrArn": { + "base": null, + "refs": { + "DeleteGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region. You can use either the queue ID or ARN value.
", + "GameSessionQueueNameOrArnList$member": null, + "StartGameSessionPlacementInput$GameSessionQueueName": "Name of the queue to use to place the new game session. You can use either the queue name or ARN value.
", "UpdateGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region. You can use either the queue ID or ARN value.
" } }, - "GameSessionQueueNameList": { + "GameSessionQueueNameOrArnList": { "base": null, "refs": { "DescribeGameSessionQueuesInput$Names": "A list of queue names to retrieve information for. You can use either the queue ID or ARN value. To request settings for all queues, leave this parameter empty.
" @@ -1074,6 +1369,14 @@ "refs": { } }, + "IamRoleArn": { + "base": null, + "refs": { + "CreateGameServerGroupInput$RoleArn": "The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your EC2 Auto Scaling groups. The submitted role is validated to ensure that it contains the necessary permissions for game server groups.
", + "GameServerGroup$RoleArn": "The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your EC2 Auto Scaling groups. The submitted role is validated to ensure that it contains the necessary permissions for game server groups.
", + "UpdateGameServerGroupInput$RoleArn": "The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access your EC2 Auto Scaling groups. The submitted role is validated to ensure that it contains the necessary permissions for game server groups.
" + } + }, "IdStringModel": { "base": null, "refs": { @@ -1108,6 +1411,20 @@ "InstanceAccess$Credentials": "Credentials required to access the instance.
" } }, + "InstanceDefinition": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
An allowed instance type for your game server group. GameLift FleetIQ periodically evaluates each defined instance type for viability. It then updates the Auto Scaling group with the list of viable instance types.
", + "refs": { + "InstanceDefinitions$member": null + } + }, + "InstanceDefinitions": { + "base": null, + "refs": { + "CreateGameServerGroupInput$InstanceDefinitions": "A set of EC2 instance types to use when creating instances in the group. The instance definitions must specify at least two different instance types that are supported by GameLift FleetIQ. For more information on instance types, see EC2 Instance Types in the Amazon EC2 User Guide.
", + "GameServerGroup$InstanceDefinitions": "The set of EC2 instance types that GameLift FleetIQ can use when rebalancing and autoscaling instances in the group.
", + "UpdateGameServerGroupInput$InstanceDefinitions": "An updated list of EC2 instance types to use when creating instances in the group. The instance definition must specify instance types that are supported by GameLift FleetIQ, and must include at least two instance types. This updated list replaces the entire current list of instance definitions for the game server group. For more information on instance types, see EC2 Instance Types in the Amazon EC2 User Guide..
" + } + }, "InstanceId": { "base": null, "refs": { @@ -1177,8 +1494,8 @@ "refs": { "CreateFleetInput$EC2InboundPermissions": "Range of IP addresses and port settings that permit inbound traffic to access game sessions that are running on the fleet. For fleets using a custom game build, this parameter is required before game sessions running on the fleet can accept connections. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges for use by the Realtime servers. You can specify multiple permission settings or add more by updating the fleet.
", "DescribeFleetPortSettingsOutput$InboundPermissions": "The port settings for the requested fleet ID.
", - "UpdateFleetPortSettingsInput$InboundPermissionAuthorizations": "A collection of port settings to be added to the fleet record.
", - "UpdateFleetPortSettingsInput$InboundPermissionRevocations": "A collection of port settings to be removed from the fleet record.
" + "UpdateFleetPortSettingsInput$InboundPermissionAuthorizations": "A collection of port settings to be added to the fleet resource.
", + "UpdateFleetPortSettingsInput$InboundPermissionRevocations": "A collection of port settings to be removed from the fleet resource.
" } }, "IpProtocol": { @@ -1193,6 +1510,30 @@ "Player$LatencyInMs": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS Regions. If this property is present, FlexMatch considers placing the match only in Regions for which latency is reported.
If a matchmaker has a rule that evaluates player latency, players must report latency in order to be matched. If no latency is reported in this scenario, FlexMatch assumes that no Regions are available to the player and the ticket is not matchable.
" } }, + "LaunchTemplateId": { + "base": null, + "refs": { + "LaunchTemplateSpecification$LaunchTemplateId": "A unique identifier for an existing EC2 launch template.
" + } + }, + "LaunchTemplateName": { + "base": null, + "refs": { + "LaunchTemplateSpecification$LaunchTemplateName": "A readable identifier for an existing EC2 launch template.
" + } + }, + "LaunchTemplateSpecification": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
An EC2 launch template that contains configuration settings and game server code to be deployed to all instances in a game server group.
", + "refs": { + "CreateGameServerGroupInput$LaunchTemplate": "The EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see Creating a Launch Template for an Auto Scaling Group in the Amazon EC2 Auto Scaling User Guide.
" + } + }, + "LaunchTemplateVersion": { + "base": null, + "refs": { + "LaunchTemplateSpecification$Version": "The version of the EC2 launch template to use. If no version is specified, the default version will be used. EC2 allows you to specify a default version for a launch template, if none is set, the default is the first version created.
" + } + }, "LimitExceededException": { "base": "The requested operation would cause the resource to exceed the allowed service limit. Resolve the issue before retrying.
", "refs": { @@ -1228,6 +1569,26 @@ "refs": { } }, + "ListGameServerGroupsInput": { + "base": null, + "refs": { + } + }, + "ListGameServerGroupsOutput": { + "base": null, + "refs": { + } + }, + "ListGameServersInput": { + "base": null, + "refs": { + } + }, + "ListGameServersOutput": { + "base": null, + "refs": { + } + }, "ListScriptsInput": { "base": null, "refs": { @@ -1469,7 +1830,8 @@ "ListScriptsInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "ListScriptsOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "NotFoundException$Message": null, - "S3Location$Bucket": "An Amazon S3 bucket identifier. This is the name of the S3 bucket.
", + "OutOfCapacityException$Message": null, + "S3Location$Bucket": "An S3 bucket identifier. This is the name of the S3 bucket.
", "S3Location$Key": "The name of the zip file that contains the build files or script files.
", "S3Location$RoleArn": "The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access the S3 bucket.
", "S3Location$ObjectVersion": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.
", @@ -1479,6 +1841,12 @@ "UnsupportedRegionException$Message": null } }, + "NonNegativeDouble": { + "base": null, + "refs": { + "TargetTrackingConfiguration$TargetValue": "Desired value to use with a game server group target-based scaling policy.
" + } + }, "NonZeroAndMaxString": { "base": null, "refs": { @@ -1542,6 +1910,7 @@ "FleetAttributes$Name": "A descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", "FleetAttributes$ServerLaunchPath": "Path to a game server executable in the fleet's build, specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch paths for fleets created after this date are specified in the fleet's RuntimeConfiguration.
", "FleetAttributes$ServerLaunchParameters": "Game server launch parameters specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch parameters for fleets created after this date are specified in the fleet's RuntimeConfiguration.
", + "GameServerGroup$StatusReason": "Additional information about the current game server group status. This information may provide additional insight on groups that in ERROR status.
", "GameSession$GameSessionId": "A unique identifier for the game session. A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>
.
A descriptive label that is associated with a game session. Session names do not need to be unique.
", "GameSession$CreatorId": "A unique identifier for a player. This ID is used to enforce a resource protection policy (if one exists), that limits the number of game sessions a player can create.
", @@ -1552,6 +1921,10 @@ "GetGameSessionLogUrlOutput$PreSignedUrl": "Location of the requested game session logs, available for download. This URL is valid for 15 minutes, after which S3 will reject any download request using this URL. You can request a new URL any time within the 14-day period that the logs are retained.
", "ListFleetsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "ListFleetsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "ListGameServerGroupsInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "ListGameServerGroupsOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "ListGameServersInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "ListGameServersOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "MatchedPlayerSession$PlayerId": "A unique identifier for a player
", "MatchmakingConfiguration$Description": "A descriptive label that is associated with matchmaking configuration.
", "PlacedPlayerSession$PlayerId": "A unique identifier for a player that is associated with this player session.
", @@ -1613,6 +1986,11 @@ "InstanceAccess$OperatingSystem": "Operating system that is running on the instance.
" } }, + "OutOfCapacityException": { + "base": "The specified game server group has no available game servers to fulfill a ClaimGameServer
request. Clients can retry such requests immediately or after a waiting period.
Information about a player session that was created as part of a StartGameSessionPlacement request. This object contains only the player ID and player session ID. To retrieve full details on a player session, call DescribePlayerSessions with the player session ID.
Game session placements
The maximum number of instances allowed in the EC2 Auto Scaling group. During autoscaling events, GameLift FleetIQ and EC2 do not scale up the group above this maximum.
", "DescribeFleetAttributesInput$Limit": "The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is limited to 10.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. If a player session ID is specified, this parameter is ignored.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Length of time, in seconds, it takes for a new instance to start new game server processes and register with GameLift FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances
", "GameSessionConnectionInfo$Port": "Port number for the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
", "LatencyMap$value": null, "ListAliasesInput$Limit": "The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.
", "ScalingPolicy$EvaluationPeriods": "Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.
", @@ -1807,6 +2189,16 @@ "UpdateMatchmakingConfigurationInput$GameSessionQueueArns": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any Region.
" } }, + "RegisterGameServerInput": { + "base": null, + "refs": { + } + }, + "RegisterGameServerOutput": { + "base": null, + "refs": { + } + }, "RequestUploadCredentialsInput": { "base": "Represents the input for a request action.
", "refs": { @@ -1835,6 +2227,16 @@ "UpdateFleetAttributesInput$ResourceCreationLimitPolicy": "Policy that limits the number of game sessions an individual player can create over a span of time.
" } }, + "ResumeGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "ResumeGameServerGroupOutput": { + "base": null, + "refs": { + } + }, "RoutingStrategy": { "base": "The routing configuration for a fleet alias.
", "refs": { @@ -1865,7 +2267,7 @@ } }, "RuntimeConfiguration": { - "base": "A collection of server process configurations that describe what processes to run on each instance in a fleet. Server processes run either a custom game build executable or a Realtime Servers script. Each instance in the fleet starts the specified server processes and continues to start new processes as existing processes end. Each instance regularly checks for an updated runtime configuration.
The runtime configuration enables the instances in a fleet to run multiple processes simultaneously. Learn more about Running Multiple Processes on a Fleet .
A Amazon GameLift instance is limited to 50 processes running simultaneously. To calculate the total number of processes in a runtime configuration, add the values of the ConcurrentExecutions
parameter for each ServerProcess object.
Manage fleet actions:
A collection of server process configurations that describe what processes to run on each instance in a fleet. Server processes run either a custom game build executable or a Realtime Servers script. Each instance in the fleet starts the specified server processes and continues to start new processes as existing processes end. Each instance regularly checks for an updated runtime configuration.
The runtime configuration enables the instances in a fleet to run multiple processes simultaneously. Learn more about Running Multiple Processes on a Fleet .
A Amazon GameLift instance is limited to 50 processes running simultaneously. To calculate the total number of processes in a runtime configuration, add the values of the ConcurrentExecutions
parameter for each ServerProcess object.
Instructions for launching server processes on each instance in the fleet. Server processes run either a custom game build executable or a Realtime script. The runtime configuration defines the server executables or launch script file, launch parameters, and the number of processes to run concurrently on each instance. When creating a fleet, the runtime configuration must have at least one server process configuration; otherwise the request fails with an invalid request exception. (This parameter replaces the parameters ServerLaunchPath
and ServerLaunchParameters
, although requests that contain values for these parameters instead of a runtime configuration will continue to work.) This parameter is required unless the parameters ServerLaunchPath
and ServerLaunchParameters
are defined. Runtime configuration replaced these parameters, but fleets that use them will continue to work.
Instructions describing how server processes should be launched and maintained on each instance in the fleet.
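The RuntimeConfiguration description above is the piece fleet owners set directly, so a short illustration may help. This is a minimal sketch only, assuming the v0.21-era `Request`/`Send` client pattern and the generated `gamelift` names (`CreateFleetRequest`, `ServerProcesses`, `ConcurrentExecutions`); the build ID and launch path are placeholders, not values from this change.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := gamelift.New(cfg)

	// The runtime configuration replaces the legacy ServerLaunchPath/ServerLaunchParameters
	// fields: each ServerProcess names an executable (or Realtime script) and how many
	// copies of it run concurrently on every instance in the fleet.
	req := svc.CreateFleetRequest(&gamelift.CreateFleetInput{
		Name:            aws.String("example-fleet"),
		BuildId:         aws.String("build-1111aaaa-22bb-33cc-44dd-5555eeee66ff"), // hypothetical ID
		EC2InstanceType: gamelift.EC2InstanceTypeC4Large,
		RuntimeConfiguration: &gamelift.RuntimeConfiguration{
			ServerProcesses: []gamelift.ServerProcess{
				{
					LaunchPath:           aws.String("/local/game/MyServer"),
					Parameters:           aws.String("+map arena"),
					ConcurrentExecutions: aws.Int64(2), // counts toward the 50-process-per-instance limit
				},
			},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```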
", @@ -1874,9 +2276,9 @@ } }, "S3Location": { - "base": "The location in Amazon S3 where build or script files are stored for access by Amazon GameLift. This location is specified in CreateBuild, CreateScript, and UpdateScript requests.
", + "base": "The location in S3 where build or script files are stored for access by Amazon GameLift. This location is specified in CreateBuild, CreateScript, and UpdateScript requests.
", "refs": { - "CreateBuildInput$StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.
", + "CreateBuildInput$StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an S3 bucket that you own. The storage location must specify an S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your S3 bucket. The S3 bucket and your new build must be in the same Region.
", "CreateBuildOutput$StorageLocation": "Amazon S3 location for your game build file, including bucket name and key.
", "CreateScriptInput$StorageLocation": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion
parameter to specify an earlier version.
Amazon S3 path and key, identifying where the game build files are stored.
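The S3Location wording above describes the bucket, key, and role ARN a build upload needs. As a rough sketch, reusing the imports and `*gamelift.Client` value from the previous example; the bucket, key, and role values are placeholders and the generated field names (`Bucket`, `Key`, `RoleArn`) are assumptions to verify against the package:

```go
// createBuildFromS3 registers a build whose files already sit in an S3 bucket that you
// own, in the same Region as the new build, and grants GameLift read access via the role.
func createBuildFromS3(ctx context.Context, svc *gamelift.Client) error {
	req := svc.CreateBuildRequest(&gamelift.CreateBuildInput{
		Name:            aws.String("example-build"),
		OperatingSystem: gamelift.OperatingSystemAmazonLinux,
		StorageLocation: &gamelift.S3Location{
			Bucket:  aws.String("my-game-builds"),       // placeholder bucket, same Region as the build
			Key:     aws.String("builds/server-1.0.zip"), // placeholder key
			RoleArn: aws.String("arn:aws:iam::123456789012:role/GameLiftS3Access"),
		},
	})
	_, err := req.Send(ctx)
	return err
}
```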
", @@ -1927,14 +2329,19 @@ } }, "ScriptId": { + "base": null, + "refs": { + "FleetAttributes$ScriptId": "A unique identifier for a Realtime script.
", + "Script$ScriptId": "A unique identifier for a Realtime script
" + } + }, + "ScriptIdOrArn": { "base": null, "refs": { "CreateFleetInput$ScriptId": "A unique identifier for a Realtime script to be deployed on the new fleet. You can use either the script ID or ARN value. The Realtime script must have been successfully uploaded to Amazon GameLift. This fleet setting cannot be changed once the fleet is created.
", "DeleteScriptInput$ScriptId": "A unique identifier for a Realtime script to delete. You can use either the script ID or ARN value.
", "DescribeScriptInput$ScriptId": "A unique identifier for a Realtime script to retrieve properties for. You can use either the script ID or ARN value.
", - "FleetAttributes$ScriptId": "A unique identifier for a Realtime script.
", - "ListFleetsInput$ScriptId": "A unique identifier for a Realtime script to return fleets for. Use this parameter to return only fleets using the specified script. Use either the script ID or ARN value.To retrieve all fleets, leave this parameter empty.
", - "Script$ScriptId": "A unique identifier for a Realtime script
", + "ListFleetsInput$ScriptId": "A unique identifier for a Realtime script to return fleets for. Use this parameter to return only fleets using a specified script. Use either the script ID or ARN value. To retrieve all fleets, leave this parameter empty.
", "UpdateScriptInput$ScriptId": "A unique identifier for a Realtime script to update. You can use either the script ID or ARN value.
" } }, @@ -1974,6 +2381,12 @@ "UpdateMatchmakingConfigurationInput$NotificationTarget": "An SNS topic ARN that is set up to receive matchmaking notifications. See Setting up Notifications for Matchmaking for more information.
" } }, + "SortOrder": { + "base": null, + "refs": { + "ListGameServersInput$SortOrder": "Indicates how to sort the returned data based on the game servers' custom key sort value. If this parameter is left empty, the list of game servers is returned in no particular order.
" + } + }, "StartFleetActionsInput": { "base": null, "refs": { @@ -2067,6 +2480,16 @@ "MatchmakingTicket$StatusMessage": "Additional information about the current status.
" } }, + "SuspendGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "SuspendGameServerGroupOutput": { + "base": null, + "refs": { + } + }, "Tag": { "base": "A label that can be assigned to a GameLift resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
", "refs": { @@ -2083,7 +2506,7 @@ "TagKeyList": { "base": null, "refs": { - "UntagResourceRequest$TagKeys": "A list of one or more tags to remove from the specified GameLift resource. Tags are developer-defined and structured as key-value pairs.
" + "UntagResourceRequest$TagKeys": "A list of one or more tag keys to remove from the specified GameLift resource. An AWS resource can have only one tag with a specific tag key, so specifying the tag key identifies which tag to remove.
" } }, "TagList": { @@ -2092,11 +2515,13 @@ "CreateAliasInput$Tags": "A list of labels to assign to the new alias resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateBuildInput$Tags": "A list of labels to assign to the new build resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateFleetInput$Tags": "A list of labels to assign to the new fleet resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateGameServerGroupInput$Tags": "A list of labels to assign to the new game server group resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management, and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateGameSessionQueueInput$Tags": "A list of labels to assign to the new game session queue resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateMatchmakingConfigurationInput$Tags": "A list of labels to assign to the new matchmaking configuration resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateMatchmakingRuleSetInput$Tags": "A list of labels to assign to the new matchmaking rule set resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "CreateScriptInput$Tags": "A list of labels to assign to the new script resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "ListTagsForResourceResponse$Tags": "The collection of tags that have been assigned to the specified resource.
", + "RegisterGameServerInput$Tags": "A list of labels to assign to the new game server resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management, and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", "TagResourceRequest$Tags": "A list of one or more tags to assign to the specified GameLift resource. Tags are developer-defined and structured as key-value pairs. The maximum tag limit may be lower than stated. See Tagging AWS Resources for actual tagging limits.
" } }, @@ -2128,6 +2553,12 @@ "ScalingPolicy$TargetConfiguration": "The settings for a target-based scaling policy.
" } }, + "TargetTrackingConfiguration": { + "base": "This data type is part of Amazon GameLift FleetIQ with game server groups, which is in preview release and is subject to change.
Settings for a target-based scaling policy applied to an Auto Scaling group. These settings are used to create a target-based policy that tracks the GameLift FleetIQ metric \"PercentUtilizedGameServers\" and specifies a target value for the metric. As player usage changes, the policy triggers to adjust the game server group capacity so that the metric returns to the target value.
", + "refs": { + "GameServerGroupAutoScalingPolicy$TargetTrackingConfiguration": "Settings for a target-based scaling policy applied to an Auto Scaling group. These settings are used to create a target-based policy that tracks the GameLift FleetIQ metric \"PercentUtilizedGameServers\" and specifies a target value for the metric. As player usage changes, the policy triggers to adjust the game server group capacity so that the metric returns to the target value.
" + } + }, "TerminalRoutingStrategyException": { "base": "The service is unable to resolve the routing for a particular alias because it has a terminal RoutingStrategy associated with it. The message returned in this exception is the message defined in the routing strategy itself. Such requests should only be retried if the routing strategy for the specified alias is modified.
", "refs": { @@ -2144,6 +2575,11 @@ "Event$EventTime": "Time stamp indicating when this event occurred. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "FleetAttributes$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "FleetAttributes$TerminationTime": "Time stamp indicating when this data object was terminated. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "GameServer$RegistrationTime": "Time stamp indicating when the game server resource was created with a RegisterGameServer request. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "GameServer$LastClaimTime": "Time stamp indicating the last time the game server was claimed with a ClaimGameServer request. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\"). This value is used to calculate when the game server's claim status.
", + "GameServer$LastHealthCheckTime": "Time stamp indicating the last time the game server was updated with health status using an UpdateGameServer request. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\"). After game server registration, this property is only changed when a game server update specifies a health check value.
", + "GameServerGroup$CreationTime": "A time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "GameServerGroup$LastUpdatedTime": "A time stamp indicating when this game server group was last updated.
", "GameSession$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "GameSession$TerminationTime": "Time stamp indicating when this data object was terminated. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "GameSessionPlacement$StartTime": "Time stamp indicating when this request was placed in the queue. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", @@ -2230,6 +2666,26 @@ "refs": { } }, + "UpdateGameServerGroupInput": { + "base": null, + "refs": { + } + }, + "UpdateGameServerGroupOutput": { + "base": null, + "refs": { + } + }, + "UpdateGameServerInput": { + "base": null, + "refs": { + } + }, + "UpdateGameServerOutput": { + "base": null, + "refs": { + } + }, "UpdateGameSessionInput": { "base": "Represents the input for a request action.
", "refs": { @@ -2321,9 +2777,28 @@ "VpcPeeringConnection$Status": "The status information about the connection. Status indicates if a connection is pending, successful, or failed.
" } }, + "VpcSubnet": { + "base": null, + "refs": { + "VpcSubnets$member": null + } + }, + "VpcSubnets": { + "base": null, + "refs": { + "CreateGameServerGroupInput$VpcSubnets": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all GameLift FleetIQ-supported availability zones are used; this parameter allows you to specify VPCs that you've set up.
" + } + }, + "WeightedCapacity": { + "base": null, + "refs": { + "InstanceDefinition$WeightedCapacity": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by GameLift FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see Instance Weighting in the Amazon EC2 Auto Scaling User Guide. Default value is \"1\".
" + } + }, "WholeNumber": { "base": null, "refs": { + "CreateGameServerGroupInput$MinSize": "The minimum number of instances allowed in the EC2 Auto Scaling group. During autoscaling events, GameLift FleetIQ and EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1.
", "CreateGameSessionInput$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", "CreateGameSessionQueueInput$TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
The number of player slots in a match to keep open for future players. For example, assume that the configuration's rule set specifies a match for a single 12-person team. If the additional player count is set to 2, only 10 players are initially selected for the match.
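Before moving on to the next service, a short Go sketch of the new GameLift FleetIQ game server group shapes documented above (TargetTrackingConfiguration, RegisterGameServerInput, Tags) may be useful. It is illustrative only, assuming the v0.21-era `Request`/`Send` pattern; field names not shown in this section are assumptions drawn from the preview service model, and all IDs are placeholders.

```go
// exampleAutoScalingPolicy builds the target-tracking policy described above: FleetIQ
// scales the Auto Scaling group so that "PercentUtilizedGameServers" stays near the target.
func exampleAutoScalingPolicy() *gamelift.GameServerGroupAutoScalingPolicy {
	return &gamelift.GameServerGroupAutoScalingPolicy{
		TargetTrackingConfiguration: &gamelift.TargetTrackingConfiguration{
			TargetValue: aws.Float64(75.0), // keep roughly 75% of game servers utilized
		},
	}
}

// registerGameServer sketches the RegisterGameServer call added in this change: a game
// server process running on an instance in the group registers itself so that it can
// later be handed out by ClaimGameServer.
func registerGameServer(ctx context.Context, svc *gamelift.Client) error {
	req := svc.RegisterGameServerRequest(&gamelift.RegisterGameServerInput{
		GameServerGroupName: aws.String("example-game-server-group"),
		GameServerId:        aws.String("gs-0001"),
		InstanceId:          aws.String("i-0123456789abcdef0"), // EC2 instance hosting the process
		ConnectionInfo:      aws.String("10.0.1.15:7777"),
		Tags: []gamelift.Tag{
			{Key: aws.String("stage"), Value: aws.String("test")},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```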
", diff --git a/models/apis/globalaccelerator/2018-08-08/api-2.json b/models/apis/globalaccelerator/2018-08-08/api-2.json index d4b6ddf4aa7..633093f6add 100644 --- a/models/apis/globalaccelerator/2018-08-08/api-2.json +++ b/models/apis/globalaccelerator/2018-08-08/api-2.json @@ -472,9 +472,21 @@ "type":"structure", "members":{ "Cidr":{"shape":"GenericString"}, - "State":{"shape":"ByoipCidrState"} + "State":{"shape":"ByoipCidrState"}, + "Events":{"shape":"ByoipCidrEvents"} } }, + "ByoipCidrEvent":{ + "type":"structure", + "members":{ + "Message":{"shape":"GenericString"}, + "Timestamp":{"shape":"Timestamp"} + } + }, + "ByoipCidrEvents":{ + "type":"list", + "member":{"shape":"ByoipCidrEvent"} + }, "ByoipCidrNotFoundException":{ "type":"structure", "members":{ diff --git a/models/apis/globalaccelerator/2018-08-08/docs-2.json b/models/apis/globalaccelerator/2018-08-08/docs-2.json index 0380c65f4f6..d974204ba35 100644 --- a/models/apis/globalaccelerator/2018-08-08/docs-2.json +++ b/models/apis/globalaccelerator/2018-08-08/docs-2.json @@ -12,10 +12,10 @@ "DeprovisionByoipCidr": "Releases the specified address range that you provisioned to use with your AWS resources through bring your own IP addresses (BYOIP) and deletes the corresponding address pool. To see an AWS CLI example of deprovisioning an address range, scroll down to Example.
Before you can release an address range, you must stop advertising it by using WithdrawByoipCidr and you must not have any accelerators that are using static IP addresses allocated from its address range.
For more information, see Bring Your Own IP Addresses (BYOIP) in the AWS Global Accelerator Developer Guide.
", "DescribeAccelerator": "Describe an accelerator. To see an AWS CLI example of describing an accelerator, scroll down to Example.
", "DescribeAcceleratorAttributes": "Describe the attributes of an accelerator. To see an AWS CLI example of describing the attributes of an accelerator, scroll down to Example.
", - "DescribeEndpointGroup": "Describe an endpoint group.
", + "DescribeEndpointGroup": "Describe an endpoint group. To see an AWS CLI example of describing an endpoint group, scroll down to Example.
", "DescribeListener": "Describe a listener. To see an AWS CLI example of describing a listener, scroll down to Example.
", "ListAccelerators": "List the accelerators for an AWS account. To see an AWS CLI example of listing the accelerators for an AWS account, scroll down to Example.
", - "ListByoipCidrs": "Lists the IP address ranges that were specified in calls to ProvisionByoipCidr.
To see an AWS CLI example of listing BYOIP CIDR addresses, scroll down to Example.
", + "ListByoipCidrs": "Lists the IP address ranges that were specified in calls to ProvisionByoipCidr, including the current state and a history of state changes.
To see an AWS CLI example of listing BYOIP CIDR addresses, scroll down to Example.
", "ListEndpointGroups": "List the endpoint groups that are associated with a listener. To see an AWS CLI example of listing the endpoint groups for listener, scroll down to Example.
", "ListListeners": "List the listeners for an accelerator. To see an AWS CLI example of listing the listeners for an accelerator, scroll down to Example.
", "ListTagsForResource": "List all tags for an accelerator. To see an AWS CLI example of listing tags for an accelerator, scroll down to Example.
For more information, see Tagging in AWS Global Accelerator in the AWS Global Accelerator Developer Guide.
", @@ -93,7 +93,7 @@ } }, "ByoipCidr": { - "base": "Information about an IP address range that is provisioned for use with your AWS resources through bring your own IP addresses (BYOIP).
The following describes each BYOIP State
that your IP address range can be in.
PENDING_PROVISIONING — You’ve submitted a request to provision an IP address range but it is not yet provisioned with AWS Global Accelerator.
READY — The address range is provisioned with AWS Global Accelerator and can be advertised.
PENDING_ADVERTISING — You’ve submitted a request for AWS Global Accelerator to advertise an address range but it is not yet being advertised.
ADVERTISING — The address range is being advertised by AWS Global Accelerator.
PENDING_WITHDRAWING — You’ve submitted a request to withdraw an address range from being advertised but it is still being advertised by AWS Global Accelerator.
PENDING_DEPROVISIONING — You’ve submitted a request to deprovision an address range from AWS Global Accelerator but it is still provisioned.
DEPROVISIONED — The address range is deprovisioned from AWS Global Accelerator.
FAILED_PROVISION — The request to provision the address range from AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_ADVERTISING — The request for AWS Global Accelerator to advertise the address range was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_WITHDRAW — The request to withdraw the address range from advertising by AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_DEPROVISION — The request to deprovision the address range from AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
Information about an IP address range that is provisioned for use with your AWS resources through bring your own IP address (BYOIP).
The following describes each BYOIP State
that your IP address range can be in.
PENDING_PROVISIONING — You’ve submitted a request to provision an IP address range but it is not yet provisioned with AWS Global Accelerator.
READY — The address range is provisioned with AWS Global Accelerator and can be advertised.
PENDING_ADVERTISING — You’ve submitted a request for AWS Global Accelerator to advertise an address range but it is not yet being advertised.
ADVERTISING — The address range is being advertised by AWS Global Accelerator.
PENDING_WITHDRAWING — You’ve submitted a request to withdraw an address range from being advertised but it is still being advertised by AWS Global Accelerator.
PENDING_DEPROVISIONING — You’ve submitted a request to deprovision an address range from AWS Global Accelerator but it is still provisioned.
DEPROVISIONED — The address range is deprovisioned from AWS Global Accelerator.
FAILED_PROVISION — The request to provision the address range from AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_ADVERTISING — The request for AWS Global Accelerator to advertise the address range was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_WITHDRAW — The request to withdraw the address range from advertising by AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
FAILED_DEPROVISION — The request to deprovision the address range from AWS Global Accelerator was not successful. Please make sure that you provide all of the correct information, and try again. If the request fails a second time, contact AWS support.
Information about the address range.
", "ByoipCidrs$member": null, @@ -102,6 +102,18 @@ "WithdrawByoipCidrResponse$ByoipCidr": "Information about the address pool.
" } }, + "ByoipCidrEvent": { + "base": "A complex type that contains a Message
and a Timestamp
value for changes that you make in the status of an IP address range that you bring to AWS Global Accelerator through bring your own IP address (BYOIP).
A history of status changes for an IP address range that you bring to AWS Global Accelerator through bring your own IP address (BYOIP).
" + } + }, "ByoipCidrNotFoundException": { "base": "The CIDR that you specified was not found or is incorrect.
", "refs": { @@ -311,7 +323,7 @@ "Accelerator$Enabled": "Indicates whether the accelerator is enabled. The value is true or false. The default value is true.
If the value is set to true, the accelerator cannot be deleted. If set to false, accelerator can be deleted.
", "AcceleratorAttributes$FlowLogsEnabled": "Indicates whether flow logs are enabled. The default value is false. If the value is true, FlowLogsS3Bucket
and FlowLogsS3Prefix
must be specified.
For more information, see Flow Logs in the AWS Global Accelerator Developer Guide.
", "CreateAcceleratorRequest$Enabled": "Indicates whether an accelerator is enabled. The value is true or false. The default value is true.
If the value is set to true, an accelerator cannot be deleted. If set to false, the accelerator can be deleted.
", - "EndpointConfiguration$ClientIPPreservationEnabled": "Indicates whether client IP address preservation is enabled for an Application Load Balancer endpoint. The value is true or false. The default value is true for new accelerators.
If the value is set to true, the client's IP address is preserved in the X-Forwarded-For
request header as traffic travels to applications on the Application Load Balancer endpoint fronted by the accelerator.
For more information, see Viewing Client IP Addresses in AWS Global Accelerator in the AWS Global Accelerator Developer Guide.
", + "EndpointConfiguration$ClientIPPreservationEnabled": "Indicates whether client IP address preservation is enabled for an Application Load Balancer endpoint. The value is true or false. The default value is true for new accelerators.
If the value is set to true, the client's IP address is preserved in the X-Forwarded-For
request header as traffic travels to applications on the Application Load Balancer endpoint fronted by the accelerator.
For more information, see Preserve Client IP Addresses in AWS Global Accelerator in the AWS Global Accelerator Developer Guide.
", "EndpointDescription$ClientIPPreservationEnabled": "Indicates whether client IP address preservation is enabled for an Application Load Balancer endpoint. The value is true or false. The default value is true for new accelerators.
If the value is set to true, the client's IP address is preserved in the X-Forwarded-For
request header as traffic travels to applications on the Application Load Balancer endpoint fronted by the accelerator.
For more information, see Viewing Client IP Addresses in AWS Global Accelerator in the AWS Global Accelerator Developer Guide.
", "UpdateAcceleratorAttributesRequest$FlowLogsEnabled": "Update whether flow logs are enabled. The default value is false. If the value is true, FlowLogsS3Bucket
and FlowLogsS3Prefix
must be specified.
For more information, see Flow Logs in the AWS Global Accelerator Developer Guide.
", "UpdateAcceleratorRequest$Enabled": "Indicates whether an accelerator is enabled. The value is true or false. The default value is true.
If the value is set to true, the accelerator cannot be deleted. If set to false, the accelerator can be deleted.
" @@ -327,6 +339,7 @@ "AcceleratorAttributes$FlowLogsS3Prefix": "The prefix for the location in the Amazon S3 bucket for the flow logs. Attribute is required if FlowLogsEnabled
is true
.
If you don’t specify a prefix, the flow logs are stored in the root of the bucket. If you specify slash (/) for the S3 bucket prefix, the log file bucket folder structure will include a double slash (//), like the following:
s3-bucket_name//AWSLogs/aws_account_id
", "AdvertiseByoipCidrRequest$Cidr": "The address range, in CIDR notation. This must be the exact range that you provisioned. You can't advertise only a portion of the provisioned range.
", "ByoipCidr$Cidr": "The address range, in CIDR notation.
", + "ByoipCidrEvent$Message": "A string that contains an Event
message describing changes that you make in the status of an IP address range that you bring to AWS Global Accelerator through bring your own IP address (BYOIP).
The plain-text authorization message for the prefix and account.
", "CidrAuthorizationContext$Signature": "The signed authorization message for the prefix and account.
", "CreateAcceleratorRequest$Name": "The name of an accelerator. The name can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens (-), and must not begin or end with a hyphen.
", @@ -654,7 +667,8 @@ "base": null, "refs": { "Accelerator$CreatedTime": "The date and time that the accelerator was created.
", - "Accelerator$LastModifiedTime": "The date and time that the accelerator was last modified.
" + "Accelerator$LastModifiedTime": "The date and time that the accelerator was last modified.
", + "ByoipCidrEvent$Timestamp": "A timestamp when you make a status change for an IP address range that you bring to AWS Global Accelerator through bring your own IP address (BYOIP).
" } }, "TrafficDialPercentage": { diff --git a/models/apis/glue/2017-03-31/api-2.json b/models/apis/glue/2017-03-31/api-2.json index 0ae4b6537c1..40b9b6d5938 100644 --- a/models/apis/glue/2017-03-31/api-2.json +++ b/models/apis/glue/2017-03-31/api-2.json @@ -2520,14 +2520,18 @@ "JDBC_ENFORCE_SSL", "CUSTOM_JDBC_CERT", "SKIP_CUSTOM_JDBC_CERT_VALIDATION", - "CUSTOM_JDBC_CERT_STRING" + "CUSTOM_JDBC_CERT_STRING", + "CONNECTION_URL", + "KAFKA_BOOTSTRAP_SERVERS" ] }, "ConnectionType":{ "type":"string", "enum":[ "JDBC", - "SFTP" + "SFTP", + "MONGODB", + "KAFKA" ] }, "ConnectionsList":{ @@ -4173,10 +4177,7 @@ }, "GetUserDefinedFunctionsRequest":{ "type":"structure", - "required":[ - "DatabaseName", - "Pattern" - ], + "required":["Pattern"], "members":{ "CatalogId":{"shape":"CatalogIdString"}, "DatabaseName":{"shape":"NameString"}, diff --git a/models/apis/glue/2017-03-31/docs-2.json b/models/apis/glue/2017-03-31/docs-2.json index 3d0ab8f5a05..87529438955 100644 --- a/models/apis/glue/2017-03-31/docs-2.json +++ b/models/apis/glue/2017-03-31/docs-2.json @@ -686,7 +686,7 @@ "ConnectionProperties": { "base": null, "refs": { - "Connection$ConnectionProperties": "These key-value pairs define parameters for the connection:
HOST
- The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
PORT
- The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
USER_NAME
- The name under which to log in to the database. The value string for USER_NAME
is \"USERNAME
\".
PASSWORD
- A password, if one is used, for the user name.
ENCRYPTED_PASSWORD
- When you enable connection password protection by setting ConnectionPasswordEncryption
in the Data Catalog encryption settings, this field stores the encrypted password.
JDBC_DRIVER_JAR_URI
- The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME
- The class name of the JDBC driver to use.
JDBC_ENGINE
- The name of the JDBC engine to use.
JDBC_ENGINE_VERSION
- The version of the JDBC engine to use.
CONFIG_FILES
- (Reserved for future use.)
INSTANCE_ID
- The instance ID to use.
JDBC_CONNECTION_URL
- The URL for the JDBC connection.
JDBC_ENFORCE_SSL
- A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT
- An Amazon S3 location specifying the customer's root certificate. AWS Glue uses this root certificate to validate the customer’s certificate when connecting to the customer database. AWS Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64 encoding PEM format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION
- By default, this is false
. AWS Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true
to skip AWS Glue’s validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING
- A custom JDBC certificate string which is used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle database, this is used as the SSL_SERVER_CERT_DN
; in Microsoft SQL Server, this is used as the hostNameInCertificate
.
These key-value pairs define parameters for the connection:
HOST
- The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
PORT
- The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
USER_NAME
- The name under which to log in to the database. The value string for USER_NAME
is \"USERNAME
\".
PASSWORD
- A password, if one is used, for the user name.
ENCRYPTED_PASSWORD
- When you enable connection password protection by setting ConnectionPasswordEncryption
in the Data Catalog encryption settings, this field stores the encrypted password.
JDBC_DRIVER_JAR_URI
- The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME
- The class name of the JDBC driver to use.
JDBC_ENGINE
- The name of the JDBC engine to use.
JDBC_ENGINE_VERSION
- The version of the JDBC engine to use.
CONFIG_FILES
- (Reserved for future use.)
INSTANCE_ID
- The instance ID to use.
JDBC_CONNECTION_URL
- The URL for connecting to a JDBC data source.
JDBC_ENFORCE_SSL
- A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT
- An Amazon S3 location specifying the customer's root certificate. AWS Glue uses this root certificate to validate the customer’s certificate when connecting to the customer database. AWS Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64 encoding PEM format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION
- By default, this is false
. AWS Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true
to skip AWS Glue’s validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING
- A custom JDBC certificate string which is used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle database, this is used as the SSL_SERVER_CERT_DN
; in Microsoft SQL Server, this is used as the hostNameInCertificate
.
CONNECTION_URL
- The URL for connecting to a general (non-JDBC) data source.
KAFKA_BOOTSTRAP_SERVERS
- A comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster to which a Kafka client will connect and bootstrap itself.
These key-value pairs define parameters for the connection.
" } }, @@ -700,7 +700,7 @@ "base": null, "refs": { "Connection$ConnectionType": "The type of the connection. Currently, only JDBC is supported; SFTP is not supported.
", - "ConnectionInput$ConnectionType": "The type of the connection. Currently, only JDBC is supported; SFTP is not supported.
", + "ConnectionInput$ConnectionType": "The type of the connection. Currently, these types are supported:
JDBC
- Designates a connection to a database through Java Database Connectivity (JDBC).
KAFKA
- Designates a connection to an Apache Kafka streaming platform.
MONGODB
- Designates a connection to a MongoDB document database.
SFTP is not supported.
", "GetConnectionsFilter$ConnectionType": "The type of connections to return. Currently, only JDBC is supported; SFTP is not supported.
" } }, diff --git a/models/apis/guardduty/2017-11-28/api-2.json b/models/apis/guardduty/2017-11-28/api-2.json index f4602c18836..fec0b1407d0 100644 --- a/models/apis/guardduty/2017-11-28/api-2.json +++ b/models/apis/guardduty/2017-11-28/api-2.json @@ -250,6 +250,20 @@ {"shape":"InternalServerErrorException"} ] }, + "DescribeOrganizationConfiguration":{ + "name":"DescribeOrganizationConfiguration", + "http":{ + "method":"GET", + "requestUri":"/detector/{detectorId}/admin", + "responseCode":200 + }, + "input":{"shape":"DescribeOrganizationConfigurationRequest"}, + "output":{"shape":"DescribeOrganizationConfigurationResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"InternalServerErrorException"} + ] + }, "DescribePublishingDestination":{ "name":"DescribePublishingDestination", "http":{ @@ -264,6 +278,20 @@ {"shape":"InternalServerErrorException"} ] }, + "DisableOrganizationAdminAccount":{ + "name":"DisableOrganizationAdminAccount", + "http":{ + "method":"POST", + "requestUri":"/admin/disable", + "responseCode":200 + }, + "input":{"shape":"DisableOrganizationAdminAccountRequest"}, + "output":{"shape":"DisableOrganizationAdminAccountResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"InternalServerErrorException"} + ] + }, "DisassociateFromMasterAccount":{ "name":"DisassociateFromMasterAccount", "http":{ @@ -292,6 +320,20 @@ {"shape":"InternalServerErrorException"} ] }, + "EnableOrganizationAdminAccount":{ + "name":"EnableOrganizationAdminAccount", + "http":{ + "method":"POST", + "requestUri":"/admin/enable", + "responseCode":200 + }, + "input":{"shape":"EnableOrganizationAdminAccountRequest"}, + "output":{"shape":"EnableOrganizationAdminAccountResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"InternalServerErrorException"} + ] + }, "GetDetector":{ "name":"GetDetector", "http":{ @@ -516,6 +558,20 @@ {"shape":"InternalServerErrorException"} ] }, + "ListOrganizationAdminAccounts":{ + "name":"ListOrganizationAdminAccounts", + "http":{ + "method":"GET", + "requestUri":"/admin", + "responseCode":200 + }, + "input":{"shape":"ListOrganizationAdminAccountsRequest"}, + "output":{"shape":"ListOrganizationAdminAccountsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"InternalServerErrorException"} + ] + }, "ListPublishingDestinations":{ "name":"ListPublishingDestinations", "http":{ @@ -684,6 +740,20 @@ {"shape":"InternalServerErrorException"} ] }, + "UpdateOrganizationConfiguration":{ + "name":"UpdateOrganizationConfiguration", + "http":{ + "method":"POST", + "requestUri":"/detector/{detectorId}/admin", + "responseCode":200 + }, + "input":{"shape":"UpdateOrganizationConfigurationRequest"}, + "output":{"shape":"UpdateOrganizationConfigurationResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"InternalServerErrorException"} + ] + }, "UpdatePublishingDestination":{ "name":"UpdatePublishingDestination", "http":{ @@ -822,6 +892,34 @@ } } }, + "AdminAccount":{ + "type":"structure", + "members":{ + "AdminAccountId":{ + "shape":"String", + "locationName":"adminAccountId" + }, + "AdminStatus":{ + "shape":"AdminStatus", + "locationName":"adminStatus" + } + } + }, + "AdminAccounts":{ + "type":"list", + "member":{"shape":"AdminAccount"}, + "max":1, + "min":0 + }, + "AdminStatus":{ + "type":"string", + "enum":[ + "ENABLED", + "DISABLE_IN_PROGRESS" + ], + "max":300, + "min":1 + }, "ArchiveFindingsRequest":{ "type":"structure", "required":[ @@ -1438,6 +1536,34 @@ "members":{ } }, + 
"DescribeOrganizationConfigurationRequest":{ + "type":"structure", + "required":["DetectorId"], + "members":{ + "DetectorId":{ + "shape":"DetectorId", + "location":"uri", + "locationName":"detectorId" + } + } + }, + "DescribeOrganizationConfigurationResponse":{ + "type":"structure", + "required":[ + "AutoEnable", + "MemberAccountLimitReached" + ], + "members":{ + "AutoEnable":{ + "shape":"Boolean", + "locationName":"autoEnable" + }, + "MemberAccountLimitReached":{ + "shape":"Boolean", + "locationName":"memberAccountLimitReached" + } + } + }, "DescribePublishingDestinationRequest":{ "type":"structure", "required":[ @@ -1554,6 +1680,21 @@ "max":300, "min":1 }, + "DisableOrganizationAdminAccountRequest":{ + "type":"structure", + "required":["AdminAccountId"], + "members":{ + "AdminAccountId":{ + "shape":"String", + "locationName":"adminAccountId" + } + } + }, + "DisableOrganizationAdminAccountResponse":{ + "type":"structure", + "members":{ + } + }, "DisassociateFromMasterAccountRequest":{ "type":"structure", "required":["DetectorId"], @@ -1622,6 +1763,21 @@ "max":64, "min":1 }, + "EnableOrganizationAdminAccountRequest":{ + "type":"structure", + "required":["AdminAccountId"], + "members":{ + "AdminAccountId":{ + "shape":"String", + "locationName":"adminAccountId" + } + } + }, + "EnableOrganizationAdminAccountResponse":{ + "type":"structure", + "members":{ + } + }, "Eq":{ "type":"list", "member":{"shape":"String"} @@ -2558,6 +2714,34 @@ } } }, + "ListOrganizationAdminAccountsRequest":{ + "type":"structure", + "members":{ + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"maxResults" + }, + "NextToken":{ + "shape":"String", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListOrganizationAdminAccountsResponse":{ + "type":"structure", + "members":{ + "AdminAccounts":{ + "shape":"AdminAccounts", + "locationName":"adminAccounts" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "ListPublishingDestinationsRequest":{ "type":"structure", "required":["DetectorId"], @@ -3461,6 +3645,29 @@ "members":{ } }, + "UpdateOrganizationConfigurationRequest":{ + "type":"structure", + "required":[ + "DetectorId", + "AutoEnable" + ], + "members":{ + "DetectorId":{ + "shape":"DetectorId", + "location":"uri", + "locationName":"detectorId" + }, + "AutoEnable":{ + "shape":"Boolean", + "locationName":"autoEnable" + } + } + }, + "UpdateOrganizationConfigurationResponse":{ + "type":"structure", + "members":{ + } + }, "UpdatePublishingDestinationRequest":{ "type":"structure", "required":[ diff --git a/models/apis/guardduty/2017-11-28/docs-2.json b/models/apis/guardduty/2017-11-28/docs-2.json index 70c761adaa6..ad7e3b02cef 100644 --- a/models/apis/guardduty/2017-11-28/docs-2.json +++ b/models/apis/guardduty/2017-11-28/docs-2.json @@ -1,48 +1,52 @@ { "version": "2.0", - "service": "Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within your AWS environment. This can include issues like escalations of privileges, uses of exposed credentials, or communication with malicious IPs, URLs, or domains. For example, GuardDuty can detect compromised EC2 instances serving malware or mining bitcoin. 
It also monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure deployments, like instances deployed in a region that has never been used, or unusual API calls, like a password policy change to reduce password strength. GuardDuty informs you of the status of your AWS environment by producing security findings that you can view in the GuardDuty console or through Amazon CloudWatch events. For more information, see Amazon GuardDuty User Guide.
", + "service": "Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail event logs, and DNS logs. It uses threat intelligence feeds (such as lists of malicious IPs and domains) and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your AWS environment. This can include issues like escalations of privileges, uses of exposed credentials, or communication with malicious IPs, URLs, or domains. For example, GuardDuty can detect compromised EC2 instances that serve malware or mine bitcoin.
GuardDuty also monitors AWS account access behavior for signs of compromise. Some examples of this are unauthorized infrastructure deployments such as EC2 instances deployed in a Region that has never been used, or unusual API calls like a password policy change to reduce password strength.
GuardDuty informs you of the status of your AWS environment by producing security findings that you can view in the GuardDuty console or through Amazon CloudWatch events. For more information, see the Amazon GuardDuty User Guide.
", "operations": { "AcceptInvitation": "Accepts the invitation to be monitored by a master GuardDuty account.
", - "ArchiveFindings": "Archives GuardDuty findings specified by the list of finding IDs.
Only the master account can archive findings. Member accounts do not have permission to archive findings from their accounts.
Creates a single Amazon GuardDuty detector. A detector is a resource that represents the GuardDuty service. To start using GuardDuty, you must create a detector in each region that you enable the service. You can have only one detector per account per region.
", + "ArchiveFindings": "Archives GuardDuty findings that are specified by the list of finding IDs.
Only the master account can archive findings. Member accounts don't have permission to archive findings from their accounts.
Creates a single Amazon GuardDuty detector. A detector is a resource that represents the GuardDuty service. To start using GuardDuty, you must create a detector in each Region where you enable the service. You can have only one detector per account per Region.
", "CreateFilter": "Creates a filter using the specified finding criteria.
", - "CreateIPSet": "Creates a new IPSet, called Trusted IP list in the consoler user interface. An IPSet is a list IP addresses trusted for secure communication with AWS infrastructure and applications. GuardDuty does not generate findings for IP addresses included in IPSets. Only users from the master account can use this operation.
", + "CreateIPSet": "Creates a new IPSet, which is called a trusted IP list in the console user interface. An IPSet is a list of IP addresses that are trusted for secure communication with AWS infrastructure and applications. GuardDuty doesn't generate findings for IP addresses that are included in IPSets. Only users from the master account can use this operation.
", "CreateMembers": "Creates member accounts of the current AWS account by specifying a list of AWS account IDs. The current AWS account can then invite these members to manage GuardDuty in their accounts.
", - "CreatePublishingDestination": "Creates a publishing destination to send findings to. The resource to send findings to must exist before you use this operation.
", + "CreatePublishingDestination": "Creates a publishing destination to export findings to. The resource to export findings to must exist before you use this operation.
", "CreateSampleFindings": "Generates example findings of types specified by the list of finding types. If 'NULL' is specified for findingTypes
, the API generates example findings of all supported finding types.
Create a new ThreatIntelSet. ThreatIntelSets consist of known malicious IP addresses. GuardDuty generates findings based on ThreatIntelSets. Only users of the master account can use this operation.
", - "DeclineInvitations": "Declines invitations sent to the current member account by AWS account specified by their account IDs.
", - "DeleteDetector": "Deletes a Amazon GuardDuty detector specified by the detector ID.
", + "CreateThreatIntelSet": "Creates a new ThreatIntelSet. ThreatIntelSets consist of known malicious IP addresses. GuardDuty generates findings based on ThreatIntelSets. Only users of the master account can use this operation.
", + "DeclineInvitations": "Declines invitations sent to the current member account by AWS accounts specified by their account IDs.
", + "DeleteDetector": "Deletes an Amazon GuardDuty detector that is specified by the detector ID.
", "DeleteFilter": "Deletes the filter specified by the filter name.
", - "DeleteIPSet": "Deletes the IPSet specified by the ipSetId
. IPSets are called Trusted IP lists in the console user interface.
Deletes the IPSet specified by the ipSetId
. IPSets are called trusted IP lists in the console user interface.
Deletes invitations sent to the current member account by AWS accounts specified by their account IDs.
", "DeleteMembers": "Deletes GuardDuty member accounts (to the current GuardDuty master account) specified by the account IDs.
", "DeletePublishingDestination": "Deletes the publishing definition with the specified destinationId
.
Deletes ThreatIntelSet specified by the ThreatIntelSet ID.
", + "DeleteThreatIntelSet": "Deletes the ThreatIntelSet specified by the ThreatIntelSet ID.
", + "DescribeOrganizationConfiguration": "Returns information about the account selected as the delegated administrator for GuardDuty.
", "DescribePublishingDestination": "Returns information about the publishing destination specified by the provided destinationId
.
Disables GuardDuty administrator permissions for an AWS account within the Organization.
", "DisassociateFromMasterAccount": "Disassociates the current GuardDuty member account from its master account.
", "DisassociateMembers": "Disassociates GuardDuty member accounts (to the current GuardDuty master account) specified by the account IDs.
", + "EnableOrganizationAdminAccount": "Enables GuardDuty administrator permissions for an AWS account within the organization.
", "GetDetector": "Retrieves an Amazon GuardDuty detector specified by the detectorId.
", "GetFilter": "Returns the details of the filter specified by the filter name.
", "GetFindings": "Describes Amazon GuardDuty findings specified by finding IDs.
", - "GetFindingsStatistics": "Lists Amazon GuardDuty findings' statistics for the specified detector ID.
", + "GetFindingsStatistics": "Lists Amazon GuardDuty findings statistics for the specified detector ID.
", "GetIPSet": "Retrieves the IPSet specified by the ipSetId
.
Returns the count of all GuardDuty membership invitations that were sent to the current member account except the currently accepted invitation.
", "GetMasterAccount": "Provides the details for the GuardDuty master account associated with the current GuardDuty member account.
", "GetMembers": "Retrieves GuardDuty member accounts (to the current GuardDuty master account) specified by the account IDs.
", "GetThreatIntelSet": "Retrieves the ThreatIntelSet that is specified by the ThreatIntelSet ID.
", - "InviteMembers": "Invites other AWS accounts (created as members of the current AWS account by CreateMembers) to enable GuardDuty and allow the current AWS account to view and manage these accounts' GuardDuty findings on their behalf as the master account.
", + "InviteMembers": "Invites other AWS accounts (created as members of the current AWS account by CreateMembers) to enable GuardDuty, and allow the current AWS account to view and manage these accounts' GuardDuty findings on their behalf as the master account.
", "ListDetectors": "Lists detectorIds of all the existing Amazon GuardDuty detector resources.
", "ListFilters": "Returns a paginated list of the current filters.
", "ListFindings": "Lists Amazon GuardDuty findings for the specified detector ID.
", "ListIPSets": "Lists the IPSets of the GuardDuty service specified by the detector ID. If you use this operation from a member account, the IPSets returned are the IPSets from the associated master account.
", "ListInvitations": "Lists all GuardDuty membership invitations that were sent to the current AWS account.
", - "ListMembers": "Lists details about all member accounts for the current GuardDuty master account.
", + "ListMembers": "Lists details about associated member accounts for the current GuardDuty master account.
", + "ListOrganizationAdminAccounts": "Lists the accounts configured as AWS Organization delegated administrators.
", "ListPublishingDestinations": "Returns a list of publishing destinations associated with the specified dectectorId
.
Lists tags for a resource. Tagging is currently supported for detectors, finding filters, IP sets, and Threat Intel sets, with a limit of 50 tags per resource. When invoked, this operation returns all assigned tags for a given resource..
", + "ListTagsForResource": "Lists tags for a resource. Tagging is currently supported for detectors, finding filters, IP sets, and threat intel sets, with a limit of 50 tags per resource. When invoked, this operation returns all assigned tags for a given resource.
", "ListThreatIntelSets": "Lists the ThreatIntelSets of the GuardDuty service specified by the detector ID. If you use this operation from a member account, the ThreatIntelSets associated with the master account are returned.
", "StartMonitoringMembers": "Turns on GuardDuty monitoring of the specified member accounts. Use this operation to restart monitoring of accounts that you stopped monitoring with the StopMonitoringMembers
operation.
Stops GuardDuty monitoring for the specified member accounnts. Use the StartMonitoringMembers
to restart monitoring for those accounts.
Stops GuardDuty monitoring for the specified member accounts. Use the StartMonitoringMembers
operation to restart monitoring for those accounts.
Adds tags to a resource.
", "UnarchiveFindings": "Unarchives GuardDuty findings specified by the findingIds
.
Removes tags from a resource.
", @@ -50,8 +54,9 @@ "UpdateFilter": "Updates the filter specified by the filter name.
", "UpdateFindingsFeedback": "Marks the specified GuardDuty findings as useful or not useful.
", "UpdateIPSet": "Updates the IPSet specified by the IPSet ID.
", + "UpdateOrganizationConfiguration": "Updates the delegated administrator account with the values provided.
", "UpdatePublishingDestination": "Updates information about the publishing destination specified by the destinationId
.
Updates the ThreatIntelSet specified by ThreatIntelSet ID.
" + "UpdateThreatIntelSet": "Updates the ThreatIntelSet specified by the ThreatIntelSet ID.
" }, "shapes": { "AcceptInvitationRequest": { @@ -85,12 +90,12 @@ "AccountId": { "base": null, "refs": { - "AccountDetail$AccountId": "Member account ID.
", + "AccountDetail$AccountId": "The member account ID.
", "AccountIds$member": null, - "Invitation$AccountId": "The ID of the account from which the invitations was sent.
", - "Master$AccountId": "The ID of the account used as the Master account.
", - "Member$AccountId": "Member account ID.
", - "UnprocessedAccount$AccountId": "AWS Account ID.
" + "Invitation$AccountId": "The ID of the account that the invitation was sent from.
", + "Master$AccountId": "The ID of the account used as the master account.
", + "Member$AccountId": "The ID of the member account.
", + "UnprocessedAccount$AccountId": "The AWS account ID.
" } }, "AccountIds": { @@ -99,17 +104,35 @@ "DeclineInvitationsRequest$AccountIds": "A list of account IDs of the AWS accounts that sent invitations to the current member account that you want to decline invitations from.
", "DeleteInvitationsRequest$AccountIds": "A list of account IDs of the AWS accounts that sent invitations to the current member account that you want to delete invitations from.
", "DeleteMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts that you want to delete.
", - "DisassociateMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts that you want to disassociate from master.
", + "DisassociateMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts that you want to disassociate from the master account.
", "GetMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts that you want to describe.
", "InviteMembersRequest$AccountIds": "A list of account IDs of the accounts that you want to invite to GuardDuty as members.
", "StartMonitoringMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts to start monitoring.
", - "StopMonitoringMembersRequest$AccountIds": "A list of account IDs of the GuardDuty member accounts whose findings you want the master account to stop monitoring.
" + "StopMonitoringMembersRequest$AccountIds": "A list of account IDs for the member accounts to stop monitoring.
" } }, "Action": { - "base": "Contains information about action.
", + "base": "Contains information about actions.
", "refs": { - "Service$Action": "Information about the activity described in a finding.
" + "Service$Action": "Information about the activity that is described in a finding.
" + } + }, + "AdminAccount": { + "base": "The account within the organization specified as the GuardDuty delegated administrator.
", + "refs": { + "AdminAccounts$member": null + } + }, + "AdminAccounts": { + "base": null, + "refs": { + "ListOrganizationAdminAccountsResponse$AdminAccounts": "An AdminAccounts object that includes a list of accounts configured as GuardDuty delegated administrators.
" + } + }, + "AdminStatus": { + "base": null, + "refs": { + "AdminAccount$AdminStatus": "Indicates whether the account is enabled as the delegated administrator.
" } }, "ArchiveFindingsRequest": { @@ -129,29 +152,32 @@ } }, "BadRequestException": { - "base": "Bad request exception object.
", + "base": "A bad request exception object.
", "refs": { } }, "Boolean": { "base": null, "refs": { - "CreateDetectorRequest$Enable": "A boolean value that specifies whether the detector is to be enabled.
", - "CreateIPSetRequest$Activate": "A boolean value that indicates whether GuardDuty is to start using the uploaded IPSet.
", - "CreateThreatIntelSetRequest$Activate": "A boolean value that indicates whether GuardDuty is to start using the uploaded ThreatIntelSet.
", - "InviteMembersRequest$DisableEmailNotification": "A boolean value that specifies whether you want to disable email notification to the accounts that you’re inviting to GuardDuty as members.
", - "NetworkConnectionAction$Blocked": "Network connection blocked information.
", - "PortProbeAction$Blocked": "Port probe blocked information.
", + "CreateDetectorRequest$Enable": "A Boolean value that specifies whether the detector is to be enabled.
", + "CreateIPSetRequest$Activate": "A Boolean value that indicates whether GuardDuty is to start using the uploaded IPSet.
", + "CreateThreatIntelSetRequest$Activate": "A Boolean value that indicates whether GuardDuty is to start using the uploaded ThreatIntelSet.
", + "DescribeOrganizationConfigurationResponse$AutoEnable": "Indicates whether GuardDuty is automatically enabled for accounts added to the organization.
", + "DescribeOrganizationConfigurationResponse$MemberAccountLimitReached": "Indicates whether the maximum number of allowed member accounts are already associated with the delegated administrator master account.
", + "InviteMembersRequest$DisableEmailNotification": "A Boolean value that specifies whether you want to disable email notification to the accounts that you’re inviting to GuardDuty as members.
", + "NetworkConnectionAction$Blocked": "Indicates whether EC2 blocked the network connection to your instance.
", + "PortProbeAction$Blocked": "Indicates whether EC2 blocked the port probe to the instance, such as with an ACL.
", "Service$Archived": "Indicates whether this finding is archived.
", "UpdateDetectorRequest$Enable": "Specifies whether the detector is enabled or not enabled.
", - "UpdateIPSetRequest$Activate": "The updated boolean value that specifies whether the IPSet is active or not.
", - "UpdateThreatIntelSetRequest$Activate": "The updated boolean value that specifies whether the ThreateIntelSet is active or not.
" + "UpdateIPSetRequest$Activate": "The updated Boolean value that specifies whether the IPSet is active or not.
", + "UpdateOrganizationConfigurationRequest$AutoEnable": "Indicates whether to automatically enable member accounts in the organization.
", + "UpdateThreatIntelSetRequest$Activate": "The updated Boolean value that specifies whether the ThreateIntelSet is active or not.
" } }, "City": { "base": "Contains information about the city associated with the IP address.
", "refs": { - "RemoteIpDetails$City": "City information of the remote IP address.
" + "RemoteIpDetails$City": "The city information of the remote IP address.
" } }, "ClientToken": { @@ -173,13 +199,13 @@ "CountBySeverity": { "base": null, "refs": { - "FindingStatistics$CountBySeverity": "Represents a map of severity to count statistic for a set of findings
" + "FindingStatistics$CountBySeverity": "Represents a map of severity to count statistics for a set of findings.
" } }, "Country": { - "base": "Contains information about the country in which the remote IP address is located.
", + "base": "Contains information about the country where the remote IP address is located.
", "refs": { - "RemoteIpDetails$Country": "Country code of the remote IP address.
" + "RemoteIpDetails$Country": "The country code of the remote IP address.
" } }, "CreateDetectorRequest": { @@ -338,6 +364,16 @@ "refs": { } }, + "DescribeOrganizationConfigurationRequest": { + "base": null, + "refs": { + } + }, + "DescribeOrganizationConfigurationResponse": { + "base": null, + "refs": { + } + }, "DescribePublishingDestinationRequest": { "base": null, "refs": { @@ -349,15 +385,15 @@ } }, "Destination": { - "base": "Contains information about a publishing destination, including the ID, type, and status.
", + "base": "Contains information about the publishing destination, including the ID, type, and status.
", "refs": { "Destinations$member": null } }, "DestinationProperties": { - "base": "Contains the ARN of the resource to publish to, such as an S3 bucket, and the ARN of the KMS key to use to encrypt published findings.
", + "base": "Contains the Amazon Resource Name (ARN) of the resource to publish to, such as an S3 bucket, and the ARN of the KMS key to use to encrypt published findings.
", "refs": { - "CreatePublishingDestinationRequest$DestinationProperties": "Properties of the publishing destination, including the ARNs for the destination and the KMS key used for encryption.
", + "CreatePublishingDestinationRequest$DestinationProperties": "The properties of the publishing destination, including the ARNs for the destination and the KMS key used for encryption.
", "DescribePublishingDestinationResponse$DestinationProperties": "A DestinationProperties
object that includes the DestinationArn
and KmsKeyArn
of the publishing destination.
A DestinationProperties
object that includes the DestinationArn
and KmsKeyArn
of the publishing destination.
The type of resource for the publishing destination. Currently only S3 is supported.
", - "DescribePublishingDestinationResponse$DestinationType": "The type of the publishing destination. Currently, only S3 is supported.
", - "Destination$DestinationType": "The type of resource used for the publishing destination. Currently, only S3 is supported.
" + "CreatePublishingDestinationRequest$DestinationType": "The type of resource for the publishing destination. Currently only Amazon S3 buckets are supported.
", + "DescribePublishingDestinationResponse$DestinationType": "The type of publishing destination. Currently, only Amazon S3 buckets are supported.
", + "Destination$DestinationType": "The type of resource used for the publishing destination. Currently, only Amazon S3 buckets are supported.
" } }, "Destinations": { "base": null, "refs": { - "ListPublishingDestinationsResponse$Destinations": "A Destinations
obect that includes information about each publishing destination returned.
A Destinations
object that includes information about each publishing destination returned.
The unique ID of the detector of the GuardDuty member account.
", "ArchiveFindingsRequest$DetectorId": "The ID of the detector that specifies the GuardDuty service whose findings you want to archive.
", "CreateDetectorResponse$DetectorId": "The unique ID of the created detector.
", - "CreateFilterRequest$DetectorId": "The unique ID of the detector of the GuardDuty account for which you want to create a filter.
", - "CreateIPSetRequest$DetectorId": "The unique ID of the detector of the GuardDuty account for which you want to create an IPSet.
", - "CreateMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account with which you want to associate member accounts.
", + "CreateFilterRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to create a filter for.
", + "CreateIPSetRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to create an IPSet for.
", + "CreateMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to associate member accounts with.
", "CreatePublishingDestinationRequest$DetectorId": "The ID of the GuardDuty detector associated with the publishing destination.
", "CreateSampleFindingsRequest$DetectorId": "The ID of the detector to create sample findings for.
", - "CreateThreatIntelSetRequest$DetectorId": "The unique ID of the detector of the GuardDuty account for which you want to create a threatIntelSet.
", + "CreateThreatIntelSetRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to create a threatIntelSet for.
", "DeleteDetectorRequest$DetectorId": "The unique ID of the detector that you want to delete.
", - "DeleteFilterRequest$DetectorId": "The unique ID of the detector the filter is associated with.
", + "DeleteFilterRequest$DetectorId": "The unique ID of the detector that the filter is associated with.
", "DeleteIPSetRequest$DetectorId": "The unique ID of the detector associated with the IPSet.
", "DeleteMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account whose members you want to delete.
", "DeletePublishingDestinationRequest$DetectorId": "The unique ID of the detector associated with the publishing destination to delete.
", - "DeleteThreatIntelSetRequest$DetectorId": "The unique ID of the detector the threatIntelSet is associated with.
", + "DeleteThreatIntelSetRequest$DetectorId": "The unique ID of the detector that the threatIntelSet is associated with.
", + "DescribeOrganizationConfigurationRequest$DetectorId": "The ID of the detector to retrieve information about the delegated administrator from.
", "DescribePublishingDestinationRequest$DetectorId": "The unique ID of the detector associated with the publishing destination to retrieve.
", "DetectorIds$member": null, "DisassociateFromMasterAccountRequest$DetectorId": "The unique ID of the detector of the GuardDuty member account.
", - "DisassociateMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account whose members you want to disassociate from master.
", + "DisassociateMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account whose members you want to disassociate from the master account.
", "GetDetectorRequest$DetectorId": "The unique ID of the detector that you want to get.
", - "GetFilterRequest$DetectorId": "The unique ID of the detector the filter is associated with.
", + "GetFilterRequest$DetectorId": "The unique ID of the detector that the filter is associated with.
", "GetFindingsRequest$DetectorId": "The ID of the detector that specifies the GuardDuty service whose findings you want to retrieve.
", "GetFindingsStatisticsRequest$DetectorId": "The ID of the detector that specifies the GuardDuty service whose findings' statistics you want to retrieve.
", - "GetIPSetRequest$DetectorId": "The unique ID of the detector the ipSet is associated with.
", + "GetIPSetRequest$DetectorId": "The unique ID of the detector that the IPSet is associated with.
", "GetMasterAccountRequest$DetectorId": "The unique ID of the detector of the GuardDuty member account.
", "GetMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account whose members you want to retrieve.
", - "GetThreatIntelSetRequest$DetectorId": "The unique ID of the detector the threatIntelSet is associated with.
", - "InviteMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account with which you want to invite members.
", - "ListFiltersRequest$DetectorId": "The unique ID of the detector the filter is associated with.
", + "GetThreatIntelSetRequest$DetectorId": "The unique ID of the detector that the threatIntelSet is associated with.
", + "InviteMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to invite members with.
", + "ListFiltersRequest$DetectorId": "The unique ID of the detector that the filter is associated with.
", "ListFindingsRequest$DetectorId": "The ID of the detector that specifies the GuardDuty service whose findings you want to list.
", - "ListIPSetsRequest$DetectorId": "The unique ID of the detector the ipSet is associated with.
", + "ListIPSetsRequest$DetectorId": "The unique ID of the detector that the IPSet is associated with.
", "ListMembersRequest$DetectorId": "The unique ID of the detector the member is associated with.
", "ListPublishingDestinationsRequest$DetectorId": "The ID of the detector to retrieve publishing destinations for.
", - "ListThreatIntelSetsRequest$DetectorId": "The unique ID of the detector the threatIntelSet is associated with.
", - "Member$DetectorId": "Member account's detector ID.
", - "Service$DetectorId": "Detector ID for the GuardDuty service.
", + "ListThreatIntelSetsRequest$DetectorId": "The unique ID of the detector that the threatIntelSet is associated with.
", + "Member$DetectorId": "The detector ID of the member account.
", + "Service$DetectorId": "The detector ID for the GuardDuty service.
", "StartMonitoringMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty master account associated with the member accounts to monitor.
", - "StopMonitoringMembersRequest$DetectorId": "The unique ID of the detector of the GuardDuty account that you want to stop from monitor members' findings.
", + "StopMonitoringMembersRequest$DetectorId": "The unique ID of the detector associated with the GuardDuty master account that is monitoring member accounts.
", "UnarchiveFindingsRequest$DetectorId": "The ID of the detector associated with the findings to unarchive.
", "UpdateDetectorRequest$DetectorId": "The unique ID of the detector to update.
", "UpdateFilterRequest$DetectorId": "The unique ID of the detector that specifies the GuardDuty service where you want to update a filter.
", "UpdateFindingsFeedbackRequest$DetectorId": "The ID of the detector associated with the findings to update feedback for.
", "UpdateIPSetRequest$DetectorId": "The detectorID that specifies the GuardDuty service whose IPSet you want to update.
", - "UpdatePublishingDestinationRequest$DetectorId": "The ID of the
", + "UpdateOrganizationConfigurationRequest$DetectorId": "The ID of the detector to update the delegated administrator for.
", + "UpdatePublishingDestinationRequest$DetectorId": "The ID of the detector associated with the publishing destinations to update.
", "UpdateThreatIntelSetRequest$DetectorId": "The detectorID that specifies the GuardDuty service whose ThreatIntelSet you want to update.
" } }, "DetectorIds": { "base": null, "refs": { - "ListDetectorsResponse$DetectorIds": "A list of detector Ids.
" + "ListDetectorsResponse$DetectorIds": "A list of detector IDs.
" } }, "DetectorStatus": { @@ -438,6 +476,16 @@ "GetDetectorResponse$Status": "The detector status.
" } }, + "DisableOrganizationAdminAccountRequest": { + "base": null, + "refs": { + } + }, + "DisableOrganizationAdminAccountResponse": { + "base": null, + "refs": { + } + }, "DisassociateFromMasterAccountRequest": { "base": null, "refs": { @@ -467,7 +515,7 @@ "DomainDetails": { "base": "Contains information about the domain.
", "refs": { - "AwsApiCallAction$DomainDetails": "Domain information for the AWS API call.
" + "AwsApiCallAction$DomainDetails": "The domain information for the AWS API call.
" } }, "Double": { @@ -475,27 +523,37 @@ "refs": { "Finding$Confidence": "The confidence score for the finding.
", "Finding$Severity": "The severity of the finding.
", - "GeoLocation$Lat": "Latitude information of remote IP address.
", - "GeoLocation$Lon": "Longitude information of remote IP address.
" + "GeoLocation$Lat": "The latitude information of the remote IP address.
", + "GeoLocation$Lon": "The longitude information of the remote IP address.
" } }, "Email": { "base": null, "refs": { - "AccountDetail$Email": "Member account's email address.
", - "Member$Email": "Member account's email address.
" + "AccountDetail$Email": "The email address of the member account.
", + "Member$Email": "The email address of the member account.
" + } + }, + "EnableOrganizationAdminAccountRequest": { + "base": null, + "refs": { + } + }, + "EnableOrganizationAdminAccountResponse": { + "base": null, + "refs": { } }, "Eq": { "base": null, "refs": { - "Condition$Eq": "Represents the equal condition to be applied to a single field when querying for findings.
" + "Condition$Eq": "Represents the equal condition to be applied to a single field when querying for findings.
" } }, "Equals": { "base": null, "refs": { - "Condition$Equals": "Represents an equal condition to be applied to a single field when querying for findings.
" + "Condition$Equals": "Represents an equal condition to be applied to a single field when querying for findings.
" } }, "Evidence": { @@ -539,7 +597,7 @@ "FilterNames": { "base": null, "refs": { - "ListFiltersResponse$FilterNames": "A list of filter names
" + "ListFiltersResponse$FilterNames": "A list of filter names.
" } }, "FilterRank": { @@ -559,10 +617,10 @@ "FindingCriteria": { "base": "Contains information about the criteria used for querying findings.
", "refs": { - "CreateFilterRequest$FindingCriteria": "Represents the criteria to be used in the filter for querying findings.
", + "CreateFilterRequest$FindingCriteria": "Represents the criteria to be used in the filter for querying findings.
You can only use the following attributes to query findings:
accountId
region
confidence
id
resource.accessKeyDetails.accessKeyId
resource.accessKeyDetails.principalId
resource.accessKeyDetails.userName
resource.accessKeyDetails.userType
resource.instanceDetails.iamInstanceProfile.id
resource.instanceDetails.imageId
resource.instanceDetails.instanceId
resource.instanceDetails.outpostArn
resource.instanceDetails.networkInterfaces.ipv6Addresses
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
resource.instanceDetails.networkInterfaces.publicDnsName
resource.instanceDetails.networkInterfaces.publicIp
resource.instanceDetails.networkInterfaces.securityGroups.groupId
resource.instanceDetails.networkInterfaces.securityGroups.groupName
resource.instanceDetails.networkInterfaces.subnetId
resource.instanceDetails.networkInterfaces.vpcId
resource.instanceDetails.tags.key
resource.instanceDetails.tags.value
resource.resourceType
service.action.actionType
service.action.awsApiCallAction.api
service.action.awsApiCallAction.callerType
service.action.awsApiCallAction.remoteIpDetails.city.cityName
service.action.awsApiCallAction.remoteIpDetails.country.countryName
service.action.awsApiCallAction.remoteIpDetails.ipAddressV4
service.action.awsApiCallAction.remoteIpDetails.organization.asn
service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg
service.action.awsApiCallAction.serviceName
service.action.dnsRequestAction.domain
service.action.networkConnectionAction.blocked
service.action.networkConnectionAction.connectionDirection
service.action.networkConnectionAction.localPortDetails.port
service.action.networkConnectionAction.protocol
service.action.networkConnectionAction.remoteIpDetails.city.cityName
service.action.networkConnectionAction.remoteIpDetails.country.countryName
service.action.networkConnectionAction.remoteIpDetails.ipAddressV4
service.action.networkConnectionAction.remoteIpDetails.organization.asn
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg
service.action.networkConnectionAction.remotePortDetails.port
service.additionalInfo.threatListName
service.archived
When this attribute is set to TRUE, only archived findings are listed. When it's set to FALSE, only unarchived findings are listed. When this attribute is not set, all existing findings are listed.
service.resourceRole
severity
type
updatedAt
Type: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ depending on whether the value contains milliseconds.
", "GetFilterResponse$FindingCriteria": "Represents the criteria to be used in the filter for querying findings.
", - "GetFindingsStatisticsRequest$FindingCriteria": "Represents the criteria used for querying findings.
", - "ListFindingsRequest$FindingCriteria": "Represents the criteria used for querying findings. Valid values include:
JSON field name
accountId
region
confidence
id
resource.accessKeyDetails.accessKeyId
resource.accessKeyDetails.principalId
resource.accessKeyDetails.userName
resource.accessKeyDetails.userType
resource.instanceDetails.iamInstanceProfile.id
resource.instanceDetails.imageId
resource.instanceDetails.instanceId
resource.instanceDetails.outpostArn
resource.instanceDetails.networkInterfaces.ipv6Addresses
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
resource.instanceDetails.networkInterfaces.publicDnsName
resource.instanceDetails.networkInterfaces.publicIp
resource.instanceDetails.networkInterfaces.securityGroups.groupId
resource.instanceDetails.networkInterfaces.securityGroups.groupName
resource.instanceDetails.networkInterfaces.subnetId
resource.instanceDetails.networkInterfaces.vpcId
resource.instanceDetails.tags.key
resource.instanceDetails.tags.value
resource.resourceType
service.action.actionType
service.action.awsApiCallAction.api
service.action.awsApiCallAction.callerType
service.action.awsApiCallAction.remoteIpDetails.city.cityName
service.action.awsApiCallAction.remoteIpDetails.country.countryName
service.action.awsApiCallAction.remoteIpDetails.ipAddressV4
service.action.awsApiCallAction.remoteIpDetails.organization.asn
service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg
service.action.awsApiCallAction.serviceName
service.action.dnsRequestAction.domain
service.action.networkConnectionAction.blocked
service.action.networkConnectionAction.connectionDirection
service.action.networkConnectionAction.localPortDetails.port
service.action.networkConnectionAction.protocol
service.action.networkConnectionAction.localIpDetails.ipAddressV4
service.action.networkConnectionAction.remoteIpDetails.city.cityName
service.action.networkConnectionAction.remoteIpDetails.country.countryName
service.action.networkConnectionAction.remoteIpDetails.ipAddressV4
service.action.networkConnectionAction.remoteIpDetails.organization.asn
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg
service.action.networkConnectionAction.remotePortDetails.port
service.additionalInfo.threatListName
service.archived
When this attribute is set to 'true', only archived findings are listed. When it's set to 'false', only unarchived findings are listed. When this attribute is not set, all existing findings are listed.
service.resourceRole
severity
type
updatedAt
Type: Timestamp in Unix Epoch millisecond format: 1486685375000
", + "GetFindingsStatisticsRequest$FindingCriteria": "Represents the criteria that is used for querying findings.
", + "ListFindingsRequest$FindingCriteria": "Represents the criteria used for querying findings. Valid values include:
JSON field name
accountId
region
confidence
id
resource.accessKeyDetails.accessKeyId
resource.accessKeyDetails.principalId
resource.accessKeyDetails.userName
resource.accessKeyDetails.userType
resource.instanceDetails.iamInstanceProfile.id
resource.instanceDetails.imageId
resource.instanceDetails.instanceId
resource.instanceDetails.networkInterfaces.ipv6Addresses
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
resource.instanceDetails.networkInterfaces.publicDnsName
resource.instanceDetails.networkInterfaces.publicIp
resource.instanceDetails.networkInterfaces.securityGroups.groupId
resource.instanceDetails.networkInterfaces.securityGroups.groupName
resource.instanceDetails.networkInterfaces.subnetId
resource.instanceDetails.networkInterfaces.vpcId
resource.instanceDetails.tags.key
resource.instanceDetails.tags.value
resource.resourceType
service.action.actionType
service.action.awsApiCallAction.api
service.action.awsApiCallAction.callerType
service.action.awsApiCallAction.remoteIpDetails.city.cityName
service.action.awsApiCallAction.remoteIpDetails.country.countryName
service.action.awsApiCallAction.remoteIpDetails.ipAddressV4
service.action.awsApiCallAction.remoteIpDetails.organization.asn
service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg
service.action.awsApiCallAction.serviceName
service.action.dnsRequestAction.domain
service.action.networkConnectionAction.blocked
service.action.networkConnectionAction.connectionDirection
service.action.networkConnectionAction.localPortDetails.port
service.action.networkConnectionAction.protocol
service.action.networkConnectionAction.remoteIpDetails.city.cityName
service.action.networkConnectionAction.remoteIpDetails.country.countryName
service.action.networkConnectionAction.remoteIpDetails.ipAddressV4
service.action.networkConnectionAction.remoteIpDetails.organization.asn
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg
service.action.networkConnectionAction.remotePortDetails.port
service.additionalInfo.threatListName
service.archived
When this attribute is set to 'true', only archived findings are listed. When it's set to 'false', only unarchived findings are listed. When this attribute is not set, all existing findings are listed.
service.resourceRole
severity
type
updatedAt
Type: Timestamp in Unix Epoch millisecond format: 1486685375000
", "UpdateFilterRequest$FindingCriteria": "Represents the criteria to be used in the filter for querying findings.
" } }, @@ -575,19 +633,19 @@ "FindingIds": { "base": null, "refs": { - "ArchiveFindingsRequest$FindingIds": "IDs of the findings that you want to archive.
", - "GetFindingsRequest$FindingIds": "IDs of the findings that you want to retrieve.
", - "ListFindingsResponse$FindingIds": "The IDs of the findings you are listing.
", - "UnarchiveFindingsRequest$FindingIds": "IDs of the findings to unarchive.
", - "UpdateFindingsFeedbackRequest$FindingIds": "IDs of the findings that you want to mark as useful or not useful.
" + "ArchiveFindingsRequest$FindingIds": "The IDs of the findings that you want to archive.
", + "GetFindingsRequest$FindingIds": "The IDs of the findings that you want to retrieve.
", + "ListFindingsResponse$FindingIds": "The IDs of the findings that you're listing.
", + "UnarchiveFindingsRequest$FindingIds": "The IDs of the findings to unarchive.
", + "UpdateFindingsFeedbackRequest$FindingIds": "The IDs of the findings that you want to mark as useful or not useful.
" } }, "FindingPublishingFrequency": { "base": null, "refs": { - "CreateDetectorRequest$FindingPublishingFrequency": "A enum value that specifies how frequently customer got Finding updates published.
", - "GetDetectorResponse$FindingPublishingFrequency": "Finding publishing frequency.
", - "UpdateDetectorRequest$FindingPublishingFrequency": "A enum value that specifies how frequently findings are exported, such as to CloudWatch Events.
" + "CreateDetectorRequest$FindingPublishingFrequency": "An enum value that specifies how frequently updated findings are exported.
", + "GetDetectorResponse$FindingPublishingFrequency": "The publishing frequency of the finding.
", + "UpdateDetectorRequest$FindingPublishingFrequency": "An enum value that specifies how frequently findings are exported, such as to CloudWatch Events.
" } }, "FindingStatisticType": { @@ -599,26 +657,26 @@ "FindingStatisticTypes": { "base": null, "refs": { - "GetFindingsStatisticsRequest$FindingStatisticTypes": "Types of finding statistics to retrieve.
" + "GetFindingsStatisticsRequest$FindingStatisticTypes": "The types of finding statistics to retrieve.
" } }, "FindingStatistics": { "base": "Contains information about finding statistics.
", "refs": { - "GetFindingsStatisticsResponse$FindingStatistics": "Finding statistics object.
" + "GetFindingsStatisticsResponse$FindingStatistics": "The finding statistics object.
" } }, "FindingType": { "base": null, "refs": { - "Finding$Type": "The type of the finding.
", + "Finding$Type": "The type of finding.
", "FindingTypes$member": null } }, "FindingTypes": { "base": null, "refs": { - "CreateSampleFindingsRequest$FindingTypes": "Types of sample findings to generate.
" + "CreateSampleFindingsRequest$FindingTypes": "The types of sample findings to generate.
" } }, "Findings": { @@ -630,7 +688,7 @@ "GeoLocation": { "base": "Contains information about the location of the remote IP address.
", "refs": { - "RemoteIpDetails$GeoLocation": "Location information of the remote IP address.
" + "RemoteIpDetails$GeoLocation": "The location information of the remote IP address.
" } }, "GetDetectorRequest": { @@ -726,7 +784,7 @@ "GuardDutyArn": { "base": null, "refs": { - "ListTagsForResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) for the given GuardDuty resource
", + "ListTagsForResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) for the given GuardDuty resource.
", "TagResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) for the GuardDuty resource to apply a tag to.
", "UntagResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) for the resource to remove tags from.
" } @@ -746,19 +804,19 @@ "Integer": { "base": null, "refs": { - "Condition$Gt": "Represents a greater than condition to be applied to a single field when querying for findings.
", - "Condition$Gte": "Represents a greater than equal condition to be applied to a single field when querying for findings.
", - "Condition$Lt": "Represents a less than condition to be applied to a single field when querying for findings.
", - "Condition$Lte": "Represents a less than equal condition to be applied to a single field when querying for findings.
", + "Condition$Gt": "Represents a greater than condition to be applied to a single field when querying for findings.
", + "Condition$Gte": "Represents a greater than or equal condition to be applied to a single field when querying for findings.
", + "Condition$Lt": "Represents a less than condition to be applied to a single field when querying for findings.
", + "Condition$Lte": "Represents a less than or equal condition to be applied to a single field when querying for findings.
", "CountBySeverity$value": null, "GetInvitationsCountResponse$InvitationsCount": "The number of received invitations.
", - "LocalPortDetails$Port": "Port number of the local connection.
", - "RemotePortDetails$Port": "Port number of the remote connection.
", - "Service$Count": "Total count of the occurrences of this finding type.
" + "LocalPortDetails$Port": "The port number of the local connection.
", + "RemotePortDetails$Port": "The port number of the remote connection.
", + "Service$Count": "The total count of the occurrences of this finding type.
" } }, "InternalServerErrorException": { - "base": "Internal server error exception object.
", + "base": "An internal server error exception object.
", "refs": { } }, @@ -800,13 +858,13 @@ "IpSetStatus": { "base": null, "refs": { - "GetIPSetResponse$Status": "The status of ipSet file uploaded.
" + "GetIPSetResponse$Status": "The status of IPSet file that was uploaded.
" } }, "Ipv6Addresses": { "base": null, "refs": { - "NetworkInterface$Ipv6Addresses": "A list of EC2 instance IPv6 address information.
" + "NetworkInterface$Ipv6Addresses": "A list of IPv6 addresses for the EC2 instance.
" } }, "ListDetectorsRequest": { @@ -869,6 +927,16 @@ "refs": { } }, + "ListOrganizationAdminAccountsRequest": { + "base": null, + "refs": { + } + }, + "ListOrganizationAdminAccountsResponse": { + "base": null, + "refs": { + } + }, "ListPublishingDestinationsRequest": { "base": null, "refs": { @@ -902,59 +970,60 @@ "LocalIpDetails": { "base": "Contains information about the local IP address of the connection.
", "refs": { - "NetworkConnectionAction$LocalIpDetails": "Local IP information of the connection.
", - "PortProbeDetail$LocalIpDetails": "Local IP information of the connection.
" + "NetworkConnectionAction$LocalIpDetails": "The local IP information of the connection.
", + "PortProbeDetail$LocalIpDetails": "The local IP information of the connection.
" } }, "LocalPortDetails": { "base": "Contains information about the port for the local connection.
", "refs": { - "NetworkConnectionAction$LocalPortDetails": "Local port information of the connection.
", - "PortProbeDetail$LocalPortDetails": "Local port information of the connection.
" + "NetworkConnectionAction$LocalPortDetails": "The local port information of the connection.
", + "PortProbeDetail$LocalPortDetails": "The local port information of the connection.
" } }, "Location": { "base": null, "refs": { - "CreateIPSetRequest$Location": "The URI of the file that contains the IPSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key)
", - "CreateThreatIntelSetRequest$Location": "The URI of the file that contains the ThreatIntelSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key).
", - "GetIPSetResponse$Location": "The URI of the file that contains the IPSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key)
", - "GetThreatIntelSetResponse$Location": "The URI of the file that contains the ThreatIntelSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key).
", - "UpdateIPSetRequest$Location": "The updated URI of the file that contains the IPSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key).
", - "UpdateThreatIntelSetRequest$Location": "The updated URI of the file that contains the ThreateIntelSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key)
" + "CreateIPSetRequest$Location": "The URI of the file that contains the IPSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
", + "CreateThreatIntelSetRequest$Location": "The URI of the file that contains the ThreatIntelSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
", + "GetIPSetResponse$Location": "The URI of the file that contains the IPSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
", + "GetThreatIntelSetResponse$Location": "The URI of the file that contains the ThreatIntelSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
", + "UpdateIPSetRequest$Location": "The updated URI of the file that contains the IPSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
", + "UpdateThreatIntelSetRequest$Location": "The updated URI of the file that contains the ThreateIntelSet. For example: https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key.
" } }, "Long": { "base": null, "refs": { - "Condition$GreaterThan": "Represents a greater than condition to be applied to a single field when querying for findings.
", - "Condition$GreaterThanOrEqual": "Represents a greater than equal condition to be applied to a single field when querying for findings.
", - "Condition$LessThan": "Represents a less than condition to be applied to a single field when querying for findings.
", - "Condition$LessThanOrEqual": "Represents a less than equal condition to be applied to a single field when querying for findings.
", + "Condition$GreaterThan": "Represents a greater than condition to be applied to a single field when querying for findings.
", + "Condition$GreaterThanOrEqual": "Represents a greater than or equal condition to be applied to a single field when querying for findings.
", + "Condition$LessThan": "Represents a less than condition to be applied to a single field when querying for findings.
", + "Condition$LessThanOrEqual": "Represents a less than or equal condition to be applied to a single field when querying for findings.
", "DescribePublishingDestinationResponse$PublishingFailureStartTimestamp": "The time, in epoch millisecond format, at which GuardDuty was first unable to publish findings to the destination.
" } }, "Master": { - "base": "Contains information about the Master account and invitation.
", + "base": "Contains information about the master account and invitation.
", "refs": { - "GetMasterAccountResponse$Master": "Master account details.
" + "GetMasterAccountResponse$Master": "The master account details.
" } }, "MaxResults": { "base": null, "refs": { - "ListDetectorsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", - "ListFiltersRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", + "ListDetectorsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items that you want in the response. The default value is 50. The maximum value is 50.
", + "ListFiltersRequest$MaxResults": "You can use this parameter to indicate the maximum number of items that you want in the response. The default value is 50. The maximum value is 50.
", "ListFindingsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", "ListIPSetsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", - "ListInvitationsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", + "ListInvitationsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items that you want in the response. The default value is 50. The maximum value is 50.
", "ListMembersRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
", + "ListOrganizationAdminAccountsRequest$MaxResults": "The maximum number of results to return in the response.
", "ListPublishingDestinationsRequest$MaxResults": "The maximum number of results to return in the response.
", - "ListThreatIntelSetsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items you want in the response. The default value is 50. The maximum value is 50.
" + "ListThreatIntelSetsRequest$MaxResults": "You can use this parameter to indicate the maximum number of items that you want in the response. The default value is 50. The maximum value is 50.
" } }, "Member": { - "base": "Continas information about the member account
", + "base": "Contains information about the member account.
", "refs": { "Members$member": null } @@ -969,10 +1038,10 @@ "Name": { "base": null, "refs": { - "CreateIPSetRequest$Name": "The user friendly name to identify the IPSet. This name is displayed in all findings that are triggered by activity that involves IP addresses included in this IPSet.
", - "CreateThreatIntelSetRequest$Name": "A user-friendly ThreatIntelSet name that is displayed in all finding generated by activity that involves IP addresses included in this ThreatIntelSet.
", - "GetIPSetResponse$Name": "The user friendly name for the IPSet.
", - "GetThreatIntelSetResponse$Name": "A user-friendly ThreatIntelSet name that is displayed in all finding generated by activity that involves IP addresses included in this ThreatIntelSet.
", + "CreateIPSetRequest$Name": "The user-friendly name to identify the IPSet.
Allowed characters are alphanumerics, spaces, hyphens (-), and underscores (_).
", + "CreateThreatIntelSetRequest$Name": "A user-friendly ThreatIntelSet name displayed in all findings that are generated by activity that involves IP addresses included in this ThreatIntelSet.
", + "GetIPSetResponse$Name": "The user-friendly name for the IPSet.
", + "GetThreatIntelSetResponse$Name": "A user-friendly ThreatIntelSet name displayed in all findings that are generated by activity that involves IP addresses included in this ThreatIntelSet.
", "UpdateIPSetRequest$Name": "The unique ID that specifies the IPSet that you want to update.
", "UpdateThreatIntelSetRequest$Name": "The unique ID that specifies the ThreatIntelSet that you want to update.
" } @@ -980,7 +1049,7 @@ "Neq": { "base": null, "refs": { - "Condition$Neq": "Represents the not equal condition to be applied to a single field when querying for findings.
" + "Condition$Neq": "Represents the not equal condition to be applied to a single field when querying for findings.
" } }, "NetworkConnectionAction": { @@ -990,7 +1059,7 @@ } }, "NetworkInterface": { - "base": "Contains information about the network interface of the Ec2 instance.
", + "base": "Contains information about the elastic network interface of the EC2 instance.
", "refs": { "NetworkInterfaces$member": null } @@ -998,25 +1067,25 @@ "NetworkInterfaces": { "base": null, "refs": { - "InstanceDetails$NetworkInterfaces": "The network interface information of the EC2 instance.
" + "InstanceDetails$NetworkInterfaces": "The elastic network interface information of the EC2 instance.
" } }, "NotEquals": { "base": null, "refs": { - "Condition$NotEquals": "Represents an not equal condition to be applied to a single field when querying for findings.
" + "Condition$NotEquals": "Represents a not equal condition to be applied to a single field when querying for findings.
" } }, "OrderBy": { "base": null, "refs": { - "SortCriteria$OrderBy": "Order by which the sorted findings are to be displayed.
" + "SortCriteria$OrderBy": "The order by which the sorted findings are to be displayed.
" } }, "Organization": { - "base": "Continas information about the ISP organization of the remote IP address.
", + "base": "Contains information about the ISP organization of the remote IP address.
", "refs": { - "RemoteIpDetails$Organization": "ISP Organization information of the remote IP address.
" + "RemoteIpDetails$Organization": "The ISP organization information of the remote IP address.
" } }, "PortProbeAction": { @@ -1034,7 +1103,7 @@ "PortProbeDetails": { "base": null, "refs": { - "PortProbeAction$PortProbeDetails": "A list of port probe details objects.
" + "PortProbeAction$PortProbeDetails": "A list of objects related to port probe details.
" } }, "PrivateIpAddressDetails": { @@ -1050,7 +1119,7 @@ } }, "ProductCode": { - "base": "Contains information about the product code for the Ec2 instance.
", + "base": "Contains information about the product code for the EC2 instance.
", "refs": { "ProductCodes$member": null } @@ -1069,17 +1138,17 @@ } }, "RemoteIpDetails": { - "base": "Continas information about the remote IP address of the connection.
", + "base": "Contains information about the remote IP address of the connection.
", "refs": { - "AwsApiCallAction$RemoteIpDetails": "Remote IP information of the connection.
", - "NetworkConnectionAction$RemoteIpDetails": "Remote IP information of the connection.
", - "PortProbeDetail$RemoteIpDetails": "Remote IP information of the connection.
" + "AwsApiCallAction$RemoteIpDetails": "The remote IP information of the connection.
", + "NetworkConnectionAction$RemoteIpDetails": "The remote IP information of the connection.
", + "PortProbeDetail$RemoteIpDetails": "The remote IP information of the connection.
" } }, "RemotePortDetails": { "base": "Contains information about the remote port.
", "refs": { - "NetworkConnectionAction$RemotePortDetails": "Remote port information of the connection.
" + "NetworkConnectionAction$RemotePortDetails": "The remote port information of the connection.
" } }, "Resource": { @@ -1097,7 +1166,7 @@ "SecurityGroups": { "base": null, "refs": { - "NetworkInterface$SecurityGroups": "Security groups associated with the EC2 instance.
" + "NetworkInterface$SecurityGroups": "The security groups associated with the EC2 instance.
" } }, "Service": { @@ -1137,57 +1206,60 @@ "base": null, "refs": { "AcceptInvitationRequest$MasterId": "The account ID of the master GuardDuty account whose invitation you're accepting.
", - "AcceptInvitationRequest$InvitationId": "This value is used to validate the master account to the member account.
", - "AccessKeyDetails$AccessKeyId": "Access key ID of the user.
", + "AcceptInvitationRequest$InvitationId": "The value that is used to validate the master account to the member account.
", + "AccessKeyDetails$AccessKeyId": "The access key ID of the user.
", "AccessKeyDetails$PrincipalId": "The principal ID of the user.
", "AccessKeyDetails$UserName": "The name of the user.
", "AccessKeyDetails$UserType": "The type of the user.
", - "Action$ActionType": "GuardDuty Finding activity type.
", - "AwsApiCallAction$Api": "AWS API name.
", - "AwsApiCallAction$CallerType": "AWS API caller type.
", - "AwsApiCallAction$ServiceName": "AWS service name whose API was invoked.
", + "Action$ActionType": "The GuardDuty finding activity type.
", + "AdminAccount$AdminAccountId": "The AWS account ID for the account.
", + "AwsApiCallAction$Api": "The AWS API name.
", + "AwsApiCallAction$CallerType": "The AWS API caller type.
", + "AwsApiCallAction$ServiceName": "The AWS service name whose API was invoked.
", "BadRequestException$Message": "The error message.
", "BadRequestException$Type": "The error type.
", - "City$CityName": "City name of the remote IP address.
", + "City$CityName": "The city name of the remote IP address.
", "CountBySeverity$key": null, - "Country$CountryCode": "Country code of the remote IP address.
", - "Country$CountryName": "Country name of the remote IP address.
", + "Country$CountryCode": "The country code of the remote IP address.
", + "Country$CountryName": "The country name of the remote IP address.
", "CreateIPSetResponse$IpSetId": "The ID of the IPSet resource.
", - "CreatePublishingDestinationResponse$DestinationId": "The ID of the publishing destination created.
", + "CreatePublishingDestinationResponse$DestinationId": "The ID of the publishing destination that is created.
", "CreateThreatIntelSetResponse$ThreatIntelSetId": "The ID of the ThreatIntelSet resource.
", "Criterion$key": null, - "DeleteFilterRequest$FilterName": "The name of the filter you want to delete.
", + "DeleteFilterRequest$FilterName": "The name of the filter that you want to delete.
", "DeleteIPSetRequest$IpSetId": "The unique ID of the IPSet to delete.
", "DeletePublishingDestinationRequest$DestinationId": "The ID of the publishing destination to delete.
", - "DeleteThreatIntelSetRequest$ThreatIntelSetId": "The unique ID of the threatIntelSet you want to delete.
", + "DeleteThreatIntelSetRequest$ThreatIntelSetId": "The unique ID of the threatIntelSet that you want to delete.
", "DescribePublishingDestinationRequest$DestinationId": "The ID of the publishing destination to retrieve.
", "DescribePublishingDestinationResponse$DestinationId": "The ID of the publishing destination.
", "Destination$DestinationId": "The unique ID of the publishing destination.
", "DestinationProperties$DestinationArn": "The ARN of the resource to publish to.
", "DestinationProperties$KmsKeyArn": "The ARN of the KMS key to use for encryption.
", - "DnsRequestAction$Domain": "Domain information for the API request.
", - "DomainDetails$Domain": "Domain information for the AWS API call.
", + "DisableOrganizationAdminAccountRequest$AdminAccountId": "The AWS Account ID for the Organizations account to be disabled as a GuardDuty delegated administrator.
", + "DnsRequestAction$Domain": "The domain information for the API request.
", + "DomainDetails$Domain": "The domain information for the AWS API call.
", + "EnableOrganizationAdminAccountRequest$AdminAccountId": "The AWS Account ID for the Organizations account to be enabled as a GuardDuty delegated administrator.
", "Eq$member": null, "Equals$member": null, "Finding$AccountId": "The ID of the account in which the finding was generated.
", - "Finding$Arn": "The ARN for the finding.
", - "Finding$CreatedAt": "The time and date at which the finding was created.
", + "Finding$Arn": "The ARN of the finding.
", + "Finding$CreatedAt": "The time and date when the finding was created.
", "Finding$Description": "The description of the finding.
", "Finding$Id": "The ID of the finding.
", "Finding$Partition": "The partition associated with the finding.
", - "Finding$Region": "The Region in which the finding was generated.
", + "Finding$Region": "The Region where the finding was generated.
", "Finding$SchemaVersion": "The version of the schema used for the finding.
", - "Finding$Title": "The title for the finding.
", - "Finding$UpdatedAt": "The time and date at which the finding was laste updated.
", - "GetDetectorResponse$CreatedAt": "Detector creation timestamp.
", + "Finding$Title": "The title of the finding.
", + "Finding$UpdatedAt": "The time and date when the finding was last updated.
", + "GetDetectorResponse$CreatedAt": "The timestamp of when the detector was created.
", "GetDetectorResponse$ServiceRole": "The GuardDuty service role.
", - "GetDetectorResponse$UpdatedAt": "Detector last update timestamp.
", + "GetDetectorResponse$UpdatedAt": "The last-updated timestamp for the detector.
", "GetFilterRequest$FilterName": "The name of the filter you want to get.
", "GetIPSetRequest$IpSetId": "The unique ID of the IPSet to retrieve.
", - "GetThreatIntelSetRequest$ThreatIntelSetId": "The unique ID of the threatIntelSet you want to get.
", - "IamInstanceProfile$Arn": "AWS EC2 instance profile ARN.
", - "IamInstanceProfile$Id": "AWS EC2 instance profile ID.
", - "InstanceDetails$AvailabilityZone": "The availability zone of the EC2 instance.
", + "GetThreatIntelSetRequest$ThreatIntelSetId": "The unique ID of the threatIntelSet that you want to get.
", + "IamInstanceProfile$Arn": "The profile ARN of the EC2 instance.
", + "IamInstanceProfile$Id": "The profile ID of the EC2 instance.
", + "InstanceDetails$AvailabilityZone": "The Availability Zone of the EC2 instance.
", "InstanceDetails$ImageDescription": "The image description of the EC2 instance.
", "InstanceDetails$ImageId": "The image ID of the EC2 instance.
", "InstanceDetails$InstanceId": "The ID of the EC2 instance.
", @@ -1200,68 +1272,70 @@ "InternalServerErrorException$Type": "The error type.
", "Invitation$InvitationId": "The ID of the invitation. This value is used to validate the inviter account to the member account.
", "Invitation$RelationshipStatus": "The status of the relationship between the inviter and invitee accounts.
", - "Invitation$InvitedAt": "Timestamp at which the invitation was sent.
", + "Invitation$InvitedAt": "The timestamp when the invitation was sent.
", "InviteMembersRequest$Message": "The invitation message that you want to send to the accounts that you’re inviting to GuardDuty as members.
", "IpSetIds$member": null, "Ipv6Addresses$member": null, - "ListDetectorsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListDetectorsResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListFiltersRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListFiltersResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListFindingsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListFindingsResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListIPSetsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListIPSetsResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListInvitationsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListInvitationsResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListMembersRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListMembersRequest$OnlyAssociated": "Specifies whether to only return associated members or to return all members (including members which haven't been invited yet or have been disassociated).
", - "ListMembersResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "ListPublishingDestinationsRequest$NextToken": "A token to use for paginating results returned in the repsonse. Set the value of this parameter to null for the first request to a list action. For subsequent calls, use the NextToken
value returned from the previous request to continue listing results after the first page.
A token to use for paginating results returned in the repsonse. Set the value of this parameter to null for the first request to a list action. For subsequent calls, use the NextToken
value returned from the previous request to continue listing results after the first page.
You can use this parameter to paginate results in the response. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", - "ListThreatIntelSetsResponse$NextToken": "Pagination parameter to be used on the next list operation to retrieve more items.
", - "LocalIpDetails$IpAddressV4": "IPV4 remote address of the connection.
", - "LocalPortDetails$PortName": "Port name of the local connection.
", - "Master$InvitationId": "This value is used to validate the master account to the member account.
", + "ListDetectorsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListDetectorsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListFiltersRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListFiltersResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListFindingsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListFindingsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListIPSetsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListIPSetsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListInvitationsRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListInvitationsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListMembersRequest$NextToken": "You can use this parameter when paginating results. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListMembersRequest$OnlyAssociated": "Specifies what member accounts the response includes based on their relationship status with the master account. The default value is \"true\". If set to \"false\" the response includes all existing member accounts (including members who haven't been invited yet or have been disassociated).
", + "ListMembersResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListOrganizationAdminAccountsRequest$NextToken": "A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request to a list action. For subsequent calls, use the NextToken
value returned from the previous request to continue listing results after the first page.
", + "ListOrganizationAdminAccountsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "ListPublishingDestinationsRequest$NextToken": "A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request to a list action. For subsequent calls, use the NextToken
value returned from the previous request to continue listing results after the first page.
", + "ListPublishingDestinationsResponse$NextToken": "A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request to a list action. For subsequent calls, use the NextToken
value returned from the previous request to continue listing results after the first page.
", + "ListThreatIntelSetsRequest$NextToken": "You can use this parameter to paginate results in the response. Set the value of this parameter to null on your first call to the list action. For subsequent calls to the action, fill nextToken in the request with the value of NextToken from the previous response to continue listing data.
", + "ListThreatIntelSetsResponse$NextToken": "The pagination parameter to be used on the next list operation to retrieve more items.
", + "LocalIpDetails$IpAddressV4": "The IPv4 local address of the connection.
", + "LocalPortDetails$PortName": "The port name of the local connection.
", + "Master$InvitationId": "The value used to validate the master account to the member account.
", "Master$RelationshipStatus": "The status of the relationship between the master and member accounts.
", - "Master$InvitedAt": "Timestamp at which the invitation was sent.
", - "Member$MasterId": "Master account ID.
", + "Master$InvitedAt": "The timestamp when the invitation was sent.
", + "Member$MasterId": "The master account ID.
", "Member$RelationshipStatus": "The status of the relationship between the member and the master.
", - "Member$InvitedAt": "Timestamp at which the invitation was sent
", - "Member$UpdatedAt": "Member last updated timestamp.
", + "Member$InvitedAt": "The timestamp when the invitation was sent.
", + "Member$UpdatedAt": "The last-updated timestamp of the member.
", "Neq$member": null, - "NetworkConnectionAction$ConnectionDirection": "Network connection direction.
", - "NetworkConnectionAction$Protocol": "Network connection protocol.
", - "NetworkInterface$NetworkInterfaceId": "The ID of the network interface
", - "NetworkInterface$PrivateDnsName": "Private DNS name of the EC2 instance.
", - "NetworkInterface$PrivateIpAddress": "Private IP address of the EC2 instance.
", - "NetworkInterface$PublicDnsName": "Public DNS name of the EC2 instance.
", - "NetworkInterface$PublicIp": "Public IP address of the EC2 instance.
", + "NetworkConnectionAction$ConnectionDirection": "The network connection direction.
", + "NetworkConnectionAction$Protocol": "The network connection protocol.
", + "NetworkInterface$NetworkInterfaceId": "The ID of the network interface.
", + "NetworkInterface$PrivateDnsName": "The private DNS name of the EC2 instance.
", + "NetworkInterface$PrivateIpAddress": "The private IP address of the EC2 instance.
", + "NetworkInterface$PublicDnsName": "The public DNS name of the EC2 instance.
", + "NetworkInterface$PublicIp": "The public IP address of the EC2 instance.
", "NetworkInterface$SubnetId": "The subnet ID of the EC2 instance.
", "NetworkInterface$VpcId": "The VPC ID of the EC2 instance.
", "NotEquals$member": null, - "Organization$Asn": "Autonomous system number of the internet provider of the remote IP address.
", - "Organization$AsnOrg": "Organization that registered this ASN.
", - "Organization$Isp": "ISP information for the internet provider.
", - "Organization$Org": "Name of the internet provider.
", - "PrivateIpAddressDetails$PrivateDnsName": "Private DNS name of the EC2 instance.
", - "PrivateIpAddressDetails$PrivateIpAddress": "Private IP address of the EC2 instance.
", - "ProductCode$Code": "Product code information.
", - "ProductCode$ProductType": "Product code type.
", - "RemoteIpDetails$IpAddressV4": "IPV4 remote address of the connection.
", - "RemotePortDetails$PortName": "Port name of the remote connection.
", - "Resource$ResourceType": "The type of the AWS resource.
", - "SecurityGroup$GroupId": "EC2 instance's security group ID.
", - "SecurityGroup$GroupName": "EC2 instance's security group name.
", - "Service$EventFirstSeen": "First seen timestamp of the activity that prompted GuardDuty to generate this finding.
", - "Service$EventLastSeen": "Last seen timestamp of the activity that prompted GuardDuty to generate this finding.
", - "Service$ResourceRole": "Resource role information for this finding.
", + "Organization$Asn": "The Autonomous System Number (ASN) of the internet provider of the remote IP address.
", + "Organization$AsnOrg": "The organization that registered this ASN.
", + "Organization$Isp": "The ISP information for the internet provider.
", + "Organization$Org": "The name of the internet provider.
", + "PrivateIpAddressDetails$PrivateDnsName": "The private DNS name of the EC2 instance.
", + "PrivateIpAddressDetails$PrivateIpAddress": "The private IP address of the EC2 instance.
", + "ProductCode$Code": "The product code information.
", + "ProductCode$ProductType": "The product code type.
", + "RemoteIpDetails$IpAddressV4": "The IPv4 remote address of the connection.
", + "RemotePortDetails$PortName": "The port name of the remote connection.
", + "Resource$ResourceType": "The type of AWS resource.
", + "SecurityGroup$GroupId": "The security group ID of the EC2 instance.
", + "SecurityGroup$GroupName": "The security group name of the EC2 instance.
", + "Service$EventFirstSeen": "The first-seen timestamp of the activity that prompted GuardDuty to generate this finding.
", + "Service$EventLastSeen": "The last-seen timestamp of the activity that prompted GuardDuty to generate this finding.
", + "Service$ResourceRole": "The resource role information for this finding.
", "Service$ServiceName": "The name of the AWS service (GuardDuty) that generated a finding.
", - "Service$UserFeedback": "Feedback left about the finding.
", - "SortCriteria$AttributeName": "Represents the finding attribute (for example, accountId) by which to sort findings.
", - "Tag$Key": "EC2 instance tag key.
", - "Tag$Value": "EC2 instance tag value.
", + "Service$UserFeedback": "Feedback that was submitted about the finding.
", + "SortCriteria$AttributeName": "Represents the finding attribute (for example, accountId) to sort findings by.
", + "Tag$Key": "The EC2 instance tag key.
", + "Tag$Value": "The EC2 instance tag value.
", "ThreatIntelSetIds$member": null, "ThreatIntelligenceDetail$ThreatListName": "The name of the threat intelligence list that triggered the finding.
", "ThreatNames$member": null, @@ -1269,12 +1343,12 @@ "UpdateFilterRequest$FilterName": "The name of the filter.
", "UpdateFindingsFeedbackRequest$Comments": "Additional feedback about the GuardDuty findings.
", "UpdateIPSetRequest$IpSetId": "The unique ID that specifies the IPSet that you want to update.
", - "UpdatePublishingDestinationRequest$DestinationId": "The ID of the detector associated with the publishing destinations to update.
", + "UpdatePublishingDestinationRequest$DestinationId": "The ID of the publishing destination to update.
", "UpdateThreatIntelSetRequest$ThreatIntelSetId": "The unique ID that specifies the ThreatIntelSet that you want to update.
" } }, "Tag": { - "base": "Contains information about a tag associated with the Ec2 instance.
", + "base": "Contains information about a tag associated with the EC2 instance.
", "refs": { "Tags$member": null } @@ -1298,11 +1372,11 @@ "CreateDetectorRequest$Tags": "The tags to be added to a new detector resource.
", "CreateFilterRequest$Tags": "The tags to be added to a new filter resource.
", "CreateIPSetRequest$Tags": "The tags to be added to a new IP set resource.
", - "CreateThreatIntelSetRequest$Tags": "The tags to be added to a new Threat List resource.
", + "CreateThreatIntelSetRequest$Tags": "The tags to be added to a new threat list resource.
", "GetDetectorResponse$Tags": "The tags of the detector resource.
", "GetFilterResponse$Tags": "The tags of the filter resource.
", - "GetIPSetResponse$Tags": "The tags of the IP set resource.
", - "GetThreatIntelSetResponse$Tags": "The tags of the Threat List resource.
", + "GetIPSetResponse$Tags": "The tags of the IPSet resource.
", + "GetThreatIntelSetResponse$Tags": "The tags of the threat list resource.
", "ListTagsForResourceResponse$Tags": "The tags associated with the resource.
", "TagResourceRequest$Tags": "The tags to be added to a resource.
" } @@ -1377,7 +1451,7 @@ } }, "UnprocessedAccount": { - "base": "Contains information about the accounts that were not processed.
", + "base": "Contains information about the accounts that weren't processed.
", "refs": { "UnprocessedAccounts$member": null } @@ -1385,15 +1459,15 @@ "UnprocessedAccounts": { "base": null, "refs": { - "CreateMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "DeclineInvitationsResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "DeleteInvitationsResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", + "CreateMembersResponse$UnprocessedAccounts": "A list of objects that include the accountIds
of the unprocessed accounts and a result string that explains why each was unprocessed.
", + "DeclineInvitationsResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", + "DeleteInvitationsResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", "DeleteMembersResponse$UnprocessedAccounts": "The accounts that could not be processed.
", - "DisassociateMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "GetMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "InviteMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "StartMonitoringMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
", - "StopMonitoringMembersResponse$UnprocessedAccounts": "A list of objects containing the unprocessed account and a result string explaining why it was unprocessed.
" + "DisassociateMembersResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", + "GetMembersResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", + "InviteMembersResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", + "StartMonitoringMembersResponse$UnprocessedAccounts": "A list of objects that contain the unprocessed account and a result string that explains why it was unprocessed.
", + "StopMonitoringMembersResponse$UnprocessedAccounts": "A list of objects that contain an accountId for each account that could not be processed, and a result string that indicates why the account was not processed.
" } }, "UntagResourceRequest": { @@ -1446,6 +1520,16 @@ "refs": { } }, + "UpdateOrganizationConfigurationRequest": { + "base": null, + "refs": { + } + }, + "UpdateOrganizationConfigurationResponse": { + "base": null, + "refs": { + } + }, "UpdatePublishingDestinationRequest": { "base": null, "refs": { diff --git a/models/apis/guardduty/2017-11-28/paginators-1.json b/models/apis/guardduty/2017-11-28/paginators-1.json index 717e540366d..83ef33cce13 100644 --- a/models/apis/guardduty/2017-11-28/paginators-1.json +++ b/models/apis/guardduty/2017-11-28/paginators-1.json @@ -36,6 +36,12 @@ "limit_key": "MaxResults", "result_key": "Members" }, + "ListOrganizationAdminAccounts": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults", + "result_key": "AdminAccounts" + }, "ListPublishingDestinations": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/iam/2010-05-08/docs-2.json b/models/apis/iam/2010-05-08/docs-2.json index c616de66da2..ef1aa7ecd48 100644 --- a/models/apis/iam/2010-05-08/docs-2.json +++ b/models/apis/iam/2010-05-08/docs-2.json @@ -53,7 +53,7 @@ "EnableMFADevice": "Enables the specified MFA device and associates it with the specified IAM user. When enabled, the MFA device is required for every subsequent login by the IAM user associated with the device.
", "GenerateCredentialReport": "Generates a credential report for the AWS account. For more information about the credential report, see Getting Credential Reports in the IAM User Guide.
", "GenerateOrganizationsAccessReport": "Generates a report for service last accessed data for AWS Organizations. You can generate a report for any entities (organization root, organizational unit, or account) or policies in your organization.
To call this operation, you must be signed in using your AWS Organizations master account credentials. You can use your long-term IAM user or root user credentials, or temporary credentials from assuming an IAM role. SCPs must be enabled for your organization root. You must have the required IAM and AWS Organizations permissions. For more information, see Refining Permissions Using Service Last Accessed Data in the IAM User Guide.
You can generate a service last accessed data report for entities by specifying only the entity's path. This data includes a list of services that are allowed by any service control policies (SCPs) that apply to the entity.
You can generate a service last accessed data report for a policy by specifying an entity's path and an optional AWS Organizations policy ID. This data includes a list of services that are allowed by the specified SCP.
For each service in both report types, the data includes the most recent account activity that the policy allows to account principals in the entity or the entity's children. For important information about the data, reporting period, permissions required, troubleshooting, and supported Regions see Reducing Permissions Using Service Last Accessed Data in the IAM User Guide.
The data includes all attempts to access AWS, not just the successful ones. This includes all attempts that were made using the AWS Management Console, the AWS API through any of the SDKs, or any of the command line tools. An unexpected entry in the service last accessed data does not mean that an account has been compromised, because the request might have been denied. Refer to your CloudTrail logs as the authoritative source for information about all API calls and whether they were successful or denied access. For more information, see Logging IAM Events with CloudTrail in the IAM User Guide.
This operation returns a JobId
. Use this parameter in the GetOrganizationsAccessReport
operation to check the status of the report generation. To check the status of this request, use the JobId
parameter in the GetOrganizationsAccessReport
operation and test the JobStatus
response parameter. When the job is complete, you can retrieve the report.
To generate a service last accessed data report for entities, specify an entity path without specifying the optional AWS Organizations policy ID. The type of entity that you specify determines the data returned in the report.
Root – When you specify the organizations root as the entity, the resulting report lists all of the services allowed by SCPs that are attached to your root. For each service, the report includes data for all accounts in your organization except the master account, because the master account is not limited by SCPs.
OU – When you specify an organizational unit (OU) as the entity, the resulting report lists all of the services allowed by SCPs that are attached to the OU and its parents. For each service, the report includes data for all accounts in the OU or its children. This data excludes the master account, because the master account is not limited by SCPs.
Master account – When you specify the master account, the resulting report lists all AWS services, because the master account is not limited by SCPs. For each service, the report includes data for only the master account.
Account – When you specify another account as the entity, the resulting report lists all of the services allowed by SCPs that are attached to the account and its parents. For each service, the report includes data for only the specified account.
To generate a service last accessed data report for policies, specify an entity path and the optional AWS Organizations policy ID. The type of entity that you specify determines the data returned for each service.
Root – When you specify the root entity and a policy ID, the resulting report lists all of the services that are allowed by the specified SCP. For each service, the report includes data for all accounts in your organization to which the SCP applies. This data excludes the master account, because the master account is not limited by SCPs. If the SCP is not attached to any entities in the organization, then the report will return a list of services with no data.
OU – When you specify an OU entity and a policy ID, the resulting report lists all of the services that are allowed by the specified SCP. For each service, the report includes data for all accounts in the OU or its children to which the SCP applies. This means that other accounts outside the OU that are affected by the SCP might not be included in the data. This data excludes the master account, because the master account is not limited by SCPs. If the SCP is not attached to the OU or one of its children, the report will return a list of services with no data.
Master account – When you specify the master account, the resulting report lists all AWS services, because the master account is not limited by SCPs. If you specify a policy ID in the CLI or API, the policy is ignored. For each service, the report includes data for only the master account.
Account – When you specify another account entity and a policy ID, the resulting report lists all of the services that are allowed by the specified SCP. For each service, the report includes data for only the specified account. This means that other accounts in the organization that are affected by the SCP might not be included in the data. If the SCP is not attached to the account, the report will return a list of services with no data.
Service last accessed data does not use other policy types when determining whether a principal could access a service. These other policy types include identity-based policies, resource-based policies, access control lists, IAM permissions boundaries, and STS assume role policies. It only applies SCP logic. For more about the evaluation of policy types, see Evaluating Policies in the IAM User Guide.
For more information about service last accessed data, see Reducing Policy Scope by Viewing User Activity in the IAM User Guide.
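As a hedged illustration of the JobId workflow described above — generate the report, then poll GetOrganizationsAccessReport with the returned JobId until JobStatus is COMPLETED — the call from the Go client looks roughly like this (the entity path is a placeholder, and the client surface may differ slightly between preview releases):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := iam.New(cfg)

	// Report on an OU; omit OrganizationsPolicyId to cover everything allowed
	// by the SCPs attached to the entity and its parents.
	resp, err := svc.GenerateOrganizationsAccessReportRequest(&iam.GenerateOrganizationsAccessReportInput{
		EntityPath: aws.String("o-exampleorgid/r-exampleroot/ou-example-ou-id"), // placeholder path
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	if resp.JobId != nil {
		// Poll GetOrganizationsAccessReport with this JobId and check JobStatus.
		fmt.Println("report job started:", *resp.JobId)
	}
}
```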
", - "GenerateServiceLastAccessedDetails": "Generates a report that includes details about when an IAM resource (user, group, role, or policy) was last used in an attempt to access AWS services. Recent activity usually appears within four hours. IAM reports activity for the last 365 days, or less if your Region began supporting this feature within the last year. For more information, see Regions Where Data Is Tracked.
The service last accessed data includes all attempts to access an AWS API, not just the successful ones. This includes all attempts that were made using the AWS Management Console, the AWS API through any of the SDKs, or any of the command line tools. An unexpected entry in the service last accessed data does not mean that your account has been compromised, because the request might have been denied. Refer to your CloudTrail logs as the authoritative source for information about all API calls and whether they were successful or denied access. For more information, see Logging IAM Events with CloudTrail in the IAM User Guide.
The GenerateServiceLastAccessedDetails
operation returns a JobId
. Use this parameter in the following operations to retrieve the following details from your report:
GetServiceLastAccessedDetails – Use this operation for users, groups, roles, or policies to list every AWS service that the resource could access using permissions policies. For each service, the response includes information about the most recent access attempt.
GetServiceLastAccessedDetailsWithEntities – Use this operation for groups and policies to list information about the associated entities (users or roles) that attempted to access a specific AWS service.
To check the status of the GenerateServiceLastAccessedDetails
request, use the JobId
parameter in the same operations and test the JobStatus
response parameter.
For additional information about the permissions policies that allow an identity (user, group, or role) to access specific services, use the ListPoliciesGrantingServiceAccess operation.
Service last accessed data does not use other policy types when determining whether a resource could access a service. These other policy types include resource-based policies, access control lists, AWS Organizations policies, IAM permissions boundaries, and AWS STS assume role policies. It only applies permissions policy logic. For more about the evaluation of policy types, see Evaluating Policies in the IAM User Guide.
For more information about service last accessed data, see Reducing Policy Scope by Viewing User Activity in the IAM User Guide.
", + "GenerateServiceLastAccessedDetails": "Generates a report that includes details about when an IAM resource (user, group, role, or policy) was last used in an attempt to access AWS services. Recent activity usually appears within four hours. IAM reports activity for the last 365 days, or less if your Region began supporting this feature within the last year. For more information, see Regions Where Data Is Tracked.
The service last accessed data includes all attempts to access an AWS API, not just the successful ones. This includes all attempts that were made using the AWS Management Console, the AWS API through any of the SDKs, or any of the command line tools. An unexpected entry in the service last accessed data does not mean that your account has been compromised, because the request might have been denied. Refer to your CloudTrail logs as the authoritative source for information about all API calls and whether they were successful or denied access. For more information, see Logging IAM Events with CloudTrail in the IAM User Guide.
The GenerateServiceLastAccessedDetails
operation returns a JobId
. Use this parameter in the following operations to retrieve the following details from your report:
GetServiceLastAccessedDetails – Use this operation for users, groups, roles, or policies to list every AWS service that the resource could access using permissions policies. For each service, the response includes information about the most recent access attempt.
The JobId
returned by GenerateServiceLastAccessedDetails
must be used by the same role within a session, or by the same user when used to call GetServiceLastAccessedDetails
.
GetServiceLastAccessedDetailsWithEntities – Use this operation for groups and policies to list information about the associated entities (users or roles) that attempted to access a specific AWS service.
To check the status of the GenerateServiceLastAccessedDetails
request, use the JobId
parameter in the same operations and test the JobStatus
response parameter.
For additional information about the permissions policies that allow an identity (user, group, or role) to access specific services, use the ListPoliciesGrantingServiceAccess operation.
Service last accessed data does not use other policy types when determining whether a resource could access a service. These other policy types include resource-based policies, access control lists, AWS Organizations policies, IAM permissions boundaries, and AWS STS assume role policies. It only applies permissions policy logic. For more about the evaluation of policy types, see Evaluating Policies in the IAM User Guide.
For more information about service last accessed data, see Reducing Policy Scope by Viewing User Activity in the IAM User Guide.
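The new caveat above — the JobId must be reused by the same principal that generated it — fits a simple generate-then-poll loop. A rough, unofficial sketch (the role ARN is a placeholder; the JobStatus comparison uses the wire value rather than guessing the generated constant name):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := iam.New(cfg)

	// Start report generation for a role; the ARN is a placeholder.
	gen, err := svc.GenerateServiceLastAccessedDetailsRequest(&iam.GenerateServiceLastAccessedDetailsInput{
		Arn: aws.String("arn:aws:iam::123456789012:role/example-role"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Per the note above, poll with the JobId from the same principal (the same
	// role session or the same user) that called GenerateServiceLastAccessedDetails.
	for {
		out, err := svc.GetServiceLastAccessedDetailsRequest(&iam.GetServiceLastAccessedDetailsInput{
			JobId: gen.JobId,
		}).Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		if string(out.JobStatus) != "IN_PROGRESS" {
			fmt.Println("job finished:", out.JobStatus, "-", len(out.ServicesLastAccessed), "services reported")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```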
", "GetAccessKeyLastUsed": "Retrieves information about when the specified access key was last used. The information includes the date and time of last use, along with the AWS service and Region that were specified in the last request made with that key.
", "GetAccountAuthorizationDetails": "Retrieves information about all IAM users, groups, roles, and policies in your AWS account, including their relationships to one another. Use this API to obtain a snapshot of the configuration of IAM permissions (users, groups, roles, and policies) in your account.
Policies returned by this API are URL-encoded compliant with RFC 3986. You can use a URL decoding method to convert the policy back to plain JSON text. For example, if you use Java, you can use the decode
method of the java.net.URLDecoder
utility class in the Java SDK. Other languages and SDKs provide similar functionality.
You can optionally filter the results using the Filter
parameter. You can paginate the results using the MaxItems
and Marker
parameters.
", "GetAccountPasswordPolicy": "Retrieves the password policy for the AWS account. For more information about using a password policy, go to Managing an IAM Password Policy.
", @@ -269,7 +269,7 @@ } }, "ContextEntry": { - "base": "Contains information about a condition context key. It includes the name of the key and specifies the value (or values, if the context key supports multiple values) to use in the simulation. This information is used when evaluating the Condition
elements of the input policies.
This data type is used as an input parameter to SimulateCustomPolicy
and SimulatePrincipalPolicy
.
", + "base": "Contains information about a condition context key. It includes the name of the key and specifies the value (or values, if the context key supports multiple values) to use in the simulation. This information is used when evaluating the Condition
elements of the input policies.
This data type is used as an input parameter to SimulateCustomPolicy and SimulatePrincipalPolicy.
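As a hedged example of how a ContextEntry feeds the simulation described above (the policy document and condition key are illustrative only, and the generated Go shapes may differ slightly between preview releases):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := iam.New(cfg)

	// Minimal identity policy to simulate against (illustrative only).
	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:ListBucket","Resource":"*","Condition":{"StringEquals":{"aws:RequestedRegion":"us-west-2"}}}]}`

	resp, err := svc.SimulateCustomPolicyRequest(&iam.SimulateCustomPolicyInput{
		PolicyInputList: []string{policy},
		ActionNames:     []string{"s3:ListBucket"},
		// The ContextEntry supplies the value used when the Condition elements
		// of the input policies are evaluated, as the data type doc describes.
		ContextEntries: []iam.ContextEntry{{
			ContextKeyName:   aws.String("aws:RequestedRegion"),
			ContextKeyValues: []string{"us-west-2"},
		}},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range resp.EvaluationResults {
		fmt.Println(*r.EvalActionName, r.EvalDecision)
	}
}
```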
", "refs": { "ContextEntryListType$member": null } @@ -2523,9 +2523,9 @@ "base": null, "refs": { "GenerateOrganizationsAccessReportResponse$JobId": "The job identifier that you can use in the GetOrganizationsAccessReport operation.
", - "GenerateServiceLastAccessedDetailsResponse$JobId": "The job ID that you can use in the GetServiceLastAccessedDetails or GetServiceLastAccessedDetailsWithEntities operations.
", + "GenerateServiceLastAccessedDetailsResponse$JobId": "The JobId
that you can use in the GetServiceLastAccessedDetails or GetServiceLastAccessedDetailsWithEntities operations. The JobId
returned by GenerateServiceLastAccessedDetails
must be used by the same role within a session, or by the same user when used to call GetServiceLastAccessedDetails
.
", "GetOrganizationsAccessReportRequest$JobId": "The identifier of the request generated by the GenerateOrganizationsAccessReport operation.
", - "GetServiceLastAccessedDetailsRequest$JobId": "The ID of the request generated by the GenerateServiceLastAccessedDetails operation.
", + "GetServiceLastAccessedDetailsRequest$JobId": "The ID of the request generated by the GenerateServiceLastAccessedDetails operation. The JobId
returned by GenerateServiceLastAccessedDetails
must be used by the same role within a session, or by the same user when used to call GetServiceLastAccessedDetails
.
", "GetServiceLastAccessedDetailsWithEntitiesRequest$JobId": "The ID of the request generated by the GenerateServiceLastAccessedDetails
operation.
Returns the list of image build versions for the specified semantic version.
", "ListInfrastructureConfigurations": "Returns a list of infrastructure configurations.
", "ListTagsForResource": "Returns the list of tags for the specified resource.
", - "PutComponentPolicy": "Applies a policy to a component.
", - "PutImagePolicy": "Applies a policy to an image.
", - "PutImageRecipePolicy": "Applies a policy to an image recipe.
", + "PutComponentPolicy": " Applies a policy to a component. We recommend that you call the RAM API CreateResourceShare to share resources. If you call the Image Builder API PutComponentPolicy
, you must also call the RAM API PromoteResourceShareCreatedFromPolicy in order for the resource to be visible to all principals with whom the resource is shared.
", + "PutImagePolicy": "Applies a policy to an image. We recommend that you call the RAM API CreateResourceShare to share resources. If you call the Image Builder API PutImagePolicy
, you must also call the RAM API PromoteResourceShareCreatedFromPolicy in order for the resource to be visible to all principals with whom the resource is shared.
", + "PutImageRecipePolicy": "Applies a policy to an image recipe. We recommend that you call the RAM API CreateResourceShare to share resources. If you call the Image Builder API PutImageRecipePolicy
, you must also call the RAM API PromoteResourceShareCreatedFromPolicy in order for the resource to be visible to all principals with whom the resource is shared.
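The three Put*Policy notes above pair an Image Builder call with a RAM follow-up. A hedged sketch of the first half for a component (the account, component ARN, and policy are placeholders; the RAM promotion step is left as a comment rather than guessed at):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/imagebuilder"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := imagebuilder.New(cfg)

	// Illustrative resource policy granting another account read access to the
	// component; both the account and the component ARN are placeholders.
	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::111122223333:root"},"Action":["imagebuilder:GetComponent","imagebuilder:ListComponents"],"Resource":"*"}]}`

	resp, err := svc.PutComponentPolicyRequest(&imagebuilder.PutComponentPolicyInput{
		ComponentArn: aws.String("arn:aws:imagebuilder:us-west-2:123456789012:component/example-component/1.0.0/1"),
		Policy:       aws.String(policy),
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	if resp.ComponentArn != nil {
		fmt.Println("policy applied to", *resp.ComponentArn)
	}

	// Per the docs above, when sharing this way you must also call the RAM API
	// PromoteResourceShareCreatedFromPolicy so the resource becomes visible to
	// the principals it is shared with.
}
```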
", "StartImagePipelineExecution": "Manually triggers a pipeline to create an image.
", "TagResource": "Adds a tag to a resource.
", "UntagResource": "Removes a tag from a resource.
", @@ -49,7 +49,7 @@ "AccountList": { "base": null, "refs": { - "LaunchPermissionConfiguration$userIds": "The AWS account ID.
" + "LaunchPermissionConfiguration$userIds": "The AWS account ID.
" } }, "Ami": { @@ -61,7 +61,7 @@ "AmiDistributionConfiguration": { "base": "Define and configure the output AMIs of the pipeline.
", "refs": { - "Distribution$amiDistributionConfiguration": "The specific AMI settings (for example, launch permissions, AMI tags).
" + "Distribution$amiDistributionConfiguration": "The specific AMI settings (for example, launch permissions, AMI tags).
" } }, "AmiList": { @@ -73,7 +73,7 @@ "AmiNameString": { "base": null, "refs": { - "AmiDistributionConfiguration$name": "The name of the distribution configuration.
" + "AmiDistributionConfiguration$name": "The name of the distribution configuration.
" } }, "Arn": { @@ -89,7 +89,7 @@ "ArnList": { "base": null, "refs": { - "Distribution$licenseConfigurationArns": "The License Manager Configuration to associate with the AMI in the specified Region.
" + "Distribution$licenseConfigurationArns": "The License Manager Configuration to associate with the AMI in the specified Region.
" } }, "CallRateLimitExceededException": { @@ -123,41 +123,41 @@ "CreateDistributionConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
", "CreateImagePipelineRequest$clientToken": "The idempotency token used to make this request idempotent.
", "CreateImagePipelineResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "CreateImageRecipeRequest$clientToken": "The idempotency token used to make this request idempotent.
", - "CreateImageRecipeResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "CreateImageRecipeRequest$clientToken": "The idempotency token used to make this request idempotent.
", + "CreateImageRecipeResponse$clientToken": "The idempotency token used to make this request idempotent.
", "CreateImageRequest$clientToken": "The idempotency token used to make this request idempotent.
", "CreateImageResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "CreateInfrastructureConfigurationRequest$clientToken": "The idempotency token used to make this request idempotent.
", - "CreateInfrastructureConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "ImportComponentRequest$clientToken": "The idempotency token of the component.
", - "ImportComponentResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "StartImagePipelineExecutionRequest$clientToken": "The idempotency token used to make this request idempotent.
", - "StartImagePipelineExecutionResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "UpdateDistributionConfigurationRequest$clientToken": "The idempotency token of the distribution configuration.
", - "UpdateDistributionConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "UpdateImagePipelineRequest$clientToken": "The idempotency token used to make this request idempotent.
", - "UpdateImagePipelineResponse$clientToken": "The idempotency token used to make this request idempotent.
", - "UpdateInfrastructureConfigurationRequest$clientToken": "The idempotency token used to make this request idempotent.
", - "UpdateInfrastructureConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
" + "CreateInfrastructureConfigurationRequest$clientToken": "The idempotency token used to make this request idempotent.
", + "CreateInfrastructureConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "ImportComponentRequest$clientToken": "The idempotency token of the component.
", + "ImportComponentResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "StartImagePipelineExecutionRequest$clientToken": "The idempotency token used to make this request idempotent.
", + "StartImagePipelineExecutionResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "UpdateDistributionConfigurationRequest$clientToken": "The idempotency token of the distribution configuration.
", + "UpdateDistributionConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "UpdateImagePipelineRequest$clientToken": "The idempotency token used to make this request idempotent.
", + "UpdateImagePipelineResponse$clientToken": "The idempotency token used to make this request idempotent.
", + "UpdateInfrastructureConfigurationRequest$clientToken": "The idempotency token used to make this request idempotent.
", + "UpdateInfrastructureConfigurationResponse$clientToken": "The idempotency token used to make this request idempotent.
" } }, "Component": { "base": "A detailed view of a component.
", "refs": { - "GetComponentResponse$component": "The component object associated with the specified ARN.
" + "GetComponentResponse$component": "The component object associated with the specified ARN.
" } }, "ComponentBuildVersionArn": { "base": null, "refs": { "CreateComponentResponse$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component that was created by this request.
", - "DeleteComponentRequest$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component build version to delete.
", - "DeleteComponentResponse$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component build version that was deleted.
", - "GetComponentPolicyRequest$componentArn": "The Amazon Resource Name (ARN) of the component whose policy you want to retrieve.
", - "GetComponentRequest$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component that you want to retrieve. Regex requires \"/\\d+$\" suffix.
", - "ImportComponentResponse$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the imported component.
", - "PutComponentPolicyRequest$componentArn": "The Amazon Resource Name (ARN) of the component that this policy should be applied to.
", - "PutComponentPolicyResponse$componentArn": "The Amazon Resource Name (ARN) of the component that this policy was applied to.
" + "DeleteComponentRequest$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component build version to delete.
", + "DeleteComponentResponse$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component build version that was deleted.
", + "GetComponentPolicyRequest$componentArn": "The Amazon Resource Name (ARN) of the component whose policy you want to retrieve.
", + "GetComponentRequest$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the component that you want to retrieve. Regex requires \"/\\d+$\" suffix.
", + "ImportComponentResponse$componentBuildVersionArn": "The Amazon Resource Name (ARN) of the imported component.
", + "PutComponentPolicyRequest$componentArn": "The Amazon Resource Name (ARN) of the component that this policy should be applied to.
", + "PutComponentPolicyResponse$componentArn": "The Amazon Resource Name (ARN) of the component that this policy was applied to.
" } }, "ComponentConfiguration": { @@ -169,7 +169,7 @@ "ComponentConfigurationList": { "base": null, "refs": { - "CreateImageRecipeRequest$components": "The components of the image recipe.
", + "CreateImageRecipeRequest$components": "The components of the image recipe.
", "ImageRecipe$components": "The components of the image recipe.
" } }, @@ -182,7 +182,7 @@ "ComponentFormat": { "base": null, "refs": { - "ImportComponentRequest$format": "The format of the resource that you want to import as a component.
" + "ImportComponentRequest$format": "The format of the resource that you want to import as a component.
" } }, "ComponentSummary": { @@ -194,7 +194,7 @@ "ComponentSummaryList": { "base": null, "refs": { - "ListComponentBuildVersionsResponse$componentSummaryList": "The list of component summaries for the specified semantic version.
" + "ListComponentBuildVersionsResponse$componentSummaryList": "The list of component summaries for the specified semantic version.
" } }, "ComponentType": { @@ -215,19 +215,19 @@ "ComponentVersionArn": { "base": null, "refs": { - "ListComponentBuildVersionsRequest$componentVersionArn": "The component version Amazon Resource Name (ARN) whose versions you want to list.
" + "ListComponentBuildVersionsRequest$componentVersionArn": "The component version Amazon Resource Name (ARN) whose versions you want to list.
" } }, "ComponentVersionArnOrBuildVersionArn": { "base": null, "refs": { - "ComponentConfiguration$componentArn": "The Amazon Resource Name (ARN) of the component.
" + "ComponentConfiguration$componentArn": "The Amazon Resource Name (ARN) of the component.
" } }, "ComponentVersionList": { "base": null, "refs": { - "ListComponentsResponse$componentVersionList": "The list of component semantic versions.
" + "ListComponentsResponse$componentVersionList": "The list of component semantic versions.
" } }, "CreateComponentRequest": { @@ -384,7 +384,7 @@ "DistributionConfiguration": { "base": "A distribution configuration.
", "refs": { - "GetDistributionConfigurationResponse$distributionConfiguration": "The distribution configuration object.
", + "GetDistributionConfigurationResponse$distributionConfiguration": "The distribution configuration object.
", "Image$distributionConfiguration": "The distribution configuration used when creating this image.
" } }, @@ -394,12 +394,12 @@ "CreateDistributionConfigurationResponse$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that was created by this request.
", "CreateImagePipelineRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that will be used to configure and distribute images created by this image pipeline.
", "CreateImageRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that defines and configures the outputs of your pipeline.
", - "DeleteDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration to delete.
", - "DeleteDistributionConfigurationResponse$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that was deleted.
", - "GetDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that you want to retrieve.
", - "UpdateDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that you want to update.
", - "UpdateDistributionConfigurationResponse$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that was updated by this request.
", - "UpdateImagePipelineRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that will be used to configure and distribute images updated by this image pipeline.
" + "DeleteDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration to delete.
", + "DeleteDistributionConfigurationResponse$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that was deleted.
", + "GetDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that you want to retrieve.
", + "UpdateDistributionConfigurationRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that you want to update.
", + "UpdateDistributionConfigurationResponse$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that was updated by this request.
", + "UpdateImagePipelineRequest$distributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that will be used to configure and distribute images updated by this image pipeline.
" } }, "DistributionConfigurationSummary": { @@ -411,7 +411,7 @@ "DistributionConfigurationSummaryList": { "base": null, "refs": { - "ListDistributionConfigurationsResponse$distributionConfigurationSummaryList": "The list of distributions.
" + "ListDistributionConfigurationsResponse$distributionConfigurationSummaryList": "The list of distributions.
" } }, "DistributionList": { @@ -419,7 +419,7 @@ "refs": { "CreateDistributionConfigurationRequest$distributions": "The distributions of the distribution configuration.
", "DistributionConfiguration$distributions": "The distributions of the distribution configuration.
", - "UpdateDistributionConfigurationRequest$distributions": "The distributions of the distribution configuration.
" + "UpdateDistributionConfigurationRequest$distributions": "The distributions of the distribution configuration.
" } }, "DistributionTimeoutMinutes": { @@ -480,7 +480,7 @@ } }, "Filter": { - "base": "A filter name and value pair that is used to return a more specific list of results from a list operation. Filters can be used to match a set of resources by specific criteria, such as tags, attributes, or IDs.
", + "base": "A filter name and value pair that is used to return a more specific list of results from a list operation. Filters can be used to match a set of resources by specific criteria, such as tags, attributes, or IDs.
", "refs": { "FilterList$member": null } @@ -488,20 +488,20 @@ "FilterList": { "base": null, "refs": { - "ListComponentsRequest$filters": "The filters.
", - "ListDistributionConfigurationsRequest$filters": "The filters.
", - "ListImageBuildVersionsRequest$filters": "The filters.
", - "ListImagePipelineImagesRequest$filters": "The filters.
", - "ListImagePipelinesRequest$filters": "The filters.
", - "ListImageRecipesRequest$filters": "The filters.
", - "ListImagesRequest$filters": "The filters.
", - "ListInfrastructureConfigurationsRequest$filters": "The filters.
" + "ListComponentsRequest$filters": "The filters.
", + "ListDistributionConfigurationsRequest$filters": "The filters.
", + "ListImageBuildVersionsRequest$filters": "The filters.
", + "ListImagePipelineImagesRequest$filters": "The filters.
", + "ListImagePipelinesRequest$filters": "The filters.
", + "ListImageRecipesRequest$filters": "The filters.
", + "ListImagesRequest$filters": "The filters.
", + "ListInfrastructureConfigurationsRequest$filters": "The filters.
" } }, "FilterName": { "base": null, "refs": { - "Filter$name": "The name of the filter. Filter names are case-sensitive.
" + "Filter$name": "The name of the filter. Filter names are case-sensitive.
" } }, "FilterValue": { @@ -513,7 +513,7 @@ "FilterValues": { "base": null, "refs": { - "Filter$values": "The filter values. Filter values are case-sensitive.
" + "Filter$values": "The filter values. Filter values are case-sensitive.
" } }, "ForbiddenException": { @@ -619,7 +619,7 @@ "Image": { "base": "An image build version.
", "refs": { - "GetImageResponse$image": "The image object.
" + "GetImageResponse$image": "The image object.
" } }, "ImageBuildVersionArn": { @@ -628,13 +628,13 @@ "CancelImageCreationRequest$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image whose creation you want to cancel.
", "CancelImageCreationResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image whose creation has been cancelled.
", "CreateImageResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that was created by this request.
", - "DeleteImageRequest$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image to delete.
", - "DeleteImageResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that was deleted.
", - "GetImagePolicyRequest$imageArn": "The Amazon Resource Name (ARN) of the image whose policy you want to retrieve.
", - "GetImageRequest$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that you want to retrieve.
", - "PutImagePolicyRequest$imageArn": "The Amazon Resource Name (ARN) of the image that this policy should be applied to.
", - "PutImagePolicyResponse$imageArn": "The Amazon Resource Name (ARN) of the image that this policy was applied to.
", - "StartImagePipelineExecutionResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that was created by this request.
" + "DeleteImageRequest$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image to delete.
", + "DeleteImageResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that was deleted.
", + "GetImagePolicyRequest$imageArn": "The Amazon Resource Name (ARN) of the image whose policy you want to retrieve.
", + "GetImageRequest$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that you want to retrieve.
", + "PutImagePolicyRequest$imageArn": "The Amazon Resource Name (ARN) of the image that this policy should be applied to.
", + "PutImagePolicyResponse$imageArn": "The Amazon Resource Name (ARN) of the image that this policy was applied to.
", + "StartImagePipelineExecutionResponse$imageBuildVersionArn": "The Amazon Resource Name (ARN) of the image that was created by this request.
" } }, "ImageBuilderArn": { @@ -653,15 +653,15 @@ "ImageVersion$arn": "The Amazon Resource Name (ARN) of the image semantic version.
", "InfrastructureConfiguration$arn": "The Amazon Resource Name (ARN) of the infrastructure configuration.
", "InfrastructureConfigurationSummary$arn": "The Amazon Resource Name (ARN) of the infrastructure configuration.
", - "ListTagsForResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource whose tags you want to retrieve.
", - "TagResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource that you want to tag.
", - "UntagResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource that you want to untag.
" + "ListTagsForResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource whose tags you want to retrieve.
", + "TagResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource that you want to tag.
", + "UntagResourceRequest$resourceArn": "The Amazon Resource Name (ARN) of the resource that you want to untag.
" } }, "ImagePipeline": { "base": "Details of an image pipeline.
", "refs": { - "GetImagePipelineResponse$imagePipeline": "The image pipeline object.
", + "GetImagePipelineResponse$imagePipeline": "The image pipeline object.
", "ImagePipelineList$member": null } }, @@ -669,25 +669,25 @@ "base": null, "refs": { "CreateImagePipelineResponse$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that was created by this request.
", - "DeleteImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline to delete.
", - "DeleteImagePipelineResponse$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that was deleted.
", - "GetImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to retrieve.
", - "ListImagePipelineImagesRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline whose images you want to view.
", - "StartImagePipelineExecutionRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to manually invoke.
", - "UpdateImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to update.
", - "UpdateImagePipelineResponse$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that was updated by this request.
" + "DeleteImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline to delete.
", + "DeleteImagePipelineResponse$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that was deleted.
", + "GetImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to retrieve.
", + "ListImagePipelineImagesRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline whose images you want to view.
", + "StartImagePipelineExecutionRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to manually invoke.
", + "UpdateImagePipelineRequest$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that you want to update.
", + "UpdateImagePipelineResponse$imagePipelineArn": "The Amazon Resource Name (ARN) of the image pipeline that was updated by this request.
" } }, "ImagePipelineList": { "base": null, "refs": { - "ListImagePipelinesResponse$imagePipelineList": "The list of image pipelines.
" + "ListImagePipelinesResponse$imagePipelineList": "The list of image pipelines.
" } }, "ImageRecipe": { "base": "An image recipe.
", "refs": { - "GetImageRecipeResponse$imageRecipe": "The image recipe object.
", + "GetImageRecipeResponse$imageRecipe": "The image recipe object.
", "Image$imageRecipe": "The image recipe used when creating the image.
" } }, @@ -695,15 +695,15 @@ "base": null, "refs": { "CreateImagePipelineRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that will be used to configure images created by this image pipeline.
", - "CreateImageRecipeResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that was created by this request.
", + "CreateImageRecipeResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that was created by this request.
", "CreateImageRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that defines how images are configured, tested, and assessed.
", - "DeleteImageRecipeRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe to delete.
", - "DeleteImageRecipeResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that was deleted.
", - "GetImageRecipePolicyRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe whose policy you want to retrieve.
", - "GetImageRecipeRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that you want to retrieve.
", - "PutImageRecipePolicyRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that this policy should be applied to.
", - "PutImageRecipePolicyResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that this policy was applied to.
", - "UpdateImagePipelineRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that will be used to configure images updated by this image pipeline.
" + "DeleteImageRecipeRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe to delete.
", + "DeleteImageRecipeResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that was deleted.
", + "GetImageRecipePolicyRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe whose policy you want to retrieve.
", + "GetImageRecipeRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that you want to retrieve.
", + "PutImageRecipePolicyRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that this policy should be applied to.
", + "PutImageRecipePolicyResponse$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that this policy was applied to.
", + "UpdateImagePipelineRequest$imageRecipeArn": "The Amazon Resource Name (ARN) of the image recipe that will be used to configure images updated by this image pipeline.
" } }, "ImageRecipeSummary": { @@ -715,7 +715,7 @@ "ImageRecipeSummaryList": { "base": null, "refs": { - "ListImageRecipesResponse$imageRecipeSummaryList": "The list of image pipelines.
" + "ListImageRecipesResponse$imageRecipeSummaryList": "The list of image pipelines.
" } }, "ImageState": { @@ -729,7 +729,7 @@ "ImageStatus": { "base": null, "refs": { - "ImageState$status": "The status of the image.
" + "ImageState$status": "The status of the image.
" } }, "ImageSummary": { @@ -741,8 +741,8 @@ "ImageSummaryList": { "base": null, "refs": { - "ListImageBuildVersionsResponse$imageSummaryList": "The list of image build versions.
", - "ListImagePipelineImagesResponse$imageSummaryList": "The list of images built by this pipeline.
" + "ListImageBuildVersionsResponse$imageSummaryList": "The list of image build versions.
", + "ListImagePipelineImagesResponse$imageSummaryList": "The list of images built by this pipeline.
" } }, "ImageTestsConfiguration": { @@ -752,7 +752,7 @@ "CreateImageRequest$imageTestsConfiguration": "The image tests configuration of the image.
", "Image$imageTestsConfiguration": "The image tests configuration used when creating this image.
", "ImagePipeline$imageTestsConfiguration": "The image tests configuration of the image pipeline.
", - "UpdateImagePipelineRequest$imageTestsConfiguration": "The image test configuration of the image pipeline.
" + "UpdateImagePipelineRequest$imageTestsConfiguration": "The image test configuration of the image pipeline.
" } }, "ImageTestsTimeoutMinutes": { @@ -770,13 +770,13 @@ "ImageVersionArn": { "base": null, "refs": { - "ListImageBuildVersionsRequest$imageVersionArn": "The Amazon Resource Name (ARN) of the image whose build versions you want to retrieve.
" + "ListImageBuildVersionsRequest$imageVersionArn": "The Amazon Resource Name (ARN) of the image whose build versions you want to retrieve.
" } }, "ImageVersionList": { "base": null, "refs": { - "ListImagesResponse$imageVersionList": "The list of image semantic versions.
" + "ListImagesResponse$imageVersionList": "The list of image semantic versions.
" } }, "ImportComponentRequest": { @@ -792,8 +792,8 @@ "InfrastructureConfiguration": { "base": "Details of the infrastructure configuration.
", "refs": { - "GetInfrastructureConfigurationResponse$infrastructureConfiguration": "The infrastructure configuration object.
", - "Image$infrastructureConfiguration": "The infrastructure used when creating this image.
" + "GetInfrastructureConfigurationResponse$infrastructureConfiguration": "The infrastructure configuration object.
", + "Image$infrastructureConfiguration": "The infrastructure used when creating this image.
" } }, "InfrastructureConfigurationArn": { @@ -801,13 +801,13 @@ "refs": { "CreateImagePipelineRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that will be used to build images created by this image pipeline.
", "CreateImageRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that defines the environment in which your image will be built and tested.
", - "CreateInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was created by this request.
", - "DeleteInfrastructureConfigurationRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration to delete.
", - "DeleteInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was deleted.
", + "CreateInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was created by this request.
", + "DeleteInfrastructureConfigurationRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration to delete.
", + "DeleteInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was deleted.
", "GetInfrastructureConfigurationRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that you want to retrieve.
", - "UpdateImagePipelineRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that will be used to build images updated by this image pipeline.
", - "UpdateInfrastructureConfigurationRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that you want to update.
", - "UpdateInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was updated by this request.
" + "UpdateImagePipelineRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that will be used to build images updated by this image pipeline.
", + "UpdateInfrastructureConfigurationRequest$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that you want to update.
", + "UpdateInfrastructureConfigurationResponse$infrastructureConfigurationArn": "The Amazon Resource Name (ARN) of the infrastructure configuration that was updated by this request.
" } }, "InfrastructureConfigurationSummary": { @@ -819,7 +819,7 @@ "InfrastructureConfigurationSummaryList": { "base": null, "refs": { - "ListInfrastructureConfigurationsResponse$infrastructureConfigurationSummaryList": "The list of infrastructure configurations.
" + "ListInfrastructureConfigurationsResponse$infrastructureConfigurationSummaryList": "The list of infrastructure configurations.
" } }, "InlineComponentData": { @@ -837,7 +837,7 @@ "InstanceBlockDeviceMappings": { "base": null, "refs": { - "CreateImageRecipeRequest$blockDeviceMappings": "The block device mappings of the image recipe.
", + "CreateImageRecipeRequest$blockDeviceMappings": "The block device mappings of the image recipe.
", "ImageRecipe$blockDeviceMappings": "The block device mappings to apply when creating images from this recipe.
" } }, @@ -850,9 +850,9 @@ "InstanceTypeList": { "base": null, "refs": { - "CreateInfrastructureConfigurationRequest$instanceTypes": "The instance types of the infrastructure configuration. You can specify one or more instance types to use for this build. The service will pick one of these instance types based on availability.
", + "CreateInfrastructureConfigurationRequest$instanceTypes": "The instance types of the infrastructure configuration. You can specify one or more instance types to use for this build. The service will pick one of these instance types based on availability.
", "InfrastructureConfiguration$instanceTypes": "The instance types of the infrastructure configuration.
", - "UpdateInfrastructureConfigurationRequest$instanceTypes": "The instance types of the infrastructure configuration. You can specify one or more instance types to use for this build. The service will pick one of these instance types based on availability.
" + "UpdateInfrastructureConfigurationRequest$instanceTypes": "The instance types of the infrastructure configuration. You can specify one or more instance types to use for this build. The service will pick one of these instance types based on availability.
" } }, "InvalidPaginationTokenException": { @@ -994,20 +994,20 @@ "Logging": { "base": "Logging configuration defines where Image Builder uploads your logs.
", "refs": { - "CreateInfrastructureConfigurationRequest$logging": "The logging configuration of the infrastructure configuration.
", + "CreateInfrastructureConfigurationRequest$logging": "The logging configuration of the infrastructure configuration.
", "InfrastructureConfiguration$logging": "The logging configuration of the infrastructure configuration.
", - "UpdateInfrastructureConfigurationRequest$logging": "The logging configuration of the infrastructure configuration.
" + "UpdateInfrastructureConfigurationRequest$logging": "The logging configuration of the infrastructure configuration.
" } }, "NonEmptyString": { "base": null, "refs": { "AccountList$member": null, - "Ami$region": "The AWS Region of the EC2 AMI.
", - "Ami$image": "The AMI ID of the EC2 AMI.
", - "Ami$name": "The name of the EC2 AMI.
", - "Ami$description": "The description of the EC2 AMI.
", - "AmiDistributionConfiguration$description": "The description of the distribution configuration.
", + "Ami$region": "The AWS Region of the EC2 AMI.
", + "Ami$image": "The AMI ID of the EC2 AMI.
", + "Ami$name": "The name of the EC2 AMI.
", + "Ami$description": "The description of the EC2 AMI.
", + "AmiDistributionConfiguration$description": "The description of the distribution configuration.
", "CancelImageCreationResponse$requestId": "The request ID that uniquely identifies this request.
", "Component$description": "The description of the component.
", "Component$changeDescription": "The change description of the component.
", @@ -1027,48 +1027,48 @@ "CreateImagePipelineRequest$description": "The description of the image pipeline.
", "CreateImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", "CreateImageRecipeRequest$description": "The description of the image recipe.
", - "CreateImageRecipeRequest$parentImage": "The parent image of the image recipe.
", - "CreateImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", + "CreateImageRecipeRequest$parentImage": "The parent image of the image recipe. The value of the string can be the ARN of the parent image or an AMI ID. The format for the ARN follows this example: arn:aws:imagebuilder:us-west-2:aws:image/windows-server-2016-english-full-base-x86/2019.x.x
. The ARN ends with /20xx.x.x
, which communicates to EC2 Image Builder that you want to use the latest AMI created in 20xx (year). You can provide the specific version that you want to use, or you can use a wildcard in all of the fields. If you enter an AMI ID for the string value, you must have access to the AMI, and the AMI must be in the same Region in which you are using Image Builder.
", + "CreateImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", "CreateImageResponse$requestId": "The request ID that uniquely identifies this request.
", - "CreateInfrastructureConfigurationRequest$description": "The description of the infrastructure configuration.
", - "CreateInfrastructureConfigurationRequest$instanceProfileName": "The instance profile to associate with the instance used to customize your EC2 AMI.
", - "CreateInfrastructureConfigurationRequest$subnetId": "The subnet ID in which to place the instance used to customize your EC2 AMI.
", - "CreateInfrastructureConfigurationRequest$keyPair": "The key pair of the infrastructure configuration. This can be used to log on to and debug the instance used to create your image.
", - "CreateInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteComponentResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteImageResponse$requestId": "The request ID that uniquely identifies this request.
", - "DeleteInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", - "Distribution$region": "The target Region.
", + "CreateInfrastructureConfigurationRequest$description": "The description of the infrastructure configuration.
", + "CreateInfrastructureConfigurationRequest$instanceProfileName": "The instance profile to associate with the instance used to customize your EC2 AMI.
", + "CreateInfrastructureConfigurationRequest$subnetId": "The subnet ID in which to place the instance used to customize your EC2 AMI.
", + "CreateInfrastructureConfigurationRequest$keyPair": "The key pair of the infrastructure configuration. This can be used to log on to and debug the instance used to create your image.
", + "CreateInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteComponentResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteImageResponse$requestId": "The request ID that uniquely identifies this request.
", + "DeleteInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "Distribution$region": "The target Region.
", "DistributionConfiguration$description": "The description of the distribution configuration.
", "DistributionConfigurationSummary$description": "The description of the distribution configuration.
", "EbsInstanceBlockDeviceSpecification$kmsKeyId": "Use to configure the KMS key to use when encrypting the device.
", "EbsInstanceBlockDeviceSpecification$snapshotId": "The snapshot that defines the device contents.
", - "GetComponentPolicyResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetComponentResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetImagePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetImageRecipePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetImageResponse$requestId": "The request ID that uniquely identifies this request.
", - "GetInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetComponentPolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetComponentResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetImagePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetImageRecipePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetImageRecipeResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetImageResponse$requestId": "The request ID that uniquely identifies this request.
", + "GetInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", "ImagePipeline$description": "The description of the image pipeline.
", "ImageRecipe$description": "The description of the image recipe.
", "ImageRecipe$owner": "The owner of the image recipe.
", "ImageRecipe$parentImage": "The parent image of the image recipe.
", "ImageRecipeSummary$owner": "The owner of the image recipe.
", "ImageRecipeSummary$parentImage": "The parent image of the image recipe.
", - "ImageState$reason": "The reason for the image's status.
", + "ImageState$reason": "The reason for the image's status.
", "ImageSummary$owner": "The owner of the image.
", "ImageVersion$owner": "The owner of the image semantic version.
", "ImportComponentRequest$description": "The description of the component. Describes the contents of the component.
", - "ImportComponentRequest$changeDescription": "The change description of the component. Describes what change has been made in this version, or what makes this version different from other versions of this component.
", + "ImportComponentRequest$changeDescription": "The change description of the component. Describes what change has been made in this version, or what makes this version different from other versions of this component.
", "ImportComponentRequest$data": "The data of the component. Used to specify the data inline. Either data
or uri
can be used to specify the data within the component.
", - "ImportComponentRequest$kmsKeyId": "The ID of the KMS key that should be used to encrypt this component.
", - "ImportComponentResponse$requestId": "The request ID that uniquely identifies this request.
", + "ImportComponentRequest$kmsKeyId": "The ID of the KMS key that should be used to encrypt this component.
", + "ImportComponentResponse$requestId": "The request ID that uniquely identifies this request.
", "InfrastructureConfiguration$description": "The description of the infrastructure configuration.
", "InfrastructureConfiguration$instanceProfileName": "The instance profile of the infrastructure configuration.
", "InfrastructureConfiguration$subnetId": "The subnet ID of the infrastructure configuration.
", @@ -1077,63 +1077,76 @@ "InfrastructureConfigurationSummary$description": "The description of the infrastructure configuration.
", "InstanceBlockDeviceMapping$deviceName": "The device to which these mappings apply.
", "InstanceBlockDeviceMapping$virtualName": "Use to manage instance ephemeral devices.
", - "ListComponentBuildVersionsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListComponentBuildVersionsResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListComponentBuildVersionsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListComponentsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListComponentsResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListComponentsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListDistributionConfigurationsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListDistributionConfigurationsResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListDistributionConfigurationsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListImageBuildVersionsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListImageBuildVersionsResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListImageBuildVersionsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListImagePipelineImagesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListImagePipelineImagesResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListImagePipelineImagesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListImagePipelinesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListImagePipelinesResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListImagePipelinesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListImageRecipesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListImageRecipesResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListImageRecipesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListImagesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListImagesResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListImagesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "ListInfrastructureConfigurationsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", - "ListInfrastructureConfigurationsResponse$requestId": "The request ID that uniquely identifies this request.
", - "ListInfrastructureConfigurationsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", - "PutComponentPolicyResponse$requestId": "The request ID that uniquely identifies this request.
", - "PutImagePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", - "PutImageRecipePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListComponentBuildVersionsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListComponentBuildVersionsResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListComponentBuildVersionsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListComponentsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListComponentsResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListComponentsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListDistributionConfigurationsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListDistributionConfigurationsResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListDistributionConfigurationsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListImageBuildVersionsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListImageBuildVersionsResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListImageBuildVersionsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListImagePipelineImagesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListImagePipelineImagesResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListImagePipelineImagesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListImagePipelinesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListImagePipelinesResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListImagePipelinesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListImageRecipesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListImageRecipesResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListImageRecipesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListImagesRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListImagesResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListImagesResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "ListInfrastructureConfigurationsRequest$nextToken": "A token to specify where to start paginating. This is the NextToken from a previously truncated response.
", + "ListInfrastructureConfigurationsResponse$requestId": "The request ID that uniquely identifies this request.
", + "ListInfrastructureConfigurationsResponse$nextToken": "The next token used for paginated responses. When this is not empty, there are additional elements that the service has not included in this request. Use this token with the next request to retrieve additional objects.
", + "PutComponentPolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "PutImagePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", + "PutImageRecipePolicyResponse$requestId": "The request ID that uniquely identifies this request.
", "S3Logs$s3BucketName": "The Amazon S3 bucket in which to store the logs.
", "S3Logs$s3KeyPrefix": "The Amazon S3 path in which to store the logs.
", - "Schedule$scheduleExpression": " The expression determines how often EC2 Image Builder evaluates your pipelineExecutionStartCondition
.
", + "Schedule$scheduleExpression": "The expression determines how often EC2 Image Builder evaluates your pipelineExecutionStartCondition
.
", - "StartImagePipelineExecutionResponse$requestId": "The request ID that uniquely identifies this request.
", + "StartImagePipelineExecutionResponse$requestId": "The request ID that uniquely identifies this request.
", "StringList$member": null, - "UpdateDistributionConfigurationRequest$description": "The description of the distribution configuration.
", - "UpdateDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", - "UpdateImagePipelineRequest$description": "The description of the image pipeline.
", - "UpdateImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", - "UpdateInfrastructureConfigurationRequest$description": "The description of the infrastructure configuration.
", - "UpdateInfrastructureConfigurationRequest$instanceProfileName": "The instance profile to associate with the instance used to customize your EC2 AMI.
", - "UpdateInfrastructureConfigurationRequest$subnetId": "The subnet ID to place the instance used to customize your EC2 AMI in.
", - "UpdateInfrastructureConfigurationRequest$keyPair": "The key pair of the infrastructure configuration. This can be used to log on to and debug the instance used to create your image.
", - "UpdateInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
" + "UpdateDistributionConfigurationRequest$description": "The description of the distribution configuration.
", + "UpdateDistributionConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
", + "UpdateImagePipelineRequest$description": "The description of the image pipeline.
", + "UpdateImagePipelineResponse$requestId": "The request ID that uniquely identifies this request.
", + "UpdateInfrastructureConfigurationRequest$description": "The description of the infrastructure configuration.
", + "UpdateInfrastructureConfigurationRequest$instanceProfileName": "The instance profile to associate with the instance used to customize your EC2 AMI.
", + "UpdateInfrastructureConfigurationRequest$subnetId": "The subnet ID to place the instance used to customize your EC2 AMI in.
", + "UpdateInfrastructureConfigurationRequest$keyPair": "The key pair of the infrastructure configuration. This can be used to log on to and debug the instance used to create your image.
", + "UpdateInfrastructureConfigurationResponse$requestId": "The request ID that uniquely identifies this request.
" } }, "NullableBoolean": { "base": null, "refs": { "Component$encrypted": "The encryption status of the component.
", - "CreateInfrastructureConfigurationRequest$terminateInstanceOnFailure": "The terminate instance on failure setting of the infrastructure configuration. Set to false if you want Image Builder to retain the instance used to configure your AMI if the build or test phase of your workflow fails.
", + "CreateImagePipelineRequest$enhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.
", + "CreateImageRequest$enhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.
", + "CreateInfrastructureConfigurationRequest$terminateInstanceOnFailure": "The terminate instance on failure setting of the infrastructure configuration. Set to false if you want Image Builder to retain the instance used to configure your AMI if the build or test phase of your workflow fails.
", "EbsInstanceBlockDeviceSpecification$encrypted": "Use to configure device encryption.
", "EbsInstanceBlockDeviceSpecification$deleteOnTermination": "Use to configure delete on termination of the associated device.
", + "Image$enhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.
", + "ImagePipeline$enhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.
", "ImageTestsConfiguration$imageTestsEnabled": "Defines if tests should be executed when building this image.
", "InfrastructureConfiguration$terminateInstanceOnFailure": "The terminate instance on failure configuration of the infrastructure configuration.
", - "UpdateInfrastructureConfigurationRequest$terminateInstanceOnFailure": "The terminate instance on failure setting of the infrastructure configuration. Set to false if you want Image Builder to retain the instance used to configure your AMI if the build or test phase of your workflow fails.
" + "UpdateImagePipelineRequest$enhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.
", + "UpdateInfrastructureConfigurationRequest$terminateInstanceOnFailure": "The terminate instance on failure setting of the infrastructure configuration. Set to false if you want Image Builder to retain the instance used to configure your AMI if the build or test phase of your workflow fails.
" + } + }, + "OsVersion": { + "base": null, + "refs": { + "Image$osVersion": "The operating system version of the instance. For example, Amazon Linux 2, Ubuntu 18, or Microsoft Windows Server 2019.
", + "ImageSummary$osVersion": "The operating system version of the instance. For example, Amazon Linux 2, Ubuntu 18, or Microsoft Windows Server 2019.
", + "ImageVersion$osVersion": "The operating system version of the instance. For example, Amazon Linux 2, Ubuntu 18, or Microsoft Windows Server 2019.
" } }, "OutputResources": { @@ -1146,15 +1159,15 @@ "Ownership": { "base": null, "refs": { - "ListComponentsRequest$owner": "The owner defines which components you want to list. By default, this request will only show components owned by your account. You can use this field to specify if you want to view components owned by yourself, by Amazon, or those components that have been shared with you by other customers.
", - "ListImageRecipesRequest$owner": "The owner defines which image recipes you want to list. By default, this request will only show image recipes owned by your account. You can use this field to specify if you want to view image recipes owned by yourself, by Amazon, or those image recipes that have been shared with you by other customers.
", - "ListImagesRequest$owner": "The owner defines which images you want to list. By default, this request will only show images owned by your account. You can use this field to specify if you want to view images owned by yourself, by Amazon, or those images that have been shared with you by other customers.
" + "ListComponentsRequest$owner": "The owner defines which components you want to list. By default, this request will only show components owned by your account. You can use this field to specify if you want to view components owned by yourself, by Amazon, or those components that have been shared with you by other customers.
", + "ListImageRecipesRequest$owner": "The owner defines which image recipes you want to list. By default, this request will only show image recipes owned by your account. You can use this field to specify if you want to view image recipes owned by yourself, by Amazon, or those image recipes that have been shared with you by other customers.
", + "ListImagesRequest$owner": "The owner defines which images you want to list. By default, this request will only show images owned by your account. You can use this field to specify if you want to view images owned by yourself, by Amazon, or those images that have been shared with you by other customers.
" } }, "PipelineExecutionStartCondition": { "base": null, "refs": { - "Schedule$pipelineExecutionStartCondition": " The condition configures when the pipeline should trigger a new image build. When the pipelineExecutionStartCondition
is set to EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE
, EC2 Image Builder will build a new image only when there are known changes pending. When it is set to EXPRESSION_MATCH_ONLY
, it will build a new image every time the CRON expression matches the current time.
", + "Schedule$pipelineExecutionStartCondition": "The condition configures when the pipeline should trigger a new image build. When the pipelineExecutionStartCondition
is set to EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE
, EC2 Image Builder will build a new image only when there are known changes pending. When it is set to EXPRESSION_MATCH_ONLY
, it will build a new image every time the CRON expression matches the current time.
The status of the image pipeline.
", "ImagePipeline$status": "The status of the image pipeline.
", - "UpdateImagePipelineRequest$status": "The status of the image pipeline.
" + "UpdateImagePipelineRequest$status": "The status of the image pipeline.
" } }, "Platform": { @@ -1178,7 +1191,7 @@ "ImageRecipeSummary$platform": "The platform of the image recipe.
", "ImageSummary$platform": "The platform of the image.
", "ImageVersion$platform": "The platform of the image semantic version.
", - "ImportComponentRequest$platform": "The platform of the component.
" + "ImportComponentRequest$platform": "The platform of the component.
" } }, "PutComponentPolicyRequest": { @@ -1236,7 +1249,7 @@ "CreateDistributionConfigurationRequest$name": "The name of the distribution configuration.
", "CreateImagePipelineRequest$name": "The name of the image pipeline.
", "CreateImageRecipeRequest$name": "The name of the image recipe.
", - "CreateInfrastructureConfigurationRequest$name": "The name of the infrastructure configuration.
", + "CreateInfrastructureConfigurationRequest$name": "The name of the infrastructure configuration.
", "DistributionConfiguration$name": "The name of the distribution configuration.
", "DistributionConfigurationSummary$name": "The name of the distribution configuration.
", "Image$name": "The name of the image.
", @@ -1259,26 +1272,26 @@ "ResourcePolicyDocument": { "base": null, "refs": { - "GetComponentPolicyResponse$policy": "The component policy.
", - "GetImagePolicyResponse$policy": "The image policy object.
", - "GetImageRecipePolicyResponse$policy": "The image recipe policy object.
", - "PutComponentPolicyRequest$policy": "The policy to apply.
", - "PutImagePolicyRequest$policy": "The policy to apply.
", - "PutImageRecipePolicyRequest$policy": "The policy to apply.
" + "GetComponentPolicyResponse$policy": "The component policy.
", + "GetImagePolicyResponse$policy": "The image policy object.
", + "GetImageRecipePolicyResponse$policy": "The image recipe policy object.
", + "PutComponentPolicyRequest$policy": "The policy to apply.
", + "PutImagePolicyRequest$policy": "The policy to apply.
", + "PutImageRecipePolicyRequest$policy": "The policy to apply.
" } }, "RestrictedInteger": { "base": null, "refs": { - "ListComponentBuildVersionsRequest$maxResults": "The maximum items to return in a request.
", - "ListComponentsRequest$maxResults": "The maximum items to return in a request.
", - "ListDistributionConfigurationsRequest$maxResults": "The maximum items to return in a request.
", - "ListImageBuildVersionsRequest$maxResults": "The maximum items to return in a request.
", - "ListImagePipelineImagesRequest$maxResults": "The maximum items to return in a request.
", - "ListImagePipelinesRequest$maxResults": "The maximum items to return in a request.
", - "ListImageRecipesRequest$maxResults": "The maximum items to return in a request.
", - "ListImagesRequest$maxResults": "The maximum items to return in a request.
", - "ListInfrastructureConfigurationsRequest$maxResults": "The maximum items to return in a request.
" + "ListComponentBuildVersionsRequest$maxResults": "The maximum items to return in a request.
", + "ListComponentsRequest$maxResults": "The maximum items to return in a request.
", + "ListDistributionConfigurationsRequest$maxResults": "The maximum items to return in a request.
", + "ListImageBuildVersionsRequest$maxResults": "The maximum items to return in a request.
", + "ListImagePipelineImagesRequest$maxResults": "The maximum items to return in a request.
", + "ListImagePipelinesRequest$maxResults": "The maximum items to return in a request.
", + "ListImageRecipesRequest$maxResults": "The maximum items to return in a request.
", + "ListImagesRequest$maxResults": "The maximum items to return in a request.
", + "ListInfrastructureConfigurationsRequest$maxResults": "The maximum items to return in a request.
" } }, "S3Logs": { @@ -1288,19 +1301,19 @@ } }, "Schedule": { - "base": "A schedule configures how often and when a pipeline will automatically create a new image.
", + "base": "A schedule configures how often and when a pipeline will automatically create a new image.
", "refs": { "CreateImagePipelineRequest$schedule": "The schedule of the image pipeline.
", "ImagePipeline$schedule": "The schedule of the image pipeline.
", - "UpdateImagePipelineRequest$schedule": "The schedule of the image pipeline.
" + "UpdateImagePipelineRequest$schedule": "The schedule of the image pipeline.
" } }, "SecurityGroupIds": { "base": null, "refs": { - "CreateInfrastructureConfigurationRequest$securityGroupIds": "The security group IDs to associate with the instance used to customize your EC2 AMI.
", + "CreateInfrastructureConfigurationRequest$securityGroupIds": "The security group IDs to associate with the instance used to customize your EC2 AMI.
", "InfrastructureConfiguration$securityGroupIds": "The security group IDs of the infrastructure configuration.
", - "UpdateInfrastructureConfigurationRequest$securityGroupIds": "The security group IDs to associate with the instance used to customize your EC2 AMI.
" + "UpdateInfrastructureConfigurationRequest$securityGroupIds": "The security group IDs to associate with the instance used to customize your EC2 AMI.
" } }, "ServiceException": { @@ -1316,8 +1329,8 @@ "SnsTopicArn": { "base": null, "refs": { - "CreateInfrastructureConfigurationRequest$snsTopicArn": "The SNS topic on which to send image build events.
", - "UpdateInfrastructureConfigurationRequest$snsTopicArn": "The SNS topic on which to send image build events.
" + "CreateInfrastructureConfigurationRequest$snsTopicArn": "The SNS topic on which to send image build events.
", + "UpdateInfrastructureConfigurationRequest$snsTopicArn": "The SNS topic on which to send image build events.
" } }, "StartImagePipelineExecutionRequest": { @@ -1346,13 +1359,13 @@ "TagKeyList": { "base": null, "refs": { - "UntagResourceRequest$tagKeys": "The tag keys to remove from the resource.
" + "UntagResourceRequest$tagKeys": "The tag keys to remove from the resource.
" } }, "TagMap": { "base": null, "refs": { - "AmiDistributionConfiguration$amiTags": "The tags to apply to AMIs distributed to this Region.
", + "AmiDistributionConfiguration$amiTags": "The tags to apply to AMIs distributed to this Region.
", "Component$tags": "The tags associated with the component.
", "ComponentSummary$tags": "The tags associated with the component.
", "CreateComponentRequest$tags": "The tags of the component.
", @@ -1360,7 +1373,7 @@ "CreateImagePipelineRequest$tags": "The tags of the image pipeline.
", "CreateImageRecipeRequest$tags": "The tags of the image recipe.
", "CreateImageRequest$tags": "The tags of the image.
", - "CreateInfrastructureConfigurationRequest$tags": "The tags of the infrastructure configuration.
", + "CreateInfrastructureConfigurationRequest$tags": "The tags of the infrastructure configuration.
", "DistributionConfiguration$tags": "The tags of the distribution configuration.
", "DistributionConfigurationSummary$tags": "The tags associated with the distribution configuration.
", "Image$tags": "The tags of the image.
", @@ -1368,11 +1381,11 @@ "ImageRecipe$tags": "The tags of the image recipe.
", "ImageRecipeSummary$tags": "The tags of the image recipe.
", "ImageSummary$tags": "The tags of the image.
", - "ImportComponentRequest$tags": "The tags of the component.
", + "ImportComponentRequest$tags": "The tags of the component.
", "InfrastructureConfiguration$tags": "The tags of the infrastructure configuration.
", "InfrastructureConfigurationSummary$tags": "The tags of the infrastructure configuration.
", - "ListTagsForResourceResponse$tags": "The tags for the specified resource.
", - "TagResourceRequest$tags": "The tags to apply to the resource.
" + "ListTagsForResourceResponse$tags": "The tags for the specified resource.
", + "TagResourceRequest$tags": "The tags to apply to the resource.
" } }, "TagResourceRequest": { @@ -1445,7 +1458,7 @@ "ComponentSummary$version": "The version of the component.
", "ComponentVersion$version": "The semantic version of the component.
", "CreateComponentRequest$semanticVersion": "The semantic version of the component. This version follows the semantic version syntax. For example, major.minor.patch. This could be versioned like software (2.0.1) or like a date (2019.12.01).
", - "CreateImageRecipeRequest$semanticVersion": "The semantic version of the image recipe.
", + "CreateImageRecipeRequest$semanticVersion": "The semantic version of the image recipe.
", "Image$version": "The semantic version of the image.
", "ImageRecipe$version": "The version of the image recipe.
", "ImageSummary$version": "The version of the image.
", diff --git a/models/apis/iot/2015-05-28/api-2.json b/models/apis/iot/2015-05-28/api-2.json index e8ff00b58bc..9e40dce2e09 100644 --- a/models/apis/iot/2015-05-28/api-2.json +++ b/models/apis/iot/2015-05-28/api-2.json @@ -303,6 +303,22 @@ {"shape":"InternalFailureException"} ] }, + "CreateDimension":{ + "name":"CreateDimension", + "http":{ + "method":"POST", + "requestUri":"/dimensions/{name}" + }, + "input":{"shape":"CreateDimensionRequest"}, + "output":{"shape":"CreateDimensionResponse"}, + "errors":[ + {"shape":"InternalFailureException"}, + {"shape":"InvalidRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"ThrottlingException"} + ] + }, "CreateDomainConfiguration":{ "name":"CreateDomainConfiguration", "http":{ @@ -731,6 +747,20 @@ {"shape":"ResourceNotFoundException"} ] }, + "DeleteDimension":{ + "name":"DeleteDimension", + "http":{ + "method":"DELETE", + "requestUri":"/dimensions/{name}" + }, + "input":{"shape":"DeleteDimensionRequest"}, + "output":{"shape":"DeleteDimensionResponse"}, + "errors":[ + {"shape":"InternalFailureException"}, + {"shape":"InvalidRequestException"}, + {"shape":"ThrottlingException"} + ] + }, "DeleteDomainConfiguration":{ "name":"DeleteDomainConfiguration", "http":{ @@ -1228,6 +1258,21 @@ {"shape":"InternalFailureException"} ] }, + "DescribeDimension":{ + "name":"DescribeDimension", + "http":{ + "method":"GET", + "requestUri":"/dimensions/{name}" + }, + "input":{"shape":"DescribeDimensionRequest"}, + "output":{"shape":"DescribeDimensionResponse"}, + "errors":[ + {"shape":"InternalFailureException"}, + {"shape":"InvalidRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"} + ] + }, "DescribeDomainConfiguration":{ "name":"DescribeDomainConfiguration", "http":{ @@ -1992,6 +2037,20 @@ {"shape":"InternalFailureException"} ] }, + "ListDimensions":{ + "name":"ListDimensions", + "http":{ + "method":"GET", + "requestUri":"/dimensions" + }, + "input":{"shape":"ListDimensionsRequest"}, + "output":{"shape":"ListDimensionsResponse"}, + "errors":[ + {"shape":"InternalFailureException"}, + {"shape":"InvalidRequestException"}, + {"shape":"ThrottlingException"} + ] + }, "ListDomainConfigurations":{ "name":"ListDomainConfigurations", "http":{ @@ -2273,7 +2332,8 @@ "errors":[ {"shape":"InvalidRequestException"}, {"shape":"ThrottlingException"}, - {"shape":"InternalFailureException"} + {"shape":"InternalFailureException"}, + {"shape":"ResourceNotFoundException"} ] }, "ListSecurityProfilesForTarget":{ @@ -2992,6 +3052,21 @@ {"shape":"InternalFailureException"} ] }, + "UpdateDimension":{ + "name":"UpdateDimension", + "http":{ + "method":"PATCH", + "requestUri":"/dimensions/{name}" + }, + "input":{"shape":"UpdateDimensionRequest"}, + "output":{"shape":"UpdateDimensionResponse"}, + "errors":[ + {"shape":"InternalFailureException"}, + {"shape":"InvalidRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"} + ] + }, "UpdateDomainConfiguration":{ "name":"UpdateDomainConfiguration", "http":{ @@ -3401,6 +3476,10 @@ "type":"list", "member":{"shape":"BehaviorMetric"} }, + "AdditionalMetricsToRetainV2List":{ + "type":"list", + "member":{"shape":"MetricToRetain"} + }, "AdditionalParameterMap":{ "type":"map", "key":{"shape":"AttributeKey"}, @@ -3956,6 +4035,7 @@ "members":{ "name":{"shape":"BehaviorName"}, "metric":{"shape":"BehaviorMetric"}, + "metricDimension":{"shape":"MetricDimension"}, "criteria":{"shape":"BehaviorCriteria"} } }, @@ -4490,6 
+4570,36 @@ "certificatePem":{"shape":"CertificatePem"} } }, + "CreateDimensionRequest":{ + "type":"structure", + "required":[ + "name", + "type", + "stringValues", + "clientRequestToken" + ], + "members":{ + "name":{ + "shape":"DimensionName", + "location":"uri", + "locationName":"name" + }, + "type":{"shape":"DimensionType"}, + "stringValues":{"shape":"DimensionStringValues"}, + "tags":{"shape":"TagList"}, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "idempotencyToken":true + } + } + }, + "CreateDimensionResponse":{ + "type":"structure", + "members":{ + "name":{"shape":"DimensionName"}, + "arn":{"shape":"DimensionArn"} + } + }, "CreateDomainConfigurationRequest":{ "type":"structure", "required":["domainConfigurationName"], @@ -4842,7 +4952,12 @@ "securityProfileDescription":{"shape":"SecurityProfileDescription"}, "behaviors":{"shape":"Behaviors"}, "alertTargets":{"shape":"AlertTargets"}, - "additionalMetricsToRetain":{"shape":"AdditionalMetricsToRetainList"}, + "additionalMetricsToRetain":{ + "shape":"AdditionalMetricsToRetainList", + "deprecated":true, + "deprecatedMessage":"Use additionalMetricsToRetainV2." + }, + "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "tags":{"shape":"TagList"} } }, @@ -5112,6 +5227,22 @@ "error":{"httpStatusCode":409}, "exception":true }, + "DeleteDimensionRequest":{ + "type":"structure", + "required":["name"], + "members":{ + "name":{ + "shape":"DimensionName", + "location":"uri", + "locationName":"name" + } + } + }, + "DeleteDimensionResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteDomainConfigurationRequest":{ "type":"structure", "required":["domainConfigurationName"], @@ -5678,6 +5809,28 @@ "authorizerDescription":{"shape":"AuthorizerDescription"} } }, + "DescribeDimensionRequest":{ + "type":"structure", + "required":["name"], + "members":{ + "name":{ + "shape":"DimensionName", + "location":"uri", + "locationName":"name" + } + } + }, + "DescribeDimensionResponse":{ + "type":"structure", + "members":{ + "name":{"shape":"DimensionName"}, + "arn":{"shape":"DimensionArn"}, + "type":{"shape":"DimensionType"}, + "stringValues":{"shape":"DimensionStringValues"}, + "creationDate":{"shape":"Timestamp"}, + "lastModifiedDate":{"shape":"Timestamp"} + } + }, "DescribeDomainConfigurationRequest":{ "type":"structure", "required":["domainConfigurationName"], @@ -5933,7 +6086,12 @@ "securityProfileDescription":{"shape":"SecurityProfileDescription"}, "behaviors":{"shape":"Behaviors"}, "alertTargets":{"shape":"AlertTargets"}, - "additionalMetricsToRetain":{"shape":"AdditionalMetricsToRetainList"}, + "additionalMetricsToRetain":{ + "shape":"AdditionalMetricsToRetainList", + "deprecated":true, + "deprecatedMessage":"Use additionalMetricsToRetainV2." 
+ }, + "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "version":{"shape":"Version"}, "creationDate":{"shape":"Timestamp"}, "lastModifiedDate":{"shape":"Timestamp"} @@ -6170,6 +6328,39 @@ "max":128, "min":1 }, + "DimensionArn":{"type":"string"}, + "DimensionName":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9:_-]+" + }, + "DimensionNames":{ + "type":"list", + "member":{"shape":"DimensionName"} + }, + "DimensionStringValue":{ + "type":"string", + "max":256, + "min":1 + }, + "DimensionStringValues":{ + "type":"list", + "member":{"shape":"DimensionStringValue"}, + "max":100, + "min":1 + }, + "DimensionType":{ + "type":"string", + "enum":["TOPIC_FILTER"] + }, + "DimensionValueOperator":{ + "type":"string", + "enum":[ + "IN", + "NOT_IN" + ] + }, "DisableAllLogs":{"type":"boolean"}, "DisableTopicRuleRequest":{ "type":"structure", @@ -7534,6 +7725,28 @@ "nextMarker":{"shape":"Marker"} } }, + "ListDimensionsRequest":{ + "type":"structure", + "members":{ + "nextToken":{ + "shape":"NextToken", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListDimensionsResponse":{ + "type":"structure", + "members":{ + "dimensionNames":{"shape":"DimensionNames"}, + "nextToken":{"shape":"NextToken"} + } + }, "ListDomainConfigurationsRequest":{ "type":"structure", "members":{ @@ -8054,6 +8267,11 @@ "shape":"MaxResults", "location":"querystring", "locationName":"maxResults" + }, + "dimensionName":{ + "shape":"DimensionName", + "location":"querystring", + "locationName":"dimensionName" } } }, @@ -8656,6 +8874,22 @@ "type":"string", "max":128 }, + "MetricDimension":{ + "type":"structure", + "required":["dimensionName"], + "members":{ + "dimensionName":{"shape":"DimensionName"}, + "operator":{"shape":"DimensionValueOperator"} + } + }, + "MetricToRetain":{ + "type":"structure", + "required":["metric"], + "members":{ + "metric":{"shape":"BehaviorMetric"}, + "metricDimension":{"shape":"MetricDimension"} + } + }, "MetricValue":{ "type":"structure", "members":{ @@ -10688,6 +10922,32 @@ "action":{"shape":"DeviceCertificateUpdateAction"} } }, + "UpdateDimensionRequest":{ + "type":"structure", + "required":[ + "name", + "stringValues" + ], + "members":{ + "name":{ + "shape":"DimensionName", + "location":"uri", + "locationName":"name" + }, + "stringValues":{"shape":"DimensionStringValues"} + } + }, + "UpdateDimensionResponse":{ + "type":"structure", + "members":{ + "name":{"shape":"DimensionName"}, + "arn":{"shape":"DimensionArn"}, + "type":{"shape":"DimensionType"}, + "stringValues":{"shape":"DimensionStringValues"}, + "creationDate":{"shape":"Timestamp"}, + "lastModifiedDate":{"shape":"Timestamp"} + } + }, "UpdateDomainConfigurationRequest":{ "type":"structure", "required":["domainConfigurationName"], @@ -10866,7 +11126,12 @@ "securityProfileDescription":{"shape":"SecurityProfileDescription"}, "behaviors":{"shape":"Behaviors"}, "alertTargets":{"shape":"AlertTargets"}, - "additionalMetricsToRetain":{"shape":"AdditionalMetricsToRetainList"}, + "additionalMetricsToRetain":{ + "shape":"AdditionalMetricsToRetainList", + "deprecated":true, + "deprecatedMessage":"Use additionalMetricsToRetainV2." 
+ }, + "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "deleteBehaviors":{"shape":"DeleteBehaviors"}, "deleteAlertTargets":{"shape":"DeleteAlertTargets"}, "deleteAdditionalMetricsToRetain":{"shape":"DeleteAdditionalMetricsToRetain"}, @@ -10885,7 +11150,12 @@ "securityProfileDescription":{"shape":"SecurityProfileDescription"}, "behaviors":{"shape":"Behaviors"}, "alertTargets":{"shape":"AlertTargets"}, - "additionalMetricsToRetain":{"shape":"AdditionalMetricsToRetainList"}, + "additionalMetricsToRetain":{ + "shape":"AdditionalMetricsToRetainList", + "deprecated":true, + "deprecatedMessage":"Use additionalMetricsToRetainV2." + }, + "additionalMetricsToRetainV2":{"shape":"AdditionalMetricsToRetainV2List"}, "version":{"shape":"Version"}, "creationDate":{"shape":"Timestamp"}, "lastModifiedDate":{"shape":"Timestamp"} diff --git a/models/apis/iot/2015-05-28/docs-2.json b/models/apis/iot/2015-05-28/docs-2.json index a750bbf3aa8..f114284839f 100644 --- a/models/apis/iot/2015-05-28/docs-2.json +++ b/models/apis/iot/2015-05-28/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "AWS IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the AWS cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices.
For more information about how AWS IoT works, see the Developer Guide.
For information about how to use the credentials provider for AWS IoT, see Authorizing Direct Calls to AWS Services.
", + "service": "AWS IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the AWS cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices.
The service endpoints that expose this API are listed in AWS IoT Core Endpoints and Quotas. You must use the endpoint for the region that has the resources you want to access.
The service name used by AWS Signature Version 4 to sign the request is: execute-api.
For more information about how AWS IoT works, see the Developer Guide.
For information about how to use the credentials provider for AWS IoT, see Authorizing Direct Calls to AWS Services.
", "operations": { "AcceptCertificateTransfer": "Accepts a pending certificate transfer. The default state of the certificate is INACTIVE.
To check for pending certificate transfers, call ListCertificates to enumerate your certificates.
", "AddThingToBillingGroup": "Adds a thing to a billing group.
", @@ -20,6 +20,7 @@ "CreateAuthorizer": "Creates an authorizer.
", "CreateBillingGroup": "Creates a billing group.
", "CreateCertificateFromCsr": "Creates an X.509 certificate using the specified certificate signing request.
Note: The CSR must include a public key that is either an RSA key with a length of at least 2048 bits or an ECC key from NIST P-256 or NIST P-384 curves.
Note: Reusing the same certificate signing request (CSR) results in a distinct certificate.
You can create multiple certificates in a batch by creating a directory, copying multiple .csr files into that directory, and then specifying that directory on the command line. The following commands show how to create a batch of certificates given a batch of CSRs.
Assuming a set of CSRs are located inside of the directory my-csr-directory:
On Linux and OS X, the command is:
$ ls my-csr-directory/ | xargs -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
This command lists all of the CSRs in my-csr-directory and pipes each CSR file name to the aws iot create-certificate-from-csr AWS CLI command to create a certificate for the corresponding CSR.
The aws iot create-certificate-from-csr part of the command can also be run in parallel to speed up the certificate creation process:
$ ls my-csr-directory/ | xargs -P 10 -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
On Windows PowerShell, the command to create certificates for all CSRs in my-csr-directory is:
> ls -Name my-csr-directory | %{aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/$_}
On a Windows command prompt, the command to create certificates for all CSRs in my-csr-directory is:
> forfiles /p my-csr-directory /c \"cmd /c aws iot create-certificate-from-csr --certificate-signing-request file://@path\"
", + "CreateDimension": "Create a dimension that you can use to limit the scope of a metric used in a security profile for AWS IoT Device Defender. For example, using a TOPIC_FILTER
dimension, you can narrow down the scope of the metric only to MQTT topics whose names match the pattern specified in the dimension.
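For illustration only (not part of the upstream model files): a minimal Go sketch of calling the new CreateDimension operation, assuming this release's `OperationRequest(...).Send(ctx)` client pattern and field/constant names generated from the shapes above (`CreateDimensionInput`, `DimensionTypeTopicFilter`); the dimension name and topic filter are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iot"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := iot.New(cfg)

	// Create a TOPIC_FILTER dimension that scopes Device Defender metrics
	// to MQTT topics matching the "admin/#" pattern.
	req := client.CreateDimensionRequest(&iot.CreateDimensionInput{
		Name:         aws.String("AdminTopics"),
		Type:         iot.DimensionTypeTopicFilter,
		StringValues: []string{"admin/#"},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("create dimension: %v", err)
	}
	if resp.Arn != nil {
		fmt.Println("created dimension:", *resp.Arn)
	}
}
```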
Creates a domain configuration.
The domain configuration feature is in public preview and is subject to change.
Creates a dynamic thing group.
", "CreateJob": "Creates a job.
", @@ -45,6 +46,7 @@ "DeleteBillingGroup": "Deletes the billing group.
", "DeleteCACertificate": "Deletes a registered CA certificate.
", "DeleteCertificate": "Deletes the specified certificate.
A certificate cannot be deleted if it has a policy or IoT thing attached to it or if its status is set to ACTIVE. To delete a certificate, first use the DetachPrincipalPolicy API to detach all policies. Next, use the UpdateCertificate API to set the certificate to the INACTIVE status.
", + "DeleteDimension": "Removes the specified dimension from your AWS account.
", "DeleteDomainConfiguration": "Deletes the specified domain configuration.
The domain configuration feature is in public preview and is subject to change.
Deletes a dynamic thing group.
", "DeleteJob": "Deletes a job and its related job executions.
Deleting a job may take time, depending on the number of job executions created for the job and various other factors. While the job is being deleted, the status of the job will be shown as \"DELETION_IN_PROGRESS\". Attempting to delete or cancel a job whose status is already \"DELETION_IN_PROGRESS\" will result in an error.
Only 10 jobs may have status \"DELETION_IN_PROGRESS\" at the same time, or a LimitExceededException will occur.
", @@ -76,6 +78,7 @@ "DescribeCACertificate": "Describes a registered CA certificate.
", "DescribeCertificate": "Gets information about the specified certificate.
", "DescribeDefaultAuthorizer": "Describes the default authorizer.
", + "DescribeDimension": "Provides details about a dimension that is defined in your AWS account.
", "DescribeDomainConfiguration": "Gets summary information about a domain configuration.
The domain configuration feature is in public preview and is subject to change.
Returns a unique endpoint specific to the AWS account making the call.
", "DescribeEventConfigurations": "Describes event configurations.
", @@ -124,6 +127,7 @@ "ListCACertificates": "Lists the CA certificates registered for your AWS account.
The results are paginated with a default page size of 25. You can use the returned marker to retrieve additional results.
", "ListCertificates": "Lists the certificates registered in your AWS account.
The results are paginated with a default page size of 25. You can use the returned marker to retrieve additional results.
", "ListCertificatesByCA": "List the device certificates signed by the specified CA certificate.
", + "ListDimensions": "List the set of dimensions that are defined for your AWS account.
", "ListDomainConfigurations": "Gets a list of domain configurations for the user. This list is sorted alphabetically by domain configuration name.
The domain configuration feature is in public preview and is subject to change.
Lists the search indices.
", "ListJobExecutionsForJob": "Lists the job executions for a job.
", @@ -187,6 +191,7 @@ "UpdateBillingGroup": "Updates information about the billing group.
", "UpdateCACertificate": "Updates a registered CA certificate.
", "UpdateCertificate": "Updates the status of the specified certificate. This operation is idempotent.
Moving a certificate from the ACTIVE state (including REVOKED) will not disconnect currently connected devices, but these devices will be unable to reconnect.
The ACTIVE state is required to authenticate devices connecting to AWS IoT using a certificate.
", + "UpdateDimension": "Updates the definition for a dimension. You cannot change the type of a dimension after it is created (you can delete it and re-create it).
", "UpdateDomainConfiguration": "Updates values stored in the domain configuration. Domain configurations for default endpoints can't be updated.
The domain configuration feature is in public preview and is subject to change.
Updates a dynamic thing group.
", "UpdateEventConfigurations": "Updates the event configurations.
", @@ -312,10 +317,19 @@ "AdditionalMetricsToRetainList": { "base": null, "refs": { - "CreateSecurityProfileRequest$additionalMetricsToRetain": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the security profile's behaviors
, but it is also retained for any metric specified here.
A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
Note: This API field is deprecated. Please use CreateSecurityProfileRequest$additionalMetricsToRetainV2 instead.
", + "DescribeSecurityProfileResponse$additionalMetricsToRetain": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
Note: This API field is deprecated. Please use DescribeSecurityProfileResponse$additionalMetricsToRetainV2 instead.
", + "UpdateSecurityProfileRequest$additionalMetricsToRetain": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
Note: This API field is deprecated. Please use UpdateSecurityProfileRequest$additionalMetricsToRetainV2 instead.
", + "UpdateSecurityProfileResponse$additionalMetricsToRetain": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the security profile's behaviors
, but it is also retained for any metric specified here.
Note: This API field is deprecated. Please use UpdateSecurityProfileResponse$additionalMetricsToRetainV2 instead.
" + } + }, + "AdditionalMetricsToRetainV2List": { + "base": null, + "refs": { + "CreateSecurityProfileRequest$additionalMetricsToRetainV2": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors
, but it is also retained for any metric specified here.
A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors, but it is also retained for any metric specified here.
", + "UpdateSecurityProfileRequest$additionalMetricsToRetainV2": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors, but it is also retained for any metric specified here.
", + "UpdateSecurityProfileResponse$additionalMetricsToRetainV2": "A list of metrics whose data is retained (stored). By default, data is retained for any metric used in the profile's behaviors, but it is also retained for any metric specified here.
" } }, "AdditionalParameterMap": { @@ -1003,7 +1017,8 @@ "base": null, "refs": { "AdditionalMetricsToRetainList$member": null, - "Behavior$metric": "What is measured by the behavior.
" + "Behavior$metric": "What is measured by the behavior.
", + "MetricToRetain$metric": "What is measured by the behavior.
" } }, "BehaviorName": { @@ -1370,6 +1385,7 @@ "ClientRequestToken": { "base": null, "refs": { + "CreateDimensionRequest$clientRequestToken": "Each dimension must have a unique client request token. If you try to create a new dimension with the same token as a dimension that already exists, an exception occurs. If you omit this value, AWS SDKs will automatically generate a unique client request.
", "StartAuditMitigationActionsTaskRequest$clientRequestToken": "Each audit mitigation task must have a unique client request token. If you try to start a new task with the same token as a task that already exists, an exception occurs. If you omit this value, a unique client request token is generated automatically.
" } }, @@ -1380,9 +1396,9 @@ } }, "CloudwatchLogsAction": { - "base": "Describes an action that sends data to CloudWatch logs.
", + "base": "Describes an action that sends data to CloudWatch Logs.
", "refs": { - "Action$cloudwatchLogs": "Send data to CloudWatch logs.
" + "Action$cloudwatchLogs": "Send data to CloudWatch Logs.
" } }, "CloudwatchMetricAction": { @@ -1527,6 +1543,16 @@ "refs": { } }, + "CreateDimensionRequest": { + "base": null, + "refs": { + } + }, + "CreateDimensionResponse": { + "base": null, + "refs": { + } + }, "CreateDomainConfigurationRequest": { "base": null, "refs": { @@ -1897,6 +1923,16 @@ "refs": { } }, + "DeleteDimensionRequest": { + "base": null, + "refs": { + } + }, + "DeleteDimensionResponse": { + "base": null, + "refs": { + } + }, "DeleteDomainConfigurationRequest": { "base": null, "refs": { @@ -2207,6 +2243,16 @@ "refs": { } }, + "DescribeDimensionRequest": { + "base": null, + "refs": { + } + }, + "DescribeDimensionResponse": { + "base": null, + "refs": { + } + }, "DescribeDomainConfigurationRequest": { "base": null, "refs": { @@ -2454,6 +2500,64 @@ "ViolationEvent$thingName": "The name of the thing responsible for the violation event.
" } }, + "DimensionArn": { + "base": null, + "refs": { + "CreateDimensionResponse$arn": "The ARN (Amazon resource name) of the created dimension.
", + "DescribeDimensionResponse$arn": "The ARN (Amazon resource name) for the dimension.
", + "UpdateDimensionResponse$arn": "The ARN (Amazon resource name) of the created dimension.
" + } + }, + "DimensionName": { + "base": null, + "refs": { + "CreateDimensionRequest$name": "A unique identifier for the dimension. Choose something that describes the type and value to make it easy to remember what it does.
", + "CreateDimensionResponse$name": "A unique identifier for the dimension.
", + "DeleteDimensionRequest$name": "The unique identifier for the dimension that you want to delete.
", + "DescribeDimensionRequest$name": "The unique identifier for the dimension.
", + "DescribeDimensionResponse$name": "The unique identifier for the dimension.
", + "DimensionNames$member": null, + "ListSecurityProfilesRequest$dimensionName": "A filter to limit results to the security profiles that use the defined dimension.
", + "MetricDimension$dimensionName": "A unique identifier for the dimension.
", + "UpdateDimensionRequest$name": "A unique identifier for the dimension. Choose something that describes the type and value to make it easy to remember what it does.
", + "UpdateDimensionResponse$name": "A unique identifier for the dimension.
" + } + }, + "DimensionNames": { + "base": null, + "refs": { + "ListDimensionsResponse$dimensionNames": "A list of the names of the defined dimensions. Use DescribeDimension
to get details for a dimension.
Specifies the value or list of values for the dimension. For TOPIC_FILTER
dimensions, this is a pattern used to match the MQTT topic (for example, \"admin/#\").
The value or list of values used to scope the dimension. For example, for topic filters, this is the pattern used to match the MQTT topic name.
", + "UpdateDimensionRequest$stringValues": "Specifies the value or list of values for the dimension. For TOPIC_FILTER
dimensions, this is a pattern used to match the MQTT topic (for example, \"admin/#\").
The value or list of values used to scope the dimension. For example, for topic filters, this is the pattern used to match the MQTT topic name.
" + } + }, + "DimensionType": { + "base": null, + "refs": { + "CreateDimensionRequest$type": "Specifies the type of dimension. Supported types: TOPIC_FILTER.
The type of the dimension.
", + "UpdateDimensionResponse$type": "The type of the dimension.
" + } + }, + "DimensionValueOperator": { + "base": null, + "refs": { + "MetricDimension$operator": "Defines how the dimensionValues
of a dimension are interpreted. For example, for DimensionType TOPIC_FILTER, with IN
operator, a message will be counted only if its topic matches one of the topic filters. With NOT_IN
operator, a message will be counted only if it doesn't match any of the topic filters. The operator is optional: if it's not provided (is null
), it will be interpreted as IN
.
The maximum number of results to return at one time. The default is 25.
", "ListAuditMitigationActionsTasksRequest$maxResults": "The maximum number of results to return at one time. The default is 25.
", "ListAuditTasksRequest$maxResults": "The maximum number of results to return at one time. The default is 25.
", + "ListDimensionsRequest$maxResults": "The maximum number of results to retrieve at one time.
", "ListMitigationActionsRequest$maxResults": "The maximum number of results to return at one time. The default is 25.
", "ListOTAUpdatesRequest$maxResults": "The maximum number of results to return at one time.
", "ListProvisioningTemplateVersionsRequest$maxResults": "The maximum number of results to return at one time.
", @@ -4115,6 +4230,19 @@ "IotEventsAction$messageId": "[Optional] Use this to ensure that only one input (message) with a given messageId will be processed by an AWS IoT Events detector.
" } }, + "MetricDimension": { + "base": "The dimension of a metric.
", + "refs": { + "Behavior$metricDimension": "The dimension for a metric in your behavior. For example, using a TOPIC_FILTER
dimension, you can narrow down the scope of the metric only to MQTT topics whose names match the pattern specified in the dimension.
The dimension of a metric.
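For illustration only: a hedged Go sketch of how the new MetricToRetain and MetricDimension shapes are combined in additionalMetricsToRetainV2, assuming generated names such as `iot.MetricToRetain` and `iot.DimensionValueOperatorIn`; the profile, metric, and dimension names are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/iot"
)

// retainScopedMetric retains the aws:num-messages-sent metric only for
// messages whose topic matches the AdminTopics dimension (IN is also the
// default operator when none is set).
func retainScopedMetric(ctx context.Context, cfg aws.Config) error {
	client := iot.New(cfg)
	req := client.UpdateSecurityProfileRequest(&iot.UpdateSecurityProfileInput{
		SecurityProfileName: aws.String("MySecurityProfile"),
		AdditionalMetricsToRetainV2: []iot.MetricToRetain{
			{
				Metric: aws.String("aws:num-messages-sent"),
				MetricDimension: &iot.MetricDimension{
					DimensionName: aws.String("AdminTopics"),
					Operator:      iot.DimensionValueOperatorIn,
				},
			},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```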
" + } + }, + "MetricToRetain": { + "base": "The metric you want to retain. Dimensions are optional.
", + "refs": { + "AdditionalMetricsToRetainV2List$member": null + } + }, "MetricValue": { "base": "The value to be compared with the metric
.
A token that can be used to retrieve the next set of results, or null
if there are no additional results.
The token to retrieve the next set of results.
", "ListBillingGroupsResponse$nextToken": "The token used to get the next set of results, or null if there are no additional results.
", + "ListDimensionsRequest$nextToken": "The token for the next set of results.
", + "ListDimensionsResponse$nextToken": "A token that can be used to retrieve the next set of results, or null
if there are no additional results.
The token used to get the next set of results, or null
if there are no additional results.
The token used to get the next set of results, or null
if there are no additional results.
The token to retrieve the next set of results.
", @@ -5932,6 +6062,7 @@ "base": null, "refs": { "CreateBillingGroupRequest$tags": "Metadata which can be used to manage the billing group.
", + "CreateDimensionRequest$tags": "Metadata that can be used to manage the dimension.
", "CreateDynamicThingGroupRequest$tags": "Metadata which can be used to manage the dynamic thing group.
", "CreateJobRequest$tags": "Metadata which can be used to manage the job.
", "CreateMitigationActionRequest$tags": "Metadata that can be used to manage the mitigation action.
", @@ -6450,6 +6581,8 @@ "DescribeAuditMitigationActionsTaskResponse$startTime": "The date and time when the task was started.
", "DescribeAuditMitigationActionsTaskResponse$endTime": "The date and time when the task was completed or canceled.
", "DescribeAuditTaskResponse$taskStartTime": "The time the audit started.
", + "DescribeDimensionResponse$creationDate": "The date the dimension was created.
", + "DescribeDimensionResponse$lastModifiedDate": "The date the dimension was last modified.
", "DescribeMitigationActionResponse$creationDate": "The date and time when the mitigation action was added to your AWS account.
", "DescribeMitigationActionResponse$lastModifiedDate": "The date and time when the mitigation action was last changed.
", "DescribeSecurityProfileResponse$creationDate": "The time the security profile was created.
", @@ -6463,6 +6596,8 @@ "ListViolationEventsRequest$startTime": "The start time for the alerts to be listed.
", "ListViolationEventsRequest$endTime": "The end time for the alerts to be listed.
", "MitigationActionIdentifier$creationDate": "The date when this mitigation action was created.
", + "UpdateDimensionResponse$creationDate": "The date and time, in milliseconds since epoch, when the dimension was initially created.
", + "UpdateDimensionResponse$lastModifiedDate": "The date and time, in milliseconds since epoch, when the dimension was most recently updated.
", "UpdateSecurityProfileResponse$creationDate": "The time the security profile was created.
", "UpdateSecurityProfileResponse$lastModifiedDate": "The time the security profile was last modified.
", "ViolationEvent$violationEventTime": "The time the violation event occurred.
" @@ -6700,6 +6835,16 @@ "MitigationActionParams$updateDeviceCertificateParams": "Parameters to define a mitigation action that changes the state of the device certificate to inactive.
" } }, + "UpdateDimensionRequest": { + "base": null, + "refs": { + } + }, + "UpdateDimensionResponse": { + "base": null, + "refs": { + } + }, "UpdateDomainConfigurationRequest": { "base": null, "refs": { @@ -6849,7 +6994,7 @@ "base": null, "refs": { "HttpAction$url": "The endpoint URL. If substitution templates are used in the URL, you must also specify a confirmationUrl
. If this is a new destination, a new TopicRuleDestination
is created if possible.
The URL to which AWS IoT sends a confirmation message. The value of the confirmation URL must be a prefix of the endpoint URL. If you do not specify a confirmation URL AWS IoT uses the endpoint URL as the confirmation URL. If you use substitution templates in the confirmationUrl, you must create and enable topic rule destinations that match each possible value of the substituion template before traffic is allowed to your endpoint URL.
", + "HttpAction$confirmationUrl": "The URL to which AWS IoT sends a confirmation message. The value of the confirmation URL must be a prefix of the endpoint URL. If you do not specify a confirmation URL AWS IoT uses the endpoint URL as the confirmation URL. If you use substitution templates in the confirmationUrl, you must create and enable topic rule destinations that match each possible value of the substitution template before traffic is allowed to your endpoint URL.
", "HttpUrlDestinationConfiguration$confirmationUrl": "The URL AWS IoT uses to confirm ownership of or access to the topic rule destination URL.
", "HttpUrlDestinationProperties$confirmationUrl": "The URL used to confirm the HTTP topic rule destination URL.
", "HttpUrlDestinationSummary$confirmationUrl": "The URL used to confirm ownership of or access to the HTTP topic rule destination URL.
" diff --git a/models/apis/iotevents/2018-07-27/api-2.json b/models/apis/iotevents/2018-07-27/api-2.json index f488846422f..305b38fef65 100644 --- a/models/apis/iotevents/2018-07-27/api-2.json +++ b/models/apis/iotevents/2018-07-27/api-2.json @@ -274,6 +274,24 @@ {"shape":"ServiceUnavailableException"}, {"shape":"ResourceInUseException"} ] + }, + "VerifyResourcesExistForTagris":{ + "name":"VerifyResourcesExistForTagris", + "http":{ + "method":"GET", + "requestUri":"/internal/tags/resource-status" + }, + "input":{"shape":"TagrisVerifyResourcesExistInput"}, + "output":{"shape":"TagrisVerifyResourcesExistOutput"}, + "errors":[ + {"shape":"TagrisAccessDeniedException"}, + {"shape":"TagrisInternalServiceException"}, + {"shape":"TagrisInvalidArnException"}, + {"shape":"TagrisInvalidParameterException"}, + {"shape":"TagrisPartialResourcesExistResultsException"}, + {"shape":"TagrisThrottledException"} + ], + "internalonly":true } }, "shapes":{ @@ -289,7 +307,10 @@ "lambda":{"shape":"LambdaAction"}, "iotEvents":{"shape":"IotEventsAction"}, "sqs":{"shape":"SqsAction"}, - "firehose":{"shape":"FirehoseAction"} + "firehose":{"shape":"FirehoseAction"}, + "dynamoDB":{"shape":"DynamoDBAction"}, + "dynamoDBv2":{"shape":"DynamoDBv2Action"}, + "iotSiteWise":{"shape":"IotSiteWiseAction"} } }, "Actions":{ @@ -301,6 +322,43 @@ "max":2048, "min":1 }, + "AssetId":{"type":"string"}, + "AssetPropertyAlias":{"type":"string"}, + "AssetPropertyBooleanValue":{"type":"string"}, + "AssetPropertyDoubleValue":{"type":"string"}, + "AssetPropertyEntryId":{"type":"string"}, + "AssetPropertyId":{"type":"string"}, + "AssetPropertyIntegerValue":{"type":"string"}, + "AssetPropertyOffsetInNanos":{"type":"string"}, + "AssetPropertyQuality":{"type":"string"}, + "AssetPropertyStringValue":{"type":"string"}, + "AssetPropertyTimeInSeconds":{"type":"string"}, + "AssetPropertyTimestamp":{ + "type":"structure", + "required":["timeInSeconds"], + "members":{ + "timeInSeconds":{"shape":"AssetPropertyTimeInSeconds"}, + "offsetInNanos":{"shape":"AssetPropertyOffsetInNanos"} + } + }, + "AssetPropertyValue":{ + "type":"structure", + "required":["value"], + "members":{ + "value":{"shape":"AssetPropertyVariant"}, + "timestamp":{"shape":"AssetPropertyTimestamp"}, + "quality":{"shape":"AssetPropertyQuality"} + } + }, + "AssetPropertyVariant":{ + "type":"structure", + "members":{ + "stringValue":{"shape":"AssetPropertyStringValue"}, + "integerValue":{"shape":"AssetPropertyIntegerValue"}, + "doubleValue":{"shape":"AssetPropertyDoubleValue"}, + "booleanValue":{"shape":"AssetPropertyBooleanValue"} + } + }, "Attribute":{ "type":"structure", "required":["jsonPath"], @@ -331,6 +389,10 @@ "type":"string", "max":512 }, + "ContentExpression":{ + "type":"string", + "min":1 + }, "CreateDetectorModelRequest":{ "type":"structure", "required":[ @@ -559,6 +621,39 @@ "evaluationMethod":{"shape":"EvaluationMethod"} } }, + "DynamoDBAction":{ + "type":"structure", + "required":[ + "hashKeyField", + "hashKeyValue", + "tableName" + ], + "members":{ + "hashKeyType":{"shape":"DynamoKeyType"}, + "hashKeyField":{"shape":"DynamoKeyField"}, + "hashKeyValue":{"shape":"DynamoKeyValue"}, + "rangeKeyType":{"shape":"DynamoKeyType"}, + "rangeKeyField":{"shape":"DynamoKeyField"}, + "rangeKeyValue":{"shape":"DynamoKeyValue"}, + "operation":{"shape":"DynamoOperation"}, + "payloadField":{"shape":"DynamoKeyField"}, + "tableName":{"shape":"DynamoTableName"}, + "payload":{"shape":"Payload"} + } + }, + "DynamoDBv2Action":{ + "type":"structure", + "required":["tableName"], + "members":{ + 
"tableName":{"shape":"DynamoTableName"}, + "payload":{"shape":"Payload"} + } + }, + "DynamoKeyField":{"type":"string"}, + "DynamoKeyType":{"type":"string"}, + "DynamoKeyValue":{"type":"string"}, + "DynamoOperation":{"type":"string"}, + "DynamoTableName":{"type":"string"}, "EvaluationMethod":{ "type":"string", "enum":[ @@ -588,7 +683,8 @@ "required":["deliveryStreamName"], "members":{ "deliveryStreamName":{"shape":"DeliveryStreamName"}, - "separator":{"shape":"FirehoseSeparator"} + "separator":{"shape":"FirehoseSeparator"}, + "payload":{"shape":"Payload"} } }, "FirehoseSeparator":{ @@ -683,14 +779,27 @@ "type":"structure", "required":["inputName"], "members":{ - "inputName":{"shape":"InputName"} + "inputName":{"shape":"InputName"}, + "payload":{"shape":"Payload"} + } + }, + "IotSiteWiseAction":{ + "type":"structure", + "required":["propertyValue"], + "members":{ + "entryId":{"shape":"AssetPropertyEntryId"}, + "assetId":{"shape":"AssetId"}, + "propertyId":{"shape":"AssetPropertyId"}, + "propertyAlias":{"shape":"AssetPropertyAlias"}, + "propertyValue":{"shape":"AssetPropertyValue"} } }, "IotTopicPublishAction":{ "type":"structure", "required":["mqttTopic"], "members":{ - "mqttTopic":{"shape":"MQTTTopic"} + "mqttTopic":{"shape":"MQTTTopic"}, + "payload":{"shape":"Payload"} } }, "KeyValue":{ @@ -703,7 +812,8 @@ "type":"structure", "required":["functionArn"], "members":{ - "functionArn":{"shape":"AmazonResourceName"} + "functionArn":{"shape":"AmazonResourceName"}, + "payload":{"shape":"Payload"} } }, "LimitExceededException":{ @@ -856,6 +966,24 @@ "transitionEvents":{"shape":"TransitionEvents"} } }, + "Payload":{ + "type":"structure", + "required":[ + "contentExpression", + "type" + ], + "members":{ + "contentExpression":{"shape":"ContentExpression"}, + "type":{"shape":"PayloadType"} + } + }, + "PayloadType":{ + "type":"string", + "enum":[ + "STRING", + "JSON" + ] + }, "PutLoggingOptionsRequest":{ "type":"structure", "required":["loggingOptions"], @@ -901,7 +1029,8 @@ "type":"structure", "required":["targetArn"], "members":{ - "targetArn":{"shape":"AmazonResourceName"} + "targetArn":{"shape":"AmazonResourceName"}, + "payload":{"shape":"Payload"} } }, "Seconds":{ @@ -947,7 +1076,8 @@ "required":["queueUrl"], "members":{ "queueUrl":{"shape":"QueueUrl"}, - "useBase64":{"shape":"UseBase64"} + "useBase64":{"shape":"UseBase64"}, + "payload":{"shape":"Payload"} } }, "State":{ @@ -1015,6 +1145,111 @@ "max":256, "min":0 }, + "TagrisAccessDeniedException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"} + }, + "exception":true + }, + "TagrisAccountId":{ + "type":"string", + "max":12, + "min":12 + }, + "TagrisAmazonResourceName":{ + "type":"string", + "max":1011, + "min":1 + }, + "TagrisExceptionMessage":{ + "type":"string", + "max":2048, + "min":0 + }, + "TagrisInternalId":{ + "type":"string", + "max":64, + "min":0 + }, + "TagrisInternalServiceException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"} + }, + "exception":true, + "fault":true + }, + "TagrisInvalidArnException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"}, + "sweepListItem":{"shape":"TagrisSweepListItem"} + }, + "exception":true + }, + "TagrisInvalidParameterException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"} + }, + "exception":true + }, + "TagrisPartialResourcesExistResultsException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"}, + 
"resourceExistenceInformation":{"shape":"TagrisSweepListResult"} + }, + "exception":true + }, + "TagrisStatus":{ + "type":"string", + "enum":[ + "ACTIVE", + "NOT_ACTIVE" + ] + }, + "TagrisSweepList":{ + "type":"list", + "member":{"shape":"TagrisSweepListItem"} + }, + "TagrisSweepListItem":{ + "type":"structure", + "members":{ + "TagrisAccountId":{"shape":"TagrisAccountId"}, + "TagrisAmazonResourceName":{"shape":"TagrisAmazonResourceName"}, + "TagrisInternalId":{"shape":"TagrisInternalId"}, + "TagrisVersion":{"shape":"TagrisVersion"} + } + }, + "TagrisSweepListResult":{ + "type":"map", + "key":{"shape":"TagrisAmazonResourceName"}, + "value":{"shape":"TagrisStatus"} + }, + "TagrisThrottledException":{ + "type":"structure", + "members":{ + "message":{"shape":"TagrisExceptionMessage"} + }, + "exception":true + }, + "TagrisVerifyResourcesExistInput":{ + "type":"structure", + "required":["TagrisSweepList"], + "members":{ + "TagrisSweepList":{"shape":"TagrisSweepList"} + } + }, + "TagrisVerifyResourcesExistOutput":{ + "type":"structure", + "required":["TagrisSweepListResult"], + "members":{ + "TagrisSweepListResult":{"shape":"TagrisSweepListResult"} + } + }, + "TagrisVersion":{"type":"long"}, "Tags":{ "type":"list", "member":{"shape":"Tag"} diff --git a/models/apis/iotevents/2018-07-27/docs-2.json b/models/apis/iotevents/2018-07-27/docs-2.json index 840281e1803..f0470d68a4d 100644 --- a/models/apis/iotevents/2018-07-27/docs-2.json +++ b/models/apis/iotevents/2018-07-27/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "AWS IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use AWS IoT Events API commands to create, read, update, and delete inputs and detector models, and to list their versions.
", + "service": "AWS IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use AWS IoT Events API operations to create, read, update, and delete inputs and detector models, and to list their versions.
", "operations": { "CreateDetectorModel": "Creates a detector model.
", "CreateInput": "Creates an input.
", @@ -17,7 +17,8 @@ "TagResource": "Adds to or modifies the tags of the given resource. Tags are metadata that can be used to manage a resource.
", "UntagResource": "Removes the given tags (metadata) from the resource.
", "UpdateDetectorModel": "Updates a detector model. Detectors (instances) spawned by the previous version are deleted and then re-created as new inputs arrive.
", - "UpdateInput": "Updates an input.
" + "UpdateInput": "Updates an input.
", + "VerifyResourcesExistForTagris": null }, "shapes": { "Action": { @@ -48,6 +49,90 @@ "UpdateDetectorModelRequest$roleArn": "The ARN of the role that grants permission to AWS IoT Events to perform its operations.
" } }, + "AssetId": { + "base": null, + "refs": { + "IotSiteWiseAction$assetId": "The ID of the asset that has the specified property. You can specify an expression.
" + } + }, + "AssetPropertyAlias": { + "base": null, + "refs": { + "IotSiteWiseAction$propertyAlias": "The alias of the asset property. You can also specify an expression.
" + } + }, + "AssetPropertyBooleanValue": { + "base": null, + "refs": { + "AssetPropertyVariant$booleanValue": "The asset property value is a Boolean value that must be TRUE
or FALSE
. You can also specify an expression. If you use an expression, the evaluated result should be a Boolean value.
The asset property value is a double. You can also specify an expression. If you use an expression, the evaluated result should be a double.
" + } + }, + "AssetPropertyEntryId": { + "base": null, + "refs": { + "IotSiteWiseAction$entryId": "A unique identifier for this entry. You can use the entry ID to track which data entry causes an error in case of failure. The default is a new unique identifier. You can also specify an expression.
" + } + }, + "AssetPropertyId": { + "base": null, + "refs": { + "IotSiteWiseAction$propertyId": "The ID of the asset property. You can specify an expression.
" + } + }, + "AssetPropertyIntegerValue": { + "base": null, + "refs": { + "AssetPropertyVariant$integerValue": "The asset property value is an integer. You can also specify an expression. If you use an expression, the evaluated result should be an integer.
" + } + }, + "AssetPropertyOffsetInNanos": { + "base": null, + "refs": { + "AssetPropertyTimestamp$offsetInNanos": "The nanosecond offset converted from timeInSeconds
. The valid range is between 0-999999999. You can also specify an expression.
The quality of the asset property value. The value must be GOOD
, BAD
, or UNCERTAIN
. You can also specify an expression.
The asset property value is a string. You can also specify an expression. If you use an expression, the evaluated result should be a string.
" + } + }, + "AssetPropertyTimeInSeconds": { + "base": null, + "refs": { + "AssetPropertyTimestamp$timeInSeconds": "The timestamp, in seconds, in the Unix epoch format. The valid range is between 1-31556889864403199. You can also specify an expression.
" + } + }, + "AssetPropertyTimestamp": { + "base": "A structure that contains timestamp information. For more information, see TimeInNanos in the AWS IoT SiteWise API Reference.
For parameters that are string data type, you can specify the following options:
Use a string. For example, the timeInSeconds
value can be '1586400675'
.
Use an expression. For example, the timeInSeconds
value can be '${$input.TemperatureInput.sensorData.timestamp/1000}'
.
For more information, see Expressions in the AWS IoT Events Developer Guide.
The timestamp associated with the asset property value. The default is the current event time.
" + } + }, + "AssetPropertyValue": { + "base": "A structure that contains value information. For more information, see AssetPropertyValue in the AWS IoT SiteWise API Reference.
For parameters that are string data type, you can specify the following options:
Use a string. For example, the quality
value can be 'GOOD'
.
Use an expression. For example, the quality
value can be $input.TemperatureInput.sensorData.quality
.
For more information, see Expressions in the AWS IoT Events Developer Guide.
The value to send to the asset property. This value contains timestamp, quality, and value (TQV) information.
" + } + }, + "AssetPropertyVariant": { + "base": "A structure that contains an asset property value. For more information, see Variant in the AWS IoT SiteWise API Reference.
You must specify one of the following value types, depending on the dataType
of the specified asset property. For more information, see AssetProperty in the AWS IoT SiteWise API Reference.
For parameters that are string data type, you can specify the following options:
Use a string. For example, the doubleValue
value can be '47.9'
.
Use an expression. For example, the doubleValue
value can be $input.TemperatureInput.sensorData.temperature
.
For more information, see Expressions in the AWS IoT Events Developer Guide.
The value to send to an asset property.
" + } + }, "Attribute": { "base": "The attributes from the JSON payload that are made available by the input. Inputs are derived from messages sent to the AWS IoT Events system using BatchPutMessage
. Each such message contains a JSON payload. Those attributes (and their paired values) specified here are available for use in the condition
expressions used by detectors.
An expression that specifies an attribute-value pair in a JSON structure. Use this to specify an attribute from the JSON payload that is made available by the input. Inputs are derived from messages sent to AWS IoT Events (BatchPutMessage
). Each such message contains a JSON payload. The attribute (and its paired value) specified here are available for use in the condition
expressions used by detectors.
Syntax: <field-name>.<field-name>...
The input attribute key used to identify a device or system to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression in the message payload of each input to specify the attribute-value pair that is used to identify the device associated with the input.
", - "DetectorModelConfiguration$key": "The input attribute key used to identify a device or system to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression in the message payload of each input to specify the attribute-value pair that is used to identify the device associated with the input.
" + "DetectorModelConfiguration$key": "The value used to identify a detector instance. When a device or system sends input, a new detector instance with a unique key value is created. AWS IoT Events can continue to route input to its corresponding detector instance based on this identifying information.
This parameter uses a JSON-path expression to select the attribute-value pair in the message payload that is used for identification. To route the message to the correct detector instance, the device must send a message payload that contains the same attribute-value.
" } }, "Attributes": { @@ -81,6 +166,12 @@ "TransitionEvent$condition": "Required. A Boolean expression that when TRUE causes the actions to be performed and the nextState
to be entered.
The content of the payload. You can use a string expression that includes quoted strings ('<string>'
), variables ($variable.<variable-name>
), input values ($input.<input-name>.<path-to-datum>
), string concatenations, and quoted strings that contain ${}
as the content. The recommended maximum size of a content expression is 1 KB.
Defines an action to write to the Amazon DynamoDB table that you created. The standard action payload contains all attribute-value pairs that have the information about the detector model instance and the event that triggered the action. You can also customize the payload. One column of the DynamoDB table receives all attribute-value pairs in the payload that you specify.
The tableName
and hashKeyField
values must match the table name and the partition key of the DynamoDB table.
If the DynamoDB table also has a sort key, you must specify rangeKeyField
. The rangeKeyField
value must match the sort key.
The hashKeyValue
and rangeKeyValue
use substitution templates. These templates provide data at runtime. The syntax is ${sql-expression}
.
You can use expressions for parameters that are string data type. For more information, see Expressions in the AWS IoT Events Developer Guide.
If the defined payload type is a string, DynamoDBAction
writes non-JSON data to the DynamoDB table as binary data. The DynamoDB console displays the data as Base64-encoded text. The payloadField
is <payload-field>_raw
.
Writes to the DynamoDB table that you created. The default action payload contains all attribute-value pairs that have the information about the detector model instance and the event that triggered the action. You can also customize the payload. One column of the DynamoDB table receives all attribute-value pairs in the payload that you specify. For more information, see Actions in AWS IoT Events Developer Guide.
" + } + }, + "DynamoDBv2Action": { + "base": "Defines an action to write to the Amazon DynamoDB table that you created. The default action payload contains all attribute-value pairs that have the information about the detector model instance and the event that triggered the action. You can also customize the payload. A separate column of the DynamoDB table receives one attribute-value pair in the payload that you specify.
The type
value for Payload
must be JSON
.
You can use expressions for parameters that are strings. For more information, see Expressions in the AWS IoT Events Developer Guide.
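For illustration only: a hedged Go sketch of the new dynamoDBv2 action with a customized payload, assuming generated names such as `iotevents.DynamoDBv2Action`, `iotevents.Payload`, and `iotevents.PayloadTypeJson`; the table name and content expression are placeholders.

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/iotevents"
)

// dynamoDBv2Action writes one attribute-value pair per DynamoDB column;
// note that the payload type must be JSON for the dynamoDBv2 action.
func dynamoDBv2Action() iotevents.Action {
	return iotevents.Action{
		DynamoDBv2: &iotevents.DynamoDBv2Action{
			TableName: aws.String("TemperatureEvents"),
			Payload: &iotevents.Payload{
				Type: iotevents.PayloadTypeJson,
				ContentExpression: aws.String(
					"'{\"sensorId\":\"' + $input.TemperatureInput.sensorData.sensorId + '\"}'"),
			},
		},
	}
}
```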
", + "refs": { + "Action$dynamoDBv2": "Writes to the DynamoDB table that you created. The default action payload contains all attribute-value pairs that have the information about the detector model instance and the event that triggered the action. You can also customize the payload. A separate column of the DynamoDB table receives one attribute-value pair in the payload that you specify. For more information, see Actions in AWS IoT Events Developer Guide.
" + } + }, + "DynamoKeyField": { + "base": null, + "refs": { + "DynamoDBAction$hashKeyField": "The name of the hash key (also called the partition key).
", + "DynamoDBAction$rangeKeyField": "The name of the range key (also called the sort key).
", + "DynamoDBAction$payloadField": "The name of the DynamoDB column that receives the action payload.
If you don't specify this parameter, the name of the DynamoDB column is payload
.
The data type for the hash key (also called the partition key). You can specify the following values:
STRING
- The hash key is a string.
NUMBER
- The hash key is a number.
If you don't specify hashKeyType
, the default value is STRING
.
The data type for the range key (also called the sort key). You can specify the following values:
STRING
- The range key is a string.
NUMBER
- The range key is a number.
If you don't specify rangeKeyField
, the default value is STRING
.
The value of the hash key (also called the partition key).
", + "DynamoDBAction$rangeKeyValue": "The value of the range key (also called the sort key).
" + } + }, + "DynamoOperation": { + "base": null, + "refs": { + "DynamoDBAction$operation": "The type of operation to perform. You can specify the following values:
INSERT
- Insert data as a new item into the DynamoDB table. This item uses the specified hash key as a partition key. If you specified a range key, the item uses the range key as a sort key.
UPDATE
- Update an existing item of the DynamoDB table with new data. This item's partition key must match the specified hash key. If you specified a range key, the range key must match the item's sort key.
DELETE
- Delete an existing item of the DynamoDB table. This item's partition key must match the specified hash key. If you specified a range key, the range key must match the item's sort key.
If you don't specify this parameter, AWS IoT Events triggers the INSERT
operation.
The name of the DynamoDB table.
", + "DynamoDBv2Action$tableName": "The name of the DynamoDB table.
" + } + }, "EvaluationMethod": { "base": null, "refs": { @@ -285,8 +423,8 @@ "Events": { "base": null, "refs": { - "OnEnterLifecycle$events": "Specifies the actions that are performed when the state is entered and the condition
is TRUE.
Specifies the actions
that are performed when the state is exited and the condition
is TRUE.
Specifies the actions that are performed when the state is entered and the condition
is TRUE
.
Specifies the actions
that are performed when the state is exited and the condition
is TRUE
.
Specifies the actions performed when the condition
evaluates to TRUE.
Sends an AWS IoT Events input, passing in information about the detector model instance and the event that triggered the action.
", "refs": { - "Action$iotEvents": "Sends an AWS IoT Events input, passing in information about the detector model instance and the event that triggered the action.
" + "Action$iotEvents": "Sends AWS IoT Events input, which passes information about the detector model instance and the event that triggered the action.
" + } + }, + "IotSiteWiseAction": { + "base": "Sends information about the detector model instance and the event that triggered the action to a specified asset property in AWS IoT SiteWise.
You must specify either propertyAlias
or both assetId
and propertyId
to identify the target asset property in AWS IoT SiteWise.
For parameters that are string data type, you can specify the following options:
Use a string. For example, the propertyAlias
value can be '/company/windfarm/3/turbine/7/temperature'
.
Use an expression. For example, the propertyAlias
value can be 'company/windfarm/${$input.TemperatureInput.sensorData.windfarmID}/turbine/${$input.TemperatureInput.sensorData.turbineID}/temperature'
.
For more information, see Expressions in the AWS IoT Events Developer Guide.
Sends information about the detector model instance and the event that triggered the action to an AWS IoT SiteWise asset property.
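For illustration only: a hedged Go sketch of the new iotSiteWise action, assuming generated names such as `iotevents.IotSiteWiseAction` and `iotevents.AssetPropertyValue`; the property alias and expressions are placeholders in the style of the examples above.

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/iotevents"
)

// siteWiseAction sends the evaluated temperature to an asset property
// identified by alias. The value fields are strings because they accept
// either quoted literals or expressions.
func siteWiseAction() iotevents.Action {
	return iotevents.Action{
		IotSiteWise: &iotevents.IotSiteWiseAction{
			PropertyAlias: aws.String("'/company/windfarm/3/turbine/7/temperature'"),
			PropertyValue: &iotevents.AssetPropertyValue{
				Value: &iotevents.AssetPropertyVariant{
					DoubleValue: aws.String("$input.TemperatureInput.sensorData.temperature"),
				},
				Quality: aws.String("'GOOD'"),
			},
		},
	}
}
```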
" } }, "IotTopicPublishAction": { @@ -501,9 +645,9 @@ } }, "OnExitLifecycle": { - "base": "When exiting this state, perform these actions
if the specified condition
is TRUE.
When exiting this state, perform these actions
if the specified condition
is TRUE
.
When exiting this state, perform these actions
if the specified condition
is TRUE.
When exiting this state, perform these actions
if the specified condition
is TRUE
.
When an input is received and the condition
is TRUE, perform the specified actions
.
Information needed to configure the payload.
By default, AWS IoT Events generates a standard payload in JSON for any action. This action payload contains all attribute-value pairs that have the information about the detector model instance and the event triggered the action. To configure the action payload, you can use contentExpression
.
You can configure the action payload when you send a message to an Amazon Kinesis Data Firehose delivery stream.
", + "IotEventsAction$payload": "You can configure the action payload when you send a message to an AWS IoT Events input.
", + "IotTopicPublishAction$payload": "You can configure the action payload when you publish a message to an AWS IoT Core topic.
", + "LambdaAction$payload": "You can configure the action payload when you send a message to a Lambda function.
", + "SNSTopicPublishAction$payload": "You can configure the action payload when you send a message as an Amazon SNS push notification.
", + "SqsAction$payload": "You can configure the action payload when you send a message to an Amazon SQS queue.
" + } + }, + "PayloadType": { + "base": null, + "refs": { + "Payload$type": "The value of the payload type can be either STRING
or JSON
.
Information required to reset the timer. The timer is reset to the previously evaluated result of the duration.
", + "base": "Information required to reset the timer. The timer is reset to the previously evaluated result of the duration. The duration expression isn't reevaluated when you reset the timer.
", "refs": { "Action$resetTimer": "Information needed to reset the timer.
" } @@ -553,7 +716,7 @@ "Seconds": { "base": null, "refs": { - "SetTimerAction$seconds": "The number of seconds until the timer expires. The minimum value is 60 seconds to ensure accuracy.
" + "SetTimerAction$seconds": "The number of seconds until the timer expires. The minimum value is 60 seconds to ensure accuracy. The maximum value is 31622400 seconds.
" } }, "ServiceUnavailableException": { @@ -634,6 +797,108 @@ "Tag$value": "The tag's value.
" } }, + "TagrisAccessDeniedException": { + "base": null, + "refs": { + } + }, + "TagrisAccountId": { + "base": null, + "refs": { + "TagrisSweepListItem$TagrisAccountId": null + } + }, + "TagrisAmazonResourceName": { + "base": null, + "refs": { + "TagrisSweepListItem$TagrisAmazonResourceName": null, + "TagrisSweepListResult$key": null + } + }, + "TagrisExceptionMessage": { + "base": null, + "refs": { + "TagrisAccessDeniedException$message": null, + "TagrisInternalServiceException$message": null, + "TagrisInvalidArnException$message": null, + "TagrisInvalidParameterException$message": null, + "TagrisPartialResourcesExistResultsException$message": null, + "TagrisThrottledException$message": null + } + }, + "TagrisInternalId": { + "base": null, + "refs": { + "TagrisSweepListItem$TagrisInternalId": null + } + }, + "TagrisInternalServiceException": { + "base": null, + "refs": { + } + }, + "TagrisInvalidArnException": { + "base": null, + "refs": { + } + }, + "TagrisInvalidParameterException": { + "base": null, + "refs": { + } + }, + "TagrisPartialResourcesExistResultsException": { + "base": null, + "refs": { + } + }, + "TagrisStatus": { + "base": null, + "refs": { + "TagrisSweepListResult$value": null + } + }, + "TagrisSweepList": { + "base": null, + "refs": { + "TagrisVerifyResourcesExistInput$TagrisSweepList": null + } + }, + "TagrisSweepListItem": { + "base": null, + "refs": { + "TagrisInvalidArnException$sweepListItem": null, + "TagrisSweepList$member": null + } + }, + "TagrisSweepListResult": { + "base": null, + "refs": { + "TagrisPartialResourcesExistResultsException$resourceExistenceInformation": null, + "TagrisVerifyResourcesExistOutput$TagrisSweepListResult": null + } + }, + "TagrisThrottledException": { + "base": null, + "refs": { + } + }, + "TagrisVerifyResourcesExistInput": { + "base": null, + "refs": { + } + }, + "TagrisVerifyResourcesExistOutput": { + "base": null, + "refs": { + } + }, + "TagrisVersion": { + "base": null, + "refs": { + "TagrisSweepListItem$TagrisVersion": null + } + }, "Tags": { "base": null, "refs": { @@ -720,7 +985,7 @@ "UseBase64": { "base": null, "refs": { - "SqsAction$useBase64": "Set this to TRUE if you want the data to be base-64 encoded before it is written to the queue.
" + "SqsAction$useBase64": "Set this to TRUE if you want the data to be base-64 encoded before it is written to the queue. Otherwise, set this to FALSE.
" } }, "VariableName": { diff --git a/models/apis/kendra/2019-02-03/api-2.json b/models/apis/kendra/2019-02-03/api-2.json index ea741a9bd2c..e98313edcf0 100644 --- a/models/apis/kendra/2019-02-03/api-2.json +++ b/models/apis/kendra/2019-02-03/api-2.json @@ -100,6 +100,7 @@ {"shape":"ServiceQuotaExceededException"}, {"shape":"ThrottlingException"}, {"shape":"AccessDeniedException"}, + {"shape":"ConflictException"}, {"shape":"InternalServerException"} ] }, @@ -501,6 +502,11 @@ "type":"list", "member":{"shape":"ClickFeedback"} }, + "ClientTokenName":{ + "type":"string", + "max":100, + "min":1 + }, "ColumnConfiguration":{ "type":"structure", "required":[ @@ -614,7 +620,11 @@ "Name":{"shape":"IndexName"}, "RoleArn":{"shape":"RoleArn"}, "ServerSideEncryptionConfiguration":{"shape":"ServerSideEncryptionConfiguration"}, - "Description":{"shape":"Description"} + "Description":{"shape":"Description"}, + "ClientToken":{ + "shape":"ClientTokenName", + "idempotencyToken":true + } } }, "CreateIndexResponse":{ @@ -640,7 +650,7 @@ "type":"string", "max":100, "min":1, - "pattern":"^[a-zA-Z][a-zA-Z0-9_]*$" + "pattern":"^[a-zA-Z][a-zA-Z0-9_.]*$" }, "DataSourceId":{ "type":"string", @@ -657,8 +667,7 @@ "DataSourceInclusionsExclusionsStringsMember":{ "type":"string", "max":50, - "min":1, - "pattern":"^\\P{C}*$" + "min":1 }, "DataSourceName":{ "type":"string", @@ -947,8 +956,7 @@ "DocumentAttributeStringValue":{ "type":"string", "max":2048, - "min":1, - "pattern":"^\\P{C}*$" + "min":1 }, "DocumentAttributeValue":{ "type":"structure", @@ -1505,8 +1513,7 @@ "S3ObjectKey":{ "type":"string", "max":1024, - "min":1, - "pattern":".*" + "min":1 }, "S3Path":{ "type":"structure", @@ -1565,6 +1572,9 @@ "Urls":{"shape":"SharePointUrlList"}, "SecretArn":{"shape":"SecretArn"}, "CrawlAttachments":{"shape":"Boolean"}, + "UseChangeLog":{"shape":"Boolean"}, + "InclusionPatterns":{"shape":"DataSourceInclusionsExclusionsStrings"}, + "ExclusionPatterns":{"shape":"DataSourceInclusionsExclusionsStrings"}, "VpcConfiguration":{"shape":"DataSourceVpcConfiguration"}, "FieldMappings":{"shape":"DataSourceToIndexFieldMappingList"}, "DocumentTitleFieldName":{"shape":"DataSourceFieldName"} diff --git a/models/apis/kendra/2019-02-03/docs-2.json b/models/apis/kendra/2019-02-03/docs-2.json index 6c4222dac7e..4f22291fd00 100644 --- a/models/apis/kendra/2019-02-03/docs-2.json +++ b/models/apis/kendra/2019-02-03/docs-2.json @@ -66,7 +66,7 @@ } }, "AttributeFilter": { - "base": "Provides filtering the query results based on document attributes.
", + "base": "Provides filtering the query results based on document attributes.
When you use the AndAllFilters
or OrAllFilters
filters, you can use a total of 3 layers. For example, you can use:
<AndAllFilters>
<OrAllFilters>
<EqualTo>
Performs a logical NOT
operation on all supplied filters.
The contents of the document as a base-64 encoded string.
" + "Document$Blob": "The contents of the document.
Documents passed to the Blob
parameter must be base64 encoded. Your code might not need to encode the document file bytes if you're using an AWS SDK to call Amazon Kendra operations. If you are calling the Amazon Kendra endpoint directly using REST, you must base64 encode the contents before sending.
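For illustration only: a hedged Go sketch of building a Document that passes its contents inline through Blob, assuming generated field names (`Id`, `Title`, `Blob`); per the note above, the SDK takes care of base64 encoding the blob on the wire, so the raw file bytes can be supplied directly.

```go
package main

import (
	"io/ioutil"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/kendra"
)

// inlineDocument reads a local file and returns a Document whose contents
// are passed inline; no manual base64 encoding is done here.
func inlineDocument(path, id string) (kendra.Document, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return kendra.Document{}, err
	}
	return kendra.Document{
		Id:    aws.String(id),
		Title: aws.String("Example document"),
		Blob:  data,
	}, nil
}
```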
Indicates that the field can be used to create search facets, a count of results for each value in the field. The default is false
.
Determines whether the field is used in the search. If the Searchable
field is true
, you can use relevance tuning to manually tune how Amazon Kendra weights the field in the search. The default is true
for string fields and false
for number and date fields.
Determines whether the field is returned in the query response. The default is true
.
TRUE
to include attachments to documents stored in your Microsoft SharePoint site in the index; otherwise, FALSE
.
TRUE
to include attachments to documents stored in your Microsoft SharePoint site in the index; otherwise, FALSE
.
Set to TRUE
to use the Microsoft SharePoint change log to determine the documents that need to be updated in the index. Depending on the size of the SharePoint change log, it may take longer for Amazon Kendra to use the change log than it takes it to determine the changed documents using the Amazon Kendra document crawler.
Tells Amazon Kendra that a particular search result link was chosen by the user.
" } }, + "ClientTokenName": { + "base": null, + "refs": { + "CreateIndexRequest$ClientToken": "A token that you provide to identify the request to create an index. Multiple calls to the CreateIndex
operation with the same client token will create only one index.
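For illustration only: a hedged Go sketch of the idempotent CreateIndex call described above, assuming this release's request/Send pattern; the index name, role ARN, and token are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/kendra"
)

// createDocsIndex is safe to retry: repeating the call with the same
// ClientToken creates at most one index.
func createDocsIndex(ctx context.Context, cfg aws.Config) error {
	client := kendra.New(cfg)
	req := client.CreateIndexRequest(&kendra.CreateIndexInput{
		Name:        aws.String("docs-index"),
		RoleArn:     aws.String("arn:aws:iam::123456789012:role/KendraIndexRole"),
		ClientToken: aws.String("create-docs-index-2020-04-21"),
	})
	_, err := req.Send(ctx)
	return err
}
```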
Provides information about how Amazon Kendra should use the columns of a database in an index.
", "refs": { @@ -259,7 +266,9 @@ "base": null, "refs": { "S3DataSourceConfiguration$InclusionPrefixes": "A list of S3 prefixes for the documents that should be included in the index.
", - "S3DataSourceConfiguration$ExclusionPatterns": "A list of glob patterns for documents that should not be indexed. If a document that matches an inclusion prefix also matches an exclusion pattern, the document is not indexed.
For more information about glob patterns, see glob (programming) in Wikipedia.
" + "S3DataSourceConfiguration$ExclusionPatterns": "A list of glob patterns for documents that should not be indexed. If a document that matches an inclusion prefix also matches an exclusion pattern, the document is not indexed.
For more information about glob patterns, see glob (programming) in Wikipedia.
", + "SharePointConfiguration$InclusionPatterns": "A list of regular expression patterns. Documents that match the patterns are included in the index. Documents that don't match the patterns are excluded from the index. If a document matches both an inclusion pattern and an exclusion pattern, the document is not included in the index.
The regex is applied to the display URL of the SharePoint document.
", + "SharePointConfiguration$ExclusionPatterns": "A list of regular expression patterns. Documents that match the patterns are excluded from the index. Documents that don't match the patterns are included in the index. If a document matches both an exclusion pattern and an inclusion pattern, the document is not included in the index.
The regex is applied to the display URL of the SharePoint document.
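For illustration only: a hedged Go sketch of a SharePoint data source configuration that uses the new UseChangeLog and inclusion/exclusion pattern members; field names are assumed from the shapes above, the URL, secret ARN, and patterns are placeholders, and members not shown in this hunk (such as the SharePoint version) are omitted.

```go
package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/kendra"
)

// sharePointConfig crawls via the SharePoint change log and filters
// documents by regular expressions applied to their display URLs.
func sharePointConfig() kendra.SharePointConfiguration {
	return kendra.SharePointConfiguration{
		Urls:              []string{"https://example.sharepoint.com/sites/docs"},
		SecretArn:         aws.String("arn:aws:secretsmanager:us-west-2:123456789012:secret:sharepoint-creds"),
		UseChangeLog:      aws.Bool(true),
		InclusionPatterns: []string{".*/docs/.*"},
		ExclusionPatterns: []string{".*draft.*"},
	}
}
```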
" } }, "DataSourceInclusionsExclusionsStringsMember": { diff --git a/models/apis/lambda/2015-03-31/api-2.json b/models/apis/lambda/2015-03-31/api-2.json index d480da5b427..a766eb5f217 100644 --- a/models/apis/lambda/2015-03-31/api-2.json +++ b/models/apis/lambda/2015-03-31/api-2.json @@ -2610,6 +2610,7 @@ "dotnetcore1.0", "dotnetcore2.0", "dotnetcore2.1", + "dotnetcore3.1", "nodejs4.3-edge", "go1.x", "ruby2.5", diff --git a/models/apis/lambda/2015-03-31/docs-2.json b/models/apis/lambda/2015-03-31/docs-2.json index 401ca9599c2..641d4bf24b8 100644 --- a/models/apis/lambda/2015-03-31/docs-2.json +++ b/models/apis/lambda/2015-03-31/docs-2.json @@ -3,7 +3,7 @@ "service": "Overview
This is the AWS Lambda API Reference. The AWS Lambda Developer Guide provides additional information. For the service overview, see What is AWS Lambda, and for information about how the service works, see AWS Lambda: How it Works in the AWS Lambda Developer Guide.
", "operations": { "AddLayerVersionPermission": "Adds permissions to the resource-based policy of a version of an AWS Lambda layer. Use this action to grant layer usage permission to other accounts. You can grant permission to a single account, all AWS accounts, or all accounts in an organization.
To revoke permission, call RemoveLayerVersionPermission with the statement ID that you specified when you added it.
", - "AddPermission": "Grants an AWS service or another account permission to use a function. You can apply the policy at the function level, or specify a qualifier to restrict access to a single version or alias. If you use a qualifier, the invoker must use the full Amazon Resource Name (ARN) of that version or alias to invoke the function.
To grant permission to another account, specify the account ID as the Principal
. For AWS services, the principal is a domain-style identifier defined by the service, like s3.amazonaws.com
or sns.amazonaws.com
. For AWS services, you can also specify the ARN or owning account of the associated resource as the SourceArn
or SourceAccount
. If you grant permission to a service principal without specifying the source, other accounts could potentially configure resources in their account to invoke your Lambda function.
This action adds a statement to a resource-based permissions policy for the function. For more information about function policies, see Lambda Function Policies.
", + "AddPermission": "Grants an AWS service or another account permission to use a function. You can apply the policy at the function level, or specify a qualifier to restrict access to a single version or alias. If you use a qualifier, the invoker must use the full Amazon Resource Name (ARN) of that version or alias to invoke the function.
To grant permission to another account, specify the account ID as the Principal
. For AWS services, the principal is a domain-style identifier defined by the service, like s3.amazonaws.com
or sns.amazonaws.com
. For AWS services, you can also specify the ARN of the associated resource as the SourceArn
. If you grant permission to a service principal without specifying the source, other accounts could potentially configure resources in their account to invoke your Lambda function.
This action adds a statement to a resource-based permissions policy for the function. For more information about function policies, see Lambda Function Policies.
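A minimal AddPermission sketch using the preview SDK's request/Send pattern, granting a service principal invoke access scoped with SourceArn as described above. The function name, statement ID, and bucket ARN are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := lambda.New(cfg)

	// Let Amazon S3 (a service principal) invoke the function, restricted to a
	// specific bucket via SourceArn so other accounts can't reuse the grant.
	req := client.AddPermissionRequest(&lambda.AddPermissionInput{
		FunctionName: aws.String("my-function"),
		StatementId:  aws.String("s3-invoke"),
		Action:       aws.String("lambda:InvokeFunction"),
		Principal:    aws.String("s3.amazonaws.com"),
		SourceArn:    aws.String("arn:aws:s3:::my-bucket"),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```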
", "CreateAlias": "Creates an alias for a Lambda function version. Use aliases to provide clients with a function identifier that you can update to invoke a different version.
You can also map an alias to split invocation requests between two versions. Use the RoutingConfig
parameter to specify a second version and the percentage of invocation requests that it receives.
Creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
BisectBatchOnFunctionError
- If the function returns an error, split the batch in two and retry.
DestinationConfig
- Send discarded records to an Amazon SQS queue or Amazon SNS topic.
MaximumRecordAgeInSeconds
- Discard records older than the specified age.
MaximumRetryAttempts
- Discard records after the specified number of retries.
ParallelizationFactor
- Process multiple batches from each shard concurrently.
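A sketch of CreateEventSourceMapping for a Kinesis stream that sets the stream-only error handling options listed above, assuming the preview SDK's request/Send pattern. The ARNs are placeholders, and the enum constant for the starting position is assumed from the generated code.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := lambda.New(cfg)

	req := client.CreateEventSourceMappingRequest(&lambda.CreateEventSourceMappingInput{
		FunctionName:                   aws.String("my-function"),
		EventSourceArn:                 aws.String("arn:aws:kinesis:us-west-2:123456789012:stream/my-stream"),
		StartingPosition:               lambda.EventSourcePositionTrimHorizon,
		BatchSize:                      aws.Int64(100),
		MaximumBatchingWindowInSeconds: aws.Int64(5),
		// Stream-only error handling options described above:
		BisectBatchOnFunctionError: aws.Bool(true), // split and retry failed batches
		MaximumRecordAgeInSeconds:  aws.Int64(3600), // discard records older than 1 hour
		MaximumRetryAttempts:       aws.Int64(2),
		ParallelizationFactor:      aws.Int64(2), // process two batches per shard concurrently
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```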
Creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code. The execution role grants the function permission to use AWS services, such as Amazon CloudWatch Logs for log streaming and AWS X-Ray for request tracing.
When you create a function, Lambda provisions an instance of the function and its supporting resources. If your function connects to a VPC, this process can take a minute or so. During this time, you can't invoke or modify the function. The State
, StateReason
, and StateReasonCode
fields in the response from GetFunctionConfiguration indicate when the function is ready to invoke. For more information, see Function States.
A function has an unpublished version, and can have published versions and aliases. The unpublished version changes when you update your function's code and configuration. A published version is a snapshot of your function code and configuration that can't be changed. An alias is a named resource that maps to a version, and can be changed to map to a different version. Use the Publish
parameter to create version 1
of your function from its initial configuration.
The other parameters let you configure version-specific and function-level settings. You can modify version-specific settings later with UpdateFunctionConfiguration. Function-level settings apply to both the unpublished and published versions of the function, and include tags (TagResource) and per-function concurrency limits (PutFunctionConcurrency).
If another account or an AWS service invokes your function, use AddPermission to grant permission by creating a resource-based IAM policy. You can grant permissions at the function level, on a version, or on an alias.
To invoke your function directly, use Invoke. To invoke your function in response to events in other AWS services, create an event source mapping (CreateEventSourceMapping), or configure a function trigger in the other service. For more information, see Invoking Functions.
", @@ -26,7 +26,7 @@ "GetLayerVersionPolicy": "Returns the permission policy for a version of an AWS Lambda layer. For more information, see AddLayerVersionPermission.
", "GetPolicy": "Returns the resource-based IAM policy for a function, version, or alias.
", "GetProvisionedConcurrencyConfig": "Retrieves the provisioned concurrency configuration for a function's alias or version.
", - "Invoke": "Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType
to Event
.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
The status code in the API response doesn't reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function's code and configuration. For example, Lambda returns TooManyRequestsException
if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded
) or function level (ReservedFunctionConcurrentInvocationLimitExceeded
).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
This operation requires permission for the lambda:InvokeFunction
action.
Invokes a Lambda function. You can invoke a function synchronously (and wait for the response), or asynchronously. To invoke a function asynchronously, set InvocationType
to Event
.
For synchronous invocation, details about the function response, including errors, are included in the response body and headers. For either invocation type, you can find more information in the execution log and trace.
When an error occurs, your function may be invoked multiple times. Retry behavior varies by error type, client, event source, and invocation type. For example, if you invoke a function asynchronously and it returns an error, Lambda executes the function up to two more times. For more information, see Retry Behavior.
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
The status code in the API response doesn't reflect function errors. Error codes are reserved for errors that prevent your function from executing, such as permissions errors, limit errors, or issues with your function's code and configuration. For example, Lambda returns TooManyRequestsException
if executing the function would cause you to exceed a concurrency limit at either the account level (ConcurrentInvocationLimitExceeded
) or function level (ReservedFunctionConcurrentInvocationLimitExceeded
).
For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings.
This operation requires permission for the lambda:InvokeFunction action.
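A minimal synchronous Invoke sketch under the same preview request/Send assumption; the function name and payload are placeholders. Setting InvocationType to the Event value instead would queue the payload for asynchronous invocation as described above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := lambda.New(cfg)

	// Synchronous invocation (the default InvocationType).
	req := client.InvokeRequest(&lambda.InvokeInput{
		FunctionName: aws.String("my-function"),
		Payload:      []byte(`{"key":"value"}`),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		// Errors here are invocation errors (permissions, throttling, etc.),
		// not errors returned by the function's own code.
		log.Fatal(err)
	}
	// Function errors, if any, are reported in FunctionError and the payload,
	// not in the HTTP status code.
	fmt.Println(string(resp.Payload))
}
```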
", "InvokeAsync": "For asynchronous function invocation, use Invoke.
Invokes a function asynchronously.
", "ListAliases": "Returns a list of aliases for a Lambda function.
", "ListEventSourceMappings": "Lists event source mappings. Specify an EventSourceArn
to only show event source mappings for a single event source.
The maximum amount of time to gather records before invoking the function, in seconds.
", - "EventSourceMappingConfiguration$MaximumBatchingWindowInSeconds": "The maximum amount of time to gather records before invoking the function, in seconds.
", - "UpdateEventSourceMappingRequest$MaximumBatchingWindowInSeconds": "The maximum amount of time to gather records before invoking the function, in seconds.
" + "CreateEventSourceMappingRequest$MaximumBatchingWindowInSeconds": "(Streams) The maximum amount of time to gather records before invoking the function, in seconds.
", + "EventSourceMappingConfiguration$MaximumBatchingWindowInSeconds": "(Streams) The maximum amount of time to gather records before invoking the function, in seconds.
", + "UpdateEventSourceMappingRequest$MaximumBatchingWindowInSeconds": "(Streams) The maximum amount of time to gather records before invoking the function, in seconds.
" } }, "MaximumEventAgeInSeconds": { @@ -1296,7 +1296,7 @@ "SourceOwner": { "base": null, "refs": { - "AddPermissionRequest$SourceAccount": "For AWS services, the ID of the account that owns the resource. Use this instead of SourceArn
to grant permission to resources that are owned by another account (for example, all of an account's Amazon S3 buckets). Or use it together with SourceArn
to ensure that the resource is owned by the specified account. For example, an Amazon S3 bucket could be deleted by its owner and recreated by another account.
For Amazon S3, the ID of the account that owns the resource. Use this together with SourceArn
to ensure that the resource is owned by the specified account. It is possible for an Amazon S3 bucket to be deleted by its owner and recreated by another account.
Returns the listing of votes for a specified proposal, including the value of each vote and the unique identifier of the member that cast the vote.
", "ListProposals": "Returns a listing of proposals for the network.
", "RejectInvitation": "Rejects an invitation to join a network. This action can be called by a principal in an AWS account that has received an invitation to create a member and join a network.
", + "UpdateMember": "Updates a member configuration with new parameters.
", + "UpdateNode": "Updates a node configuration with new parameters.
", "VoteOnProposal": "Casts a vote for a specified ProposalId
on behalf of a member. The member to vote as, specified by VoterMemberId
, must be in the same AWS account as the principal that calls the action.
The edition of Amazon Managed Blockchain that Hyperledger Fabric uses. For more information, see Amazon Managed Blockchain Pricing.
", - "NetworkFabricConfiguration$Edition": "The edition of Amazon Managed Blockchain that the network uses. For more information, see Amazon Managed Blockchain Pricing.
" + "NetworkFabricAttributes$Edition": "The edition of Amazon Managed Blockchain that Hyperledger Fabric uses. For more information, see Amazon Managed Blockchain Pricing.
", + "NetworkFabricConfiguration$Edition": "The edition of Amazon Managed Blockchain that the network uses. For more information, see Amazon Managed Blockchain Pricing.
" + } + }, + "Enabled": { + "base": null, + "refs": { + "LogConfiguration$Enabled": "Indicates whether logging is enabled.
" } }, "Framework": { @@ -308,6 +316,20 @@ "refs": { } }, + "LogConfiguration": { + "base": "A configuration for logging events.
", + "refs": { + "LogConfigurations$Cloudwatch": "Parameters for publishing logs to Amazon CloudWatch Logs.
" + } + }, + "LogConfigurations": { + "base": "A collection of log configurations.
", + "refs": { + "MemberFabricLogPublishingConfiguration$CaLogs": "Configuration properties for logging events associated with a member's Certificate Authority (CA). CA logs help you determine when a member in your account joins the network, or when new peers register with a member CA.
", + "NodeFabricLogPublishingConfiguration$ChaincodeLogs": "Configuration properties for logging events associated with chaincode execution on a peer node. Chaincode logs contain the results of instantiating, invoking, and querying the chaincode. A peer can run multiple instances of chaincode. When enabled, a log stream is created for all chaincodes, with an individual log stream for each chaincode.
", + "NodeFabricLogPublishingConfiguration$PeerLogs": "Configuration properties for a peer node log. Peer node logs contain messages generated when your client submits transaction proposals to peer nodes, requests to join channels, enrolls an admin peer, and lists the chaincode instances on a peer node.
" + } + }, "Member": { "base": "Member configuration properties.
", "refs": { @@ -333,6 +355,12 @@ "MemberFrameworkConfiguration$Fabric": "Attributes of Hyperledger Fabric for a member on a Managed Blockchain network that uses Hyperledger Fabric.
" } }, + "MemberFabricLogPublishingConfiguration": { + "base": "Configuration properties for logging events associated with a member of a Managed Blockchain network using the Hyperledger Fabric framework.
", + "refs": { + "MemberLogPublishingConfiguration$Fabric": "Configuration properties for logging events associated with a member of a Managed Blockchain network using the Hyperledger Fabric framework.
" + } + }, "MemberFrameworkAttributes": { "base": "Attributes relevant to a member for the blockchain framework that the Managed Blockchain network uses.
", "refs": { @@ -351,6 +379,14 @@ "ListMembersInput$MaxResults": "The maximum number of members to return in the request.
" } }, + "MemberLogPublishingConfiguration": { + "base": "Configuration properties for logging events associated with a member of a Managed Blockchain network.
", + "refs": { + "Member$LogPublishingConfiguration": "Configuration properties for logging events associated with a member.
", + "MemberConfiguration$LogPublishingConfiguration": "", + "UpdateMemberInput$LogPublishingConfiguration": "Configuration properties for publishing to Amazon CloudWatch Logs.
" + } + }, "MemberStatus": { "base": null, "refs": { @@ -465,6 +501,12 @@ "NodeFrameworkAttributes$Fabric": "Attributes of Hyperledger Fabric for a peer node on a Managed Blockchain network that uses Hyperledger Fabric.
" } }, + "NodeFabricLogPublishingConfiguration": { + "base": "Configuration properties for logging events associated with a peer node owned by a member in a Managed Blockchain network.
", + "refs": { + "NodeLogPublishingConfiguration$Fabric": "Configuration properties for logging events associated with a node that is owned by a member of a Managed Blockchain network using the Hyperledger Fabric framework.
" + } + }, "NodeFrameworkAttributes": { "base": "Attributes relevant to a peer node on a Managed Blockchain network for the blockchain framework that the network uses.
", "refs": { @@ -477,6 +519,14 @@ "ListNodesInput$MaxResults": "The maximum number of nodes to list.
" } }, + "NodeLogPublishingConfiguration": { + "base": "Configuration properties for logging events associated with a peer node owned by a member in a Managed Blockchain network.
", + "refs": { + "Node$LogPublishingConfiguration": "", + "NodeConfiguration$LogPublishingConfiguration": "", + "UpdateNodeInput$LogPublishingConfiguration": "Configuration properties for publishing to Amazon CloudWatch Logs.
" + } + }, "NodeStatus": { "base": null, "refs": { @@ -556,7 +606,7 @@ "ProposalStatus": { "base": null, "refs": { - "Proposal$Status": "The status of the proposal. Values are as follows:
IN_PROGRESS
- The proposal is active and open for member voting.
APPROVED
- The proposal was approved with sufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified proposal actions are carried out.
REJECTED
- The proposal was rejected with insufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified ProposalActions
are not carried out.
EXPIRED
- Members did not cast the number of votes required to determine the proposal outcome before the proposal expired. The specified ProposalActions
are not carried out.
ACTION_FAILED
- One or more of the specified ProposalActions
in a proposal that was approved could not be completed because of an error.
The status of the proposal. Values are as follows:
IN_PROGRESS
- The proposal is active and open for member voting.
APPROVED
- The proposal was approved with sufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified proposal actions are carried out.
REJECTED
- The proposal was rejected with insufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified ProposalActions
are not carried out.
EXPIRED
- Members did not cast the number of votes required to determine the proposal outcome before the proposal expired. The specified ProposalActions
are not carried out.
ACTION_FAILED
- One or more of the specified ProposalActions
in a proposal that was approved could not be completed because of an error. The ACTION_FAILED
status occurs even if only one ProposalAction fails and other actions are successful.
The status of the proposal. Values are as follows:
IN_PROGRESS
- The proposal is active and open for member voting.
APPROVED
- The proposal was approved with sufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified proposal actions are carried out.
REJECTED
- The proposal was rejected with insufficient YES
votes among members according to the VotingPolicy
specified for the Network
. The specified ProposalActions
are not carried out.
EXPIRED
- Members did not cast the number of votes required to determine the proposal outcome before the proposal expired. The specified ProposalActions
are not carried out.
ACTION_FAILED
- One or more of the specified ProposalActions
in a proposal that was approved could not be completed because of an error.
The unique identifier of the member that created the proposal.
", "RejectInvitationInput$InvitationId": "The unique identifier of the invitation to reject.
", "RemoveAction$MemberId": "The unique identifier of the member to remove.
", + "UpdateMemberInput$NetworkId": "The unique ID of the Managed Blockchain network to which the member belongs.
", + "UpdateMemberInput$MemberId": "The unique ID of the member.
", + "UpdateNodeInput$NetworkId": "The unique ID of the Managed Blockchain network to which the node belongs.
", + "UpdateNodeInput$MemberId": "The unique ID of the member that owns the node.
", + "UpdateNodeInput$NodeId": "The unique ID of the node.
", "VoteOnProposalInput$NetworkId": "The unique identifier of the network.
", "VoteOnProposalInput$ProposalId": "The unique identifier of the proposal.
", "VoteOnProposalInput$VoterMemberId": "The unique identifier of the member casting the vote.
", @@ -728,6 +783,26 @@ "ProposalSummary$ExpirationDate": " The date and time that the proposal expires. This is the CreationDate
plus the ProposalDurationInHours
that is specified in the ProposalThresholdPolicy
. After this date and time, if members have not cast enough votes to determine the outcome according to the voting policy, the proposal is EXPIRED
and Actions
are not carried out.
Deletes the access policy that is associated with the specified container.
", "DeleteCorsPolicy": "Deletes the cross-origin resource sharing (CORS) configuration information that is set for the container.
To use this operation, you must have permission to perform the MediaStore:DeleteCorsPolicy
action. The container owner has this permission by default and can grant this permission to others.
Removes an object lifecycle policy from a container. It takes up to 20 minutes for the change to take effect.
", + "DeleteMetricPolicy": "Deletes the metric policy that is associated with the specified container. If there is no metric policy associated with the container, MediaStore doesn't send metrics to CloudWatch.
", "DescribeContainer": "Retrieves the properties of the requested container. This request is commonly used to retrieve the endpoint of a container. An endpoint is a value assigned by the service when a new container is created. A container's endpoint does not change after it has been assigned. The DescribeContainer
request returns a single Container
object based on ContainerName
. To return all Container
objects that are associated with a specified AWS account, use ListContainers.
Retrieves the access policy for the specified container. For information about the data that is included in an access policy, see the AWS Identity and Access Management User Guide.
", "GetCorsPolicy": "Returns the cross-origin resource sharing (CORS) configuration information that is set for the container.
To use this operation, you must have permission to perform the MediaStore:GetCorsPolicy
action. By default, the container owner has this permission and can grant it to others.
Retrieves the object lifecycle policy that is assigned to a container.
", + "GetMetricPolicy": "Returns the metric policy for the specified container.
", "ListContainers": "Lists the properties of all containers in AWS Elemental MediaStore.
You can query to receive all the containers in one response. Or you can include the MaxResults
parameter to receive a limited number of containers in each response. In this case, the response includes a token. To get the next set of containers, send the command again, this time with the NextToken
parameter (with the returned token as its value). The next set of responses appears, with a token if there are still more containers to receive.
See also DescribeContainer, which gets the properties of one container.
", "ListTagsForResource": "Returns a list of the tags assigned to the specified container.
", "PutContainerPolicy": "Creates an access policy for the specified container to restrict the users and clients that can access it. For information about the data that is included in an access policy, see the AWS Identity and Access Management User Guide.
For this release of the REST API, you can create only one policy for a container. If you enter PutContainerPolicy
twice, the second command modifies the existing policy.
Sets the cross-origin resource sharing (CORS) configuration on a container so that the container can service cross-origin requests. For example, you might want to enable a request whose origin is http://www.example.com to access your AWS Elemental MediaStore container at my.example.container.com by using the browser's XMLHttpRequest capability.
To enable CORS on a container, you attach a CORS policy to the container. In the CORS policy, you configure rules that identify origins and the HTTP methods that can be executed on your container. The policy can contain up to 398,000 characters. You can add up to 100 rules to a CORS policy. If more than one rule applies, the service uses the first applicable rule listed.
To learn more about CORS, see Cross-Origin Resource Sharing (CORS) in AWS Elemental MediaStore.
", "PutLifecyclePolicy": "Writes an object lifecycle policy to a container. If the container already has an object lifecycle policy, the service replaces the existing policy with the new policy. It takes up to 20 minutes for the change to take effect.
For information about how to construct an object lifecycle policy, see Components of an Object Lifecycle Policy.
", + "PutMetricPolicy": "The metric policy that you want to add to the container. A metric policy allows AWS Elemental MediaStore to send metrics to Amazon CloudWatch. It takes up to 20 minutes for the new policy to take effect.
", "StartAccessLogging": "Starts access logging on the specified container. When you enable access logging on a container, MediaStore delivers access logs for objects stored in that container to Amazon CloudWatch Logs.
", "StopAccessLogging": "Stops access logging on the specified container. When you stop access logging on a container, MediaStore stops sending access logs to Amazon CloudWatch Logs. These access logs are not saved and are not retrievable.
", "TagResource": "Adds tags to the specified AWS Elemental MediaStore container. Tags are key:value pairs that you can associate with AWS resources. For example, the tag key might be \"customer\" and the tag value might be \"companyA.\" You can specify one or more tags to add to each container. You can add up to 50 tags to each container. For more information about tagging, including naming and usage conventions, see Tagging Resources in MediaStore.
", @@ -68,6 +71,12 @@ "refs": { } }, + "ContainerLevelMetrics": { + "base": null, + "refs": { + "MetricPolicy$ContainerLevelMetrics": "A setting to enable or disable metrics at the container level.
" + } + }, "ContainerList": { "base": null, "refs": { @@ -89,13 +98,16 @@ "DeleteContainerPolicyInput$ContainerName": "The name of the container that holds the policy.
", "DeleteCorsPolicyInput$ContainerName": "The name of the container to remove the policy from.
", "DeleteLifecyclePolicyInput$ContainerName": "The name of the container that holds the object lifecycle policy.
", + "DeleteMetricPolicyInput$ContainerName": "The name of the container that is associated with the metric policy that you want to delete.
", "DescribeContainerInput$ContainerName": "The name of the container to query.
", "GetContainerPolicyInput$ContainerName": "The name of the container.
", "GetCorsPolicyInput$ContainerName": "The name of the container that the policy is assigned to.
", "GetLifecyclePolicyInput$ContainerName": "The name of the container that the object lifecycle policy is assigned to.
", + "GetMetricPolicyInput$ContainerName": "The name of the container that is associated with the metric policy.
", "PutContainerPolicyInput$ContainerName": "The name of the container.
", "PutCorsPolicyInput$ContainerName": "The name of the container that you want to assign the CORS policy to.
", "PutLifecyclePolicyInput$ContainerName": "The name of the container that you want to assign the object lifecycle policy to.
", + "PutMetricPolicyInput$ContainerName": "The name of the container that you want to add the metric policy to.
", "StartAccessLoggingInput$ContainerName": "The name of the container that you want to start access logging on.
", "StopAccessLoggingInput$ContainerName": "The name of the container that you want to stop access logging on.
" } @@ -186,6 +198,16 @@ "refs": { } }, + "DeleteMetricPolicyInput": { + "base": null, + "refs": { + } + }, + "DeleteMetricPolicyOutput": { + "base": null, + "refs": { + } + }, "DescribeContainerInput": { "base": null, "refs": { @@ -249,6 +271,16 @@ "refs": { } }, + "GetMetricPolicyInput": { + "base": null, + "refs": { + } + }, + "GetMetricPolicyOutput": { + "base": null, + "refs": { + } + }, "Header": { "base": null, "refs": { @@ -305,6 +337,37 @@ "AllowedMethods$member": null } }, + "MetricPolicy": { + "base": "The metric policy that is associated with the container. A metric policy allows AWS Elemental MediaStore to send metrics to Amazon CloudWatch. In the policy, you must indicate whether you want MediaStore to send container-level metrics. You can also include rules to define groups of objects that you want MediaStore to send object-level metrics for.
To view examples of how to construct a metric policy for your use case, see Example Metric Policies.
", + "refs": { + "GetMetricPolicyOutput$MetricPolicy": "The metric policy that is associated with the specific container.
", + "PutMetricPolicyInput$MetricPolicy": "The metric policy that you want to associate with the container. In the policy, you must indicate whether you want MediaStore to send container-level metrics. You can also include up to five rules to define groups of objects that you want MediaStore to send object-level metrics for. If you include rules in the policy, construct each rule with both of the following:
An object group that defines which objects to include in the group. The definition can be a path or a file name, but it can't have more than 900 characters. Valid characters are: a-z, A-Z, 0-9, _ (underscore), = (equal), : (colon), . (period), - (hyphen), ~ (tilde), / (forward slash), and * (asterisk). Wildcards (*) are acceptable.
An object group name that allows you to refer to the object group. The name can't have more than 30 characters. Valid characters are: a-z, A-Z, 0-9, and _ (underscore).
A setting that enables metrics at the object level. Each rule contains an object group and an object group name. If the policy includes the MetricPolicyRules parameter, you must include at least one rule. Each metric policy can include up to five rules by default. You can also request a quota increase to allow up to 300 rules per policy.
", + "refs": { + "MetricPolicyRules$member": null + } + }, + "MetricPolicyRules": { + "base": null, + "refs": { + "MetricPolicy$MetricPolicyRules": "A parameter that holds an array of rules that enable metrics at the object level. This parameter is optional, but if you choose to include it, you must also include at least one rule. By default, you can include up to five rules. You can also request a quota increase to allow up to 300 rules per policy.
" + } + }, + "ObjectGroup": { + "base": null, + "refs": { + "MetricPolicyRule$ObjectGroup": "A path or file name that defines which objects to include in the group. Wildcards (*) are acceptable.
" + } + }, + "ObjectGroupName": { + "base": null, + "refs": { + "MetricPolicyRule$ObjectGroupName": "A name that allows you to refer to the object group.
" + } + }, "Origin": { "base": null, "refs": { @@ -353,6 +416,16 @@ "refs": { } }, + "PutMetricPolicyInput": { + "base": null, + "refs": { + } + }, + "PutMetricPolicyOutput": { + "base": null, + "refs": { + } + }, "StartAccessLoggingInput": { "base": null, "refs": { diff --git a/models/apis/mediatailor/2018-04-23/api-2.json b/models/apis/mediatailor/2018-04-23/api-2.json index e61bce1333b..1af22e03fb0 100644 --- a/models/apis/mediatailor/2018-04-23/api-2.json +++ b/models/apis/mediatailor/2018-04-23/api-2.json @@ -125,6 +125,17 @@ } }, "shapes": { + "AvailSuppression" : { + "type" : "structure", + "members" : { + "Mode" : { + "shape" : "Mode" + }, + "Value" : { + "shape" : "__string" + } + } + }, "BadRequestException": { "error": { "httpStatusCode": 400 @@ -207,7 +218,10 @@ "members": { "AdDecisionServerUrl": { "shape": "__string" - }, + }, + "AvailSuppression" : { + "shape" : "AvailSuppression" + }, "CdnConfiguration": { "shape": "CdnConfiguration" }, @@ -314,6 +328,13 @@ ], "type": "string" }, + "Mode": { + "enum": [ + "OFF", + "BEHIND_LIVE_EDGE" + ], + "type": "string" + }, "PlaybackConfiguration": { "members": { "AdDecisionServerUrl": { @@ -374,7 +395,10 @@ "members": { "AdDecisionServerUrl": { "shape": "__string" - }, + }, + "AvailSuppression" : { + "shape" : "AvailSuppression" + }, "CdnConfiguration": { "shape": "CdnConfiguration" }, @@ -410,7 +434,10 @@ "members": { "AdDecisionServerUrl": { "shape": "__string" - }, + }, + "AvailSuppression" : { + "shape" : "AvailSuppression" + }, "CdnConfiguration": { "shape": "CdnConfiguration" }, diff --git a/models/apis/mediatailor/2018-04-23/docs-2.json b/models/apis/mediatailor/2018-04-23/docs-2.json index 70656e54d8e..e7785d6fa38 100644 --- a/models/apis/mediatailor/2018-04-23/docs-2.json +++ b/models/apis/mediatailor/2018-04-23/docs-2.json @@ -10,6 +10,13 @@ }, "service": "Use the AWS Elemental MediaTailor SDK to configure scalable ad insertion for your live and VOD content. With AWS Elemental MediaTailor, you can serve targeted ads to viewers while maintaining broadcast quality in over-the-top (OTT) video applications. For information about using the service, including detailed information about the settings covered in this guide, see the AWS Elemental MediaTailor User Guide.
Through the SDK, you manage AWS Elemental MediaTailor configurations the same as you do through the console. For example, you specify ad insertion behavior and mapping information for the origin server and the ad decision server (ADS).
", "shapes": { + "AvailSuppression" : { + "base" : null, + "refs" : { + "GetPlaybackConfigurationResponse$AvailSuppression" : "The configuration for Avail Suppression.
", + "PutPlaybackConfigurationRequest$AvailSuppression" : "The configuration for Avail Suppression.
" + } + }, "BadRequestException": { "base": "One of the parameters in the request is invalid.
", "refs": {} @@ -103,6 +110,7 @@ "__string": { "base": null, "refs": { + "AvailSuppression$Value" : "Sets the mode for avail suppression, also known as ad suppression. By default, ad suppression is off and all ad breaks are filled by MediaTailor with ads or slate.", "BadRequestException$Message": "One of the parameters in the request is invalid.
", "CdnConfiguration$AdSegmentUrlPrefix": "A non-default content delivery network (CDN) to serve ad segments. By default, AWS Elemental MediaTailor uses Amazon CloudFront with default cache settings as its CDN for ad segments. To set up an alternate CDN, create a rule in your CDN for the following origin: ads.mediatailor.<region>.amazonaws.com. Then specify the rule's name in this AdSegmentUrlPrefix. When AWS Elemental MediaTailor serves a manifest, it reports your CDN as the source for ad segments.
", "CdnConfiguration$ContentSegmentUrlPrefix": "A content delivery network (CDN) to cache content segments, so that content requests don’t always have to go to the origin server. First, create a rule in your CDN for the content segment origin server. Then specify the rule's name in this ContentSegmentUrlPrefix. When AWS Elemental MediaTailor serves a manifest, it reports your CDN as the source for content segments.
", diff --git a/models/apis/migrationhub-config/2019-06-30/api-2.json b/models/apis/migrationhub-config/2019-06-30/api-2.json index f2ea8f956b7..55719dccd6d 100644 --- a/models/apis/migrationhub-config/2019-06-30/api-2.json +++ b/models/apis/migrationhub-config/2019-06-30/api-2.json @@ -25,6 +25,7 @@ {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"DryRunOperation"}, {"shape":"InvalidInputException"} ] @@ -41,6 +42,7 @@ {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InvalidInputException"} ] }, @@ -56,6 +58,7 @@ {"shape":"InternalServerError"}, {"shape":"ServiceUnavailableException"}, {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, {"shape":"InvalidInputException"} ] } @@ -171,6 +174,7 @@ "exception":true }, "RequestedTime":{"type":"timestamp"}, + "RetryAfterSeconds":{"type":"integer"}, "ServiceUnavailableException":{ "type":"structure", "members":{ @@ -197,6 +201,15 @@ "type":"string", "enum":["ACCOUNT"] }, + "ThrottlingException":{ + "type":"structure", + "required":["Message"], + "members":{ + "Message":{"shape":"ErrorMessage"}, + "RetryAfterSeconds":{"shape":"RetryAfterSeconds"} + }, + "exception":true + }, "Token":{ "type":"string", "max":2048, diff --git a/models/apis/migrationhub-config/2019-06-30/docs-2.json b/models/apis/migrationhub-config/2019-06-30/docs-2.json index e6e91ce1d19..2a1f500ab01 100644 --- a/models/apis/migrationhub-config/2019-06-30/docs-2.json +++ b/models/apis/migrationhub-config/2019-06-30/docs-2.json @@ -1,9 +1,9 @@ { "version": "2.0", - "service": "The AWS Migration Hub home region APIs are available specifically for working with your Migration Hub home region. You can use these APIs to determine a home region, as well as to create and work with controls that describe the home region.
You can use these APIs within your home region only. If you call these APIs from outside your home region, your calls are rejected, except for the ability to register your agents and connectors.
You must call GetHomeRegion
at least once before you call any other AWS Application Discovery Service and AWS Migration Hub APIs, to obtain the account's Migration Hub home region.
The StartDataCollection
API call in AWS Application Discovery Service allows your agents and connectors to begin collecting data that flows directly into the home region, and it will prevent you from enabling data collection information to be sent outside the home region.
For specific API usage, see the sections that follow in this AWS Migration Hub Home Region API reference.
The Migration Hub Home Region APIs do not support AWS Organizations.
The AWS Migration Hub home region APIs are available specifically for working with your Migration Hub home region. You can use these APIs to determine a home region, as well as to create and work with controls that describe the home region.
You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home region, or a HomeRegionNotSetException
error is returned.
API calls for read actions (list, describe, stop, and delete) are permitted outside of your home region.
If you call a write API outside the home region, an InvalidInputException
is returned.
You can call GetHomeRegion
action to obtain the account's Migration Hub home region.
For specific API usage, see the sections that follow in this AWS Migration Hub Home Region API reference.
", "operations": { "CreateHomeRegionControl": "This API sets up the home region for the calling account only.
", - "DescribeHomeRegionControls": "This API permits filtering on the ControlId
, HomeRegion
, and RegionControlScope
fields.
This API permits filtering on the ControlId
and HomeRegion
fields.
Returns the calling account’s home region, if configured. This API is used by other AWS services to determine the regional endpoint for calling AWS Application Discovery Service and Migration Hub. You must call GetHomeRegion
at least once before you call any other AWS Application Discovery Service and AWS Migration Hub APIs, to obtain the account's Migration Hub home region.
A timestamp representing the time when the customer called CreateHomeregionControl
and set the home region for the account.
The number of seconds the caller should wait before retrying.
" + } + }, "ServiceUnavailableException": { "base": "Exception raised when a request fails due to temporary unavailability of the service.
", "refs": { @@ -139,6 +146,11 @@ "Target$Type": "The target type is always an ACCOUNT
.
The request was denied due to request throttling.
", + "refs": { + } + }, "Token": { "base": null, "refs": { diff --git a/models/apis/monitoring/2010-08-01/api-2.json b/models/apis/monitoring/2010-08-01/api-2.json index e4eb57d6493..6aa50f5d5fa 100644 --- a/models/apis/monitoring/2010-08-01/api-2.json +++ b/models/apis/monitoring/2010-08-01/api-2.json @@ -535,7 +535,7 @@ "Namespace":{"shape":"Namespace"}, "MetricName":{"shape":"MetricName"}, "Dimensions":{"shape":"Dimensions"}, - "Stat":{"shape":"Stat"}, + "Stat":{"shape":"AnomalyDetectorMetricStat"}, "Configuration":{"shape":"AnomalyDetectorConfiguration"}, "StateValue":{"shape":"AnomalyDetectorStateValue"} } @@ -551,7 +551,15 @@ "type":"list", "member":{"shape":"Range"} }, - "AnomalyDetectorMetricTimezone":{"type":"string"}, + "AnomalyDetectorMetricStat":{ + "type":"string", + "pattern":"(SampleCount|Average|Sum|Minimum|Maximum|p(\\d{1,2}|100)(\\.\\d{0,2})?|[ou]\\d+(\\.\\d*)?)(_E|_L|_H)?" + }, + "AnomalyDetectorMetricTimezone":{ + "type":"string", + "max":50, + "pattern":".*" + }, "AnomalyDetectorStateValue":{ "type":"string", "enum":[ @@ -751,7 +759,7 @@ "Namespace":{"shape":"Namespace"}, "MetricName":{"shape":"MetricName"}, "Dimensions":{"shape":"Dimensions"}, - "Stat":{"shape":"Stat"} + "Stat":{"shape":"AnomalyDetectorMetricStat"} } }, "DeleteAnomalyDetectorOutput":{ @@ -1612,7 +1620,7 @@ "Namespace":{"shape":"Namespace"}, "MetricName":{"shape":"MetricName"}, "Dimensions":{"shape":"Dimensions"}, - "Stat":{"shape":"Stat"}, + "Stat":{"shape":"AnomalyDetectorMetricStat"}, "Configuration":{"shape":"AnomalyDetectorConfiguration"} } }, @@ -1664,7 +1672,8 @@ "members":{ "RuleName":{"shape":"InsightRuleName"}, "RuleState":{"shape":"InsightRuleState"}, - "RuleDefinition":{"shape":"InsightRuleDefinition"} + "RuleDefinition":{"shape":"InsightRuleDefinition"}, + "Tags":{"shape":"TagList"} } }, "PutInsightRuleOutput":{ diff --git a/models/apis/monitoring/2010-08-01/docs-2.json b/models/apis/monitoring/2010-08-01/docs-2.json index ed7494f4058..638ddc7f013 100644 --- a/models/apis/monitoring/2010-08-01/docs-2.json +++ b/models/apis/monitoring/2010-08-01/docs-2.json @@ -22,7 +22,7 @@ "GetMetricWidgetImage": "You can use the GetMetricWidgetImage
API to retrieve a snapshot graph of one or more Amazon CloudWatch metrics as a bitmap image. You can then embed this image into your services and products, such as wiki pages, reports, and documents. You could also retrieve images regularly, such as every minute, and create your own custom live dashboard.
The graph you retrieve can include all CloudWatch metric graph features, including metric math and horizontal and vertical annotations.
There is a limit of 20 transactions per second for this API. Each GetMetricWidgetImage
action has the following limits:
As many as 100 metrics in the graph.
Up to 100 KB uncompressed payload.
Returns a list of the dashboards for your account. If you include DashboardNamePrefix
, only those dashboards with names starting with the prefix are listed. Otherwise, all dashboards in your account are listed.
ListDashboards
returns up to 1000 results on one page. If there are more than 1000 dashboards, you can call ListDashboards
again and include the value you received for NextToken
in the first call, to receive the next 1000 results.
List the specified metrics. You can use the returned metrics with GetMetricData or GetMetricStatistics to obtain statistical data.
Up to 500 results are returned for any one call. To retrieve additional results, use the returned token with subsequent calls.
After you create a metric, allow up to fifteen minutes before the metric appears. Statistics about the metric, however, are available sooner using GetMetricData or GetMetricStatistics.
", - "ListTagsForResource": "Displays the tags associated with a CloudWatch resource. Alarms support tagging.
", + "ListTagsForResource": "Displays the tags associated with a CloudWatch resource. Currently, alarms and Contributor Insights rules support tagging.
", "PutAnomalyDetector": "Creates an anomaly detection model for a CloudWatch metric. You can use the model to display a band of expected normal values when the metric is graphed.
For more information, see CloudWatch Anomaly Detection.
", "PutCompositeAlarm": "Creates or updates a composite alarm. When you create a composite alarm, you specify a rule expression for the alarm that takes into account the alarm states of other alarms that you have created. The composite alarm goes into ALARM state only if all conditions of the rule are met.
The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms.
Using composite alarms can reduce alarm noise. You can create multiple metric alarms, and also create a composite alarm and set up alerts only for the composite alarm. For example, you could create a composite alarm that goes into ALARM state only when more than one of the underlying metric alarms are in ALARM state.
Currently, the only alarm actions that can be taken by composite alarms are notifying SNS topics.
It is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete.
To get out of such a situation, you must break the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest change to make to break a cycle is to change the AlarmRule
of one of the alarms to False
.
Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA
. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed. For a composite alarm, this initial time after creation is the only time that the alarm can be in INSUFFICIENT_DATA
state.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
", "PutDashboard": "Creates a dashboard if it does not already exist, or updates an existing dashboard. If you update a dashboard, the entire contents are replaced with what you specify here.
All dashboards in your account are global, not region-specific.
A simple way to create a dashboard using PutDashboard
is to copy an existing dashboard. To copy an existing dashboard using the console, you can load the dashboard and then use the View/edit source command in the Actions menu to display the JSON block for that dashboard. Another way to copy a dashboard is to use GetDashboard
, and then use the data returned within DashboardBody
as the template for the new dashboard when you call PutDashboard
.
When you create a dashboard with PutDashboard
, a good practice is to add a text widget at the top of the dashboard with a message that the dashboard was created by script and should not be changed in the console. This message could also point console users to the location of the DashboardBody
script or the CloudFormation template used to create the dashboard.
Creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model.
Alarms based on anomaly detection models cannot have Auto Scaling actions.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA
. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
If you are an IAM user, you must have Amazon EC2 permissions for some alarm operations:
iam:CreateServiceLinkedRole
for all alarms with EC2 actions
ec2:DescribeInstanceStatus
and ec2:DescribeInstances
for all alarms on EC2 instance status metrics
ec2:StopInstances
for alarms with stop actions
ec2:TerminateInstances
for alarms with terminate actions
No specific permissions are needed for alarms with recover actions
If you have read/write permissions for Amazon CloudWatch but not for Amazon EC2, you can still create an alarm, but the stop or terminate actions are not performed. However, if you are later granted the required permissions, the alarm actions that you created earlier are performed.
If you are using an IAM role (for example, an EC2 instance profile), you cannot stop or terminate the instance using alarm actions. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.
If you are using temporary security credentials granted using AWS STS, you cannot stop or terminate an EC2 instance using alarm actions.
The first time you create an alarm in the AWS Management Console, the CLI, or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked role for you. The service-linked role is called AWSServiceRoleForCloudWatchEvents
. For more information, see AWS service-linked role.
Publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics.
You can publish either individual data points in the Value
field, or arrays of values and the number of times each value occurred during the period by using the Values
and Counts
fields in the MetricDatum
structure. Using the Values
and Counts
method enables you to publish up to 150 values per metric with one PutMetricData
request, and supports retrieving percentile statistics on this data.
Each PutMetricData
request is limited to 40 KB in size for HTTP POST requests. You can send a payload compressed by gzip. Each request is also limited to no more than 20 different metrics.
Although the Value
parameter accepts numbers of type Double
, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.
You can use up to 10 dimensions per metric to further clarify what data the metric collects. Each dimension consists of a Name and Value pair. For more information about specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.
Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricData or GetMetricStatistics from the time they are submitted. Data points with time stamps between 3 and 24 hours ago can take as much as 2 hours to become available for for GetMetricData or GetMetricStatistics.
CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true:
The SampleCount
value of the statistic set is 1 and Min
, Max
, and Sum
are all equal.
The Min
and Max
are equal, and Sum
is equal to Min
multiplied by SampleCount
.
Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to ALARM
sends an SNS message.
Metric alarms returns to their actual state quickly, often within seconds. Because the metric alarm state change happens quickly, it is typically only visible in the alarm's History tab in the Amazon CloudWatch console or through DescribeAlarmHistory.
If you use SetAlarmState
on a composite alarm, the composite alarm is not guaranteed to return to its actual state. It will return to its actual state only once any of its children alarms change state. It is also re-evaluated if you update its configuration.
If an alarm triggers EC2 Auto Scaling policies or application Auto Scaling policies, you must include information in the StateReasonData
parameter to enable the policy to take the correct action.
Assigns one or more tags (key-value pairs) to the specified CloudWatch resource. Currently, the only CloudWatch resources that can be tagged are alarms.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.
Tags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters.
You can use the TagResource
action with an alarm that already has tags. If you specify a new tag key for the alarm, this tag is appended to the list of tags associated with the alarm. If you specify a tag key that is already associated with the alarm, the new tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a resource.
", + "TagResource": "Assigns one or more tags (key-value pairs) to the specified CloudWatch resource. Currently, the only CloudWatch resources that can be tagged are alarms and Contributor Insights rules.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.
Tags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters.
You can use the TagResource
action with an alarm that already has tags. If you specify a new tag key for the alarm, this tag is appended to the list of tags associated with the alarm. If you specify a tag key that is already associated with the alarm, the new tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a CloudWatch resource.
", "UntagResource": "Removes one or more tags from the specified resource.
" }, "shapes": { @@ -131,9 +131,9 @@ "AmazonResourceName": { "base": null, "refs": { - "ListTagsForResourceInput$ResourceARN": "The ARN of the CloudWatch resource that you want to view tags for. For more information on ARN format, see Example ARNs in the Amazon Web Services General Reference.
", - "TagResourceInput$ResourceARN": "The ARN of the CloudWatch alarm that you're adding tags to. The ARN format is arn:aws:cloudwatch:Region:account-id:alarm:alarm-name
The ARN of the CloudWatch resource that you're removing tags from. For more information on ARN format, see Example ARNs in the Amazon Web Services General Reference.
" + "ListTagsForResourceInput$ResourceARN": "The ARN of the CloudWatch resource that you want to view tags for.
The ARN format of an alarm is arn:aws:cloudwatch:Region:account-id:alarm:alarm-name
The ARN format of a Contributor Insights rule is arn:aws:cloudwatch:Region:account-id:insight-rule:insight-rule-name
For more information on ARN format, see Resource Types Defined by Amazon CloudWatch in the Amazon Web Services General Reference.
", + "TagResourceInput$ResourceARN": "The ARN of the CloudWatch resource that you're adding tags to.
The ARN format of an alarm is arn:aws:cloudwatch:Region:account-id:alarm:alarm-name
The ARN format of a Contributor Insights rule is arn:aws:cloudwatch:Region:account-id:insight-rule:insight-rule-name
For more information on ARN format, see Resource Types Defined by Amazon CloudWatch in the Amazon Web Services General Reference.
", + "UntagResourceInput$ResourceARN": "The ARN of the CloudWatch resource that you're removing tags from.
The ARN format of an alarm is arn:aws:cloudwatch:Region:account-id:alarm:alarm-name
The ARN format of a Contributor Insights rule is arn:aws:cloudwatch:Region:account-id:insight-rule:insight-rule-name
For more information on ARN format, see Resource Types Defined by Amazon CloudWatch in the Amazon Web Services General Reference.
" } }, "AnomalyDetector": { @@ -155,6 +155,14 @@ "AnomalyDetectorConfiguration$ExcludedTimeRanges": "An array of time ranges to exclude from use when the anomaly detection model is trained. Use this to make sure that events that could cause unusual values for the metric, such as deployments, aren't used when CloudWatch creates the model.
" } }, + "AnomalyDetectorMetricStat": { + "base": null, + "refs": { + "AnomalyDetector$Stat": "The statistic associated with the anomaly detection model.
", + "DeleteAnomalyDetectorInput$Stat": "The statistic associated with the anomaly detection model to delete.
", + "PutAnomalyDetectorInput$Stat": "The statistic to use for the metric and the anomaly detection model.
" + } + }, "AnomalyDetectorMetricTimezone": { "base": null, "refs": { @@ -1260,10 +1268,7 @@ "Stat": { "base": null, "refs": { - "AnomalyDetector$Stat": "The statistic associated with the anomaly detection model.
", - "DeleteAnomalyDetectorInput$Stat": "The statistic associated with the anomaly detection model to delete.
", - "MetricStat$Stat": "The statistic to return. It can include any CloudWatch statistic or extended statistic.
", - "PutAnomalyDetectorInput$Stat": "The statistic to use for the metric and the anomaly detection model.
" + "MetricStat$Stat": "The statistic to return. It can include any CloudWatch statistic or extended statistic.
" } }, "StateReason": { @@ -1348,6 +1353,7 @@ "refs": { "ListTagsForResourceOutput$Tags": "The list of tag keys and values associated with the resource you specified.
", "PutCompositeAlarmInput$Tags": "A list of key-value pairs to associate with the composite alarm. You can associate as many as 50 tags with an alarm.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.
", + "PutInsightRuleInput$Tags": "A list of key-value pairs to associate with the Contributor Insights rule. You can associate as many as 50 tags with a rule.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only the resources that have certain tag values.
To be able to associate tags with a rule, you must have the cloudwatch:TagResource
permission in addition to the cloudwatch:PutInsightRule
permission.
If you are using this operation to update an existing Contributor Insights rule, any tags you specify in this parameter are ignored. To change the tags of an existing rule, use TagResource.
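A hedged sketch of creating a Contributor Insights rule with tags, assuming `svc` is the CloudWatch client from the earlier sketch; the rule body is a placeholder rather than a complete rule definition, and the caller is assumed to hold both `cloudwatch:PutInsightRule` and `cloudwatch:TagResource`:

```go
// ruleBody stands in for a real Contributor Insights rule definition (JSON).
ruleBody := `{"Schema": {"Name": "CloudWatchLogRule", "Version": 1}}`

req := svc.PutInsightRuleRequest(&cloudwatch.PutInsightRuleInput{
	RuleName:       aws.String("my-insight-rule"),
	RuleDefinition: aws.String(ruleBody),
	Tags: []cloudwatch.Tag{
		{Key: aws.String("team"), Value: aws.String("ops")},
	},
})
if _, err := req.Send(context.Background()); err != nil {
	log.Fatalf("PutInsightRule failed: %v", err)
}
```

When updating an existing rule, the `Tags` field is ignored; change tags with `TagResource` instead, as noted above.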
", "PutMetricAlarmInput$Tags": "A list of key-value pairs to associate with the alarm. You can associate as many as 50 tags with an alarm.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.
", "TagResourceInput$Tags": "The list of key-value pairs to associate with the alarm.
" } diff --git a/models/apis/opsworkscm/2016-11-01/docs-2.json b/models/apis/opsworkscm/2016-11-01/docs-2.json index cff19e613f4..c29e09160c8 100644 --- a/models/apis/opsworkscm/2016-11-01/docs-2.json +++ b/models/apis/opsworkscm/2016-11-01/docs-2.json @@ -139,20 +139,20 @@ "CustomCertificate": { "base": null, "refs": { - "CreateServerRequest$CustomCertificate": "Supported on servers running Chef Automate 2. A PEM-formatted HTTPS certificate. The value can be be a single, self-signed certificate, or a certificate chain. If you specify a custom certificate, you must also specify values for CustomDomain
and CustomPrivateKey
. The following are requirements for the CustomCertificate
value:
You can provide either a self-signed, custom certificate, or the full certificate chain.
The certificate must be a valid X509 certificate, or a certificate chain in PEM format.
The certificate must be valid at the time of upload. A certificate can't be used before its validity period begins (the certificate's NotBefore
date), or after it expires (the certificate's NotAfter
date).
The certificate’s common name or subject alternative names (SANs), if present, must match the value of CustomDomain
.
The certificate must match the value of CustomPrivateKey
.
A PEM-formatted HTTPS certificate. The value can be a single, self-signed certificate, or a certificate chain. If you specify a custom certificate, you must also specify values for CustomDomain
and CustomPrivateKey
. The following are requirements for the CustomCertificate
value:
You can provide either a self-signed, custom certificate, or the full certificate chain.
The certificate must be a valid X509 certificate, or a certificate chain in PEM format.
The certificate must be valid at the time of upload. A certificate can't be used before its validity period begins (the certificate's NotBefore
date), or after it expires (the certificate's NotAfter
date).
The certificate’s common name or subject alternative names (SANs), if present, must match the value of CustomDomain
.
The certificate must match the value of CustomPrivateKey
.
Supported on servers running Chef Automate 2. An optional public endpoint of a server, such as https://aws.my-company.com
. To access the server, create a CNAME DNS record in your preferred DNS service that points the custom domain to the endpoint that is generated when the server is created (the value of the CreateServer Endpoint attribute). You cannot access the server by using the generated Endpoint
value if the server is using a custom domain. If you specify a custom domain, you must also specify values for CustomCertificate
and CustomPrivateKey
.
An optional public endpoint of a server, such as https://aws.my-company.com
. To access the server, create a CNAME DNS record in your preferred DNS service that points the custom domain to the endpoint that is generated when the server is created (the value of the CreateServer Endpoint attribute). You cannot access the server by using the generated Endpoint
value if the server is using a custom domain. If you specify a custom domain, you must also specify values for CustomCertificate
and CustomPrivateKey
.
An optional public endpoint of a server, such as https://aws.my-company.com
. You cannot access the server by using the Endpoint
value if the server has a CustomDomain
specified.
Supported on servers running Chef Automate 2. A private key in PEM format for connecting to the server by using HTTPS. The private key must not be encrypted; it cannot be protected by a password or passphrase. If you specify a custom private key, you must also specify values for CustomDomain
and CustomCertificate
.
A private key in PEM format for connecting to the server by using HTTPS. The private key must not be encrypted; it cannot be protected by a password or passphrase. If you specify a custom private key, you must also specify values for CustomDomain
and CustomCertificate
.
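A hedged sketch of creating a server with the three custom-domain members; the server name, ARNs, instance settings, and file paths are placeholders, and the remaining required CreateServer settings depend on your environment:

```go
package main

import (
	"context"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/opsworkscm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := opsworkscm.New(cfg)

	// PEM-encoded certificate (or chain) and the matching unencrypted key.
	cert, err := ioutil.ReadFile("server.crt")
	if err != nil {
		log.Fatal(err)
	}
	key, err := ioutil.ReadFile("server.key")
	if err != nil {
		log.Fatal(err)
	}

	req := svc.CreateServerRequest(&opsworkscm.CreateServerInput{
		Engine:             aws.String("ChefAutomate"),
		ServerName:         aws.String("my-chef-server"),
		InstanceProfileArn: aws.String("arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role"),
		InstanceType:       aws.String("m5.large"),
		ServiceRoleArn:     aws.String("arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role"),
		// The three custom-domain members must be specified together.
		CustomDomain:      aws.String("chef.my-company.com"),
		CustomCertificate: aws.String(string(cert)),
		CustomPrivateKey:  aws.String(string(key)),
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatalf("CreateServer failed: %v", err)
	}
}
```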
A map that contains tag keys and tag values to attach to an AWS OpsWorks-CM server backup.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for tag-supported AWS OpsWorks-CM resources.
A map that contains tag keys and tag values to attach to an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for any AWS OpsWorks-CM server.
A map that contains tag keys and tag values to attach to an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : / @
The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : / @
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for any AWS OpsWorks-CM server.
Tags that have been applied to the resource.
", "TagResourceRequest$Tags": "A map that contains tag keys and tag values to attach to AWS OpsWorks-CM servers or backups.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum of 255 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for any AWS OpsWorks-CM server or backup.
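A small sketch of attaching tags within those key/value limits, assuming `svc` is the OpsWorks CM client from the CreateServer sketch above; the ARN is a placeholder, and the member names (`ResourceArn`, `Tags`) are assumed to match the opsworkscm model:

```go
req := svc.TagResourceRequest(&opsworkscm.TagResourceInput{
	ResourceArn: aws.String("arn:aws:opsworks-cm:us-west-2:123456789012:server/my-chef-server/EXAMPLE-ID"),
	Tags: []opsworkscm.Tag{
		// Keys are limited to 127 characters and values to 255 characters.
		{Key: aws.String("Stage"), Value: aws.String("Production")},
	},
})
if _, err := req.Send(context.Background()); err != nil {
	log.Fatalf("TagResource failed: %v", err)
}
```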
Sends a response to the originator of a handshake agreeing to the action proposed by the handshake request.
This operation can be called only by the following principals when they also have the relevant IAM permissions:
Invitation to join or Approve all features request handshakes: only a principal from the member account.
The user who calls the API for an invitation to join must have the organizations:AcceptHandshake
permission. If you enabled all features in the organization, the user must also have the iam:CreateServiceLinkedRole
permission so that AWS Organizations can create the required service-linked role named AWSServiceRoleForOrganizations
. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.
Enable all features final confirmation handshake: only a principal from the master account.
For more information about invitations, see Inviting an AWS Account to Join Your Organization in the AWS Organizations User Guide. For more information about requests to enable all features in the organization, see Enabling All Features in Your Organization in the AWS Organizations User Guide.
After you accept a handshake, it continues to appear in the results of relevant APIs for only 30 days. After that, it's deleted.
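A minimal sketch of accepting an invitation from the member account's side, assuming the v0.x `Request`/`Send` pattern; the handshake ID is a placeholder (it would come from the invitation, for example via `ListHandshakesForAccount`):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

func main() {
	// Credentials must belong to a principal in the invited member account.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := organizations.New(cfg)

	req := svc.AcceptHandshakeRequest(&organizations.AcceptHandshakeInput{
		HandshakeId: aws.String("h-examplehandshakeid111"),
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatalf("AcceptHandshake failed: %v", err)
	}
}
```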
", - "AttachPolicy": "Attaches a policy to a root, an organizational unit (OU), or an individual account.
How the policy affects accounts depends on the type of policy:
For more information about attaching SCPs, see How SCPs Work in the AWS Organizations User Guide.
For information about attaching tag policies, see How Policy Inheritance Works in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", + "AttachPolicy": "Attaches a policy to a root, an organizational unit (OU), or an individual account. How the policy affects accounts depends on the type of policy:
Service control policy (SCP) - An SCP specifies what permissions can be delegated to users in affected member accounts. The scope of influence for a policy depends on what you attach the policy to:
If you attach an SCP to a root, it affects all accounts in the organization.
If you attach an SCP to an OU, it affects all accounts in that OU and in any child OUs.
If you attach the policy directly to an account, it affects only that account.
SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU). You can attach one SCP to a higher level root or OU, and a different SCP to a child OU or to an account. The child policy can further restrict only the permissions that pass through the parent filter and are available to the child. An SCP that is attached to a child can't grant a permission that the parent hasn't already granted. For example, imagine that the parent SCP allows permissions A, B, C, D, and E. The child SCP allows C, D, E, F, and G. The result is that the accounts affected by the child SCP are allowed to use only C, D, and E. They can't use A or B because the child OU filtered them out. They also can't use F and G because the parent OU filtered them out. They can't be granted back by the child SCP; child SCPs can only filter the permissions they receive from the parent SCP.
AWS Organizations attaches a default SCP named FullAWSAccess
to every root, OU, and account. This default SCP allows all services and actions, enabling any new child OU or account to inherit the permissions of the parent root or OU. If you detach the default policy, you must replace it with a policy that specifies the permissions that you want to allow in that OU or account.
For more information about how AWS Organizations policy permissions work, see Using Service Control Policies in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
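A short sketch of attaching a policy, assuming `svc` is an Organizations client created with `organizations.New(cfg)` as in the handshake sketch above; the policy and target IDs are placeholders:

```go
req := svc.AttachPolicyRequest(&organizations.AttachPolicyInput{
	PolicyId: aws.String("p-examplepolicyid111"),               // an SCP or tag policy
	TargetId: aws.String("ou-examplerootid111-exampleouid111"), // root, OU, or account ID
})
if _, err := req.Send(context.Background()); err != nil {
	log.Fatalf("AttachPolicy failed: %v", err)
}
```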
", "CancelHandshake": "Cancels a handshake. Canceling a handshake sets the handshake state to CANCELED
.
This operation can be called only from the account that originated the handshake. The recipient of the handshake can't cancel it, but can use DeclineHandshake instead. After a handshake is canceled, the recipient can no longer respond to that handshake.
After you cancel a handshake, it continues to appear in the results of relevant APIs for only 30 days. After that, it's deleted.
", - "CreateAccount": "Creates an AWS account that is automatically a member of the organization whose credentials made the request. This is an asynchronous request that AWS performs in the background. Because CreateAccount
operates asynchronously, it can return a successful completion message even though account initialization might still be in progress. You might need to wait a few minutes before you can successfully access the account. To check the status of the request, do one of the following:
Use the OperationId
response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.
Check the AWS CloudTrail log for the CreateAccountResult
event. For information on using AWS CloudTrail with AWS Organizations, see Monitoring the Activity in Your Organization in the AWS Organizations User Guide.
The user who calls the API to create an account must have the organizations:CreateAccount
permission. If you enabled all features in the organization, AWS Organizations creates the required service-linked role named AWSServiceRoleForOrganizations
. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.
AWS Organizations preconfigures the new member account with a role (named OrganizationAccountAccessRole
by default) that grants users in the master account administrator permissions in the new member account. Principals in the master account can assume the role. AWS Organizations clones the company name and address information for the new account from the organization's master account.
This operation can be called only from the organization's master account.
For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.
When you create an account in an organization, the information required for the account to operate as a standalone account is not automatically collected. For example, information about the payment method and signing the end user license agreement (EULA) is not collected. If you must remove an account from your organization later, you can do so only after you provide the missing information. Follow the steps at To leave an organization as a member account in the AWS Organizations User Guide.
If you get an exception that indicates that you exceeded your account limits for the organization, contact AWS Support.
If you get an exception that indicates that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists, contact AWS Support.
Using CreateAccount
to create multiple temporary accounts isn't recommended. You can only close an account from the Billing and Cost Management Console, and you must be signed in as the root user. For information on the requirements and process for closing an account, see Closing an AWS Account in the AWS Organizations User Guide.
When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable it, only the account root user can access billing information. For information about how to disable this switch for an account, see Granting Access to Your Billing Information and Tools.
This action is available if all of the following are true:
You're authorized to create accounts in the AWS GovCloud (US) Region. For more information on the AWS GovCloud (US) Region, see the AWS GovCloud User Guide.
You already have an account in the AWS GovCloud (US) Region that is associated with your master account in the commercial Region.
You call this action from the master account of your organization in the commercial Region.
You have the organizations:CreateGovCloudAccount
permission. AWS Organizations creates the required service-linked role named AWSServiceRoleForOrganizations
. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.
AWS automatically enables AWS CloudTrail for AWS GovCloud (US) accounts, but you should also do the following:
Verify that AWS CloudTrail is enabled to store logs.
Create an S3 bucket for AWS CloudTrail log storage.
For more information, see Verifying AWS CloudTrail Is Enabled in the AWS GovCloud User Guide.
You call this action from the master account of your organization in the commercial Region to create a standalone AWS account in the AWS GovCloud (US) Region. After the account is created, the master account of an organization in the AWS GovCloud (US) Region can invite it to that organization. For more information on inviting standalone accounts in the AWS GovCloud (US) to join an organization, see AWS Organizations in the AWS GovCloud User Guide.
Calling CreateGovCloudAccount
is an asynchronous request that AWS performs in the background. Because CreateGovCloudAccount
operates asynchronously, it can return a successful completion message even though account initialization might still be in progress. You might need to wait a few minutes before you can successfully access the account. To check the status of the request, do one of the following:
Use the OperationId
response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.
Check the AWS CloudTrail log for the CreateAccountResult
event. For information on using AWS CloudTrail with Organizations, see Monitoring the Activity in Your Organization in the AWS Organizations User Guide.
When you call the CreateGovCloudAccount
action, you create two accounts: a standalone account in the AWS GovCloud (US) Region and an associated account in the commercial Region for billing and support purposes. The account in the commercial Region is automatically a member of the organization whose credentials made the request. Both accounts are associated with the same email address.
A role is created in the new account in the commercial Region that allows the master account in the organization in the commercial Region to assume it. An AWS GovCloud (US) account is then created and associated with the commercial account that you just created. A role is created in the new AWS GovCloud (US) account. This role can be assumed by the AWS GovCloud (US) account that is associated with the master account of the commercial organization. For more information and to view a diagram that explains how account access works, see AWS Organizations in the AWS GovCloud User Guide.
For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.
You can create an account in an organization using the AWS Organizations console, API, or CLI commands. When you do, the information required for the account to operate as a standalone account, such as a payment method, is not automatically collected. If you must remove an account from your organization later, you can do so only after you provide the missing information. Follow the steps at To leave an organization as a member account in the AWS Organizations User Guide.
If you get an exception that indicates that you exceeded your account limits for the organization, contact AWS Support.
If you get an exception that indicates that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists, contact AWS Support.
Using CreateGovCloudAccount
to create multiple temporary accounts isn't recommended. You can only close an account from the AWS Billing and Cost Management console, and you must be signed in as the root user. For information on the requirements and process for closing an account, see Closing an AWS Account in the AWS Organizations User Guide.
When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable it, only the account root user can access billing information. For information about how to disable this switch for an account, see Granting Access to Your Billing Information and Tools.
Creates an AWS organization. The account whose user is calling the CreateOrganization
operation automatically becomes the master account of the new organization.
This operation must be called using credentials from the account that is to become the new organization's master account. The principal must also have the relevant IAM permissions.
By default (or if you set the FeatureSet
parameter to ALL
), the new organization is created with all features enabled. In addition, service control policies are automatically enabled in the root. If you instead create the organization supporting only the consolidated billing features, no policy types are enabled by default, and you can't use organization policies.
Creates an AWS account that is automatically a member of the organization whose credentials made the request. This is an asynchronous request that AWS performs in the background. Because CreateAccount
operates asynchronously, it can return a successful completion message even though account initialization might still be in progress. You might need to wait a few minutes before you can successfully access the account. To check the status of the request, do one of the following:
Use the OperationId
response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.
Check the AWS CloudTrail log for the CreateAccountResult
event. For information on using AWS CloudTrail with AWS Organizations, see Monitoring the Activity in Your Organization in the AWS Organizations User Guide.
The user who calls the API to create an account must have the organizations:CreateAccount
permission. If you enabled all features in the organization, AWS Organizations creates the required service-linked role named AWSServiceRoleForOrganizations
. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.
AWS Organizations preconfigures the new member account with a role (named OrganizationAccountAccessRole
by default) that grants users in the master account administrator permissions in the new member account. Principals in the master account can assume the role. AWS Organizations clones the company name and address information for the new account from the organization's master account.
This operation can be called only from the organization's master account.
For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.
When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required for the account to operate as a standalone account, such as a payment method and signing the end user license agreement (EULA) is not automatically collected. If you must remove an account from your organization later, you can do so only after you provide the missing information. Follow the steps at To leave an organization as a member account in the AWS Organizations User Guide.
If you get an exception that indicates that you exceeded your account limits for the organization, contact AWS Support.
If you get an exception that indicates that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists, contact AWS Support.
Using CreateAccount
to create multiple temporary accounts isn't recommended. You can only close an account from the Billing and Cost Management Console, and you must be signed in as the root user. For information on the requirements and process for closing an account, see Closing an AWS Account in the AWS Organizations User Guide.
When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable it, only the account root user can access billing information. For information about how to disable this switch for an account, see Granting Access to Your Billing Information and Tools.
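A hedged sketch of the asynchronous flow described above — create the account, then poll `DescribeCreateAccountStatus` with the returned request ID — assuming the same Organizations client (`svc`) plus `context`, `log`, and `time` from the standard library; the email, account name, and polling interval are placeholders:

```go
createReq := svc.CreateAccountRequest(&organizations.CreateAccountInput{
	Email:       aws.String("dev-account@example.com"),
	AccountName: aws.String("Dev Account"),
})
createResp, err := createReq.Send(context.Background())
if err != nil {
	log.Fatalf("CreateAccount failed: %v", err)
}

// CreateAccount returns before the account is ready; poll the request status
// until it is no longer IN_PROGRESS.
for {
	statusReq := svc.DescribeCreateAccountStatusRequest(&organizations.DescribeCreateAccountStatusInput{
		CreateAccountRequestId: createResp.CreateAccountStatus.Id,
	})
	statusResp, err := statusReq.Send(context.Background())
	if err != nil {
		log.Fatalf("DescribeCreateAccountStatus failed: %v", err)
	}
	if string(statusResp.CreateAccountStatus.State) != "IN_PROGRESS" {
		log.Printf("account creation finished with state %s", statusResp.CreateAccountStatus.State)
		break
	}
	time.Sleep(5 * time.Second)
}
```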
This action is available if all of the following are true:
You're authorized to create accounts in the AWS GovCloud (US) Region. For more information on the AWS GovCloud (US) Region, see the AWS GovCloud User Guide.
You already have an account in the AWS GovCloud (US) Region that is associated with your master account in the commercial Region.
You call this action from the master account of your organization in the commercial Region.
You have the organizations:CreateGovCloudAccount
permission. AWS Organizations creates the required service-linked role named AWSServiceRoleForOrganizations
. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.
AWS automatically enables AWS CloudTrail for AWS GovCloud (US) accounts, but you should also do the following:
Verify that AWS CloudTrail is enabled to store logs.
Create an S3 bucket for AWS CloudTrail log storage.
For more information, see Verifying AWS CloudTrail Is Enabled in the AWS GovCloud User Guide.
You call this action from the master account of your organization in the commercial Region to create a standalone AWS account in the AWS GovCloud (US) Region. After the account is created, the master account of an organization in the AWS GovCloud (US) Region can invite it to that organization. For more information on inviting standalone accounts in the AWS GovCloud (US) to join an organization, see AWS Organizations in the AWS GovCloud User Guide.
Calling CreateGovCloudAccount
is an asynchronous request that AWS performs in the background. Because CreateGovCloudAccount
operates asynchronously, it can return a successful completion message even though account initialization might still be in progress. You might need to wait a few minutes before you can successfully access the account. To check the status of the request, do one of the following:
Use the OperationId
response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.
Check the AWS CloudTrail log for the CreateAccountResult
event. For information on using AWS CloudTrail with Organizations, see Monitoring the Activity in Your Organization in the AWS Organizations User Guide.
When you call the CreateGovCloudAccount
action, you create two accounts: a standalone account in the AWS GovCloud (US) Region and an associated account in the commercial Region for billing and support purposes. The account in the commercial Region is automatically a member of the organization whose credentials made the request. Both accounts are associated with the same email address.
A role is created in the new account in the commercial Region that allows the master account in the organization in the commercial Region to assume it. An AWS GovCloud (US) account is then created and associated with the commercial account that you just created. A role is created in the new AWS GovCloud (US) account that can be assumed by the AWS GovCloud (US) account that is associated with the master account of the commercial organization. For more information and to view a diagram that explains how account access works, see AWS Organizations in the AWS GovCloud User Guide.
For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.
When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required for the account to operate as a standalone account, such as a payment method and signing the end user license agreement (EULA) is not automatically collected. If you must remove an account from your organization later, you can do so only after you provide the missing information. Follow the steps at To leave an organization as a member account in the AWS Organizations User Guide.
If you get an exception that indicates that you exceeded your account limits for the organization, contact AWS Support.
If you get an exception that indicates that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists, contact AWS Support.
Using CreateGovCloudAccount
to create multiple temporary accounts isn't recommended. You can only close an account from the AWS Billing and Cost Management console, and you must be signed in as the root user. For information on the requirements and process for closing an account, see Closing an AWS Account in the AWS Organizations User Guide.
When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable it, only the account root user can access billing information. For information about how to disable this switch for an account, see Granting Access to Your Billing Information and Tools.
Creates an AWS organization. The account whose user is calling the CreateOrganization
operation automatically becomes the master account of the new organization.
This operation must be called using credentials from the account that is to become the new organization's master account. The principal must also have the relevant IAM permissions.
By default (or if you set the FeatureSet
parameter to ALL
), the new organization is created with all features enabled and service control policies automatically enabled in the root. If you instead choose to create the organization supporting only the consolidated billing features by setting the FeatureSet
parameter to CONSOLIDATED_BILLING
, no policy types are enabled by default, and you can't use organization policies.
Creates an organizational unit (OU) within a root or parent OU. An OU is a container for accounts that enables you to organize your accounts to apply policies according to your business requirements. The number of levels deep that you can nest OUs is dependent upon the policy types enabled for that root. For service control policies, the limit is five.
For more information about OUs, see Managing Organizational Units in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", "CreatePolicy": "Creates a policy of a specified type that you can attach to a root, an organizational unit (OU), or an individual AWS account.
For more information about policies and their use, see Managing Organization Policies.
This operation can be called only from the organization's master account.
", - "DeclineHandshake": "Declines a handshake request. This sets the handshake state to DECLINED
and effectively deactivates the request.
This operation can be called only from the account that received the handshake. The originator of the handshake can use CancelHandshake instead. The originator can't reactivate a declined request, but can reinitiate the process with a new handshake request.
After you decline a handshake, it continues to appear in the results of relevant API operations for only 30 days. After that, it's deleted.
", + "DeclineHandshake": "Declines a handshake request. This sets the handshake state to DECLINED
and effectively deactivates the request.
This operation can be called only from the account that received the handshake. The originator of the handshake can use CancelHandshake instead. The originator can't reactivate a declined request, but can reinitiate the process with a new handshake request.
After you decline a handshake, it continues to appear in the results of relevant APIs for only 30 days. After that, it's deleted.
", "DeleteOrganization": "Deletes the organization. You can delete an organization only by using credentials from the master account. The organization must be empty of member accounts.
", "DeleteOrganizationalUnit": "Deletes an organizational unit (OU) from a root or another OU. You must first remove all accounts and child OUs from the OU that you want to delete.
This operation can be called only from the organization's master account.
", "DeletePolicy": "Deletes the specified policy from your organization. Before you perform this operation, you must first detach the policy from all organizational units (OUs), roots, and accounts.
This operation can be called only from the organization's master account.
", - "DescribeAccount": "Retrieves AWS Organizations related information about the specified account.
This operation can be called only from the organization's master account.
", - "DescribeCreateAccountStatus": "Retrieves the current status of an asynchronous request to create an account.
This operation can be called only from the organization's master account.
", - "DescribeEffectivePolicy": "Returns the contents of the effective tag policy for the account. The effective tag policy is the aggregation of any tag policies the account inherits, plus any policy directly that is attached to the account.
This action returns information on tag policies only.
For more information on policy inheritance, see How Policy Inheritance Works in the AWS Organizations User Guide.
This operation can be called from any account in the organization.
", + "DeregisterDelegatedAdministrator": "Removes the specified member AWS account as a delegated administrator for the specified AWS service.
You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", + "DescribeAccount": "Retrieves AWS Organizations-related information about the specified account.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "DescribeCreateAccountStatus": "Retrieves the current status of an asynchronous request to create an account.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "DescribeEffectivePolicy": "Returns the contents of the effective tag policy for the account. The effective tag policy is the aggregation of any tag policies the account inherits, plus any policy directly that is attached to the account.
This action returns information on tag policies only.
For more information on policy inheritance, see How Policy Inheritance Works in the AWS Organizations User Guide.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", "DescribeHandshake": "Retrieves information about a previously requested handshake. The handshake ID comes from the response to the original InviteAccountToOrganization operation that generated the handshake.
You can access handshakes that are ACCEPTED
, DECLINED
, or CANCELED
for only 30 days after they change to that state. They're then deleted and no longer accessible.
This operation can be called from any account in the organization.
", "DescribeOrganization": "Retrieves information about the organization that the user's account belongs to.
This operation can be called from any account in the organization.
Even if a policy type is shown as available in the organization, you can disable it separately at the root level with DisablePolicyType. Use ListRoots to see the status of policy types for a specified root.
Retrieves information about an organizational unit (OU).
This operation can be called only from the organization's master account.
", - "DescribePolicy": "Retrieves information about a policy.
This operation can be called only from the organization's master account.
", - "DetachPolicy": "Detaches a policy from a target root, organizational unit (OU), or account. If the policy being detached is a service control policy (SCP), the changes to permissions for IAM users and roles in affected accounts are immediate.
Note: Every root, OU, and account must have at least one SCP attached. You can replace the default FullAWSAccess
policy with one that limits the permissions that can be delegated. To do that, you must attach the replacement policy before you can remove the default one. This is the authorization strategy of using an allow list. You could instead attach a second SCP and leave the FullAWSAccess
SCP still attached. You could then specify \"Effect\": \"Deny\"
in the second SCP to override the \"Effect\": \"Allow\"
in the FullAWSAccess
policy (or any other attached SCP). If you take these steps, you're using the authorization strategy of a deny list.
This operation can be called only from the organization's master account.
", - "DisableAWSServiceAccess": "Disables the integration of an AWS service (the service that is specified by ServicePrincipal
) with AWS Organizations. When you disable integration, the specified service no longer can create a service-linked role in new accounts in your organization. This means the service can't perform operations on your behalf on any new accounts in your organization. The service can still perform operations in older accounts until the service completes its clean-up from AWS Organizations.
We recommend that you disable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the other service is aware that it can clean up any resources that are required only for the integration. How the service cleans up its resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service.
After you perform the DisableAWSServiceAccess
operation, the specified service can no longer perform operations in your organization's accounts. The only exception is when the operations are explicitly permitted by IAM policies that are attached to your roles.
For more information about integrating other services with AWS Organizations, including the list of services that work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", - "DisablePolicyType": "Disables an organizational control policy type in a root and detaches all policies of that type from the organization root, OUs, and accounts. A policy of a certain type can be attached to entities in a root only if that type is enabled in the root. After you perform this operation, you no longer can attach policies of the specified type to that root or to any organizational unit (OU) or account in that root. You can undo this by using the EnablePolicyType operation.
This is an asynchronous request that AWS performs in the background. If you disable a policy for a root, it still appears enabled for the organization if all features are enabled for the organization. AWS recommends that you first use ListRoots to see the status of policy types for a specified root, and then use this operation.
This operation can be called only from the organization's master account.
To view the status of available policy types in the organization, use DescribeOrganization.
", + "DescribeOrganizationalUnit": "Retrieves information about an organizational unit (OU).
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "DescribePolicy": "Retrieves information about a policy.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "DetachPolicy": "Detaches a policy from a target root, organizational unit (OU), or account. If the policy being detached is a service control policy (SCP), the changes to permissions for IAM users and roles in affected accounts are immediate.
Note: Every root, OU, and account must have at least one SCP attached. If you want to replace the default FullAWSAccess
policy with one that limits the permissions that can be delegated, you must attach the replacement policy before you can remove the default one. This is the authorization strategy of an \"allow list\". If you instead attach a second SCP and leave the FullAWSAccess
SCP still attached, and specify \"Effect\": \"Deny\"
in the second SCP to override the \"Effect\": \"Allow\"
in the FullAWSAccess
policy (or any other attached SCP), you're using the authorization strategy of a \"deny list\".
This operation can be called only from the organization's master account.
", + "DisableAWSServiceAccess": "Disables the integration of an AWS service (the service that is specified by ServicePrincipal
) with AWS Organizations. When you disable integration, the specified service no longer can create a service-linked role in new accounts in your organization. This means the service can't perform operations on your behalf on any new accounts in your organization. The service can still perform operations in older accounts until the service completes its clean-up from AWS Organizations.
We recommend that you disable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the other service is aware that it can clean up any resources that are required only for the integration. How the service cleans up its resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service.
After you perform the DisableAWSServiceAccess
operation, the specified service can no longer perform operations in your organization's accounts unless the operations are explicitly permitted by the IAM policies that are attached to your roles.
For more information about integrating other services with AWS Organizations, including the list of services that work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", + "DisablePolicyType": "Disables an organizational control policy type in a root. A policy of a certain type can be attached to entities in a root only if that type is enabled in the root. After you perform this operation, you no longer can attach policies of the specified type to that root or to any organizational unit (OU) or account in that root. You can undo this by using the EnablePolicyType operation.
This is an asynchronous request that AWS performs in the background. If you disable a policy for a root, it still appears enabled for the organization if all features are enabled for the organization. AWS recommends that you first use ListRoots to see the status of policy types for a specified root, and then use this operation.
This operation can be called only from the organization's master account.
To view the status of available policy types in the organization, use DescribeOrganization.
", "EnableAWSServiceAccess": "Enables the integration of an AWS service (the service that is specified by ServicePrincipal
) with AWS Organizations. When you enable integration, you allow the specified service to create a service-linked role in all the accounts in your organization. This allows the service to perform operations on your behalf in your organization and its accounts.
We recommend that you enable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the service is aware that it can create the resources that are required for the integration. How the service creates those resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service.
For more information about enabling services to integrate with AWS Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.
This operation can be called only from the organization's master account and only if the organization has enabled all features.
", - "EnableAllFeatures": "Enables all features in an organization. This enables the use of organization policies that can restrict the services and actions that can be called in each account. Until you enable all features, you have access only to consolidated billing. You can't use any of the advanced account administration features that AWS Organizations supports. For more information, see Enabling All Features in Your Organization in the AWS Organizations User Guide.
This operation is required only for organizations that were created explicitly with only the consolidated billing features enabled. Calling this operation sends a handshake to every invited account in the organization. The feature set change can be finalized and the additional features enabled only after all administrators in the invited accounts approve the change. Accepting the handshake approves the change.
After you enable all features, you can separately enable or disable individual policy types in a root using EnablePolicyType and DisablePolicyType. To see the status of policy types in a root, use ListRoots.
After all invited member accounts accept the handshake, you finalize the feature set change by accepting the handshake that contains \"Action\": \"ENABLE_ALL_FEATURES\"
. This completes the change.
After you enable all features in your organization, the master account in the organization can apply policies on all member accounts. These policies can restrict what users and even administrators in those accounts can do. The master account can apply policies that prevent accounts from leaving the organization. Ensure that your account administrators are aware of this.
This operation can be called only from the organization's master account.
", + "EnableAllFeatures": "Enables all features in an organization. This enables the use of organization policies that can restrict the services and actions that can be called in each account. Until you enable all features, you have access only to consolidated billing, and you can't use any of the advanced account administration features that AWS Organizations supports. For more information, see Enabling All Features in Your Organization in the AWS Organizations User Guide.
This operation is required only for organizations that were created explicitly with only the consolidated billing features enabled. Calling this operation sends a handshake to every invited account in the organization. The feature set change can be finalized and the additional features enabled only after all administrators in the invited accounts approve the change by accepting the handshake.
After you enable all features, you can separately enable or disable individual policy types in a root using EnablePolicyType and DisablePolicyType. To see the status of policy types in a root, use ListRoots.
After all invited member accounts accept the handshake, you finalize the feature set change by accepting the handshake that contains \"Action\": \"ENABLE_ALL_FEATURES\"
. This completes the change.
After you enable all features in your organization, the master account in the organization can apply policies on all member accounts. These policies can restrict what users and even administrators in those accounts can do. The master account can apply policies that prevent accounts from leaving the organization. Ensure that your account administrators are aware of this.
This operation can be called only from the organization's master account.
", "EnablePolicyType": "Enables a policy type in a root. After you enable a policy type in a root, you can attach policies of that type to the root, any organizational unit (OU), or account in that root. You can undo this by using the DisablePolicyType operation.
This is an asynchronous request that AWS performs in the background. AWS recommends that you first use ListRoots to see the status of policy types for a specified root, and then use this operation.
This operation can be called only from the organization's master account.
You can enable a policy type in a root only if that policy type is available in the organization. To view the status of available policy types in the organization, use DescribeOrganization.
", - "InviteAccountToOrganization": "Sends an invitation to another account to join your organization as a member account. AWS Organizations sends email on your behalf to the email address that is associated with the other account's owner. The invitation is implemented as a Handshake whose details are in the response.
You can invite AWS accounts only from the same seller as the master account. For example, assume that your organization's master account was created by Amazon Internet Services Pvt. Ltd (AISPL), an AWS seller in India. You can invite only other AISPL accounts to your organization. You can't combine accounts from AISPL and AWS or from any other AWS seller. For more information, see Consolidated Billing in India.
You might receive an exception that indicates that you exceeded your account limits for the organization or that the operation failed because your organization is still initializing. If so, wait one hour and then try again. If the error persists after an hour, contact AWS Support.
This operation can be called only from the organization's master account.
", - "LeaveOrganization": "Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the master account, use RemoveAccountFromOrganization instead.
This operation can be called only from a member account in the organization.
The master account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do. These restrictions can include preventing member accounts from successfully calling LeaveOrganization
.
You can leave an organization as a member account only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI, the information required of standalone accounts is not automatically collected. For each account that you want to make standalone, you must accept the end user license agreement (EULA). You must also choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
Returns a list of the AWS services that you enabled to integrate with your organization. After a service on this list creates the resources that it requires for the integration, it can perform operations on your organization and its accounts.
For more information about integrating other services with AWS Organizations, including the list of services that currently work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
", - "ListAccounts": "Lists all the accounts in the organization. To request only the accounts in a specified root or organizational unit (OU), use the ListAccountsForParent operation instead.
Always check the NextToken
response parameter for a null
value when calling a List*
operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken
response parameter value is null
only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListAccountsForParent": "Lists the accounts in an organization that are contained by the specified target root or organizational unit (OU). If you specify the root, you get a list of all the accounts that aren't in any OU. If you specify an OU, you get a list of all the accounts in only that OU and not in any child OUs. To get a list of all accounts in the organization, use the ListAccounts operation.
Always check the NextToken
response parameter for a null
value when calling a List*
operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken
response parameter value is null
only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListChildren": "Lists all of the organizational units (OUs) or accounts that are contained in the specified parent OU or root. This operation, along with ListParents enables you to traverse the tree structure that makes up this root.
Always check the NextToken
response parameter for a null
value when calling a List*
operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken
response parameter value is null
only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListCreateAccountStatus": "Lists the account creation requests that match the specified status that is currently being tracked for the organization.
Always check the NextToken
response parameter for a null
value when calling a List*
operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken
response parameter value is null
only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListHandshakesForAccount": "Lists the current handshakes that are associated with the account of the requesting user.
Handshakes that are ACCEPTED, DECLINED, or CANCELED appear in the results of this API for only 30 days after changing to that state. After that, they're deleted and no longer accessible.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called from any account in the organization.
", - "ListHandshakesForOrganization": "Lists the handshakes that are associated with the organization that the requesting user is part of. The ListHandshakesForOrganization
operation returns a list of handshake structures. Each structure contains details and status about a handshake.
Handshakes that are ACCEPTED, DECLINED, or CANCELED appear in the results of this API for only 30 days after changing to that state. After that, they're deleted and no longer accessible.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListOrganizationalUnitsForParent": "Lists the organizational units (OUs) in a parent organizational unit or root.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListParents": "Lists the root or organizational units (OUs) that serve as the immediate parent of the specified child OU or account. This operation, along with ListChildren enables you to traverse the tree structure that makes up this root.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
In the current release, a child can have only a single parent.
", - "ListPolicies": "Retrieves the list of all policies in an organization of a specified type.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListPoliciesForTarget": "Lists the policies that are directly attached to the specified target root, organizational unit (OU), or account. You must specify the policy type that you want included in the returned list.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
", - "ListRoots": "Lists the roots that are defined in the current organization.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
Policy types can be enabled and disabled in roots. This is distinct from whether they're available in the organization. When you enable all features, you make policy types available for use in that organization. Individual policy types can then be enabled and disabled in a root. To see the availability of a policy type in an organization, use DescribeOrganization.
", - "ListTagsForResource": "Lists tags for the specified resource.
Currently, you can list tags on an account in AWS Organizations.
This operation can be called only from the organization's master account.
", - "ListTargetsForPolicy": "Lists all the roots, organizational units (OUs), and accounts that the specified policy is attached to.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account.
", + "InviteAccountToOrganization": "Sends an invitation to another account to join your organization as a member account. AWS Organizations sends email on your behalf to the email address that is associated with the other account's owner. The invitation is implemented as a Handshake whose details are in the response.
You can invite AWS accounts only from the same seller as the master account. For example, if your organization's master account was created by Amazon Internet Services Pvt. Ltd (AISPL), an AWS seller in India, you can invite only other AISPL accounts to your organization. You can't combine accounts from AISPL and AWS or from any other AWS seller. For more information, see Consolidated Billing in India.
If you receive an exception that indicates that you exceeded your account limits for the organization or that the operation failed because your organization is still initializing, wait one hour and then try again. If the error persists after an hour, contact AWS Support.
This operation can be called only from the organization's master account.
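For illustration, a minimal Go sketch of sending such an invitation with the v0.21.0 request/Send client pattern. The account ID and notes are placeholders, and the `HandshakeParty` field names and `HandshakePartyTypeAccount` constant are assumed from the generated `organizations` package:

```go
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// inviteAccount sends an invitation handshake to a placeholder member account.
func inviteAccount(ctx context.Context, svc *organizations.Client) error {
	req := svc.InviteAccountToOrganizationRequest(&organizations.InviteAccountToOrganizationInput{
		Target: &organizations.HandshakeParty{
			Id:   aws.String("111122223333"), // placeholder account ID
			Type: organizations.HandshakePartyTypeAccount,
		},
		Notes: aws.String("Please join our organization."),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	// The returned Handshake tracks the invitation for up to 30 days.
	fmt.Println("handshake state:", resp.Handshake.State)
	return nil
}
```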
", + "LeaveOrganization": "Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the master account, use RemoveAccountFromOrganization instead.
This operation can be called only from a member account in the organization.
The master account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do. This includes preventing them from successfully calling LeaveOrganization
and leaving the organization.
You can leave an organization as a member account only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For each account that you want to make standalone, you must do the following steps:
Accept the end user license agreement (EULA)
Choose a support plan
Provide and verify the required contact information
Provide a current payment method
AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
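A minimal sketch of the call itself, assuming a client constructed with `organizations.New(cfg)` for the member account that is leaving (v0.21.0 request/Send pattern):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// leaveOrganization is called with credentials from the member account that
// wants to leave; LeaveOrganizationInput has no required fields.
func leaveOrganization(ctx context.Context, svc *organizations.Client) error {
	req := svc.LeaveOrganizationRequest(&organizations.LeaveOrganizationInput{})
	_, err := req.Send(ctx)
	// A ConstraintViolationException here usually means the account is still
	// missing the standalone-account information described above.
	return err
}
```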
Returns a list of the AWS services that you enabled to integrate with your organization. After a service on this list creates the resources that it requires for the integration, it can perform operations on your organization and its accounts.
For more information about integrating other services with AWS Organizations, including the list of services that currently work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListAccounts": "Lists all the accounts in the organization. To request only the accounts in a specified root or organizational unit (OU), use the ListAccountsForParent operation instead.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
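To make the NextToken contract concrete, here is a minimal Go sketch that drains ListAccounts under the v0.21.0 request/Send pattern. It assumes a client built with `organizations.New(cfg)` (type `*organizations.Client`); the loop keys off a nil NextToken rather than an empty page, per the note above.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// listAllAccounts pages through ListAccounts until the service returns a nil
// NextToken. An empty Accounts slice with a non-nil NextToken still means
// more results may be available, so only NextToken ends the loop.
func listAllAccounts(ctx context.Context, svc *organizations.Client) ([]organizations.Account, error) {
	var accounts []organizations.Account
	var next *string
	for {
		req := svc.ListAccountsRequest(&organizations.ListAccountsInput{NextToken: next})
		resp, err := req.Send(ctx)
		if err != nil {
			return nil, err
		}
		accounts = append(accounts, resp.Accounts...)
		if resp.NextToken == nil {
			return accounts, nil
		}
		next = resp.NextToken
	}
}
```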
", + "ListAccountsForParent": "Lists the accounts in an organization that are contained by the specified target root or organizational unit (OU). If you specify the root, you get a list of all the accounts that aren't in any OU. If you specify an OU, you get a list of all the accounts in only that OU and not in any child OUs. To get a list of all accounts in the organization, use the ListAccounts operation.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListChildren": "Lists all of the organizational units (OUs) or accounts that are contained in the specified parent OU or root. This operation, along with ListParents enables you to traverse the tree structure that makes up this root.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
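For illustration, a sketch of walking the OU tree with ListChildren under the v0.21.0 request/Send pattern; the `ChildTypeOrganizationalUnit` constant and `Child` field names are assumed from the generated package:

```go
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// walkOUs prints every OU ID under parentID by following ListChildren
// recursively; a second pass with ChildTypeAccount would list the accounts.
func walkOUs(ctx context.Context, svc *organizations.Client, parentID string, depth int) error {
	var next *string
	for {
		req := svc.ListChildrenRequest(&organizations.ListChildrenInput{
			ParentId:  aws.String(parentID),
			ChildType: organizations.ChildTypeOrganizationalUnit,
			NextToken: next,
		})
		resp, err := req.Send(ctx)
		if err != nil {
			return err
		}
		for _, child := range resp.Children {
			fmt.Printf("%*s%s\n", depth*2, "", *child.Id)
			if err := walkOUs(ctx, svc, *child.Id, depth+1); err != nil {
				return err
			}
		}
		if resp.NextToken == nil {
			return nil
		}
		next = resp.NextToken
	}
}
```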
", + "ListCreateAccountStatus": "Lists the account creation requests that match the specified status that is currently being tracked for the organization.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListDelegatedAdministrators": "Lists the AWS accounts that are designated as delegated administrators in this organization.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListDelegatedServicesForAccount": "List the AWS services for which the specified account is a delegated administrator.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
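A sketch combining the two delegated-administrator list calls (v0.21.0 request/Send pattern; the response field names are assumed from the generated package):

```go
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// printDelegatedAdmins lists each delegated administrator account and the
// AWS services it is registered to administer.
func printDelegatedAdmins(ctx context.Context, svc *organizations.Client) error {
	req := svc.ListDelegatedAdministratorsRequest(&organizations.ListDelegatedAdministratorsInput{})
	admins, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, admin := range admins.DelegatedAdministrators {
		svcReq := svc.ListDelegatedServicesForAccountRequest(&organizations.ListDelegatedServicesForAccountInput{
			AccountId: admin.Id,
		})
		services, err := svcReq.Send(ctx)
		if err != nil {
			return err
		}
		for _, ds := range services.DelegatedServices {
			fmt.Printf("%s is a delegated administrator for %s\n", *admin.Id, *ds.ServicePrincipal)
		}
	}
	return nil
}
```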
", + "ListHandshakesForAccount": "Lists the current handshakes that are associated with the account of the requesting user.
Handshakes that are ACCEPTED, DECLINED, or CANCELED appear in the results of this API for only 30 days after changing to that state. After that, they're deleted and no longer accessible.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListHandshakesForOrganization": "Lists the handshakes that are associated with the organization that the requesting user is part of. The ListHandshakesForOrganization
operation returns a list of handshake structures. Each structure contains details and status about a handshake.
Handshakes that are ACCEPTED, DECLINED, or CANCELED appear in the results of this API for only 30 days after changing to that state. After that, they're deleted and no longer accessible.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListOrganizationalUnitsForParent": "Lists the organizational units (OUs) in a parent organizational unit or root.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListParents": "Lists the root or organizational units (OUs) that serve as the immediate parent of the specified child OU or account. This operation, along with ListChildren enables you to traverse the tree structure that makes up this root.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
In the current release, a child can have only a single parent.
", + "ListPolicies": "Retrieves the list of all policies in an organization of a specified type.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListPoliciesForTarget": "Lists the policies that are directly attached to the specified target root, organizational unit (OU), or account. You must specify the policy type that you want included in the returned list.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListRoots": "Lists the roots that are defined in the current organization.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
Policy types can be enabled and disabled in roots. This is distinct from whether they're available in the organization. When you enable all features, you make policy types available for use in that organization. Individual policy types can then be enabled and disabled in a root. To see the availability of a policy type in an organization, use DescribeOrganization.
Lists tags for the specified resource.
Currently, you can list tags on an account in AWS Organizations.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", + "ListTargetsForPolicy": "Lists all the roots, organizational units (OUs), and accounts that the specified policy is attached to.
Always check the NextToken response parameter for a null value when calling a List* operation. These operations can occasionally return an empty set of results even when there are more results available. The NextToken response parameter value is null only when there are no more results to display.
This operation can be called only from the organization's master account or by a member account that is a delegated administrator for an AWS service.
", "MoveAccount": "Moves an account from its current source parent root or organizational unit (OU) to the specified destination parent root or OU.
This operation can be called only from the organization's master account.
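For illustration, a minimal MoveAccount sketch (v0.21.0 request/Send pattern); the account, root, and OU IDs are placeholders:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// moveAccount moves a placeholder account from the root into an OU.
func moveAccount(ctx context.Context, svc *organizations.Client) error {
	req := svc.MoveAccountRequest(&organizations.MoveAccountInput{
		AccountId:           aws.String("111122223333"),                       // placeholder account ID
		SourceParentId:      aws.String("r-examplerootid111"),                 // placeholder root ID
		DestinationParentId: aws.String("ou-examplerootid111-exampleouid111"), // placeholder OU ID
	})
	_, err := req.Send(ctx)
	return err
}
```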
", - "RemoveAccountFromOrganization": "Removes the specified account from the organization.
The removed account becomes a standalone account that isn't a member of any organization. It's no longer subject to any policies and is responsible for its own bill payments. The organization's master account is no longer charged for any expenses accrued by the member account after it's removed from the organization.
This operation can be called only from the organization's master account. Member accounts can remove themselves with LeaveOrganization instead.
You can remove an account from your organization only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI, the information required of standalone accounts is not automatically collected. For an account that you want to make standalone, you must accept the end user license agreement (EULA). You must also choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. To remove an account that doesn't yet have this information, you must sign in as the member account. Then follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
", + "RegisterDelegatedAdministrator": "Enables the specified member account to administer the Organizations features of the specified AWS service. It grants read-only access to AWS Organizations service data. The account still requires IAM permissions to access and administer the AWS service.
You can run this action only for AWS services that support this feature. For a current list of services that support it, see the column Supports Delegated Administrator in the table at AWS Services that you can use with AWS Organizations in the AWS Organizations User Guide.
This operation can be called only from the organization's master account.
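For illustration, a hedged sketch of registering a delegated administrator under the v0.21.0 request/Send pattern; the account ID and service principal shown are placeholders:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// registerDelegatedAdmin registers a placeholder member account as the
// delegated administrator for a placeholder service principal.
func registerDelegatedAdmin(ctx context.Context, svc *organizations.Client) error {
	req := svc.RegisterDelegatedAdministratorRequest(&organizations.RegisterDelegatedAdministratorInput{
		AccountId:        aws.String("111122223333"),         // placeholder member account ID
		ServicePrincipal: aws.String("config.amazonaws.com"), // placeholder service principal
	})
	_, err := req.Send(ctx)
	return err
}
```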
", + "RemoveAccountFromOrganization": "Removes the specified account from the organization.
The removed account becomes a standalone account that isn't a member of any organization. It's no longer subject to any policies and is responsible for its own bill payments. The organization's master account is no longer charged for any expenses accrued by the member account after it's removed from the organization.
This operation can be called only from the organization's master account. Member accounts can remove themselves with LeaveOrganization instead.
You can remove an account from your organization only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For an account that you want to make standalone, you must accept the end user license agreement (EULA), choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account isn't attached to an organization. To remove an account that doesn't yet have this information, you must sign in as the member account and follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
", + "TagResource": "Adds one or more tags to the specified resource.
Currently, you can tag and untag accounts in AWS Organizations.
This operation can be called only from the organization's master account.
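A minimal tagging sketch under the same assumptions; the resource ID and tag values are placeholders:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// tagAccount adds a single tag to a placeholder member account ID.
func tagAccount(ctx context.Context, svc *organizations.Client) error {
	req := svc.TagResourceRequest(&organizations.TagResourceInput{
		ResourceId: aws.String("111122223333"), // currently only account IDs can be tagged
		Tags: []organizations.Tag{
			{Key: aws.String("team"), Value: aws.String("platform")},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```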
", "UntagResource": "Removes a tag from the specified resource.
Currently, you can tag and untag accounts in AWS Organizations.
This operation can be called only from the organization's master account.
", "UpdateOrganizationalUnit": "Renames the specified organizational unit (OU). The ID and ARN don't change. The child OUs and accounts remain in place, and any attached policies of the OU remain attached.
This operation can be called only from the organization's master account.
", @@ -89,10 +93,16 @@ "DescribeAccountResponse$Account": "A structure that contains information about the requested account.
" } }, + "AccountAlreadyRegisteredException": { + "base": "The specified account is already a delegated administrator for this AWS service.
", + "refs": { + } + }, "AccountArn": { "base": null, "refs": { "Account$Arn": "The Amazon Resource Name (ARN) of the account.
For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide.
", + "DelegatedAdministrator$Arn": "The Amazon Resource Name (ARN) of the delegated administrator's account.
", "Organization$MasterAccountArn": "The Amazon Resource Name (ARN) of the account that is designated as the master account for the organization.
For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide.
" } }, @@ -102,16 +112,21 @@ "Account$Id": "The unique identifier (ID) of the account.
The regex pattern for an account ID string requires exactly 12 digits.
", "CreateAccountStatus$AccountId": "If the account was created successfully, the unique identifier (ID) of the new account.
The regex pattern for an account ID string requires exactly 12 digits.
", "CreateAccountStatus$GovCloudAccountId": "If the account was created successfully, the unique identifier (ID) of the new account in the AWS GovCloud (US) Region.
", + "DelegatedAdministrator$Id": "The unique identifier (ID) of the delegated administrator's account.
", + "DeregisterDelegatedAdministratorRequest$AccountId": "The account ID number of the member account in the organization that you want to deregister as a delegated administrator.
", "DescribeAccountRequest$AccountId": "The unique identifier (ID) of the AWS account that you want information about. You can get the ID from the ListAccounts or ListAccountsForParent operations.
The regex pattern for an account ID string requires exactly 12 digits.
", + "ListDelegatedServicesForAccountRequest$AccountId": "The account ID number of a delegated administrator account in the organization.
", "MoveAccountRequest$AccountId": "The unique identifier (ID) of the account that you want to move.
The regex pattern for an account ID string requires exactly 12 digits.
", "Organization$MasterAccountId": "The unique identifier (ID) of the master account of an organization.
The regex pattern for an account ID string requires exactly 12 digits.
", + "RegisterDelegatedAdministratorRequest$AccountId": "The account ID number of the member account in the organization to register as a delegated administrator.
", "RemoveAccountFromOrganizationRequest$AccountId": "The unique identifier (ID) of the member account that you want to remove from the organization.
The regex pattern for an account ID string requires exactly 12 digits.
" } }, "AccountJoinedMethod": { "base": null, "refs": { - "Account$JoinedMethod": "The method by which the account joined the organization.
" + "Account$JoinedMethod": "The method by which the account joined the organization.
", + "DelegatedAdministrator$JoinedMethod": "The method by which the delegated administrator's account joined the organization.
" } }, "AccountName": { @@ -120,11 +135,17 @@ "Account$Name": "The friendly name of the account.
The regex pattern that is used to validate this parameter is a string of any of the characters in the ASCII character range.
", "CreateAccountRequest$AccountName": "The friendly name of the member account.
", "CreateAccountStatus$AccountName": "The account name given to the account when it was created.
", - "CreateGovCloudAccountRequest$AccountName": "The friendly name of the member account.
" + "CreateGovCloudAccountRequest$AccountName": "The friendly name of the member account.
", + "DelegatedAdministrator$Name": "The friendly name of the delegated administrator's account.
" } }, "AccountNotFoundException": { - "base": " We can't find an AWS account with the AccountId
that you specified. Or the account whose credentials you used to make this request isn't a member of an organization.
We can't find an AWS account with the AccountId that you specified, or the account whose credentials you used to make this request isn't a member of an organization.
The specified account is not a delegated administrator for this AWS service.
", "refs": { } }, @@ -136,7 +157,8 @@ "AccountStatus": { "base": null, "refs": { - "Account$Status": "The status of the account in the organization.
" + "Account$Status": "The status of the account in the organization.
", + "DelegatedAdministrator$Status": "The status of the delegated administrator's account in the organization.
" } }, "Accounts": { @@ -166,7 +188,7 @@ "AwsManagedPolicy": { "base": null, "refs": { - "PolicySummary$AwsManaged": "A Boolean value that indicates whether the specified policy is an AWS managed policy. If true, then you can attach the policy to roots, OUs, or accounts, but you cannot edit it.
" + "PolicySummary$AwsManaged": "A boolean value that indicates whether the specified policy is an AWS managed policy. If true, then you can attach the policy to roots, OUs, or accounts, but you cannot edit it.
" } }, "CancelHandshakeRequest": { @@ -216,7 +238,7 @@ } }, "ConstraintViolationException": { - "base": "Performing this operation violates a minimum or maximum value limit. Examples include attempting to remove the last service control policy (SCP) from an OU or root, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit.
Some of the reasons in the following list might not be applicable to this specific API or operation:
ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first agree to the AWS Customer Agreement. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first complete phone verification. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CREATION_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of accounts that you can create in one day.
ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number of accounts in an organization. If you need more accounts, contact AWS Support to request an increase in your limit.
Or the number of invitations that you tried to send would cause you to exceed the limit of accounts in your organization. Send fewer invitations or contact AWS Support to request an increase in the number of accounts.
Deleted and closed accounts still count toward your limit.
If you receive this exception when running a command immediately after creating the organization, wait one hour and try again. If after an hour it continues to fail with this error, contact AWS Support.
HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of handshakes that you can send in one day.
MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE: To create an account in this organization, you first must migrate the organization's master account to the marketplace that corresponds to the master account's address. For example, accounts with India addresses must be associated with the AISPL marketplace. All accounts in an organization must be associated with the same marketplace.
MASTER_ACCOUNT_MISSING_CONTACT_INFO: To complete this operation, you must first provide a valid contact address and phone number for the master account. Then try the operation again.
MASTER_ACCOUNT_NOT_GOVCLOUD_ENABLED: To complete this operation, the master account must have an associated account in the AWS GovCloud (US-West) Region. For more information, see AWS Organizations in the AWS GovCloud User Guide.
MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization with this master account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MAX_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to exceed the number of policies of a certain type that can be attached to an entity at one time.
MAX_TAG_LIMIT_EXCEEDED: You have exceeded the number of tags allowed on this resource.
MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation with this member account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MIN_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to detach a policy from an entity, which would cause the entity to have fewer than the minimum number of policies of the required type.
OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is too many levels deep.
ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation that requires the organization to be configured to support all features. An organization that supports only consolidated billing features can't perform this operation.
OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs that you can have in an organization.
POLICY_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of policies that you can have in an organization.
TAG_POLICY_VIOLATION: Tags associated with the resource must be compliant with the tag policy that’s in effect for the account. For more information, see Tag Policies in the AWS Organizations User Guide.
Performing this operation violates a minimum or maximum value limit. For example, attempting to remove the last service control policy (SCP) from an OU or root, inviting or creating too many accounts to the organization, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit.
Some of the reasons in the following list might not be applicable to this specific API or operation:
ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first agree to the AWS Customer Agreement. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first complete phone verification. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CREATION_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of accounts that you can create in one day.
ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number of accounts in an organization. If you need more accounts, contact AWS Support to request an increase in your limit.
Or the number of invitations that you tried to send would cause you to exceed the limit of accounts in your organization. Send fewer invitations or contact AWS Support to request an increase in the number of accounts.
Deleted and closed accounts still count toward your limit.
If you receive this exception when running a command immediately after creating the organization, wait one hour and try again. If after an hour it continues to fail with this error, contact AWS Support.
CANNOT_REGISTER_MASTER_AS_DELEGATED_ADMINISTRATOR: You can designate only a member account as a delegated administrator.
CANNOT_REMOVE_DELEGATED_ADMINISTRATOR_FROM_ORG: To complete this operation, you must first deregister this account as a delegated administrator.
DELEGATED_ADMINISTRATOR_EXISTS_FOR_THIS_SERVICE: To complete this operation, you must first deregister all delegated administrators for this service.
HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of handshakes that you can send in one day.
MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE: To create an account in this organization, you first must migrate the organization's master account to the marketplace that corresponds to the master account's address. For example, accounts with India addresses must be associated with the AISPL marketplace. All accounts in an organization must be associated with the same marketplace.
MASTER_ACCOUNT_MISSING_CONTACT_INFO: To complete this operation, you must first provide a valid contact address and phone number for the master account. Then try the operation again.
MASTER_ACCOUNT_NOT_GOVCLOUD_ENABLED: To complete this operation, the master account must have an associated account in the AWS GovCloud (US-West) Region. For more information, see AWS Organizations in the AWS GovCloud User Guide.
MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization with this master account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MAX_DELEGATED_ADMINISTRATORS_FOR_SERVICE_LIMIT_EXCEEDED: You attempted to register more delegated administrators than allowed for the service principal.
MAX_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to exceed the number of policies of a certain type that can be attached to an entity at one time.
MAX_TAG_LIMIT_EXCEEDED: You have exceeded the number of tags allowed on this resource.
MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation with this member account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MIN_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to detach a policy from an entity that would cause the entity to have fewer than the minimum number of policies of a certain type required.
OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is too many levels deep.
ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation that requires the organization to be configured to support all features. An organization that supports only consolidated billing features can't perform this operation.
OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs that you can have in an organization.
POLICY_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of policies that you can have in an organization.
If the request failed, a description of the reason for the failure.
ACCOUNT_LIMIT_EXCEEDED: The account could not be created because you have reached the limit on the number of accounts in your organization.
EMAIL_ALREADY_EXISTS: The account could not be created because another AWS account with that email address already exists.
GOVCLOUD_ACCOUNT_ALREADY_EXISTS: The account in the AWS GovCloud (US) Region could not be created because this Region already includes an account with that email address.
INVALID_ADDRESS: The account could not be created because the address you provided is not valid.
INVALID_EMAIL: The account could not be created because the email address you provided is not valid.
INTERNAL_FAILURE: The account could not be created because of an internal failure. Try again later. If the problem persists, contact AWS Support.
If the request failed, a description of the reason for the failure.
ACCOUNT_LIMIT_EXCEEDED: The account could not be created because you have reached the limit on the number of accounts in your organization.
EMAIL_ALREADY_EXISTS: The account could not be created because another AWS account with that email address already exists.
GOVCLOUD_ACCOUNT_ALREADY_EXISTS: The account in the AWS GovCloud (US) Region could not be created because this Region already includes an account with that email address.
INVALID_ADDRESS: The account could not be created because the address you provided is not valid.
INVALID_EMAIL: The account could not be created because the email address you provided is not valid.
INTERNAL_FAILURE: The account could not be created because of an internal failure. Try again later. If the problem persists, contact Customer Support.
We can't find a create account request with the CreateAccountRequestId that you specified.
We can't find a create account request with the CreateAccountRequestId that you specified.
Contains information about the delegated administrator.
", + "refs": { + "DelegatedAdministrators$member": null + } + }, + "DelegatedAdministrators": { + "base": null, + "refs": { + "ListDelegatedAdministratorsResponse$DelegatedAdministrators": "The list of delegated administrators in your organization.
" + } + }, + "DelegatedService": { + "base": "Contains information about the AWS service for which the account is a delegated administrator.
", + "refs": { + "DelegatedServices$member": null + } + }, + "DelegatedServices": { + "base": null, + "refs": { + "ListDelegatedServicesForAccountResponse$DelegatedServices": "The services for which the account is a delegated administrator.
" + } + }, "DeleteOrganizationalUnitRequest": { "base": null, "refs": { @@ -342,6 +388,11 @@ "refs": { } }, + "DeregisterDelegatedAdministratorRequest": { + "base": null, + "refs": { + } + }, "DescribeAccountRequest": { "base": null, "refs": { @@ -480,7 +531,8 @@ "refs": { "Account$Email": "The email address associated with the AWS account.
The regex pattern for this parameter is a string of characters that represents a standard internet email address.
", "CreateAccountRequest$Email": "The email address of the owner to assign to the new member account. This email address must not already be associated with another AWS account. You must use a valid email address to complete account creation. You can't access the root user of the account or remove an account that was created with an invalid email address.
", - "CreateGovCloudAccountRequest$Email": "The email address of the owner to assign to the new member account in the commercial Region. This email address must not already be associated with another AWS account. You must use a valid email address to complete account creation. You can't access the root user of the account or remove an account that was created with an invalid email address. Like all request parameters for CreateGovCloudAccount
, the request for the email address for the AWS GovCloud (US) account originates from the commercial Region. It does not come from the AWS GovCloud (US) Region.
The email address of the owner to assign to the new member account in the commercial Region. This email address must not already be associated with another AWS account. You must use a valid email address to complete account creation. You can't access the root user of the account or remove an account that was created with an invalid email address. Like all request parameters for CreateGovCloudAccount, the request for the email address for the AWS GovCloud (US) account originates from the commercial Region, not from the AWS GovCloud (US) Region.
The email address that is associated with the delegated administrator's AWS account.
", "Organization$MasterAccountEmail": "The email address that is associated with the AWS account that is designated as the master account for the organization.
" } }, @@ -510,7 +562,7 @@ } }, "EnabledServicePrincipal": { - "base": "A structure that contains details of a service principal that is enabled to integrate with AWS Organizations.
", + "base": "A structure that contains details of a service principal that represents an AWS service that is enabled to integrate with AWS Organizations.
", "refs": { "EnabledServicePrincipals$member": null } @@ -527,7 +579,9 @@ "AWSOrganizationsNotInUseException$Message": null, "AccessDeniedException$Message": null, "AccessDeniedForDependencyException$Message": null, + "AccountAlreadyRegisteredException$Message": null, "AccountNotFoundException$Message": null, + "AccountNotRegisteredException$Message": null, "AccountOwnerNotVerifiedException$Message": null, "AlreadyInOrganizationException$Message": null, "ChildNotFoundException$Message": null, @@ -586,7 +640,7 @@ } }, "Handshake": { - "base": "Contains information that must be exchanged to securely establish a relationship between two accounts (an originator and a recipient). For example, assume that a master account (the originator) invites another account (the recipient) to join its organization. In that case, the two accounts exchange information as a series of handshake requests and responses.
Note: Handshakes that are CANCELED, ACCEPTED, or DECLINED show up in lists for only 30 days after entering that state. After that, they are deleted.
", + "base": "Contains information that must be exchanged to securely establish a relationship between two accounts (an originator and a recipient). For example, when a master account (the originator) invites another account (the recipient) to join its organization, the two accounts exchange information as a series of handshake requests and responses.
Note: Handshakes that are CANCELED, ACCEPTED, or DECLINED show up in lists for only 30 days after entering that state. After that, they are deleted.
", "refs": { "AcceptHandshakeResponse$Handshake": "A structure that contains details about the accepted handshake.
", "CancelHandshakeResponse$Handshake": "A structure that contains details about the handshake that you canceled.
", @@ -622,8 +676,8 @@ "HandshakeFilter": { "base": "Specifies the criteria that are used to select the handshakes for the operation.
", "refs": { - "ListHandshakesForAccountRequest$Filter": "Filters the handshakes that you want included in the response. The default is all types. Use the ActionType
element to limit the output to only a specified type, such as INVITE, ENABLE_ALL_FEATURES, or APPROVE_ALL_FEATURES. Alternatively, you can specify the ENABLE_ALL_FEATURES handshake, which generates a separate child handshake for each member account. When you do, specify ParentHandshakeId to see only the handshakes that were generated by that parent request.
A filter of the handshakes that you want included in the response. The default is all types. Use the ActionType element to limit the output to only a specified type, such as INVITE, ENABLE-ALL-FEATURES, or APPROVE-ALL-FEATURES. Alternatively, you can specify the ENABLE-ALL-FEATURES handshake, which generates a separate child handshake for each member account. When you do, specify the ParentHandshakeId to see only the handshakes that were generated by that parent request.
Filters the handshakes that you want included in the response. The default is all types. Use the ActionType element to limit the output to only a specified type, such as INVITE, ENABLE_ALL_FEATURES, or APPROVE_ALL_FEATURES. Alternatively, for the ENABLE_ALL_FEATURES handshake that generates a separate child handshake for each member account, you can specify ParentHandshakeId to see only the handshakes that were generated by that parent request.
A filter of the handshakes that you want included in the response. The default is all types. Use the ActionType element to limit the output to only a specified type, such as INVITE, ENABLE-ALL-FEATURES, or APPROVE-ALL-FEATURES. Alternatively, for the ENABLE-ALL-FEATURES handshake that generates a separate child handshake for each member account, you can specify the ParentHandshakeId to see only the handshakes that were generated by that parent request.
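To make the filter usage concrete, a short sketch that limits ListHandshakesForAccount to invitation handshakes (v0.21.0 request/Send pattern; the `ActionTypeInvite` constant and `Handshake` field names are assumed from the generated package):

```go
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

// listInvites prints only the INVITE handshakes for the calling account.
func listInvites(ctx context.Context, svc *organizations.Client) error {
	req := svc.ListHandshakesForAccountRequest(&organizations.ListHandshakesForAccountInput{
		Filter: &organizations.HandshakeFilter{
			ActionType: organizations.ActionTypeInvite, // assumed enum constant
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, h := range resp.Handshakes {
		fmt.Println(*h.Id, h.State)
	}
	return nil
}
```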
If set to ALLOW, the new account enables IAM users to access account billing information if they have the required permissions. If set to DENY, only the root user of the new account can access account billing information. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
If you don't specify this parameter, the value defaults to ALLOW. This value allows IAM users and roles with the required permissions to access billing information for the new account.
If set to ALLOW, the new account enables IAM users to access account billing information if they have the required permissions. If set to DENY, only the root user of the new account can access account billing information. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
If you don't specify this parameter, the value defaults to ALLOW, and IAM users and roles with the required permissions can access billing information for the new account.
If set to ALLOW, the new linked account in the commercial Region enables IAM users to access account billing information if they have the required permissions. If set to DENY, only the root user of the new account can access account billing information. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
If you don't specify this parameter, the value defaults to ALLOW, and IAM users and roles with the required permissions can access billing information for the new account.
The requested operation failed because you provided invalid values for one or more of the request parameters. This exception includes a reason that contains additional information about the violated limit:
Some of the reasons in the following list might not be applicable to this specific API or operation:
IMMUTABLE_POLICY: You specified a policy that is managed by AWS and can't be modified.
INPUT_REQUIRED: You must include a value for all required parameters.
INVALID_ENUM: You specified an invalid value.
INVALID_ENUM_POLICY_TYPE: You specified an invalid policy type.
INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid characters.
INVALID_LIST_MEMBER: You provided a list to a parameter that contains at least one invalid value.
INVALID_PAGINATION_TOKEN: Get the value for the NextToken parameter from the response to a previous call of the operation.
INVALID_PARTY_TYPE_TARGET: You specified the wrong type of entity (account, organization, or email) as a party.
INVALID_PATTERN: You provided a value that doesn't match the required pattern.
INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't match the required pattern.
INVALID_ROLE_NAME: You provided a role name that isn't valid. A role name can't begin with the reserved prefix AWSServiceRoleFor.
INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource Name (ARN) for the organization.
INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID.
INVALID_SYSTEM_TAGS_PARAMETER: You specified a tag key that is a system tag. You can’t add, edit, or delete system tag keys because they're reserved for AWS use. System tags don’t count against your tags per resource limit.
MAX_FILTER_LIMIT_EXCEEDED: You can specify only one filter parameter for the operation.
MAX_LENGTH_EXCEEDED: You provided a string parameter that is longer than allowed.
MAX_VALUE_EXCEEDED: You provided a numeric parameter that has a larger value than allowed.
MIN_LENGTH_EXCEEDED: You provided a string parameter that is shorter than allowed.
MIN_VALUE_EXCEEDED: You provided a numeric parameter that has a smaller value than allowed.
MOVING_ACCOUNT_BETWEEN_DIFFERENT_ROOTS: You can move an account only between entities in the same root.
The requested operation failed because you provided invalid values for one or more of the request parameters. This exception includes a reason that contains additional information about the violated limit:
Some of the reasons in the following list might not be applicable to this specific API or operation:
IMMUTABLE_POLICY: You specified a policy that is managed by AWS and can't be modified.
INPUT_REQUIRED: You must include a value for all required parameters.
INVALID_ENUM: You specified an invalid value.
INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid characters.
INVALID_LIST_MEMBER: You provided a list to a parameter that contains at least one invalid value.
INVALID_PAGINATION_TOKEN: Get the value for the NextToken parameter from the response to a previous call of the operation.
INVALID_PARTY_TYPE_TARGET: You specified the wrong type of entity (account, organization, or email) as a party.
INVALID_PATTERN: You provided a value that doesn't match the required pattern.
INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't match the required pattern.
INVALID_ROLE_NAME: You provided a role name that isn't valid. A role name can't begin with the reserved prefix AWSServiceRoleFor.
INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource Name (ARN) for the organization.
INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID.
INVALID_SYSTEM_TAGS_PARAMETER: You specified a tag key that is a system tag. You can’t add, edit, or delete system tag keys because they're reserved for AWS use. System tags don’t count against your tags per resource limit.
MAX_FILTER_LIMIT_EXCEEDED: You can specify only one filter parameter for the operation.
MAX_LENGTH_EXCEEDED: You provided a string parameter that is longer than allowed.
MAX_VALUE_EXCEEDED: You provided a numeric parameter that has a larger value than allowed.
MIN_LENGTH_EXCEEDED: You provided a string parameter that is shorter than allowed.
MIN_VALUE_EXCEEDED: You provided a numeric parameter that has a smaller value than allowed.
MOVING_ACCOUNT_BETWEEN_DIFFERENT_ROOTS: You can move an account only between entities in the same root.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
(Optional) Use this to limit the number of results you want included per page in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
The total number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken
response element is present and has a value (is not null). Include that value as the NextToken
request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken
after every operation to ensure that you receive all of the results.
Use this parameter if you receive a NextToken
response in a previous request that indicates that there is more output available. Set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
If present, this value indicates that there is more output available than is included in the current response. Use this value in the NextToken
request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken
response element comes back as null
.
The parameter for receiving additional results if you receive a NextToken
response in a previous request. A NextToken
response indicates that more output is available. Set this parameter to the value of the previous call's NextToken
response to indicate where the output should continue from.
If present, indicates that more output is available than is included in the current response. Use this value in the NextToken
request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken
response element comes back as null
.
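The MaxResults and NextToken descriptions above repeat for each paginated Organizations list operation. The following is a minimal sketch of driving that loop with the v2 SDK's request/Send pattern, using ListAccounts as the example; the client construction and field names shown here follow the generated organizations package at this release and are illustrative, not prescriptive.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := organizations.New(cfg)

	var nextToken *string
	for {
		req := svc.ListAccountsRequest(&organizations.ListAccountsInput{
			MaxResults: aws.Int64(10), // page size; the service may return fewer items per page
			NextToken:  nextToken,     // nil on the first call
		})
		resp, err := req.Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, acct := range resp.Accounts {
			if acct.Id != nil && acct.Name != nil {
				fmt.Println(*acct.Id, *acct.Name)
			}
		}
		// A non-nil NextToken means more results remain, even when this page
		// held fewer than MaxResults items; keep calling until it is nil.
		if resp.NextToken == nil {
			break
		}
		nextToken = resp.NextToken
	}
}
```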
Contains details about an organization. An organization is a collection of accounts that are centrally managed together using consolidated billing, organized hierarchically with organizational units (OUs), and controlled with policies.
", + "base": "Contains details about an organization. An organization is a collection of accounts that are centrally managed together using consolidated billing, organized hierarchically with organizational units (OUs), and controlled with policies .
", "refs": { "CreateOrganizationResponse$Organization": "A structure that contains details about the newly created organization.
", "DescribeOrganizationResponse$Organization": "A structure that contains information about the organization.
" @@ -966,7 +1046,7 @@ "OrganizationFeatureSet": { "base": null, "refs": { - "CreateOrganizationRequest$FeatureSet": "Specifies the feature set supported by the new organization. Each feature set supports different levels of functionality.
CONSOLIDATED_BILLING
: All member accounts have their bills consolidated to and paid by the master account. For more information, see Consolidated billing in the AWS Organizations User Guide.
The consolidated billing feature subset isn't available for organizations in the AWS GovCloud (US) Region.
ALL
: In addition to all the features that consolidated billing feature set supports, the master account can also apply any policy type to any member account in the organization. For more information, see All features in the AWS Organizations User Guide.
Specifies the feature set supported by the new organization. Each feature set supports different levels of functionality.
CONSOLIDATED_BILLING
: All member accounts have their bills consolidated to and paid by the master account. For more information, see Consolidated billing in the AWS Organizations User Guide.
The consolidated billing feature subset isn't available for organizations in the AWS GovCloud (US) Region.
ALL
: In addition to all the features supported by the consolidated billing feature set, the master account can also apply any policy type to any member account in the organization. For more information, see All features in the AWS Organizations User Guide.
Specifies the functionality that currently is available to the organization. If set to \"ALL\", then all features are enabled and policies can be applied to accounts in the organization. If set to \"CONSOLIDATED_BILLING\", then only consolidated billing functionality is available. For more information, see Enabling All Features in Your Organization in the AWS Organizations User Guide.
" } }, @@ -1093,7 +1173,7 @@ "PolicyContent": { "base": null, "refs": { - "CreatePolicyRequest$Content": "The policy content to add to the new policy. For example, you could create a service control policy (SCP) that specifies the permissions that administrators in attached accounts can delegate to their users, groups, and roles. The string for this SCP must be JSON text. For more information about the SCP syntax, see Service Control Policy Syntax in the AWS Organizations User Guide.
", + "CreatePolicyRequest$Content": "The policy content to add to the new policy. For example, if you create a service control policy (SCP), this string must be JSON text that specifies the permissions that admins in attached accounts can delegate to their users, groups, and roles. For more information about the SCP syntax, see Service Control Policy Syntax in the AWS Organizations User Guide.
", "EffectivePolicy$PolicyContent": "The text content of the policy.
", "Policy$Content": "The text content of the policy.
", "UpdatePolicyRequest$Content": "If provided, the new content for the policy. The text must be correctly formatted JSON that complies with the syntax for the policy's type. For more information, see Service Control Policy Syntax in the AWS Organizations User Guide.
" @@ -1175,7 +1255,7 @@ "PolicyType": { "base": null, "refs": { - "CreatePolicyRequest$Type": "The type of policy to create.
", + "CreatePolicyRequest$Type": "The type of policy to create.
In the current release, the only type of policy that you can create is a service control policy (SCP).
The policy type that you want to disable in this root.
", "EnablePolicyTypeRequest$PolicyType": "The policy type that you want to enable.
", "ListPoliciesForTargetRequest$Filter": "The type of policy that you want to include in the returned list.
", @@ -1202,7 +1282,7 @@ "PolicyTypeStatus": { "base": null, "refs": { - "PolicyTypeSummary$Status": "The status of the policy type as it relates to the associated root. You can attach a policy of the specified type to a root or to an OU or account in that root. To do so, the policy must be available in the organization and enabled for that root.
" + "PolicyTypeSummary$Status": "The status of the policy type as it relates to the associated root. To attach a policy of the specified type to a root or to an OU or account in that root, it must be available in the organization and enabled for that root.
" } }, "PolicyTypeSummary": { @@ -1218,6 +1298,11 @@ "Root$PolicyTypes": "The types of policies that are currently enabled for the root and therefore can be attached to the root or to its OUs or accounts.
Even if a policy type is shown as available in the organization, you can separately enable and disable them at the root level by using EnablePolicyType and DisablePolicyType. Use DescribeOrganization to see the availability of the policy types in that organization.
(Optional)
The name of an IAM role that AWS Organizations automatically preconfigures in the new member account. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account.
If you don't specify this parameter, the role name defaults to OrganizationAccountAccessRole
.
For more information about how to use this role to access the member account, see Accessing and Administering the Member Accounts in Your Organization in the AWS Organizations User Guide. Also see steps 2 and 3 in Tutorial: Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide.
The regex pattern that is used to validate this parameter. The pattern can include uppercase letters, lowercase letters, digits with no spaces, and any of the following characters: =,.@-
", - "CreateGovCloudAccountRequest$RoleName": "(Optional)
The name of an IAM role that AWS Organizations automatically preconfigures in the new member accounts in both the AWS GovCloud (US) Region and in the commercial Region. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account.
If you don't specify this parameter, the role name defaults to OrganizationAccountAccessRole
.
For more information about how to use this role to access the member account, see Accessing and Administering the Member Accounts in Your Organization in the AWS Organizations User Guide. See also steps 2 and 3 in Tutorial: Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide.
The regex pattern that is used to validate this parameter. The pattern can include uppercase letters, lowercase letters, digits with no spaces, and any of the following characters: =,.@-
" + "CreateAccountRequest$RoleName": "(Optional)
The name of an IAM role that AWS Organizations automatically preconfigures in the new member account. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account.
If you don't specify this parameter, the role name defaults to OrganizationAccountAccessRole
.
For more information about how to use this role to access the member account, see the following links:
Accessing and Administering the Member Accounts in Your Organization in the AWS Organizations User Guide
Steps 2 and 3 in Tutorial: Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide
The regex pattern that is used to validate this parameter. The pattern can include uppercase letters, lowercase letters, digits with no spaces, and any of the following characters: =,.@-
", + "CreateGovCloudAccountRequest$RoleName": "(Optional)
The name of an IAM role that AWS Organizations automatically preconfigures in the new member accounts in both the AWS GovCloud (US) Region and in the commercial Region. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account.
If you don't specify this parameter, the role name defaults to OrganizationAccountAccessRole
.
For more information about how to use this role to access the member account, see Accessing and Administering the Member Accounts in Your Organization in the AWS Organizations User Guide and steps 2 and 3 in Tutorial: Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide.
The regex pattern that is used to validate this parameter. The pattern can include uppercase letters, lowercase letters, digits with no spaces, and any of the following characters: =,.@-
" } }, "Root": { @@ -1277,9 +1362,13 @@ "ServicePrincipal": { "base": null, "refs": { + "DelegatedService$ServicePrincipal": "The name of a service that can request an operation for the specified service. This is typically in the form of a URL, such as: servicename.amazonaws.com
.
The service principal name of an AWS service for which the account is a delegated administrator.
Delegated administrator privileges are revoked for only the specified AWS service from the member account. If the specified service is the only service for which the member account is a delegated administrator, the operation also revokes Organizations read action permissions.
", "DisableAWSServiceAccessRequest$ServicePrincipal": "The service principal name of the AWS service for which you want to disable integration with your organization. This is typically in the form of a URL, such as service-abbreviation.amazonaws.com
.
The service principal name of the AWS service for which you want to enable integration with your organization. This is typically in the form of a URL, such as service-abbreviation.amazonaws.com
.
The name of the service principal. This is typically in the form of a URL, such as: servicename.amazonaws.com
.
The name of the service principal. This is typically in the form of a URL, such as: servicename.amazonaws.com
.
Specifies a service principal name. If specified, then the operation lists the delegated administrators only for the specified service.
If you don't specify a service principal, the operation lists all delegated administrators for all services in your organization.
", + "RegisterDelegatedAdministratorRequest$ServicePrincipal": "The service principal of the AWS service for which you want to make the member account a delegated administrator.
" } }, "SourceParentNotFoundException": { @@ -1355,6 +1444,9 @@ "Account$JoinedTimestamp": "The date the account became a part of the organization.
", "CreateAccountStatus$RequestedTimestamp": "The date and time that the request was made for the account creation.
", "CreateAccountStatus$CompletedTimestamp": "The date and time that the account was created and the request completed.
", + "DelegatedAdministrator$JoinedTimestamp": "The date when the delegated administrator's account became a part of the organization.
", + "DelegatedAdministrator$DelegationEnabledDate": "The date when the account was made a delegated administrator.
", + "DelegatedService$DelegationEnabledDate": "The date that the account became a delegated administrator for this service.
", "EffectivePolicy$LastUpdatedTimestamp": "The time of the last update to this policy.
", "EnabledServicePrincipal$DateEnabled": "The date that the service principal was enabled for integration with AWS Organizations.
", "Handshake$RequestedTimestamp": "The date and time that the handshake request was made.
", diff --git a/models/apis/organizations/2016-11-28/paginators-1.json b/models/apis/organizations/2016-11-28/paginators-1.json index af4ef881eb7..95c13385d05 100644 --- a/models/apis/organizations/2016-11-28/paginators-1.json +++ b/models/apis/organizations/2016-11-28/paginators-1.json @@ -25,6 +25,18 @@ "limit_key": "MaxResults", "output_token": "NextToken" }, + "ListDelegatedAdministrators": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "DelegatedAdministrators" + }, + "ListDelegatedServicesForAccount": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "DelegatedServices" + }, "ListHandshakesForAccount": { "input_token": "NextToken", "limit_key": "MaxResults", diff --git a/models/apis/outposts/2019-12-03/docs-2.json b/models/apis/outposts/2019-12-03/docs-2.json index 67cfbc7cd71..210193633af 100644 --- a/models/apis/outposts/2019-12-03/docs-2.json +++ b/models/apis/outposts/2019-12-03/docs-2.json @@ -23,14 +23,14 @@ } }, "AvailabilityZone": { - "base": "The Availability Zone.
", + "base": "The Availability Zone.
You must specify AvailabilityZone
or AvailabilityZoneId
.
The ID of the Availability Zone.
", + "base": "The ID of the Availability Zone.
You must specify AvailabilityZone
or AvailabilityZoneId
.
The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type. For more information, see Contextual Metadata.
", - "GetRecommendationsRequest$context": "The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type. For more information, see Contextual Metadata.
" + "GetPersonalizedRankingRequest$context": "The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type.
", + "GetRecommendationsRequest$context": "The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type.
" } }, "ErrorMessage": { @@ -102,6 +102,12 @@ "refs": { } }, + "Score": { + "base": null, + "refs": { + "PredictedItem$score": "A numeric representation of the model's certainty in the item's suitability. For more information on scoring logic, see how-scores-work.
" + } + }, "UserID": { "base": null, "refs": { diff --git a/models/apis/personalize/2018-05-22/api-2.json b/models/apis/personalize/2018-05-22/api-2.json index 59a15d623a0..c11381b50cb 100644 --- a/models/apis/personalize/2018-05-22/api-2.json +++ b/models/apis/personalize/2018-05-22/api-2.json @@ -668,7 +668,8 @@ "status":{"shape":"Status"}, "creationDateTime":{"shape":"Date"}, "lastUpdatedDateTime":{"shape":"Date"}, - "failureReason":{"shape":"FailureReason"} + "failureReason":{"shape":"FailureReason"}, + "solutionVersionArn":{"shape":"Arn"} } }, "BatchInferenceJobs":{ @@ -1790,6 +1791,7 @@ "solutionConfig":{"shape":"SolutionConfig"}, "trainingHours":{"shape":"TrainingHours"}, "trainingMode":{"shape":"TrainingMode"}, + "tunedHPOParams":{"shape":"TunedHPOParams"}, "status":{"shape":"Status"}, "failureReason":{"shape":"FailureReason"}, "creationDateTime":{"shape":"Date"}, @@ -1844,6 +1846,12 @@ "min":1 }, "Tunable":{"type":"boolean"}, + "TunedHPOParams":{ + "type":"structure", + "members":{ + "algorithmHyperParameters":{"shape":"HyperParameters"} + } + }, "UpdateCampaignRequest":{ "type":"structure", "required":["campaignArn"], diff --git a/models/apis/personalize/2018-05-22/docs-2.json b/models/apis/personalize/2018-05-22/docs-2.json index 2a36836043a..2acbb83e1d4 100644 --- a/models/apis/personalize/2018-05-22/docs-2.json +++ b/models/apis/personalize/2018-05-22/docs-2.json @@ -71,6 +71,7 @@ "BatchInferenceJob$batchInferenceJobArn": "The Amazon Resource Name (ARN) of the batch inference job.
", "BatchInferenceJob$solutionVersionArn": "The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created.
", "BatchInferenceJobSummary$batchInferenceJobArn": "The Amazon Resource Name (ARN) of the batch inference job.
", + "BatchInferenceJobSummary$solutionVersionArn": "The ARN of the solution version used by the batch inference job.
", "Campaign$campaignArn": "The Amazon Resource Name (ARN) of the campaign.
", "Campaign$solutionVersionArn": "The Amazon Resource Name (ARN) of a specific version of the solution.
", "CampaignSummary$campaignArn": "The Amazon Resource Name (ARN) of the campaign.
", @@ -820,7 +821,7 @@ "HPOObjectiveType": { "base": null, "refs": { - "HPOObjective$type": "The data type of the metric.
" + "HPOObjective$type": "The type of the metric. Valid values are Maximize
and Minimize
.
Specifies the default hyperparameters.
", - "SolutionConfig$algorithmHyperParameters": "Lists the hyperparameter names and ranges.
" + "SolutionConfig$algorithmHyperParameters": "Lists the hyperparameter names and ranges.
", + "TunedHPOParams$algorithmHyperParameters": "A list of the hyperparameter values of the best performing model.
" } }, "IntegerHyperParameterRange": { @@ -1338,6 +1340,12 @@ "DefaultIntegerHyperParameterRange$isTunable": "Indicates whether the hyperparameter is tunable.
" } }, + "TunedHPOParams": { + "base": "If hyperparameter optimization (HPO) was performed, contains the hyperparameter values of the best performing model.
", + "refs": { + "SolutionVersion$tunedHPOParams": "If hyperparameter optimization was performed, contains the hyperparameter values of the best performing model.
" + } + }, "UpdateCampaignRequest": { "base": null, "refs": { diff --git a/models/apis/pinpoint/2016-12-01/api-2.json b/models/apis/pinpoint/2016-12-01/api-2.json index f8733076907..9801ce28d3e 100644 --- a/models/apis/pinpoint/2016-12-01/api-2.json +++ b/models/apis/pinpoint/2016-12-01/api-2.json @@ -10032,6 +10032,9 @@ "Keyword": { "shape": "__string" }, + "MediaUrl": { + "shape": "__string" + }, "MessageType": { "shape": "MessageType" }, diff --git a/models/apis/pinpoint/2016-12-01/docs-2.json b/models/apis/pinpoint/2016-12-01/docs-2.json index 7cfae73273d..6dbd297d602 100644 --- a/models/apis/pinpoint/2016-12-01/docs-2.json +++ b/models/apis/pinpoint/2016-12-01/docs-2.json @@ -2132,6 +2132,7 @@ "SMSChannelResponse$ShortCode" : "The registered short code to use when you send messages through the SMS channel.
", "SMSMessage$Body" : "The body of the SMS message.
", "SMSMessage$Keyword" : "The SMS program name that you provided to AWS Support when you requested your dedicated number.
", + "SMSMessage$MediaUrl" : "The URL of an image or video to display in the SMS message.
", "SMSMessage$OriginationNumber" : "The number to send the SMS message from. This value should be one of the dedicated long or short codes that's assigned to your AWS account. If you don't specify a long or short code, Amazon Pinpoint assigns a random long code to the SMS message and sends the message from that code.
", "SMSMessage$SenderId" : "The sender ID to display as the sender of the message on a recipient's device. Support for sender IDs varies by country or region.
", "SMSTemplateRequest$Body" : "The message body to use in text messages that are based on the message template.
", diff --git a/models/apis/rds-data/2018-08-01/docs-2.json b/models/apis/rds-data/2018-08-01/docs-2.json index d10cd2f999c..8dd51b6fe5e 100644 --- a/models/apis/rds-data/2018-08-01/docs-2.json +++ b/models/apis/rds-data/2018-08-01/docs-2.json @@ -6,7 +6,7 @@ "BeginTransaction": "Starts a SQL transaction.
<important> <p>A transaction can run for a maximum of 24 hours. A transaction is terminated and rolled back automatically after 24 hours.</p> <p>A transaction times out if no calls use its transaction ID in three minutes. If a transaction times out before it's committed, it's rolled back automatically.</p> <p>DDL statements inside a transaction cause an implicit commit. We recommend that you run each DDL statement in a separate <code>ExecuteStatement</code> call with <code>continueAfterTimeout</code> enabled.</p> </important>
",
"CommitTransaction": "Ends a SQL transaction started with the BeginTransaction
operation and commits the changes.
Runs one or more SQL statements.
This operation is deprecated. Use the BatchExecuteStatement
or ExecuteStatement
operation.
Runs a SQL statement against a database.
If a call isn't part of a transaction because it doesn't include the transactionID
parameter, changes that result from the call are committed automatically.
The response size limit is 1 MB or 1,000 records. If the call returns more than 1 MB of response data or over 1,000 records, the call is terminated.
", + "ExecuteStatement": "Runs a SQL statement against a database.
If a call isn't part of a transaction because it doesn't include the transactionID
parameter, changes that result from the call are committed automatically.
The response size limit is 1 MB. If the call returns more than 1 MB of response data, the call is terminated.
", "RollbackTransaction": "Performs a rollback of a transaction. Rolling back a transaction cancels its changes.
" }, "shapes": { @@ -347,13 +347,13 @@ "SqlParameterSets": { "base": null, "refs": { - "BatchExecuteStatementRequest$parameterSets": "The parameter set for the batch operation.
The maximum number of parameters in a parameter set is 1,000.
" + "BatchExecuteStatementRequest$parameterSets": "The parameter set for the batch operation.
The SQL statement is executed as many times as the number of parameter sets provided. To execute a SQL statement with no parameters, use one of the following options:
Specify one or more empty parameter sets.
Use the ExecuteStatement
operation instead of the BatchExecuteStatement
operation.
Array parameters are not supported.
The parameters for the SQL statement.
", + "ExecuteStatementRequest$parameters": "The parameters for the SQL statement.
Array parameters are not supported.
Copies the specified DB snapshot. The source DB snapshot must be in the \"available\" state.
You can copy a snapshot from one AWS Region to another. In that case, the AWS Region where you call the CopyDBSnapshot
action is the destination AWS Region for the DB snapshot copy.
For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide.
", "CopyOptionGroup": "Copies the specified option group.
", "CreateCustomAvailabilityZone": "Creates a custom Availability Zone (AZ).
A custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
For more information about RDS on VMware, see the RDS on VMware User Guide.
", - "CreateDBCluster": "Creates a new Amazon Aurora DB cluster.
You can use the ReplicationSourceIdentifier
parameter to create the DB cluster as a Read Replica of another DB cluster or Amazon RDS MySQL DB instance. For cross-region replication where the DB cluster identified by ReplicationSourceIdentifier
is encrypted, you must also specify the PreSignedUrl
parameter.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new Amazon Aurora DB cluster.
You can use the ReplicationSourceIdentifier
parameter to create the DB cluster as a read replica of another DB cluster or Amazon RDS MySQL DB instance. For cross-region replication where the DB cluster identified by ReplicationSourceIdentifier
is encrypted, you must also specify the PreSignedUrl
parameter.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new custom endpoint and associates it with an Amazon Aurora DB cluster.
This action only applies to Aurora DB clusters.
Creates a new DB cluster parameter group.
Parameters in a DB cluster parameter group apply to all of the instances in a DB cluster.
A DB cluster parameter group is initially created with the default parameters for the database engine used by instances in the DB cluster. To provide custom values for any of the parameters, you must modify the group after creating it using ModifyDBClusterParameterGroup
. Once you've created a DB cluster parameter group, you need to associate it with your DB cluster using ModifyDBCluster
. When you associate a new DB cluster parameter group with a running DB cluster, you need to reboot the DB instances in the DB cluster without failover for the new DB cluster parameter group and associated settings to take effect.
After you create a DB cluster parameter group, you should wait at least 5 minutes before creating your first DB cluster that uses that DB cluster parameter group as the default parameter group. This allows Amazon RDS to fully complete the create action before the DB cluster parameter group is used as the default for a new DB cluster. This is especially important for parameters that are critical when creating the default database for a DB cluster, such as the character set for the default database defined by the character_set_database
parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBClusterParameters
action to verify that your DB cluster parameter group has been created or modified.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a snapshot of a DB cluster. For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new DB instance.
", - "CreateDBInstanceReadReplica": "Creates a new DB instance that acts as a Read Replica for an existing source DB instance. You can create a Read Replica for a DB instance running MySQL, MariaDB, Oracle, or PostgreSQL. For more information, see Working with Read Replicas in the Amazon RDS User Guide.
Amazon Aurora doesn't support this action. You must call the CreateDBInstance
action to create a DB instance for an Aurora DB cluster.
All Read Replica DB instances are created with backups disabled. All other DB instance attributes (including DB security groups and DB parameter groups) are inherited from the source DB instance, except as specified following.
Your source DB instance must have backup retention enabled.
Creates a new DB instance that acts as a read replica for an existing source DB instance. You can create a read replica for a DB instance running MySQL, MariaDB, Oracle, PostgreSQL, or SQL Server. For more information, see Working with Read Replicas in the Amazon RDS User Guide.
Amazon Aurora doesn't support this action. Call the CreateDBInstance
action to create a DB instance for an Aurora DB cluster.
All read replica DB instances are created with backups disabled. All other DB instance attributes (including DB security groups and DB parameter groups) are inherited from the source DB instance, except as specified.
Your source DB instance must have backup retention enabled.
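The sketch below shows one way to create such a read replica with this SDK, assuming the v0.x Request/Send pattern; the instance identifiers are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// createReadReplica creates a Multi-AZ read replica of an existing source DB
// instance. Both identifiers are placeholders, and the source instance must
// already have backup retention enabled.
func createReadReplica(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)
	req := client.CreateDBInstanceReadReplicaRequest(&rds.CreateDBInstanceReadReplicaInput{
		DBInstanceIdentifier:       aws.String("example-replica"),
		SourceDBInstanceIdentifier: aws.String("example-source-instance"),
		MultiAZ:                    aws.Bool(true),
		AutoMinorVersionUpgrade:    aws.Bool(true),
	})
	_, err := req.Send(ctx)
	return err
}
```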
Creates a new DB parameter group.
A DB parameter group is initially created with the default parameters for the database engine used by the DB instance. To provide custom values for any of the parameters, you must modify the group after creating it using ModifyDBParameterGroup. Once you've created a DB parameter group, you need to associate it with your DB instance using ModifyDBInstance. When you associate a new DB parameter group with a running DB instance, you need to reboot the DB instance without failover for the new DB parameter group and associated settings to take effect.
After you create a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group. This allows Amazon RDS to fully complete the create action before the parameter group is used as the default for a new DB instance. This is especially important for parameters that are critical when creating the default database for a DB instance, such as the character set for the default database defined by the character_set_database
parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBParameters command to verify that your DB parameter group has been created or modified.
This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Creates a new DB proxy.
", "CreateDBSecurityGroup": "Creates a new DB security group. DB security groups control access to a DB instance.
A DB security group controls access to EC2-Classic DB instances that are not in a VPC.
Deletes a custom endpoint and removes it from an Amazon Aurora DB cluster.
This action only applies to Aurora DB clusters.
Deletes a specified DB cluster parameter group. The DB cluster parameter group to be deleted can't be associated with any DB clusters.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Deletes a DB cluster snapshot. If the snapshot is being copied, the copy operation is terminated.
The DB cluster snapshot must be in the available
state to be deleted.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance
are not deleted.
If you request a final DB snapshot the status of the Amazon RDS DB instance is deleting
until the DB snapshot is created. The API action DescribeDBInstance
is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.
When a DB instance is in a failure state and has a status of failed
, incompatible-restore
, or incompatible-network
, you can only delete it when you skip creation of the final snapshot with the SkipFinalSnapshot
parameter.
If the specified DB instance is part of an Amazon Aurora DB cluster, you can't delete the DB instance if both of the following conditions are true:
The DB cluster is a Read Replica of another Amazon Aurora DB cluster.
The DB instance is the only instance in the DB cluster.
To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster
API action to promote the DB cluster so it's no longer a Read Replica. After the promotion completes, then call the DeleteDBInstance
API action to delete the final instance in the DB cluster.
The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance
are not deleted.
If you request a final DB snapshot, the status of the Amazon RDS DB instance is deleting
until the DB snapshot is created. The API action DescribeDBInstance
is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.
When a DB instance is in a failure state and has a status of failed
, incompatible-restore
, or incompatible-network
, you can only delete it when you skip creation of the final snapshot with the SkipFinalSnapshot
parameter.
If the specified DB instance is part of an Amazon Aurora DB cluster, you can't delete the DB instance if both of the following conditions are true:
The DB cluster is a read replica of another Amazon Aurora DB cluster.
The DB instance is the only instance in the DB cluster.
To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster
API action to promote the DB cluster so it's no longer a read replica. After the promotion completes, then call the DeleteDBInstance
API action to delete the final instance in the DB cluster.
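A minimal sketch of deleting an instance while keeping a final snapshot follows, assuming the v0.x Request/Send pattern; identifiers are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// deleteInstanceWithFinalSnapshot deletes a DB instance and keeps a final
// snapshot. For an instance in a failed state, set SkipFinalSnapshot to true
// and omit FinalDBSnapshotIdentifier instead. Identifiers are placeholders.
func deleteInstanceWithFinalSnapshot(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)
	req := client.DeleteDBInstanceRequest(&rds.DeleteDBInstanceInput{
		DBInstanceIdentifier:      aws.String("example-instance"),
		SkipFinalSnapshot:         aws.Bool(false),
		FinalDBSnapshotIdentifier: aws.String("example-instance-final"),
	})
	_, err := req.Send(ctx)
	return err
}
```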
Deletes automated backups based on the source instance's DbiResourceId
value or the restorable instance's resource ID.
Deletes a specified DB parameter group. The DB parameter group to be deleted can't be associated with any DB instances.
", "DeleteDBProxy": "This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Deletes an existing proxy.
", @@ -84,7 +84,7 @@ "DescribePendingMaintenanceActions": "Returns a list of resources (for example, DB instances) that have at least one pending maintenance action.
", "DescribeReservedDBInstances": "Returns information about reserved DB instances for this account, or about a specified reserved DB instance.
", "DescribeReservedDBInstancesOfferings": "Lists available reserved DB instance offerings.
", - "DescribeSourceRegions": "Returns a list of the source AWS Regions where the current AWS Region can create a Read Replica or copy a DB snapshot from. This API action supports pagination.
", + "DescribeSourceRegions": "Returns a list of the source AWS Regions where the current AWS Region can create a read replica or copy a DB snapshot from. This API action supports pagination.
", "DescribeValidDBInstanceModifications": "You can call DescribeValidDBInstanceModifications
to learn what modifications you can make to your DB instance. You can use this information when you call ModifyDBInstance
.
Downloads all or a portion of the specified log file, up to 1 MB in size.
", "FailoverDBCluster": "Forces a failover for a DB cluster.
A failover for a DB cluster promotes one of the Aurora Replicas (read-only instances) in the DB cluster to be the primary instance (the cluster writer).
Amazon Aurora will automatically fail over to an Aurora Replica, if one exists, when the primary instance fails. You can force a failover when you want to simulate a failure of a primary instance for testing. Because each instance in a DB cluster has its own endpoint address, you will need to clean up and re-establish any existing connections that use those endpoint addresses when the failover is complete.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
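For example, a forced failover for testing might look like the following sketch (v0.x Request/Send pattern assumed; identifiers invented).

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// failoverCluster forces a failover of an Aurora DB cluster, optionally naming
// the replica that should become the new primary. Identifiers are placeholders.
func failoverCluster(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)
	req := client.FailoverDBClusterRequest(&rds.FailoverDBClusterInput{
		DBClusterIdentifier:        aws.String("example-aurora-cluster"),
		TargetDBInstanceIdentifier: aws.String("example-aurora-replica-1"),
	})
	_, err := req.Send(ctx)
	return err
}
```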
Modifies the parameters of a DB parameter group. To modify more than one parameter, submit a list of the following: ParameterName
, ParameterValue
, and ApplyMethod
. A maximum of 20 parameters can be modified in a single request.
Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot without failover to the DB instance associated with the parameter group before the change can take effect.
After you modify a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group. This allows Amazon RDS to fully complete the modify action before the parameter group is used as the default for a new DB instance. This is especially important for parameters that are critical when creating the default database for a DB instance, such as the character set for the default database defined by the character_set_database
parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBParameters command to verify that your DB parameter group has been created or modified.
This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Changes the settings for an existing DB proxy.
", "ModifyDBProxyTargetGroup": "This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Modifies the properties of a DBProxyTargetGroup
.
Updates a manual DB snapshot, which can be encrypted or not encrypted, with a new engine version.
Amazon RDS supports upgrading DB snapshots for MySQL, Oracle, and PostgreSQL.
", + "ModifyDBSnapshot": "Updates a manual DB snapshot with a new engine version. The snapshot can be encrypted or unencrypted, but not shared or public.
Amazon RDS supports upgrading DB snapshots for MySQL, Oracle, and PostgreSQL.
", "ModifyDBSnapshotAttribute": "Adds an attribute and values to, or removes an attribute and values from, a manual DB snapshot.
To share a manual DB snapshot with other AWS accounts, specify restore
as the AttributeName
and use the ValuesToAdd
parameter to add a list of IDs of the AWS accounts that are authorized to restore the manual DB snapshot. Use the value all
to make the manual DB snapshot public, which means it can be copied or restored by all AWS accounts. Do not add the all
value for any manual DB snapshots that contain private information that you don't want available to all AWS accounts. If the manual DB snapshot is encrypted, it can be shared, but only by specifying a list of authorized AWS account IDs for the ValuesToAdd
parameter. You can't use all
as a value for that parameter in this case.
To view which AWS accounts have access to copy or restore a manual DB snapshot, or whether a manual DB snapshot is public or private, use the DescribeDBSnapshotAttributes
API action.
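A sketch of sharing a manual snapshot with a single account follows; it assumes the v0.x Request/Send pattern and that string lists are plain []string in this SDK version, and the identifiers are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// shareManualSnapshot grants one other account permission to restore a manual
// DB snapshot by adding its account ID to the "restore" attribute. The
// snapshot identifier and account ID are placeholders; pass "all" only for
// snapshots that are safe to make public.
func shareManualSnapshot(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)
	req := client.ModifyDBSnapshotAttributeRequest(&rds.ModifyDBSnapshotAttributeInput{
		DBSnapshotIdentifier: aws.String("mydb-snapshot-copy"),
		AttributeName:        aws.String("restore"),
		ValuesToAdd:          []string{"123456789012"},
	})
	_, err := req.Send(ctx)
	return err
}
```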
Modifies an existing DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
", "ModifyEventSubscription": "Modifies an existing RDS event notification subscription. You can't modify the source identifiers using this call. To change source identifiers for a subscription, use the AddSourceIdentifierToSubscription
and RemoveSourceIdentifierFromSubscription
calls.
You can see a list of the event categories for a given SourceType in the Events topic in the Amazon RDS User Guide or by using the DescribeEventCategories action.
", "ModifyGlobalCluster": "Modify a setting for an Amazon Aurora global cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Modifies an existing option group.
", - "PromoteReadReplica": "Promotes a Read Replica DB instance to a standalone DB instance.
Backup duration is a function of the amount of changes to the database since the previous backup. If you plan to promote a Read Replica to a standalone instance, we recommend that you enable backups and complete at least one backup prior to promotion. In addition, a Read Replica cannot be promoted to a standalone instance when it is in the backing-up
status. If you have enabled backups on your Read Replica, configure the automated backup window so that daily backups do not interfere with Read Replica promotion.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL.
Promotes a Read Replica DB cluster to a standalone DB cluster.
This action only applies to Aurora DB clusters.
Promotes a read replica DB instance to a standalone DB instance.
Backup duration is a function of the amount of changes to the database since the previous backup. If you plan to promote a read replica to a standalone instance, we recommend that you enable backups and complete at least one backup prior to promotion. In addition, a read replica cannot be promoted to a standalone instance when it is in the backing-up
status. If you have enabled backups on your read replica, configure the automated backup window so that daily backups do not interfere with read replica promotion.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL.
Promotes a read replica DB cluster to a standalone DB cluster.
This action only applies to Aurora DB clusters.
Purchases a reserved DB instance offering.
", "RebootDBInstance": "You might need to reboot your DB instance, usually for maintenance reasons. For example, if you make certain modifications, or if you change the DB parameter group associated with the DB instance, you must reboot the instance for the changes to take effect.
Rebooting a DB instance restarts the database engine service. Rebooting a DB instance results in a momentary outage, during which the DB instance status is set to rebooting.
For more information about rebooting, see Rebooting a DB Instance in the Amazon RDS User Guide.
", "RegisterDBProxyTargets": "This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Associate one or more DBProxyTarget
data structures with a DBProxyTargetGroup
.
Modifies the parameters of a DB cluster parameter group to the default value. To reset specific parameters submit a list of the following: ParameterName
and ApplyMethod
. To reset the entire DB cluster parameter group, specify the DBClusterParameterGroupName
and ResetAllParameters
parameters.
When resetting the entire group, dynamic parameters are updated immediately and static parameters are set to pending-reboot
to take effect on the next DB instance restart or RebootDBInstance
request. You must call RebootDBInstance
for every DB instance in your DB cluster that you want the updated static parameter to apply to.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Modifies the parameters of a DB parameter group to the engine/system default value. To reset specific parameters, provide a list of the following: ParameterName
and ApplyMethod
. To reset the entire DB parameter group, specify the DBParameterGroup
name and ResetAllParameters
parameters. When resetting the entire group, dynamic parameters are updated immediately and static parameters are set to pending-reboot
to take effect on the next DB instance restart or RebootDBInstance
request.
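A sketch of resetting an entire DB parameter group follows (v0.x Request/Send pattern assumed; the group name is a placeholder).

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// resetParameterGroup resets every parameter in a DB parameter group to its
// engine default. Dynamic parameters apply immediately; static parameters stay
// pending-reboot until each instance is rebooted. The group name is a placeholder.
func resetParameterGroup(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)
	req := client.ResetDBParameterGroupRequest(&rds.ResetDBParameterGroupInput{
		DBParameterGroupName: aws.String("example-param-group"),
		ResetAllParameters:   aws.Bool(true),
	})
	_, err := req.Send(ctx)
	return err
}
```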
Creates an Amazon Aurora DB cluster from data stored in an Amazon S3 bucket. Amazon RDS must be authorized to access the Amazon S3 bucket and the data must be created using the Percona XtraBackup utility as described in Migrating Data to an Amazon Aurora MySQL DB Cluster in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new DB cluster from a DB snapshot or DB cluster snapshot.
If a DB snapshot is specified, the target DB cluster is created from the source DB snapshot with a default configuration and default security group.
If a DB cluster snapshot is specified, the target DB cluster is created from the source DB cluster restore point with the same configuration as the original source DB cluster. If you don't specify a security group, the new DB cluster is associated with the default security group.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new DB cluster from a DB snapshot or DB cluster snapshot. This action only applies to Aurora DB clusters.
The target DB cluster is created from the source snapshot with a default configuration. If you don't specify a security group, the new DB cluster is associated with the default security group.
This action only restores the DB cluster, not the DB instances for that DB cluster. You must invoke the CreateDBInstance
action to create DB instances for the restored DB cluster, specifying the identifier of the restored DB cluster in DBClusterIdentifier
. You can create DB instances only after the RestoreDBClusterFromSnapshot
action has completed and the DB cluster is available.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
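The two-step restore-then-create flow described above might look like the following sketch, assuming the v0.x Request/Send pattern; identifiers and the instance class are placeholders.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

// restoreClusterFromSnapshot restores an Aurora DB cluster from a cluster
// snapshot and then creates the first DB instance in it, because the restore
// call alone creates no instances. In real code, wait for the restored cluster
// to become available before calling CreateDBInstance. Identifiers and the
// instance class are placeholders.
func restoreClusterFromSnapshot(ctx context.Context, cfg aws.Config) error {
	client := rds.New(cfg)

	restore := client.RestoreDBClusterFromSnapshotRequest(&rds.RestoreDBClusterFromSnapshotInput{
		DBClusterIdentifier: aws.String("restored-cluster"),
		SnapshotIdentifier:  aws.String("example-cluster-snapshot"),
		Engine:              aws.String("aurora-mysql"),
	})
	if _, err := restore.Send(ctx); err != nil {
		return err
	}

	create := client.CreateDBInstanceRequest(&rds.CreateDBInstanceInput{
		DBInstanceIdentifier: aws.String("restored-cluster-instance-1"),
		DBClusterIdentifier:  aws.String("restored-cluster"),
		DBInstanceClass:      aws.String("db.r5.large"),
		Engine:               aws.String("aurora-mysql"),
	})
	_, err := create.Send(ctx)
	return err
}
```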
", "RestoreDBClusterToPointInTime": "Restores a DB cluster to an arbitrary point in time. Users can restore to any point in time before LatestRestorableTime
for up to BackupRetentionPeriod
days. The target DB cluster is created from the source DB cluster with the same configuration as the original DB cluster, except that the new DB cluster is created with the default DB security group.
This action only restores the DB cluster, not the DB instances for that DB cluster. You must invoke the CreateDBInstance
action to create DB instances for the restored DB cluster, specifying the identifier of the restored DB cluster in DBClusterIdentifier
. You can create DB instances only after the RestoreDBClusterToPointInTime
action has completed and the DB cluster is available.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new DB instance from a DB snapshot. The target database is created from the source database restore point with most of the original configuration, with the default security group and the default DB parameter group. By default, the new DB instance is created as a single-AZ deployment except when the instance is a SQL Server instance that has an option group that is associated with mirroring; in this case, the instance becomes a mirrored AZ deployment and not a single-AZ deployment.
If your intent is to replace your original DB instance with the new, restored DB instance, then rename your original DB instance before you call the RestoreDBInstanceFromDBSnapshot action. RDS doesn't allow two DB instances with the same name. Once you have renamed your original DB instance with a different identifier, then you can pass the original name of the DB instance as the DBInstanceIdentifier in the call to the RestoreDBInstanceFromDBSnapshot action. The result is that you will replace the original DB instance with the DB instance created from the snapshot.
If you are restoring from a shared manual DB snapshot, the DBSnapshotIdentifier
must be the ARN of the shared DB snapshot.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. For Aurora, use RestoreDBClusterFromSnapshot
.
Amazon Relational Database Service (Amazon RDS) supports importing MySQL databases by using backup files. You can create a backup of your on-premises database, store it on Amazon Simple Storage Service (Amazon S3), and then restore the backup file onto a new Amazon RDS DB instance running MySQL. For more information, see Importing Data into an Amazon RDS MySQL DB Instance in the Amazon RDS User Guide.
", @@ -140,7 +140,7 @@ } }, "AccountQuota": { - "base": "Describes a quota for an AWS account.
The following are account quotas:
AllocatedStorage
- The total allocated storage per account, in GiB. The used value is the total allocated storage in the account, in GiB.
AuthorizationsPerDBSecurityGroup
- The number of ingress rules per DB security group. The used value is the highest number of ingress rules in a DB security group in the account. Other DB security groups in the account might have a lower number of ingress rules.
CustomEndpointsPerDBCluster
- The number of custom endpoints per DB cluster. The used value is the highest number of custom endpoints in a DB clusters in the account. Other DB clusters in the account might have a lower number of custom endpoints.
DBClusterParameterGroups
- The number of DB cluster parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB cluster parameter groups in the account.
DBClusterRoles
- The number of associated AWS Identity and Access Management (IAM) roles per DB cluster. The used value is the highest number of associated IAM roles for a DB cluster in the account. Other DB clusters in the account might have a lower number of associated IAM roles.
DBClusters
- The number of DB clusters per account. The used value is the count of DB clusters in the account.
DBInstanceRoles
- The number of associated IAM roles per DB instance. The used value is the highest number of associated IAM roles for a DB instance in the account. Other DB instances in the account might have a lower number of associated IAM roles.
DBInstances
- The number of DB instances per account. The used value is the count of the DB instances in the account.
Amazon RDS DB instances, Amazon Aurora DB instances, Amazon Neptune instances, and Amazon DocumentDB instances apply to this quota.
DBParameterGroups
- The number of DB parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB parameter groups in the account.
DBSecurityGroups
- The number of DB security groups (not VPC security groups) per account, excluding the default security group. The used value is the count of nondefault DB security groups in the account.
DBSubnetGroups
- The number of DB subnet groups per account. The used value is the count of the DB subnet groups in the account.
EventSubscriptions
- The number of event subscriptions per account. The used value is the count of the event subscriptions in the account.
ManualSnapshots
- The number of manual DB snapshots per account. The used value is the count of the manual DB snapshots in the account.
OptionGroups
- The number of DB option groups per account, excluding default option groups. The used value is the count of nondefault DB option groups in the account.
ReadReplicasPerMaster
- The number of Read Replicas per DB instance. The used value is the highest number of Read Replicas for a DB instance in the account. Other DB instances in the account might have a lower number of Read Replicas.
ReservedDBInstances
- The number of reserved DB instances per account. The used value is the count of the active reserved DB instances in the account.
SubnetsPerDBSubnetGroup
- The number of subnets per DB subnet group. The used value is highest number of subnets for a DB subnet group in the account. Other DB subnet groups in the account might have a lower number of subnets.
For more information, see Quotas for Amazon RDS in the Amazon RDS User Guide and Quotas for Amazon Aurora in the Amazon Aurora User Guide.
", + "base": "Describes a quota for an AWS account.
The following are account quotas:
AllocatedStorage
- The total allocated storage per account, in GiB. The used value is the total allocated storage in the account, in GiB.
AuthorizationsPerDBSecurityGroup
- The number of ingress rules per DB security group. The used value is the highest number of ingress rules in a DB security group in the account. Other DB security groups in the account might have a lower number of ingress rules.
CustomEndpointsPerDBCluster
- The number of custom endpoints per DB cluster. The used value is the highest number of custom endpoints in a DB cluster in the account. Other DB clusters in the account might have a lower number of custom endpoints.
DBClusterParameterGroups
- The number of DB cluster parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB cluster parameter groups in the account.
DBClusterRoles
- The number of associated AWS Identity and Access Management (IAM) roles per DB cluster. The used value is the highest number of associated IAM roles for a DB cluster in the account. Other DB clusters in the account might have a lower number of associated IAM roles.
DBClusters
- The number of DB clusters per account. The used value is the count of DB clusters in the account.
DBInstanceRoles
- The number of associated IAM roles per DB instance. The used value is the highest number of associated IAM roles for a DB instance in the account. Other DB instances in the account might have a lower number of associated IAM roles.
DBInstances
- The number of DB instances per account. The used value is the count of the DB instances in the account.
Amazon RDS DB instances, Amazon Aurora DB instances, Amazon Neptune instances, and Amazon DocumentDB instances apply to this quota.
DBParameterGroups
- The number of DB parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB parameter groups in the account.
DBSecurityGroups
- The number of DB security groups (not VPC security groups) per account, excluding the default security group. The used value is the count of nondefault DB security groups in the account.
DBSubnetGroups
- The number of DB subnet groups per account. The used value is the count of the DB subnet groups in the account.
EventSubscriptions
- The number of event subscriptions per account. The used value is the count of the event subscriptions in the account.
ManualSnapshots
- The number of manual DB snapshots per account. The used value is the count of the manual DB snapshots in the account.
OptionGroups
- The number of DB option groups per account, excluding default option groups. The used value is the count of nondefault DB option groups in the account.
ReadReplicasPerMaster
- The number of read replicas per DB instance. The used value is the highest number of read replicas for a DB instance in the account. Other DB instances in the account might have a lower number of read replicas.
ReservedDBInstances
- The number of reserved DB instances per account. The used value is the count of the active reserved DB instances in the account.
SubnetsPerDBSubnetGroup
- The number of subnets per DB subnet group. The used value is the highest number of subnets for a DB subnet group in the account. Other DB subnet groups in the account might have a lower number of subnets.
For more information, see Quotas for Amazon RDS in the Amazon RDS User Guide and Quotas for Amazon Aurora in the Amazon Aurora User Guide.
", "refs": { "AccountQuotaList$member": null } @@ -307,7 +307,7 @@ "DBClusterSnapshot$StorageEncrypted": "Specifies whether the DB cluster snapshot is encrypted.
", "DBClusterSnapshot$IAMDatabaseAuthenticationEnabled": "True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.
", "DBEngineVersion$SupportsLogExportsToCloudwatchLogs": "A value that indicates whether the engine version supports exporting the log types specified by ExportableLogTypes to CloudWatch Logs.
", - "DBEngineVersion$SupportsReadReplica": "Indicates whether the database engine version supports Read Replicas.
", + "DBEngineVersion$SupportsReadReplica": "Indicates whether the database engine version supports read replicas.
", "DBInstance$MultiAZ": "Specifies if the DB instance is a Multi-AZ deployment.
", "DBInstance$AutoMinorVersionUpgrade": "Indicates that minor version patches are applied automatically.
", "DBInstance$PubliclyAccessible": "Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.
", @@ -324,7 +324,7 @@ "DBSnapshot$Encrypted": "Specifies whether the DB snapshot is encrypted.
", "DBSnapshot$IAMDatabaseAuthenticationEnabled": "True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.
", "DeleteDBClusterMessage$SkipFinalSnapshot": "A value that indicates whether to skip the creation of a final DB cluster snapshot before the DB cluster is deleted. If skip is specified, no DB cluster snapshot is created. If skip isn't specified, a DB cluster snapshot is created before the DB cluster is deleted. By default, skip isn't specified, and the DB cluster snapshot is created. By default, this parameter is disabled.
You must specify a FinalDBSnapshotIdentifier
parameter if SkipFinalSnapshot
is disabled.
A value that indicates whether to skip the creation of a final DB snapshot before the DB instance is deleted. If skip is specified, no DB snapshot is created. If skip isn't specified, a DB snapshot is created before the DB instance is deleted. By default, skip isn't specified, and the DB snapshot is created.
When a DB instance is in a failure state and has a status of 'failed', 'incompatible-restore', or 'incompatible-network', it can only be deleted when skip is specified.
Specify skip when deleting a Read Replica.
The FinalDBSnapshotIdentifier parameter must be specified if skip isn't specified.
A value that indicates whether to skip the creation of a final DB snapshot before the DB instance is deleted. If skip is specified, no DB snapshot is created. If skip isn't specified, a DB snapshot is created before the DB instance is deleted. By default, skip isn't specified, and the DB snapshot is created.
When a DB instance is in a failure state and has a status of 'failed', 'incompatible-restore', or 'incompatible-network', it can only be deleted when skip is specified.
Specify skip when deleting a read replica.
The FinalDBSnapshotIdentifier parameter must be specified if skip isn't specified.
A value that indicates whether to include shared manual DB cluster snapshots from other AWS accounts that this AWS account has been given permission to copy or restore. By default, these snapshots are not included.
You can give an AWS account permission to restore a manual DB cluster snapshot from another AWS account by the ModifyDBClusterSnapshotAttribute
API action.
A value that indicates whether to include manual DB cluster snapshots that are public and can be copied or restored by any AWS account. By default, the public snapshots are not included.
You can share a manual DB cluster snapshot as public by using the ModifyDBClusterSnapshotAttribute API action.
", "DescribeDBClustersMessage$IncludeShared": "Optional Boolean parameter that specifies whether the output includes information about clusters shared from other AWS accounts.
", @@ -353,7 +353,7 @@ "OptionSetting$IsCollection": "Indicates if the option setting is part of a collection.
", "OptionVersion$IsDefault": "True if the version is the default version of the option, and otherwise false.
", "OrderableDBInstanceOption$MultiAZCapable": "Indicates whether a DB instance is Multi-AZ capable.
", - "OrderableDBInstanceOption$ReadReplicaCapable": "Indicates whether a DB instance can have a Read Replica.
", + "OrderableDBInstanceOption$ReadReplicaCapable": "Indicates whether a DB instance can have a read replica.
", "OrderableDBInstanceOption$Vpc": "Indicates whether a DB instance is in a VPC.
", "OrderableDBInstanceOption$SupportsStorageEncryption": "Indicates whether a DB instance supports encrypted storage.
", "OrderableDBInstanceOption$SupportsIops": "Indicates whether a DB instance supports provisioned IOPS.
", @@ -394,12 +394,12 @@ "CreateDBInstanceMessage$EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts. By default, mapping is disabled.
You can enable IAM database authentication for the following database engines:
Amazon Aurora
Not applicable. Mapping AWS IAM accounts to database accounts is managed by the DB cluster.
MySQL
For MySQL 5.6, minor version 5.6.34 or higher
For MySQL 5.7, minor version 5.7.16 or higher
For MySQL 8.0, minor version 8.0.16 or higher
PostgreSQL
For PostgreSQL 9.5, minor version 9.5.15 or higher
For PostgreSQL 9.6, minor version 9.6.11 or higher
PostgreSQL 10.6, 10.7, and 10.9
For more information, see IAM Database Authentication for MySQL and PostgreSQL in the Amazon RDS User Guide.
", "CreateDBInstanceMessage$EnablePerformanceInsights": "A value that indicates whether to enable Performance Insights for the DB instance.
For more information, see Using Amazon Performance Insights in the Amazon Relational Database Service User Guide.
", "CreateDBInstanceMessage$DeletionProtection": "A value that indicates whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled. For more information, see Deleting a DB Instance.
Amazon Aurora
Not applicable. You can enable or disable deletion protection for the DB cluster. For more information, see CreateDBCluster
. DB instances in a DB cluster can be deleted even when deletion protection is enabled for the DB cluster.
A value that indicates whether the Read Replica is in a Multi-AZ deployment.
You can create a Read Replica as a Multi-AZ DB instance. RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your Read Replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.
", - "CreateDBInstanceReadReplicaMessage$AutoMinorVersionUpgrade": "A value that indicates whether minor engine upgrades are applied automatically to the Read Replica during the maintenance window.
Default: Inherits from the source DB instance
", + "CreateDBInstanceReadReplicaMessage$MultiAZ": "A value that indicates whether the read replica is in a Multi-AZ deployment.
You can create a read replica as a Multi-AZ DB instance. RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.
", + "CreateDBInstanceReadReplicaMessage$AutoMinorVersionUpgrade": "A value that indicates whether minor engine upgrades are applied automatically to the read replica during the maintenance window.
Default: Inherits from the source DB instance
", "CreateDBInstanceReadReplicaMessage$PubliclyAccessible": "A value that indicates whether the DB instance is publicly accessible. When the DB instance is publicly accessible, it is an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. When the DB instance isn't publicly accessible, it is an internal instance with a DNS name that resolves to a private IP address. For more information, see CreateDBInstance.
", - "CreateDBInstanceReadReplicaMessage$CopyTagsToSnapshot": "A value that indicates whether to copy all tags from the Read Replica to snapshots of the Read Replica. By default, tags are not copied.
", + "CreateDBInstanceReadReplicaMessage$CopyTagsToSnapshot": "A value that indicates whether to copy all tags from the read replica to snapshots of the read replica. By default, tags are not copied.
", "CreateDBInstanceReadReplicaMessage$EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts. By default, mapping is disabled. For information about the supported DB engines, see CreateDBInstance.
For more information about IAM database authentication, see IAM Database Authentication for MySQL and PostgreSQL in the Amazon RDS User Guide.
", - "CreateDBInstanceReadReplicaMessage$EnablePerformanceInsights": "A value that indicates whether to enable Performance Insights for the Read Replica.
For more information, see Using Amazon Performance Insights in the Amazon RDS User Guide.
", + "CreateDBInstanceReadReplicaMessage$EnablePerformanceInsights": "A value that indicates whether to enable Performance Insights for the read replica.
For more information, see Using Amazon Performance Insights in the Amazon RDS User Guide.
", "CreateDBInstanceReadReplicaMessage$UseDefaultProcessorFeatures": "A value that indicates whether the DB instance class of the DB instance uses its default processor features.
", "CreateDBInstanceReadReplicaMessage$DeletionProtection": "A value that indicates whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled. For more information, see Deleting a DB Instance.
", "CreateEventSubscriptionMessage$Enabled": "A value that indicates whether to activate the subscription. If the event notification subscription isn't activated, the subscription is created but not active.
", @@ -1120,7 +1120,7 @@ "DBInstanceStatusInfoList": { "base": null, "refs": { - "DBInstance$StatusInfos": "The status of a Read Replica. If the instance isn't a Read Replica, this is blank.
" + "DBInstance$StatusInfos": "The status of a read replica. If the instance isn't a read replica, this is blank.
" } }, "DBLogFileNotFoundFault": { @@ -1890,14 +1890,14 @@ "EngineFamily": { "base": null, "refs": { - "CreateDBProxyRequest$EngineFamily": "The kinds of databases that the proxy can connect to. This value determines which database network protocol the proxy recognizes when it interprets network traffic to and from the database. Currently, this value is always MYSQL
. The engine family applies to both RDS MySQL and Aurora MySQL.
The kinds of databases that the proxy can connect to. This value determines which database network protocol the proxy recognizes when it interprets network traffic to and from the database. The engine family applies to MySQL and PostgreSQL for both RDS and Aurora.
" } }, "EngineModeList": { "base": null, "refs": { - "DBEngineVersion$SupportedEngineModes": "A list of the supported DB engine modes.
", - "OrderableDBInstanceOption$SupportedEngineModes": "A list of the supported DB engine modes.
", + "DBEngineVersion$SupportedEngineModes": "A list of the supported DB engine modes.
global
engine mode only applies for global database clusters created with Aurora MySQL version 5.6.10a. For higher Aurora MySQL versions, the clusters in a global database use provisioned
engine mode.
A list of the supported DB engine modes.
global
engine mode only applies for global database clusters created with Aurora MySQL version 5.6.10a. For higher Aurora MySQL versions, the clusters in a global database use provisioned
engine mode.
The valid DB engine modes.
" } }, @@ -2243,8 +2243,8 @@ "CreateDBClusterMessage$BackupRetentionPeriod": "The number of days for which automated backups are retained.
Default: 1
Constraints:
Must be a value from 1 to 35
The port number on which the instances in the DB cluster accept connections.
Default: 3306
if engine is set as aurora or 5432
if set to aurora-postgresql.
The amount of storage (in gibibytes) to allocate for the DB instance.
Type: Integer
Amazon Aurora
Not applicable. Aurora cluster volumes automatically grow as the amount of data in your database increases, though you are only charged for the space that you use in an Aurora cluster volume.
MySQL
Constraints to the amount of storage for each storage type are the following:
General Purpose (SSD) storage (gp2): Must be an integer from 20 to 65536.
Provisioned IOPS storage (io1): Must be an integer from 100 to 65536.
Magnetic storage (standard): Must be an integer from 5 to 3072.
MariaDB
Constraints to the amount of storage for each storage type are the following:
General Purpose (SSD) storage (gp2): Must be an integer from 20 to 65536.
Provisioned IOPS storage (io1): Must be an integer from 100 to 65536.
Magnetic storage (standard): Must be an integer from 5 to 3072.
PostgreSQL
Constraints to the amount of storage for each storage type are the following:
General Purpose (SSD) storage (gp2): Must be an integer from 20 to 65536.
Provisioned IOPS storage (io1): Must be an integer from 100 to 65536.
Magnetic storage (standard): Must be an integer from 5 to 3072.
Oracle
Constraints to the amount of storage for each storage type are the following:
General Purpose (SSD) storage (gp2): Must be an integer from 20 to 65536.
Provisioned IOPS storage (io1): Must be an integer from 100 to 65536.
Magnetic storage (standard): Must be an integer from 10 to 3072.
SQL Server
Constraints to the amount of storage for each storage type are the following:
General Purpose (SSD) storage (gp2):
Enterprise and Standard editions: Must be an integer from 200 to 16384.
Web and Express editions: Must be an integer from 20 to 16384.
Provisioned IOPS storage (io1):
Enterprise and Standard editions: Must be an integer from 200 to 16384.
Web and Express editions: Must be an integer from 100 to 16384.
Magnetic storage (standard):
Enterprise and Standard editions: Must be an integer from 200 to 1024.
Web and Express editions: Must be an integer from 20 to 1024.
The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Amazon Aurora
Not applicable. The retention period for automated backups is managed by the DB cluster.
Default: 1
Constraints:
Must be a value from 0 to 35
Can't be set to 0 if the DB instance is a source to Read Replicas
The port number on which the database accepts connections.
MySQL
Default: 3306
Valid Values: 1150-65535
Type: Integer
MariaDB
Default: 3306
Valid Values: 1150-65535
Type: Integer
PostgreSQL
Default: 5432
Valid Values: 1150-65535
Type: Integer
Oracle
Default: 1521
Valid Values: 1150-65535
SQL Server
Default: 1433
Valid Values: 1150-65535
except for 1434
, 3389
, 47001
, 49152
, and 49152
through 49156
.
Amazon Aurora
Default: 3306
Valid Values: 1150-65535
Type: Integer
", + "CreateDBInstanceMessage$BackupRetentionPeriod": "The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Amazon Aurora
Not applicable. The retention period for automated backups is managed by the DB cluster.
Default: 1
Constraints:
Must be a value from 0 to 35
Can't be set to 0 if the DB instance is a source to read replicas
The port number on which the database accepts connections.
MySQL
Default: 3306
Valid values: 1150-65535
Type: Integer
MariaDB
Default: 3306
Valid values: 1150-65535
Type: Integer
PostgreSQL
Default: 5432
Valid values: 1150-65535
Type: Integer
Oracle
Default: 1521
Valid values: 1150-65535
SQL Server
Default: 1433
Valid values: 1150-65535
except 1234
, 1434
, 3260
, 3343
, 3389
, 47001
, and 49152-49156
.
Amazon Aurora
Default: 3306
Valid values: 1150-65535
Type: Integer
", "CreateDBInstanceMessage$Iops": "The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB instance. For information about valid Iops values, see Amazon RDS Provisioned IOPS Storage to Improve Performance in the Amazon RDS User Guide.
Constraints: For MariaDB, MySQL, Oracle, and PostgreSQL DB instances, must be a multiple between .5 and 50 of the storage amount for the DB instance. For SQL Server DB instances, must be a multiple between 1 and 50 of the storage amount for the DB instance.
", "CreateDBInstanceMessage$MonitoringInterval": "The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0.
If MonitoringRoleArn
is specified, then you must also set MonitoringInterval
to a value other than 0.
Valid Values: 0, 1, 5, 10, 15, 30, 60
A value that specifies the order in which an Aurora Replica is promoted to the primary instance after a failure of the existing primary instance. For more information, see Fault Tolerance for an Aurora DB Cluster in the Amazon Aurora User Guide.
Default: 1
Valid Values: 0 - 15
", @@ -2252,7 +2252,7 @@ "CreateDBInstanceMessage$MaxAllocatedStorage": "The upper limit to which Amazon RDS can automatically scale the storage of the DB instance.
", "CreateDBInstanceReadReplicaMessage$Port": "The port number that the DB instance uses for connections.
Default: Inherits from the source DB instance
Valid Values: 1150-65535
The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB instance.
", - "CreateDBInstanceReadReplicaMessage$MonitoringInterval": "The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the Read Replica. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0.
If MonitoringRoleArn
is specified, then you must also set MonitoringInterval
to a value other than 0.
Valid Values: 0, 1, 5, 10, 15, 30, 60
The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the read replica. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0.
If MonitoringRoleArn
is specified, then you must also set MonitoringInterval
to a value other than 0.
Valid Values: 0, 1, 5, 10, 15, 30, 60
The amount of time, in days, to retain Performance Insights data. Valid values are 7 or 731 (2 years).
", "CreateDBProxyRequest$IdleClientTimeout": "The number of seconds that a connection to the proxy can be inactive before the proxy disconnects it. You can set this value higher or lower than the connection timeout limit for the associated database.
", "DBCluster$AllocatedStorage": "For all database engines except Amazon Aurora, AllocatedStorage
specifies the allocated storage size in gibibytes (GiB). For Aurora, AllocatedStorage
always returns 1, because Aurora DB cluster storage size isn't fixed, but instead automatically adjusts as needed.
The number of days for which automated backups are retained. You must specify a minimum value of 1.
Default: 1
Constraints:
Must be a value from 1 to 35
The port number on which the DB cluster accepts connections.
Constraints: Value must be 1150-65535
Default: The same port as the original DB cluster.
", "ModifyDBInstanceMessage$AllocatedStorage": "The new amount of storage (in gibibytes) to allocate for the DB instance.
For MariaDB, MySQL, Oracle, and PostgreSQL, the value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.
For the valid values for allocated storage for each engine, see CreateDBInstance
.
The number of days to retain automated backups. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Changing this parameter can result in an outage if you change from 0 to a non-zero value or from a non-zero value to 0. These changes are applied during the next maintenance window unless the ApplyImmediately
parameter is enabled for this request. If you change the parameter from one non-zero value to another non-zero value, the change is asynchronously applied as soon as possible.
Amazon Aurora
Not applicable. The retention period for automated backups is managed by the DB cluster. For more information, see ModifyDBCluster
.
Default: Uses existing setting
Constraints:
Must be a value from 0 to 35
Can be specified for a MySQL Read Replica only if the source is running MySQL 5.6 or later
Can be specified for a PostgreSQL Read Replica only if the source is running PostgreSQL 9.3.5
Can't be set to 0 if the DB instance is a source to Read Replicas
The new Provisioned IOPS (I/O operations per second) value for the RDS instance.
Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.
If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.
Constraints: For MariaDB, MySQL, Oracle, and PostgreSQL, the value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.
Default: Uses existing setting
", + "ModifyDBInstanceMessage$BackupRetentionPeriod": "The number of days to retain automated backups. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Changing this parameter can result in an outage if you change from 0 to a non-zero value or from a non-zero value to 0. These changes are applied during the next maintenance window unless the ApplyImmediately
parameter is enabled for this request. If you change the parameter from one non-zero value to another non-zero value, the change is asynchronously applied as soon as possible.
Amazon Aurora
Not applicable. The retention period for automated backups is managed by the DB cluster. For more information, see ModifyDBCluster
.
Default: Uses existing setting
Constraints:
Must be a value from 0 to 35
Can be specified for a MySQL read replica only if the source is running MySQL 5.6 or later
Can be specified for a PostgreSQL read replica only if the source is running PostgreSQL 9.3.5
Can't be set to 0 if the DB instance is a source to read replicas
The new Provisioned IOPS (I/O operations per second) value for the RDS instance.
Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.
If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.
Constraints: For MariaDB, MySQL, Oracle, and PostgreSQL, the value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.
Default: Uses existing setting
", "ModifyDBInstanceMessage$MonitoringInterval": "The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0.
If MonitoringRoleArn
is specified, then you must also set MonitoringInterval
to a value other than 0.
Valid Values: 0, 1, 5, 10, 15, 30, 60
The port number on which the database accepts connections.
The value of the DBPortNumber
parameter must not match any of the port values specified for options in the option group for the DB instance.
Your database will restart when you change the DBPortNumber
value regardless of the value of the ApplyImmediately
parameter.
MySQL
Default: 3306
Valid Values: 1150-65535
MariaDB
Default: 3306
Valid Values: 1150-65535
PostgreSQL
Default: 5432
Valid Values: 1150-65535
Type: Integer
Oracle
Default: 1521
Valid Values: 1150-65535
SQL Server
Default: 1433
Valid Values: 1150-65535
except for 1434
, 3389
, 47001
, 49152
, and 49152
through 49156
.
Amazon Aurora
Default: 3306
Valid Values: 1150-65535
The port number on which the database accepts connections.
The value of the DBPortNumber
parameter must not match any of the port values specified for options in the option group for the DB instance.
Your database will restart when you change the DBPortNumber
value regardless of the value of the ApplyImmediately
parameter.
MySQL
Default: 3306
Valid values: 1150-65535
MariaDB
Default: 3306
Valid values: 1150-65535
PostgreSQL
Default: 5432
Valid values: 1150-65535
Type: Integer
Oracle
Default: 1521
Valid values: 1150-65535
SQL Server
Default: 1433
Valid values: 1150-65535
except 1234
, 1434
, 3260
, 3343
, 3389
, 47001
, and 49152-49156
.
Amazon Aurora
Default: 3306
Valid values: 1150-65535
A value that specifies the order in which an Aurora Replica is promoted to the primary instance after a failure of the existing primary instance. For more information, see Fault Tolerance for an Aurora DB Cluster in the Amazon Aurora User Guide.
Default: 1
Valid Values: 0 - 15
", "ModifyDBInstanceMessage$PerformanceInsightsRetentionPeriod": "The amount of time, in days, to retain Performance Insights data. Valid values are 7 or 731 (2 years).
", "ModifyDBInstanceMessage$MaxAllocatedStorage": "The upper limit to which Amazon RDS can automatically scale the storage of the DB instance.
", @@ -2325,7 +2325,7 @@ "PendingModifiedValues$Port": "Specifies the pending port for the DB instance.
", "PendingModifiedValues$BackupRetentionPeriod": "Specifies the pending number of days for which automated backups are retained.
", "PendingModifiedValues$Iops": "Specifies the new Provisioned IOPS value for the DB instance that will be applied or is currently being applied.
", - "PromoteReadReplicaMessage$BackupRetentionPeriod": "The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Default: 1
Constraints:
Must be a value from 0 to 35.
Can't be set to 0 if the DB instance is a source to Read Replicas.
The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.
Default: 1
Constraints:
Must be a value from 0 to 35.
Can't be set to 0 if the DB instance is a source to read replicas.
The number of instances to reserve.
Default: 1
The step value for the range. For example, if you have a range of 5,000 to 10,000, with a step value of 1,000, the valid values start at 5,000 and step up by 1,000. Even though 7,500 is within the range, it isn't a valid value for the range. The valid values are 5,000, 6,000, 7,000, 8,000...
", "RestoreDBClusterFromS3Message$BackupRetentionPeriod": "The number of days for which automated backups of the restored DB cluster are retained. You must specify a minimum value of 1.
Default: 1
Constraints:
Must be a value from 1 to 35
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: Minimum 20, maximum 100.
", "DescribeDBProxyTargetGroupsRequest$MaxRecords": " The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: Minimum 20, maximum 100.
", - "DescribeDBProxyTargetsRequest$MaxRecords": " The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: Minimum 20, maximum 100.
" + "DescribeDBProxyTargetsRequest$MaxRecords": " The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved. Default: 100. Constraints: Minimum 20, maximum 100.
", + "DescribeExportTasksMessage$MaxRecords": " The maximum number of records to include in the response. If more records exist than the specified value, a pagination token called a marker is included in the response. You can use the marker in a later DescribeExportTasks request to retrieve the remaining results. Default: 100. Constraints: Minimum 20, maximum 100.
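Editorial note: MaxRecords and Marker are the standard RDS pagination contract, so a hedged sketch of a DescribeExportTasks page loop follows. It assumes the pre-1.0 request/Send pattern; the ExportTasks and ExportTaskIdentifier response members are assumptions drawn from the surrounding model rather than definitions in this hunk.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := rds.New(cfg)

	input := &rds.DescribeExportTasksInput{MaxRecords: aws.Int64(20)} // minimum allowed page size
	for {
		resp, err := svc.DescribeExportTasksRequest(input).Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, task := range resp.ExportTasks {
			if task.ExportTaskIdentifier != nil {
				fmt.Println(*task.ExportTaskIdentifier)
			}
		}
		if resp.Marker == nil || *resp.Marker == "" {
			break // no more pages
		}
		input.Marker = resp.Marker // hand the marker back to fetch the next page
	}
}
```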
" } }, "MinimumEngineVersionPerAllowedValue": { @@ -2980,19 +2981,19 @@ "ReadReplicaDBClusterIdentifierList": { "base": null, "refs": { - "DBInstance$ReadReplicaDBClusterIdentifiers": "Contains one or more identifiers of Aurora DB clusters to which the RDS DB instance is replicated as a Read Replica. For example, when you create an Aurora Read Replica of an RDS MySQL DB instance, the Aurora MySQL DB cluster for the Aurora Read Replica is shown. This output does not contain information about cross region Aurora Read Replicas.
Currently, each RDS DB instance can have only one Aurora Read Replica.
Contains one or more identifiers of Aurora DB clusters to which the RDS DB instance is replicated as a read replica. For example, when you create an Aurora read replica of an RDS MySQL DB instance, the Aurora MySQL DB cluster for the Aurora read replica is shown. This output does not contain information about cross region Aurora read replicas.
Currently, each RDS DB instance can have only one Aurora read replica.
Contains one or more identifiers of the Read Replicas associated with this DB instance.
" + "DBInstance$ReadReplicaDBInstanceIdentifiers": "Contains one or more identifiers of the read replicas associated with this DB instance.
" } }, "ReadReplicaIdentifierList": { "base": null, "refs": { - "DBCluster$ReadReplicaIdentifiers": "Contains one or more identifiers of the Read Replicas associated with this DB cluster.
" + "DBCluster$ReadReplicaIdentifiers": "Contains one or more identifiers of the read replicas associated with this DB cluster.
" } }, "ReadersArnList": { @@ -3282,7 +3283,7 @@ "SourceRegionList": { "base": null, "refs": { - "SourceRegionMessage$SourceRegions": "A list of SourceRegion instances that contains each source AWS Region that the current AWS Region can get a Read Replica or a DB snapshot from.
" + "SourceRegionMessage$SourceRegions": "A list of SourceRegion instances that contains each source AWS Region that the current AWS Region can get a read replica or a DB snapshot from.
" } }, "SourceRegionMessage": { @@ -3408,15 +3409,15 @@ "CertificateMessage$Marker": " An optional pagination token provided by a previous DescribeCertificates
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The name of the character set.
", "CharacterSet$CharacterSetDescription": "The description of the character set.
", - "ConnectionPoolConfiguration$InitQuery": " One or more SQL statements for the proxy to run when opening each new database connection. Typically used with SET
statements to make sure that each connection has identical settings such as time zone and character set. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single SET
statement, such as SET x=1, y=2
.
Default: no initialization query
", - "ConnectionPoolConfigurationInfo$InitQuery": " One or more SQL statements for the proxy to run when opening each new database connection. Typically used with SET
statements to make sure that each connection has identical settings such as time zone and character set. This setting is empty by default. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single SET
statement, such as SET x=1, y=2
.
One or more SQL statements for the proxy to run when opening each new database connection. Typically used with SET
statements to make sure that each connection has identical settings such as time zone and character set. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single SET
statement, such as SET x=1, y=2
.
InitQuery
is not currently supported for PostgreSQL.
Default: no initialization query
", + "ConnectionPoolConfigurationInfo$InitQuery": " One or more SQL statements for the proxy to run when opening each new database connection. Typically used with SET
statements to make sure that each connection has identical settings such as time zone and character set. This setting is empty by default. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single SET
statement, such as SET x=1, y=2
.
InitQuery
is not currently supported for PostgreSQL.
The identifier or Amazon Resource Name (ARN) for the source DB cluster parameter group. For information about creating an ARN, see Constructing an ARN for Amazon RDS in the Amazon Aurora User Guide.
Constraints:
Must specify a valid DB cluster parameter group.
If the source DB cluster parameter group is in the same AWS Region as the copy, specify a valid DB parameter group identifier, for example my-db-cluster-param-group
, or a valid ARN.
If the source DB parameter group is in a different AWS Region than the copy, specify a valid DB cluster parameter group ARN, for example arn:aws:rds:us-east-1:123456789012:cluster-pg:custom-cluster-group1
.
The identifier for the copied DB cluster parameter group.
Constraints:
Can't be null, empty, or blank
Must contain from 1 to 255 letters, numbers, or hyphens
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Example: my-cluster-param-group1
A description for the copied DB cluster parameter group.
", "CopyDBClusterSnapshotMessage$SourceDBClusterSnapshotIdentifier": "The identifier of the DB cluster snapshot to copy. This parameter isn't case-sensitive.
You can't copy an encrypted, shared DB cluster snapshot from one AWS Region to another.
Constraints:
Must specify a valid system snapshot in the \"available\" state.
If the source snapshot is in the same AWS Region as the copy, specify a valid DB snapshot identifier.
If the source snapshot is in a different AWS Region than the copy, specify a valid DB cluster snapshot ARN. For more information, go to Copying Snapshots Across AWS Regions in the Amazon Aurora User Guide.
Example: my-cluster-snapshot1
The identifier of the new DB cluster snapshot to create from the source DB cluster snapshot. This parameter isn't case-sensitive.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: my-cluster-snapshot2
The AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId.
To copy an encrypted DB cluster snapshot to another AWS Region, you must set KmsKeyId to the KMS key ID you want to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId parameter, an error is returned.
The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
API action in the AWS Region that contains the source DB cluster snapshot to copy. The PreSignedUrl
parameter must be used when copying an encrypted DB cluster snapshot from another AWS Region. Don't specify PreSignedUrl
when you are copying an encrypted DB cluster snapshot in the same AWS Region.
The pre-signed URL must be a valid request for the CopyDBSClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot is to be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
API action in the AWS Region that contains the source DB cluster snapshot to copy. The PreSignedUrl
parameter must be used when copying an encrypted DB cluster snapshot from another AWS Region. Don't specify PreSignedUrl
when you are copying an encrypted DB cluster snapshot in the same AWS Region.
The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot is to be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
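Editorial note: for SDK users the last paragraph is the practical path, passing SourceRegion and letting the client presign. A minimal sketch under that assumption follows; the target snapshot identifier member name and the key alias are illustrative assumptions, not values from this model.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := rds.New(cfg)

	// Copy an encrypted cluster snapshot from us-west-2 into the client's
	// Region. Supplying SourceRegion lets the SDK autogenerate PreSignedUrl.
	req := svc.CopyDBClusterSnapshotRequest(&rds.CopyDBClusterSnapshotInput{
		SourceDBClusterSnapshotIdentifier: aws.String("arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115"),
		TargetDBClusterSnapshotIdentifier: aws.String("aurora-cluster1-snapshot-copy"), // assumed member name
		KmsKeyId:                          aws.String("alias/destination-region-key"),  // must be valid in the destination Region
		SourceRegion:                      aws.String("us-west-2"),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```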
The identifier or ARN for the source DB parameter group. For information about creating an ARN, see Constructing an ARN for Amazon RDS in the Amazon RDS User Guide.
Constraints:
Must specify a valid DB parameter group.
Must specify a valid DB parameter group identifier, for example my-db-param-group
, or a valid ARN.
The identifier for the copied DB parameter group.
Constraints:
Can't be null, empty, or blank
Must contain from 1 to 255 letters, numbers, or hyphens
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Example: my-db-parameter-group
A description for the copied DB parameter group.
", @@ -3447,12 +3448,12 @@ "CreateDBClusterMessage$OptionGroupName": "A value that indicates that the DB cluster should be associated with the specified option group.
Permanent options can't be removed from an option group. The option group can't be removed from a DB cluster once it is associated with a DB cluster.
", "CreateDBClusterMessage$PreferredBackupWindow": "The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod
parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred DB Cluster Maintenance Window in the Amazon Aurora User Guide.
Constraints:
Must be in the format hh24:mi-hh24:mi
.
Must be in Universal Coordinated Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred DB Cluster Maintenance Window in the Amazon Aurora User Guide.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Constraints: Minimum 30-minute window.
", - "CreateDBClusterMessage$ReplicationSourceIdentifier": "The Amazon Resource Name (ARN) of the source DB instance or DB cluster if this DB cluster is created as a Read Replica.
", - "CreateDBClusterMessage$KmsKeyId": "The AWS KMS key identifier for an encrypted DB cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
If an encryption key isn't specified in KmsKeyId
:
If ReplicationSourceIdentifier
identifies an encrypted source, then Amazon RDS will use the encryption key used to encrypt the source. Otherwise, Amazon RDS will use your default encryption key.
If the StorageEncrypted
parameter is enabled and ReplicationSourceIdentifier
isn't specified, then Amazon RDS will use your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
If you create a Read Replica of an encrypted DB cluster in another AWS Region, you must set KmsKeyId
to a KMS key ID that is valid in the destination AWS Region. This key is used to encrypt the Read Replica in that AWS Region.
A URL that contains a Signature Version 4 signed request for the CreateDBCluster
action to be called in the source AWS Region where the DB cluster is replicated from. You only need to specify PreSignedUrl
when you are performing cross-region replication from an encrypted DB cluster.
The pre-signed URL must be a valid request for the CreateDBCluster
API action that can be executed in the source AWS Region that contains the encrypted DB cluster to be copied.
The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster in the destination AWS Region. This should refer to the same KMS key for both the CreateDBCluster
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that Aurora Read Replica will be created in.
ReplicationSourceIdentifier
- The DB cluster identifier for the encrypted DB cluster to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster from the us-west-2 AWS Region, then your ReplicationSourceIdentifier
would look like Example: arn:aws:rds:us-west-2:123456789012:cluster:aurora-cluster1
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
Limitations and requirements apply to some DB engine modes. For more information, see the following sections in the Amazon Aurora User Guide:
The Amazon Resource Name (ARN) of the source DB instance or DB cluster if this DB cluster is created as a read replica.
", + "CreateDBClusterMessage$KmsKeyId": "The AWS KMS key identifier for an encrypted DB cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
If an encryption key isn't specified in KmsKeyId
:
If ReplicationSourceIdentifier
identifies an encrypted source, then Amazon RDS will use the encryption key used to encrypt the source. Otherwise, Amazon RDS will use your default encryption key.
If the StorageEncrypted
parameter is enabled and ReplicationSourceIdentifier
isn't specified, then Amazon RDS will use your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
If you create a read replica of an encrypted DB cluster in another AWS Region, you must set KmsKeyId
to a KMS key ID that is valid in the destination AWS Region. This key is used to encrypt the read replica in that AWS Region.
A URL that contains a Signature Version 4 signed request for the CreateDBCluster
action to be called in the source AWS Region where the DB cluster is replicated from. You only need to specify PreSignedUrl
when you are performing cross-region replication from an encrypted DB cluster.
The pre-signed URL must be a valid request for the CreateDBCluster
API action that can be executed in the source AWS Region that contains the encrypted DB cluster to be copied.
The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster in the destination AWS Region. This should refer to the same KMS key for both the CreateDBCluster
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that Aurora read replica will be created in.
ReplicationSourceIdentifier
- The DB cluster identifier for the encrypted DB cluster to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster from the us-west-2 AWS Region, then your ReplicationSourceIdentifier
would look like Example: arn:aws:rds:us-west-2:123456789012:cluster:aurora-cluster1
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
global
engine mode only applies for global database clusters created with Aurora MySQL version 5.6.10a. For higher Aurora MySQL versions, the clusters in a global database use provisioned
engine mode.
Limitations and requirements apply to some DB engine modes. For more information, see the following sections in the Amazon Aurora User Guide:
The global cluster ID of an Aurora cluster that becomes the primary cluster in the new global database cluster.
", - "CreateDBClusterMessage$Domain": "The Active Directory directory ID to create the DB cluster in.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Using Kerberos Authentication for Aurora MySQL in the Amazon Aurora User Guide.
", + "CreateDBClusterMessage$Domain": "The Active Directory directory ID to create the DB cluster in.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Kerberos Authentication in the Amazon Aurora User Guide.
", "CreateDBClusterMessage$DomainIAMRoleName": "Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "CreateDBClusterParameterGroupMessage$DBClusterParameterGroupName": "The name of the DB cluster parameter group.
Constraints:
Must match the name of an existing DB cluster parameter group.
This value is stored as a lowercase string.
The DB cluster parameter group family name. A DB cluster parameter group can be associated with one and only one DB cluster parameter group family, and can be applied only to a DB cluster running a database engine and engine version compatible with that DB cluster parameter group family.
Aurora MySQL
Example: aurora5.6
, aurora-mysql5.7
Aurora PostgreSQL
Example: aurora-postgresql9.6
The ARN from the key store with which to associate the instance for TDE encryption.
", "CreateDBInstanceMessage$TdeCredentialPassword": "The password for the given ARN from the key store in order to access the device.
", "CreateDBInstanceMessage$KmsKeyId": "The AWS KMS key identifier for an encrypted DB instance.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB instance with the same AWS account that owns the KMS encryption key used to encrypt the new DB instance, then you can use the KMS key alias instead of the ARN for the KM encryption key.
Amazon Aurora
Not applicable. The KMS key identifier is managed by the DB cluster. For more information, see CreateDBCluster
.
If StorageEncrypted
is enabled, and you do not specify a value for the KmsKeyId
parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The Active Directory directory ID to create the DB instance in. Currently, only Microsoft SQL Server and Oracle DB instances can be created in an Active Directory Domain.
For Microsoft SQL Server DB instances, Amazon RDS can use Windows Authentication to authenticate users that connect to the DB instance. For more information, see Using Windows Authentication with an Amazon RDS DB Instance Running Microsoft SQL Server in the Amazon RDS User Guide.
For Oracle DB instance, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB instance. For more information, see Using Kerberos Authentication with Amazon RDS for Oracle in the Amazon RDS User Guide.
", + "CreateDBInstanceMessage$Domain": "The Active Directory directory ID to create the DB instance in. Currently, only Microsoft SQL Server and Oracle DB instances can be created in an Active Directory Domain.
For Microsoft SQL Server DB instances, Amazon RDS can use Windows Authentication to authenticate users that connect to the DB instance. For more information, see Using Windows Authentication with an Amazon RDS DB Instance Running Microsoft SQL Server in the Amazon RDS User Guide.
For Oracle DB instances, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB instance. For more information, see Using Kerberos Authentication with Amazon RDS for Oracle in the Amazon RDS User Guide.
", "CreateDBInstanceMessage$MonitoringRoleArn": "The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess
. For information on creating a monitoring role, go to Setting Up and Enabling Enhanced Monitoring in the Amazon RDS User Guide.
If MonitoringInterval
is set to a value other than 0, then you must supply a MonitoringRoleArn
value.
Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "CreateDBInstanceMessage$Timezone": "The time zone of the DB instance. The time zone parameter is currently supported only by Microsoft SQL Server.
", "CreateDBInstanceMessage$PerformanceInsightsKMSKeyId": "The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you do not specify a value for PerformanceInsightsKMSKeyId
, then Amazon RDS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The DB instance identifier of the Read Replica. This identifier is the unique key that identifies a DB instance. This parameter is stored as a lowercase string.
", - "CreateDBInstanceReadReplicaMessage$SourceDBInstanceIdentifier": "The identifier of the DB instance that will act as the source for the Read Replica. Each DB instance can have up to five Read Replicas.
Constraints:
Must be the identifier of an existing MySQL, MariaDB, Oracle, or PostgreSQL DB instance.
Can specify a DB instance that is a MySQL Read Replica only if the source is running MySQL 5.6 or later.
For the limitations of Oracle Read Replicas, see Read Replica Limitations with Oracle in the Amazon RDS User Guide.
Can specify a DB instance that is a PostgreSQL DB instance only if the source is running PostgreSQL 9.3.5 or later (9.4.7 and higher for cross-region replication).
The specified DB instance must have automatic backups enabled, its backup retention period must be greater than 0.
If the source DB instance is in the same AWS Region as the Read Replica, specify a valid DB instance identifier.
If the source DB instance is in a different AWS Region than the Read Replica, specify a valid DB instance ARN. For more information, go to Constructing an ARN for Amazon RDS in the Amazon RDS User Guide.
The compute and memory capacity of the Read Replica, for example, db.m4.large
. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.
Default: Inherits from the source DB instance.
", - "CreateDBInstanceReadReplicaMessage$AvailabilityZone": "The Availability Zone (AZ) where the Read Replica will be created.
Default: A random, system-chosen Availability Zone in the endpoint's AWS Region.
Example: us-east-1d
The option group the DB instance is associated with. If omitted, the option group associated with the source instance is used.
", - "CreateDBInstanceReadReplicaMessage$DBParameterGroupName": "The name of the DB parameter group to associate with this DB instance.
If you do not specify a value for DBParameterGroupName
, then Amazon RDS uses the DBParameterGroup
of source DB instance for a same region Read Replica, or the default DBParameterGroup
for the specified DB engine for a cross region Read Replica.
Currently, specifying a parameter group for this operation is only supported for Oracle DB instances.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Specifies a DB subnet group for the DB instance. The new DB instance is created in the VPC associated with the DB subnet group. If no DB subnet group is specified, then the new DB instance isn't created in a VPC.
Constraints:
Can only be specified if the source DB instance identifier specifies a DB instance in another AWS Region.
If supplied, must match the name of an existing DBSubnetGroup.
The specified DB subnet group must be in the same AWS Region in which the operation is running.
All Read Replicas in one AWS Region that are created from the same source DB instance must either:>
Specify DB subnet groups from the same VPC. All these Read Replicas are created in the same VPC.
Not specify a DB subnet group. All these Read Replicas are created outside of any VPC.
Example: mySubnetgroup
Specifies the storage type to be associated with the Read Replica.
Valid values: standard | gp2 | io1
If you specify io1
, you must also include a value for the Iops
parameter.
Default: io1
if the Iops
parameter is specified, otherwise gp2
The DB instance identifier of the read replica. This identifier is the unique key that identifies a DB instance. This parameter is stored as a lowercase string.
", + "CreateDBInstanceReadReplicaMessage$SourceDBInstanceIdentifier": "The identifier of the DB instance that will act as the source for the read replica. Each DB instance can have up to five read replicas.
Constraints:
Must be the identifier of an existing MySQL, MariaDB, Oracle, PostgreSQL, or SQL Server DB instance.
Can specify a DB instance that is a MySQL read replica only if the source is running MySQL 5.6 or later.
For the limitations of Oracle read replicas, see Read Replica Limitations with Oracle in the Amazon RDS User Guide.
For the limitations of SQL Server read replicas, see Read Replica Limitations with Microsoft SQL Server in the Amazon RDS User Guide.
Can specify a PostgreSQL DB instance only if the source is running PostgreSQL 9.3.5 or later (9.4.7 and higher for cross-region replication).
The specified DB instance must have automatic backups enabled, that is, its backup retention period must be greater than 0.
If the source DB instance is in the same AWS Region as the read replica, specify a valid DB instance identifier.
If the source DB instance is in a different AWS Region from the read replica, specify a valid DB instance ARN. For more information, see Constructing an ARN for Amazon RDS in the Amazon RDS User Guide. This doesn't apply to SQL Server, which doesn't support cross-region replicas.
The compute and memory capacity of the read replica, for example, db.m4.large
. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.
Default: Inherits from the source DB instance.
", + "CreateDBInstanceReadReplicaMessage$AvailabilityZone": "The Availability Zone (AZ) where the read replica will be created.
Default: A random, system-chosen Availability Zone in the endpoint's AWS Region.
Example: us-east-1d
The option group the DB instance is associated with. If omitted, the option group associated with the source instance is used.
For SQL Server, you must use the option group associated with the source instance.
The name of the DB parameter group to associate with this DB instance.
If you do not specify a value for DBParameterGroupName
, then Amazon RDS uses the DBParameterGroup
of source DB instance for a same region read replica, or the default DBParameterGroup
for the specified DB engine for a cross region read replica.
Currently, specifying a parameter group for this operation is only supported for Oracle DB instances.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Specifies a DB subnet group for the DB instance. The new DB instance is created in the VPC associated with the DB subnet group. If no DB subnet group is specified, then the new DB instance isn't created in a VPC.
Constraints:
Can only be specified if the source DB instance identifier specifies a DB instance in another AWS Region.
If supplied, must match the name of an existing DBSubnetGroup.
The specified DB subnet group must be in the same AWS Region in which the operation is running.
All read replicas in one AWS Region that are created from the same source DB instance must either:>
Specify DB subnet groups from the same VPC. All these read replicas are created in the same VPC.
Not specify a DB subnet group. All these read replicas are created outside of any VPC.
Example: mySubnetgroup
Specifies the storage type to be associated with the read replica.
Valid values: standard | gp2 | io1
If you specify io1
, you must also include a value for the Iops
parameter.
Default: io1
if the Iops
parameter is specified, otherwise gp2
The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess
. For information on creating a monitoring role, go to To create an IAM role for Amazon RDS Enhanced Monitoring in the Amazon RDS User Guide.
If MonitoringInterval
is set to a value other than 0, then you must supply a MonitoringRoleArn
value.
The AWS KMS key ID for an encrypted Read Replica. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you create an encrypted Read Replica in the same AWS Region as the source DB instance, then you do not have to specify a value for this parameter. The Read Replica is encrypted with the same KMS key as the source DB instance.
If you create an encrypted Read Replica in a different AWS Region, then you must specify a KMS key for the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
You can't create an encrypted Read Replica from an unencrypted DB instance.
", - "CreateDBInstanceReadReplicaMessage$PreSignedUrl": "The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica
API action in the source AWS Region that contains the source DB instance.
You must specify this parameter when you create an encrypted Read Replica from another AWS Region by using the Amazon RDS API. Don't specify PreSignedUrl
when you are creating an encrypted Read Replica in the same AWS Region.
The presigned URL must be a valid request for the CreateDBInstanceReadReplica
API action that can be executed in the source AWS Region that contains the encrypted source DB instance. The presigned URL request must contain the following parameter values:
DestinationRegion
- The AWS Region that the encrypted Read Replica is created in. This AWS Region is the same one where the CreateDBInstanceReadReplica
action is called that contains this presigned URL.
For example, if you create an encrypted DB instance in the us-west-1 AWS Region, from a source DB instance in the us-east-2 AWS Region, then you call the CreateDBInstanceReadReplica
action in the us-east-1 AWS Region and provide a presigned URL that contains a call to the CreateDBInstanceReadReplica
action in the us-west-2 AWS Region. For this example, the DestinationRegion
in the presigned URL must be set to the us-east-1 AWS Region.
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the Read Replica in the destination AWS Region. This is the same identifier for both the CreateDBInstanceReadReplica
action that is called in the destination AWS Region, and the action contained in the presigned URL.
SourceDBInstanceIdentifier
- The DB instance identifier for the encrypted DB instance to be replicated. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are creating an encrypted Read Replica from a DB instance in the us-west-2 AWS Region, then your SourceDBInstanceIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:instance:mysql-instance1-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The AWS KMS key ID for an encrypted read replica. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you create an encrypted read replica in the same AWS Region as the source DB instance, then you do not have to specify a value for this parameter. The read replica is encrypted with the same KMS key as the source DB instance.
If you create an encrypted read replica in a different AWS Region, then you must specify a KMS key for the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
You can't create an encrypted read replica from an unencrypted DB instance.
", + "CreateDBInstanceReadReplicaMessage$PreSignedUrl": "The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica
API action in the source AWS Region that contains the source DB instance.
You must specify this parameter when you create an encrypted read replica from another AWS Region by using the Amazon RDS API. Don't specify PreSignedUrl
when you are creating an encrypted read replica in the same AWS Region.
The presigned URL must be a valid request for the CreateDBInstanceReadReplica
API action that can be executed in the source AWS Region that contains the encrypted source DB instance. The presigned URL request must contain the following parameter values:
DestinationRegion
- The AWS Region that the encrypted read replica is created in. This AWS Region is the same one where the CreateDBInstanceReadReplica
action is called that contains this presigned URL.
For example, if you create an encrypted DB instance in the us-west-1 AWS Region, from a source DB instance in the us-east-2 AWS Region, then you call the CreateDBInstanceReadReplica
action in the us-east-1 AWS Region and provide a presigned URL that contains a call to the CreateDBInstanceReadReplica
action in the us-west-2 AWS Region. For this example, the DestinationRegion
in the presigned URL must be set to the us-east-1 AWS Region.
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the read replica in the destination AWS Region. This is the same identifier for both the CreateDBInstanceReadReplica
action that is called in the destination AWS Region, and the action contained in the presigned URL.
SourceDBInstanceIdentifier
- The DB instance identifier for the encrypted DB instance to be replicated. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are creating an encrypted read replica from a DB instance in the us-west-2 AWS Region, then your SourceDBInstanceIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:instance:mysql-instance1-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a presigned URL that is a valid request for the operation that can be executed in the source AWS Region.
SourceRegion
isn't supported for SQL Server, because SQL Server on Amazon RDS doesn't support cross-region read replicas.
The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you do not specify a value for PerformanceInsightsKMSKeyId
, then Amazon RDS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The Active Directory directory ID to create the DB instance in.
For Oracle DB instances, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB instance. For more information, see Using Kerberos Authentication with Amazon RDS for Oracle in the Amazon RDS User Guide.
", + "CreateDBInstanceReadReplicaMessage$Domain": "The Active Directory directory ID to create the DB instance in.
For Oracle DB instances, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB instance. For more information, see Using Kerberos Authentication with Amazon RDS for Oracle in the Amazon RDS User Guide.
For Microsoft SQL Server DB instances, Amazon RDS can use Windows Authentication to authenticate users that connect to the DB instance. For more information, see Using Windows Authentication with an Amazon RDS DB Instance Running Microsoft SQL Server in the Amazon RDS User Guide.
", "CreateDBInstanceReadReplicaMessage$DomainIAMRoleName": "Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "CreateDBParameterGroupMessage$DBParameterGroupName": "The name of the DB parameter group.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
This value is stored as a lowercase string.
The DB parameter group family name. A DB parameter group can be associated with one and only one DB parameter group family, and can be applied only to a DB instance running a database engine and engine version compatible with that DB parameter group family.
To list all of the available parameter group families, use the following command:
aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\"
The output contains duplicates.
Contains the master username for the DB cluster.
", "DBCluster$PreferredBackupWindow": "Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the BackupRetentionPeriod
.
Specifies the weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
", - "DBCluster$ReplicationSourceIdentifier": "Contains the identifier of the source DB cluster if this DB cluster is a Read Replica.
", + "DBCluster$ReplicationSourceIdentifier": "Contains the identifier of the source DB cluster if this DB cluster is a read replica.
", "DBCluster$HostedZoneId": "Specifies the ID that Amazon Route 53 assigns when you create a hosted zone.
", "DBCluster$KmsKeyId": "If StorageEncrypted
is enabled, the AWS KMS key identifier for the encrypted DB cluster.
The AWS Region-unique, immutable identifier for the DB cluster. This identifier is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB cluster is accessed.
", "DBCluster$DBClusterArn": "The Amazon Resource Name (ARN) for the DB cluster.
", "DBCluster$CloneGroupId": "Identifies the clone group to which the DB cluster is associated.
", - "DBCluster$EngineMode": "The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
global
engine mode only applies for global database clusters created with Aurora MySQL version 5.6.10a. For higher Aurora MySQL versions, the clusters in a global database use provisioned
engine mode. To check if a DB cluster is part of a global database, use DescribeGlobalClusters
instead of checking the EngineMode
return value from DescribeDBClusters
.
The AWS KMS key identifier used for encrypting messages in the database activity stream.
", "DBCluster$ActivityStreamKinesisStreamName": "The name of the Amazon Kinesis data stream used for the database activity stream.
", "DBClusterBacktrack$DBClusterIdentifier": "Contains a user-supplied DB cluster identifier. This identifier is the unique key that identifies a DB cluster.
", @@ -3603,14 +3604,14 @@ "DBInstance$DBInstanceIdentifier": "Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.
", "DBInstance$DBInstanceClass": "Contains the name of the compute and memory capacity class of the DB instance.
", "DBInstance$Engine": "Provides the name of the database engine to be used for this DB instance.
", - "DBInstance$DBInstanceStatus": "Specifies the current state of this database.
", + "DBInstance$DBInstanceStatus": "Specifies the current state of this database.
For information about DB instance statuses, see DB Instance Status in the Amazon RDS User Guide.
", "DBInstance$MasterUsername": "Contains the master username for the DB instance.
", "DBInstance$DBName": "The meaning of this parameter differs according to the database engine you use.
MySQL, MariaDB, SQL Server, PostgreSQL
Contains the name of the initial database of this instance that was provided at create time, if one was specified when the DB instance was created. This same name is returned for the life of the DB instance.
Type: String
Oracle
Contains the Oracle System ID (SID) of the created DB instance. Not shown when the returned parameters do not apply to an Oracle DB instance.
", "DBInstance$PreferredBackupWindow": " Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the BackupRetentionPeriod
.
Specifies the name of the Availability Zone the DB instance is located in.
", "DBInstance$PreferredMaintenanceWindow": "Specifies the weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
", "DBInstance$EngineVersion": "Indicates the database engine version.
", - "DBInstance$ReadReplicaSourceDBInstanceIdentifier": "Contains the identifier of the source DB instance if this DB instance is a Read Replica.
", + "DBInstance$ReadReplicaSourceDBInstanceIdentifier": "Contains the identifier of the source DB instance if this DB instance is a read replica.
", "DBInstance$LicenseModel": "License model information for this DB instance.
", "DBInstance$CharacterSetName": "If present, specifies the name of the character set that this instance is associated with.
", "DBInstance$SecondaryAvailabilityZone": "If present, specifies the name of the secondary Availability Zone for a DB instance with multi-AZ support.
", @@ -3647,7 +3648,7 @@ "DBInstanceRole$FeatureName": "The name of the feature associated with the AWS Identity and Access Management (IAM) role. For the list of supported feature names, see DBEngineVersion
.
Describes the state of association between the IAM role and the DB instance. The Status property returns one of the following values:
ACTIVE
- the IAM role ARN is associated with the DB instance and can be used to access other AWS services on your behalf.
PENDING
- the IAM role ARN is being associated with the DB instance.
INVALID
- the IAM role ARN is associated with the DB instance, but the DB instance is unable to assume the IAM role in order to access other AWS services on your behalf.
This value is currently \"read replication.\"
", - "DBInstanceStatusInfo$Status": "Status of the DB instance. For a StatusType of Read Replica, the values can be replicating, replication stop point set, replication stop point reached, error, stopped, or terminated.
", + "DBInstanceStatusInfo$Status": "Status of the DB instance. For a StatusType of read replica, the values can be replicating, replication stop point set, replication stop point reached, error, stopped, or terminated.
", "DBInstanceStatusInfo$Message": "Details of the error if there is an error for the instance. If the instance isn't in an error state, this value is blank.
", "DBParameterGroup$DBParameterGroupName": "Provides the name of the DB parameter group.
", "DBParameterGroup$DBParameterGroupFamily": "Provides the name of the DB parameter group family that this DB parameter group is compatible with.
", @@ -3660,7 +3661,7 @@ "DBParameterGroupsMessage$Marker": " An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The identifier for the proxy. This name must be unique for all proxies owned by your AWS account in the specified AWS Region.
", "DBProxy$DBProxyArn": "The Amazon Resource Name (ARN) for the proxy.
", - "DBProxy$EngineFamily": "Currently, this value is always MYSQL
. The engine family applies to both RDS MySQL and Aurora MySQL.
The engine family applies to MySQL and PostgreSQL for both RDS and Aurora.
", "DBProxy$RoleArn": "The Amazon Resource Name (ARN) for the IAM role that the proxy uses to access Amazon Secrets Manager.
", "DBProxy$Endpoint": "The endpoint that you can use to connect to the proxy. You include the endpoint value in the connection string for a database client application.
", "DBProxyTarget$TargetArn": "The Amazon Resource Name (ARN) for the RDS DB instance or Aurora DB cluster.
", @@ -3716,7 +3717,7 @@ "DeleteDBClusterSnapshotMessage$DBClusterSnapshotIdentifier": "The identifier of the DB cluster snapshot to delete.
Constraints: Must be the name of an existing DB cluster snapshot in the available
state.
The identifier for the source DB instance, which can't be changed and which is unique to an AWS Region.
", "DeleteDBInstanceMessage$DBInstanceIdentifier": "The DB instance identifier for the DB instance to be deleted. This parameter isn't case-sensitive.
Constraints:
Must match the name of an existing DB instance.
The DBSnapshotIdentifier
of the new DBSnapshot
created when the SkipFinalSnapshot
parameter is disabled.
Specifying this parameter and also specifying to skip final DB snapshot creation in SkipFinalShapshot results in an error.
Constraints:
Must be 1 to 255 letters or numbers.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Can't be specified when deleting a Read Replica.
The DBSnapshotIdentifier
of the new DBSnapshot
created when the SkipFinalSnapshot
parameter is disabled.
Specifying this parameter and also specifying to skip final DB snapshot creation in SkipFinalShapshot results in an error.
Constraints:
Must be 1 to 255 letters or numbers.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Can't be specified when deleting a read replica.
The name of the DB parameter group.
Constraints:
Must be the name of an existing DB parameter group
You can't delete a default DB parameter group
Can't be associated with any DB instances
The name of the DB proxy to delete.
", "DeleteDBSecurityGroupMessage$DBSecurityGroupName": "The name of the DB security group to delete.
You can't delete the default DB security group.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Must not be \"Default\"
The identifier of the snapshot export task to be described.
", "DescribeExportTasksMessage$SourceArn": "The Amazon Resource Name (ARN) of the snapshot exported to Amazon S3.
", "DescribeExportTasksMessage$Marker": " An optional pagination token provided by a previous DescribeExportTasks
request. If you specify this parameter, the response includes only records beyond the marker, up to the value specified by the MaxRecords
parameter.
The maximum number of records to include in the response. If more records exist than the specified value, a pagination token called a marker is included in the response. You can use the marker in a later DescribeExportTasks
request to retrieve the remaining results.
Default: 100
Constraints: Minimum 20, maximum 100.
", "DescribeGlobalClustersMessage$GlobalClusterIdentifier": "The user-supplied DB cluster identifier. If this parameter is specified, information from only the specific DB cluster is returned. This parameter isn't case-sensitive.
Constraints:
If supplied, must match an existing DBClusterIdentifier.
An optional pagination token provided by a previous DescribeGlobalClusters
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The installation medium ID.
", @@ -3945,7 +3945,7 @@ "ModifyDBInstanceMessage$LicenseModel": "The license model for the DB instance.
Valid values: license-included
| bring-your-own-license
| general-public-license
Indicates that the DB instance should be associated with the specified option group. Changing this parameter doesn't result in an outage except in the following case and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is enabled for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance
", "ModifyDBInstanceMessage$NewDBInstanceIdentifier": " The new DB instance identifier for the DB instance when renaming a DB instance. When you change the DB instance identifier, an instance reboot occurs immediately if you enable ApplyImmediately
, or will occur during the next maintenance window if you disable Apply Immediately. This value is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
The first character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: mydbinstance
Specifies the storage type to be associated with the DB instance.
If you specify Provisioned IOPS (io1
), you must also include a value for the Iops
parameter.
If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.
Valid values: standard | gp2 | io1
Default: io1
if the Iops
parameter is specified, otherwise gp2
Specifies the storage type to be associated with the DB instance.
If you specify Provisioned IOPS (io1
), you must also include a value for the Iops
parameter.
If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.
Valid values: standard | gp2 | io1
Default: io1
if the Iops
parameter is specified, otherwise gp2
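A hedged sketch of the storage migration described above: switching an instance to `io1` together with the required `Iops` value. The instance identifier and IOPS figure are placeholders, and the client setup assumes the same v0.21-era request/`Send` pattern as the earlier sketch.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		panic(err)
	}
	svc := rds.New(cfg)

	// Switching to io1 requires an Iops value; the change may run as a long
	// storage migration during which the instance stays available.
	input := &rds.ModifyDBInstanceInput{
		DBInstanceIdentifier: aws.String("mydbinstance"), // placeholder
		StorageType:          aws.String("io1"),
		Iops:                 aws.Int64(1000),
		ApplyImmediately:     aws.Bool(true),
	}
	if _, err := svc.ModifyDBInstanceRequest(input).Send(context.TODO()); err != nil {
		panic(err)
	}
}
```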
The ARN from the key store with which to associate the instance for TDE encryption.
", "ModifyDBInstanceMessage$TdeCredentialPassword": "The password for the given ARN from the key store in order to access the device.
", "ModifyDBInstanceMessage$CACertificateIdentifier": "Indicates the certificate that needs to be associated with the instance.
", @@ -4037,8 +4037,8 @@ "PendingModifiedValues$DBSubnetGroupName": "The new DB subnet group for the DB instance.
", "ProcessorFeature$Name": "The name of the processor feature. Valid names are coreCount
and threadsPerCore
.
The value of a processor feature name.
", - "PromoteReadReplicaDBClusterMessage$DBClusterIdentifier": "The identifier of the DB cluster Read Replica to promote. This parameter isn't case-sensitive.
Constraints:
Must match the identifier of an existing DBCluster Read Replica.
Example: my-cluster-replica1
The DB instance identifier. This value is stored as a lowercase string.
Constraints:
Must match the identifier of an existing Read Replica DB instance.
Example: mydbinstance
The identifier of the DB cluster read replica to promote. This parameter isn't case-sensitive.
Constraints:
Must match the identifier of an existing DB cluster read replica.
Example: my-cluster-replica1
The DB instance identifier. This value is stored as a lowercase string.
Constraints:
Must match the identifier of an existing read replica DB instance.
Example: mydbinstance
The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod
parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.
Constraints:
Must be in the format hh24:mi-hh24:mi
.
Must be in Universal Coordinated Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The ID of the Reserved DB instance offering to purchase.
Example: 438012d3-4052-4cc7-b2e3-8d3372e0e706
", "PurchaseReservedDBInstancesOfferingMessage$ReservedDBInstanceId": "Customer-specified identifier to track this reservation.
Example: myreservationID
", @@ -4098,7 +4098,7 @@ "RestoreDBClusterFromS3Message$S3BucketName": "The name of the Amazon S3 bucket that contains the data used to create the Amazon Aurora DB cluster.
", "RestoreDBClusterFromS3Message$S3Prefix": "The prefix for all of the file names that contain the data used to create the Amazon Aurora DB cluster. If you do not specify a SourceS3Prefix value, then the Amazon Aurora DB cluster is created by using all of the files in the Amazon S3 bucket.
", "RestoreDBClusterFromS3Message$S3IngestionRoleArn": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that authorizes Amazon RDS to access the Amazon S3 bucket on your behalf.
", - "RestoreDBClusterFromS3Message$Domain": "Specify the Active Directory directory ID to restore the DB cluster in. The domain must be created prior to this operation.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Using Kerberos Authentication for Aurora MySQL in the Amazon Aurora User Guide.
", + "RestoreDBClusterFromS3Message$Domain": "Specify the Active Directory directory ID to restore the DB cluster in. The domain must be created prior to this operation.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Kerberos Authentication in the Amazon Aurora User Guide.
", "RestoreDBClusterFromS3Message$DomainIAMRoleName": "Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "RestoreDBClusterFromSnapshotMessage$DBClusterIdentifier": "The name of the DB cluster to create from the DB snapshot or DB cluster snapshot. This parameter isn't case-sensitive.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Example: my-snapshot-id
The identifier for the DB snapshot or DB cluster snapshot to restore from.
You can use either the name or the Amazon Resource Name (ARN) to specify a DB cluster snapshot. However, you can use only the ARN to specify a DB snapshot.
Constraints:
Must match the identifier of an existing Snapshot.
The name of the option group for the new DB cluster.
", "RestoreDBClusterToPointInTimeMessage$KmsKeyId": "The AWS KMS key identifier to use when restoring an encrypted DB cluster from an encrypted DB cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
You can restore to a new DB cluster and encrypt the new DB cluster with a KMS key that is different than the KMS key used to encrypt the source DB cluster. The new DB cluster is encrypted with the KMS key identified by the KmsKeyId
parameter.
If you don't specify a value for the KmsKeyId
parameter, then the following occurs:
If the DB cluster is encrypted, then the restored DB cluster is encrypted using the KMS key that was used to encrypt the source DB cluster.
If the DB cluster isn't encrypted, then the restored DB cluster isn't encrypted.
If DBClusterIdentifier
refers to a DB cluster that isn't encrypted, then the restore request is rejected.
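A hedged sketch of the `KmsKeyId` behavior above: restoring a cluster to a point in time while re-encrypting it under a different key. All identifiers and the key ARN are placeholders; field names are assumed to mirror the `RestoreDBClusterToPointInTimeMessage` members.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		panic(err)
	}
	svc := rds.New(cfg)

	// Restore to the latest restorable time and encrypt the new cluster
	// under a different CMK than the source cluster used.
	input := &rds.RestoreDBClusterToPointInTimeInput{
		DBClusterIdentifier:       aws.String("my-restored-cluster"), // placeholder
		SourceDBClusterIdentifier: aws.String("my-source-cluster"),   // placeholder
		KmsKeyId:                  aws.String("arn:aws:kms:us-east-1:123456789012:key/example"), // placeholder ARN
		UseLatestRestorableTime:   aws.Bool(true),
	}
	if _, err := svc.RestoreDBClusterToPointInTimeRequest(input).Send(context.TODO()); err != nil {
		panic(err)
	}
}
```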
The name of the DB cluster parameter group to associate with this DB cluster. If this argument is omitted, the default DB cluster parameter group for the specified engine is used.
Constraints:
If supplied, must match the name of an existing DB cluster parameter group.
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Specify the Active Directory directory ID to restore the DB cluster in. The domain must be created prior to this operation.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Using Kerberos Authentication for Aurora MySQL in the Amazon Aurora User Guide.
", + "RestoreDBClusterToPointInTimeMessage$Domain": "Specify the Active Directory directory ID to restore the DB cluster in. The domain must be created prior to this operation.
For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Kerberos Authentication in the Amazon Aurora User Guide.
", "RestoreDBClusterToPointInTimeMessage$DomainIAMRoleName": "Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "RestoreDBInstanceFromDBSnapshotMessage$DBInstanceIdentifier": "Name of the DB instance to create from the DB snapshot. This parameter isn't case-sensitive.
Constraints:
Must contain from 1 to 63 numbers, letters, or hyphens
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Example: my-snapshot-id
The identifier for the DB snapshot to restore from.
Constraints:
Must match the identifier of an existing DBSnapshot.
If you are restoring from a shared manual DB snapshot, the DBSnapshotIdentifier
must be the ARN of the shared DB snapshot.
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with \"aws:\" or \"rds:\". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").
", "Tag$Value": "A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with \"aws:\" or \"rds:\". The string can only contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").
", + "TargetHealth$Description": "A description of the health of the RDS Proxy target. If the State
is AVAILABLE
, a description is not included.
The name of the time zone.
", "UpgradeTarget$Engine": "The name of the upgrade target database engine.
", "UpgradeTarget$EngineVersion": "The version number of the upgrade target database engine.
", @@ -4248,13 +4249,13 @@ "DBProxy$VpcSubnetIds": "The EC2 subnet IDs for the proxy.
", "DeregisterDBProxyTargetsRequest$DBInstanceIdentifiers": "One or more DB instance identifiers.
", "DeregisterDBProxyTargetsRequest$DBClusterIdentifiers": "One or more DB cluster identifiers.
", - "ExportTask$ExportOnly": "The data exported from the snapshot. Valid values are the following:
database
- Export all the data of the snapshot.
database.table [table-name]
- Export a table of the snapshot.
database.schema [schema-name]
- Export a database schema of the snapshot. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
database.schema.table [table-name]
- Export a table of the database schema. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
The data exported from the snapshot. Valid values are the following:
database
- Export all the data from a specified database.
database.table
table-name - Export a table of the snapshot. This format is valid only for RDS for MySQL, RDS for MariaDB, and Aurora MySQL.
database.schema
schema-name - Export a database schema of the snapshot. This format is valid only for RDS for PostgreSQL and Aurora PostgreSQL.
database.schema.table
table-name - Export a table of the database schema. This format is valid only for RDS for PostgreSQL and Aurora PostgreSQL.
List of DB instance identifiers that are part of the custom endpoint group.
", "ModifyDBClusterEndpointMessage$ExcludedMembers": "List of DB instance identifiers that aren't part of the custom endpoint group. All other eligible instances are reachable through the custom endpoint. Only relevant if the list of static members is empty.
", "ModifyDBProxyRequest$SecurityGroups": "The new list of security groups for the DBProxy
.
One or more DB instance identifiers.
", "RegisterDBProxyTargetsRequest$DBClusterIdentifiers": "One or more DB cluster identifiers.
", - "StartExportTaskMessage$ExportOnly": "The data to be exported from the snapshot. If this parameter is not provided, all the snapshot data is exported. Valid values are the following:
database
- Export all the data of the snapshot.
database.table [table-name]
- Export a table of the snapshot.
database.schema [schema-name]
- Export a database schema of the snapshot. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
database.schema.table [table-name]
- Export a table of the database schema. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
The data to be exported from the snapshot. If this parameter is not provided, all the snapshot data is exported. Valid values are the following:
database
- Export all the data from a specified database.
database.table
table-name - Export a table of the snapshot. This format is valid only for RDS for MySQL, RDS for MariaDB, and Aurora MySQL.
database.schema
schema-name - Export a database schema of the snapshot. This format is valid only for RDS for PostgreSQL and Aurora PostgreSQL.
database.schema.table
table-name - Export a table of the database schema. This format is valid only for RDS for PostgreSQL and Aurora PostgreSQL.
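A hedged sketch of `StartExportTask` using the `ExportOnly` formats listed above (a whole database plus one schema-qualified table); every ARN and identifier is a placeholder, and the client setup assumes the v0.21-era request/`Send` pattern.

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		panic(err)
	}
	svc := rds.New(cfg)

	// Export only one database and one schema-qualified table from the snapshot.
	input := &rds.StartExportTaskInput{
		ExportTaskIdentifier: aws.String("my-snapshot-export"),                                  // placeholder
		SourceArn:            aws.String("arn:aws:rds:us-east-1:123456789012:snapshot:my-snap"), // placeholder ARN
		S3BucketName:         aws.String("my-export-bucket"),                                    // placeholder
		IamRoleArn:           aws.String("arn:aws:iam::123456789012:role/rds-s3-export"),        // placeholder ARN
		KmsKeyId:             aws.String("arn:aws:kms:us-east-1:123456789012:key/example"),      // placeholder ARN
		ExportOnly:           []string{"mydb", "mydb.myschema.mytable"},
	}
	if _, err := svc.StartExportTaskRequest(input).Send(context.TODO()); err != nil {
		panic(err)
	}
}
```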
An arbitrary number of DBProxyTargetGroup
objects, containing details of the corresponding target groups.
This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Information about the connection health of an RDS Proxy target.
", + "refs": { + "DBProxyTarget$TargetHealth": "Information about the connection health of the RDS Proxy target.
" + } + }, + "TargetHealthReason": { + "base": null, + "refs": { + "TargetHealth$Reason": "The reason for the current health State
of the RDS Proxy target.
One or more DBProxyTarget
objects that are created when you register targets with a target group.
The current state of the connection health lifecycle for the RDS Proxy target. The following is a typical lifecycle example for the states of an RDS Proxy target:
registering
> unavailable
> available
> unavailable
> available
A list of EC2 VPC security groups to associate with this DB cluster.
", "CreateDBInstanceMessage$VpcSecurityGroupIds": "A list of Amazon EC2 VPC security groups to associate with this DB instance.
Amazon Aurora
Not applicable. The associated list of EC2 VPC security groups is managed by the DB cluster.
Default: The default EC2 VPC security group for the DB subnet group's VPC.
", - "CreateDBInstanceReadReplicaMessage$VpcSecurityGroupIds": "A list of EC2 VPC security groups to associate with the Read Replica.
Default: The default EC2 VPC security group for the DB subnet group's VPC.
", + "CreateDBInstanceReadReplicaMessage$VpcSecurityGroupIds": "A list of EC2 VPC security groups to associate with the read replica.
Default: The default EC2 VPC security group for the DB subnet group's VPC.
", "ModifyDBClusterMessage$VpcSecurityGroupIds": "A list of VPC security groups that the DB cluster will belong to.
", "ModifyDBInstanceMessage$VpcSecurityGroupIds": "A list of EC2 VPC security groups to authorize on this DB instance. This change is asynchronously applied as soon as possible.
Amazon Aurora
Not applicable. The associated list of EC2 VPC security groups is managed by the DB cluster. For more information, see ModifyDBCluster
.
Constraints:
If supplied, must match existing VpcSecurityGroupIds.
A list of VpcSecurityGroupMembership name strings used for this option.
", diff --git a/models/apis/redshift/2012-12-01/docs-2.json b/models/apis/redshift/2012-12-01/docs-2.json index c593167e5a9..b08b4e765ae 100644 --- a/models/apis/redshift/2012-12-01/docs-2.json +++ b/models/apis/redshift/2012-12-01/docs-2.json @@ -19,7 +19,7 @@ "CreateHsmConfiguration": "Creates an HSM configuration that contains the information required by an Amazon Redshift cluster to store and use database encryption keys in a Hardware Security Module (HSM). After creating the HSM configuration, you can specify it as a parameter when creating a cluster. The cluster will then store its encryption keys in the HSM.
In addition to creating an HSM configuration, you must also create an HSM client certificate. For more information, go to Hardware Security Modules in the Amazon Redshift Cluster Management Guide.
", "CreateScheduledAction": "Creates a scheduled action. A scheduled action contains a schedule and an Amazon Redshift API action. For example, you can create a schedule of when to run the ResizeCluster
API operation.
Creates a snapshot copy grant that permits Amazon Redshift to use a customer master key (CMK) from AWS Key Management Service (AWS KMS) to encrypt copied snapshots in a destination region.
For more information about managing snapshot copy grants, go to Amazon Redshift Database Encryption in the Amazon Redshift Cluster Management Guide.
", - "CreateSnapshotSchedule": "Creates a snapshot schedule with the rate of every 12 hours.
", + "CreateSnapshotSchedule": "Create a snapshot schedule that can be associated to a cluster and which overrides the default system backup schedule.
", "CreateTags": "Adds tags to a cluster.
A resource can have up to 50 tags. If you try to create more than 50 tags for a resource, you will receive an error and the attempt will fail.
If you specify a key that already exists for the resource, the value for that key will be updated with the new value.
", "DeleteCluster": "Deletes a previously provisioned cluster without its final snapshot being created. A successful response from the web service indicates that the request was received correctly. Use DescribeClusters to monitor the status of the deletion. The delete operation cannot be canceled or reverted once submitted. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
If you want to shut down the cluster and retain it for future use, set SkipFinalClusterSnapshot to false
and specify a name for FinalClusterSnapshotIdentifier. You can later restore this snapshot to resume using the cluster. If a final cluster snapshot is requested, the status of the cluster will be \"final-snapshot\" while the snapshot is being taken, then it's \"deleting\" once Amazon Redshift begins deleting the cluster.
For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", "DeleteClusterParameterGroup": "Deletes a specified Amazon Redshift parameter group.
You cannot delete a parameter group if it is associated with a cluster.
Allows you to purchase reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings. You can call the DescribeReservedNodeOfferings API to obtain the available reserved node offerings. You can call this API by providing a specific reserved node offering and the number of nodes you want to reserve.
For more information about reserved node offerings, go to Purchasing Reserved Nodes in the Amazon Redshift Cluster Management Guide.
", "RebootCluster": "Reboots a cluster. This action is taken as soon as possible. It results in a momentary outage to the cluster, during which the cluster status is set to rebooting
. A cluster event is created when the reboot is completed. Any pending cluster modifications (see ModifyCluster) are applied at this reboot. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
Sets one or more parameters of the specified parameter group to their default values and sets the source values of the parameters to \"engine-default\". To reset the entire parameter group specify the ResetAllParameters parameter. For parameter changes to take effect you must reboot any associated clusters.
", - "ResizeCluster": "Changes the size of the cluster. You can change the cluster's type, or change the number or type of nodes. The default behavior is to use the elastic resize method. With an elastic resize, your cluster is available for read and write operations more quickly than with the classic resize method.
Elastic resize operations have the following restrictions:
You can only resize clusters of the following types:
dc2.large
dc2.8xlarge
ds2.xlarge
ds2.8xlarge
ra3.16xlarge
The type of nodes that you add must match the node type for the cluster.
Changes the size of the cluster. You can change the cluster's type, or change the number or type of nodes. The default behavior is to use the elastic resize method. With an elastic resize, your cluster is available for read and write operations more quickly than with the classic resize method.
Elastic resize operations have the following restrictions:
You can only resize clusters of the following types:
dc2.large
dc2.8xlarge
ds2.xlarge
ds2.8xlarge
ra3.4xlarge
ra3.16xlarge
The type of nodes that you add must match the node type for the cluster.
Creates a new cluster from a snapshot. By default, Amazon Redshift creates the resulting cluster with the same configuration as the original cluster from which the snapshot was created, except that the new cluster is created with the default cluster security and parameter groups. After Amazon Redshift creates the cluster, you can use the ModifyCluster API to associate a different security group and different parameter group with the restored cluster. If you are using a DS node type, you can also choose to change to another DS node type of the same size during restore.
If you restore a cluster into a VPC, you must provide a cluster subnet group where you want the cluster restored.
For more information about working with snapshots, go to Amazon Redshift Snapshots in the Amazon Redshift Cluster Management Guide.
", "RestoreTableFromClusterSnapshot": "Creates a new table from a table in an Amazon Redshift cluster snapshot. You must create the new table within the Amazon Redshift cluster that the snapshot was taken from.
You cannot use RestoreTableFromClusterSnapshot
to restore a table with the same name as an existing table in an Amazon Redshift cluster. That is, you cannot overwrite an existing table in a cluster with a restored table. If you want to replace your original table with a new, restored table, then rename or drop your original table before you call RestoreTableFromClusterSnapshot
. When you have renamed your original table, then you can pass the original name of the table as the NewTableName
parameter value in the call to RestoreTableFromClusterSnapshot
. This way, you can replace the original table with the table created from the snapshot.
Resumes a paused cluster.
", @@ -2373,7 +2373,7 @@ "CreateClusterMessage$DBName": "The name of the first database to be created when the cluster is created.
To create additional databases after the cluster is created, connect to the cluster with a SQL client and use SQL commands to create a database. For more information, go to Create a Database in the Amazon Redshift Database Developer Guide.
Default: dev
Constraints:
Must contain 1 to 64 alphanumeric characters.
Must contain only lowercase letters.
Cannot be a word that is reserved by the service. A list of reserved words can be found in Reserved Words in the Amazon Redshift Database Developer Guide.
A unique identifier for the cluster. You use this identifier to refer to the cluster for any subsequent cluster operations such as deleting or modifying. The identifier also appears in the Amazon Redshift console.
Constraints:
Must contain from 1 to 63 alphanumeric characters or hyphens.
Alphabetic characters must be lowercase.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Must be unique for all clusters within an AWS account.
Example: myexamplecluster
The type of the cluster. When cluster type is specified as
single-node
, the NumberOfNodes parameter is not required.
multi-node
, the NumberOfNodes parameter is required.
Valid Values: multi-node
| single-node
Default: multi-node
The node type to be provisioned for the cluster. For information about node types, go to Working with Clusters in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.16xlarge
The node type to be provisioned for the cluster. For information about node types, go to Working with Clusters in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.4xlarge
| ra3.16xlarge
The user name associated with the master user account for the cluster that is being created.
Constraints:
Must be 1 - 128 alphanumeric characters. The user name can't be PUBLIC
.
First character must be a letter.
Cannot be a reserved word. A list of reserved words can be found in Reserved Words in the Amazon Redshift Database Developer Guide.
The password associated with the master user account for the cluster that is being created.
Constraints:
Must be between 8 and 64 characters in length.
Must contain at least one uppercase letter.
Must contain at least one lowercase letter.
Must contain one number.
Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), \" (double quote), \\, /, @, or space.
The name of a cluster subnet group to be associated with this cluster.
If this parameter is not provided the resulting cluster will be deployed outside virtual private cloud (VPC).
", @@ -2563,7 +2563,7 @@ "ModifyClusterMaintenanceMessage$DeferMaintenanceIdentifier": "A unique identifier for the deferred maintenance window.
", "ModifyClusterMessage$ClusterIdentifier": "The unique identifier of the cluster to be modified.
Example: examplecluster
The new cluster type.
When you submit your cluster resize request, your existing cluster goes into a read-only mode. After Amazon Redshift provisions a new cluster based on your resize requirements, there will be outage for a period while the old cluster is deleted and your connection is switched to the new cluster. You can use DescribeResize to track the progress of the resize request.
Valid Values: multi-node | single-node
The new node type of the cluster. If you specify a new node type, you must also specify the number of nodes parameter.
For more information about resizing clusters, go to Resizing Clusters in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.16xlarge
The new node type of the cluster. If you specify a new node type, you must also specify the number of nodes parameter.
For more information about resizing clusters, go to Resizing Clusters in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.4xlarge
| ra3.16xlarge
The new password for the cluster master user. This change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword
element exists in the PendingModifiedValues
element of the operation response.
Operations never return the password, so this operation provides a way to regain access to the master user account for a cluster if the password is lost.
Default: Uses existing setting.
Constraints:
Must be between 8 and 64 characters in length.
Must contain at least one uppercase letter.
Must contain at least one lowercase letter.
Must contain one number.
Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), \" (double quote), \\, /, @, or space.
The name of the cluster parameter group to apply to this cluster. This change is applied only after the cluster is rebooted. To reboot a cluster use RebootCluster.
Default: Uses existing setting.
Constraints: The cluster parameter group must be in the same parameter group family that matches the cluster version.
", "ModifyClusterMessage$PreferredMaintenanceWindow": "The weekly time range (in UTC) during which system maintenance can occur, if necessary. If system maintenance is necessary during the window, it may result in an outage.
This maintenance window change is made immediately. If the new maintenance window indicates the current time, there must be at least 120 minutes between the current time and end of the window in order to ensure that pending changes are applied.
Default: Uses existing setting.
Format: ddd:hh24:mi-ddd:hh24:mi, for example wed:07:30-wed:08:00
.
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes.
", diff --git a/models/apis/rekognition/2016-06-27/api-2.json b/models/apis/rekognition/2016-06-27/api-2.json index 7b8802581bf..78c909f03f5 100644 --- a/models/apis/rekognition/2016-06-27/api-2.json +++ b/models/apis/rekognition/2016-06-27/api-2.json @@ -137,6 +137,42 @@ {"shape":"ResourceNotFoundException"} ] }, + "DeleteProject":{ + "name":"DeleteProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteProjectRequest"}, + "output":{"shape":"DeleteProjectResponse"}, + "errors":[ + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParameterException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerError"}, + {"shape":"ThrottlingException"}, + {"shape":"ProvisionedThroughputExceededException"} + ] + }, + "DeleteProjectVersion":{ + "name":"DeleteProjectVersion", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteProjectVersionRequest"}, + "output":{"shape":"DeleteProjectVersionResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceInUseException"}, + {"shape":"InvalidParameterException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerError"}, + {"shape":"ThrottlingException"}, + {"shape":"ProvisionedThroughputExceededException"} + ] + }, "DeleteStreamProcessor":{ "name":"DeleteStreamProcessor", "http":{ @@ -1150,6 +1186,32 @@ "DeletedFaces":{"shape":"FaceIdList"} } }, + "DeleteProjectRequest":{ + "type":"structure", + "required":["ProjectArn"], + "members":{ + "ProjectArn":{"shape":"ProjectArn"} + } + }, + "DeleteProjectResponse":{ + "type":"structure", + "members":{ + "Status":{"shape":"ProjectStatus"} + } + }, + "DeleteProjectVersionRequest":{ + "type":"structure", + "required":["ProjectVersionArn"], + "members":{ + "ProjectVersionArn":{"shape":"ProjectVersionArn"} + } + }, + "DeleteProjectVersionResponse":{ + "type":"structure", + "members":{ + "Status":{"shape":"ProjectVersionStatus"} + } + }, "DeleteStreamProcessorRequest":{ "type":"structure", "required":["Name"], diff --git a/models/apis/rekognition/2016-06-27/docs-2.json b/models/apis/rekognition/2016-06-27/docs-2.json index 885060a4e4f..71578456625 100644 --- a/models/apis/rekognition/2016-06-27/docs-2.json +++ b/models/apis/rekognition/2016-06-27/docs-2.json @@ -9,6 +9,8 @@ "CreateStreamProcessor": "Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces in a streaming video.
Amazon Rekognition Video is a consumer of live video from Amazon Kinesis Video Streams. Amazon Rekognition Video sends analysis results to Amazon Kinesis Data Streams.
You provide as input a Kinesis video stream (Input
) and a Kinesis data stream (Output
) stream. You also specify the face recognition criteria in Settings
. For example, the collection containing faces that you want to recognize. Use Name
to assign an identifier for the stream processor. You use Name
to manage the stream processor. For example, you can start processing the source video by calling StartStreamProcessor with the Name
field.
After you have finished analyzing a streaming video, use StopStreamProcessor to stop processing. You can delete the stream processor by calling DeleteStreamProcessor.
", "DeleteCollection": "Deletes the specified collection. Note that this operation removes all faces in the collection. For an example, see delete-collection-procedure.
This operation requires permissions to perform the rekognition:DeleteCollection
action.
Deletes faces from a collection. You specify a collection ID and an array of face IDs to remove from the collection.
This operation requires permissions to perform the rekognition:DeleteFaces
action.
Deletes an Amazon Rekognition Custom Labels project. To delete a project you must first delete all versions of the model associated with the project. To delete a version of a model, see DeleteProjectVersion.
This operation requires permissions to perform the rekognition:DeleteProject
action.
Deletes a version of a model.
You must first stop the model before you can delete it. To check if a model is running, use the Status
field returned from DescribeProjectVersions. To stop a running model call StopProjectVersion.
This operation requires permissions to perform the rekognition:DeleteProjectVersion
action.
Deletes the stream processor identified by Name
. You assign the value for Name
when you create the stream processor with CreateStreamProcessor. You might not be able to use the same name for a stream processor for a few seconds after calling DeleteStreamProcessor
.
Describes the specified collection. You can use DescribeCollection
to get information, such as the number of faces indexed into a collection and the version of the model used by the collection for face detection.
For more information, see Describing a Collection in the Amazon Rekognition Developer Guide.
", "DescribeProjectVersions": "Lists and describes the models in an Amazon Rekognition Custom Labels project. You can specify up to 10 model versions in ProjectVersionArns
. If you don't specify a value, descriptions for all models are returned.
This operation requires permissions to perform the rekognition:DescribeProjectVersions
action.
The Amazon Resource Name (ARN) of the new project. You can use the ARN to configure IAM access to the project.
", "CreateProjectVersionRequest$ProjectArn": "The ARN of the Amazon Rekognition Custom Labels project that manages the model that you want to train.
", + "DeleteProjectRequest$ProjectArn": "The Amazon Resource Name (ARN) of the project that you want to delete.
", "DescribeProjectVersionsRequest$ProjectArn": "The Amazon Resource Name (ARN) of the project that contains the models you want to describe.
", "ProjectDescription$ProjectArn": "The Amazon Resource Name (ARN) of the project.
" } @@ -1318,6 +1341,7 @@ "ProjectStatus": { "base": null, "refs": { + "DeleteProjectResponse$Status": "The current status of the delete project operation.
", "ProjectDescription$Status": "The current status of the project.
" } }, @@ -1325,6 +1349,7 @@ "base": null, "refs": { "CreateProjectVersionResponse$ProjectVersionArn": "The ARN of the model version that was created. Use DescribeProjectVersion
to get the current status of the training operation.
The Amazon Resource Name (ARN) of the model version that you want to delete.
", "DetectCustomLabelsRequest$ProjectVersionArn": "The ARN of the model version that you want to use.
", "ProjectVersionDescription$ProjectVersionArn": "The Amazon Resource Name (ARN) of the model version.
", "StartProjectVersionRequest$ProjectVersionArn": "The Amazon Resource Name(ARN) of the model version that you want to start.
", @@ -1346,6 +1371,7 @@ "ProjectVersionStatus": { "base": null, "refs": { + "DeleteProjectVersionResponse$Status": "The status of the deletion operation.
", "ProjectVersionDescription$Status": "The current status of the model version.
", "StartProjectVersionResponse$Status": "The current running status of the model.
", "StopProjectVersionResponse$Status": "The current status of the stop operation.
" diff --git a/models/apis/robomaker/2018-06-29/api-2.json b/models/apis/robomaker/2018-06-29/api-2.json index 5f9bb364851..4eba176c6a6 100644 --- a/models/apis/robomaker/2018-06-29/api-2.json +++ b/models/apis/robomaker/2018-06-29/api-2.json @@ -718,6 +718,18 @@ "min":1, "pattern":"[a-zA-Z0-9_.\\-]*" }, + "Compute":{ + "type":"structure", + "members":{ + "simulationUnitLimit":{"shape":"SimulationUnit"} + } + }, + "ComputeResponse":{ + "type":"structure", + "members":{ + "simulationUnitLimit":{"shape":"SimulationUnit"} + } + }, "ConcurrentDeploymentException":{ "type":"structure", "members":{ @@ -921,7 +933,8 @@ "simulationApplications":{"shape":"SimulationApplicationConfigs"}, "dataSources":{"shape":"DataSourceConfigs"}, "tags":{"shape":"TagMap"}, - "vpcConfig":{"shape":"VPCConfig"} + "vpcConfig":{"shape":"VPCConfig"}, + "compute":{"shape":"Compute"} } }, "CreateSimulationJobRequests":{ @@ -948,7 +961,8 @@ "simulationApplications":{"shape":"SimulationApplicationConfigs"}, "dataSources":{"shape":"DataSources"}, "tags":{"shape":"TagMap"}, - "vpcConfig":{"shape":"VPCConfigResponse"} + "vpcConfig":{"shape":"VPCConfigResponse"}, + "compute":{"shape":"ComputeResponse"} } }, "CreatedAt":{"type":"timestamp"}, @@ -1320,7 +1334,8 @@ "dataSources":{"shape":"DataSources"}, "tags":{"shape":"TagMap"}, "vpcConfig":{"shape":"VPCConfigResponse"}, - "networkInterface":{"shape":"NetworkInterface"} + "networkInterface":{"shape":"NetworkInterface"}, + "compute":{"shape":"ComputeResponse"} } }, "EnvironmentVariableKey":{ @@ -2000,7 +2015,8 @@ "dataSources":{"shape":"DataSources"}, "tags":{"shape":"TagMap"}, "vpcConfig":{"shape":"VPCConfigResponse"}, - "networkInterface":{"shape":"NetworkInterface"} + "networkInterface":{"shape":"NetworkInterface"}, + "compute":{"shape":"ComputeResponse"} } }, "SimulationJobBatchErrorCode":{ @@ -2083,6 +2099,7 @@ "simulationApplications":{"shape":"SimulationApplicationConfigs"}, "dataSources":{"shape":"DataSourceConfigs"}, "vpcConfig":{"shape":"VPCConfig"}, + "compute":{"shape":"Compute"}, "tags":{"shape":"TagMap"} } }, @@ -2144,6 +2161,11 @@ "pattern":"7|9|Kinetic|Melodic|Dashing" }, "SimulationTimeMillis":{"type":"long"}, + "SimulationUnit":{ + "type":"integer", + "max":15, + "min":1 + }, "Source":{ "type":"structure", "members":{ diff --git a/models/apis/robomaker/2018-06-29/docs-2.json b/models/apis/robomaker/2018-06-29/docs-2.json index eca8b84662b..41c0f56195f 100644 --- a/models/apis/robomaker/2018-06-29/docs-2.json +++ b/models/apis/robomaker/2018-06-29/docs-2.json @@ -234,6 +234,21 @@ "LaunchConfig$launchFile": "The launch file name.
" } }, + "Compute": { + "base": "Compute information for the simulation job.
", + "refs": { + "CreateSimulationJobRequest$compute": "Compute information for the simulation job.
", + "SimulationJobRequest$compute": "Compute information for the simulation job
" + } + }, + "ComputeResponse": { + "base": "Compute information for the simulation job
", + "refs": { + "CreateSimulationJobResponse$compute": "Compute information for the simulation job.
", + "DescribeSimulationJobResponse$compute": "Compute information for the simulation job.
", + "SimulationJob$compute": "Compute information for the simulation job
" + } + }, "ConcurrentDeploymentException": { "base": "The failure percentage threshold percentage was met.
", "refs": { @@ -1410,6 +1425,13 @@ "SimulationJob$simulationTimeMillis": "The simulation job execution duration in milliseconds.
" } }, + "SimulationUnit": { + "base": null, + "refs": { + "Compute$simulationUnitLimit": "The simulation unit limit. Your simulation is allocated CPU and memory proportional to the supplied simulation unit limit. A simulation unit is 1 vcpu and 2GB of memory. You are only billed for the SU utilization you consume up to the maximim value provided.
", + "ComputeResponse$simulationUnitLimit": "The simulation unit limit. Your simulation is allocated CPU and memory proportional to the supplied simulation unit limit. A simulation unit is 1 vcpu and 2GB of memory. You are only billed for the SU utilization you consume up to the maximim value provided.
" + } + }, "Source": { "base": "Information about a source.
", "refs": { diff --git a/models/apis/route53/2013-04-01/docs-2.json b/models/apis/route53/2013-04-01/docs-2.json index aa867326f09..2022293c743 100644 --- a/models/apis/route53/2013-04-01/docs-2.json +++ b/models/apis/route53/2013-04-01/docs-2.json @@ -2,18 +2,18 @@ "version": "2.0", "service": "Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.
", "operations": { - "AssociateVPCWithHostedZone": "Associates an Amazon VPC with a private hosted zone.
To perform the association, the VPC and the private hosted zone must already exist. You can't convert a public hosted zone into a private hosted zone.
If you want to associate a VPC that was created by using one AWS account with a private hosted zone that was created by using a different account, the AWS account that created the private hosted zone must first submit a CreateVPCAssociationAuthorization
request. Then the account that created the VPC must submit an AssociateVPCWithHostedZone
request.
Creates, changes, or deletes a resource record set, which contains authoritative DNS information for a specified domain name or subdomain name. For example, you can use ChangeResourceRecordSets
to create a resource record set that routes traffic for test.example.com to a web server that has an IP address of 192.0.2.44.
Change Batches and Transactional Changes
The request body must include a document with a ChangeResourceRecordSetsRequest
element. The request body contains a list of change items, known as a change batch. Change batches are considered transactional changes. When using the Amazon Route 53 API to change resource record sets, Route 53 either makes all or none of the changes in a change batch request. This ensures that Route 53 never partially implements the intended changes to the resource record sets in a hosted zone.
For example, a change batch request that deletes the CNAME
record for www.example.com and creates an alias resource record set for www.example.com. Route 53 deletes the first resource record set and creates the second resource record set in a single operation. If either the DELETE
or the CREATE
action fails, then both changes (plus any other changes in the batch) fail, and the original CNAME
record continues to exist.
Due to the nature of transactional changes, you can't delete the same resource record set more than once in a single change batch. If you attempt to delete the same change batch more than once, Route 53 returns an InvalidChangeBatch
error.
Traffic Flow
To create resource record sets for complex routing configurations, use either the traffic flow visual editor in the Route 53 console or the API actions for traffic policies and traffic policy instances. Save the configuration as a traffic policy, then associate the traffic policy with one or more domain names (such as example.com) or subdomain names (such as www.example.com), in the same hosted zone or in multiple hosted zones. You can roll back the updates if the new configuration isn't performing as expected. For more information, see Using Traffic Flow to Route DNS Traffic in the Amazon Route 53 Developer Guide.
Create, Delete, and Upsert
Use ChangeResourceRecordsSetsRequest
to perform the following actions:
CREATE
: Creates a resource record set that has the specified values.
DELETE
: Deletes an existing resource record set that has the specified values.
UPSERT
: If a resource record set does not already exist, AWS creates it. If a resource set does exist, Route 53 updates it with the values in the request.
Syntaxes for Creating, Updating, and Deleting Resource Record Sets
The syntax for a request depends on the type of resource record set that you want to create, delete, or update, such as weighted, alias, or failover. The XML elements in your request must appear in the order listed in the syntax.
For an example for each type of resource record set, see \"Examples.\"
Don't refer to the syntax in the \"Parameter Syntax\" section, which includes all of the elements for every kind of resource record set that you can create, delete, or update by using ChangeResourceRecordSets
.
Change Propagation to Route 53 DNS Servers
When you submit a ChangeResourceRecordSets
request, Route 53 propagates your changes to all of the Route 53 authoritative DNS servers. While your changes are propagating, GetChange
returns a status of PENDING
. When propagation is complete, GetChange
returns a status of INSYNC
. Changes generally propagate to all Route 53 name servers within 60 seconds. For more information, see GetChange.
Limits on ChangeResourceRecordSets Requests
For information about the limits on a ChangeResourceRecordSets
request, see Limits in the Amazon Route 53 Developer Guide.
Associates an Amazon VPC with a private hosted zone.
To perform the association, the VPC and the private hosted zone must already exist. Also, you can't convert a public hosted zone into a private hosted zone.
If you want to associate a VPC that was created by one AWS account with a private hosted zone that was created by a different account, do one of the following:
Use the AWS account that created the private hosted zone to submit a CreateVPCAssociationAuthorization request. Then use the account that created the VPC to submit an AssociateVPCWithHostedZone
request.
If a subnet in the VPC was shared with another account, you can use the account that the subnet was shared with to submit an AssociateVPCWithHostedZone
request. For more information about sharing subnets, see Working with Shared VPCs.
Creates, changes, or deletes a resource record set, which contains authoritative DNS information for a specified domain name or subdomain name. For example, you can use ChangeResourceRecordSets
to create a resource record set that routes traffic for test.example.com to a web server that has an IP address of 192.0.2.44.
Change Batches and Transactional Changes
The request body must include a document with a ChangeResourceRecordSetsRequest
element. The request body contains a list of change items, known as a change batch. Change batches are considered transactional changes. When using the Amazon Route 53 API to change resource record sets, Route 53 either makes all or none of the changes in a change batch request. This ensures that Route 53 never partially implements the intended changes to the resource record sets in a hosted zone.
For example, a change batch request that deletes the CNAME
record for www.example.com and creates an alias resource record set for www.example.com. Route 53 deletes the first resource record set and creates the second resource record set in a single operation. If either the DELETE
or the CREATE
action fails, then both changes (plus any other changes in the batch) fail, and the original CNAME
record continues to exist.
Due to the nature of transactional changes, you can't delete the same resource record set more than once in a single change batch. If you attempt to delete the same change batch more than once, Route 53 returns an InvalidChangeBatch
error.
Traffic Flow
To create resource record sets for complex routing configurations, use either the traffic flow visual editor in the Route 53 console or the API actions for traffic policies and traffic policy instances. Save the configuration as a traffic policy, then associate the traffic policy with one or more domain names (such as example.com) or subdomain names (such as www.example.com), in the same hosted zone or in multiple hosted zones. You can roll back the updates if the new configuration isn't performing as expected. For more information, see Using Traffic Flow to Route DNS Traffic in the Amazon Route 53 Developer Guide.
Create, Delete, and Upsert
Use ChangeResourceRecordsSetsRequest
to perform the following actions:
CREATE
: Creates a resource record set that has the specified values.
DELETE
: Deletes an existing resource record set that has the specified values.
UPSERT
: If a resource record set does not already exist, AWS creates it. If a resource set does exist, Route 53 updates it with the values in the request.
Syntaxes for Creating, Updating, and Deleting Resource Record Sets
The syntax for a request depends on the type of resource record set that you want to create, delete, or update, such as weighted, alias, or failover. The XML elements in your request must appear in the order listed in the syntax.
For an example for each type of resource record set, see \"Examples.\"
Don't refer to the syntax in the \"Parameter Syntax\" section, which includes all of the elements for every kind of resource record set that you can create, delete, or update by using ChangeResourceRecordSets
.
Change Propagation to Route 53 DNS Servers
When you submit a ChangeResourceRecordSets
request, Route 53 propagates your changes to all of the Route 53 authoritative DNS servers. While your changes are propagating, GetChange
returns a status of PENDING
. When propagation is complete, GetChange
returns a status of INSYNC
. Changes generally propagate to all Route 53 name servers within 60 seconds. For more information, see GetChange.
Limits on ChangeResourceRecordSets Requests
For information about the limits on a ChangeResourceRecordSets
request, see Limits in the Amazon Route 53 Developer Guide.
Adds, edits, or deletes tags for a health check or a hosted zone.
For information about using tags for cost allocation, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", - "CreateHealthCheck": "Creates a new health check.
For information about adding health checks to resource record sets, see HealthCheckId in ChangeResourceRecordSets.
ELB Load Balancers
If you're registering EC2 instances with an Elastic Load Balancing (ELB) load balancer, do not create Amazon Route 53 health checks for the EC2 instances. When you register an EC2 instance with a load balancer, you configure settings for an ELB health check, which performs a similar function to a Route 53 health check.
Private Hosted Zones
You can associate health checks with failover resource record sets in a private hosted zone. Note the following:
Route 53 health checkers are outside the VPC. To check the health of an endpoint within a VPC by IP address, you must assign a public IP address to the instance in the VPC.
You can configure a health checker to check the health of an external resource that the instance relies on, such as a database server.
You can create a CloudWatch metric, associate an alarm with the metric, and then create a health check that is based on the state of the alarm. For example, you might create a CloudWatch metric that checks the status of the Amazon EC2 StatusCheckFailed
metric, add an alarm to the metric, and then create a health check that is based on the state of the alarm. For information about creating CloudWatch metrics and alarms by using the CloudWatch console, see the Amazon CloudWatch User Guide.
Creates a new public or private hosted zone. You create records in a public hosted zone to define how you want to route traffic on the internet for a domain, such as example.com, and its subdomains (apex.example.com, acme.example.com). You create records in a private hosted zone to define how you want to route traffic for a domain and its subdomains within one or more Amazon Virtual Private Clouds (Amazon VPCs).
You can't convert a public hosted zone to a private hosted zone or vice versa. Instead, you must create a new hosted zone with the same name and create new resource record sets.
For more information about charges for hosted zones, see Amazon Route 53 Pricing.
Note the following:
You can't create a hosted zone for a top-level domain (TLD) such as .com.
For public hosted zones, Amazon Route 53 automatically creates a default SOA record and four NS records for the zone. For more information about SOA and NS records, see NS and SOA Records that Route 53 Creates for a Hosted Zone in the Amazon Route 53 Developer Guide.
If you want to use the same name servers for multiple public hosted zones, you can optionally associate a reusable delegation set with the hosted zone. See the DelegationSetId
element.
If your domain is registered with a registrar other than Route 53, you must update the name servers with your registrar to make Route 53 the DNS service for the domain. For more information, see Migrating DNS Service for an Existing Domain to Amazon Route 53 in the Amazon Route 53 Developer Guide.
When you submit a CreateHostedZone
request, the initial status of the hosted zone is PENDING
. For public hosted zones, this means that the NS and SOA records are not yet available on all Route 53 DNS servers. When the NS and SOA records are available, the status of the zone changes to INSYNC
.
Creates a new health check.
For information about adding health checks to resource record sets, see HealthCheckId in ChangeResourceRecordSets.
ELB Load Balancers
If you're registering EC2 instances with an Elastic Load Balancing (ELB) load balancer, do not create Amazon Route 53 health checks for the EC2 instances. When you register an EC2 instance with a load balancer, you configure settings for an ELB health check, which performs a similar function to a Route 53 health check.
Private Hosted Zones
You can associate health checks with failover resource record sets in a private hosted zone. Note the following:
Route 53 health checkers are outside the VPC. To check the health of an endpoint within a VPC by IP address, you must assign a public IP address to the instance in the VPC.
You can configure a health checker to check the health of an external resource that the instance relies on, such as a database server.
You can create a CloudWatch metric, associate an alarm with the metric, and then create a health check that is based on the state of the alarm. For example, you might create a CloudWatch metric that checks the status of the Amazon EC2 StatusCheckFailed
metric, add an alarm to the metric, and then create a health check that is based on the state of the alarm. For information about creating CloudWatch metrics and alarms by using the CloudWatch console, see the Amazon CloudWatch User Guide.
Creates a new public or private hosted zone. You create records in a public hosted zone to define how you want to route traffic on the internet for a domain, such as example.com, and its subdomains (apex.example.com, acme.example.com). You create records in a private hosted zone to define how you want to route traffic for a domain and its subdomains within one or more Amazon Virtual Private Clouds (Amazon VPCs).
You can't convert a public hosted zone to a private hosted zone or vice versa. Instead, you must create a new hosted zone with the same name and create new resource record sets.
For more information about charges for hosted zones, see Amazon Route 53 Pricing.
Note the following:
You can't create a hosted zone for a top-level domain (TLD) such as .com.
For public hosted zones, Route 53 automatically creates a default SOA record and four NS records for the zone. For more information about SOA and NS records, see NS and SOA Records that Route 53 Creates for a Hosted Zone in the Amazon Route 53 Developer Guide.
If you want to use the same name servers for multiple public hosted zones, you can optionally associate a reusable delegation set with the hosted zone. See the DelegationSetId
element.
If your domain is registered with a registrar other than Route 53, you must update the name servers with your registrar to make Route 53 the DNS service for the domain. For more information, see Migrating DNS Service for an Existing Domain to Amazon Route 53 in the Amazon Route 53 Developer Guide.
When you submit a CreateHostedZone
request, the initial status of the hosted zone is PENDING
. For public hosted zones, this means that the NS and SOA records are not yet available on all Route 53 DNS servers. When the NS and SOA records are available, the status of the zone changes to INSYNC
.
Creates a configuration for DNS query logging. After you create a query logging configuration, Amazon Route 53 begins to publish log data to an Amazon CloudWatch Logs log group.
DNS query logs contain information about the queries that Route 53 receives for a specified public hosted zone, such as the following:
Route 53 edge location that responded to the DNS query
Domain or subdomain that was requested
DNS record type, such as A or AAAA
DNS response code, such as NoError
or ServFail
Before you create a query logging configuration, perform the following operations.
If you create a query logging configuration using the Route 53 console, Route 53 performs these operations automatically.
Create a CloudWatch Logs log group, and make note of the ARN, which you specify when you create a query logging configuration. Note the following:
You must create the log group in the us-east-1 region.
You must use the same AWS account to create the log group and the hosted zone that you want to configure query logging for.
When you create log groups for query logging, we recommend that you use a consistent prefix, for example:
/aws/route53/hosted zone name
In the next step, you'll create a resource policy, which controls access to one or more log groups and the associated AWS resources, such as Route 53 hosted zones. There's a limit on the number of resource policies that you can create, so we recommend that you use a consistent prefix so you can use the same resource policy for all the log groups that you create for query logging.
Create a CloudWatch Logs resource policy, and give it the permissions that Route 53 needs to create log streams and to send query logs to log streams. For the value of Resource, specify the ARN for the log group that you created in the previous step. To use the same resource policy for all the CloudWatch Logs log groups that you created for query logging configurations, replace the hosted zone name with *, for example:
arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/*
You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.
When Route 53 finishes creating the configuration for DNS query logging, it does the following:
Creates a log stream for an edge location the first time that the edge location responds to DNS queries for the specified hosted zone. That log stream is used to log all queries that Route 53 responds to for that edge location.
Begins to send query logs to the applicable log stream.
The name of each log stream is in the following format:
hosted zone ID/edge location code
The edge location code is a three-letter code and an arbitrarily assigned number, for example, DFW3. The three-letter code typically corresponds with the International Air Transport Association airport code for an airport near the edge location. (These abbreviations might change in the future.) For a list of edge locations, see \"The Route 53 Global Network\" on the Route 53 Product Details page.
Query logs contain only the queries that DNS resolvers forward to Route 53. If a DNS resolver has already cached the response to a query (such as the IP address for a load balancer for example.com), the resolver will continue to return the cached response. It doesn't forward another query to Route 53 until the TTL for the corresponding resource record set expires. Depending on how many DNS queries are submitted for a resource record set, and depending on the TTL for that resource record set, query logs might contain information about only one query out of every several thousand queries that are submitted to DNS. For more information about how DNS works, see Routing Internet Traffic to Your Website or Web Application in the Amazon Route 53 Developer Guide.
For a list of the values in each query log and the format of each value, see Logging DNS Queries in the Amazon Route 53 Developer Guide.
For information about charges for query logs, see Amazon CloudWatch Pricing.
If you want Route 53 to stop sending query logs to CloudWatch Logs, delete the query logging configuration. For more information, see DeleteQueryLoggingConfig.
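A minimal sketch of creating a query logging configuration with the v0.21-era request/Send pattern follows. The hosted zone ID and log group ARN are placeholders, and it assumes the log group already exists in us-east-1 with the resource policy described above already in place.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	// The log group must already exist in us-east-1, and the CloudWatch Logs
	// resource policy must already grant Route 53 permission to write to it.
	req := svc.CreateQueryLoggingConfigRequest(&route53.CreateQueryLoggingConfigInput{
		HostedZoneId:              aws.String("Z1D633PJN98FT9"), // placeholder zone ID
		CloudWatchLogsLogGroupArn: aws.String("arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/example.com"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateQueryLoggingConfig failed: %v", err)
	}
	fmt.Println("query logging config:", *resp.QueryLoggingConfig.Id)
}
```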
Creates a delegation set (a group of four name servers) that can be reused by multiple hosted zones that were created by the same AWS account.
You can also create a reusable delegation set that uses the four name servers that are associated with an existing hosted zone. Specify the hosted zone ID in the CreateReusableDelegationSet request.
You can't associate a reusable delegation set with a private hosted zone.
For information about using a reusable delegation set to configure white label name servers, see Configuring White Label Name Servers.
The process for migrating existing hosted zones to use a reusable delegation set is comparable to the process for configuring white label name servers. You need to perform the following steps:
Create a reusable delegation set.
Recreate hosted zones, and reduce the TTL to 60 seconds or less.
Recreate resource record sets in the new hosted zones.
Change the registrar's name servers to use the name servers for the new hosted zones.
Monitor traffic for the website or application.
Change TTLs back to their original values.
If you want to migrate existing hosted zones to use a reusable delegation set, the existing hosted zones can't use any of the name servers that are assigned to the reusable delegation set. If one or more hosted zones do use one or more name servers that are assigned to the reusable delegation set, you can do one of the following:
For small numbers of hosted zones—up to a few hundred—it's relatively easy to create reusable delegation sets until you get one that has four name servers that don't overlap with any of the name servers in your hosted zones.
For larger numbers of hosted zones, the easiest solution is to use more than one reusable delegation set.
For larger numbers of hosted zones, you can also migrate hosted zones that have overlapping name servers to hosted zones that don't have overlapping name servers, then migrate the hosted zones again to use the reusable delegation set.
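The sketch below shows the shape of a CreateReusableDelegationSet call under the same v0.21-era assumptions; the caller reference and optional hosted zone ID are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	// Omit HostedZoneId to get a brand-new set of four name servers, or set it
	// to reuse the name servers already assigned to an existing hosted zone.
	req := svc.CreateReusableDelegationSetRequest(&route53.CreateReusableDelegationSetInput{
		CallerReference: aws.String("my-reusable-set-2020-04-21"), // must be unique per request
		// HostedZoneId: aws.String("Z1D633PJN98FT9"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateReusableDelegationSet failed: %v", err)
	}
	fmt.Println("delegation set:", *resp.DelegationSet.Id)
	for _, ns := range resp.DelegationSet.NameServers {
		fmt.Println("name server:", ns)
	}
}
```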
Creates a traffic policy, which you use to create multiple DNS resource record sets for one domain name (such as example.com) or one subdomain name (such as www.example.com).
CreateTrafficPolicyInstance: Creates resource record sets in a specified hosted zone based on the settings in a specified traffic policy version. In addition, CreateTrafficPolicyInstance associates the resource record sets with a specified domain name (such as example.com) or subdomain name (such as www.example.com). Amazon Route 53 responds to DNS queries for the domain or subdomain name by using the resource record sets that CreateTrafficPolicyInstance created.
Creates a new version of an existing traffic policy. When you create a new version of a traffic policy, you specify the ID of the traffic policy that you want to update and a JSON-formatted document that describes the new version. You use traffic policies to create multiple DNS resource record sets for one domain name (such as example.com) or one subdomain name (such as www.example.com). You can create a maximum of 1000 versions of a traffic policy. If you reach the limit and need to create another version, you'll need to start a new traffic policy.
CreateVPCAssociationAuthorization: Authorizes the AWS account that created a specified VPC to submit an AssociateVPCWithHostedZone request to associate the VPC with a specified hosted zone that was created by a different account. To submit a CreateVPCAssociationAuthorization request, you must use the account that created the hosted zone. After you authorize the association, use the account that created the VPC to submit an AssociateVPCWithHostedZone request.
If you want to associate multiple VPCs that you created by using one account with a hosted zone that you created by using a different account, you must submit one authorization request for each VPC.
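The following sketch only illustrates the request shapes involved in the cross-account flow, under the same v0.21-era assumptions; the zone ID, VPC ID, and region are placeholders, and in practice the two calls are made with credentials from two different accounts.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	// Run this part with credentials for the account that owns the hosted zone.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	vpc := &route53.VPC{
		VPCId:     aws.String("vpc-0123456789abcdef0"), // VPC owned by the other account
		VPCRegion: route53.VPCRegion("us-east-1"),      // VPCRegion is a typed string
	}

	_, err = svc.CreateVPCAssociationAuthorizationRequest(&route53.CreateVPCAssociationAuthorizationInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder private zone ID
		VPC:          vpc,
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateVPCAssociationAuthorization failed: %v", err)
	}

	// The association itself must be submitted by the account that owns the VPC;
	// the call is shown here only to illustrate the request shape.
	_, err = svc.AssociateVPCWithHostedZoneRequest(&route53.AssociateVPCWithHostedZoneInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"),
		VPC:          vpc,
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("AssociateVPCWithHostedZone failed: %v", err)
	}
}
```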
Deletes a health check.
Amazon Route 53 does not prevent you from deleting a health check even if the health check is associated with one or more resource record sets. If you delete a health check and you don't update the associated resource record sets, the future status of the health check can't be predicted and may change. This will affect the routing of DNS queries for your DNS failover configuration. For more information, see Replacing and Deleting Health Checks in the Amazon Route 53 Developer Guide.
If you're using AWS Cloud Map and you configured Cloud Map to create a Route 53 health check when you register an instance, you can't use the Route 53 DeleteHealthCheck command to delete the health check. The health check is deleted automatically when you deregister the instance; there can be a delay of several hours before the health check is deleted from Route 53.
Deletes a hosted zone.
If the hosted zone was created by another service, such as AWS Cloud Map, see Deleting Public Hosted Zones That Were Created by Another Service in the Amazon Route 53 Developer Guide for information about how to delete it. (The process is the same for public and private hosted zones that were created by another service.)
If you want to keep your domain registration but you want to stop routing internet traffic to your website or web application, we recommend that you delete resource record sets in the hosted zone instead of deleting the hosted zone.
If you delete a hosted zone, you can't undelete it. You must create a new hosted zone and update the name servers for your domain registration, which can require up to 48 hours to take effect. (If you delegated responsibility for a subdomain to a hosted zone and you delete the child hosted zone, you must update the name servers in the parent hosted zone.) In addition, if you delete a hosted zone, someone could hijack the domain and route traffic to their own resources using your domain name.
If you want to avoid the monthly charge for the hosted zone, you can transfer DNS service for the domain to a free DNS service. When you transfer DNS service, you have to update the name servers for the domain registration. If the domain is registered with Route 53, see UpdateDomainNameservers for information about how to replace Route 53 name servers with name servers for the new DNS service. If the domain is registered with another registrar, use the method provided by the registrar to update name servers for the domain registration. For more information, perform an internet search on \"free DNS service.\"
You can delete a hosted zone only if it contains only the default SOA record and NS resource record sets. If the hosted zone contains other resource record sets, you must delete them before you can delete the hosted zone. If you try to delete a hosted zone that contains other resource record sets, the request fails, and Route 53 returns a HostedZoneNotEmpty error. For information about deleting records from your hosted zone, see ChangeResourceRecordSets.
To verify that the hosted zone has been deleted, do one of the following:
Use the GetHostedZone action to request information about the hosted zone.
Use the ListHostedZones action to get a list of the hosted zones associated with the current AWS account.
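A minimal sketch of deleting a hosted zone and then checking the result with GetHostedZone, under the same v0.21-era assumptions; the zone ID is a placeholder and the zone must already contain only its SOA and NS records.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	zoneID := aws.String("Z1D633PJN98FT9") // placeholder; zone must contain only SOA and NS records

	if _, err := svc.DeleteHostedZoneRequest(&route53.DeleteHostedZoneInput{Id: zoneID}).Send(context.TODO()); err != nil {
		log.Fatalf("DeleteHostedZone failed (is the zone empty?): %v", err)
	}

	// Verify the deletion: GetHostedZone should now return a NoSuchHostedZone error.
	if _, err := svc.GetHostedZoneRequest(&route53.GetHostedZoneInput{Id: zoneID}).Send(context.TODO()); err != nil {
		fmt.Println("zone no longer exists:", err)
	} else {
		fmt.Println("zone still exists")
	}
}
```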
Deletes a configuration for DNS query logging. If you delete a configuration, Amazon Route 53 stops sending query logs to CloudWatch Logs. Route 53 doesn't delete any logs that are already in CloudWatch Logs.
For more information about DNS query logs, see CreateQueryLoggingConfig.
DeleteReusableDelegationSet: Deletes a reusable delegation set.
You can delete a reusable delegation set only if it isn't associated with any hosted zones.
To verify that the reusable delegation set is not associated with any hosted zones, submit a GetReusableDelegationSet request and specify the ID of the reusable delegation set that you want to delete.
DisassociateVPCFromHostedZone: Disassociates a VPC from an Amazon Route 53 private hosted zone. Note the following:
You can't disassociate the last VPC from a private hosted zone.
You can't convert a private hosted zone into a public hosted zone.
You can submit a DisassociateVPCFromHostedZone request using either the account that created the hosted zone or the account that created the VPC.
Gets the specified limit for the current account, for example, the maximum number of health checks that you can create using the account.
For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case.
You can also view account limits in AWS Trusted Advisor. Sign in to the AWS Management Console and open the Trusted Advisor console at https://console.aws.amazon.com/trustedadvisor/. Then choose Service limits in the navigation pane.
Returns the current status of a change batch request. The status is one of the following values:
PENDING indicates that the changes in this request have not propagated to all Amazon Route 53 DNS servers. This is the initial status of all change batch requests.
INSYNC indicates that the changes have propagated to all Route 53 DNS servers.
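A minimal sketch of polling GetChange until a change batch reaches INSYNC, under the same v0.21-era assumptions. The change ID is a placeholder, and the INSYNC enum constant is assumed to follow the SDK's usual ChangeStatusInsync naming.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	changeID := aws.String("C2682N5HXP0BZ4") // placeholder change ID from a prior request

	// Poll until the change batch has propagated to all Route 53 DNS servers.
	for {
		resp, err := svc.GetChangeRequest(&route53.GetChangeInput{Id: changeID}).Send(context.TODO())
		if err != nil {
			log.Fatalf("GetChange failed: %v", err)
		}
		fmt.Println("status:", resp.ChangeInfo.Status)
		if resp.ChangeInfo.Status == route53.ChangeStatusInsync {
			break
		}
		time.Sleep(10 * time.Second)
	}
}
```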
GetCheckerIpRanges still works, but we recommend that you download ip-ranges.json, which includes IP address ranges for all AWS services. For more information, see IP Address Ranges of Amazon Route 53 Servers in the Amazon Route 53 Developer Guide.
Gets information about whether a specified geographic location is supported for Amazon Route 53 geolocation resource record sets.
Use the following syntax to determine whether a continent is supported for geolocation:
GET /2013-04-01/geolocation?continentcode=two-letter abbreviation for a continent
Use the following syntax to determine whether a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code
Use the following syntax to determine whether a subdivision of a country is supported for geolocation:
GET /2013-04-01/geolocation?countrycode=two-character country code&subdivisioncode=subdivision code
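The same checks map directly onto GetGeoLocation in the SDK. A minimal sketch under the same v0.21-era assumptions, using US/WA purely as an example location:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	// Check whether a country subdivision (here: Washington State, US) is
	// supported for geolocation routing.
	resp, err := svc.GetGeoLocationRequest(&route53.GetGeoLocationInput{
		CountryCode:     aws.String("US"),
		SubdivisionCode: aws.String("WA"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("GetGeoLocation failed: %v", err)
	}
	d := resp.GeoLocationDetails
	fmt.Println(*d.CountryName, "/", *d.SubdivisionName, "is supported")
}
```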
Gets information about a specified health check.
GetHealthCheckCount: Retrieves the number of health checks that are associated with the current AWS account.
GetTrafficPolicy: Gets information about a specific traffic policy version.
GetTrafficPolicyInstance: Gets information about a specified traffic policy instance.
After you submit a CreateTrafficPolicyInstance or an UpdateTrafficPolicyInstance request, there's a brief delay while Amazon Route 53 creates the resource record sets that are specified in the traffic policy definition. For more information, see the State response element.
In the Route 53 console, traffic policy instances are known as policy records.
Gets the number of traffic policy instances that are associated with the current AWS account.
ListGeoLocations: Retrieves a list of supported geographic locations.
Countries are listed first, and continents are listed last. If Amazon Route 53 supports subdivisions for a country (for example, states or provinces), the subdivisions for that country are listed in alphabetical order immediately after the corresponding country.
For a list of supported geolocation codes, see the GeoLocation data type.
ListHealthChecks: Retrieve a list of the health checks that are associated with the current AWS account.
ListHostedZones: Retrieves a list of the public and private hosted zones that are associated with the current AWS account. The response includes a HostedZones child element for each hosted zone.
Amazon Route 53 returns a maximum of 100 items in each response. If you have a lot of hosted zones, you can use the maxitems parameter to list them in groups of up to 100.
Retrieves a list of your hosted zones in lexicographic order. The response includes a HostedZones child element for each hosted zone created by the current AWS account.
ListHostedZonesByName sorts hosted zones by name with the labels reversed. For example:
com.example.www.
Note the trailing dot, which can change the sort order in some circumstances.
If the domain name includes escape characters or Punycode, ListHostedZonesByName alphabetizes the domain name using the escaped or Punycoded value, which is the format that Amazon Route 53 saves in its database. For example, to create a hosted zone for exämple.com, you specify ex\\344mple.com for the domain name. ListHostedZonesByName alphabetizes it as:
com.ex\\344mple.
The labels are reversed and alphabetized using the escaped value. For more information about valid domain name formats, including internationalized domain names, see DNS Domain Name Format in the Amazon Route 53 Developer Guide.
Route 53 returns up to 100 items in each response. If you have a lot of hosted zones, use the MaxItems parameter to list them in groups of up to 100. The response includes values that help navigate from one group of MaxItems hosted zones to the next:
The DNSName and HostedZoneId elements in the response contain the values, if any, specified for the dnsname and hostedzoneid parameters in the request that produced the current response.
The MaxItems element in the response contains the value, if any, that you specified for the maxitems parameter in the request that produced the current response.
If the value of IsTruncated in the response is true, there are more hosted zones associated with the current AWS account.
If IsTruncated is false, this response includes the last hosted zone that is associated with the current account. The NextDNSName element and NextHostedZoneId elements are omitted from the response.
The NextDNSName and NextHostedZoneId elements in the response contain the domain name and the hosted zone ID of the next hosted zone that is associated with the current AWS account. If you want to list more hosted zones, make another call to ListHostedZonesByName, and specify the value of NextDNSName and NextHostedZoneId in the dnsname and hostedzoneid parameters, respectively.
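The pagination loop described above translates into the following minimal sketch, under the same v0.21-era assumptions; IsTruncated, NextDNSName, and NextHostedZoneId are the response fields named in the text.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	input := &route53.ListHostedZonesByNameInput{MaxItems: aws.String("100")}
	for {
		resp, err := svc.ListHostedZonesByNameRequest(input).Send(context.TODO())
		if err != nil {
			log.Fatalf("ListHostedZonesByName failed: %v", err)
		}
		for _, zone := range resp.HostedZones {
			fmt.Println(*zone.Name, *zone.Id)
		}
		if resp.IsTruncated == nil || !*resp.IsTruncated {
			break
		}
		// Feed NextDNSName/NextHostedZoneId back in to get the next group.
		input.DNSName = resp.NextDNSName
		input.HostedZoneId = resp.NextHostedZoneId
	}
}
```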
Lists the configurations for DNS query logging that are associated with the current AWS account or the configuration that is associated with a specified hosted zone.
For more information about DNS query logs, see CreateQueryLoggingConfig. Additional information, including the format of DNS query logs, appears in Logging DNS Queries in the Amazon Route 53 Developer Guide.
ListResourceRecordSets: Lists the resource record sets in a specified hosted zone.
ListResourceRecordSets returns up to 100 resource record sets at a time in ASCII order, beginning at a position specified by the name and type elements.
Sort order
ListResourceRecordSets sorts results first by DNS name with the labels reversed, for example:
com.example.www.
Note the trailing dot, which can change the sort order when the record name contains characters that appear before . (decimal 46) in the ASCII table. These characters include the following: ! \" # $ % & ' ( ) * + , -
When multiple records have the same DNS name, ListResourceRecordSets sorts results by the record type.
Specifying where to start listing records
You can use the name and type elements to specify the resource record set that the list begins with:
If you do not specify Name or Type: The results begin with the first resource record set that the hosted zone contains.
If you specify Name but not Type: The results begin with the first resource record set in the list whose name is greater than or equal to Name.
If you specify Type but not Name: Amazon Route 53 returns the InvalidInput error.
If you specify both Name and Type: The results begin with the first resource record set in the list whose name is greater than or equal to Name, and whose type is greater than or equal to Type.
Resource record sets that are PENDING
This action returns the most current version of the records. This includes records that are PENDING, and that are not yet available on all Route 53 DNS servers.
Changing resource record sets
To ensure that you get an accurate listing of the resource record sets for a hosted zone at a point in time, do not submit a ChangeResourceRecordSets request while you're paging through the results of a ListResourceRecordSets request. If you do, some pages may display results without the latest changes while other pages display results with the latest changes.
Displaying the next page of results
If a ListResourceRecordSets command returns more than one page of results, the value of IsTruncated is true. To display the next page of results, get the values of NextRecordName, NextRecordType, and NextRecordIdentifier (if any) from the response. Then submit another ListResourceRecordSets request, and specify those values for StartRecordName, StartRecordType, and StartRecordIdentifier.
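A minimal sketch of that paging pattern, under the same v0.21-era assumptions; the hosted zone ID is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	input := &route53.ListResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder zone ID
	}
	for {
		resp, err := svc.ListResourceRecordSetsRequest(input).Send(context.TODO())
		if err != nil {
			log.Fatalf("ListResourceRecordSets failed: %v", err)
		}
		for _, rrset := range resp.ResourceRecordSets {
			fmt.Println(*rrset.Name, rrset.Type)
		}
		if resp.IsTruncated == nil || !*resp.IsTruncated {
			break
		}
		// Continue from where the previous page stopped.
		input.StartRecordName = resp.NextRecordName
		input.StartRecordType = resp.NextRecordType
		input.StartRecordIdentifier = resp.NextRecordIdentifier
	}
}
```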
Retrieves a list of the reusable delegation sets that are associated with the current AWS account.
ListTrafficPolicyVersions: Gets information about all of the versions for a specified traffic policy.
Traffic policy versions are listed in numerical order by VersionNumber.
Gets a list of the VPCs that were created by other accounts and that can be associated with a specified hosted zone because you've submitted one or more CreateVPCAssociationAuthorization requests.
The response includes a VPCs element with a VPC child element for each VPC that can be associated with the hosted zone.
Gets the value that Amazon Route 53 returns in response to a DNS request for a specified record name and type. You can optionally specify the IP address of a DNS resolver, an EDNS0 client subnet IP address, and a subnet mask.
UpdateHealthCheck: Updates an existing health check. Note that some values can't be updated.
For more information about updating health checks, see Creating, Updating, and Deleting Health Checks in the Amazon Route 53 Developer Guide.
UpdateHostedZoneComment: Updates the comment for a specified hosted zone.
UpdateTrafficPolicyComment: Updates the comment for a specified traffic policy version.
UpdateTrafficPolicyInstance: Updates the resource record sets in a specified hosted zone that were created based on the settings in a specified traffic policy version.
When you update a traffic policy instance, Amazon Route 53 continues to respond to DNS queries for the root resource record set name (such as example.com) while it replaces one group of resource record sets with another. Route 53 performs the following operations:
Route 53 creates a new group of resource record sets based on the specified traffic policy. This is true regardless of how significant the differences are between the existing resource record sets and the new resource record sets.
When all of the new resource record sets have been created, Route 53 starts to respond to DNS queries for the root resource record set name (such as example.com) by using the new resource record sets.
Route 53 deletes the old group of resource record sets that are associated with the root resource record set name.
The name of the CloudWatch alarm that you want Amazon Route 53 health checkers to use to determine whether this health check is healthy.
Route 53 supports CloudWatch alarms with the following features:
Standard-resolution metrics. High-resolution metrics aren't supported. For more information, see High-Resolution Metrics in the Amazon CloudWatch User Guide.
Statistics: Average, Minimum, Maximum, Sum, and SampleCount. Extended statistics aren't supported.
Applies only to alias, failover alias, geolocation alias, latency alias, and weighted alias resource record sets: When EvaluateTargetHealth is true, an alias resource record set inherits the health of the referenced AWS resource, such as an ELB load balancer or another resource record set in the hosted zone.
Note the following:
You can't set EvaluateTargetHealth to true when the alias target is a CloudFront distribution.
If you specify an Elastic Beanstalk environment in DNSName and the environment contains an ELB load balancer, Elastic Load Balancing routes queries only to the healthy Amazon EC2 instances that are registered with the load balancer. (An environment automatically contains an ELB load balancer if it includes more than one Amazon EC2 instance.) If you set EvaluateTargetHealth to true and either no Amazon EC2 instances are healthy or the load balancer itself is unhealthy, Route 53 routes queries to other available resources that are healthy, if any.
If the environment contains a single Amazon EC2 instance, there are no special requirements.
Health checking behavior depends on the type of load balancer:
Classic Load Balancers: If you specify an ELB Classic Load Balancer in DNSName, Elastic Load Balancing routes queries only to the healthy Amazon EC2 instances that are registered with the load balancer. If you set EvaluateTargetHealth to true and either no EC2 instances are healthy or the load balancer itself is unhealthy, Route 53 routes queries to other resources.
Application and Network Load Balancers: If you specify an ELB Application or Network Load Balancer and you set EvaluateTargetHealth to true, Route 53 routes queries to the load balancer based on the health of the target groups that are associated with the load balancer:
For an Application or Network Load Balancer to be considered healthy, every target group that contains targets must contain at least one healthy target. If any target group contains only unhealthy targets, the load balancer is considered unhealthy, and Route 53 routes queries to other resources.
A target group that has no registered targets is considered unhealthy.
When you create a load balancer, you configure settings for Elastic Load Balancing health checks; they're not Route 53 health checks, but they perform a similar function. Do not create Route 53 health checks for the EC2 instances that you register with an ELB load balancer.
There are no special requirements for setting EvaluateTargetHealth to true when the alias target is an S3 bucket.
If the AWS resource that you specify in DNSName is a record or a group of records (for example, a group of weighted records) but is not another alias record, we recommend that you associate a health check with all of the records in the alias target. For more information, see What Happens When You Omit Health Checks? in the Amazon Route 53 Developer Guide.
For more information and examples, see Amazon Route 53 Health Checks and DNS Failover in the Amazon Route 53 Developer Guide.
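For illustration, a minimal ChangeResourceRecordSets sketch that UPSERTs an alias A record with EvaluateTargetHealth enabled, under the same v0.21-era assumptions. The zone ID, the load balancer DNS name, and the load balancer's hosted zone ID are placeholders, and the typed-string conversions avoid depending on exact enum constant names.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := route53.New(cfg)

	// UPSERT an alias A record for www.example.com that points at an ELB load
	// balancer and inherits the target's health.
	req := svc.ChangeResourceRecordSetsRequest(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"),
		ChangeBatch: &route53.ChangeBatch{
			Changes: []route53.Change{{
				Action: route53.ChangeAction("UPSERT"), // ChangeAction is a typed string
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name: aws.String("www.example.com"),
					Type: route53.RRType("A"), // RRType is a typed string
					AliasTarget: &route53.AliasTarget{
						DNSName:              aws.String("dualstack.my-lb-1234567890.us-east-1.elb.amazonaws.com"),
						HostedZoneId:         aws.String("Z35SXDOTRQ7X7K"), // the load balancer's zone ID
						EvaluateTargetHealth: aws.Bool(true),
					},
				},
			}},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatalf("ChangeResourceRecordSets failed: %v", err)
	}
}
```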
" + "AliasTarget$EvaluateTargetHealth": " Applies only to alias, failover alias, geolocation alias, latency alias, and weighted alias resource record sets: When EvaluateTargetHealth
is true
, an alias resource record set inherits the health of the referenced AWS resource, such as an ELB load balancer or another resource record set in the hosted zone.
Note the following:
You can't set EvaluateTargetHealth
to true
when the alias target is a CloudFront distribution.
If you specify an Elastic Beanstalk environment in DNSName
and the environment contains an ELB load balancer, Elastic Load Balancing routes queries only to the healthy Amazon EC2 instances that are registered with the load balancer. (An environment automatically contains an ELB load balancer if it includes more than one Amazon EC2 instance.) If you set EvaluateTargetHealth
to true
and either no Amazon EC2 instances are healthy or the load balancer itself is unhealthy, Route 53 routes queries to other available resources that are healthy, if any.
If the environment contains a single Amazon EC2 instance, there are no special requirements.
Health checking behavior depends on the type of load balancer:
Classic Load Balancers: If you specify an ELB Classic Load Balancer in DNSName
, Elastic Load Balancing routes queries only to the healthy Amazon EC2 instances that are registered with the load balancer. If you set EvaluateTargetHealth
to true
and either no EC2 instances are healthy or the load balancer itself is unhealthy, Route 53 routes queries to other resources.
Application and Network Load Balancers: If you specify an ELB Application or Network Load Balancer and you set EvaluateTargetHealth
to true
, Route 53 routes queries to the load balancer based on the health of the target groups that are associated with the load balancer:
For an Application or Network Load Balancer to be considered healthy, every target group that contains targets must contain at least one healthy target. If any target group contains only unhealthy targets, the load balancer is considered unhealthy, and Route 53 routes queries to other resources.
A target group that has no registered targets is considered unhealthy.
When you create a load balancer, you configure settings for Elastic Load Balancing health checks; they're not Route 53 health checks, but they perform a similar function. Do not create Route 53 health checks for the EC2 instances that you register with an ELB load balancer.
There are no special requirements for setting EvaluateTargetHealth
to true
when the alias target is an S3 bucket.
If the AWS resource that you specify in DNSName
is a record or a group of records (for example, a group of weighted records) but is not another alias record, we recommend that you associate a health check with all of the records in the alias target. For more information, see What Happens When You Omit Health Checks? in the Amazon Route 53 Developer Guide.
For more information and examples, see Amazon Route 53 Health Checks and DNS Failover in the Amazon Route 53 Developer Guide.
" } }, "AliasTarget": { - "base": "Alias resource record sets only: Information about the AWS resource, such as a CloudFront distribution or an Amazon S3 bucket, that you want to route traffic to.
If you're creating resource record sets for a private hosted zone, note the following:
You can't create an alias resource record set in a private hosted zone to route traffic to a CloudFront distribution.
Creating geolocation alias resource record sets or latency alias resource record sets in a private hosted zone is unsupported.
For information about creating failover resource record sets in a private hosted zone, see Configuring Failover in a Private Hosted Zone in the Amazon Route 53 Developer Guide.
" + "AlarmIdentifier$Region": "For the CloudWatch alarm that you want Route 53 health checkers to use to determine whether this health check is healthy, the region that the alarm was created in.
For the current list of CloudWatch regions, see Amazon CloudWatch in the AWS Service Endpoints chapter of the Amazon Web Services General Reference.
" } }, "ComparisonOperator": { @@ -311,7 +311,7 @@ "DNSName": { "base": null, "refs": { - "AliasTarget$DNSName": "Alias resource record sets only: The value that you specify depends on where you want to route queries:
Specify the applicable domain name for your API. You can get the applicable value using the AWS CLI command get-domain-names:
For regional APIs, specify the value of regionalDomainName.
For edge-optimized APIs, specify the value of distributionDomainName. This is the name of the associated CloudFront distribution, such as da1b2c3d4e5.cloudfront.net.
The name of the record that you're creating must match a custom domain name for your API, such as api.example.com.
Enter the API endpoint for the interface endpoint, such as vpce-123456789abcdef01-example-us-east-1a.elasticloadbalancing.us-east-1.vpce.amazonaws.com. For edge-optimized APIs, this is the domain name for the corresponding CloudFront distribution. You can get the value of DnsName using the AWS CLI command describe-vpc-endpoints.
Specify the domain name that CloudFront assigned when you created your distribution.
Your CloudFront distribution must include an alternate domain name that matches the name of the resource record set. For example, if the name of the resource record set is acme.example.com, your CloudFront distribution must include acme.example.com as one of the alternate domain names. For more information, see Using Alternate Domain Names (CNAMEs) in the Amazon CloudFront Developer Guide.
You can't create a resource record set in a private hosted zone to route traffic to a CloudFront distribution.
For failover alias records, you can't specify a CloudFront distribution for both the primary and secondary records. A distribution must include an alternate domain name that matches the name of the record. However, the primary and secondary records have the same name, and you can't include the same alternate domain name in more than one distribution.
If the domain name for your Elastic Beanstalk environment includes the region that you deployed the environment in, you can create an alias record that routes traffic to the environment. For example, the domain name my-environment.us-west-2.elasticbeanstalk.com is a regionalized domain name.
For environments that were created before early 2016, the domain name doesn't include the region. To route traffic to these environments, you must create a CNAME record instead of an alias record. Note that you can't create a CNAME record for the root domain name. For example, if your domain name is example.com, you can create a record that routes traffic for acme.example.com to your Elastic Beanstalk environment, but you can't create a record that routes traffic for example.com to your Elastic Beanstalk environment.
For Elastic Beanstalk environments that have regionalized subdomains, specify the CNAME attribute for the environment. You can use the following methods to get the value of the CNAME attribute:
AWS Management Console: For information about how to get the value by using the console, see Using Custom Domains with AWS Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide.
Elastic Beanstalk API: Use the DescribeEnvironments action to get the value of the CNAME attribute. For more information, see DescribeEnvironments in the AWS Elastic Beanstalk API Reference.
AWS CLI: Use the describe-environments command to get the value of the CNAME attribute. For more information, see describe-environments in the AWS CLI Command Reference.
Specify the DNS name that is associated with the load balancer. Get the DNS name by using the AWS Management Console, the ELB API, or the AWS CLI.
AWS Management Console: Go to the EC2 page, choose Load Balancers in the navigation pane, choose the load balancer, choose the Description tab, and get the value of the DNS name field.
If you're routing traffic to a Classic Load Balancer, get the value that begins with dualstack. If you're routing traffic to another type of load balancer, get the value that applies to the record type, A or AAAA.
Elastic Load Balancing API: Use DescribeLoadBalancers to get the value of DNSName. For more information, see the applicable guide:
Classic Load Balancers: DescribeLoadBalancers
Application and Network Load Balancers: DescribeLoadBalancers
AWS CLI: Use describe-load-balancers to get the value of DNSName. For more information, see the applicable guide:
Classic Load Balancers: describe-load-balancers
Application and Network Load Balancers: describe-load-balancers
Specify the DNS name for your accelerator:
Global Accelerator API: To get the DNS name, use DescribeAccelerator.
AWS CLI: To get the DNS name, use describe-accelerator.
Specify the domain name of the Amazon S3 website endpoint that you created the bucket in, for example, s3-website.us-east-2.amazonaws.com. For more information about valid values, see the table Amazon S3 Website Endpoints in the Amazon Web Services General Reference. For more information about using S3 buckets for websites, see Getting Started with Amazon Route 53 in the Amazon Route 53 Developer Guide.
Specify the value of the Name element for a resource record set in the current hosted zone.
If you're creating an alias record that has the same name as the hosted zone (known as the zone apex), you can't specify the domain name for a record for which the value of Type is CNAME. This is because the alias record must have the same type as the record that you're routing traffic to, and creating a CNAME record for the zone apex isn't supported even for an alias record.
The name of the domain. Specify a fully qualified domain name, for example, www.example.com. The trailing dot is optional; Amazon Route 53 assumes that the domain name is fully qualified. This means that Route 53 treats www.example.com (without a trailing dot) and www.example.com. (with a trailing dot) as identical.
If you're creating a public hosted zone, this is the name you have registered with your DNS registrar. If your domain name is registered with a registrar other than Route 53, change the name servers for your domain to the set of NameServers that CreateHostedZone returns in DelegationSet.
The domain name (such as example.com) or subdomain name (such as www.example.com) for which Amazon Route 53 responds to DNS queries by using the resource record sets that Route 53 creates for this traffic policy instance.
ListHostedZonesByNameRequest$DNSName: (Optional) For your first request to ListHostedZonesByName, include the dnsname parameter only if you want to specify the name of the first hosted zone in the response. If you don't include the dnsname parameter, Amazon Route 53 returns all of the hosted zones that were created by the current AWS account, in ASCII order. For subsequent requests, include both dnsname and hostedzoneid parameters. For dnsname, specify the value of NextDNSName from the previous response.
For the second and subsequent calls to ListHostedZonesByName, DNSName is the value that you specified for the dnsname parameter in the request that produced the current response.
If IsTruncated is true, the value of NextDNSName is the name of the first hosted zone in the next group of maxitems hosted zones. Call ListHostedZonesByName again and specify the value of NextDNSName and NextHostedZoneId in the dnsname and hostedzoneid parameters, respectively.
This element is present only if IsTruncated is true.
ListResourceRecordSetsRequest$StartRecordName: The first name in the lexicographic ordering of resource record sets that you want to list. If the specified record name doesn't exist, the results begin with the first resource record set that has a name greater than the value of name.
If the results were truncated, the name of the next record in the list.
This element is present only if IsTruncated is true.
If the value of IsTruncated in the previous response is true, you have more traffic policy instances. To get more traffic policy instances, submit another ListTrafficPolicyInstances request. For the value of trafficpolicyinstancename, specify the value of TrafficPolicyInstanceNameMarker from the previous response, which is the name of the first traffic policy instance in the next group of traffic policy instances.
If the value of IsTruncated in the previous response was false, there are no more traffic policy instances to get.
If IsTruncated is true, TrafficPolicyInstanceNameMarker is the name of the first traffic policy instance in the next group of traffic policy instances.
If IsTruncated is true, TrafficPolicyInstanceNameMarker is the name of the first traffic policy instance in the next group of MaxItems traffic policy instances.
If the value of IsTruncated in the previous response was true, you have more traffic policy instances. To get more traffic policy instances, submit another ListTrafficPolicyInstances request. For the value of trafficpolicyinstancename, specify the value of TrafficPolicyInstanceNameMarker from the previous response, which is the name of the first traffic policy instance in the next group of traffic policy instances.
If the value of IsTruncated in the previous response was false, there are no more traffic policy instances to get.
If IsTruncated is true, TrafficPolicyInstanceNameMarker is the name of the first traffic policy instance that Route 53 will return if you submit another ListTrafficPolicyInstances request.
For ChangeResourceRecordSets requests, the name of the record that you want to create, update, or delete. For ListResourceRecordSets responses, the name of a record in the specified hosted zone.
ChangeResourceRecordSets Only
Enter a fully qualified domain name, for example, www.example.com. You can optionally include a trailing dot. If you omit the trailing dot, Amazon Route 53 assumes that the domain name that you specify is fully qualified. This means that Route 53 treats www.example.com (without a trailing dot) and www.example.com. (with a trailing dot) as identical.
For information about how to specify characters other than a-z, 0-9, and - (hyphen) and how to specify internationalized domain names, see DNS Domain Name Format in the Amazon Route 53 Developer Guide.
You can use the asterisk (*) wildcard to replace the leftmost label in a domain name, for example, *.example.com. Note the following:
The * must replace the entire label. For example, you can't specify *prod.example.com or prod*.example.com.
The * can't replace any of the middle labels, for example, marketing.*.example.com.
If you include * in any position other than the leftmost label in a domain name, DNS treats it as an * character (ASCII 42), not as a wildcard.
You can't use the * wildcard for resource record sets that have a type of NS.
You can use the * wildcard as the leftmost label in a domain name, for example, *.example.com. You can't use an * for one of the middle labels, for example, marketing.*.example.com. In addition, the * must replace the entire label; for example, you can't specify prod*.example.com.
The name of the resource record set that you want Amazon Route 53 to simulate a query for.
TestDNSAnswerResponse$RecordName: The name of the resource record set that you submitted a request for.
TrafficPolicyInstance$Name: The DNS name, such as www.example.com, for which Amazon Route 53 responds to queries by using the resource record sets that are associated with this traffic policy instance.
" @@ -472,7 +472,7 @@ "DimensionList": { "base": null, "refs": { - "CloudWatchAlarmConfiguration$Dimensions": "For the metric that the CloudWatch alarm is associated with, a complex type that contains information about the dimensions for the metric. For information, see Amazon CloudWatch Namespaces, Dimensions, and Metrics Reference in the Amazon CloudWatch User Guide.
" + "CloudWatchAlarmConfiguration$Dimensions": "For the metric that the CloudWatch alarm is associated with, a complex type that contains information about the dimensions for the metric. For information, see Amazon CloudWatch Namespaces, Dimensions, and Metrics Reference in the Amazon CloudWatch User Guide.
" } }, "Disabled": { @@ -577,8 +577,8 @@ "FailureThreshold": { "base": null, "refs": { - "HealthCheckConfig$FailureThreshold": "The number of consecutive health checks that an endpoint must pass or fail for Amazon Route 53 to change the current status of the endpoint from unhealthy to healthy or vice versa. For more information, see How Amazon Route 53 Determines Whether an Endpoint Is Healthy in the Amazon Route 53 Developer Guide.
If you don't specify a value for FailureThreshold
, the default value is three health checks.
The number of consecutive health checks that an endpoint must pass or fail for Amazon Route 53 to change the current status of the endpoint from unhealthy to healthy or vice versa. For more information, see How Amazon Route 53 Determines Whether an Endpoint Is Healthy in the Amazon Route 53 Developer Guide.
If you don't specify a value for FailureThreshold
, the default value is three health checks.
The number of consecutive health checks that an endpoint must pass or fail for Amazon Route 53 to change the current status of the endpoint from unhealthy to healthy or vice versa. For more information, see How Amazon Route 53 Determines Whether an Endpoint Is Healthy in the Amazon Route 53 Developer Guide.
If you don't specify a value for FailureThreshold
, the default value is three health checks.
The number of consecutive health checks that an endpoint must pass or fail for Amazon Route 53 to change the current status of the endpoint from unhealthy to healthy or vice versa. For more information, see How Amazon Route 53 Determines Whether an Endpoint Is Healthy in the Amazon Route 53 Developer Guide.
If you don't specify a value for FailureThreshold
, the default value is three health checks.
A complex type that contains information about a geographic location.
", "refs": { - "ResourceRecordSet$GeoLocation": " Geolocation resource record sets only: A complex type that lets you control how Amazon Route 53 responds to DNS queries based on the geographic origin of the query. For example, if you want all queries from Africa to be routed to a web server with an IP address of 192.0.2.111
, create a resource record set with a Type of A and a ContinentCode of AF.
Creating geolocation and geolocation alias resource record sets in private hosted zones is not supported.
If you create separate resource record sets for overlapping geographic regions (for example, one resource record set for a continent and one for a country on the same continent), priority goes to the smallest geographic region. This allows you to route most queries for a continent to one resource and to route queries for a country on that continent to a different resource.
You can't create two geolocation resource record sets that specify the same geographic location.
The value * in the CountryCode element matches all geographic locations that aren't specified in other geolocation resource record sets that have the same values for the Name and Type elements.
Geolocation works by mapping IP addresses to locations. However, some IP addresses aren't mapped to geographic locations, so even if you create geolocation resource record sets that cover all seven continents, Route 53 will receive some DNS queries from locations that it can't identify. We recommend that you create a resource record set for which the value of CountryCode is *, which handles both queries that come from locations for which you haven't created geolocation resource record sets and queries from IP addresses that aren't mapped to a location. If you don't create a * resource record set, Route 53 returns a \"no answer\" response for queries from those locations.
You can't create non-geolocation resource record sets that have the same values for the Name and Type elements as geolocation resource record sets.
Geolocation resource record sets only: A complex type that lets you control how Amazon Route 53 responds to DNS queries based on the geographic origin of the query. For example, if you want all queries from Africa to be routed to a web server with an IP address of 192.0.2.111
, create a resource record set with a Type of A and a ContinentCode of AF.
Although creating geolocation and geolocation alias resource record sets in a private hosted zone is allowed, it's not supported.
If you create separate resource record sets for overlapping geographic regions (for example, one resource record set for a continent and one for a country on the same continent), priority goes to the smallest geographic region. This allows you to route most queries for a continent to one resource and to route queries for a country on that continent to a different resource.
You can't create two geolocation resource record sets that specify the same geographic location.
The value * in the CountryCode element matches all geographic locations that aren't specified in other geolocation resource record sets that have the same values for the Name and Type elements.
Geolocation works by mapping IP addresses to locations. However, some IP addresses aren't mapped to geographic locations, so even if you create geolocation resource record sets that cover all seven continents, Route 53 will receive some DNS queries from locations that it can't identify. We recommend that you create a resource record set for which the value of CountryCode is *. Two groups of queries are routed to the resource that you specify in this record: queries that come from locations for which you haven't created geolocation resource record sets and queries from IP addresses that aren't mapped to a location. If you don't create a * resource record set, Route 53 returns a \"no answer\" response for queries from those locations.
You can't create non-geolocation resource record sets that have the same values for the Name and Type elements as geolocation resource record sets.
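As a sketch of the geolocation behavior described above, the following hypothetical snippet creates one record for ContinentCode `AF` and a default `CountryCode: "*"` record in a single change batch. The hosted zone ID, set identifiers, and addresses are placeholders, and the types and enum names are assumed from the v0.21-era SDK.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func createGeoRecords(ctx context.Context, cfg aws.Config) error {
	svc := route53.New(cfg)
	newGeoRecord := func(setID string, geo *route53.GeoLocation, ip string) route53.Change {
		return route53.Change{
			Action: route53.ChangeActionCreate,
			ResourceRecordSet: &route53.ResourceRecordSet{
				Name:            aws.String("www.example.com"),
				Type:            route53.RRTypeA,
				SetIdentifier:   aws.String(setID), // required when several records share a name and type
				GeoLocation:     geo,
				TTL:             aws.Int64(60),
				ResourceRecords: []route53.ResourceRecord{{Value: aws.String(ip)}},
			},
		}
	}

	req := svc.ChangeResourceRecordSetsRequest(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder
		ChangeBatch: &route53.ChangeBatch{
			Changes: []route53.Change{
				// Queries from Africa go to 192.0.2.111.
				newGeoRecord("africa", &route53.GeoLocation{ContinentCode: aws.String("AF")}, "192.0.2.111"),
				// CountryCode "*" catches unmapped IPs and locations without their own record.
				newGeoRecord("default", &route53.GeoLocation{CountryCode: aws.String("*")}, "192.0.2.222"),
			},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```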
The two-letter code for the continent.
Valid values: AF | AN | AS | EU | OC | NA | SA
Constraint: Specifying ContinentCode with either CountryCode or SubdivisionCode returns an InvalidInput error.
The two-letter code for the continent.
Amazon Route 53 supports the following continent codes:
AF: Africa
AN: Antarctica
AS: Asia
EU: Europe
OC: Oceania
NA: North America
SA: South America
Constraint: Specifying ContinentCode with either CountryCode or SubdivisionCode returns an InvalidInput error.
The two-letter code for the continent.
", - "GetGeoLocationRequest$ContinentCode": "Amazon Route 53 supports the following continent codes:
AF: Africa
AN: Antarctica
AS: Asia
EU: Europe
OC: Oceania
NA: North America
SA: South America
For geolocation resource record sets, a two-letter abbreviation that identifies a continent. Amazon Route 53 supports the following continent codes:
AF: Africa
AN: Antarctica
AS: Asia
EU: Europe
OC: Oceania
NA: North America
SA: South America
The code for the continent with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated is true, and if NextContinentCode from the previous response has a value, enter that value in startcontinentcode to return the next page of results.
Include startcontinentcode only if you want to list continents. Don't include startcontinentcode when you're listing countries or countries with their subdivisions.
If IsTruncated is true, you can make a follow-up request to display more locations. Enter the value of NextContinentCode in the startcontinentcode parameter in another ListGeoLocations request.
The two-letter code for the country.
", + "GeoLocation$CountryCode": "For geolocation resource record sets, the two-letter code for a country.
Amazon Route 53 uses the two-letter country codes that are specified in ISO standard 3166-1 alpha-2.
", "GeoLocationDetails$CountryCode": "The two-letter code for the country.
", "GetGeoLocationRequest$CountryCode": "Amazon Route 53 uses the two-letter country codes that are specified in ISO standard 3166-1 alpha-2.
", - "ListGeoLocationsRequest$StartCountryCode": "The code for the country with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated
is true, and if NextCountryCode from the previous response has a value, enter that value in startcountrycode to return the next page of results.
Route 53 uses the two-letter country codes that are specified in ISO standard 3166-1 alpha-2.
", + "ListGeoLocationsRequest$StartCountryCode": "The code for the country with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated
is true, and if NextCountryCode from the previous response has a value, enter that value in startcountrycode to return the next page of results.
If IsTruncated is true, you can make a follow-up request to display more locations. Enter the value of NextCountryCode in the startcountrycode parameter in another ListGeoLocations request.
The code for the subdivision. Route 53 currently supports only states in the United States.
", + "GeoLocation$SubdivisionCode": "For geolocation resource record sets, the two-letter code for a state of the United States. Route 53 doesn't support any other values for SubdivisionCode
. For a list of state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website.
If you specify subdivisioncode, you must also specify US for CountryCode.
The code for the subdivision. Route 53 currently supports only states in the United States.
", - "GetGeoLocationRequest$SubdivisionCode": "Amazon Route 53 uses the one- to three-letter subdivision codes that are specified in ISO standard 3166-1 alpha-2. Route 53 doesn't support subdivision codes for all countries. If you specify subdivisioncode
, you must also specify countrycode.
The code for the subdivision (for example, state or province) with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated is true, and if NextSubdivisionCode from the previous response has a value, enter that value in startsubdivisioncode to return the next page of results.
To list subdivisions of a country, you must include both startcountrycode and startsubdivisioncode.
For SubdivisionCode, Amazon Route 53 supports only states of the United States. For a list of state abbreviations, see Appendix B: Two–Letter State and Possession Abbreviations on the United States Postal Service website.
If you specify subdivisioncode, you must also specify US for CountryCode.
The code for the state of the United States with which you want to start listing locations that Amazon Route 53 supports for geolocation. If Route 53 has already returned a page or more of results, if IsTruncated is true, and if NextSubdivisionCode from the previous response has a value, enter that value in startsubdivisioncode to return the next page of results.
To list subdivisions (U.S. states), you must include both startcountrycode and startsubdivisioncode.
If IsTruncated is true, you can make a follow-up request to display more locations. Enter the value of NextSubdivisionCode in the startsubdivisioncode parameter in another ListGeoLocations request.
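The IsTruncated and Next*Code response markers described above pair with the start*code request parameters. Below is a hedged sketch of that pagination loop, assuming the v0.21-era `ListGeoLocationsRequest` types and field names.

```go
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func listAllGeoLocations(ctx context.Context, cfg aws.Config) error {
	svc := route53.New(cfg)
	input := &route53.ListGeoLocationsInput{}
	for {
		resp, err := svc.ListGeoLocationsRequest(input).Send(ctx)
		if err != nil {
			return err
		}
		for _, loc := range resp.GeoLocationDetailsList {
			fmt.Println(loc) // each entry carries continent, country, and subdivision codes
		}
		if resp.IsTruncated == nil || !*resp.IsTruncated {
			return nil
		}
		// Feed the Next* markers back as the Start* parameters for the next page.
		input.StartContinentCode = resp.NextContinentCode
		input.StartCountryCode = resp.NextCountryCode
		input.StartSubdivisionCode = resp.NextSubdivisionCode
	}
}
```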
The identifier that Amazon Route 53 assigned to the health check when you created it. When you add or update a resource record set, you use this value to specify which health check to use. The value can be up to 64 characters long.
", "GetHealthCheckStatusRequest$HealthCheckId": "The ID for the health check that you want the current status for. When you created the health check, CreateHealthCheck
returned the ID in the response, in the HealthCheckId
element.
If you want to check the status of a calculated health check, you must use the Amazon Route 53 console or the CloudWatch console. You can't use GetHealthCheckStatus
to get the status of a calculated health check.
The identifier that Amazon Route 53 assigned to the health check when you created it. When you add or update a resource record set, you use this value to specify which health check to use. The value can be up to 64 characters long.
", - "ResourceRecordSet$HealthCheckId": "If you want Amazon Route 53 to return this resource record set in response to a DNS query only when the status of a health check is healthy, include the HealthCheckId
element and specify the ID of the applicable health check.
Route 53 determines whether a resource record set is healthy based on one of the following:
By periodically sending a request to the endpoint that is specified in the health check
By aggregating the status of a specified group of health checks (calculated health checks)
By determining the current state of a CloudWatch alarm (CloudWatch metric health checks)
Route 53 doesn't check the health of the endpoint that is specified in the resource record set, for example, the endpoint specified by the IP address in the Value
element. When you add a HealthCheckId
element to a resource record set, Route 53 checks the health of the endpoint that you specified in the health check.
For more information, see the following topics in the Amazon Route 53 Developer Guide:
When to Specify HealthCheckId
Specifying a value for HealthCheckId
is useful only when Route 53 is choosing between two or more resource record sets to respond to a DNS query, and you want Route 53 to base the choice in part on the status of a health check. Configuring health checks makes sense only in the following configurations:
Non-alias resource record sets: You're checking the health of a group of non-alias resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A) and you specify health check IDs for all the resource record sets.
If the health check status for a resource record set is healthy, Route 53 includes the record among the records that it responds to DNS queries with.
If the health check status for a resource record set is unhealthy, Route 53 stops responding to DNS queries using the value for that resource record set.
If the health check status for all resource record sets in the group is unhealthy, Route 53 considers all resource record sets in the group healthy and responds to DNS queries accordingly.
Alias resource record sets: You specify the following settings:
You set EvaluateTargetHealth
to true for an alias resource record set in a group of resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A).
You configure the alias resource record set to route traffic to a non-alias resource record set in the same hosted zone.
You specify a health check ID for the non-alias resource record set.
If the health check status is healthy, Route 53 considers the alias resource record set to be healthy and includes the alias record among the records that it responds to DNS queries with.
If the health check status is unhealthy, Route 53 stops responding to DNS queries using the alias resource record set.
The alias resource record set can also route traffic to a group of non-alias resource record sets that have the same routing policy, name, and type. In that configuration, associate health checks with all of the resource record sets in the group of non-alias resource record sets.
Geolocation Routing
For geolocation resource record sets, if an endpoint is unhealthy, Route 53 looks for a resource record set for the larger, associated geographic region. For example, suppose you have resource record sets for a state in the United States, for the entire United States, for North America, and a resource record set that has *
for CountryCode
is *
, which applies to all locations. If the endpoint for the state resource record set is unhealthy, Route 53 checks for healthy resource record sets in the following order until it finds a resource record set for which the endpoint is healthy:
The United States
North America
The default resource record set
Specifying the Health Check Endpoint by Domain Name
If your health checks specify the endpoint only by domain name, we recommend that you create a separate health check for each endpoint. For example, create a health check for each HTTP
server that is serving content for www.example.com
. For the value of FullyQualifiedDomainName
, specify the domain name of the server (such as us-east-2-www.example.com
), not the name of the resource record sets (www.example.com
).
Health check results will be unpredictable if you do the following:
Create a health check that has the same value for FullyQualifiedDomainName
as the name of a resource record set.
Associate that health check with the resource record set.
If you want Amazon Route 53 to return this resource record set in response to a DNS query only when the status of a health check is healthy, include the HealthCheckId
element and specify the ID of the applicable health check.
Route 53 determines whether a resource record set is healthy based on one of the following:
By periodically sending a request to the endpoint that is specified in the health check
By aggregating the status of a specified group of health checks (calculated health checks)
By determining the current state of a CloudWatch alarm (CloudWatch metric health checks)
Route 53 doesn't check the health of the endpoint that is specified in the resource record set, for example, the endpoint specified by the IP address in the Value
element. When you add a HealthCheckId
element to a resource record set, Route 53 checks the health of the endpoint that you specified in the health check.
For more information, see the following topics in the Amazon Route 53 Developer Guide:
When to Specify HealthCheckId
Specifying a value for HealthCheckId
is useful only when Route 53 is choosing between two or more resource record sets to respond to a DNS query, and you want Route 53 to base the choice in part on the status of a health check. Configuring health checks makes sense only in the following configurations:
Non-alias resource record sets: You're checking the health of a group of non-alias resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A) and you specify health check IDs for all the resource record sets.
If the health check status for a resource record set is healthy, Route 53 includes the record among the records that it responds to DNS queries with.
If the health check status for a resource record set is unhealthy, Route 53 stops responding to DNS queries using the value for that resource record set.
If the health check status for all resource record sets in the group is unhealthy, Route 53 considers all resource record sets in the group healthy and responds to DNS queries accordingly.
Alias resource record sets: You specify the following settings:
You set EvaluateTargetHealth
to true for an alias resource record set in a group of resource record sets that have the same routing policy, name, and type (such as multiple weighted records named www.example.com with a type of A).
You configure the alias resource record set to route traffic to a non-alias resource record set in the same hosted zone.
You specify a health check ID for the non-alias resource record set.
If the health check status is healthy, Route 53 considers the alias resource record set to be healthy and includes the alias record among the records that it responds to DNS queries with.
If the health check status is unhealthy, Route 53 stops responding to DNS queries using the alias resource record set.
The alias resource record set can also route traffic to a group of non-alias resource record sets that have the same routing policy, name, and type. In that configuration, associate health checks with all of the resource record sets in the group of non-alias resource record sets.
Geolocation Routing
For geolocation resource record sets, if an endpoint is unhealthy, Route 53 looks for a resource record set for the larger, associated geographic region. For example, suppose you have resource record sets for a state in the United States, for the entire United States, for North America, and a resource record set that has *
for CountryCode
is *
, which applies to all locations. If the endpoint for the state resource record set is unhealthy, Route 53 checks for healthy resource record sets in the following order until it finds a resource record set for which the endpoint is healthy:
The United States
North America
The default resource record set
Specifying the Health Check Endpoint by Domain Name
If your health checks specify the endpoint only by domain name, we recommend that you create a separate health check for each endpoint. For example, create a health check for each HTTP
server that is serving content for www.example.com
. For the value of FullyQualifiedDomainName
, specify the domain name of the server (such as us-east-2-www.example.com
), not the name of the resource record sets (www.example.com
).
Health check results will be unpredictable if you do the following:
Create a health check that has the same value for FullyQualifiedDomainName
as the name of a resource record set.
Associate that health check with the resource record set.
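To make the non-alias configuration above concrete, here is a hypothetical sketch that UPSERTs one weighted record carrying its own `HealthCheckId`; in a real group you would repeat this for every record with the same name and type. The IDs and address are placeholders, and the types are assumed from the v0.21-era SDK.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func upsertHealthCheckedWeightedRecord(ctx context.Context, cfg aws.Config) error {
	svc := route53.New(cfg)
	req := svc.ChangeResourceRecordSetsRequest(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder
		ChangeBatch: &route53.ChangeBatch{
			Changes: []route53.Change{{
				Action: route53.ChangeActionUpsert,
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name:          aws.String("www.example.com"),
					Type:          route53.RRTypeA,
					SetIdentifier: aws.String("server-1"),
					Weight:        aws.Int64(100),
					// Route 53 checks the endpoint configured in this health check,
					// not the IP in the Value element below.
					HealthCheckId:   aws.String("11111111-1111-1111-1111-111111111111"), // placeholder
					TTL:             aws.Int64(60),
					ResourceRecords: []route53.ResourceRecord{{Value: aws.String("192.0.2.10")}},
				},
			}},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```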
The ID for the health check for which you want detailed information. When you created the health check, CreateHealthCheck
returned the ID in the response, in the HealthCheckId
element.
The namespace of the metric that the alarm is associated with. For more information, see Amazon CloudWatch Namespaces, Dimensions, and Metrics Reference in the Amazon CloudWatch User Guide.
" + "CloudWatchAlarmConfiguration$Namespace": "The namespace of the metric that the alarm is associated with. For more information, see Amazon CloudWatch Namespaces, Dimensions, and Metrics Reference in the Amazon CloudWatch User Guide.
" } }, "NoSuchChange": { @@ -1312,7 +1312,7 @@ } }, "NoSuchGeoLocation": { - "base": "Amazon Route 53 doesn't support the specified geographic location.
", + "base": "Amazon Route 53 doesn't support the specified geographic location. For a list of supported geolocation codes, see the GeoLocation data type.
", "refs": { } }, @@ -1430,8 +1430,8 @@ "Port": { "base": null, "refs": { - "HealthCheckConfig$Port": "The port on the endpoint on which you want Amazon Route 53 to perform health checks. Specify a value for Port
only when you specify a value for IPAddress
.
The port on the endpoint on which you want Amazon Route 53 to perform health checks.
" + "HealthCheckConfig$Port": "The port on the endpoint that you want Amazon Route 53 to perform health checks on.
Don't specify a value for Port
when you specify a value for Type
of CLOUDWATCH_METRIC
or CALCULATED
.
The port on the endpoint that you want Amazon Route 53 to perform health checks on.
Don't specify a value for Port
when you specify a value for Type
of CLOUDWATCH_METRIC
or CALCULATED
.
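The Port, FailureThreshold, and string-match settings referenced in this model come together in `CreateHealthCheck`. The following is a hedged sketch, assuming the v0.21-era types and an enum name of `HealthCheckTypeHttpsStrMatch`; the domain name and resource path are placeholders.

```go
package example

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func createHTTPSHealthCheck(ctx context.Context, cfg aws.Config) (string, error) {
	svc := route53.New(cfg)
	req := svc.CreateHealthCheckRequest(&route53.CreateHealthCheckInput{
		CallerReference: aws.String(time.Now().Format(time.RFC3339Nano)), // any unique string
		HealthCheckConfig: &route53.HealthCheckConfig{
			Type:                     route53.HealthCheckTypeHttpsStrMatch,
			FullyQualifiedDomainName: aws.String("us-east-2-www.example.com"), // the server, not the record name
			Port:                     aws.Int64(443),
			ResourcePath:             aws.String("/health"),
			SearchString:             aws.String("ok"), // the string Route 53 looks for in the response body
			FailureThreshold:         aws.Int64(3),     // default is three consecutive checks
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return "", err
	}
	// The returned ID is what you reference from a record set's HealthCheckId element.
	return *resp.HealthCheck.Id, nil
}
```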
The type of resource record set to begin the record listing from.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
Values for alias resource record sets:
API Gateway custom regional API or edge-optimized API: A
CloudFront distribution: A or AAAA
Elastic Beanstalk environment that has a regionalized subdomain: A
Elastic Load Balancing load balancer: A | AAAA
Amazon S3 bucket: A
Amazon VPC interface VPC endpoint: A
Another resource record set in this hosted zone: The type of the resource record set that the alias references.
Constraint: Specifying type
without specifying name
returns an InvalidInput
error.
The type of resource record set to begin the record listing from.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
Values for alias resource record sets:
API Gateway custom regional API or edge-optimized API: A
CloudFront distribution: A or AAAA
Elastic Beanstalk environment that has a regionalized subdomain: A
Elastic Load Balancing load balancer: A | AAAA
S3 bucket: A
VPC interface VPC endpoint: A
Another resource record set in this hosted zone: The type of the resource record set that the alias references.
Constraint: Specifying type
without specifying name
returns an InvalidInput
error.
If the results were truncated, the type of the next record in the list.
This element is present only if IsTruncated
is true.
If the value of IsTruncated
in the previous response is true, you have more traffic policy instances. To get more traffic policy instances, submit another ListTrafficPolicyInstances
request. For the value of trafficpolicyinstancetype
, specify the value of TrafficPolicyInstanceTypeMarker
from the previous response, which is the type of the first traffic policy instance in the next group of traffic policy instances.
If the value of IsTruncated
in the previous response was false
, there are no more traffic policy instances to get.
If IsTruncated
is true, TrafficPolicyInstanceTypeMarker
is the DNS type of the resource record sets that are associated with the first traffic policy instance in the next group of traffic policy instances.
If IsTruncated
is true
, TrafficPolicyInstanceTypeMarker
is the DNS type of the resource record sets that are associated with the first traffic policy instance in the next group of MaxItems
traffic policy instances.
If the value of IsTruncated
in the previous response was true
, you have more traffic policy instances. To get more traffic policy instances, submit another ListTrafficPolicyInstances
request. For the value of trafficpolicyinstancetype
, specify the value of TrafficPolicyInstanceTypeMarker
from the previous response, which is the type of the first traffic policy instance in the next group of traffic policy instances.
If the value of IsTruncated
in the previous response was false
, there are no more traffic policy instances to get.
If IsTruncated
is true
, TrafficPolicyInstanceTypeMarker
is the DNS type of the resource record sets that are associated with the first traffic policy instance that Amazon Route 53 will return if you submit another ListTrafficPolicyInstances
request.
The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.
Valid values for multivalue answer resource record sets: A
| AAAA
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type
is SPF
. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.
Values for alias resource record sets:
Amazon API Gateway custom regional APIs and edge-optimized APIs: A
CloudFront distributions: A
If IPv6 is enabled for the distribution, create two resource record sets to route traffic to your distribution, one with a value of A
and one with a value of AAAA
.
AWS Elastic Beanstalk environment that has a regionalized subdomain: A
ELB load balancers: A
| AAAA
Amazon S3 buckets: A
Amazon Virtual Private Cloud interface VPC endpoints A
Another resource record set in this hosted zone: Specify the type of the resource record set that you're creating the alias for. All values are supported except NS
and SOA
.
If you're creating an alias record that has the same name as the hosted zone (known as the zone apex), you can't route traffic to a record for which the value of Type
is CNAME
. This is because the alias record must have the same type as the record you're routing traffic to, and creating a CNAME record for the zone apex isn't supported even for an alias record.
The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.
Valid values for basic resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| NS
| PTR
| SOA
| SPF
| SRV
| TXT
Values for weighted, latency, geolocation, and failover resource record sets: A
| AAAA
| CAA
| CNAME
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.
Valid values for multivalue answer resource record sets: A
| AAAA
| MX
| NAPTR
| PTR
| SPF
| SRV
| TXT
SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type
is SPF
. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.
Values for alias resource record sets:
Amazon API Gateway custom regional APIs and edge-optimized APIs: A
CloudFront distributions: A
If IPv6 is enabled for the distribution, create two resource record sets to route traffic to your distribution, one with a value of A
and one with a value of AAAA
.
AWS Elastic Beanstalk environment that has a regionalized subdomain: A
ELB load balancers: A
| AAAA
Amazon S3 buckets: A
Amazon Virtual Private Cloud interface VPC endpoints A
Another resource record set in this hosted zone: Specify the type of the resource record set that you're creating the alias for. All values are supported except NS
and SOA
.
If you're creating an alias record that has the same name as the hosted zone (known as the zone apex), you can't route traffic to a record for which the value of Type
is CNAME
. This is because the alias record must have the same type as the record you're routing traffic to, and creating a CNAME record for the zone apex isn't supported even for an alias record.
The type of the resource record set.
", "TestDNSAnswerResponse$RecordType": "The type of the resource record set that you submitted a request for.
", "TrafficPolicy$Type": "The DNS type of the resource record sets that Amazon Route 53 creates when you use a traffic policy to create a traffic policy instance.
", @@ -1539,7 +1539,7 @@ "ResourceId": { "base": null, "refs": { - "AliasTarget$HostedZoneId": "Alias resource records sets only: The value used depends on where you want to route traffic:
Specify the hosted zone ID for your API. You can get the applicable value using the AWS CLI command get-domain-names:
For regional APIs, specify the value of regionalHostedZoneId
.
For edge-optimized APIs, specify the value of distributionHostedZoneId
.
Specify the hosted zone ID for your interface endpoint. You can get the value of HostedZoneId
using the AWS CLI command describe-vpc-endpoints.
Specify Z2FDTNDATAQYW2
.
Alias resource record sets for CloudFront can't be created in a private zone.
Specify the hosted zone ID for the region that you created the environment in. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.
Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:
Elastic Load Balancing table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference: Use the value that corresponds with the region that you created your load balancer in. Note that there are separate columns for Application and Classic Load Balancers and for Network Load Balancers.
AWS Management Console: Go to the Amazon EC2 page, choose Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.
Elastic Load Balancing API: Use DescribeLoadBalancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneId
.
AWS CLI: Use describe-load-balancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneId
.
Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the Amazon Simple Storage Service Website Endpoints table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.
Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)
Alias resource records sets only: The value used depends on where you want to route traffic:
Specify the hosted zone ID for your API. You can get the applicable value using the AWS CLI command get-domain-names:
For regional APIs, specify the value of regionalHostedZoneId
.
For edge-optimized APIs, specify the value of distributionHostedZoneId
.
Specify the hosted zone ID for your interface endpoint. You can get the value of HostedZoneId
using the AWS CLI command describe-vpc-endpoints.
Specify Z2FDTNDATAQYW2
.
Alias resource record sets for CloudFront can't be created in a private zone.
Specify the hosted zone ID for the region that you created the environment in. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk in the \"AWS Service Endpoints\" chapter of the Amazon Web Services General Reference.
Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:
Service Endpoints table in the \"Elastic Load Balancing Endpoints and Quotas\" topic in the Amazon Web Services General Reference: Use the value that corresponds with the region that you created your load balancer in. Note that there are separate columns for Application and Classic Load Balancers and for Network Load Balancers.
AWS Management Console: Go to the Amazon EC2 page, choose Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.
Elastic Load Balancing API: Use DescribeLoadBalancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneId
.
AWS CLI: Use describe-load-balancers
to get the applicable value. For more information, see the applicable guide:
Classic Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneNameId
.
Application and Network Load Balancers: Use describe-load-balancers to get the value of CanonicalHostedZoneId
.
Specify Z2BJ6XQ5FK7U4H
.
Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the table Amazon S3 Website Endpoints in the Amazon Web Services General Reference.
Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)
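As an illustration of the alias targets described above, this hypothetical sketch routes the zone apex to a CloudFront distribution using the fixed CloudFront hosted zone ID Z2FDTNDATAQYW2. The zone ID and distribution domain name are placeholders; the types are assumed from the v0.21-era SDK.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func createCloudFrontAlias(ctx context.Context, cfg aws.Config) error {
	svc := route53.New(cfg)
	req := svc.ChangeResourceRecordSetsRequest(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder: the zone that holds example.com
		ChangeBatch: &route53.ChangeBatch{
			Changes: []route53.Change{{
				Action: route53.ChangeActionUpsert,
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name: aws.String("example.com"), // zone apex; alias records allow this
					Type: route53.RRTypeA,
					AliasTarget: &route53.AliasTarget{
						HostedZoneId:         aws.String("Z2FDTNDATAQYW2"),                // always this value for CloudFront
						DNSName:              aws.String("d111111abcdef8.cloudfront.net"), // placeholder distribution
						EvaluateTargetHealth: aws.Bool(false),
					},
					// Alias records don't take a TTL or ResourceRecords.
				},
			}},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```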
The ID of the private hosted zone that you want to associate an Amazon VPC with.
Note that you can't associate a VPC with a hosted zone that doesn't have an existing VPC association.
", "ChangeInfo$Id": "The ID of the request.
", "ChangeResourceRecordSetsRequest$HostedZoneId": "The ID of the hosted zone that contains the resource record sets that you want to change.
", @@ -1602,7 +1602,7 @@ "ResourceRecordSetFailover": { "base": null, "refs": { - "ResourceRecordSet$Failover": " Failover resource record sets only: To configure failover, you add the Failover
element to two resource record sets. For one resource record set, you specify PRIMARY
as the value for Failover
; for the other resource record set, you specify SECONDARY
. In addition, you include the HealthCheckId
element and specify the health check that you want Amazon Route 53 to perform for each resource record set.
Except where noted, the following failover behaviors assume that you have included the HealthCheckId
element in both resource record sets:
When the primary resource record set is healthy, Route 53 responds to DNS queries with the applicable value from the primary resource record set regardless of the health of the secondary resource record set.
When the primary resource record set is unhealthy and the secondary resource record set is healthy, Route 53 responds to DNS queries with the applicable value from the secondary resource record set.
When the secondary resource record set is unhealthy, Route 53 responds to DNS queries with the applicable value from the primary resource record set regardless of the health of the primary resource record set.
If you omit the HealthCheckId
element for the secondary resource record set, and if the primary resource record set is unhealthy, Route 53 always responds to DNS queries with the applicable value from the secondary resource record set. This is true regardless of the health of the associated endpoint.
You can't create non-failover resource record sets that have the same values for the Name
and Type
elements as failover resource record sets.
For failover alias resource record sets, you must also include the EvaluateTargetHealth
element and set the value to true.
For more information about configuring failover for Route 53, see the following topics in the Amazon Route 53 Developer Guide:
" + "ResourceRecordSet$Failover": " Failover resource record sets only: To configure failover, you add the Failover
element to two resource record sets. For one resource record set, you specify PRIMARY
as the value for Failover
; for the other resource record set, you specify SECONDARY
. In addition, you include the HealthCheckId
element and specify the health check that you want Amazon Route 53 to perform for each resource record set.
Except where noted, the following failover behaviors assume that you have included the HealthCheckId
element in both resource record sets:
When the primary resource record set is healthy, Route 53 responds to DNS queries with the applicable value from the primary resource record set regardless of the health of the secondary resource record set.
When the primary resource record set is unhealthy and the secondary resource record set is healthy, Route 53 responds to DNS queries with the applicable value from the secondary resource record set.
When the secondary resource record set is unhealthy, Route 53 responds to DNS queries with the applicable value from the primary resource record set regardless of the health of the primary resource record set.
If you omit the HealthCheckId
element for the secondary resource record set, and if the primary resource record set is unhealthy, Route 53 always responds to DNS queries with the applicable value from the secondary resource record set. This is true regardless of the health of the associated endpoint.
You can't create non-failover resource record sets that have the same values for the Name
and Type
elements as failover resource record sets.
For failover alias resource record sets, you must also include the EvaluateTargetHealth
element and set the value to true.
For more information about configuring failover for Route 53, see the following topics in the Amazon Route 53 Developer Guide:
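A hedged sketch of the failover pair described above: one PRIMARY and one SECONDARY record for the same name and type, each with its own health check. IDs and addresses are placeholders; the `ResourceRecordSetFailoverPrimary`/`Secondary` constants and other names are assumed from the v0.21-era SDK.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/route53"
)

func createFailoverPair(ctx context.Context, cfg aws.Config) error {
	svc := route53.New(cfg)
	failover := func(setID, ip, healthCheckID string, role route53.ResourceRecordSetFailover) route53.Change {
		return route53.Change{
			Action: route53.ChangeActionUpsert,
			ResourceRecordSet: &route53.ResourceRecordSet{
				Name:            aws.String("www.example.com"),
				Type:            route53.RRTypeA,
				SetIdentifier:   aws.String(setID),
				Failover:        role,
				HealthCheckId:   aws.String(healthCheckID), // include a health check for both records
				TTL:             aws.Int64(60),
				ResourceRecords: []route53.ResourceRecord{{Value: aws.String(ip)}},
			},
		}
	}

	req := svc.ChangeResourceRecordSetsRequest(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z1D633PJN98FT9"), // placeholder
		ChangeBatch: &route53.ChangeBatch{
			Changes: []route53.Change{
				failover("primary", "192.0.2.10", "11111111-1111-1111-1111-111111111111", route53.ResourceRecordSetFailoverPrimary),
				failover("secondary", "192.0.2.11", "22222222-2222-2222-2222-222222222222", route53.ResourceRecordSetFailoverSecondary),
			},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```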
" } }, "ResourceRecordSetIdentifier": { @@ -1622,13 +1622,13 @@ "ResourceRecordSetRegion": { "base": null, "refs": { - "ResourceRecordSet$Region": "Latency-based resource record sets only: The Amazon EC2 Region where you created the resource that this resource record set refers to. The resource typically is an AWS resource, such as an EC2 instance or an ELB load balancer, and is referred to by an IP address or a DNS domain name, depending on the record type.
Creating latency and latency alias resource record sets in private hosted zones is not supported.
When Amazon Route 53 receives a DNS query for a domain name and type for which you have created latency resource record sets, Route 53 selects the latency resource record set that has the lowest latency between the end user and the associated Amazon EC2 Region. Route 53 then returns the value that is associated with the selected resource record set.
Note the following:
You can only specify one ResourceRecord
per latency resource record set.
You can only create one latency resource record set for each Amazon EC2 Region.
You aren't required to create latency resource record sets for all Amazon EC2 Regions. Route 53 will choose the region with the best latency from among the regions that you create latency resource record sets for.
You can't create non-latency resource record sets that have the same values for the Name
and Type
elements as latency resource record sets.
Latency-based resource record sets only: The Amazon EC2 Region where you created the resource that this resource record set refers to. The resource typically is an AWS resource, such as an EC2 instance or an ELB load balancer, and is referred to by an IP address or a DNS domain name, depending on the record type.
Although creating latency and latency alias resource record sets in a private hosted zone is allowed, it's not supported.
When Amazon Route 53 receives a DNS query for a domain name and type for which you have created latency resource record sets, Route 53 selects the latency resource record set that has the lowest latency between the end user and the associated Amazon EC2 Region. Route 53 then returns the value that is associated with the selected resource record set.
Note the following:
You can only specify one ResourceRecord
per latency resource record set.
You can only create one latency resource record set for each Amazon EC2 Region.
You aren't required to create latency resource record sets for all Amazon EC2 Regions. Route 53 will choose the region with the best latency from among the regions that you create latency resource record sets for.
You can't create non-latency resource record sets that have the same values for the Name
and Type
elements as latency resource record sets.
Weighted resource record sets only: Among resource record sets that have the same combination of DNS name and type, a value that determines the proportion of DNS queries that Amazon Route 53 responds to using the current resource record set. Route 53 calculates the sum of the weights for the resource record sets that have the same combination of DNS name and type. Route 53 then responds to queries based on the ratio of a resource's weight to the total. Note the following:
You must specify a value for the Weight
element for every weighted resource record set.
You can only specify one ResourceRecord
per weighted resource record set.
You can't create latency, failover, or geolocation resource record sets that have the same values for the Name
and Type
elements as weighted resource record sets.
You can create a maximum of 100 weighted resource record sets that have the same values for the Name
and Type
elements.
For weighted (but not weighted alias) resource record sets, if you set Weight
to 0
for a resource record set, Route 53 never responds to queries with the applicable value for that resource record set. However, if you set Weight
to 0
for all resource record sets that have the same combination of DNS name and type, traffic is routed to all resources with equal probability.
The effect of setting Weight
to 0
is different when you associate health checks with weighted resource record sets. For more information, see Options for Configuring Route 53 Active-Active and Active-Passive Failover in the Amazon Route 53 Developer Guide.
Weighted resource record sets only: Among resource record sets that have the same combination of DNS name and type, a value that determines the proportion of DNS queries that Amazon Route 53 responds to using the current resource record set. Route 53 calculates the sum of the weights for the resource record sets that have the same combination of DNS name and type. Route 53 then responds to queries based on the ratio of a resource's weight to the total. Note the following:
You must specify a value for the Weight
element for every weighted resource record set.
You can only specify one ResourceRecord
per weighted resource record set.
You can't create latency, failover, or geolocation resource record sets that have the same values for the Name
and Type
elements as weighted resource record sets.
You can create a maximum of 100 weighted resource record sets that have the same values for the Name
and Type
elements.
For weighted (but not weighted alias) resource record sets, if you set Weight
to 0
for a resource record set, Route 53 never responds to queries with the applicable value for that resource record set. However, if you set Weight
to 0
for all resource record sets that have the same combination of DNS name and type, traffic is routed to all resources with equal probability.
The effect of setting Weight
to 0
is different when you associate health checks with weighted resource record sets. For more information, see Options for Configuring Route 53 Active-Active and Active-Passive Failover in the Amazon Route 53 Developer Guide.
If the value of Type is HTTP_STR_MATCH
or HTTP_STR_MATCH
, the string that you want Amazon Route 53 to search for in the response body from the specified resource. If the string appears in the response body, Route 53 considers the resource healthy.
Route 53 considers case when searching for SearchString
in the response body.
If the value of Type
is HTTP_STR_MATCH
or HTTP_STR_MATCH
, the string that you want Amazon Route 53 to search for in the response body from the specified resource. If the string appears in the response body, Route 53 considers the resource healthy. (You can't change the value of Type
when you update a health check.)
If the value of Type is HTTP_STR_MATCH
or HTTPS_STR_MATCH
, the string that you want Amazon Route 53 to search for in the response body from the specified resource. If the string appears in the response body, Route 53 considers the resource healthy.
Route 53 considers case when searching for SearchString
in the response body.
If the value of Type
is HTTP_STR_MATCH
or HTTPS_STR_MATCH
, the string that you want Amazon Route 53 to search for in the response body from the specified resource. If the string appears in the response body, Route 53 considers the resource healthy. (You can't change the value of Type
when you update a health check.)
Amazon Route 53 API actions let you register domain names and perform related operations.
", "operations": { + "AcceptDomainTransferFromAnotherAwsAccount": "Accepts the transfer of a domain from another AWS account to the current AWS account. You initiate a transfer between AWS accounts using TransferDomainToAnotherAwsAccount.
Use either ListOperations or GetOperationDetail to determine whether the operation succeeded. GetOperationDetail provides additional information, for example, Domain Transfer from Aws Account 111122223333 has been cancelled
.
Cancels the transfer of a domain from the current AWS account to another AWS account. You initiate a transfer between AWS accounts using TransferDomainToAnotherAwsAccount.
You must cancel the transfer before the other AWS account accepts the transfer using AcceptDomainTransferFromAnotherAwsAccount.
Use either ListOperations or GetOperationDetail to determine whether the operation succeeded. GetOperationDetail provides additional information, for example, Domain Transfer from Aws Account 111122223333 has been cancelled
.
This operation checks the availability of one domain name. Note that if the availability status of a domain is pending, you must submit another request to determine the availability of the domain name.
", "CheckDomainTransferability": "Checks whether a domain name can be transferred to Amazon Route 53.
", "DeleteTagsForDomain": "This operation deletes the specified tags for a domain.
All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.
", "DisableDomainAutoRenew": "This operation disables automatic renewal of domain registration for the specified domain.
", "DisableDomainTransferLock": "This operation removes the transfer lock on the domain (specifically the clientTransferProhibited
status) to allow domain transfers. We recommend you refrain from performing this action unless you intend to transfer the domain to a different registrar. Successful submission returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant will be notified by email.
This operation configures Amazon Route 53 to automatically renew the specified domain before the domain registration expires. The cost of renewing your domain registration is billed to your AWS account.
The period during which you can renew a domain name varies by TLD. For a list of TLDs and their renewal policies, see \"Renewal, restoration, and deletion times\" on the website for our registrar associate, Gandi. Amazon Route 53 requires that you renew before the end of the renewal period that is listed on the Gandi website so we can complete processing before the deadline.
", + "EnableDomainAutoRenew": "This operation configures Amazon Route 53 to automatically renew the specified domain before the domain registration expires. The cost of renewing your domain registration is billed to your AWS account.
The period during which you can renew a domain name varies by TLD. For a list of TLDs and their renewal policies, see Domains That You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide. Route 53 requires that you renew before the end of the renewal period so we can complete processing before the deadline.
", "EnableDomainTransferLock": "This operation sets the transfer lock on the domain (specifically the clientTransferProhibited
status) to prevent domain transfers. Successful submission returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant will be notified by email.
For operations that require confirmation that the email address for the registrant contact is valid, such as registering a new domain, this operation returns information about whether the registrant contact has responded.
If you want us to resend the email, use the ResendContactReachabilityEmail
operation.
This operation returns detailed information about a specified domain that is associated with the current AWS account. Contact information for the domain is also returned as part of the output.
", - "GetDomainSuggestions": "The GetDomainSuggestions operation returns a list of suggested domain names given a string, which can either be a domain name or simply a word or phrase (without spaces).
", + "GetDomainSuggestions": "The GetDomainSuggestions operation returns a list of suggested domain names.
", "GetOperationDetail": "This operation returns the current status of an operation that is not completed.
", "ListDomains": "This operation returns all the domain names registered with Amazon Route 53 for the current AWS account.
", - "ListOperations": "This operation returns the operation IDs of operations that are not yet complete.
", + "ListOperations": "Returns information about all of the operations that return an operation ID and that have ever been performed on domains that were registered by the current account.
", "ListTagsForDomain": "This operation returns all of the tags that are associated with the specified domain.
All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.
", - "RegisterDomain": "This operation registers a domain. Domains are registered either by Amazon Registrar (for .com, .net, and .org domains) or by our registrar associate, Gandi (for all other domains). For some top-level domains (TLDs), this operation requires extra parameters.
When you register a domain, Amazon Route 53 does the following:
Creates a Amazon Route 53 hosted zone that has the same name as the domain. Amazon Route 53 assigns four name servers to your hosted zone and automatically updates your domain registration with the names of these name servers.
Enables autorenew, so your domain registration will renew automatically each year. We'll notify you in advance of the renewal date so you can choose whether to renew the registration.
Optionally enables privacy protection, so WHOIS queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If you don't enable privacy protection, WHOIS queries return the information that you entered for the registrant, admin, and tech contacts.
If registration is successful, returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant is notified by email.
Charges your AWS account an amount based on the top-level domain. For more information, see Amazon Route 53 Pricing.
This operation renews a domain for the specified number of years. The cost of renewing your domain is billed to your AWS account.
We recommend that you renew your domain several weeks before the expiration date. Some TLD registries delete domains before the expiration date if you haven't renewed far enough in advance. For more information about renewing domain registration, see Renewing Registration for a Domain in the Amazon Route 53 Developer Guide.
", + "RegisterDomain": "This operation registers a domain. Domains are registered either by Amazon Registrar (for .com, .net, and .org domains) or by our registrar associate, Gandi (for all other domains). For some top-level domains (TLDs), this operation requires extra parameters.
When you register a domain, Amazon Route 53 does the following:
Creates a Route 53 hosted zone that has the same name as the domain. Route 53 assigns four name servers to your hosted zone and automatically updates your domain registration with the names of these name servers.
Enables autorenew, so your domain registration will renew automatically each year. We'll notify you in advance of the renewal date so you can choose whether to renew the registration.
Optionally enables privacy protection, so WHOIS queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If you don't enable privacy protection, WHOIS queries return the information that you entered for the registrant, admin, and tech contacts.
If registration is successful, returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant is notified by email.
Charges your AWS account an amount based on the top-level domain. For more information, see Amazon Route 53 Pricing.
Rejects the transfer of a domain from another AWS account to the current AWS account. You initiate a transfer between AWS accounts using TransferDomainToAnotherAwsAccount.
Use either ListOperations or GetOperationDetail to determine whether the operation succeeded. GetOperationDetail provides additional information, for example, Domain Transfer from Aws Account 111122223333 has been cancelled
.
This operation renews a domain for the specified number of years. The cost of renewing your domain is billed to your AWS account.
We recommend that you renew your domain several weeks before the expiration date. Some TLD registries delete domains before the expiration date if you haven't renewed far enough in advance. For more information about renewing domain registration, see Renewing Registration for a Domain in the Amazon Route 53 Developer Guide.
", "ResendContactReachabilityEmail": "For operations that require confirmation that the email address for the registrant contact is valid, such as registering a new domain, this operation resends the confirmation email to the current email address for the registrant contact.
", "RetrieveDomainAuthCode": "This operation returns the AuthCode for the domain. To transfer a domain to another registrar, you provide this value to the new registrar.
", - "TransferDomain": "This operation transfers a domain from another registrar to Amazon Route 53. When the transfer is complete, the domain is registered either with Amazon Registrar (for .com, .net, and .org domains) or with our registrar associate, Gandi (for all other TLDs).
For transfer requirements, a detailed procedure, and information about viewing the status of a domain transfer, see Transferring Registration for a Domain to Amazon Route 53 in the Amazon Route 53 Developer Guide.
If the registrar for your domain is also the DNS service provider for the domain, we highly recommend that you consider transferring your DNS service to Amazon Route 53 or to another DNS service provider before you transfer your registration. Some registrars provide free DNS service when you purchase a domain registration. When you transfer the registration, the previous registrar will not renew your domain registration and could end your DNS service at any time.
If the registrar for your domain is also the DNS service provider for the domain and you don't transfer DNS service to another provider, your website, email, and the web applications associated with the domain might become unavailable.
If the transfer is successful, this method returns an operation ID that you can use to track the progress and completion of the action. If the transfer doesn't complete successfully, the domain registrant will be notified by email.
", + "TransferDomain": "Transfers a domain from another registrar to Amazon Route 53. When the transfer is complete, the domain is registered either with Amazon Registrar (for .com, .net, and .org domains) or with our registrar associate, Gandi (for all other TLDs).
For more information about transferring domains, see the following topics:
For transfer requirements, a detailed procedure, and information about viewing the status of a domain that you're transferring to Route 53, see Transferring Registration for a Domain to Amazon Route 53 in the Amazon Route 53 Developer Guide.
For information about how to transfer a domain from one AWS account to another, see TransferDomainToAnotherAwsAccount.
For information about how to transfer a domain to another domain registrar, see Transferring a Domain from Amazon Route 53 to Another Registrar in the Amazon Route 53 Developer Guide.
If the registrar for your domain is also the DNS service provider for the domain, we highly recommend that you transfer your DNS service to Route 53 or to another DNS service provider before you transfer your registration. Some registrars provide free DNS service when you purchase a domain registration. When you transfer the registration, the previous registrar will not renew your domain registration and could end your DNS service at any time.
If the registrar for your domain is also the DNS service provider for the domain and you don't transfer DNS service to another provider, your website, email, and the web applications associated with the domain might become unavailable.
If the transfer is successful, this method returns an operation ID that you can use to track the progress and completion of the action. If the transfer doesn't complete successfully, the domain registrant will be notified by email.
", + "TransferDomainToAnotherAwsAccount": "Transfers a domain from the current AWS account to another AWS account. Note the following:
The AWS account that you're transferring the domain to must accept the transfer. If the other account doesn't accept the transfer within 3 days, we cancel the transfer. See AcceptDomainTransferFromAnotherAwsAccount.
You can cancel the transfer before the other account accepts it. See CancelDomainTransferToAnotherAwsAccount.
The other account can reject the transfer. See RejectDomainTransferFromAnotherAwsAccount.
When you transfer a domain from one AWS account to another, Route 53 doesn't transfer the hosted zone that is associated with the domain. DNS resolution isn't affected if the domain and the hosted zone are owned by separate accounts, so transferring the hosted zone is optional. For information about transferring the hosted zone to another AWS account, see Migrating a Hosted Zone to a Different AWS Account in the Amazon Route 53 Developer Guide.
Use either ListOperations or GetOperationDetail to determine whether the operation succeeded. GetOperationDetail provides additional information, for example, Domain Transfer from Aws Account 111122223333 has been cancelled.
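The cross-account flow above (transfer, then accept with the returned password) might be wired up roughly as follows. This is a sketch only: both calls are shown against a single client for brevity, whereas in practice AcceptDomainTransferFromAnotherAwsAccount must be issued with the destination account's credentials, within the three-day window.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53domains"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("failed to load AWS config: %v", err)
	}
	cfg.Region = "us-east-1"
	svc := route53domains.New(cfg)

	// Source account: offer the domain to account 111122223333.
	xfer, err := svc.TransferDomainToAnotherAwsAccountRequest(&route53domains.TransferDomainToAnotherAwsAccountInput{
		DomainName: aws.String("example.com"),
		AccountId:  aws.String("111122223333"),
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("TransferDomainToAnotherAwsAccount failed: %v", err)
	}
	fmt.Println("transfer operation ID:", *xfer.OperationId)

	// Destination account: accept using the Password from the transfer response.
	// (In practice, run this with credentials for the destination account.)
	accept, err := svc.AcceptDomainTransferFromAnotherAwsAccountRequest(&route53domains.AcceptDomainTransferFromAnotherAwsAccountInput{
		DomainName: aws.String("example.com"),
		Password:   xfer.Password,
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("AcceptDomainTransferFromAnotherAwsAccount failed: %v", err)
	}
	fmt.Println("accept operation ID:", *accept.OperationId)
}
```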
This operation updates the contact information for a particular domain. You must specify information for at least one contact: registrant, administrator, or technical.
If the update is successful, this method returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant will be notified by email.
", - "UpdateDomainContactPrivacy": "This operation updates the specified domain contact's privacy setting. When privacy protection is enabled, contact information such as email address is replaced either with contact information for Amazon Registrar (for .com, .net, and .org domains) or with contact information for our registrar associate, Gandi.
This operation affects only the contact information for the specified contact type (registrant, administrator, or tech). If the request succeeds, Amazon Route 53 returns an operation ID that you can use with GetOperationDetail to track the progress and completion of the action. If the request doesn't complete successfully, the domain registrant will be notified by email.
", + "UpdateDomainContactPrivacy": "This operation updates the specified domain contact's privacy setting. When privacy protection is enabled, contact information such as email address is replaced either with contact information for Amazon Registrar (for .com, .net, and .org domains) or with contact information for our registrar associate, Gandi.
This operation affects only the contact information for the specified contact type (registrant, administrator, or tech). If the request succeeds, Amazon Route 53 returns an operation ID that you can use with GetOperationDetail to track the progress and completion of the action. If the request doesn't complete successfully, the domain registrant will be notified by email.
By disabling the privacy service via API, you consent to the publication of the contact information provided for this domain via the public WHOIS database. You certify that you are the registrant of this domain name and have the authority to make this decision. You may withdraw your consent at any time by enabling privacy protection using either UpdateDomainContactPrivacy or the Route 53 console. Enabling privacy protection removes the contact information provided for this domain from the WHOIS database. For more information on our privacy practices, see https://aws.amazon.com/privacy/.
This operation replaces the current set of name servers for the domain with the specified set of name servers. If you use Amazon Route 53 as your DNS service, specify the four name servers in the delegation set for the hosted zone for the domain.
If successful, this operation returns an operation ID that you can use to track the progress and completion of the action. If the request is not completed successfully, the domain registrant will be notified by email.
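Because the write operations above return an operation ID rather than completing synchronously, callers typically poll GetOperationDetail until the operation finishes. A rough sketch, with assumed name-server values, polling interval, and terminal status strings:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53domains"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("failed to load AWS config: %v", err)
	}
	cfg.Region = "us-east-1"
	svc := route53domains.New(cfg)

	// Point the domain at the four name servers of a Route 53 hosted zone (placeholders).
	upd, err := svc.UpdateDomainNameserversRequest(&route53domains.UpdateDomainNameserversInput{
		DomainName: aws.String("example.com"),
		Nameservers: []route53domains.Nameserver{
			{Name: aws.String("ns-2048.awsdns-64.com")},
			{Name: aws.String("ns-2049.awsdns-65.net")},
			{Name: aws.String("ns-2050.awsdns-66.org")},
			{Name: aws.String("ns-2051.awsdns-67.co.uk")},
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("UpdateDomainNameservers failed: %v", err)
	}

	// Poll the returned operation ID until it reaches a terminal status.
	for {
		op, err := svc.GetOperationDetailRequest(&route53domains.GetOperationDetailInput{
			OperationId: upd.OperationId,
		}).Send(context.TODO())
		if err != nil {
			log.Fatalf("GetOperationDetail failed: %v", err)
		}
		fmt.Println("status:", op.Status)
		switch op.Status {
		case "SUCCESSFUL", "ERROR", "FAILED":
			return
		}
		time.Sleep(30 * time.Second)
	}
}
```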
", "UpdateTagsForDomain": "This operation adds or updates tags for a specified domain.
All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.
", "ViewBilling": "Returns all the domain-related billing records for the current AWS account for a specified period
" }, "shapes": { + "AcceptDomainTransferFromAnotherAwsAccountRequest": { + "base": "The AcceptDomainTransferFromAnotherAwsAccount request includes the following elements.
", + "refs": { + } + }, + "AcceptDomainTransferFromAnotherAwsAccountResponse": { + "base": "The AcceptDomainTransferFromAnotherAwsAccount response includes the following element.
", + "refs": { + } + }, + "AccountId": { + "base": null, + "refs": { + "TransferDomainToAnotherAwsAccountRequest$AccountId": "The account ID of the AWS account that you want to transfer the domain to, for example, 111122223333
.
Specifies whether contact information is concealed from WHOIS queries. If the value is true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If the value is false, WHOIS queries return the information that you entered for the admin contact.
Specifies whether contact information is concealed from WHOIS queries. If the value is true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If the value is false, WHOIS queries return the information that you entered for the registrant contact (domain owner).
Specifies whether contact information is concealed from WHOIS queries. If the value is true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If the value is false, WHOIS queries return the information that you entered for the technical contact.
If OnlyAvailable is true, Amazon Route 53 returns only domain names that are available. If OnlyAvailable is false, Amazon Route 53 returns domain names without checking whether they're available to be registered. To determine whether the domain is available, you can call checkDomainAvailability for each suggestion.
If OnlyAvailable is true, Route 53 returns only domain names that are available. If OnlyAvailable is false, Route 53 returns domain names without checking whether they're available to be registered. To determine whether the domain is available, you can call checkDomainAvailability for each suggestion.
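The OnlyAvailable flag is easiest to see in a GetDomainSuggestions call; the hedged sketch below asks only for suggestions that can actually be registered. The domain name and suggestion count are placeholders, and the pre-1.0 request/Send pattern is assumed.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/route53domains"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("failed to load AWS config: %v", err)
	}
	cfg.Region = "us-east-1"
	svc := route53domains.New(cfg)

	resp, err := svc.GetDomainSuggestionsRequest(&route53domains.GetDomainSuggestionsInput{
		DomainName:      aws.String("example.com"),
		SuggestionCount: aws.Int64(10),  // valid range is 1-50
		OnlyAvailable:   aws.Bool(true), // skip names that can't be registered
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("GetDomainSuggestions failed: %v", err)
	}
	for _, s := range resp.SuggestionsList {
		fmt.Println(*s.DomainName)
	}
}
```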
Indicates whether the domain will be automatically renewed (true) or not (false). Autorenewal only takes effect after the account is charged.
Default: true
Whether you want to conceal contact information from WHOIS queries. If you specify true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If you specify false, WHOIS queries return the information that you entered for the admin contact.
Default: true
Whether you want to conceal contact information from WHOIS queries. If you specify true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If you specify false, WHOIS queries return the information that you entered for the registrant contact (the domain owner).
Default: true
Whether you want to conceal contact information from WHOIS queries. If you specify true, WHOIS (\"who is\") queries return contact information either for Amazon Registrar (for .com, .net, and .org domains) or for our registrar associate, Gandi (for all other TLDs). If you specify false, WHOIS queries return the information that you entered for the technical contact.
The CancelDomainTransferToAnotherAwsAccount request includes the following element.
", + "refs": { + } + }, + "CancelDomainTransferToAnotherAwsAccountResponse": { + "base": "The CancelDomainTransferToAnotherAwsAccount
response includes the following element.
The CheckDomainAvailability request contains the following elements.
", "refs": { @@ -103,9 +133,9 @@ "GetDomainDetailResponse$AdminContact": "Provides details about the domain administrative contact.
", "GetDomainDetailResponse$RegistrantContact": "Provides details about the domain registrant.
", "GetDomainDetailResponse$TechContact": "Provides details about the domain technical contact.
", - "RegisterDomainRequest$AdminContact": "Provides detailed contact information.
", - "RegisterDomainRequest$RegistrantContact": "Provides detailed contact information.
", - "RegisterDomainRequest$TechContact": "Provides detailed contact information.
", + "RegisterDomainRequest$AdminContact": "Provides detailed contact information. For information about the values that you specify for each element, see ContactDetail.
", + "RegisterDomainRequest$RegistrantContact": "Provides detailed contact information. For information about the values that you specify for each element, see ContactDetail.
", + "RegisterDomainRequest$TechContact": "Provides detailed contact information. For information about the values that you specify for each element, see ContactDetail.
", "TransferDomainRequest$AdminContact": "Provides detailed contact information.
", "TransferDomainRequest$RegistrantContact": "Provides detailed contact information.
", "TransferDomainRequest$TechContact": "Provides detailed contact information.
", @@ -133,7 +163,7 @@ "ContactType": { "base": null, "refs": { - "ContactDetail$ContactType": "Indicates whether the contact is a person, company, association, or public organization. If you choose an option other than PERSON
, you must enter an organization name, and you can't enable privacy protection for the contact.
Indicates whether the contact is a person, company, association, or public organization. Note the following:
If you specify a value other than PERSON
, you must also specify a value for OrganizationName
.
For some TLDs, the privacy protection available depends on the value that you specify for Contact Type
. For the privacy protection settings for your TLD, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide
For .es domains, if you specify PERSON
, you must specify INDIVIDUAL
for the value of ES_LEGAL_FORM
.
Whether the domain name is available for registering.
You can register only domains designated as AVAILABLE
.
Valid values:
The domain name is available.
The domain name is reserved under specific conditions.
The domain name is available and can be preordered.
The TLD registry didn't reply with a definitive answer about whether the domain name is available. Amazon Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.
The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.
The domain name has been reserved for another person or organization.
The domain name is not available.
The domain name is not available.
The domain name is forbidden.
Whether the domain name is available for registering.
You can register only domains designated as AVAILABLE
.
Valid values:
The domain name is available.
The domain name is reserved under specific conditions.
The domain name is available and can be preordered.
The TLD registry didn't reply with a definitive answer about whether the domain name is available. Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.
The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.
The domain name has been reserved for another person or organization.
The domain name is not available.
The domain name is not available.
The domain name is forbidden.
The name of the domain that the billing record applies to. If the domain name contains characters other than a-z, 0-9, and - (hyphen), such as an internationalized domain name, then this value is in Punycode. For more information, see DNS Domain Name Format in the Amazon Route 53 Developer Guidezzz.
", - "CheckDomainAvailabilityRequest$DomainName": "The name of the domain that you want to get availability for.
Constraints: The domain name can contain only the letters a through z, the numbers 0 through 9, and hyphen (-). Internationalized Domain Names are not supported.
", - "CheckDomainTransferabilityRequest$DomainName": "The name of the domain that you want to transfer to Amazon Route 53.
Constraints: The domain name can contain only the letters a through z, the numbers 0 through 9, and hyphen (-). Internationalized Domain Names are not supported.
", + "AcceptDomainTransferFromAnotherAwsAccountRequest$DomainName": "The name of the domain that was specified when another AWS account submitted a TransferDomainToAnotherAwsAccount request.
", + "BillingRecord$DomainName": "The name of the domain that the billing record applies to. If the domain name contains characters other than a-z, 0-9, and - (hyphen), such as an internationalized domain name, then this value is in Punycode. For more information, see DNS Domain Name Format in the Amazon Route 53 Developer Guide.
", + "CancelDomainTransferToAnotherAwsAccountRequest$DomainName": "The name of the domain for which you want to cancel the transfer to another AWS account.
", + "CheckDomainAvailabilityRequest$DomainName": "The name of the domain that you want to get availability for. The top-level domain (TLD), such as .com, must be a TLD that Route 53 supports. For a list of supported TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
The domain name can contain only the following characters:
Letters a through z. Domain names are not case sensitive.
Numbers 0 through 9.
Hyphen (-). You can't specify a hyphen at the beginning or end of a label.
Period (.) to separate the labels in the name, such as the .
in example.com
.
Internationalized domain names are not supported for some top-level domains. To determine whether the TLD that you want to use supports internationalized domain names, see Domains that You Can Register with Amazon Route 53. For more information, see Formatting Internationalized Domain Names.
", + "CheckDomainTransferabilityRequest$DomainName": "The name of the domain that you want to transfer to Route 53. The top-level domain (TLD), such as .com, must be a TLD that Route 53 supports. For a list of supported TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
The domain name can contain only the following characters:
Letters a through z. Domain names are not case sensitive.
Numbers 0 through 9.
Hyphen (-). You can't specify a hyphen at the beginning or end of a label.
Period (.) to separate the labels in the name, such as the .
in example.com
.
The domain for which you want to delete one or more tags.
", "DisableDomainAutoRenewRequest$DomainName": "The name of the domain that you want to disable automatic renewal for.
", "DisableDomainTransferLockRequest$DomainName": "The name of the domain that you want to remove the transfer lock for.
", @@ -220,15 +252,17 @@ "GetContactReachabilityStatusResponse$domainName": "The domain name for which you requested the reachability status.
", "GetDomainDetailRequest$DomainName": "The name of the domain that you want to get detailed information about.
", "GetDomainDetailResponse$DomainName": "The name of a domain.
", - "GetDomainSuggestionsRequest$DomainName": "A domain name that you want to use as the basis for a list of possible domain names. The domain name must contain a top-level domain (TLD), such as .com, that Amazon Route 53 supports. For a list of TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
", + "GetDomainSuggestionsRequest$DomainName": "A domain name that you want to use as the basis for a list of possible domain names. The top-level domain (TLD), such as .com, must be a TLD that Route 53 supports. For a list of supported TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
The domain name can contain only the following characters:
Letters a through z. Domain names are not case sensitive.
Numbers 0 through 9.
Hyphen (-). You can't specify a hyphen at the beginning or end of a label.
Period (.) to separate the labels in the name, such as the .
in example.com
.
Internationalized domain names are not supported for some top-level domains. To determine whether the TLD that you want to use supports internationalized domain names, see Domains that You Can Register with Amazon Route 53.
", "GetOperationDetailResponse$DomainName": "The name of a domain.
", "ListTagsForDomainRequest$DomainName": "The domain for which you want to get a list of tags.
", - "RegisterDomainRequest$DomainName": "The domain name that you want to register.
Constraints: The domain name can contain only the letters a through z, the numbers 0 through 9, and hyphen (-). Internationalized Domain Names are not supported.
", + "RegisterDomainRequest$DomainName": "The domain name that you want to register. The top-level domain (TLD), such as .com, must be a TLD that Route 53 supports. For a list of supported TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
The domain name can contain only the following characters:
Letters a through z. Domain names are not case sensitive.
Numbers 0 through 9.
Hyphen (-). You can't specify a hyphen at the beginning or end of a label.
Period (.) to separate the labels in the name, such as the .
in example.com
.
Internationalized domain names are not supported for some top-level domains. To determine whether the TLD that you want to use supports internationalized domain names, see Domains that You Can Register with Amazon Route 53. For more information, see Formatting Internationalized Domain Names.
", + "RejectDomainTransferFromAnotherAwsAccountRequest$DomainName": "The name of the domain that was specified when another AWS account submitted a TransferDomainToAnotherAwsAccount request.
", "RenewDomainRequest$DomainName": "The name of the domain that you want to renew.
", - "ResendContactReachabilityEmailRequest$domainName": "The name of the domain for which you want Amazon Route 53 to resend a confirmation email to the registrant contact.
", + "ResendContactReachabilityEmailRequest$domainName": "The name of the domain for which you want Route 53 to resend a confirmation email to the registrant contact.
", "ResendContactReachabilityEmailResponse$domainName": "The domain name for which you requested a confirmation email.
", "RetrieveDomainAuthCodeRequest$DomainName": "The name of the domain that you want to get an authorization code for.
", - "TransferDomainRequest$DomainName": "The name of the domain that you want to transfer to Amazon Route 53.
Constraints: The domain name can contain only the letters a through z, the numbers 0 through 9, and hyphen (-). Internationalized Domain Names are not supported.
", + "TransferDomainRequest$DomainName": "The name of the domain that you want to transfer to Route 53. The top-level domain (TLD), such as .com, must be a TLD that Route 53 supports. For a list of supported TLDs, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
The domain name can contain only the following characters:
Letters a through z. Domain names are not case sensitive.
Numbers 0 through 9.
Hyphen (-). You can't specify a hyphen at the beginning or end of a label.
Period (.) to separate the labels in the name, such as the .
in example.com
.
The name of the domain that you want to transfer from the current AWS account to another account.
", "UpdateDomainContactPrivacyRequest$DomainName": "The name of the domain that you want to update the privacy setting for.
", "UpdateDomainContactRequest$DomainName": "The name of the domain that you want to update contact information for.
", "UpdateDomainNameserversRequest$DomainName": "The name of the domain that you want to change name servers for.
", @@ -272,9 +306,9 @@ } }, "DomainTransferability": { - "base": "A complex type that contains information about whether the specified domain can be transferred to Amazon Route 53.
", + "base": "A complex type that contains information about whether the specified domain can be transferred to Route 53.
", "refs": { - "CheckDomainTransferabilityResponse$Transferability": "A complex type that contains information about whether the specified domain can be transferred to Amazon Route 53.
" + "CheckDomainTransferabilityResponse$Transferability": "A complex type that contains information about whether the specified domain can be transferred to Route 53.
" } }, "DuplicateRequest": { @@ -285,8 +319,8 @@ "DurationInYears": { "base": null, "refs": { - "RegisterDomainRequest$DurationInYears": "The number of years that you want to register the domain for. Domains are registered for a minimum of one year. The maximum period depends on the top-level domain. For the range of valid values for your domain, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
Default: 1
", - "RenewDomainRequest$DurationInYears": "The number of years that you want to renew the domain for. The maximum number of years depends on the top-level domain. For the range of valid values for your domain, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
Default: 1
", + "RegisterDomainRequest$DurationInYears": "The number of years that you want to register the domain for. Domains are registered for a minimum of one year. The maximum period depends on the top-level domain. For the range of valid values for your domain, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
Default: 1
", + "RenewDomainRequest$DurationInYears": "The number of years that you want to renew the domain for. The maximum number of years depends on the top-level domain. For the range of valid values for your domain, see Domains that You Can Register with Amazon Route 53 in the Amazon Route 53 Developer Guide.
Default: 1
", "TransferDomainRequest$DurationInYears": "The number of years that you want to register the domain for. Domains are registered for a minimum of one year. The maximum period depends on the top-level domain.
Default: 1
" } }, @@ -345,13 +379,13 @@ "ExtraParamName": { "base": null, "refs": { - "ExtraParam$Name": "Name of the additional parameter required by the top-level domain. Here are the top-level domains that require additional parameters and which parameters they require:
.com.au and .net.au: AU_ID_NUMBER
and AU_ID_TYPE
.ca: BRAND_NUMBER
, CA_LEGAL_TYPE
, and CA_BUSINESS_ENTITY_TYPE
.es: ES_IDENTIFICATION
, ES_IDENTIFICATION_TYPE
, and ES_LEGAL_FORM
.fi: BIRTH_DATE_IN_YYYY_MM_DD
, FI_BUSINESS_NUMBER
, FI_ID_NUMBER
, FI_NATIONALITY
, and FI_ORGANIZATION_TYPE
.fr: BRAND_NUMBER
, BIRTH_DEPARTMENT
, BIRTH_DATE_IN_YYYY_MM_DD
, BIRTH_COUNTRY
, and BIRTH_CITY
.it: BIRTH_COUNTRY
, IT_PIN
, and IT_REGISTRANT_ENTITY_TYPE
.ru: BIRTH_DATE_IN_YYYY_MM_DD
and RU_PASSPORT_DATA
.se: BIRTH_COUNTRY
and SE_ID_NUMBER
.sg: SG_ID_NUMBER
.co.uk, .me.uk, and .org.uk: UK_CONTACT_TYPE
and UK_COMPANY_NUMBER
In addition, many TLDs require VAT_NUMBER
.
The name of an additional parameter that is required by a top-level domain. Here are the top-level domains that require additional parameters and the names of the parameters that they require:
AU_ID_NUMBER
AU_ID_TYPE
Valid values include the following:
ABN
(Australian business number)
ACN
(Australian company number)
TM
(Trademark number)
BRAND_NUMBER
CA_BUSINESS_ENTITY_TYPE
Valid values include the following:
BANK
(Bank)
COMMERCIAL_COMPANY
(Commercial company)
COMPANY
(Company)
COOPERATION
(Cooperation)
COOPERATIVE
(Cooperative)
COOPRIX
(Cooprix)
CORP
(Corporation)
CREDIT_UNION
(Credit union)
FOMIA
(Federation of mutual insurance associations)
INC
(Incorporated)
LTD
(Limited)
LTEE
(Limitée)
LLC
(Limited liability corporation)
LLP
(Limited liability partnership)
LTE
(Lte.)
MBA
(Mutual benefit association)
MIC
(Mutual insurance company)
NFP
(Not-for-profit corporation)
SA
(S.A.)
SAVINGS_COMPANY
(Savings company)
SAVINGS_UNION
(Savings union)
SARL
(Société à responsabilité limitée)
TRUST
(Trust)
ULC
(Unlimited liability corporation)
CA_LEGAL_TYPE
When ContactType
is PERSON
, valid values include the following:
ABO
(Aboriginal Peoples indigenous to Canada)
CCT
(Canadian citizen)
LGR
(Legal Representative of a Canadian Citizen or Permanent Resident)
RES
(Permanent resident of Canada)
When ContactType
is a value other than PERSON
, valid values include the following:
ASS
(Canadian unincorporated association)
CCO
(Canadian corporation)
EDU
(Canadian educational institution)
GOV
(Government or government entity in Canada)
HOP
(Canadian Hospital)
INB
(Indian Band recognized by the Indian Act of Canada)
LAM
(Canadian Library, Archive, or Museum)
MAJ
(Her/His Majesty the Queen/King)
OMK
(Official mark registered in Canada)
PLT
(Canadian Political Party)
PRT
(Partnership Registered in Canada)
TDM
(Trademark registered in Canada)
TRD
(Canadian Trade Union)
TRS
(Trust established in Canada)
ES_IDENTIFICATION
Specify the applicable value:
For contacts inside Spain: Enter your passport ID.
For contacts outside of Spain: Enter the VAT identification number for the company.
For .es domains, the value of ContactType
must be PERSON
.
ES_IDENTIFICATION_TYPE
Valid values include the following:
DNI_AND_NIF
(For Spanish contacts)
NIE
(For foreigners with legal residence)
OTHER
(For contacts outside of Spain)
ES_LEGAL_FORM
Valid values include the following:
ASSOCIATION
CENTRAL_GOVERNMENT_BODY
CIVIL_SOCIETY
COMMUNITY_OF_OWNERS
COMMUNITY_PROPERTY
CONSULATE
COOPERATIVE
DESIGNATION_OF_ORIGIN_SUPERVISORY_COUNCIL
ECONOMIC_INTEREST_GROUP
EMBASSY
ENTITY_MANAGING_NATURAL_AREAS
FARM_PARTNERSHIP
FOUNDATION
GENERAL_AND_LIMITED_PARTNERSHIP
GENERAL_PARTNERSHIP
INDIVIDUAL
LIMITED_COMPANY
LOCAL_AUTHORITY
LOCAL_PUBLIC_ENTITY
MUTUAL_INSURANCE_COMPANY
NATIONAL_PUBLIC_ENTITY
ORDER_OR_RELIGIOUS_INSTITUTION
OTHERS (Only for contacts outside of Spain)
POLITICAL_PARTY
PROFESSIONAL_ASSOCIATION
PUBLIC_LAW_ASSOCIATION
PUBLIC_LIMITED_COMPANY
REGIONAL_GOVERNMENT_BODY
REGIONAL_PUBLIC_ENTITY
SAVINGS_BANK
SPANISH_OFFICE
SPORTS_ASSOCIATION
SPORTS_FEDERATION
SPORTS_LIMITED_COMPANY
TEMPORARY_ALLIANCE_OF_ENTERPRISES
TRADE_UNION
WORKER_OWNED_COMPANY
WORKER_OWNED_LIMITED_COMPANY
BIRTH_DATE_IN_YYYY_MM_DD
FI_BUSINESS_NUMBER
FI_ID_NUMBER
FI_NATIONALITY
Valid values include the following:
FINNISH
NOT_FINNISH
FI_ORGANIZATION_TYPE
Valid values include the following:
COMPANY
CORPORATION
GOVERNMENT
INSTITUTION
POLITICAL_PARTY
PUBLIC_COMMUNITY
TOWNSHIP
BIRTH_CITY
BIRTH_COUNTRY
BIRTH_DATE_IN_YYYY_MM_DD
BIRTH_DEPARTMENT
: Specify the INSEE code that corresponds with the department where the contact was born. If the contact was born somewhere other than France or its overseas departments, specify 99
. For more information, including a list of departments and the corresponding INSEE numbers, see the Wikipedia entry Departments of France.
BRAND_NUMBER
IT_NATIONALITY
IT_PIN
IT_REGISTRANT_ENTITY_TYPE
Valid values include the following:
FOREIGNERS
FREELANCE_WORKERS
(Freelance workers and professionals)
ITALIAN_COMPANIES
(Italian companies and one-person companies)
NON_PROFIT_ORGANIZATIONS
OTHER_SUBJECTS
PUBLIC_ORGANIZATIONS
BIRTH_DATE_IN_YYYY_MM_DD
RU_PASSPORT_DATA
BIRTH_COUNTRY
SE_ID_NUMBER
SG_ID_NUMBER
UK_CONTACT_TYPE
Valid values include the following:
CRC
(UK Corporation by Royal Charter)
FCORP
(Non-UK Corporation)
FIND
(Non-UK Individual, representing self)
FOTHER
(Non-UK Entity that does not fit into any other category)
GOV
(UK Government Body)
IND
(UK Individual (representing self))
IP
(UK Industrial/Provident Registered Company)
LLP
(UK Limited Liability Partnership)
LTD
(UK Limited Company)
OTHER
(UK Entity that does not fit into any other category)
PLC
(UK Public Limited Company)
PTNR
(UK Partnership)
RCHAR
(UK Registered Charity)
SCH
(UK School)
STAT
(UK Statutory Body)
STRA
(UK Sole Trader)
UK_COMPANY_NUMBER
In addition, many TLDs require a VAT_NUMBER
.
Values corresponding to the additional parameter names required by some top-level domains.
" + "ExtraParam$Value": "The value that corresponds with the name of an extra parameter.
" } }, "FIAuthKey": { @@ -391,7 +425,7 @@ } }, "GetOperationDetailRequest": { - "base": "The GetOperationDetail request includes the following element.
", + "base": "The GetOperationDetail request includes the following element.
", "refs": { } }, @@ -421,11 +455,11 @@ "Integer": { "base": null, "refs": { - "GetDomainSuggestionsRequest$SuggestionCount": "The number of suggested domain names that you want Amazon Route 53 to return.
" + "GetDomainSuggestionsRequest$SuggestionCount": "The number of suggested domain names that you want Route 53 to return. Specify a value between 1 and 50.
" } }, "InvalidInput": { - "base": "The requested item is not acceptable. For example, for an OperationId it might refer to the ID of an operation that is already completed. For a domain name, it might not be a valid domain name or belong to the requester account.
", + "base": "The requested item is not acceptable. For example, for APIs that accept a domain name, the request might specify a domain name that doesn't belong to the account that submitted the request. For AcceptDomainTransferFromAnotherAwsAccount
, the password might be invalid.
Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", + "AcceptDomainTransferFromAnotherAwsAccountResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", + "CancelDomainTransferToAnotherAwsAccountResponse$OperationId": "The identifier that TransferDomainToAnotherAwsAccount
returned to track the progress of the request. Because the transfer request was canceled, the value is no longer valid, and you can't use GetOperationDetail
to query the operation status.
Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", "EnableDomainTransferLockResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", - "GetOperationDetailRequest$OperationId": "The identifier for the operation for which you want to get the status. Amazon Route 53 returned the identifier in the response to the original request.
", + "GetOperationDetailRequest$OperationId": "The identifier for the operation for which you want to get the status. Route 53 returned the identifier in the response to the original request.
", "GetOperationDetailResponse$OperationId": "The identifier for the operation.
", "OperationSummary$OperationId": "Identifier returned to track the requested action.
", - "RegisterDomainResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", - "RenewDomainResponse$OperationId": "The identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", - "TransferDomainResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", + "RegisterDomainResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", + "RejectDomainTransferFromAnotherAwsAccountResponse$OperationId": "The identifier that TransferDomainToAnotherAwsAccount
returned to track the progress of the request. Because the transfer request was rejected, the value is no longer valid, and you can't use GetOperationDetail
to query the operation status.
Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", + "TransferDomainResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", + "TransferDomainToAnotherAwsAccountResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", "UpdateDomainContactPrivacyResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", - "UpdateDomainContactResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
", - "UpdateDomainNameserversResponse$OperationId": "Identifier for tracking the progress of the request. To use this ID to query the operation status, use GetOperationDetail.
" + "UpdateDomainContactResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
", + "UpdateDomainNameserversResponse$OperationId": "Identifier for tracking the progress of the request. To query the operation status, use GetOperationDetail.
" } }, "OperationLimitExceeded": { @@ -600,6 +638,16 @@ "GetDomainDetailResponse$RegistryDomainId": "Reserved for future use.
" } }, + "RejectDomainTransferFromAnotherAwsAccountRequest": { + "base": "The RejectDomainTransferFromAnotherAwsAccount request includes the following element.
", + "refs": { + } + }, + "RejectDomainTransferFromAnotherAwsAccountResponse": { + "base": "The RejectDomainTransferFromAnotherAwsAccount response includes the following element.
", + "refs": { + } + }, "RenewDomainRequest": { "base": "A RenewDomain
request includes the number of years that you want to renew for and the current expiration year.
Reseller of the domain. Domains registered or transferred using Amazon Route 53 domains will have \"Amazon\"
as the reseller.
Reseller of the domain. Domains registered or transferred using Route 53 domains will have \"Amazon\"
as the reseller.
Whether the domain name is available for registering.
You can register only the domains that are designated as AVAILABLE
.
Valid values:
The domain name is available.
The domain name is reserved under specific conditions.
The domain name is available and can be preordered.
The TLD registry didn't reply with a definitive answer about whether the domain name is available. Amazon Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.
The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.
The domain name has been reserved for another person or organization.
The domain name is not available.
The domain name is not available.
The domain name is forbidden.
The password that was returned by the TransferDomainToAnotherAwsAccount request.
", + "DomainSuggestion$Availability": "Whether the domain name is available for registering.
You can register only the domains that are designated as AVAILABLE
.
Valid values:
The domain name is available.
The domain name is reserved under specific conditions.
The domain name is available and can be preordered.
The TLD registry didn't reply with a definitive answer about whether the domain name is available. Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.
The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.
The domain name has been reserved for another person or organization.
The domain name is not available.
The domain name is not available.
The domain name is forbidden.
To finish transferring a domain to another AWS account, the account that the domain is being transferred to must submit an AcceptDomainTransferFromAnotherAwsAccount request. The request must include the value of the Password
element that was returned in the TransferDomainToAnotherAwsAccount
response.
The date that the operation was billed, in Unix format.
", - "DomainSummary$Expiry": "Expiration date of the domain in Coordinated Universal Time (UTC).
", - "GetDomainDetailResponse$CreationDate": "The date when the domain was created as found in the response to a WHOIS query. The date and time is in Coordinated Universal time (UTC).
", - "GetDomainDetailResponse$UpdatedDate": "The last updated date of the domain as found in the response to a WHOIS query. The date and time is in Coordinated Universal time (UTC).
", - "GetDomainDetailResponse$ExpirationDate": "The date when the registration for the domain is set to expire. The date and time is in Coordinated Universal time (UTC).
", + "DomainSummary$Expiry": "Expiration date of the domain in Unix time format and Coordinated Universal Time (UTC).
", + "GetDomainDetailResponse$CreationDate": "The date when the domain was created as found in the response to a WHOIS query. The date and time is in Unix time format and Coordinated Universal time (UTC).
", + "GetDomainDetailResponse$UpdatedDate": "The last updated date of the domain as found in the response to a WHOIS query. The date and time is in Unix time format and Coordinated Universal time (UTC).
", + "GetDomainDetailResponse$ExpirationDate": "The date when the registration for the domain is set to expire. The date and time is in Unix time format and Coordinated Universal time (UTC).
", "GetOperationDetailResponse$SubmittedDate": "The date when the request was submitted.
", - "ListOperationsRequest$SubmittedSince": "An optional parameter that lets you get information about all the operations that you submitted after a specified date and time. Specify the date and time in Coordinated Universal time (UTC).
", + "ListOperationsRequest$SubmittedSince": "An optional parameter that lets you get information about all the operations that you submitted after a specified date and time. Specify the date and time in Unix time format and Coordinated Universal time (UTC).
", "OperationSummary$SubmittedDate": "The date when the request was submitted.
", - "ViewBillingRequest$Start": "The beginning date and time for the time period for which you want a list of billing records. Specify the date and time in Coordinated Universal time (UTC).
", - "ViewBillingRequest$End": "The end date and time for the time period for which you want a list of billing records. Specify the date and time in Coordinated Universal time (UTC).
" + "ViewBillingRequest$Start": "The beginning date and time for the time period for which you want a list of billing records. Specify the date and time in Unix time format and Coordinated Universal time (UTC).
", + "ViewBillingRequest$End": "The end date and time for the time period for which you want a list of billing records. Specify the date and time in Unix time format and Coordinated Universal time (UTC).
" } }, "TransferDomainRequest": { @@ -706,12 +756,22 @@ } }, "TransferDomainResponse": { - "base": "The TranserDomain response includes the following element.
", + "base": "The TransferDomain response includes the following element.
", + "refs": { + } + }, + "TransferDomainToAnotherAwsAccountRequest": { + "base": "The TransferDomainToAnotherAwsAccount request includes the following elements.
", + "refs": { + } + }, + "TransferDomainToAnotherAwsAccountResponse": { + "base": "The TransferDomainToAnotherAwsAccount
response includes the following elements.
Whether the domain name can be transferred to Amazon Route 53.
You can transfer only domains that have a value of TRANSFERABLE
for Transferable
.
Valid values:
The domain name can be transferred to Amazon Route 53.
The domain name can't be transferred to Amazon Route 53.
Reserved for future use.
Whether the domain name can be transferred to Route 53.
You can transfer only domains that have a value of TRANSFERABLE
for Transferable
.
Valid values:
The domain name can be transferred to Route 53.
The domain name can't be transferred to Route 53.
Reserved for future use.
Amazon Augmented AI (Augmented AI) (Preview) is a service that adds human judgment to any machine learning application. Human reviewers can take over when an AI application can't evaluate data with a high degree of confidence.
From fraudulent bank transaction identification to document processing to image analysis, machine learning models can be trained to make decisions as well as or better than a human. Nevertheless, some decisions require contextual interpretation, such as when you need to decide whether an image is appropriate for a given audience. Content moderation guidelines are nuanced and highly dependent on context, and they vary between countries. When trying to apply AI in these situations, you can be forced to choose between \"ML only\" systems with unacceptably high error rates or \"human only\" systems that are expensive and difficult to scale, and that slow down decision making.
This API reference includes information about API actions and data types you can use to interact with Augmented AI programmatically.
You can create a flow definition against the Augmented AI API. Provide the Amazon Resource Name (ARN) of a flow definition to integrate AI service APIs, such as Textract.AnalyzeDocument
and Rekognition.DetectModerationLabels
. These AI services, in turn, invoke the StartHumanLoop API, which evaluates conditions under which humans will be invoked. If humans are required, Augmented AI creates a human loop. Results of human work are available asynchronously in Amazon Simple Storage Service (Amazon S3). You can use Amazon CloudWatch Events to detect human work results.
You can find additional Augmented AI API documentation in the following reference guides: Amazon Rekognition, Amazon SageMaker, and Amazon Textract.
", + "service": "Amazon Augmented AI is in preview release and is subject to change. We do not recommend using this product in production environments.
Amazon Augmented AI (Amazon A2I) adds the benefit of human judgment to any machine learning application. When an AI application can't evaluate data with a high degree of confidence, human reviewers can take over. This human review is called a human review workflow. To create and start a human review workflow, you need three resources: a worker task template, a flow definition, and a human loop.
For information about these resources and prerequisites for using Amazon A2I, see Get Started with Amazon Augmented AI in the Amazon SageMaker Developer Guide.
This API reference includes information about API actions and data types that you can use to interact with Amazon A2I programmatically. Use this guide to:
Start a human loop with the StartHumanLoop
operation when using Amazon A2I with a custom task type. To learn more about the difference between custom and built-in task types, see Use Task Types . To learn how to start a human loop using this API, see Create and Start a Human Loop for a Custom Task Type in the Amazon SageMaker Developer Guide.
Manage your human loops. You can list all human loops that you have created, describe individual human loops, and stop and delete human loops. To learn more, see Monitor and Manage Your Human Loop in the Amazon SageMaker Developer Guide.
Amazon A2I integrates APIs from various AWS services to create and start human review workflows for those services. To learn how Amazon A2I uses these APIs, see Use APIs in Amazon A2I in the Amazon SageMaker Developer Guide.
", "operations": { "DeleteHumanLoop": "Deletes the specified human loop for a flow definition.
", "DescribeHumanLoop": "Returns information about the specified human loop.
", @@ -50,7 +50,7 @@ "base": null, "refs": { "ConflictException$Message": null, - "HumanLoopSummary$FailureReason": "The reason why the human loop failed. A failure reason is returned only when the status of the human loop is Failed
.
The reason why the human loop failed. A failure reason is returned when the status of the human loop is Failed
.
The Amazon Resource Name (ARN) of the flow definition.
", - "HumanLoopSummary$FlowDefinitionArn": "The Amazon Resource Name (ARN) of the flow definition.
", + "HumanLoopSummary$FlowDefinitionArn": "The Amazon Resource Name (ARN) of the flow definition used to configure the human loop.
", "ListHumanLoopsRequest$FlowDefinitionArn": "The Amazon Resource Name (ARN) of a flow definition.
", - "StartHumanLoopRequest$FlowDefinitionArn": "The Amazon Resource Name (ARN) of the flow definition.
" + "StartHumanLoopRequest$FlowDefinitionArn": "The Amazon Resource Name (ARN) of the flow definition associated with this human loop.
" } }, "HumanLoopArn": { @@ -77,43 +77,43 @@ "HumanLoopDataAttributes": { "base": "Attributes of the data specified by the customer. Use these to describe the data to be labeled.
", "refs": { - "StartHumanLoopRequest$DataAttributes": "Attributes of the data specified by the customer.
" + "StartHumanLoopRequest$DataAttributes": "Attributes of the specified data. Use DataAttributes
to specify if your data is free of personally identifiable information and/or free of adult content.
An object containing the human loop input in JSON format.
", "refs": { - "StartHumanLoopRequest$HumanLoopInput": "An object containing information about the human loop.
" + "StartHumanLoopRequest$HumanLoopInput": "An object that contains information about the human loop.
" } }, "HumanLoopName": { "base": null, "refs": { - "DeleteHumanLoopRequest$HumanLoopName": "The name of the human loop you want to delete.
", - "DescribeHumanLoopRequest$HumanLoopName": "The unique name of the human loop.
", - "DescribeHumanLoopResponse$HumanLoopName": "The name of the human loop.
", + "DeleteHumanLoopRequest$HumanLoopName": "The name of the human loop that you want to delete.
", + "DescribeHumanLoopRequest$HumanLoopName": "The name of the human loop that you want information about.
", + "DescribeHumanLoopResponse$HumanLoopName": "The name of the human loop. The name must be lowercase, unique within the Region in your account, and can have up to 63 characters. Valid characters: a-z, 0-9, and - (hyphen).
", "HumanLoopSummary$HumanLoopName": "The name of the human loop.
", "StartHumanLoopRequest$HumanLoopName": "The name of the human loop.
", - "StopHumanLoopRequest$HumanLoopName": "The name of the human loop you want to stop.
" + "StopHumanLoopRequest$HumanLoopName": "The name of the human loop that you want to stop.
" } }, "HumanLoopOutput": { "base": "Information about where the human output will be stored.
", "refs": { - "DescribeHumanLoopResponse$HumanLoopOutput": "An object containing information about the output of the human loop.
" + "DescribeHumanLoopResponse$HumanLoopOutput": "An object that contains information about the output of the human loop.
" } }, "HumanLoopStatus": { "base": null, "refs": { - "DescribeHumanLoopResponse$HumanLoopStatus": "The status of the human loop. Valid values:
", - "HumanLoopSummary$HumanLoopStatus": "The status of the human loop. Valid values:
" + "DescribeHumanLoopResponse$HumanLoopStatus": "The status of the human loop.
", + "HumanLoopSummary$HumanLoopStatus": "The status of the human loop.
" } }, "HumanLoopSummaries": { "base": null, "refs": { - "ListHumanLoopsResponse$HumanLoopSummaries": "An array of objects containing information about the human loops.
" + "ListHumanLoopsResponse$HumanLoopSummaries": "An array of objects that contain information about the human loops.
" } }, "HumanLoopSummary": { @@ -129,7 +129,7 @@ } }, "InternalServerException": { - "base": "Your request could not be processed.
", + "base": "We couldn't process your request because of an issue with the server. Try again later.
", "refs": { } }, @@ -146,30 +146,30 @@ "MaxResults": { "base": null, "refs": { - "ListHumanLoopsRequest$MaxResults": "The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
will be provided in the output that you can use to resume pagination.
The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
is returned in the output. You can use this token to display the next page of results.
A token to resume pagination.
", - "ListHumanLoopsResponse$NextToken": "A token to resume pagination.
" + "ListHumanLoopsRequest$NextToken": "A token to display the next page of results.
", + "ListHumanLoopsResponse$NextToken": "A token to display the next page of results.
" } }, "ResourceNotFoundException": { - "base": "We were unable to find the requested resource.
", + "base": "We couldn't find the requested resource.
", "refs": { } }, "ServiceQuotaExceededException": { - "base": "You have exceeded your service quota. To perform the requested action, remove some of the relevant resources, or request a service quota increase.
", + "base": "You exceeded your service quota. Delete some resources or request an increase in your service quota.
", "refs": { } }, "SortOrder": { "base": null, "refs": { - "ListHumanLoopsRequest$SortOrder": "An optional value that specifies whether you want the results sorted in Ascending
or Descending
order.
Optional. The order for displaying results. Valid values: Ascending
and Descending
.
The reason why a human loop has failed. The failure reason is returned when the human loop status is Failed
.
A failure code denoting a specific type of failure.
", + "DescribeHumanLoopResponse$FailureReason": "The reason why a human loop failed. The failure reason is returned when the status of the human loop is Failed
.
A failure code that identifies the type of failure.
", "HumanLoopOutput$OutputS3Uri": "The location of the Amazon S3 object where Amazon Augmented AI stores your human loop output.
" } }, "ThrottlingException": { - "base": "Your request has exceeded the allowed amount of requests.
", + "base": "You exceeded the maximum number of requests.
", "refs": { } }, @@ -215,7 +215,7 @@ } }, "ValidationException": { - "base": "Your request was not valid. Check the syntax and try again.
", + "base": "The request isn't valid. Check the syntax and try again.
", "refs": { } } diff --git a/models/apis/sagemaker/2017-07-24/api-2.json b/models/apis/sagemaker/2017-07-24/api-2.json index ffb4838443a..74eae8b88f3 100644 --- a/models/apis/sagemaker/2017-07-24/api-2.json +++ b/models/apis/sagemaker/2017-07-24/api-2.json @@ -2690,6 +2690,7 @@ ], "members":{ "FlowDefinitionName":{"shape":"FlowDefinitionName"}, + "HumanLoopRequestSource":{"shape":"HumanLoopRequestSource"}, "HumanLoopActivationConfig":{"shape":"HumanLoopActivationConfig"}, "HumanLoopConfig":{"shape":"HumanLoopConfig"}, "OutputConfig":{"shape":"FlowDefinitionOutputConfig"}, @@ -3096,6 +3097,7 @@ "CreationTime":{"type":"timestamp"}, "CsvContentType":{ "type":"string", + "max":256, "min":1, "pattern":"^[a-zA-Z0-9](-*[a-zA-Z0-9])*\\/[a-zA-Z0-9](-*[a-zA-Z0-9.])*" }, @@ -3689,6 +3691,7 @@ "FlowDefinitionName":{"shape":"FlowDefinitionName"}, "FlowDefinitionStatus":{"shape":"FlowDefinitionStatus"}, "CreationTime":{"shape":"Timestamp"}, + "HumanLoopRequestSource":{"shape":"HumanLoopRequestSource"}, "HumanLoopActivationConfig":{"shape":"HumanLoopActivationConfig"}, "HumanLoopConfig":{"shape":"HumanLoopConfig"}, "OutputConfig":{"shape":"FlowDefinitionOutputConfig"}, @@ -4433,7 +4436,7 @@ "EnvironmentArn":{ "type":"string", "max":256, - "pattern":"^arn:aws(-[\\w]+)*:sagemaker:.+:[0-9]{12}:environment/[a-z0-9](-*[a-z0-9]){0,62}$" + "pattern":"^arn:aws(-[\\w]+)*:sagemaker:.+:[0-9]{12}:environment/[a-z0-9]([-.]?[a-z0-9])*$" }, "EnvironmentKey":{ "type":"string", @@ -4650,8 +4653,7 @@ "Initializing", "Active", "Failed", - "Deleting", - "Deleted" + "Deleting" ] }, "FlowDefinitionSummaries":{ @@ -4721,7 +4723,8 @@ "MXNET", "ONNX", "PYTORCH", - "XGBOOST" + "XGBOOST", + "TFLITE" ] }, "GenerateCandidateDefinitionsOnly":{"type":"boolean"}, @@ -4781,12 +4784,8 @@ }, "HumanLoopActivationConfig":{ "type":"structure", - "required":[ - "HumanLoopRequestSource", - "HumanLoopActivationConditionsConfig" - ], + "required":["HumanLoopActivationConditionsConfig"], "members":{ - "HumanLoopRequestSource":{"shape":"HumanLoopRequestSource"}, "HumanLoopActivationConditionsConfig":{"shape":"HumanLoopActivationConditionsConfig"} } }, @@ -5257,6 +5256,7 @@ }, "JsonContentType":{ "type":"string", + "max":256, "min":1, "pattern":"^[a-zA-Z0-9](-*[a-zA-Z0-9])*\\/[a-zA-Z0-9](-*[a-zA-Z0-9.])*" }, @@ -6897,7 +6897,8 @@ "LessThanOrEqualTo", "Contains", "Exists", - "NotExists" + "NotExists", + "In" ] }, "OptionalDouble":{"type":"double"}, @@ -7110,6 +7111,33 @@ "ml.r5.24xlarge" ] }, + "ProcessingJob":{ + "type":"structure", + "members":{ + "ProcessingInputs":{"shape":"ProcessingInputs"}, + "ProcessingOutputConfig":{"shape":"ProcessingOutputConfig"}, + "ProcessingJobName":{"shape":"ProcessingJobName"}, + "ProcessingResources":{"shape":"ProcessingResources"}, + "StoppingCondition":{"shape":"ProcessingStoppingCondition"}, + "AppSpecification":{"shape":"AppSpecification"}, + "Environment":{"shape":"ProcessingEnvironmentMap"}, + "NetworkConfig":{"shape":"NetworkConfig"}, + "RoleArn":{"shape":"RoleArn"}, + "ExperimentConfig":{"shape":"ExperimentConfig"}, + "ProcessingJobArn":{"shape":"ProcessingJobArn"}, + "ProcessingJobStatus":{"shape":"ProcessingJobStatus"}, + "ExitMessage":{"shape":"ExitMessage"}, + "FailureReason":{"shape":"FailureReason"}, + "ProcessingEndTime":{"shape":"Timestamp"}, + "ProcessingStartTime":{"shape":"Timestamp"}, + "LastModifiedTime":{"shape":"Timestamp"}, + "CreationTime":{"shape":"Timestamp"}, + "MonitoringScheduleArn":{"shape":"MonitoringScheduleArn"}, + "AutoMLJobArn":{"shape":"AutoMLJobArn"}, + 
"TrainingJobArn":{"shape":"TrainingJobArn"}, + "Tags":{"shape":"TagList"} + } + }, "ProcessingJobArn":{ "type":"string", "max":256, @@ -8110,6 +8138,7 @@ "sbe_c", "qcs605", "qcs603", + "sitara_am57x", "amba_cv22" ] }, @@ -8240,7 +8269,12 @@ "ml.c5.2xlarge", "ml.c5.4xlarge", "ml.c5.9xlarge", - "ml.c5.18xlarge" + "ml.c5.18xlarge", + "ml.c5n.xlarge", + "ml.c5n.2xlarge", + "ml.c5n.4xlarge", + "ml.c5n.9xlarge", + "ml.c5n.18xlarge" ] }, "TrainingInstanceTypes":{ @@ -8679,7 +8713,9 @@ "enum":[ "InProgress", "Completed", - "Failed" + "Failed", + "Stopping", + "Stopped" ] }, "TrialComponentSimpleSummaries":{ @@ -8713,7 +8749,8 @@ "type":"structure", "members":{ "SourceArn":{"shape":"TrialComponentSourceArn"}, - "TrainingJob":{"shape":"TrainingJob"} + "TrainingJob":{"shape":"TrainingJob"}, + "ProcessingJob":{"shape":"ProcessingJob"} } }, "TrialComponentStatus":{ diff --git a/models/apis/sagemaker/2017-07-24/docs-2.json b/models/apis/sagemaker/2017-07-24/docs-2.json index 08bf0679d4f..d096fff48f4 100644 --- a/models/apis/sagemaker/2017-07-24/docs-2.json +++ b/models/apis/sagemaker/2017-07-24/docs-2.json @@ -1,17 +1,17 @@ { "version": "2.0", - "service": "Provides APIs for creating and managing Amazon SageMaker resources.
", + "service": "Provides APIs for creating and managing Amazon SageMaker resources.
Other Resources:
", "operations": { "AddTags": "Adds or overwrites one or more tags for the specified Amazon SageMaker resource. You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints.
Each tag consists of a key and an optional value. Tag keys must be unique per resource. For more information about tags, see AWS Tagging Strategies.
Tags that you add to a hyperparameter tuning job by calling this API are also added to any training jobs that the hyperparameter tuning job launches after you call this API, but not to training jobs that the hyperparameter tuning job launched before you called this API. To make sure that the tags associated with a hyperparameter tuning job are also added to all training jobs that the hyperparameter tuning job launches, add the tags when you first create the tuning job by specifying them in the Tags
parameter of CreateHyperParameterTuningJob
Associates a trial component with a trial. A trial component can be associated with multiple trials. To disassociate a trial component from a trial, call the DisassociateTrialComponent API.
", "CreateAlgorithm": "Create a machine learning algorithm that you can use in Amazon SageMaker and list in the AWS Marketplace.
", "CreateApp": "Creates a running App for the specified UserProfile. Supported Apps are JupyterServer and KernelGateway. This operation is automatically invoked by Amazon SageMaker Amazon SageMaker Studio (Studio) upon access to the associated Studio Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously. Apps will automatically terminate and be deleted when stopped from within Studio, or when the DeleteApp API is manually called. UserProfiles are limited to 5 concurrently running Apps at a time.
", - "CreateAutoMLJob": "Creates an AutoPilot job.
", + "CreateAutoMLJob": "Creates an AutoPilot job.
After you run an AutoPilot job, you can find the best performing model by calling , and then deploy that model by following the steps described in Step 6.1: Deploy the Model to Amazon SageMaker Hosting Services.
For information about how to use AutoPilot, see Use AutoPilot to Automate Model Development.
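As a rough illustration of the CreateAutoMLJob request described above, here is a minimal sketch using this repository's Go client (aws-sdk-go-v2 at this release). The struct and field names follow the AutoML shapes in this model, but the job name, bucket, role ARN, and target column are placeholders, and enum values are written as string casts rather than generated constants; verify the exact identifiers against the generated sagemaker package before relying on them.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := sagemaker.New(cfg)

	// Input data must be CSV; the model notes a minimum of 1000 rows.
	req := client.CreateAutoMLJobRequest(&sagemaker.CreateAutoMLJobInput{
		AutoMLJobName: aws.String("example-automl-job"), // hypothetical name
		RoleArn:       aws.String("arn:aws:iam::123456789012:role/ExampleSageMakerRole"),
		InputDataConfig: []sagemaker.AutoMLChannel{{
			TargetAttributeName: aws.String("label"), // column AutoPilot should predict
			DataSource: &sagemaker.AutoMLDataSource{
				S3DataSource: &sagemaker.AutoMLS3DataSource{
					S3DataType: sagemaker.AutoMLS3DataType("S3Prefix"),
					S3Uri:      aws.String("s3://example-bucket/automl/train/"),
				},
			},
		}},
		OutputDataConfig: &sagemaker.AutoMLOutputDataConfig{
			S3OutputPath: aws.String("s3://example-bucket/automl/output/"),
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("create AutoML job: %v", err)
	}
	log.Printf("started AutoML job: %v", resp)
}
```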
", "CreateCodeRepository": "Creates a Git repository as a resource in your Amazon SageMaker account. You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your Amazon SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with.
The repository can be hosted either in AWS CodeCommit or in any other Git repository.
", "CreateCompilationJob": "Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with AWS IoT Greengrass. In that case, deploy them as an ML resource.
In the request body, you provide the following:
A name for the compilation job
Information about the input model artifacts
The output location for the compiled model and the device (target) that the model runs on
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job
You can also provide a Tag
to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn
for the compiled job.
To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
", "CreateDomain": "Creates a Domain for Amazon SageMaker Amazon SageMaker Studio (Studio), which can be accessed by end-users in a web browser. A Domain has an associated directory, list of authorized users, and a variety of security, application, policies, and Amazon Virtual Private Cloud configurations. An AWS account is limited to one Domain, per region. Users within a domain can share notebook files and other artifacts with each other. When a Domain is created, an Amazon Elastic File System (EFS) is also created for use by all of the users within the Domain. Each user receives a private home directory within the EFS for notebooks, Git repositories, and data files.
", - "CreateEndpoint": "Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig API.
Use this API to deploy models using Amazon SageMaker hosting services.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
You must not delete an EndpointConfig
that is in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
The endpoint name must be unique within an AWS Region in your AWS account.
When it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.
When Amazon SageMaker receives the request, it sets the endpoint status to Creating
. After it creates the endpoint, it sets the status to InService
. Amazon SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint API.
If any of the models hosted at this endpoint get model data from an Amazon S3 location, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provided. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
", - "CreateEndpointConfig": "Creates an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel
API, to deploy and the resources that you want Amazon SageMaker to provision. Then you call the CreateEndpoint API.
Use this API if you want to use Amazon SageMaker hosting services to deploy models into production.
In the request, you define a ProductionVariant
, for each model that you want to deploy. Each ProductionVariant
parameter also describes the resources that you want Amazon SageMaker to provision. This includes the number and type of ML compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight
to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
", + "CreateEndpoint": "Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig API.
Use this API to deploy models using Amazon SageMaker hosting services.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
You must not delete an EndpointConfig
that is in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
The endpoint name must be unique within an AWS Region in your AWS account.
When it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.
When Amazon SageMaker receives the request, it sets the endpoint status to Creating
. After it creates the endpoint, it sets the status to InService
. Amazon SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint API.
If any of the models hosted at this endpoint get model data from an Amazon S3 location, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provided. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
", + "CreateEndpointConfig": "Creates an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel
API, to deploy and the resources that you want Amazon SageMaker to provision. Then you call the CreateEndpoint API.
Use this API if you want to use Amazon SageMaker hosting services to deploy models into production.
In the request, you define a ProductionVariant
, for each model that you want to deploy. Each ProductionVariant
parameter also describes the resources that you want Amazon SageMaker to provision. This includes the number and type of ML compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight
to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
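The weighted-variant description above lends itself to a short, hedged sketch in this repository's Go SDK (aws-sdk-go-v2 at this release): an endpoint configuration with two variants weighted 2 and 1, followed by endpoint creation. The names are placeholders, the instance type is an arbitrary choice, and enum values are written as string casts; the generated struct and field identifiers should be checked against the sagemaker package.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := sagemaker.New(cfg)

	// Two variants with weights 2 and 1: model A receives two-thirds of the
	// traffic and model B one-third, matching the description above.
	cfgReq := client.CreateEndpointConfigRequest(&sagemaker.CreateEndpointConfigInput{
		EndpointConfigName: aws.String("example-endpoint-config"), // hypothetical
		ProductionVariants: []sagemaker.ProductionVariant{
			{
				VariantName:          aws.String("model-a"),
				ModelName:            aws.String("example-model-a"),
				InstanceType:         sagemaker.ProductionVariantInstanceType("ml.m5.large"),
				InitialInstanceCount: aws.Int64(1),
				InitialVariantWeight: aws.Float64(2),
			},
			{
				VariantName:          aws.String("model-b"),
				ModelName:            aws.String("example-model-b"),
				InstanceType:         sagemaker.ProductionVariantInstanceType("ml.m5.large"),
				InitialInstanceCount: aws.Int64(1),
				InitialVariantWeight: aws.Float64(1),
			},
		},
	})
	if _, err := cfgReq.Send(context.TODO()); err != nil {
		log.Fatalf("create endpoint config: %v", err)
	}

	// The endpoint name must be unique within the AWS Region in this account.
	epReq := client.CreateEndpointRequest(&sagemaker.CreateEndpointInput{
		EndpointName:       aws.String("example-endpoint"),
		EndpointConfigName: aws.String("example-endpoint-config"),
	})
	if _, err := epReq.Send(context.TODO()); err != nil {
		log.Fatalf("create endpoint: %v", err)
	}
	log.Println("endpoint creation started; poll DescribeEndpoint until the status is InService")
}
```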
", "CreateExperiment": "Creates an Amazon SageMaker experiment. An experiment is a collection of trials that are observed, compared and evaluated as a group. A trial is a set of steps, called trial components, that produce a machine learning model.
The goal of an experiment is to determine the components that produce the best model. Multiple trials are performed, each one isolating and measuring the impact of a change to one or more inputs, while keeping the remaining inputs constant.
When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK.
You can add tags to experiments, trials, trial components and then use the Search API to search for the tags.
To add a description to an experiment, specify the optional Description
parameter. To add a description later, or to change the description, call the UpdateExperiment API.
To get a list of all your experiments, call the ListExperiments API. To view an experiment's properties, call the DescribeExperiment API. To get a list of all the trials associated with an experiment, call the ListTrials API. To create a trial call the CreateTrial API.
", "CreateFlowDefinition": "Creates a flow definition.
", "CreateHumanTaskUi": "Defines the settings you will use for the human review workflow user interface. Reviewers will see a three-panel interface with an instruction area, the item to review, and an input area.
", @@ -23,7 +23,7 @@ "CreateNotebookInstance": "Creates an Amazon SageMaker notebook instance. A notebook instance is a machine learning (ML) compute instance running on a Jupyter notebook.
In a CreateNotebookInstance
request, specify the type of ML compute instance that you want to run. Amazon SageMaker launches the instance, installs common libraries that you can use to explore datasets for model training, and attaches an ML storage volume to the notebook instance.
Amazon SageMaker also provides a set of example notebooks. Each notebook demonstrates how to use Amazon SageMaker with a specific algorithm or with a machine learning framework.
After receiving the request, Amazon SageMaker does the following:
Creates a network interface in the Amazon SageMaker VPC.
(Option) If you specified SubnetId
, Amazon SageMaker creates a network interface in your own VPC, which is inferred from the subnet ID that you provide in the input. When creating this network interface, Amazon SageMaker attaches the security group that you specified in the request to the network interface that it creates in your VPC.
Launches an EC2 instance of the type specified in the request in the Amazon SageMaker VPC. If you specified SubnetId
of your VPC, Amazon SageMaker specifies both network interfaces when launching this instance. This enables inbound traffic from your own VPC to the notebook instance, assuming that the security groups allow it.
After creating the notebook instance, Amazon SageMaker returns its Amazon Resource Name (ARN). You can't change the name of a notebook instance after you create it.
After Amazon SageMaker creates the notebook instance, you can connect to the Jupyter server and work in Jupyter notebooks. For example, you can write code to explore a dataset that you can use for model training, train a model, host models by creating Amazon SageMaker endpoints, and validate hosted models.
For more information, see How It Works.
", "CreateNotebookInstanceLifecycleConfig": "Creates a lifecycle configuration that you can associate with a notebook instance. A lifecycle configuration is a collection of shell scripts that run when you create or start a notebook instance.
Each lifecycle configuration script has a limit of 16384 characters.
The value of the $PATH
environment variable that is available to both scripts is /sbin:bin:/usr/sbin:/usr/bin
.
View CloudWatch Logs for notebook instance lifecycle configurations in log group /aws/sagemaker/NotebookInstances
in log stream [notebook-instance-name]/[LifecycleConfigHook]
.
Lifecycle configuration scripts cannot run for longer than 5 minutes. If a script runs for longer than 5 minutes, it fails and the notebook instance is not created or started.
For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.
", "CreatePresignedDomainUrl": "Creates a URL for a specified UserProfile in a Domain. When accessed in a web browser, the user will be automatically signed in to Amazon SageMaker Amazon SageMaker Studio (Studio), and granted access to all of the Apps and files associated with that Amazon Elastic File System (EFS). This operation can only be called when AuthMode equals IAM.
", - "CreatePresignedNotebookInstanceUrl": "Returns a URL that you can use to connect to the Jupyter server from a notebook instance. In the Amazon SageMaker console, when you choose Open
next to a notebook instance, Amazon SageMaker opens a new tab showing the Jupyter server home page from the notebook instance. The console uses this API to get the URL and show the page.
IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the notebook instance. For example, you can restrict access to this API and to the URL that it returns to a list of IP addresses that you specify. Use the NotIpAddress
condition operator and the aws:SourceIP
condition context key to specify the list of IP addresses that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address.
The URL that you get from a call to is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the AWS console sign-in page.
Returns a URL that you can use to connect to the Jupyter server from a notebook instance. In the Amazon SageMaker console, when you choose Open
next to a notebook instance, Amazon SageMaker opens a new tab showing the Jupyter server home page from the notebook instance. The console uses this API to get the URL and show the page.
IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the notebook instance. For example, you can restrict access to this API and to the URL that it returns to a list of IP addresses that you specify. Use the NotIpAddress
condition operator and the aws:SourceIP
condition context key to specify the list of IP addresses that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address.
The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the AWS console sign-in page.
Creates a processing job.
", "CreateTrainingJob": "Starts a model training job. After training completes, Amazon SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a machine learning service other than Amazon SageMaker, provided that you know how to use them for inferences.
In the request body, you provide the following:
AlgorithmSpecification
- Identifies the training algorithm to use.
HyperParameters
- Specify these algorithm-specific parameters to enable the estimation of model parameters during training. Hyperparameters can be tuned to optimize this learning process. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.
InputDataConfig
- Describes the training dataset and the Amazon S3, EFS, or FSx location where it is stored.
OutputDataConfig
- Identifies the Amazon S3 bucket where you want Amazon SageMaker to save the results of model training.
ResourceConfig
- Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.
EnableManagedSpotTraining
- Optimize the cost of training machine learning models by up to 80% by using Amazon EC2 Spot instances. For more information, see Managed Spot Training.
RoleARN
- The Amazon Resource Number (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete model training.
StoppingCondition
- To help cap training costs, use MaxRuntimeInSeconds
to set a time limit for training. Use MaxWaitTimeInSeconds
to specify how long you are willing to wait for a managed spot training job to complete.
For more information about Amazon SageMaker, see How It Works.
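The request members listed above (AlgorithmSpecification, InputDataConfig, OutputDataConfig, ResourceConfig, StoppingCondition, RoleARN) map onto a minimal sketch with this repository's Go client (aws-sdk-go-v2 at this release). Treat it as an illustration only: the image URI, bucket, role ARN, and job name are placeholders, and enum values are written as string casts rather than generated constants, so the exact identifiers should be verified against the generated sagemaker package.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := sagemaker.New(cfg)

	req := client.CreateTrainingJobRequest(&sagemaker.CreateTrainingJobInput{
		TrainingJobName: aws.String("example-training-job"), // hypothetical
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/ExampleSageMakerRole"),
		// AlgorithmSpecification identifies the training algorithm (a placeholder
		// ECR image here) and the input mode.
		AlgorithmSpecification: &sagemaker.AlgorithmSpecification{
			TrainingImage:     aws.String("123456789012.dkr.ecr.us-west-2.amazonaws.com/example-algo:latest"),
			TrainingInputMode: sagemaker.TrainingInputMode("File"),
		},
		// InputDataConfig describes where the training dataset is stored.
		InputDataConfig: []sagemaker.Channel{{
			ChannelName: aws.String("train"),
			DataSource: &sagemaker.DataSource{
				S3DataSource: &sagemaker.S3DataSource{
					S3DataType: sagemaker.S3DataType("S3Prefix"),
					S3Uri:      aws.String("s3://example-bucket/train/"),
				},
			},
		}},
		// OutputDataConfig is where the resulting model artifacts are saved.
		OutputDataConfig: &sagemaker.OutputDataConfig{
			S3OutputPath: aws.String("s3://example-bucket/output/"),
		},
		// ResourceConfig picks the ML compute instances and storage volume.
		ResourceConfig: &sagemaker.ResourceConfig{
			InstanceType:   sagemaker.TrainingInstanceType("ml.m5.xlarge"),
			InstanceCount:  aws.Int64(1),
			VolumeSizeInGB: aws.Int64(50),
		},
		// StoppingCondition caps training cost with a time limit.
		StoppingCondition: &sagemaker.StoppingCondition{
			MaxRuntimeInSeconds: aws.Int64(3600),
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatalf("create training job: %v", err)
	}
	log.Println("training job started")
}
```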
", "CreateTransformJob": "Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.
To perform batch transformations, you create a transform job and use the data that you have readily available.
In the request body, you provide the following:
TransformJobName
- Identifies the transform job. The name must be unique within an AWS Region in an AWS account.
ModelName
- Identifies the model to use. ModelName
must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see CreateModel.
TransformInput
- Describes the dataset to be transformed and the Amazon S3 location where it is stored.
TransformOutput
- Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.
TransformResources
- Identifies the ML compute instances for the transform job.
For more information about how batch transformation works, see Batch Transform.
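For the transform request members listed above, the following hedged Go sketch (aws-sdk-go-v2 at this release) shows one way the pieces fit together, including the SplitType/BatchStrategy pairing described later in this model. Names and the instance type are placeholders, enum values are string casts, and the generated identifiers should be checked against the sagemaker package.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	client := sagemaker.New(cfg)

	req := client.CreateTransformJobRequest(&sagemaker.CreateTransformJobInput{
		TransformJobName: aws.String("example-transform-job"), // hypothetical
		ModelName:        aws.String("example-model"),         // must already exist in this Region and account
		// TransformInput: the dataset to transform. SplitType Line with
		// BatchStrategy MultiRecord packs as many records as fit in MaxPayloadInMB.
		TransformInput: &sagemaker.TransformInput{
			DataSource: &sagemaker.TransformDataSource{
				S3DataSource: &sagemaker.TransformS3DataSource{
					S3DataType: sagemaker.S3DataType("S3Prefix"),
					S3Uri:      aws.String("s3://example-bucket/batch-input/"),
				},
			},
			SplitType: sagemaker.SplitType("Line"),
		},
		BatchStrategy: sagemaker.BatchStrategy("MultiRecord"),
		// TransformOutput: where the inference results are written.
		TransformOutput: &sagemaker.TransformOutput{
			S3OutputPath: aws.String("s3://example-bucket/batch-output/"),
		},
		// TransformResources: the ML compute instances for the job.
		TransformResources: &sagemaker.TransformResources{
			InstanceType:  sagemaker.TransformInstanceType("ml.m5.xlarge"),
			InstanceCount: aws.Int64(1),
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatalf("create transform job: %v", err)
	}
	log.Println("transform job started")
}
```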
", @@ -39,7 +39,7 @@ "DeleteEndpointConfig": "Deletes an endpoint configuration. The DeleteEndpointConfig
API deletes only the specified configuration. It does not delete endpoints created using the configuration.
Deletes an Amazon SageMaker experiment. All trials associated with the experiment must be deleted first. Use the ListTrials API to get a list of the trials associated with the experiment.
", "DeleteFlowDefinition": "Deletes the specified flow definition.
", - "DeleteModel": "Deletes a model. The DeleteModel
API deletes only the model entry that was created in Amazon SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.
Deletes a model. The DeleteModel
API deletes only the model entry that was created in Amazon SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.
Deletes a model package.
A model package is used to create Amazon SageMaker models or list on AWS Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace to create models in Amazon SageMaker.
", "DeleteMonitoringSchedule": "Deletes a monitoring schedule. Also stops the schedule had not already been stopped. This does not delete the job execution history of the monitoring schedule.
", "DeleteNotebookInstance": " Deletes an Amazon SageMaker notebook instance. Before you can delete a notebook instance, you must call the StopNotebookInstance
API.
When you delete a notebook instance, you lose all of your data. Amazon SageMaker removes the ML compute instance, and deletes the ML storage volume and the network interface associated with the notebook instance.
Gets a list of labeling jobs.
", "ListLabelingJobsForWorkteam": "Gets a list of labeling jobs assigned to a specified work team.
", "ListModelPackages": "Lists the model packages that have been created.
", - "ListModels": "Lists models created with the CreateModel API.
", + "ListModels": "Lists models created with the CreateModel API.
", "ListMonitoringExecutions": "Returns list of all monitoring job executions.
", "ListMonitoringSchedules": "Returns list of all monitoring schedules.
", "ListNotebookInstanceLifecycleConfigs": "Lists notebook instance lifestyle configurations created with the CreateNotebookInstanceLifecycleConfig API.
", @@ -110,7 +110,7 @@ "ListUserProfiles": "Lists user profiles.
", "ListWorkteams": "Gets a list of work teams that you have defined in a region. The list may be empty if no work team satisfies the filter specified in the NameContains
parameter.
Renders the UI template so that you can preview the worker's experience.
", - "Search": "Finds Amazon SageMaker resources that match a search query. Matching resource objects are returned as a list of SearchResult
objects in the response. You can sort the search results by any resource property in ascending or descending order.
You can query against the following value types: numeric, text, Boolean, and timestamp.
", + "Search": "Finds Amazon SageMaker resources that match a search query. Matching resources are returned as a list of SearchRecord
objects in the response. You can sort the search results by any resource property in ascending or descending order.
You can query against the following value types: numeric, text, Boolean, and timestamp.
", "StartMonitoringSchedule": "Starts a previously stopped monitoring schedule.
New monitoring schedules are immediately started after creation.
Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume. After configuring the notebook instance, Amazon SageMaker sets the notebook instance status to InService
. A notebook instance's status must be InService
before you can connect to your Jupyter notebook.
A method for forcing the termination of a running job.
", @@ -124,8 +124,8 @@ "StopTransformJob": "Stops a transform job.
When Amazon SageMaker receives a StopTransformJob
request, the status of the job changes to Stopping
. After Amazon SageMaker stops the job, the status is set to Stopped
. When you stop a transform job before it is completed, Amazon SageMaker doesn't store the job's output in Amazon S3.
Updates the specified Git repository with the specified values.
", "UpdateDomain": "Updates a domain. Changes will impact all of the people in the domain.
", - "UpdateEndpoint": "Deploys the new EndpointConfig
specified in the request, switches to using newly created endpoint, and then deletes resources provisioned for the endpoint using the previous EndpointConfig
(there is no availability loss).
When Amazon SageMaker receives the request, it sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
You must not delete an EndpointConfig
in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint. When it receives the request, Amazon SageMaker sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
Deploys the new EndpointConfig
specified in the request, switches to using newly created endpoint, and then deletes resources provisioned for the endpoint using the previous EndpointConfig
(there is no availability loss).
When Amazon SageMaker receives the request, it sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
You must not delete an EndpointConfig
in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint. When it receives the request, Amazon SageMaker sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
Adds, updates, or removes the description of an experiment. Updates the display name of an experiment.
", "UpdateMonitoringSchedule": "Updates a previously created schedule.
", "UpdateNotebookInstance": "Updates a notebook instance. NotebookInstance updates include upgrading or downgrading the ML compute instance used for your notebook instance to accommodate changes in your workload requirements.
", @@ -191,7 +191,7 @@ } }, "AlgorithmSpecification": { - "base": "Specifies the training algorithm to use in a CreateTrainingJob request.
For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about using your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
", + "base": "Specifies the training algorithm to use in a CreateTrainingJob request.
For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about using your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
", "refs": { "CreateTrainingJobRequest$AlgorithmSpecification": "The registry path of the Docker image that contains the training algorithm and algorithm-specific metadata, including the input mode. For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about providing your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
", "DescribeTrainingJobResponse$AlgorithmSpecification": "Information about the algorithm used for training, and algorithm metadata.
", @@ -306,7 +306,8 @@ "base": "Configuration to run a processing job in a specified container image.
", "refs": { "CreateProcessingJobRequest$AppSpecification": "Configures the processing job to run a specified Docker container image.
", - "DescribeProcessingJobResponse$AppSpecification": "Configures the processing job to run a specified container image.
" + "DescribeProcessingJobResponse$AppSpecification": "Configures the processing job to run a specified container image.
", + "ProcessingJob$AppSpecification": null } }, "AppStatus": { @@ -426,7 +427,7 @@ "AutoMLInputDataConfig": { "base": null, "refs": { - "CreateAutoMLJobRequest$InputDataConfig": "Similar to InputDataConfig supported by Tuning. Format(s) supported: CSV.
", + "CreateAutoMLJobRequest$InputDataConfig": "Similar to InputDataConfig supported by Tuning. Format(s) supported: CSV. Minimum of 1000 rows.
", "DescribeAutoMLJobResponse$InputDataConfig": "Returns the job's input data config.
" } }, @@ -439,6 +440,7 @@ "DescribeProcessingJobResponse$AutoMLJobArn": "The ARN of an AutoML job associated with this processing job.
", "DescribeTrainingJobResponse$AutoMLJobArn": "", "DescribeTransformJobResponse$AutoMLJobArn": "", + "ProcessingJob$AutoMLJobArn": "The Amazon Resource Name (ARN) of the AutoML job associated with this processing job.
", "TrainingJob$AutoMLJobArn": "The Amazon Resource Name (ARN) of the job.
" } }, @@ -544,7 +546,7 @@ "AutoMLS3DataSource": { "base": "The Amazon S3 data source.
", "refs": { - "AutoMLDataSource$S3DataSource": "The Amazon S3 location of the data.
" + "AutoMLDataSource$S3DataSource": "The Amazon S3 location of the input data.
The input data must be in CSV format and contain at least 1000 rows.
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType
property of the DataProcessing object to Line
, RecordIO
, or TFRecord
.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy
to SingleRecord
and SplitType
to Line
.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB
limit, set BatchStrategy
to MultiRecord
and SplitType
to Line
.
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType
property to Line
, RecordIO
, or TFRecord
.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy
to SingleRecord
and SplitType
to Line
.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB
limit, set BatchStrategy
to MultiRecord
and SplitType
to Line
.
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set SplitType
to Line
, RecordIO
, or TFRecord
.
A string that determines the number of records included in a single mini-batch.
SingleRecord
means only one record is used per mini-batch. MultiRecord
means a mini-batch is set to contain as many records that can fit within the MaxPayloadInMB
limit.
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
", "TrainingJob$EnableNetworkIsolation": "If the TrainingJob
was created with network isolation, the value is set to true
. If network isolation is enabled, nodes can't communicate beyond the VPC they run in.
To encrypt all communications between ML compute instances in distributed training, choose True
. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.
When true, enables managed spot training using Amazon EC2 Spot instances to run training jobs instead of on-demand instances. For more information, see model-managed-spot-training.
", + "TrainingJob$EnableManagedSpotTraining": "When true, enables managed spot training using Amazon EC2 Spot instances to run training jobs instead of on-demand instances. For more information, see Managed Spot Training.
", "TrainingSpecification$SupportsDistributedTraining": "Indicates whether the algorithm supports distributed training. If set to false, buyers can't request more than one instance during training.
", "UpdateEndpointInput$RetainAllVariantProperties": "When updating endpoint resources, enables or disables the retention of variant properties, such as the instance count or the variant weight. To retain the variant properties of an endpoint when updating it, set RetainAllVariantProperties
to true
. To use the variant properties specified in a new EndpointConfig
call when updating an endpoint, set RetainAllVariantProperties
to false
.
The name of the endpoint configuration. You specify this name in a CreateEndpoint request.
", - "CreateEndpointInput$EndpointConfigName": "The name of an endpoint configuration. For more information, see CreateEndpointConfig.
", + "CreateEndpointConfigInput$EndpointConfigName": "The name of the endpoint configuration. You specify this name in a CreateEndpoint request.
", + "CreateEndpointInput$EndpointConfigName": "The name of an endpoint configuration. For more information, see CreateEndpointConfig.
", "DeleteEndpointConfigInput$EndpointConfigName": "The name of the endpoint configuration that you want to delete.
", "DescribeEndpointConfigInput$EndpointConfigName": "The name of the endpoint configuration.
", "DescribeEndpointConfigOutput$EndpointConfigName": "Name of the Amazon SageMaker endpoint configuration.
", @@ -2205,13 +2207,14 @@ "base": null, "refs": { "DescribeProcessingJobResponse$ExitMessage": "An optional string, up to one KB in size, that contains metadata from the processing container when the processing job exits.
", + "ProcessingJob$ExitMessage": "A string, up to one KB in size, that contains metadata from the processing container when the processing job exits.
", "ProcessingJobSummary$ExitMessage": "An optional string, up to one KB in size, that contains metadata from the processing container when the processing job exits.
" } }, "Experiment": { - "base": "A summary of the properties of an experiment as returned by the Search API.
", + "base": "The properties of an experiment as returned by the Search API.
", "refs": { - "SearchRecord$Experiment": "A summary of the properties of an experiment.
" + "SearchRecord$Experiment": "The properties of an experiment.
" } }, "ExperimentArn": { @@ -2234,6 +2237,7 @@ "DescribeProcessingJobResponse$ExperimentConfig": "The configuration information used to create an experiment.
", "DescribeTrainingJobResponse$ExperimentConfig": null, "DescribeTransformJobResponse$ExperimentConfig": null, + "ProcessingJob$ExperimentConfig": null, "TrainingJob$ExperimentConfig": null } }, @@ -2351,6 +2355,7 @@ "HyperParameterTrainingJobSummary$FailureReason": "The reason that the training job failed.
", "LabelingJobSummary$FailureReason": "If the LabelingJobStatus
field is Failed
, this field contains a description of the error.
Contains the reason a monitoring job failed, if it failed.
", + "ProcessingJob$FailureReason": "A string, up to one KB in size, that contains the reason a processing job failed, if it failed.
", "ProcessingJobSummary$FailureReason": "A string, up to one KB in size, that contains the reason a processing job failed, if it failed.
", "ResourceInUse$Message": null, "ResourceLimitExceeded$Message": null, @@ -2384,7 +2389,7 @@ } }, "Filter": { - "base": "A conditional statement for a search expression that includes a resource property, a Boolean operator, and a value.
If you don't specify an Operator
and a Value
, the filter searches for only the specified property. For example, defining a Filter
for the FailureReason
for the TrainingJob
Resource
searches for training job objects that have a value in the FailureReason
field.
If you specify a Value
, but not an Operator
, Amazon SageMaker uses the equals operator as the default.
In search, there are several property types:
To define a metric filter, enter a value using the form \"Metrics.<name>\"
, where <name>
is a metric name. For example, the following filter searches for training jobs with an \"accuracy\"
metric greater than \"0.9\"
:
{
\"Name\": \"Metrics.accuracy\",
\"Operator\": \"GREATER_THAN\",
\"Value\": \"0.9\"
}
To define a hyperparameter filter, enter a value with the form \"HyperParameters.<name>\"
. Decimal hyperparameter values are treated as a decimal in a comparison if the specified Value
is also a decimal value. If the specified Value
is an integer, the decimal hyperparameter values are treated as integers. For example, the following filter is satisfied by training jobs with a \"learning_rate\"
hyperparameter that is less than \"0.5\"
:
{
\"Name\": \"HyperParameters.learning_rate\",
\"Operator\": \"LESS_THAN\",
\"Value\": \"0.5\"
}
To define a tag filter, enter a value with the form \"Tags.<key>\"
.
A conditional statement for a search expression that includes a resource property, a Boolean operator, and a value. Resources that match the statement are returned in the results from the Search API.
If you specify a Value
, but not an Operator
, Amazon SageMaker uses the equals operator.
In search, there are several property types:
To define a metric filter, enter a value using the form \"Metrics.<name>\"
, where <name>
is a metric name. For example, the following filter searches for training jobs with an \"accuracy\"
metric greater than \"0.9\"
:
{
\"Name\": \"Metrics.accuracy\",
\"Operator\": \"GreaterThan\",
\"Value\": \"0.9\"
}
To define a hyperparameter filter, enter a value with the form \"HyperParameters.<name>\"
. Decimal hyperparameter values are treated as a decimal in a comparison if the specified Value
is also a decimal value. If the specified Value
is an integer, the decimal hyperparameter values are treated as integers. For example, the following filter is satisfied by training jobs with a \"learning_rate\"
hyperparameter that is less than \"0.5\"
:
{
\"Name\": \"HyperParameters.learning_rate\",
\"Operator\": \"LessThan\",
\"Value\": \"0.5\"
}
To define a tag filter, enter a value with the form Tags.<key>
.
A value used with Resource
and Operator
to determine if objects satisfy the filter's condition. For numerical properties, Value
must be an integer or floating-point decimal. For timestamp properties, Value
must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS
.
A value used with Name
and Operator
to determine which resources satisfy the filter's condition. For numerical properties, Value
must be an integer or floating-point decimal. For timestamp properties, Value
must be an ISO 8601 date-time string of the following format: YYYY-mm-dd'T'HH:MM:SS
.
JSON expressing use-case specific conditions declaratively. If any condition is matched, atomic tasks are created against the configured work team. The set of conditions is different for Rekognition and Textract.
" + "HumanLoopActivationConditionsConfig$HumanLoopActivationConditions": "JSON expressing use-case specific conditions declaratively. If any condition is matched, atomic tasks are created against the configured work team. The set of conditions is different for Rekognition and Textract. For more information about how to structure the JSON, see JSON Schema for Human Loop Activation Conditions in Amazon Augmented AI in the Amazon SageMaker Developer Guide.
" } }, "HumanLoopActivationConditionsConfig": { - "base": "Defines under what conditions SageMaker creates a human loop. Used within .
", + "base": "Defines under what conditions SageMaker creates a human loop. Used within . See for the required format of activation conditions.
", "refs": { "HumanLoopActivationConfig$HumanLoopActivationConditionsConfig": "Container structure for defining under what conditions SageMaker creates a human loop.
" } @@ -2591,7 +2596,8 @@ "HumanLoopRequestSource": { "base": "Container for configuring the source of human task requests.
", "refs": { - "HumanLoopActivationConfig$HumanLoopRequestSource": "Container for configuring the source of human task requests.
" + "CreateFlowDefinitionRequest$HumanLoopRequestSource": "Container for configuring the source of human task requests. Use to specify if Amazon Rekognition or Amazon Textract is used as an integration source.
", + "DescribeFlowDefinitionResponse$HumanLoopRequestSource": "Container for configuring the source of human task requests. Used to specify if Amazon Rekognition or Amazon Textract is used as an integration source.
" } }, "HumanTaskConfig": { @@ -2705,7 +2711,7 @@ "HyperParameterTuningJobConfig": { "base": "Configures a hyperparameter tuning job.
", "refs": { - "CreateHyperParameterTuningJobRequest$HyperParameterTuningJobConfig": "The HyperParameterTuningJobConfig object that describes the tuning job, including the search strategy, the objective metric used to evaluate training jobs, ranges of parameters to search, and resource limits for the tuning job. For more information, see automatic-model-tuning
", + "CreateHyperParameterTuningJobRequest$HyperParameterTuningJobConfig": "The HyperParameterTuningJobConfig object that describes the tuning job, including the search strategy, the objective metric used to evaluate training jobs, ranges of parameters to search, and resource limits for the tuning job. For more information, see How Hyperparameter Tuning Works.
", "DescribeHyperParameterTuningJobResponse$HyperParameterTuningJobConfig": "The HyperParameterTuningJobConfig object that specifies the configuration of the tuning job.
" } }, @@ -3117,8 +3123,8 @@ "LambdaFunctionArn": { "base": null, "refs": { - "AnnotationConsolidationConfig$AnnotationConsolidationLambdaArn": "The Amazon Resource Name (ARN) of a Lambda function implements the logic for annotation consolidation.
For the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions:
Bounding box - Finds the most similar boxes from different workers based on the Jaccard index of the boxes.
arn:aws:lambda:us-east-1:432418664414:function:ACS-BoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-BoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-BoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-BoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-BoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-BoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-BoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-BoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-BoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-BoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-BoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-BoundingBox
Image classification - Uses a variant of the Expectation Maximization approach to estimate the true class of an image based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClass
arn:aws:lambda:us-east-2:266458841044:function:ACS-ImageMultiClass
arn:aws:lambda:us-west-2:081040173940:function:ACS-ImageMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:ACS-ImageMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-ImageMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-ImageMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:ACS-ImageMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:ACS-ImageMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-ImageMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:ACS-ImageMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-ImageMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:ACS-ImageMultiClass
Semantic segmentation - Treats each pixel in an image as a multi-class classification and treats pixel annotations from workers as \"votes\" for the correct label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-SemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-SemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-SemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-SemanticSegmentation
Text classification - Uses a variant of the Expectation Maximization approach to estimate the true class of text based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-TextMultiClass
arn:aws:lambda:us-east-2:266458841044:function:ACS-TextMultiClass
arn:aws:lambda:us-west-2:081040173940:function:ACS-TextMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:ACS-TextMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-TextMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-TextMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:ACS-TextMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:ACS-TextMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-TextMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:ACS-TextMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-TextMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:ACS-TextMultiClass
Named entity recognition - Groups similar selections and calculates aggregate boundaries, resolving to most-assigned label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-NamedEntityRecognition
arn:aws:lambda:us-east-2:266458841044:function:ACS-NamedEntityRecognition
arn:aws:lambda:us-west-2:081040173940:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-west-1:568282634449:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-south-1:565803892007:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-central-1:203001061592:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-west-2:487402164563:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-NamedEntityRecognition
arn:aws:lambda:ca-central-1:918755190332:function:ACS-NamedEntityRecognition
Bounding box verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgement for bounding box labels based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-VerificationBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-VerificationBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-VerificationBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-VerificationBoundingBox
Semantic segmentation verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for semantic segmentation labels based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-VerificationSemanticSegmentation
Bounding box adjustment - Finds the most similar boxes from different workers based on the Jaccard index of the adjusted annotations.
arn:aws:lambda:us-east-1:432418664414:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-AdjustmentBoundingBox
Semantic segmentation adjustment - Treats each pixel in an image as a multi-class classification and treats pixel adjusted annotations from workers as \"votes\" for the correct label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-AdjustmentSemanticSegmentation
For more information, see Annotation Consolidation.
", - "HumanTaskConfig$PreHumanTaskLambdaArn": "The Amazon Resource Name (ARN) of a Lambda function that is run before a data object is sent to a human worker. Use this function to provide input to a custom labeling job.
For the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions:
US East (Northern Virginia) (us-east-1):
arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass
arn:aws:lambda:us-east-1:432418664414:function:PRE-SemanticSegmentation
arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass
arn:aws:lambda:us-east-1:432418664414:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentSemanticSegmentation
US East (Ohio) (us-east-2):
arn:aws:lambda:us-east-2:266458841044:function:PRE-BoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClass
arn:aws:lambda:us-east-2:266458841044:function:PRE-SemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass
arn:aws:lambda:us-east-2:266458841044:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentSemanticSegmentation
US West (Oregon) (us-west-2):
arn:aws:lambda:us-west-2:081040173940:function:PRE-BoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClass
arn:aws:lambda:us-west-2:081040173940:function:PRE-SemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClass
arn:aws:lambda:us-west-2:081040173940:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentSemanticSegmentation
Canada (Central) (ca-central-1):
arn:aws:lambda:ca-central-1:918755190332:function:PRE-BoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:PRE-SemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:PRE-NamedEntityRecognition
arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentSemanticSegmentation
EU (Ireland) (eu-west-1):
arn:aws:lambda:eu-west-1:568282634449:function:PRE-BoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentSemanticSegmentation
EU (London) (eu-west-2):
arn:aws:lambda:eu-west-2:487402164563:function:PRE-BoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentSemanticSegmentation
EU Frankfurt (eu-central-1):
arn:aws:lambda:eu-central-1:203001061592:function:PRE-BoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Tokyo) (ap-northeast-1):
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-BoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Seoul) (ap-northeast-2):
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-BoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Mumbai) (ap-south-1):
arn:aws:lambda:ap-south-1:565803892007:function:PRE-BoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Singapore) (ap-southeast-1):
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-BoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Sydney) (ap-southeast-2):
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-BoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentSemanticSegmentation
The Amazon Resource Name (ARN) of a Lambda function that implements the logic for annotation consolidation.
For the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions:
Bounding box - Finds the most similar boxes from different workers based on the Jaccard index of the boxes.
arn:aws:lambda:us-east-1:432418664414:function:ACS-BoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-BoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-BoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-BoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-BoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-BoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-BoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-BoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-BoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-BoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-BoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-BoundingBox
Image classification - Uses a variant of the Expectation Maximization approach to estimate the true class of an image based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClass
arn:aws:lambda:us-east-2:266458841044:function:ACS-ImageMultiClass
arn:aws:lambda:us-west-2:081040173940:function:ACS-ImageMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:ACS-ImageMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-ImageMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-ImageMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:ACS-ImageMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:ACS-ImageMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-ImageMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:ACS-ImageMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-ImageMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:ACS-ImageMultiClass
Multi-label image classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of an image based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:us-east-2:266458841044:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:us-west-2:081040173940:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:eu-west-1:568282634449:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ap-south-1:565803892007:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:eu-central-1:203001061592:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:eu-west-2:487402164563:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-ImageMultiClassMultiLabel
arn:aws:lambda:ca-central-1:918755190332:function:ACS-ImageMultiClassMultiLabel
Semantic segmentation - Treats each pixel in an image as a multi-class classification and treats pixel annotations from workers as \"votes\" for the correct label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-SemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-SemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-SemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-SemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-SemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-SemanticSegmentation
Text classification - Uses a variant of the Expectation Maximization approach to estimate the true class of text based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-TextMultiClass
arn:aws:lambda:us-east-2:266458841044:function:ACS-TextMultiClass
arn:aws:lambda:us-west-2:081040173940:function:ACS-TextMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:ACS-TextMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-TextMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-TextMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:ACS-TextMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:ACS-TextMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-TextMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:ACS-TextMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-TextMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:ACS-TextMultiClass
Multi-label text classification - Uses a variant of the Expectation Maximization approach to estimate the true classes of text based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:us-east-2:266458841044:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:us-west-2:081040173940:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:eu-west-1:568282634449:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ap-south-1:565803892007:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:eu-central-1:203001061592:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:eu-west-2:487402164563:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-TextMultiClassMultiLabel
arn:aws:lambda:ca-central-1:918755190332:function:ACS-TextMultiClassMultiLabel
Named entity recognition - Groups similar selections and calculates aggregate boundaries, resolving to the most-assigned label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-NamedEntityRecognition
arn:aws:lambda:us-east-2:266458841044:function:ACS-NamedEntityRecognition
arn:aws:lambda:us-west-2:081040173940:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-west-1:568282634449:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-south-1:565803892007:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-central-1:203001061592:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-NamedEntityRecognition
arn:aws:lambda:eu-west-2:487402164563:function:ACS-NamedEntityRecognition
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-NamedEntityRecognition
arn:aws:lambda:ca-central-1:918755190332:function:ACS-NamedEntityRecognition
Bounding box verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgement for bounding box labels based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-VerificationBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-VerificationBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-VerificationBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-VerificationBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-VerificationBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-VerificationBoundingBox
Semantic segmentation verification - Uses a variant of the Expectation Maximization approach to estimate the true class of verification judgment for semantic segmentation labels based on annotations from individual workers.
arn:aws:lambda:us-east-1:432418664414:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-VerificationSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-VerificationSemanticSegmentation
Bounding box adjustment - Finds the most similar boxes from different workers based on the Jaccard index of the adjusted annotations.
arn:aws:lambda:us-east-1:432418664414:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-AdjustmentBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:ACS-AdjustmentBoundingBox
Semantic segmentation adjustment - Treats each pixel in an image as a multi-class classification and treats pixel adjusted annotations from workers as \"votes\" for the correct label.
arn:aws:lambda:us-east-1:432418664414:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:ACS-AdjustmentSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:ACS-AdjustmentSemanticSegmentation
For more information, see Annotation Consolidation.
", + "HumanTaskConfig$PreHumanTaskLambdaArn": "The Amazon Resource Name (ARN) of a Lambda function that is run before a data object is sent to a human worker. Use this function to provide input to a custom labeling job.
For the built-in bounding box, image classification, semantic segmentation, and text classification task types, Amazon SageMaker Ground Truth provides the following Lambda functions:
US East (Northern Virginia) (us-east-1):
arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass
arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:us-east-1:432418664414:function:PRE-SemanticSegmentation
arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClass
arn:aws:lambda:us-east-1:432418664414:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:us-east-1:432418664414:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-east-1:432418664414:function:PRE-AdjustmentSemanticSegmentation
US East (Ohio) (us-east-2):
arn:aws:lambda:us-east-2:266458841044:function:PRE-BoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClass
arn:aws:lambda:us-east-2:266458841044:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:us-east-2:266458841044:function:PRE-SemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass
arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:us-east-2:266458841044:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-east-2:266458841044:function:PRE-AdjustmentSemanticSegmentation
US West (Oregon) (us-west-2):
arn:aws:lambda:us-west-2:081040173940:function:PRE-BoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClass
arn:aws:lambda:us-west-2:081040173940:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:us-west-2:081040173940:function:PRE-SemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClass
arn:aws:lambda:us-west-2:081040173940:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:us-west-2:081040173940:function:PRE-NamedEntityRecognition
arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:us-west-2:081040173940:function:PRE-AdjustmentSemanticSegmentation
Canada (Central) (ca-central-1):
arn:aws:lambda:ca-central-1:918755190332:function:PRE-BoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ca-central-1:918755190332:function:PRE-SemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClass
arn:aws:lambda:ca-central-1:918755190332:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ca-central-1:918755190332:function:PRE-NamedEntityRecognition
arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ca-central-1:918755190332:function:PRE-AdjustmentSemanticSegmentation
EU (Ireland) (eu-west-1):
arn:aws:lambda:eu-west-1:568282634449:function:PRE-BoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:eu-west-1:568282634449:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClass
arn:aws:lambda:eu-west-1:568282634449:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:eu-west-1:568282634449:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-west-1:568282634449:function:PRE-AdjustmentSemanticSegmentation
EU (London) (eu-west-2):
arn:aws:lambda:eu-west-2:487402164563:function:PRE-BoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:eu-west-2:487402164563:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClass
arn:aws:lambda:eu-west-2:487402164563:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:eu-west-2:487402164563:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-west-2:487402164563:function:PRE-AdjustmentSemanticSegmentation
EU Frankfurt (eu-central-1):
arn:aws:lambda:eu-central-1:203001061592:function:PRE-BoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:eu-central-1:203001061592:function:PRE-SemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClass
arn:aws:lambda:eu-central-1:203001061592:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:eu-central-1:203001061592:function:PRE-NamedEntityRecognition
arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:eu-central-1:203001061592:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Tokyo) (ap-northeast-1):
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-BoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClass
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-1:477331159723:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Seoul) (ap-northeast-2):
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-BoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClass
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-northeast-2:845288260483:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Mumbai) (ap-south-1):
arn:aws:lambda:ap-south-1:565803892007:function:PRE-BoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ap-south-1:565803892007:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClass
arn:aws:lambda:ap-south-1:565803892007:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ap-south-1:565803892007:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-south-1:565803892007:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Singapore) (ap-southeast-1):
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-BoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClass
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-1:377565633583:function:PRE-AdjustmentSemanticSegmentation
Asia Pacific (Sydney) (ap-southeast-2):
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-BoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-ImageMultiClassMultiLabel
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-SemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClass
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-TextMultiClassMultiLabel
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-NamedEntityRecognition
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-VerificationSemanticSegmentation
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentBoundingBox
arn:aws:lambda:ap-southeast-2:454466003867:function:PRE-AdjustmentSemanticSegmentation
The Amazon Resource Name (ARN) of a Lambda function. The function is run before each data object is sent to a worker.
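As a rough illustration of how the pre-annotation and annotation-consolidation ARNs above pair up in a labeling job, the following Go sketch wires the us-east-1 built-in bounding box Lambdas into a `HumanTaskConfig`. It is a hedged, partial sketch: the field names follow the API model in this file, the v0.x `service/sagemaker` types and `aws` helpers are assumed, and the many other required `CreateLabelingJob` fields (job name, role, input/output config, UI template, workteam) are omitted.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	// Built-in bounding box task type in us-east-1: the PRE- function prepares each
	// data object for workers, the matching ACS- function consolidates their answers.
	humanTask := &sagemaker.HumanTaskConfig{
		PreHumanTaskLambdaArn: aws.String("arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox"),
		AnnotationConsolidationConfig: &sagemaker.AnnotationConsolidationConfig{
			AnnotationConsolidationLambdaArn: aws.String("arn:aws:lambda:us-east-1:432418664414:function:ACS-BoundingBox"),
		},
		// Example value: three workers label each object before consolidation.
		NumberOfHumanWorkersPerDataObject: aws.Int64(3),
	}

	// A real CreateLabelingJobInput also needs LabelingJobName, LabelAttributeName,
	// RoleArn, InputConfig, OutputConfig, and the task UI settings.
	fmt.Println("pre-annotation lambda:", *humanTask.PreHumanTaskLambdaArn)
}
```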
", "LabelingJobSummary$AnnotationConsolidationLambdaArn": "The Amazon Resource Name (ARN) of the Lambda function used to consolidate the annotations from individual workers into a label for a data object. For more information, see Annotation Consolidation.
" } @@ -3577,7 +3583,7 @@ "ListTrialsRequest$MaxResults": "The maximum number of trials to return in the response. The default value is 10.
", "ListUserProfilesRequest$MaxResults": "Returns a list up to a specified limit.
", "ListWorkteamsRequest$MaxResults": "The maximum number of work teams to return in each page of the response.
", - "SearchRequest$MaxResults": "The maximum number of results to return in a SearchResponse
.
The maximum number of results to return.
" } }, "MaxRuntimeInSeconds": { @@ -3921,6 +3927,7 @@ "DescribeMonitoringScheduleResponse$MonitoringScheduleArn": "The Amazon Resource Name (ARN) of the monitoring schedule.
", "DescribeProcessingJobResponse$MonitoringScheduleArn": "The ARN of a monitoring schedule for an endpoint associated with this processing job.
", "MonitoringScheduleSummary$MonitoringScheduleArn": "The Amazon Resource Name (ARN) of the monitoring schedule.
", + "ProcessingJob$MonitoringScheduleArn": "The ARN of a monitoring schedule for an endpoint associated with this processing job.
", "UpdateMonitoringScheduleResponse$MonitoringScheduleArn": "The Amazon Resource Name (ARN) of the monitoring schedule.
" } }, @@ -3991,7 +3998,7 @@ } }, "NestedFilters": { - "base": "Defines a list of NestedFilters
objects. To satisfy the conditions specified in the NestedFilters
call, a resource must satisfy the conditions of all of the filters.
For example, you could define a NestedFilters
using the training job's InputDataConfig
property to filter on Channel
objects.
A NestedFilters
object contains multiple filters. For example, to find all training jobs whose name contains train
and that have cat/data
in their S3Uri
(specified in InputDataConfig
), you need to create a NestedFilters
object that specifies the InputDataConfig
property with the following Filter
objects:
'{Name:\"InputDataConfig.ChannelName\", \"Operator\":\"EQUALS\", \"Value\":\"train\"}',
'{Name:\"InputDataConfig.DataSource.S3DataSource.S3Uri\", \"Operator\":\"CONTAINS\", \"Value\":\"cat/data\"}'
A list of nested Filter objects. A resource must satisfy the conditions of all filters to be included in the results returned from the Search API.
For example, to filter on a training job's InputDataConfig
property with a specific channel name and S3Uri
prefix, define the following filters:
'{Name:\"InputDataConfig.ChannelName\", \"Operator\":\"Equals\", \"Value\":\"train\"}',
'{Name:\"InputDataConfig.DataSource.S3DataSource.S3Uri\", \"Operator\":\"Contains\", \"Value\":\"mybucket/catdata\"}'
Networking options for a processing job.
", "DescribeProcessingJobResponse$NetworkConfig": "Networking options for a processing job.
", - "MonitoringJobDefinition$NetworkConfig": "Specifies networking options for an monitoring job.
" + "MonitoringJobDefinition$NetworkConfig": "Specifies networking options for an monitoring job.
", + "ProcessingJob$NetworkConfig": null } }, "NetworkInterfaceId": { @@ -4075,7 +4083,7 @@ "ListUserProfilesResponse$NextToken": "If the previous response was truncated, you will receive this token. Use it in your next request to receive the next set of results.
", "ListWorkteamsRequest$NextToken": "If the result of the previous ListWorkteams
request was truncated, the response includes a NextToken
. To retrieve the next set of labeling jobs, use the token in the next request.
If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of work teams, use it in the subsequent request.
", - "SearchRequest$NextToken": "If more than MaxResults
resource objects match the specified SearchExpression
, the SearchResponse
includes a NextToken
. The NextToken
can be passed to the next SearchRequest
to continue retrieving results for the specified SearchExpression
and Sort
parameters.
If more than MaxResults
resources match the specified SearchExpression
, the response includes a NextToken
. The NextToken
can be passed to the next SearchRequest
to continue retrieving results.
If the result of the previous Search
request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request.
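The NextToken contract described here maps onto a small pagination loop. Below is a hedged sketch that reuses the assumed v0.x client, types, and imports from the previous example; the `*sagemaker.Client` type name and a MaxResults of 50 are assumptions, not values taken from this model.

```go
// searchAll pages through Search results until the service stops returning a NextToken.
func searchAll(ctx context.Context, client *sagemaker.Client, expr *sagemaker.SearchExpression) ([]sagemaker.SearchRecord, error) {
	var all []sagemaker.SearchRecord
	var token *string
	for {
		req := client.SearchRequest(&sagemaker.SearchInput{
			Resource:         sagemaker.ResourceType("TrainingJob"),
			SearchExpression: expr,
			MaxResults:       aws.Int64(50),
			NextToken:        token, // nil on the first call
		})
		resp, err := req.Send(ctx)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Results...)
		if resp.NextToken == nil {
			return all, nil // no more pages
		}
		token = resp.NextToken
	}
}
```

If this release also generates a paginator helper for Search, that helper can replace the manual loop above.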
A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:
The specified resource in Name
equals the specified Value
.
The specified resource in Name
does not equal the specified Value
.
The specified resource in Name
is greater than the specified Value
. Not supported for text-based properties.
The specified resource in Name
is greater than or equal to the specified Value
. Not supported for text-based properties.
The specified resource in Name
is less than the specified Value
. Not supported for text-based properties.
The specified resource in Name
is less than or equal to the specified Value
. Not supported for text-based properties.
Only supported for text-based properties. The word-list of the property contains the specified Value
. A SearchExpression
can include only one Contains
operator.
If you have specified a filter Value
, the default is Equals
.
A Boolean binary operator that is used to evaluate the filter. The operator field contains one of the following values:
The value of Name
equals Value
.
The value of Name
doesn't equal Value
.
The value of Name
is greater than Value
. Not supported for text properties.
The value of Name
is greater than or equal to Value
. Not supported for text properties.
The value of Name
is less than Value
. Not supported for text properties.
The value of Name
is less than or equal to Value
. Not supported for text properties.
The value of Name
contains the string Value
. A SearchExpression
can include only one Contains
operator. Only supported for text properties.
The Name
property exists.
The Name
property does not exist.
The value of Name
is one of the comma delimited strings in Value
. Only supported for text properties.
Sets the environment variables in the Docker container.
", - "DescribeProcessingJobResponse$Environment": "The environment variables set in the Docker container.
" + "DescribeProcessingJobResponse$Environment": "The environment variables set in the Docker container.
", + "ProcessingJob$Environment": "Sets the environment variables in the Docker container.
" } }, "ProcessingEnvironmentValue": { @@ -4491,7 +4500,8 @@ "base": null, "refs": { "CreateProcessingJobRequest$ProcessingInputs": "For each input, data is downloaded from S3 into the processing container before the processing job begins running if \"S3InputMode\" is set to File
.
The inputs for a processing job.
" + "DescribeProcessingJobResponse$ProcessingInputs": "The inputs for a processing job.
", + "ProcessingJob$ProcessingInputs": "For each input, data is downloaded from S3 into the processing container before the processing job begins running if \"S3InputMode\" is set to File
.
The ML compute instance type for the processing job.
" } }, + "ProcessingJob": { + "base": "An Amazon SageMaker processing job that is used to analyze data and evaluate models. For more information, see Process Data and Evaluate Models.
", + "refs": { + "TrialComponentSourceDetail$ProcessingJob": "Information about a processing job that's the source of a trial component.
" + } + }, "ProcessingJobArn": { "base": null, "refs": { @@ -4516,6 +4532,7 @@ "DebugRuleEvaluationStatus$RuleEvaluationJobArn": "The Amazon Resource Name (ARN) of the rule evaluation job.
", "DescribeProcessingJobResponse$ProcessingJobArn": "The Amazon Resource Name (ARN) of the processing job.
", "MonitoringExecutionSummary$ProcessingJobArn": "The Amazon Resource Name (ARN) of the monitoring job.
", + "ProcessingJob$ProcessingJobArn": "The ARN of the processing job.
", "ProcessingJobSummary$ProcessingJobArn": "The Amazon Resource Name (ARN) of the processing job..
" } }, @@ -4525,6 +4542,7 @@ "CreateProcessingJobRequest$ProcessingJobName": "The name of the processing job. The name must be unique within an AWS Region in the AWS account.
", "DescribeProcessingJobRequest$ProcessingJobName": "The name of the processing job. The name must be unique within an AWS Region in the AWS account.
", "DescribeProcessingJobResponse$ProcessingJobName": "The name of the processing job. The name must be unique within an AWS Region in the AWS account.
", + "ProcessingJob$ProcessingJobName": "The name of the processing job.
", "ProcessingJobSummary$ProcessingJobName": "The name of the processing job.
", "StopProcessingJobRequest$ProcessingJobName": "The name of the processing job to stop.
" } @@ -4534,6 +4552,7 @@ "refs": { "DescribeProcessingJobResponse$ProcessingJobStatus": "Provides the status of a processing job.
", "ListProcessingJobsRequest$StatusEquals": "A filter that retrieves only processing jobs with a specific status.
", + "ProcessingJob$ProcessingJobStatus": "The status of the processing job.
", "ProcessingJobSummary$ProcessingJobStatus": "The status of the processing job.
" } }, @@ -4574,7 +4593,8 @@ "base": "The output configuration for the processing job.
", "refs": { "CreateProcessingJobRequest$ProcessingOutputConfig": "Output configuration for the processing job.
", - "DescribeProcessingJobResponse$ProcessingOutputConfig": "Output configuration for the processing job.
" + "DescribeProcessingJobResponse$ProcessingOutputConfig": "Output configuration for the processing job.
", + "ProcessingJob$ProcessingOutputConfig": null } }, "ProcessingOutputs": { @@ -4587,13 +4607,14 @@ "base": "Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
", "refs": { "CreateProcessingJobRequest$ProcessingResources": "Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
", - "DescribeProcessingJobResponse$ProcessingResources": "Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
" + "DescribeProcessingJobResponse$ProcessingResources": "Identifies the resources, ML compute instances, and ML storage volumes to deploy for a processing job. In distributed training, you specify more than one instance.
", + "ProcessingJob$ProcessingResources": null } }, "ProcessingS3CompressionType": { "base": null, "refs": { - "ProcessingS3Input$S3CompressionType": "Whether to use Gzip
compresion for Amazon S3 storage.
Whether to use Gzip
compression for Amazon S3 storage.
Whether the Pipe
or File
is used as the input mode for transfering data for the monitoring job. Pipe
mode is recommended for large datasets. File
mode is useful for small files that fit in memory. Defaults to File
.
Wether to use File
or Pipe
input mode. In File
mode, Amazon SageMaker copies the data from the input source onto the local Amazon Elastic Block Store (Amazon EBS) volumes before starting your training algorithm. This is the most commonly used input mode. In Pipe
mode, Amazon SageMaker streams input data from the source directly to your algorithm without using the EBS volume.
Whether to use File
or Pipe
input mode. In File
mode, Amazon SageMaker copies the data from the input source onto the local Amazon Elastic Block Store (Amazon EBS) volumes before starting your training algorithm. This is the most commonly used input mode. In Pipe
mode, Amazon SageMaker streams input data from the source directly to your algorithm without using the EBS volume.
Specifies a time limit for how long the processing job is allowed to run.
", "refs": { "CreateProcessingJobRequest$StoppingCondition": "The time limit for how long the processing job is allowed to run.
", - "DescribeProcessingJobResponse$StoppingCondition": "The time limit for how long the processing job is allowed to run.
" + "DescribeProcessingJobResponse$StoppingCondition": "The time limit for how long the processing job is allowed to run.
", + "ProcessingJob$StoppingCondition": null } }, "ProcessingVolumeSizeInGB": { @@ -4826,7 +4848,7 @@ "ResourcePropertyName": { "base": null, "refs": { - "Filter$Name": "A property name. For example, TrainingJobName
. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource.
A resource property name. For example, TrainingJobName
. For valid property names, see SearchRecord. You must specify a valid property for the resource.
The name of the property to use in the nested filters. The value must match a listed property name, such as InputDataConfig
.
A suggested property name based on what you entered in the search textbox in the Amazon SageMaker console.
", "SearchRequest$SortBy": "The name of the resource property used to sort the SearchResults
. The default is LastModifiedTime
.
The name of the Amazon SageMaker resource to Search for.
", + "GetSearchSuggestionsRequest$Resource": "The name of the Amazon SageMaker resource to search for.
", "SearchRequest$Resource": "The name of the Amazon SageMaker resource to search for.
" } }, @@ -4896,6 +4918,7 @@ "HyperParameterTrainingJobDefinition$RoleArn": "The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches.
", "ModelPackageValidationSpecification$ValidationRole": "The IAM roles to be used for the validation of the model package.
", "MonitoringJobDefinition$RoleArn": "The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
", + "ProcessingJob$RoleArn": "The ARN of the role used to create the processing job.
", "RenderUiTemplateRequest$RoleArn": "The Amazon Resource Name (ARN) that has access to the S3 objects that are used by the template.
", "TrainingJob$RoleArn": "The AWS Identity and Access Management (IAM) role configured for the training job.
", "UpdateNotebookInstanceInput$RoleArn": "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access the notebook instance. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole
permission.
A multi-expression that searches for the specified resource or resources in a search. All resource objects that satisfy the expression's condition are included in the search results. You must specify at least one subexpression, filter, or nested filter. A SearchExpression
can contain up to twenty elements.
A SearchExpression
contains the following components:
A list of Filter
objects. Each filter defines a simple Boolean expression comprised of a resource property name, Boolean operator, and value. A SearchExpression
can include only one Contains
operator.
A list of NestedFilter
objects. Each nested filter defines a list of Boolean expressions using a list of resource properties. A nested filter is satisfied if a single object in the list satisfies all Boolean expressions.
A list of SearchExpression
objects. A search expression object can be nested in a list of search expression objects.
A Boolean operator: And
or Or
.
A Boolean conditional statement. Resource objects must satisfy this condition to be included in search results. You must provide at least one subexpression, filter, or nested filter. The maximum number of recursive SubExpressions
, NestedFilters
, and Filters
that can be included in a SearchExpression
object is 50.
A Boolean conditional statement. Resources must satisfy this condition to be included in search results. You must provide at least one subexpression, filter, or nested filter. The maximum number of recursive SubExpressions
, NestedFilters
, and Filters
that can be included in a SearchExpression
object is 50.
An individual search result record that contains a single resource object.
", + "base": "A single resource returned as part of the Search API response.
", "refs": { "SearchResultsList$member": null } @@ -5039,7 +5062,7 @@ "SearchResultsList": { "base": null, "refs": { - "SearchResponse$Results": "A list of SearchResult
objects.
A list of SearchRecord
objects.
An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see AWS Tagging Strategies.
Tags that you specify for the tuning job are also added to all training jobs that the tuning job launches.
", "CreateLabelingJobRequest$Tags": "An array of key/value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateModelInput$Tags": "An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", - "CreateMonitoringScheduleRequest$Tags": "(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", + "CreateMonitoringScheduleRequest$Tags": "(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateNotebookInstanceInput$Tags": "A list of tags to associate with the notebook instance. You can add tags later by using the CreateTags
API.
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", + "CreateProcessingJobRequest$Tags": "(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateTrainingJobRequest$Tags": "An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateTransformJobRequest$Tags": "(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateTrialComponentRequest$Tags": "A list of tags to associate with the component. You can use Search API to search on the tags.
", @@ -5435,6 +5458,7 @@ "DescribeLabelingJobResponse$Tags": "An array of key/value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "Experiment$Tags": "The list of tags that are associated with the experiment. You can use Search API to search on the tags.
", "ListTagsOutput$Tags": "An array of Tag
objects, each with a tag key and a value.
An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "TrainingJob$Tags": "An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "Trial$Tags": "The list of tags that are associated with the trial. You can use Search API to search on the tags.
", "TrialComponent$Tags": "The list of tags that are associated with the component. You can use Search API to search on the tags.
" @@ -5468,7 +5492,7 @@ "TaskAvailabilityLifetimeInSeconds": { "base": null, "refs": { - "HumanTaskConfig$TaskAvailabilityLifetimeInSeconds": "The length of time that a task remains available for labeling by human workers. If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43200). The default value is 864000 seconds (1 day). For private and vendor workforces, the maximum is as listed.
" + "HumanTaskConfig$TaskAvailabilityLifetimeInSeconds": "The length of time that a task remains available for labeling by human workers. If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43200). The default value is 864000 seconds (10 days). For private and vendor workforces, the maximum is as listed.
" } }, "TaskCount": { @@ -5525,7 +5549,7 @@ "TemplateContentSha256": { "base": null, "refs": { - "UiTemplateInfo$ContentSha256": "The SHA 256 hash that you used to create the request signature.
" + "UiTemplateInfo$ContentSha256": "The SHA-256 digest of the contents of the template.
" } }, "TemplateUrl": { @@ -5689,6 +5713,10 @@ "MonitoringExecutionSummary$LastModifiedTime": "A timestamp that indicates the last time the monitoring job was modified.
", "MonitoringScheduleSummary$CreationTime": "The creation time of the monitoring schedule.
", "MonitoringScheduleSummary$LastModifiedTime": "The last time the monitoring schedule was modified.
", + "ProcessingJob$ProcessingEndTime": "The time that the processing job ended.
", + "ProcessingJob$ProcessingStartTime": "The time that the processing job started.
", + "ProcessingJob$LastModifiedTime": "The time the processing job was last modified.
", + "ProcessingJob$CreationTime": "The time the processing job was created.
", "ProcessingJobSummary$CreationTime": "The time at which the processing job was created.
", "ProcessingJobSummary$ProcessingEndTime": "The time at which the processing job completed.
", "ProcessingJobSummary$LastModifiedTime": "A timestamp that indicates the last time the processing job was modified.
", @@ -5757,8 +5785,8 @@ "TrainingJob": { "base": "Contains information about a training job.
", "refs": { - "SearchRecord$TrainingJob": "A TrainingJob
object that is returned as part of a Search
request.
The properties of a training job.
", + "TrialComponentSourceDetail$TrainingJob": "Information about a training job that's the source of a trial component.
" } }, "TrainingJobArn": { @@ -5768,6 +5796,7 @@ "DescribeProcessingJobResponse$TrainingJobArn": "The ARN of a training job associated with this processing job.
", "DescribeTrainingJobResponse$TrainingJobArn": "The Amazon Resource Name (ARN) of the training job.
", "HyperParameterTrainingJobSummary$TrainingJobArn": "The Amazon Resource Name (ARN) of the training job.
", + "ProcessingJob$TrainingJobArn": "The ARN of the training job associated with this processing job.
", "TrainingJob$TrainingJobArn": "The Amazon Resource Name (ARN) of the training job.
", "TrainingJobSummary$TrainingJobArn": "The Amazon Resource Name (ARN) of the training job.
" } @@ -5977,9 +6006,9 @@ } }, "Trial": { - "base": "A summary of the properties of a trial as returned by the Search API.
", + "base": "The properties of a trial as returned by the Search API.
", "refs": { - "SearchRecord$Trial": "A summary of the properties of a trial.
" + "SearchRecord$Trial": "The properties of a trial.
" } }, "TrialArn": { @@ -5996,9 +6025,9 @@ } }, "TrialComponent": { - "base": "A summary of the properties of a trial component as returned by the Search API.
", + "base": "The properties of a trial component as returned by the Search API.
", "refs": { - "SearchRecord$TrialComponent": "A summary of the properties of a trial component.
" + "SearchRecord$TrialComponent": "The properties of a trial component.
" } }, "TrialComponentArn": { @@ -6100,10 +6129,10 @@ } }, "TrialComponentSource": { - "base": "The source of the trial component.
", + "base": "The Amazon Resource Name (ARN) and job type of the source of a trial component.
", "refs": { "DescribeTrialComponentResponse$Source": "The Amazon Resource Name (ARN) of the source and, optionally, the job type.
", - "TrialComponent$Source": null, + "TrialComponent$Source": "The Amazon Resource Name (ARN) and job type of the source of the component.
", "TrialComponentSimpleSummary$TrialComponentSource": null, "TrialComponentSummary$TrialComponentSource": null } @@ -6112,14 +6141,14 @@ "base": null, "refs": { "TrialComponentMetricSummary$SourceArn": "The Amazon Resource Name (ARN) of the source.
", - "TrialComponentSource$SourceArn": "The Amazon Resource Name (ARN) of the source.
", + "TrialComponentSource$SourceArn": "The source ARN.
", "TrialComponentSourceDetail$SourceArn": "The Amazon Resource Name (ARN) of the source.
" } }, "TrialComponentSourceDetail": { - "base": "Detailed information about the source of a trial component.
", + "base": "Detailed information about the source of a trial component. Either ProcessingJob
or TrainingJob
is returned.
The source of the trial component.>
" + "TrialComponent$SourceDetail": "Details of the source of the component.
" } }, "TrialComponentStatus": { @@ -6438,7 +6467,7 @@ } }, "VariantProperty": { - "base": "Specifies a production variant property type for an Endpoint.
If you are updating an endpoint with the RetainAllVariantProperties option set to true
, the VariantProperty
objects listed in ExcludeRetainedVariantProperties override the existing variant properties of the endpoint.
Specifies a production variant property type for an Endpoint.
If you are updating an endpoint with the UpdateEndpointInput$RetainAllVariantProperties option set to true
, the VariantProperty
objects listed in UpdateEndpointInput$ExcludeRetainedVariantProperties override the existing variant properties of the endpoint.
When you are updating endpoint resources with RetainAllVariantProperties, whose value is set to true
, ExcludeRetainedVariantProperties
specifies the list of type VariantProperty to override with the values provided by EndpointConfig
. If you don't specify a value for ExcludeAllVariantProperties
, no variant properties are overridden.
When you are updating endpoint resources with UpdateEndpointInput$RetainAllVariantProperties, whose value is set to true
, ExcludeRetainedVariantProperties
specifies the list of type VariantProperty to override with the values provided by EndpointConfig
. If you don't specify a value for ExcludeAllVariantProperties
, no variant properties are overridden.
The type of variant property. The supported values are:
DesiredInstanceCount
: Overrides the existing variant instance counts using the InitialInstanceCount values in the ProductionVariants.
DesiredWeight
: Overrides the existing variant weights using the InitialVariantWeight values in the ProductionVariants.
DataCaptureConfig
: (Not currently supported.)
The type of variant property. The supported values are:
DesiredInstanceCount
: Overrides the existing variant instance counts using the ProductionVariant$InitialInstanceCount values in the CreateEndpointConfigInput$ProductionVariants.
DesiredWeight
: Overrides the existing variant weights using the ProductionVariant$InitialVariantWeight values in the CreateEndpointConfigInput$ProductionVariants.
DataCaptureConfig
: (Not currently supported.)
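To illustrate how RetainAllVariantProperties and ExcludeRetainedVariantProperties interact as described above, here is a hedged Go fragment, using the same assumed v0.x client pattern and imports as the earlier sketches; the endpoint and config names are placeholders. It keeps the endpoint's existing variant properties except the desired weights, which are taken from the new endpoint config.

```go
req := client.UpdateEndpointRequest(&sagemaker.UpdateEndpointInput{
	EndpointName:       aws.String("my-endpoint"),        // placeholder
	EndpointConfigName: aws.String("my-endpoint-config"), // placeholder
	// Keep the current instance counts, weights, and data capture settings...
	RetainAllVariantProperties: aws.Bool(true),
	// ...except DesiredWeight, which is overridden from the new endpoint config.
	ExcludeRetainedVariantProperties: []sagemaker.VariantProperty{
		{VariantPropertyType: sagemaker.VariantPropertyType("DesiredWeight")},
	},
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatal(err)
}
```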
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.
", "refs": { "AutoMLSecurityConfig$VpcConfig": "VPC configuration.
", - "CreateModelInput$VpcConfig": "A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig
is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig
is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
A VpcConfig object that specifies the VPC that you want your training job to connect to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
", "DescribeModelOutput$VpcConfig": "A VpcConfig object that specifies the VPC that this model has access to. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud
", "DescribeTrainingJobResponse$VpcConfig": "A VpcConfig object that specifies the VPC that this training job has access to. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
", diff --git a/models/apis/securityhub/2018-10-26/api-2.json b/models/apis/securityhub/2018-10-26/api-2.json index 1e7cdc15485..deeef68bb33 100644 --- a/models/apis/securityhub/2018-10-26/api-2.json +++ b/models/apis/securityhub/2018-10-26/api-2.json @@ -73,6 +73,21 @@ {"shape":"InvalidAccessException"} ] }, + "BatchUpdateFindings":{ + "name":"BatchUpdateFindings", + "http":{ + "method":"PATCH", + "requestUri":"/findings/batchupdate" + }, + "input":{"shape":"BatchUpdateFindingsRequest"}, + "output":{"shape":"BatchUpdateFindingsResponse"}, + "errors":[ + {"shape":"InternalException"}, + {"shape":"InvalidInputException"}, + {"shape":"LimitExceededException"}, + {"shape":"InvalidAccessException"} + ] + }, "CreateActionTarget":{ "name":"CreateActionTarget", "http":{ @@ -1351,6 +1366,21 @@ "Keyword":{"shape":"KeywordFilterList"} } }, + "AwsSecurityFindingIdentifier":{ + "type":"structure", + "required":[ + "Id", + "ProductArn" + ], + "members":{ + "Id":{"shape":"NonEmptyString"}, + "ProductArn":{"shape":"NonEmptyString"} + } + }, + "AwsSecurityFindingIdentifierList":{ + "type":"list", + "member":{"shape":"AwsSecurityFindingIdentifier"} + }, "AwsSecurityFindingList":{ "type":"list", "member":{"shape":"AwsSecurityFinding"} @@ -1453,6 +1483,50 @@ "FailedFindings":{"shape":"ImportFindingsErrorList"} } }, + "BatchUpdateFindingsRequest":{ + "type":"structure", + "required":["FindingIdentifiers"], + "members":{ + "FindingIdentifiers":{"shape":"AwsSecurityFindingIdentifierList"}, + "Note":{"shape":"NoteUpdate"}, + "Severity":{"shape":"SeverityUpdate"}, + "VerificationState":{"shape":"VerificationState"}, + "Confidence":{"shape":"RatioScale"}, + "Criticality":{"shape":"RatioScale"}, + "Types":{"shape":"TypeList"}, + "UserDefinedFields":{"shape":"FieldMap"}, + "Workflow":{"shape":"WorkflowUpdate"}, + "RelatedFindings":{"shape":"RelatedFindingList"} + } + }, + "BatchUpdateFindingsResponse":{ + "type":"structure", + "required":[ + "ProcessedFindings", + "UnprocessedFindings" + ], + "members":{ + "ProcessedFindings":{"shape":"AwsSecurityFindingIdentifierList"}, + "UnprocessedFindings":{"shape":"BatchUpdateFindingsUnprocessedFindingsList"} + } + }, + "BatchUpdateFindingsUnprocessedFinding":{ + "type":"structure", + "required":[ + "FindingIdentifier", + "ErrorCode", + "ErrorMessage" + ], + "members":{ + "FindingIdentifier":{"shape":"AwsSecurityFindingIdentifier"}, + "ErrorCode":{"shape":"NonEmptyString"}, + "ErrorMessage":{"shape":"NonEmptyString"} + } + }, + "BatchUpdateFindingsUnprocessedFindingsList":{ + "type":"list", + "member":{"shape":"BatchUpdateFindingsUnprocessedFinding"} + }, "Boolean":{"type":"boolean"}, "CategoryList":{ "type":"list", @@ -1809,7 +1883,8 @@ "EnableSecurityHubRequest":{ "type":"structure", "members":{ - "Tags":{"shape":"TagMap"} + "Tags":{"shape":"TagMap"}, + "EnableDefaultStandards":{"shape":"Boolean"} } }, "EnableSecurityHubResponse":{ @@ -2364,6 +2439,11 @@ "type":"list", "member":{"shape":"Product"} }, + "RatioScale":{ + "type":"integer", + "max":100, + "min":0 + }, "Recommendation":{ "type":"structure", "members":{ @@ -2511,6 +2591,14 @@ "CRITICAL" ] }, + "SeverityUpdate":{ + "type":"structure", + "members":{ + "Normalized":{"shape":"RatioScale"}, + "Product":{"shape":"Double"}, + "Label":{"shape":"SeverityLabel"} + } + }, "SortCriteria":{ "type":"list", "member":{"shape":"SortCriterion"} @@ -2534,7 +2622,8 @@ "members":{ "StandardsArn":{"shape":"NonEmptyString"}, "Name":{"shape":"NonEmptyString"}, - "Description":{"shape":"NonEmptyString"} + 
"Description":{"shape":"NonEmptyString"}, + "EnabledByDefault":{"shape":"Boolean"} } }, "Standards":{ @@ -2879,6 +2968,12 @@ "RESOLVED", "SUPPRESSED" ] + }, + "WorkflowUpdate":{ + "type":"structure", + "members":{ + "Status":{"shape":"WorkflowStatus"} + } } } } diff --git a/models/apis/securityhub/2018-10-26/docs-2.json b/models/apis/securityhub/2018-10-26/docs-2.json index f89e1469e5d..ec383efc671 100644 --- a/models/apis/securityhub/2018-10-26/docs-2.json +++ b/models/apis/securityhub/2018-10-26/docs-2.json @@ -5,7 +5,8 @@ "AcceptInvitation": "Accepts the invitation to be a member account and be monitored by the Security Hub master account that the invitation was sent from.
When the member account accepts the invitation, permission is granted to the master account to view findings generated in the member account.
", "BatchDisableStandards": "Disables the standards specified by the provided StandardsSubscriptionArns
.
For more information, see Security Standards section of the AWS Security Hub User Guide.
", "BatchEnableStandards": "Enables the standards specified by the provided StandardsArn
. To obtain the ARN for a standard, use the DescribeStandards
operation.
For more information, see the Security Standards section of the AWS Security Hub User Guide.
", - "BatchImportFindings": "Imports security findings generated from an integrated third-party product into Security Hub. This action is requested by the integrated product to import its findings into Security Hub.
The maximum allowed size for a finding is 240 Kb. An error is returned for any finding larger than 240 Kb.
", + "BatchImportFindings": "Imports security findings generated from an integrated third-party product into Security Hub. This action is requested by the integrated product to import its findings into Security Hub.
The maximum allowed size for a finding is 240 Kb. An error is returned for any finding larger than 240 Kb.
After a finding is created, BatchImportFindings
cannot be used to update the following finding fields and objects, which Security Hub customers use to manage their investigation workflow.
Confidence
Criticality
Note
RelatedFindings
Severity
Types
UserDefinedFields
VerificationState
Workflow
Used by Security Hub customers to update information about their investigation into a finding. Requested by master accounts or member accounts. Master accounts can update findings for their account and their member accounts. Member accounts can update findings for their account.
Updates from BatchUpdateFindings
do not affect the value of UpdatedAt
for a finding.
Master accounts can use BatchUpdateFindings
to update the following finding fields and objects.
Confidence
Criticality
Note
RelatedFindings
Severity
Types
UserDefinedFields
VerificationState
Workflow
Member accounts can only use BatchUpdateFindings
to update the Note object.
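To make the new operation concrete, here is a rough sketch of a BatchUpdateFindings request body assembled by hand from the shapes added in this model (PATCH to /findings/batchupdate; FindingIdentifiers is the only required member). The finding ID and product ARN are placeholders, and no generated client is used.

```go
// Sketch only: placeholder values, hand-built JSON instead of a generated client.
package main

import (
	"encoding/json"
	"fmt"
)

type awsSecurityFindingIdentifier struct {
	Id         string `json:"Id"`
	ProductArn string `json:"ProductArn"`
}

type workflowUpdate struct {
	Status string `json:"Status"` // NEW | NOTIFIED | RESOLVED | SUPPRESSED
}

type batchUpdateFindingsRequest struct {
	FindingIdentifiers []awsSecurityFindingIdentifier `json:"FindingIdentifiers"`
	Confidence         int                            `json:"Confidence,omitempty"` // RatioScale: 0-100
	Workflow           *workflowUpdate                `json:"Workflow,omitempty"`
}

func main() {
	req := batchUpdateFindingsRequest{
		FindingIdentifiers: []awsSecurityFindingIdentifier{{
			Id:         "example-finding-id",
			ProductArn: "arn:aws:securityhub:us-west-2:123456789012:product/example/example",
		}},
		Confidence: 80,
		Workflow:   &workflowUpdate{Status: "NOTIFIED"},
	}
	body, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(body)) // payload for PATCH /findings/batchupdate
}
```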
Creates a custom action target in Security Hub.
You can use custom actions on findings and insights in Security Hub to trigger target actions in Amazon CloudWatch Events.
", "CreateInsight": "Creates a custom insight in Security Hub. An insight is a consolidation of findings that relate to a security issue that requires attention or remediation.
To group the related findings in the insight, use the GroupByAttribute
.
Creates a member association in Security Hub between the specified accounts and the account used to make the request, which is the master account. To successfully create a member, you must use this action from an account that already has Security Hub enabled. To enable Security Hub, you can use the EnableSecurityHub
operation.
After you use CreateMembers
to create member account associations in Security Hub, you must use the InviteMembers
operation to invite the accounts to enable Security Hub and become member accounts in Security Hub.
If the account owner accepts the invitation, the account becomes a member account in Security Hub, and a permission policy is added that permits the master account to view the findings generated in the member account. When Security Hub is enabled in the invited account, findings start to be sent to both the member and master accounts.
To remove the association between the master and member accounts, use the DisassociateFromMasterAccount
or DisassociateMembers
operation.
Disassociates the current Security Hub member account from the associated master account.
", "DisassociateMembers": "Disassociates the specified member accounts from the associated master account.
", "EnableImportFindingsForProduct": "Enables the integration of a partner product with Security Hub. Integrated products send findings to Security Hub.
When you enable a product integration, a permission policy that grants permission for the product to send findings to Security Hub is applied.
", - "EnableSecurityHub": "Enables Security Hub for your account in the current Region or the Region you specify in the request.
When you enable Security Hub, you grant to Security Hub the permissions necessary to gather findings from AWS Config, Amazon GuardDuty, Amazon Inspector, and Amazon Macie.
When you use the EnableSecurityHub
operation to enable Security Hub, you also automatically enable the CIS AWS Foundations standard. You do not enable the Payment Card Industry Data Security Standard (PCI DSS) standard. To enable a standard, use the BatchEnableStandards
operation. To disable a standard, use the BatchDisableStandards
operation.
To learn more, see Setting Up AWS Security Hub in the AWS Security Hub User Guide.
", + "EnableSecurityHub": "Enables Security Hub for your account in the current Region or the Region you specify in the request.
When you enable Security Hub, you grant to Security Hub the permissions necessary to gather findings from other services that are integrated with Security Hub.
When you use the EnableSecurityHub
operation to enable Security Hub, you also automatically enable the CIS AWS Foundations standard. The Payment Card Industry Data Security Standard (PCI DSS) standard is not enabled automatically. If you do not want to enable the CIS AWS Foundations standard, set EnableDefaultStandards
to false
.
After you enable Security Hub, to enable a standard, use the BatchEnableStandards
operation. To disable a standard, use the BatchDisableStandards
operation.
To learn more, see Setting Up AWS Security Hub in the AWS Security Hub User Guide.
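A small, hand-rolled sketch of the EnableSecurityHub request body with the new EnableDefaultStandards flag; the tag key and value are arbitrary examples, and the JSON stands in for whatever client actually sends the request.

```go
// Sketch only: shows the request body shape, not a generated client call.
package main

import (
	"encoding/json"
	"fmt"
)

type enableSecurityHubRequest struct {
	Tags                   map[string]string `json:"Tags,omitempty"`
	EnableDefaultStandards *bool             `json:"EnableDefaultStandards,omitempty"`
}

func main() {
	optOut := false // skip the automatically enabled standards
	req := enableSecurityHubRequest{
		Tags:                   map[string]string{"team": "security"},
		EnableDefaultStandards: &optOut,
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b))
}
```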
", "GetEnabledStandards": "Returns a list of the standards that are currently enabled.
", "GetFindings": "Returns a list of findings that match the specified criteria.
", "GetInsightResults": "Lists the results of the Security Hub insight specified by the insight ARN.
", @@ -40,7 +41,7 @@ "TagResource": "Adds one or more tags to a resource.
", "UntagResource": "Removes one or more tags from a resource.
", "UpdateActionTarget": "Updates the name and description of a custom action target in Security Hub.
", - "UpdateFindings": "Updates the Note
and RecordState
of the Security Hub-aggregated findings that the filter attributes specify. Any member account that can view the finding also sees the update to the finding.
UpdateFindings
is deprecated. Instead of UpdateFindings
, use BatchUpdateFindings
.
Updates the Note
and RecordState
of the Security Hub-aggregated findings that the filter attributes specify. Any member account that can view the finding also sees the update to the finding.
Updates the Security Hub insight identified by the specified insight ARN.
", "UpdateStandardsControl": "Used to control whether an individual security standard control is enabled or disabled.
" }, @@ -500,6 +501,20 @@ "UpdateInsightRequest$Filters": "The updated filters that define this insight.
" } }, + "AwsSecurityFindingIdentifier": { + "base": "Identifies a finding to update using BatchUpdateFindings
.
The identifier of the finding that was not updated.
" + } + }, + "AwsSecurityFindingIdentifierList": { + "base": null, + "refs": { + "BatchUpdateFindingsRequest$FindingIdentifiers": "The list of findings to update. BatchUpdateFindings
can be used to update up to 100 findings at a time.
For each finding, the list provides the finding identifier and the ARN of the finding provider.
", + "BatchUpdateFindingsResponse$ProcessedFindings": "The list of findings that were updated successfully.
" + } + }, "AwsSecurityFindingList": { "base": null, "refs": { @@ -579,6 +594,28 @@ "refs": { } }, + "BatchUpdateFindingsRequest": { + "base": null, + "refs": { + } + }, + "BatchUpdateFindingsResponse": { + "base": null, + "refs": { + } + }, + "BatchUpdateFindingsUnprocessedFinding": { + "base": "A finding from a BatchUpdateFindings
request that Security Hub was unable to update.
The list of findings that were not updated.
" + } + }, "Boolean": { "base": null, "refs": { @@ -594,7 +631,9 @@ "AwsRdsDbInstanceDetails$IAMDatabaseAuthenticationEnabled": "True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.
IAM database authentication can be enabled for the following database engines.
For MySQL 5.6, minor version 5.6.34 or higher
For MySQL 5.7, minor version 5.7.16 or higher
Aurora 5.6 or higher
Specifies the accessibility options for the DB instance.
A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address.
A value of false specifies an internal instance with a DNS name that resolves to a private IP address.
", "AwsRdsDbInstanceDetails$StorageEncrypted": "Specifies whether the DB instance is encrypted.
", - "ListMembersRequest$OnlyAssociated": "Specifies which member accounts to include in the response based on their relationship status with the master account. The default value is TRUE
.
If OnlyAssociated
is set to TRUE
, the response includes member accounts whose relationship status with the master is set to ENABLED
or DISABLED
.
If OnlyAssociated
is set to FALSE
, the response includes all existing member accounts.
Whether to enable the security standards that Security Hub has designated as automatically enabled. If you do not provide a value for EnableDefaultStandards
, it is set to true
. If you do not want to enable the automatically enabled standards, set EnableDefaultStandards
to false
.
Specifies which member accounts to include in the response based on their relationship status with the master account. The default value is TRUE
.
If OnlyAssociated
is set to TRUE
, the response includes member accounts whose relationship status with the master is set to ENABLED
or DISABLED
.
If OnlyAssociated
is set to FALSE
, the response includes all existing member accounts.
Whether the standard is enabled by default. When Security Hub is enabled from the console, if a standard is enabled by default, the check box for that standard is selected by default.
When Security Hub is enabled using the EnableSecurityHub
API operation, the standard is enabled by default unless EnableDefaultStandards
is set to false
.
The greater-than-equal condition to be applied to a single field when querying for findings.
", "NumberFilter$Lte": "The less-than-equal condition to be applied to a single field when querying for findings.
", "NumberFilter$Eq": "The equal-to condition to be applied to a single field when querying for findings.
", - "Severity$Product": "The native severity as defined by the AWS service or integrated partner product that generated the finding.
" + "Severity$Product": "The native severity as defined by the AWS service or integrated partner product that generated the finding.
", + "SeverityUpdate$Product": "The native severity as defined by the AWS service or integrated partner product that generated the finding.
" } }, "EnableImportFindingsForProductRequest": { @@ -869,6 +909,7 @@ "AwsLambdaFunctionEnvironment$Variables": "Environment variable key-value pairs.
", "AwsSecurityFinding$ProductFields": "A data type where security-findings providers can include additional solution-specific details that aren't part of the defined AwsSecurityFinding
format.
A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding.
", + "BatchUpdateFindingsRequest$UserDefinedFields": "A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding.
", "Resource$Tags": "A list of AWS tags associated with a resource at the time the finding was processed.
", "ResourceDetails$Other": "Details about a resource that are not available in a type-specific details object. Use the Other
object in the following cases.
The type-specific object does not contain all of the fields that you want to populate. In this case, first use the type-specific object to populate those fields. Use the Other
object to populate the fields that are missing from the type-specific object.
The resource type does not have a corresponding object. This includes resources for which the type is Other
.
Includes details of the list of the findings that cannot be imported.
", + "base": "The list of the findings that cannot be imported. For each finding, the list provides the error.
", "refs": { "ImportFindingsErrorList$member": null } @@ -1408,6 +1449,8 @@ "AwsSecurityFinding$Title": "A finding's title.
In this release, Title
is a required property.
A finding's description.
In this release, Description
is a required property.
A URL that links to a page about the current finding in the security-findings provider's solution.
", + "AwsSecurityFindingIdentifier$Id": "The identifier of the finding that was specified by the finding provider.
", + "AwsSecurityFindingIdentifier$ProductArn": "The ARN generated by Security Hub that uniquely identifies a product that generates findings. This can be the ARN for a third-party product that is integrated with Security Hub, or the ARN for a custom integration.
", "AwsSnsTopicDetails$KmsMasterKeyId": "The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.
", "AwsSnsTopicDetails$TopicName": "The name of the topic.
", "AwsSnsTopicDetails$Owner": "The subscription's owner.
", @@ -1421,6 +1464,8 @@ "AwsWafWebAclDetails$WebAclId": "A unique identifier for a WebACL.
", "AwsWafWebAclRule$RuleId": "The identifier for a Rule.
", "AwsWafWebAclRule$Type": "The rule type.
Valid values: REGULAR
| RATE_BASED
| GROUP
The default is REGULAR
.
The code associated with the error.
", + "BatchUpdateFindingsUnprocessedFinding$ErrorMessage": "The message associated with the error.
", "CategoryList$member": null, "ContainerDetails$Name": "The name of the container related to a finding.
", "ContainerDetails$ImageId": "The identifier of the image related to a finding.
", @@ -1431,7 +1476,7 @@ "CreateActionTargetRequest$Id": "The ID for the custom action target.
", "CreateActionTargetResponse$ActionTargetArn": "The ARN for the custom action target.
", "CreateInsightRequest$Name": "The name of the custom insight to create.
", - "CreateInsightRequest$GroupByAttribute": "The attribute used as the aggregator to group related findings for the insight.
", + "CreateInsightRequest$GroupByAttribute": "The attribute used to group the findings for the insight. The grouping attribute identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers.
", "CreateInsightResponse$InsightArn": "The ARN of the insight created.
", "DateFilter$Start": "A start date for the date filter.
", "DateFilter$End": "An end date for the date filter.
", @@ -1449,12 +1494,12 @@ "FieldMap$key": null, "FieldMap$value": null, "GetInsightResultsRequest$InsightArn": "The ARN of the insight for which to return results.
", - "ImportFindingsError$Id": "The ID of the error made during the BatchImportFindings
operation.
The code of the error made during the BatchImportFindings
operation.
The message of the error made during the BatchImportFindings
operation.
The identifier of the finding that could not be updated.
", + "ImportFindingsError$ErrorCode": "The code of the error returned by the BatchImportFindings
operation.
The message of the error returned by the BatchImportFindings
operation.
The ARN of a Security Hub insight.
", "Insight$Name": "The name of a Security Hub insight.
", - "Insight$GroupByAttribute": "The attribute that the insight's findings are grouped by. This attribute is used as a findings aggregator for the purposes of viewing and managing multiple related findings under a single operand.
", + "Insight$GroupByAttribute": "The grouping attribute for the insight's findings. Indicates how to group the matching findings, and identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers.
", "InsightResultValue$GroupByAttributeValue": "The value of the attribute that the findings are grouped by for the insight whose results are returned by the GetInsightResults
operation.
The ARN of the insight whose results are returned by the GetInsightResults
operation.
The attribute that the findings are grouped by for the insight whose results are returned by the GetInsightResults
operation.
The updated note.
", "refs": { + "BatchUpdateFindingsRequest$Note": null, "UpdateFindingsRequest$Note": "The updated note for the finding.
" } }, @@ -1631,6 +1677,14 @@ "DescribeProductsResponse$Products": "A list of products, including details for each product.
" } }, + "RatioScale": { + "base": null, + "refs": { + "BatchUpdateFindingsRequest$Confidence": "The updated value for the finding confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify.
Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence.
", + "BatchUpdateFindingsRequest$Criticality": "The updated value for the level of importance assigned to the resources associated with the findings.
A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources.
", + "SeverityUpdate$Normalized": "The normalized severity for the finding. This attribute is to be deprecated in favor of Label
.
If you provide Normalized
and do not provide Label
, Label
is set automatically as follows.
0 - INFORMATIONAL
1–39 - LOW
40–69 - MEDIUM
70–89 - HIGH
90–100 - CRITICAL
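The mapping above is easy to mirror in code; the helper below is a hypothetical illustration of the documented ranges, not part of the SDK or the service.

```go
// Hypothetical helper mirroring the documented Normalized-to-Label mapping
// used when a SeverityUpdate provides Normalized but not Label.
package main

import "fmt"

func severityLabel(normalized int) string {
	switch {
	case normalized == 0:
		return "INFORMATIONAL"
	case normalized <= 39:
		return "LOW"
	case normalized <= 69:
		return "MEDIUM"
	case normalized <= 89:
		return "HIGH"
	default:
		return "CRITICAL"
	}
}

func main() {
	fmt.Println(severityLabel(75)) // HIGH
}
```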
A recommendation on how to remediate the issue identified in a finding.
", "refs": { @@ -1653,7 +1707,8 @@ "RelatedFindingList": { "base": null, "refs": { - "AwsSecurityFinding$RelatedFindings": "A list of related findings.
" + "AwsSecurityFinding$RelatedFindings": "A list of related findings.
", + "BatchUpdateFindingsRequest$RelatedFindings": "A list of findings that are related to the updated findings.
" } }, "RelatedRequirementsList": { @@ -1737,7 +1792,8 @@ "SeverityLabel": { "base": null, "refs": { - "Severity$Label": "The severity value of the finding. The allowed values are the following.
INFORMATIONAL
- No issue was found.
LOW
- The issue does not require action on its own.
MEDIUM
- The issue must be addressed but not urgently.
HIGH
- The issue must be addressed as a priority.
CRITICAL
- The issue must be remediated immediately to avoid it escalating.
The severity value of the finding. The allowed values are the following.
INFORMATIONAL
- No issue was found.
LOW
- The issue does not require action on its own.
MEDIUM
- The issue must be addressed but not urgently.
HIGH
- The issue must be addressed as a priority.
CRITICAL
- The issue must be remediated immediately to avoid it escalating.
The severity value of the finding. The allowed values are the following.
INFORMATIONAL
- No issue was found.
LOW
- The issue does not require action on its own.
MEDIUM
- The issue must be addressed but not urgently.
HIGH
- The issue must be addressed as a priority.
CRITICAL
- The issue must be remediated immediately to avoid it escalating.
The severity of findings generated from this security standard control.
The finding severity is based on an assessment of how easy it would be to compromise AWS resources if the issue is detected.
" } }, + "SeverityUpdate": { + "base": "Updates to the severity information for a finding.
", + "refs": { + "BatchUpdateFindingsRequest$Severity": "Used to update the finding severity.
" + } + }, "SortCriteria": { "base": null, "refs": { @@ -1985,7 +2047,8 @@ "TypeList": { "base": null, "refs": { - "AwsSecurityFinding$Types": "One or more finding types in the format of namespace/category/classifier
that classify a finding.
Valid namespace values are: Software and Configuration Checks | TTPs | Effects | Unusual Behaviors | Sensitive Data Identifications
" + "AwsSecurityFinding$Types": "One or more finding types in the format of namespace/category/classifier
that classify a finding.
Valid namespace values are: Software and Configuration Checks | TTPs | Effects | Unusual Behaviors | Sensitive Data Identifications
", + "BatchUpdateFindingsRequest$Types": "One or more finding types in the format of namespace/category/classifier that classify a finding.
Valid namespace values are as follows.
Software and Configuration Checks
TTPs
Effects
Unusual Behaviors
Sensitive Data Identifications
Indicates the veracity of a finding.
" + "AwsSecurityFinding$VerificationState": "Indicates the veracity of a finding.
", + "BatchUpdateFindingsRequest$VerificationState": "Indicates the veracity of a finding.
The available values for VerificationState
are as follows.
UNKNOWN
– The default disposition of a security finding
TRUE_POSITIVE
– The security finding is confirmed
FALSE_POSITIVE
– The security finding was determined to be a false alarm
BENIGN_POSITIVE
– A special case of TRUE_POSITIVE
where the finding doesn't pose any threat, is expected, or both
The status of the investigation into the finding. The allowed values are the following.
NEW
- The initial state of a finding, before it is reviewed.
NOTIFIED
- Indicates that you notified the resource owner about the security issue. Used when the initial reviewer is not the resource owner, and needs intervention from the resource owner.
SUPPRESSED
- The finding will not be reviewed again and will not be acted upon.
RESOLVED
- The finding was reviewed and remediated and is now considered resolved.
The status of the investigation into the finding. The allowed values are the following.
NEW
- The initial state of a finding, before it is reviewed.
NOTIFIED
- Indicates that you notified the resource owner about the security issue. Used when the initial reviewer is not the resource owner, and needs intervention from the resource owner.
SUPPRESSED
- The finding will not be reviewed again and will not be acted upon.
RESOLVED
- The finding was reviewed and remediated and is now considered resolved.
The status of the investigation into the finding. The allowed values are the following.
NEW
- The initial state of a finding, before it is reviewed.
NOTIFIED
- Indicates that you notified the resource owner about the security issue. Used when the initial reviewer is not the resource owner, and needs intervention from the resource owner.
RESOLVED
- The finding was reviewed and remediated and is now considered resolved.
SUPPRESSED
- The finding will not be reviewed again and will not be acted upon.
Used to update information about the investigation into the finding.
", + "refs": { + "BatchUpdateFindingsRequest$Workflow": "Used to update the workflow status of a finding.
The workflow status indicates the progress of the investigation into the finding.
" } } } diff --git a/models/apis/servicecatalog/2015-12-10/api-2.json b/models/apis/servicecatalog/2015-12-10/api-2.json index a514fdcd200..e70a86661fc 100644 --- a/models/apis/servicecatalog/2015-12-10/api-2.json +++ b/models/apis/servicecatalog/2015-12-10/api-2.json @@ -1339,7 +1339,9 @@ "ConstraintId":{"shape":"Id"}, "Type":{"shape":"ConstraintType"}, "Description":{"shape":"ConstraintDescription"}, - "Owner":{"shape":"AccountId"} + "Owner":{"shape":"AccountId"}, + "ProductId":{"shape":"Id"}, + "PortfolioId":{"shape":"Id"} } }, "ConstraintDetails":{ diff --git a/models/apis/servicecatalog/2015-12-10/docs-2.json b/models/apis/servicecatalog/2015-12-10/docs-2.json index 50a033fbf81..ffed48db404 100644 --- a/models/apis/servicecatalog/2015-12-10/docs-2.json +++ b/models/apis/servicecatalog/2015-12-10/docs-2.json @@ -400,10 +400,10 @@ "ConstraintParameters": { "base": null, "refs": { - "CreateConstraintInput$Parameters": "The constraint parameters, in JSON format. The syntax depends on the constraint type as follows:
Specify the RoleArn
property as follows:
{\"RoleArn\" : \"arn:aws:iam::123456789012:role/LaunchRole\"}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one LAUNCH
constraint on a product and portfolio.
Specify the NotificationArns
property as follows:
{\"NotificationArns\" : [\"arn:aws:sns:us-east-1:123456789012:Topic\"]}
Specify the TagUpdatesOnProvisionedProduct
property as follows:
{\"Version\":\"2.0\",\"Properties\":{\"TagUpdateOnProvisionedProduct\":\"String\"}}
The TagUpdatesOnProvisionedProduct
property accepts a string value of ALLOWED
or NOT_ALLOWED
.
Specify the Parameters
property as follows:
{\"Version\": \"String\", \"Properties\": {\"AccountList\": [ \"String\" ], \"RegionList\": [ \"String\" ], \"AdminRole\": \"String\", \"ExecutionRole\": \"String\"}}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one STACKSET
constraint on a product and portfolio.
Products with a STACKSET
constraint will launch an AWS CloudFormation stack set.
Specify the Rules
property. For more information, see Template Constraint Rules.
The constraint parameters, in JSON format. The syntax depends on the constraint type as follows:
You are required to specify either the RoleArn
or the LocalRoleName
but can't use both.
Specify the RoleArn
property as follows:
{\"RoleArn\" : \"arn:aws:iam::123456789012:role/LaunchRole\"}
Specify the LocalRoleName
property as follows:
{\"LocalRoleName\": \"SCBasicLaunchRole\"}
If you specify the LocalRoleName
property, when an account uses the launch constraint, the IAM role with that name in the account will be used. This allows launch-role constraints to be account-agnostic so the administrator can create fewer resources per shared account.
The given role name must exist in the account used to create the launch constraint and the account of the user who launches a product with this launch constraint.
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one LAUNCH
constraint on a product and portfolio.
Specify the NotificationArns
property as follows:
{\"NotificationArns\" : [\"arn:aws:sns:us-east-1:123456789012:Topic\"]}
Specify the TagUpdatesOnProvisionedProduct
property as follows:
{\"Version\":\"2.0\",\"Properties\":{\"TagUpdateOnProvisionedProduct\":\"String\"}}
The TagUpdatesOnProvisionedProduct
property accepts a string value of ALLOWED
or NOT_ALLOWED
.
Specify the Parameters
property as follows:
{\"Version\": \"String\", \"Properties\": {\"AccountList\": [ \"String\" ], \"RegionList\": [ \"String\" ], \"AdminRole\": \"String\", \"ExecutionRole\": \"String\"}}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one STACKSET
constraint on a product and portfolio.
Products with a STACKSET
constraint will launch an AWS CloudFormation stack set.
Specify the Rules
property. For more information, see Template Constraint Rules.
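As a concrete illustration of the new LocalRoleName option described above, the snippet below builds the Parameters JSON string for a LAUNCH constraint. SCBasicLaunchRole is the example role name from the documentation; the same string can be passed to CreateConstraint or UpdateConstraint.

```go
// Sketch only: builds the Parameters JSON for a LAUNCH constraint using
// LocalRoleName instead of RoleArn (you may specify one or the other, not both).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	params := map[string]string{"LocalRoleName": "SCBasicLaunchRole"}
	b, _ := json.Marshal(params)
	// Pass this string as the Parameters value of the constraint. The named
	// role must exist both in the account that creates the launch constraint
	// and in the account that launches a product with it.
	fmt.Println(string(b)) // {"LocalRoleName":"SCBasicLaunchRole"}
}
```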
The constraint parameters.
", "DescribeConstraintOutput$ConstraintParameters": "The constraint parameters.
", - "UpdateConstraintInput$Parameters": "The constraint parameters, in JSON format. The syntax depends on the constraint type as follows:
Specify the RoleArn
property as follows:
{\"RoleArn\" : \"arn:aws:iam::123456789012:role/LaunchRole\"}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one LAUNCH
constraint on a product and portfolio.
Specify the NotificationArns
property as follows:
{\"NotificationArns\" : [\"arn:aws:sns:us-east-1:123456789012:Topic\"]}
Specify the TagUpdatesOnProvisionedProduct
property as follows:
{\"Version\":\"2.0\",\"Properties\":{\"TagUpdateOnProvisionedProduct\":\"String\"}}
The TagUpdatesOnProvisionedProduct
property accepts a string value of ALLOWED
or NOT_ALLOWED
.
Specify the Parameters
property as follows:
{\"Version\": \"String\", \"Properties\": {\"AccountList\": [ \"String\" ], \"RegionList\": [ \"String\" ], \"AdminRole\": \"String\", \"ExecutionRole\": \"String\"}}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one STACKSET
constraint on a product and portfolio.
Products with a STACKSET
constraint will launch an AWS CloudFormation stack set.
Specify the Rules
property. For more information, see Template Constraint Rules.
The constraint parameters, in JSON format. The syntax depends on the constraint type as follows:
You are required to specify either the RoleArn
or the LocalRoleName
but can't use both.
Specify the RoleArn
property as follows:
{\"RoleArn\" : \"arn:aws:iam::123456789012:role/LaunchRole\"}
Specify the LocalRoleName
property as follows:
{\"LocalRoleName\": \"SCBasicLaunchRole\"}
If you specify the LocalRoleName
property, when an account uses the launch constraint, the IAM role with that name in the account will be used. This allows launch-role constraints to be account-agnostic so the administrator can create fewer resources per shared account.
The given role name must exist in the account used to create the launch constraint and the account of the user who launches a product with this launch constraint.
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one LAUNCH
constraint on a product and portfolio.
Specify the NotificationArns
property as follows:
{\"NotificationArns\" : [\"arn:aws:sns:us-east-1:123456789012:Topic\"]}
Specify the TagUpdatesOnProvisionedProduct
property as follows:
{\"Version\":\"2.0\",\"Properties\":{\"TagUpdateOnProvisionedProduct\":\"String\"}}
The TagUpdatesOnProvisionedProduct
property accepts a string value of ALLOWED
or NOT_ALLOWED
.
Specify the Parameters
property as follows:
{\"Version\": \"String\", \"Properties\": {\"AccountList\": [ \"String\" ], \"RegionList\": [ \"String\" ], \"AdminRole\": \"String\", \"ExecutionRole\": \"String\"}}
You cannot have both a LAUNCH
and a STACKSET
constraint.
You also cannot have more than one STACKSET
constraint on a product and portfolio.
Products with a STACKSET
constraint will launch an AWS CloudFormation stack set.
Specify the Rules
property. For more information, see Template Constraint Rules.
The constraint parameters.
" } }, @@ -1007,6 +1007,8 @@ "AssociateServiceActionWithProvisioningArtifactInput$ProvisioningArtifactId": "The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne
.
The self-service action identifier. For example, act-fs7abcd89wxyz
.
The identifier of the constraint.
", + "ConstraintDetail$ProductId": "The identifier of the product the constraint applies to. Note that a constraint applies to a specific instance of a product within a certain portfolio.
", + "ConstraintDetail$PortfolioId": "The identifier of the portfolio the product resides in. The constraint applies only to the instance of the product that lives within this portfolio.
", "CopyProductInput$TargetProductId": "The identifier of the target product. By default, a new product is created.
", "CopyProductOutput$CopyProductToken": "The token to use to track the progress of the operation.
", "CreateConstraintInput$PortfolioId": "The portfolio identifier.
", diff --git a/models/apis/snowball/2016-06-30/api-2.json b/models/apis/snowball/2016-06-30/api-2.json index dc280e200ce..2455522d9e9 100755 --- a/models/apis/snowball/2016-06-30/api-2.json +++ b/models/apis/snowball/2016-06-30/api-2.json @@ -897,6 +897,7 @@ "T80", "T100", "T42", + "T98", "NoPreference" ] }, @@ -906,7 +907,8 @@ "STANDARD", "EDGE", "EDGE_C", - "EDGE_CG" + "EDGE_CG", + "EDGE_S" ] }, "SnsTopicARN":{ diff --git a/models/apis/storagegateway/2013-06-30/api-2.json b/models/apis/storagegateway/2013-06-30/api-2.json index 78c5101a2e9..acbf5a40e28 100644 --- a/models/apis/storagegateway/2013-06-30/api-2.json +++ b/models/apis/storagegateway/2013-06-30/api-2.json @@ -1140,6 +1140,10 @@ "TargetARN":{"shape":"TargetARN"} } }, + "AuditDestinationARN":{ + "type":"string", + "max":1024 + }, "Authentication":{ "type":"string", "max":15, @@ -1336,6 +1340,7 @@ "AdminUserList":{"shape":"FileShareUserList"}, "ValidUserList":{"shape":"FileShareUserList"}, "InvalidUserList":{"shape":"FileShareUserList"}, + "AuditDestinationARN":{"shape":"AuditDestinationARN"}, "Authentication":{"shape":"Authentication"}, "Tags":{"shape":"Tags"} } @@ -2664,6 +2669,7 @@ "AdminUserList":{"shape":"FileShareUserList"}, "ValidUserList":{"shape":"FileShareUserList"}, "InvalidUserList":{"shape":"FileShareUserList"}, + "AuditDestinationARN":{"shape":"AuditDestinationARN"}, "Authentication":{"shape":"Authentication"}, "Tags":{"shape":"Tags"} } @@ -3087,7 +3093,8 @@ "SMBACLEnabled":{"shape":"Boolean"}, "AdminUserList":{"shape":"FileShareUserList"}, "ValidUserList":{"shape":"FileShareUserList"}, - "InvalidUserList":{"shape":"FileShareUserList"} + "InvalidUserList":{"shape":"FileShareUserList"}, + "AuditDestinationARN":{"shape":"AuditDestinationARN"} } }, "UpdateSMBFileShareOutput":{ diff --git a/models/apis/storagegateway/2013-06-30/docs-2.json b/models/apis/storagegateway/2013-06-30/docs-2.json index 65487b8afd8..2ec5e111455 100644 --- a/models/apis/storagegateway/2013-06-30/docs-2.json +++ b/models/apis/storagegateway/2013-06-30/docs-2.json @@ -162,6 +162,14 @@ "refs": { } }, + "AuditDestinationARN": { + "base": null, + "refs": { + "CreateSMBFileShareInput$AuditDestinationARN": "The Amazon Resource Name (ARN) of the storage used for the audit logs.
", + "SMBFileShareInfo$AuditDestinationARN": "The Amazon Resource Name (ARN) of the storage used for the audit logs.
", + "UpdateSMBFileShareInput$AuditDestinationARN": "The Amazon Resource Name (ARN) of the storage used for the audit logs.
" + } + }, "Authentication": { "base": "The authentication method of the file share.
Valid values are ActiveDirectory
or GuestAccess
. The default is ActiveDirectory
.
The Amazon Resource Name (ARN) of the file gateway on which you want to create a file share.
", - "CreateSMBFileShareInput$GatewayARN": "The Amazon Resource Name (ARN) of the file gateway on which you want to create a file share.
", + "CreateSMBFileShareInput$GatewayARN": "The ARN of the file gateway on which you want to create a file share.
", "CreateStorediSCSIVolumeInput$GatewayARN": null, "CreateTapeWithBarcodeInput$GatewayARN": "The unique Amazon Resource Name (ARN) that represents the gateway to associate the virtual tape with. Use the ListGateways operation to return a list of gateways for your account and AWS Region.
", "CreateTapesInput$GatewayARN": "The unique Amazon Resource Name (ARN) that represents the gateway to associate the virtual tapes with. Use the ListGateways operation to return a list of gateways for your account and AWS Region.
", diff --git a/models/apis/synthetics/2017-10-11/api-2.json b/models/apis/synthetics/2017-10-11/api-2.json new file mode 100644 index 00000000000..e931f56e86c --- /dev/null +++ b/models/apis/synthetics/2017-10-11/api-2.json @@ -0,0 +1,754 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2017-10-11", + "endpointPrefix":"synthetics", + "jsonVersion":"1.1", + "protocol":"rest-json", + "serviceAbbreviation":"Synthetics", + "serviceFullName":"Synthetics", + "serviceId":"synthetics", + "signatureVersion":"v4", + "signingName":"synthetics", + "uid":"synthetics-2017-10-11" + }, + "operations":{ + "CreateCanary":{ + "name":"CreateCanary", + "http":{ + "method":"POST", + "requestUri":"/canary" + }, + "input":{"shape":"CreateCanaryRequest"}, + "output":{"shape":"CreateCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "DeleteCanary":{ + "name":"DeleteCanary", + "http":{ + "method":"DELETE", + "requestUri":"/canary/{name}" + }, + "input":{"shape":"DeleteCanaryRequest"}, + "output":{"shape":"DeleteCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ] + }, + "DescribeCanaries":{ + "name":"DescribeCanaries", + "http":{ + "method":"POST", + "requestUri":"/canaries" + }, + "input":{"shape":"DescribeCanariesRequest"}, + "output":{"shape":"DescribeCanariesResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "DescribeCanariesLastRun":{ + "name":"DescribeCanariesLastRun", + "http":{ + "method":"POST", + "requestUri":"/canaries/last-run" + }, + "input":{"shape":"DescribeCanariesLastRunRequest"}, + "output":{"shape":"DescribeCanariesLastRunResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "DescribeRuntimeVersions":{ + "name":"DescribeRuntimeVersions", + "http":{ + "method":"POST", + "requestUri":"/runtime-versions" + }, + "input":{"shape":"DescribeRuntimeVersionsRequest"}, + "output":{"shape":"DescribeRuntimeVersionsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "GetCanary":{ + "name":"GetCanary", + "http":{ + "method":"GET", + "requestUri":"/canary/{name}" + }, + "input":{"shape":"GetCanaryRequest"}, + "output":{"shape":"GetCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "GetCanaryRuns":{ + "name":"GetCanaryRuns", + "http":{ + "method":"POST", + "requestUri":"/canary/{name}/runs" + }, + "input":{"shape":"GetCanaryRunsRequest"}, + "output":{"shape":"GetCanaryRunsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ] + }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"GET", + "requestUri":"/tags/{resourceArn}" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "StartCanary":{ + "name":"StartCanary", + "http":{ + "method":"POST", + "requestUri":"/canary/{name}/start" + }, + "input":{"shape":"StartCanaryRequest"}, + "output":{"shape":"StartCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + 
{"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ] + }, + "StopCanary":{ + "name":"StopCanary", + "http":{ + "method":"POST", + "requestUri":"/canary/{name}/stop" + }, + "input":{"shape":"StopCanaryRequest"}, + "output":{"shape":"StopCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ] + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/tags/{resourceArn}" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"DELETE", + "requestUri":"/tags/{resourceArn}" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "UpdateCanary":{ + "name":"UpdateCanary", + "http":{ + "method":"PATCH", + "requestUri":"/canary/{name}" + }, + "input":{"shape":"UpdateCanaryRequest"}, + "output":{"shape":"UpdateCanaryResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ] + } + }, + "shapes":{ + "Arn":{ + "type":"string", + "pattern":"^arn:(aws|aws-cn|aws-us-gov|aws-iso-{0,1}[a-z]{0,1}):[A-Za-z0-9][A-Za-z0-9_/.-]{0,62}:[A-Za-z0-9_/.-]{0,63}:[A-Za-z0-9_/.-]{0,63}:[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,1023}$" + }, + "Blob":{ + "type":"blob", + "max":10000000, + "min":1 + }, + "Canaries":{ + "type":"list", + "member":{"shape":"Canary"} + }, + "CanariesLastRun":{ + "type":"list", + "member":{"shape":"CanaryLastRun"} + }, + "Canary":{ + "type":"structure", + "members":{ + "Id":{"shape":"UUID"}, + "Name":{"shape":"CanaryName"}, + "Code":{"shape":"CanaryCodeOutput"}, + "ExecutionRoleArn":{"shape":"Arn"}, + "Schedule":{"shape":"CanaryScheduleOutput"}, + "RunConfig":{"shape":"CanaryRunConfigOutput"}, + "SuccessRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "FailureRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "Status":{"shape":"CanaryStatus"}, + "Timeline":{"shape":"CanaryTimeline"}, + "ArtifactS3Location":{"shape":"String"}, + "EngineArn":{"shape":"Arn"}, + "RuntimeVersion":{"shape":"String"}, + "VpcConfig":{"shape":"VpcConfigOutput"}, + "Tags":{"shape":"TagMap"} + } + }, + "CanaryCodeInput":{ + "type":"structure", + "required":["Handler"], + "members":{ + "S3Bucket":{"shape":"String"}, + "S3Key":{"shape":"String"}, + "S3Version":{"shape":"String"}, + "ZipFile":{"shape":"Blob"}, + "Handler":{"shape":"String"} + } + }, + "CanaryCodeOutput":{ + "type":"structure", + "members":{ + "SourceLocationArn":{"shape":"String"}, + "Handler":{"shape":"String"} + } + }, + "CanaryLastRun":{ + "type":"structure", + "members":{ + "CanaryName":{"shape":"CanaryName"}, + "LastRun":{"shape":"CanaryRun"} + } + }, + "CanaryName":{ + "type":"string", + "max":21, + "min":1, + "pattern":"^[0-9a-z_\\-]+$" + }, + "CanaryRun":{ + "type":"structure", + "members":{ + "Name":{"shape":"CanaryName"}, + "Status":{"shape":"CanaryRunStatus"}, + "Timeline":{"shape":"CanaryRunTimeline"}, + "ArtifactS3Location":{"shape":"String"} + } + }, + "CanaryRunConfigInput":{ + "type":"structure", + "required":["TimeoutInSeconds"], + "members":{ + 
"TimeoutInSeconds":{"shape":"MaxFifteenMinutesInSeconds"} + } + }, + "CanaryRunConfigOutput":{ + "type":"structure", + "members":{ + "TimeoutInSeconds":{"shape":"MaxFifteenMinutesInSeconds"} + } + }, + "CanaryRunState":{ + "type":"string", + "enum":[ + "RUNNING", + "PASSED", + "FAILED" + ] + }, + "CanaryRunStateReasonCode":{ + "type":"string", + "enum":[ + "CANARY_FAILURE", + "EXECUTION_FAILURE" + ] + }, + "CanaryRunStatus":{ + "type":"structure", + "members":{ + "State":{"shape":"CanaryRunState"}, + "StateReason":{"shape":"String"}, + "StateReasonCode":{"shape":"CanaryRunStateReasonCode"} + } + }, + "CanaryRunTimeline":{ + "type":"structure", + "members":{ + "Started":{"shape":"Timestamp"}, + "Completed":{"shape":"Timestamp"} + } + }, + "CanaryRuns":{ + "type":"list", + "member":{"shape":"CanaryRun"} + }, + "CanaryScheduleInput":{ + "type":"structure", + "required":["Expression"], + "members":{ + "Expression":{"shape":"String"}, + "DurationInSeconds":{"shape":"MaxOneYearInSeconds"} + } + }, + "CanaryScheduleOutput":{ + "type":"structure", + "members":{ + "Expression":{"shape":"String"}, + "DurationInSeconds":{"shape":"MaxOneYearInSeconds"} + } + }, + "CanaryState":{ + "type":"string", + "enum":[ + "CREATING", + "READY", + "STARTING", + "RUNNING", + "UPDATING", + "STOPPING", + "STOPPED", + "ERROR", + "DELETING" + ] + }, + "CanaryStateReasonCode":{ + "type":"string", + "enum":["INVALID_PERMISSIONS"] + }, + "CanaryStatus":{ + "type":"structure", + "members":{ + "State":{"shape":"CanaryState"}, + "StateReason":{"shape":"String"}, + "StateReasonCode":{"shape":"CanaryStateReasonCode"} + } + }, + "CanaryTimeline":{ + "type":"structure", + "members":{ + "Created":{"shape":"Timestamp"}, + "LastModified":{"shape":"Timestamp"}, + "LastStarted":{"shape":"Timestamp"}, + "LastStopped":{"shape":"Timestamp"} + } + }, + "ConflictException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "CreateCanaryRequest":{ + "type":"structure", + "required":[ + "Name", + "Code", + "ArtifactS3Location", + "ExecutionRoleArn", + "Schedule", + "RuntimeVersion" + ], + "members":{ + "Name":{"shape":"CanaryName"}, + "Code":{"shape":"CanaryCodeInput"}, + "ArtifactS3Location":{"shape":"String"}, + "ExecutionRoleArn":{"shape":"Arn"}, + "Schedule":{"shape":"CanaryScheduleInput"}, + "RunConfig":{"shape":"CanaryRunConfigInput"}, + "SuccessRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "FailureRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "RuntimeVersion":{"shape":"String"}, + "VpcConfig":{"shape":"VpcConfigInput"}, + "Tags":{"shape":"TagMap"} + } + }, + "CreateCanaryResponse":{ + "type":"structure", + "members":{ + "Canary":{"shape":"Canary"} + } + }, + "DeleteCanaryRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + } + } + }, + "DeleteCanaryResponse":{ + "type":"structure", + "members":{ + } + }, + "DescribeCanariesLastRunRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"Token"}, + "MaxResults":{"shape":"MaxSize100"} + } + }, + "DescribeCanariesLastRunResponse":{ + "type":"structure", + "members":{ + "CanariesLastRun":{"shape":"CanariesLastRun"}, + "NextToken":{"shape":"Token"} + } + }, + "DescribeCanariesRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"Token"}, + "MaxResults":{"shape":"MaxCanaryResults"} + } + }, + "DescribeCanariesResponse":{ + "type":"structure", + "members":{ + 
"Canaries":{"shape":"Canaries"}, + "NextToken":{"shape":"Token"} + } + }, + "DescribeRuntimeVersionsRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"Token"}, + "MaxResults":{"shape":"MaxSize100"} + } + }, + "DescribeRuntimeVersionsResponse":{ + "type":"structure", + "members":{ + "RuntimeVersions":{"shape":"RuntimeVersionList"}, + "NextToken":{"shape":"Token"} + } + }, + "ErrorMessage":{"type":"string"}, + "GetCanaryRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + } + } + }, + "GetCanaryResponse":{ + "type":"structure", + "members":{ + "Canary":{"shape":"Canary"} + } + }, + "GetCanaryRunsRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + }, + "NextToken":{"shape":"Token"}, + "MaxResults":{"shape":"MaxSize100"} + } + }, + "GetCanaryRunsResponse":{ + "type":"structure", + "members":{ + "CanaryRuns":{"shape":"CanaryRuns"}, + "NextToken":{"shape":"Token"} + } + }, + "InternalServerException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":500}, + "exception":true + }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{ + "shape":"Arn", + "location":"uri", + "locationName":"resourceArn" + } + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagMap"} + } + }, + "MaxCanaryResults":{ + "type":"integer", + "max":20, + "min":1 + }, + "MaxFifteenMinutesInSeconds":{ + "type":"integer", + "max":900, + "min":60 + }, + "MaxOneYearInSeconds":{ + "type":"long", + "max":31622400, + "min":0 + }, + "MaxSize100":{ + "type":"integer", + "max":100, + "min":1 + }, + "MaxSize1024":{ + "type":"integer", + "max":1024, + "min":1 + }, + "ResourceNotFoundException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":404}, + "exception":true + }, + "RuntimeVersion":{ + "type":"structure", + "members":{ + "VersionName":{"shape":"String"}, + "Description":{"shape":"String"}, + "ReleaseDate":{"shape":"Timestamp"}, + "DeprecationDate":{"shape":"Timestamp"} + } + }, + "RuntimeVersionList":{ + "type":"list", + "member":{"shape":"RuntimeVersion"} + }, + "SecurityGroupId":{"type":"string"}, + "SecurityGroupIds":{ + "type":"list", + "member":{"shape":"SecurityGroupId"}, + "max":5, + "min":0 + }, + "StartCanaryRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + } + } + }, + "StartCanaryResponse":{ + "type":"structure", + "members":{ + } + }, + "StopCanaryRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + } + } + }, + "StopCanaryResponse":{ + "type":"structure", + "members":{ + } + }, + "String":{ + "type":"string", + "max":1024, + "min":1 + }, + "SubnetId":{"type":"string"}, + "SubnetIds":{ + "type":"list", + "member":{"shape":"SubnetId"}, + "max":16, + "min":0 + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1, + "pattern":"^(?!aws:)[a-zA-Z+-=._:/]+$" + }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":50, + "min":1 + }, + "TagMap":{ + "type":"map", + "key":{"shape":"TagKey"}, + "value":{"shape":"TagValue"}, + "max":50, + "min":1 + }, + 
"TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "Tags" + ], + "members":{ + "ResourceArn":{ + "shape":"Arn", + "location":"uri", + "locationName":"resourceArn" + }, + "Tags":{"shape":"TagMap"} + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "TagValue":{ + "type":"string", + "max":256 + }, + "Timestamp":{"type":"timestamp"}, + "Token":{ + "type":"string", + "pattern":"^[a-zA-Z0-9=/+_.-]{4,252}$" + }, + "UUID":{ + "type":"string", + "pattern":"^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$" + }, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagKeys" + ], + "members":{ + "ResourceArn":{ + "shape":"Arn", + "location":"uri", + "locationName":"resourceArn" + }, + "TagKeys":{ + "shape":"TagKeyList", + "location":"querystring", + "locationName":"tagKeys" + } + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "UpdateCanaryRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"CanaryName", + "location":"uri", + "locationName":"name" + }, + "Code":{"shape":"CanaryCodeInput"}, + "ExecutionRoleArn":{"shape":"Arn"}, + "RuntimeVersion":{"shape":"String"}, + "Schedule":{"shape":"CanaryScheduleInput"}, + "RunConfig":{"shape":"CanaryRunConfigInput"}, + "SuccessRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "FailureRetentionPeriodInDays":{"shape":"MaxSize1024"}, + "VpcConfig":{"shape":"VpcConfigInput"} + } + }, + "UpdateCanaryResponse":{ + "type":"structure", + "members":{ + } + }, + "ValidationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + }, + "VpcConfigInput":{ + "type":"structure", + "members":{ + "SubnetIds":{"shape":"SubnetIds"}, + "SecurityGroupIds":{"shape":"SecurityGroupIds"} + } + }, + "VpcConfigOutput":{ + "type":"structure", + "members":{ + "VpcId":{"shape":"VpcId"}, + "SubnetIds":{"shape":"SubnetIds"}, + "SecurityGroupIds":{"shape":"SecurityGroupIds"} + } + }, + "VpcId":{"type":"string"} + } +} diff --git a/models/apis/synthetics/2017-10-11/docs-2.json b/models/apis/synthetics/2017-10-11/docs-2.json new file mode 100644 index 00000000000..8b44092aeee --- /dev/null +++ b/models/apis/synthetics/2017-10-11/docs-2.json @@ -0,0 +1,518 @@ +{ + "version": "2.0", + "service": "You can use Amazon CloudWatch Synthetics to continually monitor your services. You can create and manage canaries, which are modular, lightweight scripts that monitor your endpoints and APIs from the outside-in. You can set up your canaries to run 24 hours a day, once per minute. The canaries help you check the availability and latency of your web services and troubleshoot anomalies by investigating load time data, screenshots of the UI, logs, and metrics. The canaries seamlessly integrate with CloudWatch ServiceLens to help you trace the causes of impacted nodes in your applications. For more information, see Using ServiceLens to Monitor the Health of Your Applications in the Amazon CloudWatch User Guide.
Before you create and manage canaries, be aware of the security considerations. For more information, see Security Considerations for Synthetics Canaries.
", + "operations": { + "CreateCanary": "Creates a canary. Canaries are scripts that monitor your endpoints and APIs from the outside-in. Canaries help you check the availability and latency of your web services and troubleshoot anomalies by investigating load time data, screenshots of the UI, logs, and metrics. You can set up a canary to run continuously or just once.
Do not use CreateCanary
to modify an existing canary. Use UpdateCanary instead.
To create canaries, you must have the CloudWatchSyntheticsFullAccess
policy. If you are creating a new IAM role for the canary, you also need the iam:CreateRole
, iam:CreatePolicy
and iam:AttachRolePolicy
permissions. For more information, see Necessary Roles and Permissions.
Do not include secrets or proprietary information in your canary names. The canary name makes up part of the Amazon Resource Name (ARN) for the canary, and the ARN is included in outbound calls over the internet. For more information, see Security Considerations for Synthetics Canaries.
", + "DeleteCanary": "Permanently deletes the specified canary.
When you delete a canary, resources used and created by the canary are not automatically deleted. After you delete a canary that you do not intend to use again, you should also delete the following:
The Lambda functions and layers used by this canary. These have the prefix cwsyn-MyCanaryName
.
The CloudWatch alarms created for this canary. These alarms have a name of Synthetics-SharpDrop-Alarm-MyCanaryName
.
Amazon S3 objects and buckets, such as the canary's artifact location.
IAM roles created for the canary. If they were created in the console, these roles have the name role/service-role/CloudWatchSyntheticsRole-MyCanaryName
.
CloudWatch Logs log groups created for the canary. These log groups have the name /aws/lambda/cwsyn-MyCanaryName
.
Before you delete a canary, you might want to use GetCanary
to display the information about this canary. Make note of the information returned by this operation so that you can delete these resources after you delete the canary.
", + "DescribeCanaries": "This operation returns a list of the canaries in your account, along with full details about each canary.
This operation does not have resource-level authorization, so if a user is able to use DescribeCanaries
, the user can see all of the canaries in the account. A deny policy can only be used to restrict access to all canaries. It cannot be used on specific resources.
", + "DescribeCanariesLastRun": "Use this operation to see information from the most recent run of each canary that you have created.
", + "DescribeRuntimeVersions": "Returns a list of Synthetics canary runtime versions. For more information, see Canary Runtime Versions.
", + "GetCanary": "Retrieves complete information about one canary. You must specify the name of the canary that you want. To get a list of canaries and their names, use DescribeCanaries.
", + "GetCanaryRuns": "Retrieves a list of runs for a specified canary.
", + "ListTagsForResource": "Displays the tags associated with a canary.
", + "StartCanary": "Use this operation to run a canary that has already been created. The frequency of the canary runs is determined by the value of the canary's Schedule
. To see a canary's schedule, use GetCanary.
", + "StopCanary": "Stops the canary to prevent all future runs. If the canary is currently running, Synthetics stops waiting for the current run of the specified canary to complete. The run that is in progress completes on its own, publishes metrics, and uploads artifacts, but it is not recorded in Synthetics as a completed run.
You can use StartCanary
to start it running again with the canary’s current schedule at any point in the future.
", + "TagResource": "Assigns one or more tags (key-value pairs) to the specified canary.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.
Tags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters.
You can use the TagResource
action with a canary that already has tags. If you specify a new tag key for the canary, this tag is appended to the list of tags associated with the canary. If you specify a tag key that is already associated with the canary, the new tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a canary.
", + "UntagResource": "Removes one or more tags from the specified canary.
", + "UpdateCanary": "Use this operation to change the settings of a canary that has already been created.
You can't use this operation to update the tags of an existing canary. To change the tags of an existing canary, use TagResource.
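The operations above map onto a generated client in this SDK. As a rough, non-authoritative sketch, the snippet below lists canaries with NextToken paging; it assumes the new `synthetics` package follows the same `external.LoadDefaultAWSConfig` / `client.XxxRequest(input).Send(ctx)` pattern and pointer-field input types as the other v0.x service clients, so treat the exact generated names as assumptions rather than confirmed API.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/synthetics"
)

func main() {
	// Load region and credentials the usual way for this SDK.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := synthetics.New(cfg)

	// DescribeCanaries is paginated via NextToken/MaxResults (see the
	// paginators-1.json entry added in this change).
	var nextToken *string
	for {
		req := svc.DescribeCanariesRequest(&synthetics.DescribeCanariesInput{
			NextToken: nextToken,
		})
		resp, err := req.Send(context.TODO())
		if err != nil {
			log.Fatalf("DescribeCanaries failed: %v", err)
		}
		for _, c := range resp.Canaries {
			if c.Name != nil {
				fmt.Println(*c.Name)
			}
		}
		if resp.NextToken == nil {
			break
		}
		nextToken = resp.NextToken
	}
}
```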
" + }, + "shapes": { + "Arn": { + "base": null, + "refs": { + "Canary$ExecutionRoleArn": "The ARN of the IAM role used to run the canary. This role must include lambda.amazonaws.com
as a principal in the trust policy.
The ARN of the Lambda function that is used as your canary's engine. For more information about Lambda ARN format, see Resources and Conditions for Lambda Actions.
", + "CreateCanaryRequest$ExecutionRoleArn": "The ARN of the IAM role to be used to run the canary. This role must already exist, and must include lambda.amazonaws.com
as a principal in the trust policy. The role must also have the following permissions:
s3:PutObject
s3:GetBucketLocation
s3:ListAllMyBuckets
cloudwatch:PutMetricData
logs:CreateLogGroup
logs:CreateLogStream
logs:CreateLogStream
The ARN of the canary that you want to view tags for.
The ARN format of a canary is arn:aws:synthetics:Region:account-id:canary:canary-name
.
The ARN of the canary that you're adding tags to.
The ARN format of a canary is arn:aws:synthetics:Region:account-id:canary:canary-name
.
The ARN of the canary that you're removing tags from.
The ARN format of a canary is arn:aws:synthetics:Region:account-id:canary:canary-name
.
The ARN of the IAM role to be used to run the canary. This role must already exist, and must include lambda.amazonaws.com
as a principal in the trust policy. The role must also have the following permissions:
s3:PutObject
s3:GetBucketLocation
s3:ListAllMyBuckets
cloudwatch:PutMetricData
logs:CreateLogGroup
logs:CreateLogStream
logs:CreateLogStream
If you input your canary script directly into the canary instead of referring to an S3 location, the value of this parameter is the .zip file that contains the script. It can be up to 5 MB.
" + } + }, + "Canaries": { + "base": null, + "refs": { + "DescribeCanariesResponse$Canaries": "Returns an array. Each item in the array contains the full information about one canary.
" + } + }, + "CanariesLastRun": { + "base": null, + "refs": { + "DescribeCanariesLastRunResponse$CanariesLastRun": "An array that contains the information from the most recent run of each canary.
" + } + }, + "Canary": { + "base": "This structure contains all information about one canary in your account.
", + "refs": { + "Canaries$member": null, + "CreateCanaryResponse$Canary": "The full details about the canary you have created.
", + "GetCanaryResponse$Canary": "A strucure that contains the full information about the canary.
" + } + }, + "CanaryCodeInput": { + "base": "Use this structure to input your script code for the canary. This structure contains the Lambda handler with the location where the canary should start running the script. If the script is stored in an S3 bucket, the bucket name, key, and version are also included. If the script was passed into the canary directly, the script code is contained in the value of Zipfile
.
A structure that includes the entry point from which the canary should start running your script. If the script is stored in an S3 bucket, the bucket name, key, and version are also included.
", + "UpdateCanaryRequest$Code": "A structure that includes the entry point from which the canary should start running your script. If the script is stored in an S3 bucket, the bucket name, key, and version are also included.
" + } + }, + "CanaryCodeOutput": { + "base": "This structure contains information about the canary's Lambda handler and where its code is stored by CloudWatch Synthetics.
", + "refs": { + "Canary$Code": null + } + }, + "CanaryLastRun": { + "base": "This structure contains information about the most recent run of a single canary.
", + "refs": { + "CanariesLastRun$member": null + } + }, + "CanaryName": { + "base": null, + "refs": { + "Canary$Name": "The name of the canary.
", + "CanaryLastRun$CanaryName": "The name of the canary.
", + "CanaryRun$Name": "The name of the canary.
", + "CreateCanaryRequest$Name": "The name for this canary. Be sure to give it a descriptive name that distinguishes it from other canaries in your account.
Do not include secrets or proprietary information in your canary names. The canary name makes up part of the canary ARN, and the ARN is included in outbound calls over the internet. For more information, see Security Considerations for Synthetics Canaries.
", + "DeleteCanaryRequest$Name": "The name of the canary that you want to delete. To find the names of your canaries, use DescribeCanaries.
", + "GetCanaryRequest$Name": "The name of the canary that you want details for.
", + "GetCanaryRunsRequest$Name": "The name of the canary that you want to see runs for.
", + "StartCanaryRequest$Name": "The name of the canary that you want to run. To find canary names, use DescribeCanaries.
", + "StopCanaryRequest$Name": "The name of the canary that you want to stop. To find the names of your canaries, use DescribeCanaries.
", + "UpdateCanaryRequest$Name": "The name of the canary that you want to update. To find the names of your canaries, use DescribeCanaries.
You cannot change the name of a canary that has already been created.
" + } + }, + "CanaryRun": { + "base": "This structure contains the details about one run of one canary.
", + "refs": { + "CanaryLastRun$LastRun": "The results from this canary's most recent run.
", + "CanaryRuns$member": null + } + }, + "CanaryRunConfigInput": { + "base": "A structure that contains input information for a canary run.
", + "refs": { + "CreateCanaryRequest$RunConfig": "A structure that contains the configuration for individual canary runs, such as timeout value.
", + "UpdateCanaryRequest$RunConfig": "A structure that contains the timeout value that is used for each individual run of the canary.
" + } + }, + "CanaryRunConfigOutput": { + "base": "A structure that contains information for a canary run.
", + "refs": { + "Canary$RunConfig": null + } + }, + "CanaryRunState": { + "base": null, + "refs": { + "CanaryRunStatus$State": "The current state of the run.
" + } + }, + "CanaryRunStateReasonCode": { + "base": null, + "refs": { + "CanaryRunStatus$StateReasonCode": "If this value is CANARY_FAILURE
, an exception occurred in the canary code. If this value is EXECUTION_FAILURE
, an exception occurred in CloudWatch Synthetics.
This structure contains the status information about a canary run.
", + "refs": { + "CanaryRun$Status": "The status of this run.
" + } + }, + "CanaryRunTimeline": { + "base": "This structure contains the start and end times of a single canary run.
", + "refs": { + "CanaryRun$Timeline": "A structure that contains the start and end times of this run.
" + } + }, + "CanaryRuns": { + "base": null, + "refs": { + "GetCanaryRunsResponse$CanaryRuns": "An array of structures. Each structure contains the details of one of the retrieved canary runs.
" + } + }, + "CanaryScheduleInput": { + "base": "This structure specifies how often a canary is to make runs and the date and time when it should stop making runs.
", + "refs": { + "CreateCanaryRequest$Schedule": "A structure that contains information about how often the canary is to run and when these test runs are to stop.
", + "UpdateCanaryRequest$Schedule": "A structure that contains information about how often the canary is to run, and when these runs are to stop.
" + } + }, + "CanaryScheduleOutput": { + "base": "How long, in seconds, for the canary to continue making regular runs according to the schedule in the Expression
value.
A structure that contains information about how often the canary is to run, and when these runs are to stop.
" + } + }, + "CanaryState": { + "base": null, + "refs": { + "CanaryStatus$State": "The current state of the canary.
" + } + }, + "CanaryStateReasonCode": { + "base": null, + "refs": { + "CanaryStatus$StateReasonCode": "If the canary cannot run or has failed, this field displays the reason.
" + } + }, + "CanaryStatus": { + "base": "A structure that contains the current state of the canary.
", + "refs": { + "Canary$Status": "A structure that contains information about the canary's status.
" + } + }, + "CanaryTimeline": { + "base": "This structure contains information about when the canary was created and modified.
", + "refs": { + "Canary$Timeline": "A structure that contains information about when the canary was created, modified, and most recently run.
" + } + }, + "ConflictException": { + "base": "A conflicting operation is already in progress.
", + "refs": { + } + }, + "CreateCanaryRequest": { + "base": null, + "refs": { + } + }, + "CreateCanaryResponse": { + "base": null, + "refs": { + } + }, + "DeleteCanaryRequest": { + "base": null, + "refs": { + } + }, + "DeleteCanaryResponse": { + "base": null, + "refs": { + } + }, + "DescribeCanariesLastRunRequest": { + "base": null, + "refs": { + } + }, + "DescribeCanariesLastRunResponse": { + "base": null, + "refs": { + } + }, + "DescribeCanariesRequest": { + "base": null, + "refs": { + } + }, + "DescribeCanariesResponse": { + "base": null, + "refs": { + } + }, + "DescribeRuntimeVersionsRequest": { + "base": null, + "refs": { + } + }, + "DescribeRuntimeVersionsResponse": { + "base": null, + "refs": { + } + }, + "ErrorMessage": { + "base": null, + "refs": { + "ConflictException$Message": null, + "InternalServerException$Message": null, + "ResourceNotFoundException$Message": null, + "ValidationException$Message": null + } + }, + "GetCanaryRequest": { + "base": null, + "refs": { + } + }, + "GetCanaryResponse": { + "base": null, + "refs": { + } + }, + "GetCanaryRunsRequest": { + "base": null, + "refs": { + } + }, + "GetCanaryRunsResponse": { + "base": null, + "refs": { + } + }, + "InternalServerException": { + "base": "An unknown internal error occurred.
", + "refs": { + } + }, + "ListTagsForResourceRequest": { + "base": null, + "refs": { + } + }, + "ListTagsForResourceResponse": { + "base": null, + "refs": { + } + }, + "MaxCanaryResults": { + "base": null, + "refs": { + "DescribeCanariesRequest$MaxResults": "Specify this parameter to limit how many canaries are returned each time you use the DescribeCanaries
operation. If you omit this parameter, the default of 100 is used.
How long the canary is allowed to run before it must stop. If you omit this field, the frequency of the canary is used as this value, up to a maximum of 14 minutes.
", + "CanaryRunConfigOutput$TimeoutInSeconds": "How long the canary is allowed to run before it must stop.
" + } + }, + "MaxOneYearInSeconds": { + "base": null, + "refs": { + "CanaryScheduleInput$DurationInSeconds": "How long, in seconds, for the canary to continue making regular runs according to the schedule in the Expression
value. If you specify 0, the canary continues making runs until you stop it. If you omit this field, the default of 0 is used.
How long, in seconds, for the canary to continue making regular runs after it was created. The runs are performed according to the schedule in the Expression
value.
Specify this parameter to limit how many runs are returned each time you use the DescribeLastRun
operation. If you omit this parameter, the default of 100 is used.
Specify this parameter to limit how many runs are returned each time you use the DescribeRuntimeVersions
operation. If you omit this parameter, the default of 100 is used.
Specify this parameter to limit how many runs are returned each time you use the GetCanaryRuns
operation. If you omit this parameter, the default of 100 is used.
The number of days to retain data about successful runs of this canary.
", + "Canary$FailureRetentionPeriodInDays": "The number of days to retain data about failed runs of this canary.
", + "CreateCanaryRequest$SuccessRetentionPeriodInDays": "The number of days to retain data about successful runs of this canary. If you omit this field, the default of 31 days is used. The valid range is 1 to 455 days.
", + "CreateCanaryRequest$FailureRetentionPeriodInDays": "The number of days to retain data about failed runs of this canary. If you omit this field, the default of 31 days is used. The valid range is 1 to 455 days.
", + "UpdateCanaryRequest$SuccessRetentionPeriodInDays": "The number of days to retain data about successful runs of this canary.
", + "UpdateCanaryRequest$FailureRetentionPeriodInDays": "The number of days to retain data about failed runs of this canary.
" + } + }, + "ResourceNotFoundException": { + "base": "One of the specified resources was not found.
", + "refs": { + } + }, + "RuntimeVersion": { + "base": "This structure contains information about one canary runtime version. For more information about runtime versions, see Canary Runtime Versions.
", + "refs": { + "RuntimeVersionList$member": null + } + }, + "RuntimeVersionList": { + "base": null, + "refs": { + "DescribeRuntimeVersionsResponse$RuntimeVersions": "An array of objects that display the details about each Synthetics canary runtime version.
" + } + }, + "SecurityGroupId": { + "base": null, + "refs": { + "SecurityGroupIds$member": null + } + }, + "SecurityGroupIds": { + "base": null, + "refs": { + "VpcConfigInput$SecurityGroupIds": "The IDs of the security groups for this canary.
", + "VpcConfigOutput$SecurityGroupIds": "The IDs of the security groups for this canary.
" + } + }, + "StartCanaryRequest": { + "base": null, + "refs": { + } + }, + "StartCanaryResponse": { + "base": null, + "refs": { + } + }, + "StopCanaryRequest": { + "base": null, + "refs": { + } + }, + "StopCanaryResponse": { + "base": null, + "refs": { + } + }, + "String": { + "base": null, + "refs": { + "Canary$ArtifactS3Location": "The location in Amazon S3 where Synthetics stores artifacts from the runs of this canary. Artifacts include the log file, screenshots, and HAR files.
", + "Canary$RuntimeVersion": "Specifies the runtime version to use for the canary. Currently, the only valid value is syn-1.0
. For more information about runtime versions, see Canary Runtime Versions.
If your canary script is located in S3, specify the full bucket name here. The bucket must already exist. Specify the full bucket name, including s3://
as the start of the bucket name.
The S3 key of your script. For more information, see Working with Amazon S3 Objects.
", + "CanaryCodeInput$S3Version": "The S3 version ID of your script.
", + "CanaryCodeInput$Handler": "The entry point to use for the source code when running the canary. This value must end with the string .handler
.
The ARN of the Lambda layer where Synthetics stores the canary script code.
", + "CanaryCodeOutput$Handler": "The entry point to use for the source code when running the canary.
", + "CanaryRun$ArtifactS3Location": "The location where the canary stored artifacts from the run. Artifacts include the log file, screenshots, and HAR files.
", + "CanaryRunStatus$StateReason": "If run of the canary failed, this field contains the reason for the error.
", + "CanaryScheduleInput$Expression": "A rate expression that defines how often the canary is to run. The syntax is rate(number unit)
. unit can be minute
, minutes
, or hour
.
For example, rate(1 minute)
runs the canary once a minute, rate(10 minutes)
runs it once every 10 minutes, and rate(1 hour)
runs it once every hour. You can specify a frequency between rate(1 minute)
and rate(1 hour)
.
Specifying rate(0 minute)
or rate(0 hour)
is a special value that causes the canary to run only once when it is started.
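To make the rate() syntax above concrete, this is a minimal sketch of a schedule that runs a canary every 10 minutes indefinitely. It assumes the generated synthetics.CanaryScheduleInput struct mirrors the Expression and DurationInSeconds members of this shape as pointer fields, which is the usual codegen convention but is not confirmed here.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/synthetics"
)

func main() {
	schedule := synthetics.CanaryScheduleInput{
		// rate(10 minutes) runs the canary once every 10 minutes;
		// rate(0 minute) or rate(0 hour) would mean "run only once when started".
		Expression: aws.String("rate(10 minutes)"),
		// 0 keeps the canary running on this schedule until you stop it.
		DurationInSeconds: aws.Int64(0),
	}
	fmt.Printf("%s for %d seconds\n", *schedule.Expression, *schedule.DurationInSeconds)
}
```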
A rate expression that defines how often the canary is to run. The syntax is rate(number unit)
. unit can be minute
, minutes
, or hour
.
For example, rate(1 minute)
runs the canary once a minute, rate(10 minutes)
runs it once every 10 minutes, and rate(1 hour)
runs it once every hour.
Specifying rate(0 minute)
or rate(0 hour)
is a special value that causes the canary to run only once when it is started.
If the canary has insufficient permissions to run, this field provides more details.
", + "CreateCanaryRequest$ArtifactS3Location": "The location in Amazon S3 where Synthetics stores artifacts from the test runs of this canary. Artifacts include the log file, screenshots, and HAR files.
", + "CreateCanaryRequest$RuntimeVersion": "Specifies the runtime version to use for the canary. Currently, the only valid value is syn-1.0
. For more information about runtime versions, see Canary Runtime Versions.
The name of the runtime version. Currently, the only valid value is syn-1.0
.
Specifies the runtime version to use for the canary. Currently, the only valid value is syn-1.0
.
A description of the runtime version, created by Amazon.
", + "UpdateCanaryRequest$RuntimeVersion": "Specifies the runtime version to use for the canary. Currently, the only valid value is syn-1.0
. For more information about runtime versions, see Canary Runtime Versions.
The IDs of the subnets where this canary is to run.
", + "VpcConfigOutput$SubnetIds": "The IDs of the subnets where this canary is to run.
" + } + }, + "TagKey": { + "base": null, + "refs": { + "TagKeyList$member": null, + "TagMap$key": null + } + }, + "TagKeyList": { + "base": null, + "refs": { + "UntagResourceRequest$TagKeys": "The list of tag keys to remove from the resource.
" + } + }, + "TagMap": { + "base": null, + "refs": { + "Canary$Tags": "The list of key-value pairs that are associated with the canary.
", + "CreateCanaryRequest$Tags": "A list of key-value pairs to associate with the canary. You can associate as many as 50 tags with a canary.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only the resources that have certain tag values.
", + "ListTagsForResourceResponse$Tags": "The list of tag keys and values associated with the canary that you specified.
", + "TagResourceRequest$Tags": "The list of key-value pairs to associate with the canary.
" + } + }, + "TagResourceRequest": { + "base": null, + "refs": { + } + }, + "TagResourceResponse": { + "base": null, + "refs": { + } + }, + "TagValue": { + "base": null, + "refs": { + "TagMap$value": null + } + }, + "Timestamp": { + "base": null, + "refs": { + "CanaryRunTimeline$Started": "The start time of the run.
", + "CanaryRunTimeline$Completed": "The end time of the run.
", + "CanaryTimeline$Created": "The date and time the canary was created.
", + "CanaryTimeline$LastModified": "The date and time the canary was most recently modified.
", + "CanaryTimeline$LastStarted": "The date and time that the canary's most recent run started.
", + "CanaryTimeline$LastStopped": "The date and time that the canary's most recent run ended.
", + "RuntimeVersion$ReleaseDate": "The date that the runtime version was released.
", + "RuntimeVersion$DeprecationDate": "If this runtime version is deprecated, this value is the date of deprecation.
" + } + }, + "Token": { + "base": null, + "refs": { + "DescribeCanariesLastRunRequest$NextToken": "A token that indicates that there is more data available. You can use this token in a subsequent DescribeCanaries
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent DescribeCanariesLastRun
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent operation to retrieve the next set of results.
", + "DescribeCanariesResponse$NextToken": "A token that indicates that there is more data available. You can use this token in a subsequent DescribeCanaries
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent DescribeRuntimeVersions
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent DescribeRuntimeVersions
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent GetCanaryRuns
operation to retrieve the next set of results.
A token that indicates that there is more data available. You can use this token in a subsequent GetCanaryRuns
operation to retrieve the next set of results.
The unique ID of this canary.
" + } + }, + "UntagResourceRequest": { + "base": null, + "refs": { + } + }, + "UntagResourceResponse": { + "base": null, + "refs": { + } + }, + "UpdateCanaryRequest": { + "base": null, + "refs": { + } + }, + "UpdateCanaryResponse": { + "base": null, + "refs": { + } + }, + "ValidationException": { + "base": "A parameter could not be validated.
", + "refs": { + } + }, + "VpcConfigInput": { + "base": "If this canary is to test an endpoint in a VPC, this structure contains information about the subnets and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
", + "refs": { + "CreateCanaryRequest$VpcConfig": "If this canary is to test an endpoint in a VPC, this structure contains information about the subnet and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
", + "UpdateCanaryRequest$VpcConfig": "If this canary is to test an endpoint in a VPC, this structure contains information about the subnet and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
" + } + }, + "VpcConfigOutput": { + "base": "If this canary is to test an endpoint in a VPC, this structure contains information about the subnets and security groups of the VPC endpoint. For more information, see Running a Canary in a VPC.
", + "refs": { + "Canary$VpcConfig": null + } + }, + "VpcId": { + "base": null, + "refs": { + "VpcConfigOutput$VpcId": "The IDs of the VPC where this canary is to run.
" + } + } + } +} diff --git a/models/apis/synthetics/2017-10-11/examples-1.json b/models/apis/synthetics/2017-10-11/examples-1.json new file mode 100644 index 00000000000..0ea7e3b0bbe --- /dev/null +++ b/models/apis/synthetics/2017-10-11/examples-1.json @@ -0,0 +1,5 @@ +{ + "version": "1.0", + "examples": { + } +} diff --git a/models/apis/synthetics/2017-10-11/paginators-1.json b/models/apis/synthetics/2017-10-11/paginators-1.json new file mode 100644 index 00000000000..e5412aa47fd --- /dev/null +++ b/models/apis/synthetics/2017-10-11/paginators-1.json @@ -0,0 +1,24 @@ +{ + "pagination": { + "DescribeCanaries": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, + "DescribeCanariesLastRun": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, + "DescribeRuntimeVersions": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, + "GetCanaryRuns": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + } + } +} \ No newline at end of file diff --git a/models/apis/transcribe/2017-10-26/api-2.json b/models/apis/transcribe/2017-10-26/api-2.json index 81668c7ff09..17c9db55859 100644 --- a/models/apis/transcribe/2017-10-26/api-2.json +++ b/models/apis/transcribe/2017-10-26/api-2.json @@ -43,6 +43,19 @@ {"shape":"ConflictException"} ] }, + "DeleteMedicalTranscriptionJob":{ + "name":"DeleteMedicalTranscriptionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteMedicalTranscriptionJobRequest"}, + "errors":[ + {"shape":"LimitExceededException"}, + {"shape":"BadRequestException"}, + {"shape":"InternalFailureException"} + ] + }, "DeleteTranscriptionJob":{ "name":"DeleteTranscriptionJob", "http":{ @@ -84,6 +97,21 @@ {"shape":"InternalFailureException"} ] }, + "GetMedicalTranscriptionJob":{ + "name":"GetMedicalTranscriptionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetMedicalTranscriptionJobRequest"}, + "output":{"shape":"GetMedicalTranscriptionJobResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"NotFoundException"} + ] + }, "GetTranscriptionJob":{ "name":"GetTranscriptionJob", "http":{ @@ -129,6 +157,20 @@ {"shape":"BadRequestException"} ] }, + "ListMedicalTranscriptionJobs":{ + "name":"ListMedicalTranscriptionJobs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListMedicalTranscriptionJobsRequest"}, + "output":{"shape":"ListMedicalTranscriptionJobsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"} + ] + }, "ListTranscriptionJobs":{ "name":"ListTranscriptionJobs", "http":{ @@ -171,6 +213,21 @@ {"shape":"InternalFailureException"} ] }, + "StartMedicalTranscriptionJob":{ + "name":"StartMedicalTranscriptionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartMedicalTranscriptionJobRequest"}, + "output":{"shape":"StartMedicalTranscriptionJobResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"ConflictException"} + ] + }, "StartTranscriptionJob":{ "name":"StartTranscriptionJob", "http":{ @@ -294,6 +351,13 @@ "pattern":"^arn:aws:iam::[0-9]{0,63}:role/[A-Za-z0-9:_/+=,@.-]{0,1023}$" }, "DateTime":{"type":"timestamp"}, + 
"DeleteMedicalTranscriptionJobRequest":{ + "type":"structure", + "required":["MedicalTranscriptionJobName"], + "members":{ + "MedicalTranscriptionJobName":{"shape":"TranscriptionJobName"} + } + }, "DeleteTranscriptionJobRequest":{ "type":"structure", "required":["TranscriptionJobName"], @@ -316,6 +380,19 @@ } }, "FailureReason":{"type":"string"}, + "GetMedicalTranscriptionJobRequest":{ + "type":"structure", + "required":["MedicalTranscriptionJobName"], + "members":{ + "MedicalTranscriptionJobName":{"shape":"TranscriptionJobName"} + } + }, + "GetMedicalTranscriptionJobResponse":{ + "type":"structure", + "members":{ + "MedicalTranscriptionJob":{"shape":"MedicalTranscriptionJob"} + } + }, "GetTranscriptionJobRequest":{ "type":"structure", "required":["TranscriptionJobName"], @@ -427,6 +504,23 @@ }, "exception":true }, + "ListMedicalTranscriptionJobsRequest":{ + "type":"structure", + "members":{ + "Status":{"shape":"TranscriptionJobStatus"}, + "JobNameContains":{"shape":"TranscriptionJobName"}, + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"MaxResults"} + } + }, + "ListMedicalTranscriptionJobsResponse":{ + "type":"structure", + "members":{ + "Status":{"shape":"TranscriptionJobStatus"}, + "NextToken":{"shape":"NextToken"}, + "MedicalTranscriptionJobSummaries":{"shape":"MedicalTranscriptionJobSummaries"} + } + }, "ListTranscriptionJobsRequest":{ "type":"structure", "members":{ @@ -511,6 +605,60 @@ "max":48000, "min":8000 }, + "MedicalTranscript":{ + "type":"structure", + "members":{ + "TranscriptFileUri":{"shape":"Uri"} + } + }, + "MedicalTranscriptionJob":{ + "type":"structure", + "members":{ + "MedicalTranscriptionJobName":{"shape":"TranscriptionJobName"}, + "TranscriptionJobStatus":{"shape":"TranscriptionJobStatus"}, + "LanguageCode":{"shape":"LanguageCode"}, + "MediaSampleRateHertz":{"shape":"MediaSampleRateHertz"}, + "MediaFormat":{"shape":"MediaFormat"}, + "Media":{"shape":"Media"}, + "Transcript":{"shape":"MedicalTranscript"}, + "StartTime":{"shape":"DateTime"}, + "CreationTime":{"shape":"DateTime"}, + "CompletionTime":{"shape":"DateTime"}, + "FailureReason":{"shape":"FailureReason"}, + "Settings":{"shape":"MedicalTranscriptionSetting"}, + "Specialty":{"shape":"Specialty"}, + "Type":{"shape":"Type"} + } + }, + "MedicalTranscriptionJobSummaries":{ + "type":"list", + "member":{"shape":"MedicalTranscriptionJobSummary"} + }, + "MedicalTranscriptionJobSummary":{ + "type":"structure", + "members":{ + "MedicalTranscriptionJobName":{"shape":"TranscriptionJobName"}, + "CreationTime":{"shape":"DateTime"}, + "StartTime":{"shape":"DateTime"}, + "CompletionTime":{"shape":"DateTime"}, + "LanguageCode":{"shape":"LanguageCode"}, + "TranscriptionJobStatus":{"shape":"TranscriptionJobStatus"}, + "FailureReason":{"shape":"FailureReason"}, + "OutputLocationType":{"shape":"OutputLocationType"}, + "Specialty":{"shape":"Specialty"}, + "Type":{"shape":"Type"} + } + }, + "MedicalTranscriptionSetting":{ + "type":"structure", + "members":{ + "ShowSpeakerLabels":{"shape":"Boolean"}, + "MaxSpeakerLabels":{"shape":"MaxSpeakers"}, + "ChannelIdentification":{"shape":"Boolean"}, + "ShowAlternatives":{"shape":"Boolean"}, + "MaxAlternatives":{"shape":"MaxAlternatives"} + } + }, "NextToken":{ "type":"string", "max":8192, @@ -569,6 +717,39 @@ "VocabularyFilterMethod":{"shape":"VocabularyFilterMethod"} } }, + "Specialty":{ + "type":"string", + "enum":["PRIMARYCARE"] + }, + "StartMedicalTranscriptionJobRequest":{ + "type":"structure", + "required":[ + "MedicalTranscriptionJobName", + "LanguageCode", + "Media", + 
"OutputBucketName", + "Specialty", + "Type" + ], + "members":{ + "MedicalTranscriptionJobName":{"shape":"TranscriptionJobName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "MediaSampleRateHertz":{"shape":"MediaSampleRateHertz"}, + "MediaFormat":{"shape":"MediaFormat"}, + "Media":{"shape":"Media"}, + "OutputBucketName":{"shape":"OutputBucketName"}, + "OutputEncryptionKMSKeyId":{"shape":"KMSKeyId"}, + "Settings":{"shape":"MedicalTranscriptionSetting"}, + "Specialty":{"shape":"Specialty"}, + "Type":{"shape":"Type"} + } + }, + "StartMedicalTranscriptionJobResponse":{ + "type":"structure", + "members":{ + "MedicalTranscriptionJob":{"shape":"MedicalTranscriptionJob"} + } + }, "StartTranscriptionJobRequest":{ "type":"structure", "required":[ @@ -655,6 +836,13 @@ "ContentRedaction":{"shape":"ContentRedaction"} } }, + "Type":{ + "type":"string", + "enum":[ + "CONVERSATION", + "DICTATION" + ] + }, "UpdateVocabularyFilterRequest":{ "type":"structure", "required":["VocabularyFilterName"], diff --git a/models/apis/transcribe/2017-10-26/docs-2.json b/models/apis/transcribe/2017-10-26/docs-2.json index 4c7829396f7..92ab689d103 100644 --- a/models/apis/transcribe/2017-10-26/docs-2.json +++ b/models/apis/transcribe/2017-10-26/docs-2.json @@ -4,15 +4,19 @@ "operations": { "CreateVocabulary": "Creates a new custom vocabulary that you can use to change the way Amazon Transcribe handles transcription of an audio file.
", "CreateVocabularyFilter": "Creates a new vocabulary filter that you can use to filter words, such as profane words, from the output of a transcription job.
", + "DeleteMedicalTranscriptionJob": "Deletes a transcription job generated by Amazon Transcribe Medical and any related information.
", "DeleteTranscriptionJob": "Deletes a previously submitted transcription job along with any other generated results such as the transcription, models, and so on.
", "DeleteVocabulary": "Deletes a vocabulary from Amazon Transcribe.
", "DeleteVocabularyFilter": "Removes a vocabulary filter.
", + "GetMedicalTranscriptionJob": "Returns information about a transcription job from Amazon Transcribe Medical. To see the status of the job, check the TranscriptionJobStatus
field. If the status is COMPLETED
, the job is finished. You find the results of the completed job in the TranscriptFileUri
field.
Returns information about a transcription job. To see the status of the job, check the TranscriptionJobStatus
field. If the status is COMPLETED
, the job is finished and you can find the results at the location specified in the TranscriptFileUri
field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri
.
Gets information about a vocabulary.
", "GetVocabularyFilter": "Returns information about a vocabulary filter.
", + "ListMedicalTranscriptionJobs": "Lists medical transcription jobs with a specified status or substring that matches their names.
", "ListTranscriptionJobs": "Lists transcription jobs with the specified status.
", "ListVocabularies": "Returns a list of vocabularies that match the specified criteria. If no criteria are specified, returns the entire list of vocabularies.
", "ListVocabularyFilters": "Gets information about vocabulary filters.
", + "StartMedicalTranscriptionJob": "Start a batch job to transcribe medical speech to text.
", "StartTranscriptionJob": "Starts an asynchronous job to transcribe speech to text.
", "UpdateVocabulary": "Updates an existing vocabulary with new values. The UpdateVocabulary
operation overwrites all of the existing information with the values that you provide in the request.
Updates a vocabulary filter with a new list of filtered words.
" @@ -26,7 +30,10 @@ "Boolean": { "base": null, "refs": { - "JobExecutionSettings$AllowDeferredExecution": "Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution
field is true, jobs are queued and will be executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException
exception.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution
field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException
exception.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels
field.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
.
Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives
field.
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels MaxSpeakerLabels
field.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
.
Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
.
Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives
field.
Settings for content redaction within a transcription job.
You can redact transcripts in US English (en-us). For more information see: Automatic Content Redaction
", + "base": "Settings for content redaction within a transcription job.
", "refs": { "StartTranscriptionJobRequest$ContentRedaction": "An object that contains the request parameters for content redaction.
", "TranscriptionJob$ContentRedaction": "An object that describes content redaction settings for the transcription job.
", @@ -68,7 +75,7 @@ "DataAccessRoleArn": { "base": null, "refs": { - "JobExecutionSettings$DataAccessRoleArn": "The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe will assume this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
The date and time that the vocabulary was created.
", "GetVocabularyFilterResponse$LastModifiedTime": "The date and time that the contents of the vocabulary filter were updated.
", "GetVocabularyResponse$LastModifiedTime": "The date and time that the vocabulary was last modified.
", + "MedicalTranscriptionJob$StartTime": "A timestamp that shows when the job started processing.
", + "MedicalTranscriptionJob$CreationTime": "A timestamp that shows when the job was created.
", + "MedicalTranscriptionJob$CompletionTime": "A timestamp that shows when the job was completed.
", + "MedicalTranscriptionJobSummary$CreationTime": "A timestamp that shows when the medical transcription job was created.
", + "MedicalTranscriptionJobSummary$StartTime": "A timestamp that shows when the job began processing.
", + "MedicalTranscriptionJobSummary$CompletionTime": "A timestamp that shows when the job was completed.
", "TranscriptionJob$StartTime": "A timestamp that shows with the job was started processing.
", "TranscriptionJob$CreationTime": "A timestamp that shows when the job was created.
", "TranscriptionJob$CompletionTime": "A timestamp that shows when the job was completed.
", @@ -90,6 +103,11 @@ "VocabularyInfo$LastModifiedTime": "The date and time that the vocabulary was last modified.
" } }, + "DeleteMedicalTranscriptionJobRequest": { + "base": null, + "refs": { + } + }, "DeleteTranscriptionJobRequest": { "base": null, "refs": { @@ -111,10 +129,22 @@ "BadRequestException$Message": null, "CreateVocabularyResponse$FailureReason": "If the VocabularyState
field is FAILED
, this field contains information about why the job failed.
If the VocabularyState
field is FAILED
, this field contains information about why the job failed.
If the TranscriptionJobStatus
field is FAILED
, this field contains information about why the job failed.
The FailureReason
field contains one of the following values:
Unsupported media format
- The media format specified in the MediaFormat
field of the request isn't valid. See the description of the MediaFormat
field for a list of valid values.
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in the MediaFormat
field in the request. Check the media format of your media file and make sure the two values match.
Invalid sample rate for audio file
- The sample rate specified in the MediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large
- The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidlines and Quotas in the Amazon Transcribe Medical Guide
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference
If the TranscriptionJobStatus
field is FAILED
, a description of the error.
If the TranscriptionJobStatus
field is FAILED
, this field contains information about why the job failed.
The FailureReason
field can contain one of the following values:
Unsupported media format
- The media format specified in the MediaFormat
field of the request isn't valid. See the description of the MediaFormat
field for a list of valid values.
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in the MediaFormat
field in the request. Check the media format of your media file and make sure that the two values match.
Invalid sample rate for audio file
- The sample rate specified in the MediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large
- The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide.
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.
If the TranscriptionJobStatus
field is FAILED
, a description of the error.
The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartMedicalTranscriptionJob operation must have permission to use the specified KMS key.
You use either of the following to identify a KMS key in the current account:
KMS Key ID: \"1234abcd-12ab-34cd-56ef-1234567890ab\"
KMS Key Alias: \"alias/ExampleAlias\"
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS key in the current account or another account: \"arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
ARN of a KMS Key Alias: \"arn:aws:kms:region:account ID:alias/ExampleAlias\"
If you don't specify an encryption key, the output of the medical transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName
parameter.
The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartTranscriptionJob
operation must have permission to use the specified KMS key.
You can use either of the following to identify a KMS key in the current account:
KMS Key ID: \"1234abcd-12ab-34cd-56ef-1234567890ab\"
KMS Key Alias: \"alias/ExampleAlias\"
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS Key: \"arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
ARN of a KMS Key Alias: \"arn:aws:kms:region:account ID:alias/ExampleAlias\"
If you don't specify an encryption key, the output of the transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName
parameter.
The language code of the vocabulary entries.
", "GetVocabularyFilterResponse$LanguageCode": "The language code of the words in the vocabulary filter.
", "GetVocabularyResponse$LanguageCode": "The language code of the vocabulary entries.
", + "MedicalTranscriptionJob$LanguageCode": "The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException
error.
The language of the transcript in the source audio file.
", + "StartMedicalTranscriptionJobRequest$LanguageCode": "The language code for the language spoken in the input media file. US English (en-US) is the valid value for medical transcription jobs. Any other value you enter for language code results in a BadRequestException
error.
The language code for the language used in the input media file.
", "TranscriptionJob$LanguageCode": "The language code for the input speech.
", "TranscriptionJobSummary$LanguageCode": "The language code for the input speech.
", @@ -187,6 +221,16 @@ "refs": { } }, + "ListMedicalTranscriptionJobsRequest": { + "base": null, + "refs": { + } + }, + "ListMedicalTranscriptionJobsResponse": { + "base": null, + "refs": { + } + }, "ListTranscriptionJobsRequest": { "base": null, "refs": { @@ -220,12 +264,14 @@ "MaxAlternatives": { "base": null, "refs": { + "MedicalTranscriptionSetting$MaxAlternatives": "The maximum number of alternatives that you tell the service to return. If you specify the MaxAlternatives
field, you must set the ShowAlternatives
field to true.
The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives
field, you must set the ShowAlternatives
field to true.
The maximum number of medical transcription jobs to return in the response. IF there are fewer results in the list, this response contains only the actual results.
", "ListTranscriptionJobsRequest$MaxResults": "The maximum number of jobs to return in the response. If there are fewer results in the list, this response contains only the actual results.
", "ListVocabulariesRequest$MaxResults": "The maximum number of vocabularies to return in the response. If there are fewer results in the list, this response contains only the actual results.
", "ListVocabularyFiltersRequest$MaxResults": "The maximum number of filters to return in the response. If there are fewer results in the list, this response contains only the actual results.
" @@ -234,12 +280,15 @@ "MaxSpeakers": { "base": null, "refs": { - "Settings$MaxSpeakerLabels": "The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers will be identified as a single speaker. If you specify the MaxSpeakerLabels
field, you must set the ShowSpeakerLabels
field to true.
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels
field, you must set the ShowSpeakerLabels
field to true.
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels
field, you must set the ShowSpeakerLabels
field to true.
Describes the input media file in a transcription request.
", "refs": { + "MedicalTranscriptionJob$Media": null, + "StartMedicalTranscriptionJobRequest$Media": null, "StartTranscriptionJobRequest$Media": "An object that describes the input media for a transcription job.
", "TranscriptionJob$Media": "An object that describes the input media for the transcription job.
" } @@ -247,6 +296,8 @@ "MediaFormat": { "base": null, "refs": { + "MedicalTranscriptionJob$MediaFormat": "The format of the input media file.
", + "StartMedicalTranscriptionJobRequest$MediaFormat": "The audio format of the input media file.
", "StartTranscriptionJobRequest$MediaFormat": "The format of the input media file.
", "TranscriptionJob$MediaFormat": "The format of the input media file.
" } @@ -254,13 +305,49 @@ "MediaSampleRateHertz": { "base": null, "refs": { + "MedicalTranscriptionJob$MediaSampleRateHertz": "The sample rate, in Hertz, of the source audio containing medical information.
If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz
blank and let Amazon Transcribe Medical determine the sample rate.
The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe Medical determines the sample rate. If you specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz
field blank and let Amazon Transcribe Medical determine the sample rate.
The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe determines the sample rate. If you specify the sample rate, it must match the sample rate detected by Amazon Transcribe. In most cases, you should leave the MediaSampleRateHertz
field blank and let Amazon Transcribe determine the sample rate.
The sample rate, in Hertz, of the audio track in the input media file.
" } }, + "MedicalTranscript": { + "base": "Identifies the location of a medical transcript.
", + "refs": { + "MedicalTranscriptionJob$Transcript": "An object that contains the MedicalTranscript
. The MedicalTranscript
contains the TranscriptFileUri
.
The data structure that containts the information for a medical transcription job.
", + "refs": { + "GetMedicalTranscriptionJobResponse$MedicalTranscriptionJob": "An object that contains the results of the medical transcription job.
", + "StartMedicalTranscriptionJobResponse$MedicalTranscriptionJob": "A batch job submitted to transcribe medical speech to text.
" + } + }, + "MedicalTranscriptionJobSummaries": { + "base": null, + "refs": { + "ListMedicalTranscriptionJobsResponse$MedicalTranscriptionJobSummaries": "A list of objects containing summary information for a transcription job.
" + } + }, + "MedicalTranscriptionJobSummary": { + "base": "Provides summary information about a transcription job.
", + "refs": { + "MedicalTranscriptionJobSummaries$member": null + } + }, + "MedicalTranscriptionSetting": { + "base": "Optional settings for the StartMedicalTranscriptionJob operation.
", + "refs": { + "MedicalTranscriptionJob$Settings": "Object that contains object.
", + "StartMedicalTranscriptionJobRequest$Settings": "Optional settings for the medical transcription job.
" + } + }, "NextToken": { "base": null, "refs": { + "ListMedicalTranscriptionJobsRequest$NextToken": "If you a receive a truncated result in the previous request of ListMedicalTranscriptionJobs
, include NextToken
to fetch the next set of jobs.
The ListMedicalTranscriptionJobs
operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults
parameter. If the number of jobs exceeds what can fit on a page, Amazon Transcribe Medical returns the NextPage
token. Include the token in the next request to the ListMedicalTranscriptionJobs
operation to return in the next page of jobs.
If the result of the previous request to ListTranscriptionJobs
was truncated, include the NextToken
to fetch the next set of jobs.
The ListTranscriptionJobs
operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults
parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage
token. Include the token in the next request to the ListTranscriptionJobs
operation to return in the next page of jobs.
If the result of the previous request to ListVocabularies
was truncated, include the NextToken
to fetch the next set of jobs.
The Amazon S3 location where the transcription is stored.
You must set OutputBucketName
for Amazon Transcribe Medical to store the transcription results. Your transcript appears in the S3 location you specify. When you call the GetMedicalTranscriptionJob operation, it returns this location in the TranscriptFileUri
field. The S3 bucket must have permissions that allow Amazon Transcribe Medical to put files in the bucket. For more information, see Permissions Required for IAM User Roles.
You can specify an AWS Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId
parameter. If you don't specify a KMS key, Amazon Transcribe Medical uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.
The location where the transcription is stored.
If you set the OutputBucketName
, Amazon Transcribe puts the transcript in the specified S3 bucket. When you call the GetTranscriptionJob operation, the operation returns this location in the TranscriptFileUri
field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri
. If you enable content redaction and choose to output an unredacted transcript, that transcript's location still appears in the TranscriptFileUri
. The S3 bucket must have permissions that allow Amazon Transcribe to put files in the bucket. For more information, see Permissions Required for IAM User Roles.
You can specify an AWS Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId
parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.
If you don't set the OutputBucketName
, Amazon Transcribe generates a pre-signed URL, a shareable URL that provides secure access to your transcription, and returns it in the TranscriptFileUri
field. Use this URL to download the transcription.
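To tie the two output modes above together, here is a hedged Go sketch that polls GetTranscriptionJob and reads TranscriptFileUri once the job finishes; with OutputBucketName set the URI points into your bucket, otherwise it is the time-limited pre-signed URL described above. The job name, polling interval, and generated status constant names are assumptions.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := transcribe.New(cfg)

	for {
		out, err := svc.GetTranscriptionJobRequest(&transcribe.GetTranscriptionJobInput{
			TranscriptionJobName: aws.String("example-job"), // hypothetical job name
		}).Send(context.TODO())
		if err != nil {
			log.Fatalf("get transcription job: %v", err)
		}
		switch out.TranscriptionJob.TranscriptionJobStatus {
		case transcribe.TranscriptionJobStatusCompleted: // assumed enum constant name
			// Either an S3 URI in the bucket named in OutputBucketName,
			// or a pre-signed URL generated by Amazon Transcribe.
			log.Printf("transcript: %s", *out.TranscriptionJob.Transcript.TranscriptFileUri)
			return
		case transcribe.TranscriptionJobStatusFailed: // assumed enum constant name
			log.Fatal("transcription job failed")
		}
		time.Sleep(30 * time.Second) // simple polling; a real caller might back off
	}
}
```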
Indicates the location of the transcription job's output.
The CUSTOMER_BUCKET
is the S3 location provided in the OutputBucketName
field when the medical transcription job was started.
Indicates the location of the output of the transcription job.
If the value is CUSTOMER_BUCKET
then the location is the S3 bucket specified in the outputBucketName
field when the transcription job was started with the StartTranscriptionJob
operation.
If the value is SERVICE_BUCKET
then the output is stored by Amazon Transcribe and can be retrieved using the URI in the GetTranscriptionJob
response's TranscriptFileUri
field.
Request parameter where you choose whether to output only the redacted transcript or generate an additional unredacted transcript.
When you choose redacted
Amazon Transcribe outputs a JSON file with only the redacted transcript and related information.
When you choose redacted_and_unredacted
Amazon Transcribe outputs a JSON file with the unredacted transcript and related information in addition to the JSON file with the redacted transcript.
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted
Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted
Amazon Transcribe outputs both the redacted and unredacted transcripts.
Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.
" } }, + "Specialty": { + "base": null, + "refs": { + "MedicalTranscriptionJob$Specialty": "The medical specialty of any clinicians providing a dictation or having a conversation. PRIMARYCARE
is the only available setting for this object. This specialty enables you to generate transcriptions for the following medical fields:
Family Medicine
The medical specialty of the transcription job. Primary care
is the only valid value.
The medical specialty of any clinician speaking in the input media.
" + } + }, + "StartMedicalTranscriptionJobRequest": { + "base": null, + "refs": { + } + }, + "StartMedicalTranscriptionJobResponse": { + "base": null, + "refs": { + } + }, "StartTranscriptionJobRequest": { "base": null, "refs": { @@ -353,9 +460,15 @@ "TranscriptionJobName": { "base": null, "refs": { + "DeleteMedicalTranscriptionJobRequest$MedicalTranscriptionJobName": "The name you provide to the DeleteMedicalTranscriptionJob
object to delete a transcription job.
The name of the transcription job to be deleted.
", + "GetMedicalTranscriptionJobRequest$MedicalTranscriptionJobName": "The name of the medical transcription job.
", "GetTranscriptionJobRequest$TranscriptionJobName": "The name of the job.
", + "ListMedicalTranscriptionJobsRequest$JobNameContains": "When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
", "ListTranscriptionJobsRequest$JobNameContains": "When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
", + "MedicalTranscriptionJob$MedicalTranscriptionJobName": "The name for a given medical transcription job.
", + "MedicalTranscriptionJobSummary$MedicalTranscriptionJobName": "The name of a medical transcription job.
", + "StartMedicalTranscriptionJobRequest$MedicalTranscriptionJobName": "The name of the medical transcription job. You can't use the strings \".\" or \"..\" by themselves as the job name. The name must also be unique within an AWS account.
", "StartTranscriptionJobRequest$TranscriptionJobName": "The name of the job. Note that you can't use the strings \".\" or \"..\" by themselves as the job name. The name must also be unique within an AWS account.
", "TranscriptionJob$TranscriptionJobName": "The name of the transcription job.
", "TranscriptionJobSummary$TranscriptionJobName": "The name of the transcription job.
" @@ -364,9 +477,13 @@ "TranscriptionJobStatus": { "base": null, "refs": { + "ListMedicalTranscriptionJobsRequest$Status": "When specified, returns only medical transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don't specify a status, Amazon Transcribe Medical returns all transcription jobs ordered by creation date.
", + "ListMedicalTranscriptionJobsResponse$Status": "The requested status of the medical transcription jobs returned.
", "ListTranscriptionJobsRequest$Status": "When specified, returns only transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don’t specify a status, Amazon Transcribe returns all transcription jobs ordered by creation date.
", "ListTranscriptionJobsResponse$Status": "The requested status of the jobs returned.
", "ListVocabulariesResponse$Status": "The requested vocabulary state.
", + "MedicalTranscriptionJob$TranscriptionJobStatus": "The completion status of a medical transcription job.
", + "MedicalTranscriptionJobSummary$TranscriptionJobStatus": "The status of the medical transcription job.
", "TranscriptionJob$TranscriptionJobStatus": "The status of the transcription job.
", "TranscriptionJobSummary$TranscriptionJobStatus": "The status of the transcription job. When the status is COMPLETED
, use the GetTranscriptionJob
operation to get the results of the transcription.
The type of speech in the transcription job. CONVERSATION
is generally used for patient-physician dialogues. DICTATION
is the setting for physicians speaking their notes after seeing a patient. For more information, see how-it-works-med
The speech of the clinician in the input audio.
", + "StartMedicalTranscriptionJobRequest$Type": "The speech of clinician in the input audio. CONVERSATION
refers to conversations clinicians have with patients. DICTATION
refers to medical professionals dictating their notes about a patient encounter.
The URI of the list of words in the vocabulary filter. You can use this URI to get the list of words.
", "GetVocabularyResponse$DownloadUri": "The S3 location where the vocabulary is stored. Use this URI to get the contents of the vocabulary. The URI is available for a limited time.
", "Media$MediaFileUri": "The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
s3://<bucket-name>/<keyprefix>/<objectkey>
For example:
s3://examplebucket/example.mp4
s3://examplebucket/mediadocs/example.mp4
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
", + "MedicalTranscript$TranscriptFileUri": "The S3 object location of the medical transcript.
Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.
", "Transcript$TranscriptFileUri": "The S3 object location of the the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
The S3 object location of the redacted transcript.
Use this URI to access the redacated transcript. If you specified an S3 bucket in the OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.
The specified file must be less than 50 KB of UTF-8 characters.
If you provide the location of a list of words in the VocabularyFilterFileUri
parameter, you can't use the Words
parameter.
The list of vocabulary filters. It will contain at most MaxResults
number of filters. If there are more filters, call the ListVocabularyFilters
operation again with the NextToken
parameter in the request set to the value of the NextToken
field in the response.
The list of vocabulary filters. It contains at most MaxResults
number of filters. If there are more filters, call the ListVocabularyFilters
operation again with the NextToken
parameter in the request set to the value of the NextToken
field in the response.
The name of the vocabulary to delete.
", "GetVocabularyRequest$VocabularyName": "The name of the vocabulary to return information about. The name is case-sensitive.
", "GetVocabularyResponse$VocabularyName": "The name of the vocabulary to return.
", - "ListVocabulariesRequest$NameContains": "When specified, the vocabularies returned in the list are limited to vocabularies whose name contains the specified string. The search is case-insensitive, ListVocabularies
will return both \"vocabularyname\" and \"VocabularyName\" in the response list.
When specified, the vocabularies returned in the list are limited to vocabularies whose name contains the specified string. The search is case-insensitive, ListVocabularies
returns both \"vocabularyname\" and \"VocabularyName\" in the response list.
The name of a vocabulary to use when processing the transcription job.
", "UpdateVocabularyRequest$VocabularyName": "The name of the vocabulary to update. The name is case-sensitive.
", "UpdateVocabularyResponse$VocabularyName": "The name of the vocabulary that was updated.
", diff --git a/models/apis/transcribe/2017-10-26/paginators-1.json b/models/apis/transcribe/2017-10-26/paginators-1.json index aded8e376ef..ec10a28afab 100644 --- a/models/apis/transcribe/2017-10-26/paginators-1.json +++ b/models/apis/transcribe/2017-10-26/paginators-1.json @@ -1,5 +1,10 @@ { "pagination": { + "ListMedicalTranscriptionJobs": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListTranscriptionJobs": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/wafv2/2019-07-29/api-2.json b/models/apis/wafv2/2019-07-29/api-2.json index 562a8234560..c921cf5eeb6 100755 --- a/models/apis/wafv2/2019-07-29/api-2.json +++ b/models/apis/wafv2/2019-07-29/api-2.json @@ -25,7 +25,8 @@ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, {"shape":"WAFNonexistentItemException"}, - {"shape":"WAFUnavailableEntityException"} + {"shape":"WAFUnavailableEntityException"}, + {"shape":"WAFInvalidOperationException"} ] }, "CheckCapacity":{ @@ -61,7 +62,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFLimitsExceededException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "CreateRegexPatternSet":{ @@ -79,7 +81,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFLimitsExceededException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "CreateRuleGroup":{ @@ -99,7 +102,9 @@ {"shape":"WAFUnavailableEntityException"}, {"shape":"WAFTagOperationException"}, {"shape":"WAFTagOperationInternalErrorException"}, - {"shape":"WAFSubscriptionNotFoundException"} + {"shape":"WAFSubscriptionNotFoundException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "CreateWebACL":{ @@ -121,7 +126,24 @@ {"shape":"WAFNonexistentItemException"}, {"shape":"WAFTagOperationException"}, {"shape":"WAFTagOperationInternalErrorException"}, - {"shape":"WAFSubscriptionNotFoundException"} + {"shape":"WAFSubscriptionNotFoundException"}, + {"shape":"WAFInvalidOperationException"} + ] + }, + "DeleteFirewallManagerRuleGroups":{ + "name":"DeleteFirewallManagerRuleGroups", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteFirewallManagerRuleGroupsRequest"}, + "output":{"shape":"DeleteFirewallManagerRuleGroupsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFOptimisticLockException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DeleteIPSet":{ @@ -139,7 +161,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFAssociatedItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DeleteLoggingConfiguration":{ @@ -153,7 +176,23 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFNonexistentItemException"}, - {"shape":"WAFOptimisticLockException"} + {"shape":"WAFOptimisticLockException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} + ] + }, + "DeletePermissionPolicy":{ + "name":"DeletePermissionPolicy", + "http":{ + "method":"POST", + 
"requestUri":"/" + }, + "input":{"shape":"DeletePermissionPolicyRequest"}, + "output":{"shape":"DeletePermissionPolicyResponse"}, + "errors":[ + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidParameterException"} ] }, "DeleteRegexPatternSet":{ @@ -171,7 +210,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFAssociatedItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DeleteRuleGroup":{ @@ -189,7 +229,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFAssociatedItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DeleteWebACL":{ @@ -207,7 +248,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFAssociatedItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DescribeManagedRuleGroup":{ @@ -222,7 +264,8 @@ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, {"shape":"WAFInvalidResourceException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "DisassociateWebACL":{ @@ -236,7 +279,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "GetIPSet":{ @@ -250,7 +294,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "GetLoggingConfiguration":{ @@ -263,7 +308,23 @@ "output":{"shape":"GetLoggingConfigurationResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} + ] + }, + "GetPermissionPolicy":{ + "name":"GetPermissionPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetPermissionPolicyRequest"}, + "output":{"shape":"GetPermissionPolicyResponse"}, + "errors":[ + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidParameterException"} ] }, "GetRateBasedStatementManagedKeys":{ @@ -277,7 +338,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "GetRegexPatternSet":{ @@ -291,7 +353,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "GetRuleGroup":{ @@ -305,7 +368,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, 
"GetSampledRequests":{ @@ -333,7 +397,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFNonexistentItemException"} + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInvalidOperationException"} ] }, "GetWebACLForResource":{ @@ -348,7 +413,8 @@ {"shape":"WAFInternalErrorException"}, {"shape":"WAFNonexistentItemException"}, {"shape":"WAFInvalidParameterException"}, - {"shape":"WAFUnavailableEntityException"} + {"shape":"WAFUnavailableEntityException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListAvailableManagedRuleGroups":{ @@ -361,7 +427,8 @@ "output":{"shape":"ListAvailableManagedRuleGroupsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListIPSets":{ @@ -374,7 +441,8 @@ "output":{"shape":"ListIPSetsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListLoggingConfigurations":{ @@ -387,7 +455,8 @@ "output":{"shape":"ListLoggingConfigurationsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListRegexPatternSets":{ @@ -400,7 +469,8 @@ "output":{"shape":"ListRegexPatternSetsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListResourcesForWebACL":{ @@ -414,7 +484,8 @@ "errors":[ {"shape":"WAFInternalErrorException"}, {"shape":"WAFNonexistentItemException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListRuleGroups":{ @@ -427,7 +498,8 @@ "output":{"shape":"ListRuleGroupsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListTagsForResource":{ @@ -443,7 +515,8 @@ {"shape":"WAFInvalidParameterException"}, {"shape":"WAFNonexistentItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "ListWebACLs":{ @@ -456,7 +529,8 @@ "output":{"shape":"ListWebACLsResponse"}, "errors":[ {"shape":"WAFInternalErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} ] }, "PutLoggingConfiguration":{ @@ -472,7 +546,23 @@ {"shape":"WAFNonexistentItemException"}, {"shape":"WAFOptimisticLockException"}, {"shape":"WAFServiceLinkedRoleErrorException"}, - {"shape":"WAFInvalidParameterException"} + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidOperationException"} + ] + }, + "PutPermissionPolicy":{ + "name":"PutPermissionPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutPermissionPolicyRequest"}, + "output":{"shape":"PutPermissionPolicyResponse"}, + "errors":[ + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFInvalidPermissionPolicyException"} ] 
}, "TagResource":{ @@ -489,7 +579,8 @@ {"shape":"WAFLimitsExceededException"}, {"shape":"WAFNonexistentItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "UntagResource":{ @@ -505,7 +596,8 @@ {"shape":"WAFInvalidParameterException"}, {"shape":"WAFNonexistentItemException"}, {"shape":"WAFTagOperationException"}, - {"shape":"WAFTagOperationInternalErrorException"} + {"shape":"WAFTagOperationInternalErrorException"}, + {"shape":"WAFInvalidOperationException"} ] }, "UpdateIPSet":{ @@ -522,7 +614,8 @@ {"shape":"WAFNonexistentItemException"}, {"shape":"WAFDuplicateItemException"}, {"shape":"WAFOptimisticLockException"}, - {"shape":"WAFLimitsExceededException"} + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFInvalidOperationException"} ] }, "UpdateRegexPatternSet":{ @@ -539,7 +632,8 @@ {"shape":"WAFNonexistentItemException"}, {"shape":"WAFDuplicateItemException"}, {"shape":"WAFOptimisticLockException"}, - {"shape":"WAFLimitsExceededException"} + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFInvalidOperationException"} ] }, "UpdateRuleGroup":{ @@ -558,7 +652,8 @@ {"shape":"WAFOptimisticLockException"}, {"shape":"WAFLimitsExceededException"}, {"shape":"WAFUnavailableEntityException"}, - {"shape":"WAFSubscriptionNotFoundException"} + {"shape":"WAFSubscriptionNotFoundException"}, + {"shape":"WAFInvalidOperationException"} ] }, "UpdateWebACL":{ @@ -578,7 +673,8 @@ {"shape":"WAFLimitsExceededException"}, {"shape":"WAFInvalidResourceException"}, {"shape":"WAFUnavailableEntityException"}, - {"shape":"WAFSubscriptionNotFoundException"} + {"shape":"WAFSubscriptionNotFoundException"}, + {"shape":"WAFInvalidOperationException"} ] } }, @@ -1043,6 +1139,23 @@ "Allow":{"shape":"AllowAction"} } }, + "DeleteFirewallManagerRuleGroupsRequest":{ + "type":"structure", + "required":[ + "WebACLArn", + "WebACLLockToken" + ], + "members":{ + "WebACLArn":{"shape":"ResourceArn"}, + "WebACLLockToken":{"shape":"LockToken"} + } + }, + "DeleteFirewallManagerRuleGroupsResponse":{ + "type":"structure", + "members":{ + "NextWebACLLockToken":{"shape":"LockToken"} + } + }, "DeleteIPSetRequest":{ "type":"structure", "required":[ @@ -1075,6 +1188,18 @@ "members":{ } }, + "DeletePermissionPolicyRequest":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{"shape":"ResourceArn"} + } + }, + "DeletePermissionPolicyResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteRegexPatternSetRequest":{ "type":"structure", "required":[ @@ -1216,6 +1341,34 @@ "min":1, "pattern":".*\\S.*" }, + "FirewallManagerRuleGroup":{ + "type":"structure", + "required":[ + "Name", + "Priority", + "FirewallManagerStatement", + "OverrideAction", + "VisibilityConfig" + ], + "members":{ + "Name":{"shape":"EntityName"}, + "Priority":{"shape":"RulePriority"}, + "FirewallManagerStatement":{"shape":"FirewallManagerStatement"}, + "OverrideAction":{"shape":"OverrideAction"}, + "VisibilityConfig":{"shape":"VisibilityConfig"} + } + }, + "FirewallManagerRuleGroups":{ + "type":"list", + "member":{"shape":"FirewallManagerRuleGroup"} + }, + "FirewallManagerStatement":{ + "type":"structure", + "members":{ + "ManagedRuleGroupStatement":{"shape":"ManagedRuleGroupStatement"}, + "RuleGroupReferenceStatement":{"shape":"RuleGroupReferenceStatement"} + } + }, "GeoMatchStatement":{ "type":"structure", "members":{ @@ -1255,6 +1408,19 @@ 
"LoggingConfiguration":{"shape":"LoggingConfiguration"} } }, + "GetPermissionPolicyRequest":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{"shape":"ResourceArn"} + } + }, + "GetPermissionPolicyResponse":{ + "type":"structure", + "members":{ + "Policy":{"shape":"PolicyString"} + } + }, "GetRateBasedStatementManagedKeysRequest":{ "type":"structure", "required":[ @@ -1727,13 +1893,18 @@ "RESOURCE_TYPE", "TAGS", "TAG_KEYS", - "METRIC_NAME" + "METRIC_NAME", + "FIREWALL_MANAGER_STATEMENT" ] }, "ParameterExceptionParameter":{ "type":"string", "min":1 }, + "PolicyString":{ + "type":"string", + "min":1 + }, "PopulationSize":{"type":"long"}, "PositionalConstraint":{ "type":"string", @@ -1758,6 +1929,22 @@ "LoggingConfiguration":{"shape":"LoggingConfiguration"} } }, + "PutPermissionPolicyRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "Policy" + ], + "members":{ + "ResourceArn":{"shape":"ResourceArn"}, + "Policy":{"shape":"PolicyString"} + } + }, + "PutPermissionPolicyResponse":{ + "type":"structure", + "members":{ + } + }, "QueryString":{ "type":"structure", "members":{ @@ -1847,8 +2034,7 @@ }, "RegularExpressionList":{ "type":"list", - "member":{"shape":"Regex"}, - "min":1 + "member":{"shape":"Regex"} }, "ResourceArn":{ "type":"string", @@ -2311,6 +2497,13 @@ "exception":true, "fault":true }, + "WAFInvalidOperationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "exception":true + }, "WAFInvalidParameterException":{ "type":"structure", "members":{ @@ -2321,6 +2514,13 @@ }, "exception":true }, + "WAFInvalidPermissionPolicyException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "exception":true + }, "WAFInvalidResourceException":{ "type":"structure", "members":{ @@ -2402,7 +2602,10 @@ "Description":{"shape":"EntityDescription"}, "Rules":{"shape":"Rules"}, "VisibilityConfig":{"shape":"VisibilityConfig"}, - "Capacity":{"shape":"ConsumedCapacity"} + "Capacity":{"shape":"ConsumedCapacity"}, + "PreProcessFirewallManagerRuleGroups":{"shape":"FirewallManagerRuleGroups"}, + "PostProcessFirewallManagerRuleGroups":{"shape":"FirewallManagerRuleGroups"}, + "ManagedByFirewallManager":{"shape":"Boolean"} } }, "WebACLSummaries":{ diff --git a/models/apis/wafv2/2019-07-29/docs-2.json b/models/apis/wafv2/2019-07-29/docs-2.json index ed2e39134a2..87d7c1aa604 100755 --- a/models/apis/wafv2/2019-07-29/docs-2.json +++ b/models/apis/wafv2/2019-07-29/docs-2.json @@ -1,22 +1,25 @@ { "version": "2.0", - "service": "This is the latest version of the AWS WAF API, released in November, 2019. The names of the entities that you use to access this API, like endpoints and namespaces, all have the versioning information added, like \"V2\" or \"v2\", to distinguish from the prior version. We recommend migrating your resources to this version, because it has a number of significant improvements.
If you used AWS WAF prior to this release, you can't use this AWS WAFV2 API to access any AWS WAF resources that you created before. You can access your old rules, web ACLs, and other AWS WAF resources only through the AWS WAF Classic APIs. The AWS WAF Classic APIs have retained the prior names, endpoints, and namespaces.
For information, including how to migrate your AWS WAF resources to this version, see the AWS WAF Developer Guide.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront, an Amazon API Gateway API, or an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You also can configure CloudFront to return a custom error page when a request is blocked.
This API guide is for developers who need detailed information about AWS WAF API actions, data types, and errors. For detailed information about AWS WAF features and an overview of how to use AWS WAF, see the AWS WAF Developer Guide.
You can make API calls using the endpoints listed in AWS Service Endpoints for AWS WAF.
For regional applications, you can use any of the endpoints in the list. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront applications, you must use the API endpoint listed for US East (N. Virginia): us-east-1.
Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see AWS SDKs.
We currently provide two versions of the AWS WAF API: this API and the prior versions, the classic AWS WAF APIs. This new API provides the same functionality as the older versions, with the following major improvements:
You use one API for both global and regional applications. Where you need to distinguish the scope, you specify a Scope
parameter and set it to CLOUDFRONT
or REGIONAL
.
You can define a Web ACL or rule group with a single API call, and update it with a single call. You define all rule specifications in JSON format, and pass them to your rule group or Web ACL API calls.
The limits AWS WAF places on the use of rules more closely reflects the cost of running each type of rule. Rule groups include capacity settings, so you know the maximum cost of a rule group when you use it.
This is the latest version of the AWS WAF API, released in November, 2019. The names of the entities that you use to access this API, like endpoints and namespaces, all have the versioning information added, like \"V2\" or \"v2\", to distinguish from the prior version. We recommend migrating your resources to this version, because it has a number of significant improvements.
If you used AWS WAF prior to this release, you can't use this AWS WAFV2 API to access any AWS WAF resources that you created before. You can access your old rules, web ACLs, and other AWS WAF resources only through the AWS WAF Classic APIs. The AWS WAF Classic APIs have retained the prior names, endpoints, and namespaces.
For information, including how to migrate your AWS WAF resources to this version, see the AWS WAF Developer Guide.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront, an Amazon API Gateway API, or an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You also can configure CloudFront to return a custom error page when a request is blocked.
This API guide is for developers who need detailed information about AWS WAF API actions, data types, and errors. For detailed information about AWS WAF features and an overview of how to use AWS WAF, see the AWS WAF Developer Guide.
You can make calls using the endpoints listed in AWS Service Endpoints for AWS WAF.
For regional applications, you can use any of the endpoints in the list. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront applications, you must use the API endpoint listed for US East (N. Virginia): us-east-1.
Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see AWS SDKs.
We currently provide two versions of the AWS WAF API: this API and the prior versions, the classic AWS WAF APIs. This new API provides the same functionality as the older versions, with the following major improvements:
You use one API for both global and regional applications. Where you need to distinguish the scope, you specify a Scope
parameter and set it to CLOUDFRONT
or REGIONAL
.
You can define a Web ACL or rule group with a single call, and update it with a single call. You define all rule specifications in JSON format, and pass them to your rule group or Web ACL calls.
The limits AWS WAF places on the use of rules more closely reflects the cost of running each type of rule. Rule groups include capacity settings, so you know the maximum cost of a rule group when you use it.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Associates a Web ACL with a regional application resource, to protect the resource. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront, you can associate the Web ACL by providing the ARN
of the WebACL to the CloudFront API call UpdateDistribution
. For information, see UpdateDistribution.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Associates a Web ACL with a regional application resource, to protect the resource. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront, don't use this call. Instead, use your CloudFront distribution configuration. To associate a Web ACL, in the CloudFront call UpdateDistribution
, set the web ACL ID to the Amazon Resource Name (ARN) of the Web ACL. For information, see UpdateDistribution.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Returns the web ACL capacity unit (WCU) requirements for a specified scope and set of rules. You can use this to check the capacity requirements for the rules you want to use in a RuleGroup or WebACL.
AWS WAF uses WCUs to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs. AWS WAF calculates capacity differently for each rule type, to reflect the relative cost of each rule. Simple rules that cost little to run use fewer WCUs than more complex rules that use more processing power. Rule group capacity is fixed at creation, which helps users plan their web ACL WCU usage when they use a rule group. The WCU limit for web ACLs is 1,500.
", "CreateIPSet": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Creates an IPSet, which you use to identify web requests that originate from specific IP addresses or ranges of IP addresses. For example, if you're receiving a lot of requests from a ranges of IP addresses, you can configure AWS WAF to block them using an IPSet that lists those IP addresses.
", "CreateRegexPatternSet": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Creates a RegexPatternSet, which you reference in a RegexPatternSetReferenceStatement, to have AWS WAF inspect a web request component for the specified patterns.
", "CreateRuleGroup": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Creates a RuleGroup per the specifications provided.
A rule group defines a collection of rules to inspect and control web requests that you can use in a WebACL. When you create a rule group, you define an immutable capacity limit. If you update a rule group, you must stay within the capacity. This allows others to reuse the rule group with confidence in its capacity requirements.
", "CreateWebACL": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Creates a WebACL per the specifications provided.
A Web ACL defines a collection of rules to use to inspect and control web requests. Each rule has an action defined (allow, block, or count) for requests that match the statement of the rule. In the Web ACL, you assign a default action to take (allow, block) for any request that does not match any of the rules. The rules in a Web ACL can be a combination of the types Rule, RuleGroup, and managed rule group. You can associate a Web ACL with one or more AWS resources to protect. The resources can be Amazon CloudFront, an Amazon API Gateway API, or an Application Load Balancer.
", + "DeleteFirewallManagerRuleGroups": "Deletes all rule groups that are managed by AWS Firewall Manager for the specified web ACL.
You can only use this if ManagedByFirewallManager
is false in the specified WebACL.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the specified IPSet.
", "DeleteLoggingConfiguration": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the LoggingConfiguration from the specified web ACL.
", + "DeletePermissionPolicy": "Permanently deletes an IAM policy from the specified rule group.
You must be the owner of the rule group to perform this operation.
", "DeleteRegexPatternSet": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the specified RegexPatternSet.
", "DeleteRuleGroup": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the specified RuleGroup.
", - "DeleteWebACL": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the specified WebACL.
", + "DeleteWebACL": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Deletes the specified WebACL.
You can only use this if ManagedByFirewallManager
is false in the specified WebACL.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Provides high-level information for a managed rule group, including descriptions of the rules.
", - "DisassociateWebACL": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Disassociates a Web ACL from a regional application resource. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront, you can disassociate the Web ACL by providing an empty web ACL ARN in the CloudFront API call UpdateDistribution
. For information, see UpdateDistribution.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Disassociates a Web ACL from a regional application resource. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
For AWS CloudFront, don't use this call. Instead, use your CloudFront distribution configuration. To disassociate a Web ACL, provide an empty web ACL ID in the CloudFront call UpdateDistribution
. For information, see UpdateDistribution.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves the specified IPSet.
", "GetLoggingConfiguration": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Returns the LoggingConfiguration for the specified web ACL.
", + "GetPermissionPolicy": "Returns the IAM policy that is attached to the specified rule group.
You must be the owner of the rule group to perform this operation.
", "GetRateBasedStatementManagedKeys": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves the keys that are currently blocked by a rate-based rule. The maximum number of managed keys that can be blocked for a single rate-based rule is 10,000. If more than 10,000 addresses exceed the rate limit, those with the highest rates are blocked.
", "GetRegexPatternSet": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves the specified RegexPatternSet.
", "GetRuleGroup": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves the specified RuleGroup.
", @@ -31,7 +34,8 @@ "ListRuleGroups": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves an array of RuleGroupSummary objects for the rule groups that you manage.
", "ListTagsForResource": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves the TagInfoForResource for the specified resource.
", "ListWebACLs": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Retrieves an array of WebACLSummary objects for the web ACLs that you manage.
", - "PutLoggingConfiguration": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Enables the specified LoggingConfiguration, to start logging from a web ACL, according to the configuration provided.
You can access information about all traffic that AWS WAF inspects using the following steps:
Create an Amazon Kinesis Data Firehose.
Create the data firehose with a PUT source and in the region that you are operating. If you are capturing logs for Amazon CloudFront, always create the firehose in US East (N. Virginia).
Do not create the data firehose using a Kinesis stream
as your source.
Associate that firehose to your web ACL using a PutLoggingConfiguration
request.
When you successfully enable logging using a PutLoggingConfiguration
request, AWS WAF will create a service linked role with the necessary permissions to write logs to the Amazon Kinesis Data Firehose. For more information, see Logging Web ACL Traffic Information in the AWS WAF Developer Guide.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Enables the specified LoggingConfiguration, to start logging from a web ACL, according to the configuration provided.
You can access information about all traffic that AWS WAF inspects using the following steps:
Create an Amazon Kinesis Data Firehose.
Create the data firehose with a PUT source and in the Region that you are operating. If you are capturing logs for Amazon CloudFront, always create the firehose in US East (N. Virginia).
Do not create the data firehose using a Kinesis stream
as your source.
Associate that firehose to your web ACL using a PutLoggingConfiguration
request.
When you successfully enable logging using a PutLoggingConfiguration
request, AWS WAF will create a service linked role with the necessary permissions to write logs to the Amazon Kinesis Data Firehose. For more information, see Logging Web ACL Traffic Information in the AWS WAF Developer Guide.
Attaches an IAM policy to the specified resource. Use this to share a rule group across accounts.
You must be the owner of the rule group to perform this operation.
This action is subject to the following restrictions:
You can attach only one policy with each PutPermissionPolicy
request.
The ARN in the request must be a valid WAF RuleGroup ARN and the rule group must exist in the same region.
The user making the request must be the owner of the rule group.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Associates tags with the specified AWS resource. Tags are key:value pairs that you can associate with AWS resources. For example, the tag key might be \"customer\" and the tag value might be \"companyA.\" You can specify one or more tags to add to each container. You can add up to 50 tags to each AWS resource.
", "UntagResource": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Disassociates tags from an AWS resource. Tags are key:value pairs that you can associate with AWS resources. For example, the tag key might be \"customer\" and the tag value might be \"companyA.\" You can specify one or more tags to add to each container. You can add up to 50 tags to each AWS resource.
", "UpdateIPSet": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
Updates the specified IPSet.
", @@ -92,7 +96,8 @@ "base": null, "refs": { "VisibilityConfig$SampledRequestsEnabled": "A boolean indicating whether AWS WAF should store a sampling of the web requests that match the rules. You can view the sampled requests through the AWS WAF console.
", - "VisibilityConfig$CloudWatchMetricsEnabled": "A boolean indicating whether the associated resource sends metrics to CloudWatch. For the list of available metrics, see AWS WAF Metrics.
" + "VisibilityConfig$CloudWatchMetricsEnabled": "A boolean indicating whether the associated resource sends metrics to CloudWatch. For the list of available metrics, see AWS WAF Metrics.
", + "WebACL$ManagedByFirewallManager": "Indicates whether this web ACL is managed by AWS Firewall Manager. If true, then only AWS Firewall Manager can delete the web ACL or any Firewall Manager rule groups in the web ACL.
" } }, "ByteMatchStatement": { @@ -205,6 +210,16 @@ "WebACL$DefaultAction": "The action to perform if none of the Rules
contained in the WebACL
match.
A friendly description of the IP set. You cannot change the description of an IP set after you create it.
", - "CreateRegexPatternSetRequest$Description": "A friendly description of the set. You cannot change the description of a set after you create it.
", - "CreateRuleGroupRequest$Description": "A friendly description of the rule group. You cannot change the description of a rule group after you create it.
", - "CreateWebACLRequest$Description": "A friendly description of the Web ACL. You cannot change the description of a Web ACL after you create it.
", - "IPSet$Description": "A friendly description of the IP set. You cannot change the description of an IP set after you create it.
", - "IPSetSummary$Description": "A friendly description of the IP set. You cannot change the description of an IP set after you create it.
", + "CreateIPSetRequest$Description": "A description of the IP set that helps with identification. You cannot change the description of an IP set after you create it.
", + "CreateRegexPatternSetRequest$Description": "A description of the set that helps with identification. You cannot change the description of a set after you create it.
", + "CreateRuleGroupRequest$Description": "A description of the rule group that helps with identification. You cannot change the description of a rule group after you create it.
", + "CreateWebACLRequest$Description": "A description of the Web ACL that helps with identification. You cannot change the description of a Web ACL after you create it.
", + "IPSet$Description": "A description of the IP set that helps with identification. You cannot change the description of an IP set after you create it.
", + "IPSetSummary$Description": "A description of the IP set that helps with identification. You cannot change the description of an IP set after you create it.
", "ManagedRuleGroupSummary$Description": "The description of the managed rule group, provided by AWS Managed Rules or the AWS Marketplace seller who manages it.
", - "RegexPatternSet$Description": "A friendly description of the set. You cannot change the description of a set after you create it.
", - "RegexPatternSetSummary$Description": "A friendly description of the set. You cannot change the description of a set after you create it.
", - "RuleGroup$Description": "A friendly description of the rule group. You cannot change the description of a rule group after you create it.
", - "RuleGroupSummary$Description": "A friendly description of the rule group. You cannot change the description of a rule group after you create it.
", - "UpdateIPSetRequest$Description": "A friendly description of the IP set. You cannot change the description of an IP set after you create it.
", - "UpdateRegexPatternSetRequest$Description": "A friendly description of the set. You cannot change the description of a set after you create it.
", - "UpdateRuleGroupRequest$Description": "A friendly description of the rule group. You cannot change the description of a rule group after you create it.
", - "UpdateWebACLRequest$Description": "A friendly description of the Web ACL. You cannot change the description of a Web ACL after you create it.
", - "WebACL$Description": "A friendly description of the Web ACL. You cannot change the description of a Web ACL after you create it.
", - "WebACLSummary$Description": "A friendly description of the Web ACL. You cannot change the description of a Web ACL after you create it.
" + "RegexPatternSet$Description": "A description of the set that helps with identification. You cannot change the description of a set after you create it.
", + "RegexPatternSetSummary$Description": "A description of the set that helps with identification. You cannot change the description of a set after you create it.
", + "RuleGroup$Description": "A description of the rule group that helps with identification. You cannot change the description of a rule group after you create it.
", + "RuleGroupSummary$Description": "A description of the rule group that helps with identification. You cannot change the description of a rule group after you create it.
", + "UpdateIPSetRequest$Description": "A description of the IP set that helps with identification. You cannot change the description of an IP set after you create it.
", + "UpdateRegexPatternSetRequest$Description": "A description of the set that helps with identification. You cannot change the description of a set after you create it.
", + "UpdateRuleGroupRequest$Description": "A description of the rule group that helps with identification. You cannot change the description of a rule group after you create it.
", + "UpdateWebACLRequest$Description": "A description of the Web ACL that helps with identification. You cannot change the description of a Web ACL after you create it.
", + "WebACL$Description": "A description of the Web ACL that helps with identification. You cannot change the description of a Web ACL after you create it.
", + "WebACLSummary$Description": "A description of the Web ACL that helps with identification. You cannot change the description of a Web ACL after you create it.
" } }, "EntityId": { @@ -326,39 +351,40 @@ "EntityName": { "base": null, "refs": { - "CreateIPSetRequest$Name": "A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
A friendly name of the set. You cannot change the name after you create the set.
", - "CreateRuleGroupRequest$Name": "A friendly name of the rule group. You cannot change the name of a rule group after you create it.
", - "CreateWebACLRequest$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", - "DeleteIPSetRequest$Name": "A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
A friendly name of the set. You cannot change the name after you create the set.
", - "DeleteRuleGroupRequest$Name": "A friendly name of the rule group. You cannot change the name of a rule group after you create it.
", - "DeleteWebACLRequest$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "CreateIPSetRequest$Name": "The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the set. You cannot change the name after you create the set.
", + "CreateRuleGroupRequest$Name": "The name of the rule group. You cannot change the name of a rule group after you create it.
", + "CreateWebACLRequest$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "DeleteIPSetRequest$Name": "The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the set. You cannot change the name after you create the set.
", + "DeleteRuleGroupRequest$Name": "The name of the rule group. You cannot change the name of a rule group after you create it.
", + "DeleteWebACLRequest$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", "DescribeManagedRuleGroupRequest$Name": "The name of the managed rule group. You use this, along with the vendor name, to identify the rule group.
", "ExcludedRule$Name": "The name of the rule to exclude.
", - "GetIPSetRequest$Name": "A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "FirewallManagerRuleGroup$Name": "The name of the rule group. You cannot change the name of a rule group after you create it.
", + "GetIPSetRequest$Name": "The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", "GetRateBasedStatementManagedKeysRequest$RuleName": "The name of the rate-based rule to get the keys for.
", - "GetRegexPatternSetRequest$Name": "A friendly name of the set. You cannot change the name after you create the set.
", - "GetRuleGroupRequest$Name": "A friendly name of the rule group. You cannot change the name of a rule group after you create it.
", - "GetWebACLRequest$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", - "IPSet$Name": "A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the set. You cannot change the name after you create the set.
", + "GetRuleGroupRequest$Name": "The name of the rule group. You cannot change the name of a rule group after you create it.
", + "GetWebACLRequest$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "IPSet$Name": "The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the managed rule group. You use this, along with the vendor name, to identify the rule group.
", "ManagedRuleGroupSummary$Name": "The name of the managed rule group. You use this, along with the vendor name, to identify the rule group.
", - "RegexPatternSet$Name": "A friendly name of the set. You cannot change the name after you create the set.
", - "RegexPatternSetSummary$Name": "A friendly name of the data type instance. You cannot change the name after you create the instance.
", - "Rule$Name": "A friendly name of the rule. You can't change the name of a Rule
after you create it.
A friendly name of the rule group. You cannot change the name of a rule group after you create it.
", - "RuleGroupSummary$Name": "A friendly name of the data type instance. You cannot change the name after you create the instance.
", + "RegexPatternSet$Name": "The name of the set. You cannot change the name after you create the set.
", + "RegexPatternSetSummary$Name": "The name of the data type instance. You cannot change the name after you create the instance.
", + "Rule$Name": "The name of the rule. You can't change the name of a Rule
after you create it.
The name of the rule group. You cannot change the name of a rule group after you create it.
", + "RuleGroupSummary$Name": "The name of the data type instance. You cannot change the name after you create the instance.
", "RuleSummary$Name": "The name of the rule.
", "SampledHTTPRequest$RuleNameWithinRuleGroup": "The name of the Rule
that the request matched. For managed rule groups, the format for this name is <vendor name>#<managed rule group name>#<rule name>
. For your own rule groups, the format for this name is <rule group name>#<rule name>
. If the rule is not in a rule group, the format is <rule name>
.
A friendly name of the IP set. You cannot change the name of an IPSet
after you create it.
A friendly name of the set. You cannot change the name after you create the set.
", - "UpdateRuleGroupRequest$Name": "A friendly name of the rule group. You cannot change the name of a rule group after you create it.
", - "UpdateWebACLRequest$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", - "WebACL$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", - "WebACLSummary$Name": "A friendly name of the Web ACL. You cannot change the name of a Web ACL after you create it.
" + "UpdateIPSetRequest$Name": "The name of the IP set. You cannot change the name of an IPSet
after you create it.
The name of the set. You cannot change the name after you create the set.
", + "UpdateRuleGroupRequest$Name": "The name of the rule group. You cannot change the name of a rule group after you create it.
", + "UpdateWebACLRequest$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "WebACL$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
", + "WebACLSummary$Name": "The name of the Web ACL. You cannot change the name of a Web ACL after you create it.
" } }, "ErrorMessage": { @@ -367,7 +393,9 @@ "WAFAssociatedItemException$Message": null, "WAFDuplicateItemException$Message": null, "WAFInternalErrorException$Message": null, + "WAFInvalidOperationException$Message": null, "WAFInvalidParameterException$message": null, + "WAFInvalidPermissionPolicyException$Message": null, "WAFInvalidResourceException$Message": null, "WAFLimitsExceededException$Message": null, "WAFNonexistentItemException$Message": null, @@ -399,7 +427,7 @@ } }, "FieldToMatch": { - "base": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
The part of a web request that you want AWS WAF to inspect. Include the FieldToMatch
types that you want to inspect, with additional specifications as needed, according to the type.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
The part of a web request that you want AWS WAF to inspect. Include the single FieldToMatch
type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch
for each rule statement that requires it. To inspect more than one component of a web request, create a separate rule statement for each component.
The part of a web request that you want AWS WAF to inspect. For more information, see FieldToMatch.
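Since each statement names exactly one request component, inspecting two components means two rule statements. A minimal illustrative sketch of the single-component FieldToMatch shapes (the header name below is only an example):

```go
package main

import "fmt"

// Exactly one component per FieldToMatch. To inspect both the URI path and a
// header, use two rule statements (for example, combined in an AndStatement),
// each carrying its own FieldToMatch.
const uriPathOnly = `{"UriPath": {}}`
const singleHeaderOnly = `{"SingleHeader": {"Name": "user-agent"}}`

func main() {
	fmt.Println(uriPathOnly)
	fmt.Println(singleHeaderOnly)
}
```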
", "RedactedFields$member": null, @@ -416,6 +444,25 @@ "SingleQueryArgument$Name": "The name of the query argument to inspect.
" } }, + "FirewallManagerRuleGroup": { + "base": "A rule group that's defined for an AWS Firewall Manager WAF policy.
", + "refs": { + "FirewallManagerRuleGroups$member": null + } + }, + "FirewallManagerRuleGroups": { + "base": null, + "refs": { + "WebACL$PreProcessFirewallManagerRuleGroups": "The first set of rules for AWS WAF to process in the web ACL. This is defined in an AWS Firewall Manager WAF policy and contains only rule group references. You can't alter these. Any rules and rule groups that you define for the web ACL are prioritized after these.
In the Firewall Manager WAF policy, the Firewall Manager administrator can define a set of rule groups to run first in the web ACL and a set of rule groups to run last. Within each set, the administrator prioritizes the rule groups, to determine their relative processing order.
", + "WebACL$PostProcessFirewallManagerRuleGroups": "The last set of rules for AWS WAF to process in the web ACL. This is defined in an AWS Firewall Manager WAF policy and contains only rule group references. You can't alter these. Any rules and rule groups that you define for the web ACL are prioritized before these.
In the Firewall Manager WAF policy, the Firewall Manager administrator can define a set of rule groups to run first in the web ACL and a set of rule groups to run last. Within each set, the administrator prioritizes the rule groups, to determine their relative processing order.
" + } + }, + "FirewallManagerStatement": { + "base": "The processing guidance for an AWS Firewall Manager rule. This is like a regular rule Statement, but it can only contain a rule group reference.
", + "refs": { + "FirewallManagerRuleGroup$FirewallManagerStatement": "The processing guidance for an AWS Firewall Manager rule. This is like a regular rule Statement, but it can only contain a rule group reference.
" + } + }, "GeoMatchStatement": { "base": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
A rule statement used to identify web requests based on country of origin.
", "refs": { @@ -442,6 +489,16 @@ "refs": { } }, + "GetPermissionPolicyRequest": { + "base": null, + "refs": { + } + }, + "GetPermissionPolicyResponse": { + "base": null, + "refs": { + } + }, "GetRateBasedStatementManagedKeysRequest": { "base": null, "refs": { @@ -687,6 +744,8 @@ "LockToken": { "base": null, "refs": { + "DeleteFirewallManagerRuleGroupsRequest$WebACLLockToken": "A token used for optimistic locking. AWS WAF returns a token to your get and list requests, to mark the state of the entity at the time of the request. To make changes to the entity associated with the token, you provide the token to operations like update and delete. AWS WAF uses the token to ensure that no changes have been made to the entity since you last retrieved it. If a change has been made, the update fails with a WAFOptimisticLockException
. If this happens, perform another get, and use the new token returned by that operation.
A token used for optimistic locking. AWS WAF returns a token to your get and list requests, to mark the state of the entity at the time of the request. To make changes to the entity associated with the token, you provide the token to operations like update and delete. AWS WAF uses the token to ensure that no changes have been made to the entity since you last retrieved it. If a change has been made, the update fails with a WAFOptimisticLockException
. If this happens, perform another get, and use the new token returned by that operation.
A token used for optimistic locking. AWS WAF returns a token to your get and list requests, to mark the state of the entity at the time of the request. To make changes to the entity associated with the token, you provide the token to operations like update and delete. AWS WAF uses the token to ensure that no changes have been made to the entity since you last retrieved it. If a change has been made, the update fails with a WAFOptimisticLockException
. If this happens, perform another get, and use the new token returned by that operation.
A token used for optimistic locking. AWS WAF returns a token to your get and list requests, to mark the state of the entity at the time of the request. To make changes to the entity associated with the token, you provide the token to operations like update and delete. AWS WAF uses the token to ensure that no changes have been made to the entity since you last retrieved it. If a change has been made, the update fails with a WAFOptimisticLockException
. If this happens, perform another get, and use the new token returned by that operation.
A token used for optimistic locking. AWS WAF returns a token to your get and list requests, to mark the state of the entity at the time of the request. To make changes to the entity associated with the token, you provide the token to operations like update and delete. AWS WAF uses the token to ensure that no changes have been made to the entity since you last retrieved it. If a change has been made, the update fails with a WAFOptimisticLockException
. If this happens, perform another get, and use the new token returned by that operation.
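The get/update/retry flow those descriptions spell out can be wrapped in a small loop. This is an illustrative sketch only: `get` and `update` are hypothetical stand-ins for the real get and update calls (for example GetWebACL and UpdateWebACL), and the sentinel error stands in for a service error whose code is WAFOptimisticLockException.

```go
package main

import (
	"errors"
	"fmt"
)

// errOptimisticLock stands in for a service error with the code
// "WAFOptimisticLockException"; how you detect that depends on the SDK version.
var errOptimisticLock = errors.New("WAFOptimisticLockException")

// updateWithRetry fetches the current lock token, attempts the update with it,
// and starts over from a fresh get whenever the token has gone stale.
// get and update are hypothetical wrappers around the real get/update calls.
func updateWithRetry(get func() (string, error), update func(token string) error, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		token, err := get()
		if err != nil {
			return err
		}
		err = update(token)
		if err == nil {
			return nil
		}
		if !errors.Is(err, errOptimisticLock) {
			return err // some other failure; don't retry
		}
		fmt.Printf("attempt %d: token %q was stale, refetching\n", attempt, token)
	}
	return fmt.Errorf("entity kept changing; gave up after %d attempts", maxAttempts)
}

func main() {
	// Simulate an entity that another writer changes once, invalidating our token.
	current, raceOnce := "v1", true
	get := func() (string, error) { return current, nil }
	update := func(token string) error {
		if raceOnce {
			raceOnce, current = false, "v2"
			return errOptimisticLock
		}
		if token != current {
			return errOptimisticLock
		}
		return nil
	}
	fmt.Println(updateWithRetry(get, update, 3)) // <nil>
}
```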
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups.
You can't nest a ManagedRuleGroupStatement
, for example for use inside a NotStatement
or OrStatement
. It can only be referenced as a top-level statement within a rule.
A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups.
You can't nest a ManagedRuleGroupStatement
, for example for use inside a NotStatement
or OrStatement
. It can only be referenced as a top-level statement within a rule.
The metric name assigned to the Rule
or RuleGroup
for which you want a sample of requests.
A friendly name of the CloudWatch metric. The name can contain only alphanumeric characters (A-Z, a-z, 0-9), with length from one to 128 characters. It can't contain whitespace or metric names reserved for AWS WAF, for example \"All\" and \"Default_Action.\" You can't change a MetricName
after you create a VisibilityConfig
.
A name of the CloudWatch metric. The name can contain only alphanumeric characters (A-Z, a-z, 0-9), with length from one to 128 characters. It can't contain whitespace or metric names reserved for AWS WAF, for example \"All\" and \"Default_Action.\" You can't change a MetricName
after you create a VisibilityConfig
.
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
The action to use to override the rule's Action
setting. You can use no override action, in which case the rule action is in effect, or count, in which case, if the rule matches a web request, it only counts the match.
The override action to apply to the rules in a rule group. Used only for rule statements that reference a rule group, like RuleGroupReferenceStatement
and ManagedRuleGroupStatement
.
Set the override action to none to leave the rule actions in effect. Set it to count to only count matches, regardless of the rule action settings.
In a Rule, you must specify either this OverrideAction
setting or the rule Action
setting, but not both:
If the rule statement references a rule group, use this override action setting and not the action setting.
If the rule statement does not reference a rule group, use the rule action setting and not this rule override action setting.
The action to use to override the rule's Action
setting. You can use no override action, in which case the rule action is in effect, or count action, in which case, if the rule matches a web request, it only counts the match.
The override action to apply to the rules in a rule group. Used only for rule statements that reference a rule group, like RuleGroupReferenceStatement
and ManagedRuleGroupStatement
.
Set the override action to none to leave the rule actions in effect. Set it to count to only count matches, regardless of the rule action settings.
In a Rule, you must specify either this OverrideAction
setting or the rule Action
setting, but not both:
If the rule statement references a rule group, use this override action setting and not the action setting.
If the rule statement does not reference a rule group, use the rule action setting and not this rule override action setting.
The IAM policy that is attached to the specified rule group.
", + "PutPermissionPolicyRequest$Policy": "The policy to attach to the specified rule group.
The policy specifications must conform to the following:
The policy must be composed using IAM Policy version 2012-10-17 or version 2015-01-01.
The policy must include specifications for Effect
, Action
, and Principal
.
Effect
must specify Allow
.
Action
must specify wafv2:CreateWebACL
, wafv2:UpdateWebACL
, and wafv2:PutFirewallManagerRuleGroups
. AWS WAF rejects any extra actions or wildcard actions in the policy.
The policy must not include a Resource
parameter.
For more information, see IAM Policies.
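Put together, those constraints describe a policy document like the sketch below. The principal ARN uses a placeholder account ID, and this is only one illustration of a document that satisfies the listed rules, not a required form.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// examplePolicy follows the constraints above: version 2012-10-17, an Effect of
// Allow, an explicit Principal (placeholder account ID), exactly the three
// wafv2 actions, and no Resource element.
const examplePolicy = `{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
      "Action": [
        "wafv2:CreateWebACL",
        "wafv2:UpdateWebACL",
        "wafv2:PutFirewallManagerRuleGroups"
      ]
    }
  ]
}`

func main() {
	// Check that the document at least parses before handing it to PutPermissionPolicy.
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(examplePolicy), &doc); err != nil {
		log.Fatalf("policy is not valid JSON: %v", err)
	}
	fmt.Println("policy parses; top-level keys:", len(doc))
}
```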
" + } + }, "PopulationSize": { "base": null, "refs": { @@ -850,6 +918,16 @@ "refs": { } }, + "PutPermissionPolicyRequest": { + "base": null, + "refs": { + } + }, + "PutPermissionPolicyResponse": { + "base": null, + "refs": { + } + }, "QueryString": { "base": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
The query string of a web request. This is the part of a URL that appears after a ?
character, if any.
This is used only to indicate the web request component for AWS WAF to inspect, in the FieldToMatch specification.
", "refs": { @@ -937,9 +1015,12 @@ "refs": { "AssociateWebACLRequest$WebACLArn": "The Amazon Resource Name (ARN) of the Web ACL that you want to associate with the resource.
", "AssociateWebACLRequest$ResourceArn": "The Amazon Resource Name (ARN) of the resource to associate with the web ACL.
The ARN must be in one of the following formats:
For an Application Load Balancer: arn:aws:elasticloadbalancing:region:account-id:loadbalancer/app/load-balancer-name/load-balancer-id
For an Amazon API Gateway stage: arn:aws:apigateway:region::/restapis/api-id/stages/stage-name
The Amazon Resource Name (ARN) of the web ACL.
", "DeleteLoggingConfigurationRequest$ResourceArn": "The Amazon Resource Name (ARN) of the web ACL from which you want to delete the LoggingConfiguration.
", + "DeletePermissionPolicyRequest$ResourceArn": "The Amazon Resource Name (ARN) of the rule group from which you want to delete the policy.
You must be the owner of the rule group to perform this operation.
", "DisassociateWebACLRequest$ResourceArn": "The Amazon Resource Name (ARN) of the resource to disassociate from the web ACL.
The ARN must be in one of the following formats:
For an Application Load Balancer: arn:aws:elasticloadbalancing:region:account-id:loadbalancer/app/load-balancer-name/load-balancer-id
For an Amazon API Gateway stage: arn:aws:apigateway:region::/restapis/api-id/stages/stage-name
The Amazon Resource Name (ARN) of the web ACL for which you want to get the LoggingConfiguration.
", + "GetPermissionPolicyRequest$ResourceArn": "The Amazon Resource Name (ARN) of the rule group for which you want to get the policy.
", "GetSampledRequestsRequest$WebAclArn": "The Amazon resource name (ARN) of the WebACL
for which you want a sample of requests.
The ARN (Amazon Resource Name) of the resource.
", "IPSet$ARN": "The Amazon Resource Name (ARN) of the entity.
", @@ -949,6 +1030,7 @@ "ListTagsForResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource.
", "LogDestinationConfigs$member": null, "LoggingConfiguration$ResourceArn": "The Amazon Resource Name (ARN) of the web ACL that you want to associate with LogDestinationConfigs
.
The Amazon Resource Name (ARN) of the RuleGroup to which you want to attach the policy.
", "RegexPatternSet$ARN": "The Amazon Resource Name (ARN) of the entity.
", "RegexPatternSetReferenceStatement$ARN": "The Amazon Resource Name (ARN) of the RegexPatternSet that this statement references.
", "RegexPatternSetSummary$ARN": "The Amazon Resource Name (ARN) of the entity.
", @@ -984,7 +1066,7 @@ "RuleAction": { "base": "This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
The action that AWS WAF should take on a web request when it matches a rule's statement. Settings at the web ACL level can override the rule action setting.
", "refs": { - "Rule$Action": "The action that AWS WAF should take on a web request when it matches the rule's statement. Settings at the web ACL level can override the rule action setting.
", + "Rule$Action": "The action that AWS WAF should take on a web request when it matches the rule statement. Settings at the web ACL level can override the rule action setting.
This is used only for rules whose statements do not reference a rule group. Rule statements that reference a rule group include RuleGroupReferenceStatement
and ManagedRuleGroupStatement
.
You must specify either this Action
setting or the rule OverrideAction
setting, but not both:
If the rule statement does not reference a rule group, use this rule action setting and not the rule override action setting.
If the rule statement references a rule group, use the override action setting and not this action setting.
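Read side by side, the two cases look like this: a rule whose statement inspects requests directly carries Action, while a rule that wraps a rule group reference carries OverrideAction. The JSON below is an illustrative sketch (rule and metric names are invented, AWSManagedRulesCommonRuleSet is used only as a familiar managed rule group, and SearchString is shown raw, the way the CLI and SDKs accept it).

```go
package main

import "fmt"

// A rule whose statement inspects requests directly: set Action, no OverrideAction.
const ruleWithAction = `{
  "Name": "block-admin-path",
  "Priority": 0,
  "Statement": {
    "ByteMatchStatement": {
      "FieldToMatch": {"UriPath": {}},
      "PositionalConstraint": "STARTS_WITH",
      "SearchString": "/admin",
      "TextTransformations": [{"Priority": 0, "Type": "NONE"}]
    }
  },
  "Action": {"Block": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "block-admin-path"
  }
}`

// A rule that references a rule group: set OverrideAction, no Action.
const ruleWithOverrideAction = `{
  "Name": "managed-common",
  "Priority": 1,
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesCommonRuleSet"
    }
  },
  "OverrideAction": {"None": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "managed-common"
  }
}`

func main() {
	fmt.Println(ruleWithAction)
	fmt.Println(ruleWithOverrideAction)
}
```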
This is the latest version of AWS WAF, named AWS WAFV2, released in November, 2019. For information, including how to migrate your AWS WAF resources from the prior release, see the AWS WAF Developer Guide.
A rule statement used to run the rules that are defined in a RuleGroup. To use this, create a rule group with your rules, then provide the ARN of the rule group in this statement.
You cannot nest a RuleGroupReferenceStatement
, for example for use inside a NotStatement
or OrStatement
. It can only be referenced as a top-level statement within a rule.
A rule statement used to run the rules that are defined in a RuleGroup. To use this, create a rule group with your rules, then provide the ARN of the rule group in this statement.
You cannot nest a RuleGroupReferenceStatement
, for example for use inside a NotStatement
or OrStatement
. It can only be referenced as a top-level statement within a rule.
If you define more than one rule group in the first or last Firewall Manager rule groups, AWS WAF evaluates each request against the rule groups in order, starting from the lowest priority setting. The priorities don't need to be consecutive, but they must all be different.
", "Rule$Priority": "If you define more than one Rule
in a WebACL
, AWS WAF evaluates each request against the Rules
in order based on the value of Priority
. AWS WAF processes rules with lower priority first. The priorities don't need to be consecutive, but they must all be different.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
Specifies whether this is for an AWS CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB) or an API Gateway stage.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1
.
API and SDKs - For all calls, use the Region endpoint us-east-1.
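For this SDK the practical upshot is the same as for the CLI: CLOUDFRONT-scoped calls must go to the us-east-1 endpoint. A minimal sketch, assuming the v0.x `external.LoadDefaultAWSConfig` loader and the generated `wafv2.New` client constructor of this release (treat those call shapes as assumptions and check the version you build against); the region is pinned by hand and CLOUDFRONT is then passed as the Scope on each request input.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/wafv2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}

	// CLOUDFRONT-scoped calls must use the us-east-1 endpoint, so pin the region
	// regardless of what the environment or shared config resolved.
	cfg.Region = "us-east-1"

	// Assumed v0.x constructor shape; pass Scope CLOUDFRONT on each request
	// input (for example when listing or getting web ACLs).
	svc := wafv2.New(cfg)
	_ = svc
}
```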
A string value that you want AWS WAF to search for. AWS WAF searches only in the part of web requests that you designate for inspection in FieldToMatch. The maximum length of the value is 50 bytes.
Valid values depend on the areas that you specify for inspection in FieldToMatch
:
Method
: The HTTP method that you want AWS WAF to search for. This indicates the type of operation specified in the request.
UriPath
: The value that you want AWS WAF to search for in the URI path, for example, /images/daily-ad.jpg
.
If SearchString
includes alphabetic characters A-Z and a-z, note that the value is case sensitive.
If you're using the AWS WAF API
Specify a base64-encoded version of the value. The maximum length of the value before you base64-encode it is 50 bytes.
For example, suppose the value of Type
is HEADER
and the value of Data
is User-Agent
. If you want to search the User-Agent
header for the value BadBot
, you base64-encode BadBot
using MIME base64-encoding and include the resulting value, QmFkQm90
, in the value of SearchString
.
If you're using the AWS CLI or one of the AWS SDKs
The value that you want AWS WAF to search for. The SDK automatically base64 encodes the value.
" + "ByteMatchStatement$SearchString": "A string value that you want AWS WAF to search for. AWS WAF searches only in the part of web requests that you designate for inspection in FieldToMatch. The maximum length of the value is 50 bytes.
Valid values depend on the component that you specify for inspection in FieldToMatch
:
Method
: The HTTP method that you want AWS WAF to search for. This indicates the type of operation specified in the request.
UriPath
: The value that you want AWS WAF to search for in the URI path, for example, /images/daily-ad.jpg
.
If SearchString
includes alphabetic characters A-Z and a-z, note that the value is case sensitive.
If you're using the AWS WAF API
Specify a base64-encoded version of the value. The maximum length of the value before you base64-encode it is 50 bytes.
For example, suppose the value of Type
is HEADER
and the value of Data
is User-Agent
. If you want to search the User-Agent
header for the value BadBot
, you base64-encode BadBot
using MIME base64-encoding and include the resulting value, QmFkQm90
, in the value of SearchString
.
If you're using the AWS CLI or one of the AWS SDKs
The value that you want AWS WAF to search for. The SDK automatically base64 encodes the value.
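The encoding rule above is easy to verify with the standard library; BadBot does encode to QmFkQm90. It only matters when calling the REST API directly, since, as the description says, the CLI and the SDKs base64-encode the value for you.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Raw value to look for in the User-Agent header.
	raw := "BadBot"

	// What SearchString must contain when you call the REST API directly.
	encoded := base64.StdEncoding.EncodeToString([]byte(raw))
	fmt.Println(encoded) // QmFkQm90

	// Decoding recovers the original value.
	decoded, _ := base64.StdEncoding.DecodeString(encoded)
	fmt.Println(string(decoded)) // BadBot
}
```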
" } }, "SingleHeader": { @@ -1217,11 +1301,11 @@ "TextTransformations": { "base": null, "refs": { - "ByteMatchStatement$TextTransformations": "Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by FieldToMatch
, starting from the lowest priority setting, before inspecting the content for a match.
Defines and enables Amazon CloudWatch metrics and web request sample collection.
", "CreateWebACLRequest$VisibilityConfig": "Defines and enables Amazon CloudWatch metrics and web request sample collection.
", + "FirewallManagerRuleGroup$VisibilityConfig": null, "Rule$VisibilityConfig": "Defines and enables Amazon CloudWatch metrics and web request sample collection.
", "RuleGroup$VisibilityConfig": "Defines and enables Amazon CloudWatch metrics and web request sample collection.
", "UpdateRuleGroupRequest$VisibilityConfig": "Defines and enables Amazon CloudWatch metrics and web request sample collection.
", @@ -1336,11 +1421,21 @@ "refs": { } }, + "WAFInvalidOperationException": { + "base": "The operation isn't valid.
", + "refs": { + } + }, "WAFInvalidParameterException": { "base": "The operation failed because AWS WAF didn't recognize a parameter in the request. For example:
You specified an invalid parameter name or value.
Your nested statement isn't valid. You might have tried to nest a statement that can’t be nested.
You tried to update a WebACL
with a DefaultAction
that isn't among the types available at DefaultAction.
Your request references an ARN that is malformed, or corresponds to a resource with which a Web ACL cannot be associated.
The operation failed because the specified policy isn't in the proper format.
The policy specifications must conform to the following:
The policy must be composed using IAM Policy version 2012-10-17 or version 2015-01-01.
The policy must include specifications for Effect
, Action
, and Principal
.
Effect
must specify Allow
.
Action
must specify wafv2:CreateWebACL
, wafv2:UpdateWebACL
, and wafv2:PutFirewallManagerRuleGroups
. AWS WAF rejects any extra actions or wildcard actions in the policy.
The policy must not include a Resource
parameter.
For more information, see IAM Policies.
", + "refs": { + } + }, "WAFInvalidResourceException": { "base": "AWS WAF couldn’t perform the operation because the resource that you requested isn’t valid. Check the resource, and try again.
", "refs": { diff --git a/models/apis/xray/2016-04-12/api-2.json b/models/apis/xray/2016-04-12/api-2.json index 6af76eafcec..b9bd6d8fc7c 100644 --- a/models/apis/xray/2016-04-12/api-2.json +++ b/models/apis/xray/2016-04-12/api-2.json @@ -480,7 +480,8 @@ "ErrorRootCause":{ "type":"structure", "members":{ - "Services":{"shape":"ErrorRootCauseServices"} + "Services":{"shape":"ErrorRootCauseServices"}, + "ClientImpacting":{"shape":"NullableBoolean"} } }, "ErrorRootCauseEntity":{ @@ -525,7 +526,8 @@ "FaultRootCause":{ "type":"structure", "members":{ - "Services":{"shape":"FaultRootCauseServices"} + "Services":{"shape":"FaultRootCauseServices"}, + "ClientImpacting":{"shape":"NullableBoolean"} } }, "FaultRootCauseEntity":{ @@ -892,7 +894,8 @@ "ResponseTimeRootCause":{ "type":"structure", "members":{ - "Services":{"shape":"ResponseTimeRootCauseServices"} + "Services":{"shape":"ResponseTimeRootCauseServices"}, + "ClientImpacting":{"shape":"NullableBoolean"} } }, "ResponseTimeRootCauseEntity":{ diff --git a/models/apis/xray/2016-04-12/docs-2.json b/models/apis/xray/2016-04-12/docs-2.json index 6b134701cbf..32b7428bb56 100644 --- a/models/apis/xray/2016-04-12/docs-2.json +++ b/models/apis/xray/2016-04-12/docs-2.json @@ -545,11 +545,14 @@ "base": null, "refs": { "AnnotationValue$BooleanValue": "Value for a Boolean annotation.
", + "ErrorRootCause$ClientImpacting": "A flag that denotes that the root cause impacts the trace client.
", "ErrorRootCauseEntity$Remote": "A flag that denotes a remote subsegment.
", "ErrorRootCauseService$Inferred": "A Boolean value indicating if the service is inferred from the trace.
", + "FaultRootCause$ClientImpacting": "A flag that denotes that the root cause impacts the trace client.
", "FaultRootCauseEntity$Remote": "A flag that denotes a remote subsegment.
", "FaultRootCauseService$Inferred": "A Boolean value indicating if the service is inferred from the trace.
", "GetTraceSummariesRequest$Sampling": "Set to true
to get summaries for only a subset of available traces.
", + "ResponseTimeRootCause$ClientImpacting": "A flag that denotes that the root cause impacts the trace client.
", "ResponseTimeRootCauseEntity$Remote": "A flag that denotes a remote subsegment.
", "ResponseTimeRootCauseService$Inferred": "A Boolean value indicating if the service is inferred from the trace.
", "Service$Root": "Indicates that the service was the first service to process a request.
", diff --git a/models/endpoints/endpoints.json b/models/endpoints/endpoints.json index f4e95b5b450..e00dc7f0dc0 100644 --- a/models/endpoints/endpoints.json +++ b/models/endpoints/endpoints.json @@ -274,6 +274,30 @@ }, "hostname" : "api.ecr.eu-west-3.amazonaws.com" }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ecr-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ecr-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ecr-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ecr-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { "credentialScope" : { "region" : "me-south-1" @@ -591,6 +615,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "fips.batch.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "fips.batch.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "fips.batch.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "fips.batch.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -687,9 +735,33 @@ "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, + "us-east-1-fips" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "cloudformation-fips.us-east-1.amazonaws.com" + }, "us-east-2" : { }, + "us-east-2-fips" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "cloudformation-fips.us-east-2.amazonaws.com" + }, "us-west-1" : { }, - "us-west-2" : { } + "us-west-1-fips" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "cloudformation-fips.us-west-1.amazonaws.com" + }, + "us-west-2" : { }, + "us-west-2-fips" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "cloudformation-fips.us-west-2.amazonaws.com" + } } }, "cloudfront" : { @@ -774,6 +846,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "cloudtrail-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "cloudtrail-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "cloudtrail-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "cloudtrail-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -915,6 +1011,36 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "codepipeline-fips.ca-central-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "codepipeline-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "codepipeline-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : 
"us-west-1" + }, + "hostname" : "codepipeline-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "codepipeline-fips.us-west-2.amazonaws.com" + }, "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, @@ -948,6 +1074,7 @@ "ap-southeast-2" : { }, "ca-central-1" : { }, "eu-central-1" : { }, + "eu-north-1" : { }, "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, @@ -1055,6 +1182,24 @@ "eu-central-1" : { }, "eu-west-1" : { }, "eu-west-2" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "comprehend-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "comprehend-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "comprehend-fips.us-west-2.amazonaws.com" + }, "us-east-1" : { }, "us-east-2" : { }, "us-west-2" : { } @@ -1145,6 +1290,7 @@ "eu-central-1" : { }, "eu-north-1" : { }, "eu-west-1" : { }, + "eu-west-2" : { }, "us-east-1" : { }, "us-west-2" : { } } @@ -1261,6 +1407,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "directconnect-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "directconnect-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "directconnect-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "directconnect-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1271,6 +1441,7 @@ }, "discovery" : { "endpoints" : { + "ap-southeast-2" : { }, "eu-central-1" : { }, "us-west-2" : { } } @@ -1284,6 +1455,12 @@ "ap-southeast-1" : { }, "ap-southeast-2" : { }, "ca-central-1" : { }, + "dms-fips" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "dms-fips.us-west-1.amazonaws.com" + }, "eu-central-1" : { }, "eu-north-1" : { }, "eu-west-1" : { }, @@ -1393,6 +1570,36 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "ds-fips.ca-central-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ds-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ds-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ds-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ds-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1480,6 +1687,36 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "ec2-fips.ca-central-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ec2-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ec2-fips.us-east-2.amazonaws.com" + }, + 
"fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ec2-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ec2-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1502,6 +1739,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ecs-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ecs-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ecs-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ecs-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1510,6 +1771,28 @@ "us-west-2" : { } } }, + "elastic-inference" : { + "endpoints" : { + "ap-northeast-1" : { + "hostname" : "api.elastic-inference.ap-northeast-1.amazonaws.com" + }, + "ap-northeast-2" : { + "hostname" : "api.elastic-inference.ap-northeast-2.amazonaws.com" + }, + "eu-west-1" : { + "hostname" : "api.elastic-inference.eu-west-1.amazonaws.com" + }, + "us-east-1" : { + "hostname" : "api.elastic-inference.us-east-1.amazonaws.com" + }, + "us-east-2" : { + "hostname" : "api.elastic-inference.us-east-2.amazonaws.com" + }, + "us-west-2" : { + "hostname" : "api.elastic-inference.us-west-2.amazonaws.com" + } + } + }, "elasticache" : { "endpoints" : { "ap-east-1" : { }, @@ -1552,6 +1835,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "elasticbeanstalk-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "elasticbeanstalk-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "elasticbeanstalk-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "elasticbeanstalk-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1574,8 +1881,116 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, - "me-south-1" : { }, - "sa-east-1" : { }, + "fips-ap-east-1" : { + "credentialScope" : { + "region" : "ap-east-1" + }, + "hostname" : "elasticfilesystem-fips.ap-east-1.amazonaws.com" + }, + "fips-ap-northeast-1" : { + "credentialScope" : { + "region" : "ap-northeast-1" + }, + "hostname" : "elasticfilesystem-fips.ap-northeast-1.amazonaws.com" + }, + "fips-ap-northeast-2" : { + "credentialScope" : { + "region" : "ap-northeast-2" + }, + "hostname" : "elasticfilesystem-fips.ap-northeast-2.amazonaws.com" + }, + "fips-ap-south-1" : { + "credentialScope" : { + "region" : "ap-south-1" + }, + "hostname" : "elasticfilesystem-fips.ap-south-1.amazonaws.com" + }, + "fips-ap-southeast-1" : { + "credentialScope" : { + "region" : "ap-southeast-1" + }, + "hostname" : "elasticfilesystem-fips.ap-southeast-1.amazonaws.com" + }, + "fips-ap-southeast-2" : { + "credentialScope" : { + "region" : "ap-southeast-2" + }, + "hostname" : "elasticfilesystem-fips.ap-southeast-2.amazonaws.com" + }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : 
"elasticfilesystem-fips.ca-central-1.amazonaws.com" + }, + "fips-eu-central-1" : { + "credentialScope" : { + "region" : "eu-central-1" + }, + "hostname" : "elasticfilesystem-fips.eu-central-1.amazonaws.com" + }, + "fips-eu-north-1" : { + "credentialScope" : { + "region" : "eu-north-1" + }, + "hostname" : "elasticfilesystem-fips.eu-north-1.amazonaws.com" + }, + "fips-eu-west-1" : { + "credentialScope" : { + "region" : "eu-west-1" + }, + "hostname" : "elasticfilesystem-fips.eu-west-1.amazonaws.com" + }, + "fips-eu-west-2" : { + "credentialScope" : { + "region" : "eu-west-2" + }, + "hostname" : "elasticfilesystem-fips.eu-west-2.amazonaws.com" + }, + "fips-eu-west-3" : { + "credentialScope" : { + "region" : "eu-west-3" + }, + "hostname" : "elasticfilesystem-fips.eu-west-3.amazonaws.com" + }, + "fips-me-south-1" : { + "credentialScope" : { + "region" : "me-south-1" + }, + "hostname" : "elasticfilesystem-fips.me-south-1.amazonaws.com" + }, + "fips-sa-east-1" : { + "credentialScope" : { + "region" : "sa-east-1" + }, + "hostname" : "elasticfilesystem-fips.sa-east-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "elasticfilesystem-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "elasticfilesystem-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "elasticfilesystem-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "elasticfilesystem-fips.us-west-2.amazonaws.com" + }, + "me-south-1" : { }, + "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, "us-west-1" : { }, @@ -1599,6 +2014,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "elasticloadbalancing-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "elasticloadbalancing-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "elasticloadbalancing-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "elasticloadbalancing-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1627,6 +2066,36 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "elasticmapreduce-fips.ca-central-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "elasticmapreduce-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "elasticmapreduce-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "elasticmapreduce-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "elasticmapreduce-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { @@ -1711,6 +2180,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : 
"events-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "events-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "events-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "events-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1733,6 +2226,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "firehose-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "firehose-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "firehose-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "firehose-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -1858,7 +2375,10 @@ "endpoints" : { "ap-northeast-1" : { }, "ap-northeast-2" : { }, + "ap-south-1" : { }, "ap-southeast-1" : { }, + "ap-southeast-2" : { }, + "eu-central-1" : { }, "eu-west-1" : { }, "us-east-1" : { }, "us-east-2" : { }, @@ -1868,7 +2388,11 @@ "forecastquery" : { "endpoints" : { "ap-northeast-1" : { }, + "ap-northeast-2" : { }, + "ap-south-1" : { }, "ap-southeast-1" : { }, + "ap-southeast-2" : { }, + "eu-central-1" : { }, "eu-west-1" : { }, "us-east-1" : { }, "us-east-2" : { }, @@ -1877,6 +2401,7 @@ }, "fsx" : { "endpoints" : { + "ap-east-1" : { }, "ap-northeast-1" : { }, "ap-southeast-1" : { }, "ap-southeast-2" : { }, @@ -1977,6 +2502,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "glue-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "glue-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "glue-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "glue-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -2006,6 +2555,7 @@ }, "groundstation" : { "endpoints" : { + "ap-southeast-2" : { }, "eu-north-1" : { }, "me-south-1" : { }, "us-east-2" : { }, @@ -2074,6 +2624,12 @@ "region" : "us-east-1" }, "hostname" : "iam.amazonaws.com" + }, + "iam-fips" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "iam-fips.amazonaws.com" } }, "isRegionalized" : false, @@ -2103,6 +2659,30 @@ "eu-north-1" : { }, "eu-west-1" : { }, "eu-west-2" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "inspector-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "inspector-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "inspector-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "inspector-fips.us-west-2.amazonaws.com" + }, "us-east-1" : { }, "us-east-2" : { }, 
"us-west-1" : { }, @@ -2297,6 +2877,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "kinesis-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "kinesis-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "kinesis-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "kinesis-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -2398,15 +3002,39 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, - "me-south-1" : { }, - "sa-east-1" : { }, - "us-east-1" : { }, - "us-east-2" : { }, - "us-west-1" : { }, - "us-west-2" : { } - } - }, - "license-manager" : { + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "lambda-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "lambda-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "lambda-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "lambda-fips.us-west-2.amazonaws.com" + }, + "me-south-1" : { }, + "sa-east-1" : { }, + "us-east-1" : { }, + "us-east-2" : { }, + "us-west-1" : { }, + "us-west-2" : { } + } + }, + "license-manager" : { "endpoints" : { "ap-east-1" : { }, "ap-northeast-1" : { }, @@ -2420,6 +3048,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "license-manager-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "license-manager-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "license-manager-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "license-manager-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -2476,7 +3128,9 @@ "managedblockchain" : { "endpoints" : { "ap-northeast-1" : { }, + "ap-northeast-2" : { }, "ap-southeast-1" : { }, + "eu-west-1" : { }, "us-east-1" : { } } }, @@ -2550,6 +3204,7 @@ "ap-southeast-1" : { }, "ap-southeast-2" : { }, "eu-central-1" : { }, + "eu-north-1" : { }, "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, @@ -2567,6 +3222,7 @@ "eu-central-1" : { }, "eu-north-1" : { }, "eu-west-1" : { }, + "eu-west-2" : { }, "us-east-1" : { }, "us-west-2" : { } } @@ -2600,6 +3256,7 @@ }, "mgh" : { "endpoints" : { + "ap-southeast-2" : { }, "eu-central-1" : { }, "us-west-2" : { } } @@ -2639,6 +3296,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "monitoring-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "monitoring-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "monitoring-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + 
"credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "monitoring-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -2893,6 +3574,12 @@ "region" : "us-east-1" }, "hostname" : "organizations.us-east-1.amazonaws.com" + }, + "fips-aws-global" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "organizations-fips.us-east-1.amazonaws.com" } }, "isRegionalized" : false, @@ -2969,6 +3656,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "polly-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "polly-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "polly-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "polly-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -3119,6 +3830,36 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "redshift-fips.ca-central-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "redshift-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "redshift-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "redshift-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "redshift-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -3234,6 +3975,8 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "me-south-1" : { }, + "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, "us-west-1" : { }, @@ -3768,7 +4511,18 @@ "sslCommonName" : "shield.us-east-1.amazonaws.com" }, "endpoints" : { - "us-east-1" : { } + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "shield-fips.us-east-1.amazonaws.com" + }, + "us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "shield.us-east-1.amazonaws.com" + } }, "isRegionalized" : false }, @@ -3830,6 +4584,96 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-ap-northeast-1" : { + "credentialScope" : { + "region" : "ap-northeast-1" + }, + "hostname" : "snowball-fips.ap-northeast-1.amazonaws.com" + }, + "fips-ap-northeast-2" : { + "credentialScope" : { + "region" : "ap-northeast-2" + }, + "hostname" : "snowball-fips.ap-northeast-2.amazonaws.com" + }, + "fips-ap-south-1" : { + "credentialScope" : { + "region" : "ap-south-1" + }, + "hostname" : "snowball-fips.ap-south-1.amazonaws.com" + }, + "fips-ap-southeast-1" : { + "credentialScope" : { + "region" : "ap-southeast-1" + }, + "hostname" : "snowball-fips.ap-southeast-1.amazonaws.com" + }, + "fips-ap-southeast-2" : { + "credentialScope" : { + "region" : "ap-southeast-2" + }, + "hostname" : "snowball-fips.ap-southeast-2.amazonaws.com" + }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "snowball-fips.ca-central-1.amazonaws.com" + }, + "fips-eu-central-1" : { + 
"credentialScope" : { + "region" : "eu-central-1" + }, + "hostname" : "snowball-fips.eu-central-1.amazonaws.com" + }, + "fips-eu-west-1" : { + "credentialScope" : { + "region" : "eu-west-1" + }, + "hostname" : "snowball-fips.eu-west-1.amazonaws.com" + }, + "fips-eu-west-2" : { + "credentialScope" : { + "region" : "eu-west-2" + }, + "hostname" : "snowball-fips.eu-west-2.amazonaws.com" + }, + "fips-eu-west-3" : { + "credentialScope" : { + "region" : "eu-west-3" + }, + "hostname" : "snowball-fips.eu-west-3.amazonaws.com" + }, + "fips-sa-east-1" : { + "credentialScope" : { + "region" : "sa-east-1" + }, + "hostname" : "snowball-fips.sa-east-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "snowball-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "snowball-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "snowball-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "snowball-fips.us-west-2.amazonaws.com" + }, "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, @@ -3854,6 +4698,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "sns-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "sns-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "sns-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "sns-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -3928,13 +4796,61 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ssm-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ssm-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ssm-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ssm-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, - "us-east-1" : { }, - "us-east-2" : { }, - "us-west-1" : { }, - "us-west-2" : { } - } + "ssm-facade-fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "ssm-facade-fips.us-east-1.amazonaws.com" + }, + "ssm-facade-fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "ssm-facade-fips.us-east-2.amazonaws.com" + }, + "ssm-facade-fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "ssm-facade-fips.us-west-1.amazonaws.com" + }, + "ssm-facade-fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "ssm-facade-fips.us-west-2.amazonaws.com" + }, + "us-east-1" : { }, + "us-east-2" : { }, + "us-west-1" : { }, + "us-west-2" : { } + } }, "states" : { "endpoints" : { @@ -3950,6 +4866,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + 
"hostname" : "states-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "states-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "states-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "states-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -4122,6 +5062,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "swf-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "swf-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "swf-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "swf-fips.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -4168,6 +5132,30 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "fips.transcribe.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "fips.transcribe.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "fips.transcribe.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "fips.transcribe.us-west-2.amazonaws.com" + }, "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, @@ -4249,6 +5237,12 @@ }, "waf" : { "endpoints" : { + "aws-fips" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "waf-fips.amazonaws.com" + }, "aws-global" : { "credentialScope" : { "region" : "us-east-1" @@ -4261,22 +5255,222 @@ }, "waf-regional" : { "endpoints" : { - "ap-northeast-1" : { }, - "ap-northeast-2" : { }, - "ap-south-1" : { }, - "ap-southeast-1" : { }, - "ap-southeast-2" : { }, - "ca-central-1" : { }, - "eu-central-1" : { }, - "eu-north-1" : { }, - "eu-west-1" : { }, - "eu-west-2" : { }, - "eu-west-3" : { }, - "sa-east-1" : { }, - "us-east-1" : { }, - "us-east-2" : { }, - "us-west-1" : { }, - "us-west-2" : { } + "ap-east-1" : { + "credentialScope" : { + "region" : "ap-east-1" + }, + "hostname" : "waf-regional.ap-east-1.amazonaws.com" + }, + "ap-northeast-1" : { + "credentialScope" : { + "region" : "ap-northeast-1" + }, + "hostname" : "waf-regional.ap-northeast-1.amazonaws.com" + }, + "ap-northeast-2" : { + "credentialScope" : { + "region" : "ap-northeast-2" + }, + "hostname" : "waf-regional.ap-northeast-2.amazonaws.com" + }, + "ap-south-1" : { + "credentialScope" : { + "region" : "ap-south-1" + }, + "hostname" : "waf-regional.ap-south-1.amazonaws.com" + }, + "ap-southeast-1" : { + "credentialScope" : { + "region" : "ap-southeast-1" + }, + "hostname" : "waf-regional.ap-southeast-1.amazonaws.com" + }, + "ap-southeast-2" : { + "credentialScope" : { + "region" : "ap-southeast-2" + }, + "hostname" : "waf-regional.ap-southeast-2.amazonaws.com" + }, + "ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "waf-regional.ca-central-1.amazonaws.com" + }, + "eu-central-1" : { + 
"credentialScope" : { + "region" : "eu-central-1" + }, + "hostname" : "waf-regional.eu-central-1.amazonaws.com" + }, + "eu-north-1" : { + "credentialScope" : { + "region" : "eu-north-1" + }, + "hostname" : "waf-regional.eu-north-1.amazonaws.com" + }, + "eu-west-1" : { + "credentialScope" : { + "region" : "eu-west-1" + }, + "hostname" : "waf-regional.eu-west-1.amazonaws.com" + }, + "eu-west-2" : { + "credentialScope" : { + "region" : "eu-west-2" + }, + "hostname" : "waf-regional.eu-west-2.amazonaws.com" + }, + "eu-west-3" : { + "credentialScope" : { + "region" : "eu-west-3" + }, + "hostname" : "waf-regional.eu-west-3.amazonaws.com" + }, + "fips-ap-east-1" : { + "credentialScope" : { + "region" : "ap-east-1" + }, + "hostname" : "waf-regional-fips.ap-east-1.amazonaws.com" + }, + "fips-ap-northeast-1" : { + "credentialScope" : { + "region" : "ap-northeast-1" + }, + "hostname" : "waf-regional-fips.ap-northeast-1.amazonaws.com" + }, + "fips-ap-northeast-2" : { + "credentialScope" : { + "region" : "ap-northeast-2" + }, + "hostname" : "waf-regional-fips.ap-northeast-2.amazonaws.com" + }, + "fips-ap-south-1" : { + "credentialScope" : { + "region" : "ap-south-1" + }, + "hostname" : "waf-regional-fips.ap-south-1.amazonaws.com" + }, + "fips-ap-southeast-1" : { + "credentialScope" : { + "region" : "ap-southeast-1" + }, + "hostname" : "waf-regional-fips.ap-southeast-1.amazonaws.com" + }, + "fips-ap-southeast-2" : { + "credentialScope" : { + "region" : "ap-southeast-2" + }, + "hostname" : "waf-regional-fips.ap-southeast-2.amazonaws.com" + }, + "fips-ca-central-1" : { + "credentialScope" : { + "region" : "ca-central-1" + }, + "hostname" : "waf-regional-fips.ca-central-1.amazonaws.com" + }, + "fips-eu-central-1" : { + "credentialScope" : { + "region" : "eu-central-1" + }, + "hostname" : "waf-regional-fips.eu-central-1.amazonaws.com" + }, + "fips-eu-north-1" : { + "credentialScope" : { + "region" : "eu-north-1" + }, + "hostname" : "waf-regional-fips.eu-north-1.amazonaws.com" + }, + "fips-eu-west-1" : { + "credentialScope" : { + "region" : "eu-west-1" + }, + "hostname" : "waf-regional-fips.eu-west-1.amazonaws.com" + }, + "fips-eu-west-2" : { + "credentialScope" : { + "region" : "eu-west-2" + }, + "hostname" : "waf-regional-fips.eu-west-2.amazonaws.com" + }, + "fips-eu-west-3" : { + "credentialScope" : { + "region" : "eu-west-3" + }, + "hostname" : "waf-regional-fips.eu-west-3.amazonaws.com" + }, + "fips-me-south-1" : { + "credentialScope" : { + "region" : "me-south-1" + }, + "hostname" : "waf-regional-fips.me-south-1.amazonaws.com" + }, + "fips-sa-east-1" : { + "credentialScope" : { + "region" : "sa-east-1" + }, + "hostname" : "waf-regional-fips.sa-east-1.amazonaws.com" + }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "waf-regional-fips.us-east-1.amazonaws.com" + }, + "fips-us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "waf-regional-fips.us-east-2.amazonaws.com" + }, + "fips-us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "waf-regional-fips.us-west-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "waf-regional-fips.us-west-2.amazonaws.com" + }, + "me-south-1" : { + "credentialScope" : { + "region" : "me-south-1" + }, + "hostname" : "waf-regional.me-south-1.amazonaws.com" + }, + "sa-east-1" : { + "credentialScope" : { + "region" : "sa-east-1" + }, + "hostname" : "waf-regional.sa-east-1.amazonaws.com" + }, + "us-east-1" : { + 
"credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "waf-regional.us-east-1.amazonaws.com" + }, + "us-east-2" : { + "credentialScope" : { + "region" : "us-east-2" + }, + "hostname" : "waf-regional.us-east-2.amazonaws.com" + }, + "us-west-1" : { + "credentialScope" : { + "region" : "us-west-1" + }, + "hostname" : "waf-regional.us-west-1.amazonaws.com" + }, + "us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "waf-regional.us-west-2.amazonaws.com" + } } }, "workdocs" : { @@ -4285,6 +5479,18 @@ "ap-southeast-1" : { }, "ap-southeast-2" : { }, "eu-west-1" : { }, + "fips-us-east-1" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "workdocs-fips.us-east-1.amazonaws.com" + }, + "fips-us-west-2" : { + "credentialScope" : { + "region" : "us-west-2" + }, + "hostname" : "workdocs-fips.us-west-2.amazonaws.com" + }, "us-east-1" : { }, "us-west-2" : { } } @@ -4378,6 +5584,12 @@ } } }, + "api.sagemaker" : { + "endpoints" : { + "cn-north-1" : { }, + "cn-northwest-1" : { } + } + }, "apigateway" : { "endpoints" : { "cn-north-1" : { }, @@ -4400,6 +5612,7 @@ }, "athena" : { "endpoints" : { + "cn-north-1" : { }, "cn-northwest-1" : { } } }, @@ -4546,7 +5759,19 @@ "elasticfilesystem" : { "endpoints" : { "cn-north-1" : { }, - "cn-northwest-1" : { } + "cn-northwest-1" : { }, + "fips-cn-north-1" : { + "credentialScope" : { + "region" : "cn-north-1" + }, + "hostname" : "elasticfilesystem-fips.cn-north-1.amazonaws.com.cn" + }, + "fips-cn-northwest-1" : { + "credentialScope" : { + "region" : "cn-northwest-1" + }, + "hostname" : "elasticfilesystem-fips.cn-northwest-1.amazonaws.com.cn" + } } }, "elasticloadbalancing" : { @@ -4601,6 +5826,7 @@ }, "glue" : { "endpoints" : { + "cn-north-1" : { }, "cn-northwest-1" : { } } }, @@ -4642,6 +5868,18 @@ "cn-northwest-1" : { } } }, + "iotsecuredtunneling" : { + "endpoints" : { + "cn-north-1" : { }, + "cn-northwest-1" : { } + } + }, + "kafka" : { + "endpoints" : { + "cn-north-1" : { }, + "cn-northwest-1" : { } + } + }, "kinesis" : { "endpoints" : { "cn-north-1" : { }, @@ -4718,6 +5956,12 @@ "cn-northwest-1" : { } } }, + "runtime.sagemaker" : { + "endpoints" : { + "cn-north-1" : { }, + "cn-northwest-1" : { } + } + }, "s3" : { "defaults" : { "protocols" : [ "http", "https" ], @@ -4777,7 +6021,13 @@ }, "snowball" : { "endpoints" : { - "cn-north-1" : { } + "cn-north-1" : { }, + "fips-cn-north-1" : { + "credentialScope" : { + "region" : "cn-north-1" + }, + "hostname" : "snowball-fips.cn-north-1.amazonaws.com.cn" + } } }, "sns" : { @@ -4931,6 +6181,18 @@ }, "api.ecr" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "ecr-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "ecr-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { "credentialScope" : { "region" : "us-gov-east-1" @@ -5019,6 +6281,18 @@ }, "batch" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "batch.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "batch.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5059,11 +6333,29 @@ "codebuild" : { "endpoints" : { "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1-fips" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : 
"codebuild-fips.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { }, + "us-gov-west-1-fips" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "codebuild-fips.us-gov-west-1.amazonaws.com" + } } }, "codecommit" : { "endpoints" : { + "fips" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "codecommit-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5086,11 +6378,28 @@ } } }, + "codepipeline" : { + "endpoints" : { + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "codepipeline-fips.us-gov-west-1.amazonaws.com" + }, + "us-gov-west-1" : { } + } + }, "comprehend" : { "defaults" : { "protocols" : [ "https" ] }, "endpoints" : { + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "comprehend-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-west-1" : { } } }, @@ -5136,18 +6445,46 @@ }, "directconnect" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "directconnect.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "directconnect.us-gov-west-1.amazonaws.com" + } } }, "dms" : { "endpoints" : { + "dms-fips" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "dms.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } }, "ds" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "ds-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "ds-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5172,12 +6509,34 @@ }, "ec2" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "ec2.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "ec2.us-gov-west-1.amazonaws.com" + } } }, "ecs" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "ecs-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "ecs-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5196,12 +6555,34 @@ }, "elasticbeanstalk" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "elasticbeanstalk.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "elasticbeanstalk.us-gov-west-1.amazonaws.com" + } } }, "elasticfilesystem" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "elasticfilesystem-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "elasticfilesystem-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5242,6 +6623,18 @@ }, "firehose" : { "endpoints" : { + "fips-us-gov-east-1" 
: { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "firehose-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "firehose-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5265,6 +6658,18 @@ }, "glue" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "glue-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "glue-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5306,6 +6711,18 @@ }, "inspector" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "inspector-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "inspector-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5345,12 +6762,36 @@ }, "lambda" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "lambda-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "lambda-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } }, "license-manager" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "license-manager-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "license-manager-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5379,6 +6820,18 @@ }, "monitoring" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "monitoring.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "monitoring.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5411,8 +6864,20 @@ "isRegionalized" : false, "partitionEndpoint" : "aws-us-gov-global" }, + "outposts" : { + "endpoints" : { + "us-gov-east-1" : { }, + "us-gov-west-1" : { } + } + }, "polly" : { "endpoints" : { + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "polly-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-west-1" : { } } }, @@ -5430,8 +6895,18 @@ }, "redshift" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "redshift.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "redshift.us-gov-west-1.amazonaws.com" + } } }, "rekognition" : { @@ -5570,6 +7045,13 @@ }, "servicecatalog" : { "endpoints" : { + "us-gov-east-1" : { }, + "us-gov-east-1-fips" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "servicecatalog-fips.us-gov-east-1.amazonaws.com" + }, "us-gov-west-1" : { }, "us-gov-west-1-fips" : { "credentialScope" : { @@ -5599,6 +7081,18 @@ }, "snowball" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : 
"us-gov-east-1" + }, + "hostname" : "snowball-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "snowball-fips.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5628,6 +7122,18 @@ }, "states" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "states-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "states.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5680,8 +7186,18 @@ }, "swf" : { "endpoints" : { - "us-gov-east-1" : { }, - "us-gov-west-1" : { } + "us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "swf.us-gov-east-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "swf.us-gov-west-1.amazonaws.com" + } } }, "tagging" : { @@ -5695,6 +7211,18 @@ "protocols" : [ "https" ] }, "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "hostname" : "fips.transcribe.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "fips.transcribe.us-gov-west-1.amazonaws.com" + }, "us-gov-east-1" : { }, "us-gov-west-1" : { } } @@ -5715,7 +7243,18 @@ }, "waf-regional" : { "endpoints" : { - "us-gov-west-1" : { } + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "waf-regional-fips.us-gov-west-1.amazonaws.com" + }, + "us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "waf-regional.us-gov-west-1.amazonaws.com" + } } }, "workspaces" : { @@ -5813,6 +7352,12 @@ }, "dms" : { "endpoints" : { + "dms-fips" : { + "credentialScope" : { + "region" : "us-iso-east-1" + }, + "hostname" : "dms.us-iso-east-1.c2s.ic.gov" + }, "us-iso-east-1" : { } } }, @@ -6073,6 +7618,12 @@ }, "dms" : { "endpoints" : { + "dms-fips" : { + "credentialScope" : { + "region" : "us-isob-east-1" + }, + "hostname" : "dms.us-isob-east-1.sc2s.sgov.gov" + }, "us-isob-east-1" : { } } }, diff --git a/service/accessanalyzer/api_enums.go b/service/accessanalyzer/api_enums.go index 45ed03a9038..2a695279c85 100644 --- a/service/accessanalyzer/api_enums.go +++ b/service/accessanalyzer/api_enums.go @@ -2,6 +2,25 @@ package accessanalyzer +type AnalyzerStatus string + +// Enum values for AnalyzerStatus +const ( + AnalyzerStatusActive AnalyzerStatus = "ACTIVE" + AnalyzerStatusCreating AnalyzerStatus = "CREATING" + AnalyzerStatusDisabled AnalyzerStatus = "DISABLED" + AnalyzerStatusFailed AnalyzerStatus = "FAILED" +) + +func (enum AnalyzerStatus) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum AnalyzerStatus) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type FindingStatus string // Enum values for FindingStatus @@ -54,6 +73,25 @@ func (enum OrderBy) MarshalValueBuf(b []byte) ([]byte, error) { return append(b, enum...), nil } +type ReasonCode string + +// Enum values for ReasonCode +const ( + ReasonCodeAwsServiceAccessDisabled ReasonCode = "AWS_SERVICE_ACCESS_DISABLED" + ReasonCodeDelegatedAdministratorDeregistered ReasonCode = "DELEGATED_ADMINISTRATOR_DEREGISTERED" + ReasonCodeOrganizationDeleted ReasonCode = "ORGANIZATION_DELETED" + 
ReasonCodeServiceLinkedRoleCreationFailed ReasonCode = "SERVICE_LINKED_ROLE_CREATION_FAILED" +) + +func (enum ReasonCode) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum ReasonCode) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type ResourceType string // Enum values for ResourceType @@ -79,7 +117,8 @@ type Type string // Enum values for Type const ( - TypeAccount Type = "ACCOUNT" + TypeAccount Type = "ACCOUNT" + TypeOrganization Type = "ORGANIZATION" ) func (enum Type) MarshalValue() (string, error) { diff --git a/service/accessanalyzer/api_types.go b/service/accessanalyzer/api_types.go index ebc7b7084ec..9931c4163ae 100644 --- a/service/accessanalyzer/api_types.go +++ b/service/accessanalyzer/api_types.go @@ -46,6 +46,11 @@ type AnalyzedResource struct { // ResourceArn is a required field ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` + // The AWS account ID that owns the resource. + // + // ResourceOwnerAccount is a required field + ResourceOwnerAccount *string `locationName:"resourceOwnerAccount" type:"string" required:"true"` + // The type of the resource that was analyzed. // // ResourceType is a required field @@ -114,6 +119,12 @@ func (s AnalyzedResource) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "resourceArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.ResourceOwnerAccount != nil { + v := *s.ResourceOwnerAccount + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "resourceOwnerAccount", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } if len(s.ResourceType) > 0 { v := s.ResourceType @@ -157,6 +168,11 @@ type AnalyzedResourceSummary struct { // ResourceArn is a required field ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` + // The AWS account ID that owns the resource. + // + // ResourceOwnerAccount is a required field + ResourceOwnerAccount *string `locationName:"resourceOwnerAccount" type:"string" required:"true"` + // The type of resource that was analyzed. // // ResourceType is a required field @@ -176,6 +192,12 @@ func (s AnalyzedResourceSummary) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "resourceArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.ResourceOwnerAccount != nil { + v := *s.ResourceOwnerAccount + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "resourceOwnerAccount", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } if len(s.ResourceType) > 0 { v := s.ResourceType @@ -210,6 +232,23 @@ type AnalyzerSummary struct { // Name is a required field Name *string `locationName:"name" min:"1" type:"string" required:"true"` + // The status of the analyzer. An Active analyzer successfully monitors supported + // resources and generates new findings. The analyzer is Disabled when a user + // action, such as removing trusted access for IAM Access Analyzer from AWS + // Organizations, causes the analyzer to stop generating new findings. The status + // is Creating when the analyzer creation is in progress and Failed when the + // analyzer creation has failed. 
+ // + // Status is a required field + Status AnalyzerStatus `locationName:"status" type:"string" required:"true" enum:"true"` + + // The statusReason provides more details about the current status of the analyzer. + // For example, if the creation for the analyzer fails, a Failed status is displayed. + // For an analyzer with organization as the type, this failure can be due to + // an issue with creating the service-linked roles required in the member accounts + // of the AWS organization. + StatusReason *StatusReason `locationName:"statusReason" type:"structure"` + // The tags added to the analyzer. Tags map[string]string `locationName:"tags" type:"map"` @@ -259,6 +298,18 @@ func (s AnalyzerSummary) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "name", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if len(s.Status) > 0 { + v := s.Status + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "status", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + if s.StatusReason != nil { + v := s.StatusReason + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "statusReason", v, metadata) + } if s.Tags != nil { v := s.Tags @@ -476,6 +527,11 @@ type Finding struct { // The resource that an external principal has access to. Resource *string `locationName:"resource" type:"string"` + // The AWS account ID that owns the resource. + // + // ResourceOwnerAccount is a required field + ResourceOwnerAccount *string `locationName:"resourceOwnerAccount" type:"string" required:"true"` + // The type of the resource reported in the finding. // // ResourceType is a required field @@ -573,6 +629,12 @@ func (s Finding) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "resource", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.ResourceOwnerAccount != nil { + v := *s.ResourceOwnerAccount + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "resourceOwnerAccount", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } if len(s.ResourceType) > 0 { v := s.ResourceType @@ -637,6 +699,11 @@ type FindingSummary struct { // The resource that the external principal has access to. Resource *string `locationName:"resource" type:"string"` + // The AWS account ID that owns the resource. + // + // ResourceOwnerAccount is a required field + ResourceOwnerAccount *string `locationName:"resourceOwnerAccount" type:"string" required:"true"` + // The type of the resource that the external principal has access to. // // ResourceType is a required field @@ -734,6 +801,12 @@ func (s FindingSummary) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "resource", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.ResourceOwnerAccount != nil { + v := *s.ResourceOwnerAccount + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "resourceOwnerAccount", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } if len(s.ResourceType) > 0 { v := s.ResourceType @@ -861,6 +934,36 @@ func (s SortCriteria) MarshalFields(e protocol.FieldEncoder) error { return nil } +// Provides more details about the current status of the analyzer. For example, +// if the creation for the analyzer fails, a Failed status is displayed. 
For +// an analyzer with organization as the type, this failure can be due to an +// issue with creating the service-linked roles required in the member accounts +// of the AWS organization. +type StatusReason struct { + _ struct{} `type:"structure"` + + // The reason code for the current status of the analyzer. + // + // Code is a required field + Code ReasonCode `locationName:"code" type:"string" required:"true" enum:"true"` +} + +// String returns the string representation +func (s StatusReason) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s StatusReason) MarshalFields(e protocol.FieldEncoder) error { + if len(s.Code) > 0 { + v := s.Code + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "code", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + return nil +} + // Contains information about a validation exception. type ValidationExceptionField struct { _ struct{} `type:"structure"` diff --git a/service/acm/api_errors.go b/service/acm/api_errors.go index 5dfd1e97d46..c80a5352947 100644 --- a/service/acm/api_errors.go +++ b/service/acm/api_errors.go @@ -44,7 +44,7 @@ const ( // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // - // An ACM limit has been exceeded. + // An ACM quota has been exceeded. ErrCodeLimitExceededException = "LimitExceededException" // ErrCodeRequestInProgressException for service response error code diff --git a/service/acm/api_op_GetCertificate.go b/service/acm/api_op_GetCertificate.go index 08757a2d603..5971dcea70a 100644 --- a/service/acm/api_op_GetCertificate.go +++ b/service/acm/api_op_GetCertificate.go @@ -48,12 +48,12 @@ func (s *GetCertificateInput) Validate() error { type GetCertificateOutput struct { _ struct{} `type:"structure"` - // String that contains the ACM certificate represented by the ARN specified - // at input. + // The ACM-issued certificate corresponding to the ARN specified as input. Certificate *string `min:"1" type:"string"` - // The certificate chain that contains the root certificate issued by the certificate - // authority (CA). + // Certificates forming the requested certificate's chain of trust. The chain + // consists of the certificate of the issuing CA and the intermediate certificates + // of any other subordinate CAs. CertificateChain *string `min:"1" type:"string"` } @@ -67,12 +67,11 @@ const opGetCertificate = "GetCertificate" // GetCertificateRequest returns a request value for making API operation for // AWS Certificate Manager. // -// Retrieves a certificate specified by an ARN and its certificate chain . The -// chain is an ordered list of certificates that contains the end entity certificate, -// intermediate certificates of subordinate CAs, and the root certificate in -// that order. The certificate and certificate chain are base64 encoded. If -// you want to decode the certificate to see the individual fields, you can -// use OpenSSL. +// Retrieves an Amazon-issued certificate and its certificate chain. The chain +// consists of the certificate of the issuing CA and the intermediate certificates +// of any other subordinate CAs. All of the certificates are base64 encoded. +// You can use OpenSSL (https://wiki.openssl.org/index.php/Command_Line_Utilities) +// to decode the certificates and inspect individual fields. // // // Example sending a request using GetCertificateRequest. 
// req := client.GetCertificateRequest(params) diff --git a/service/acm/api_op_RequestCertificate.go b/service/acm/api_op_RequestCertificate.go index 88ee504fb9a..a60fffd3523 100644 --- a/service/acm/api_op_RequestCertificate.go +++ b/service/acm/api_op_RequestCertificate.go @@ -59,9 +59,9 @@ type RequestCertificateInput struct { // of the ACM certificate. For example, add the name www.example.net to a certificate // for which the DomainName field is www.example.com if users can reach your // site by using either name. The maximum number of domain names that you can - // add to an ACM certificate is 100. However, the initial limit is 10 domain - // names. If you need more than 10 names, you must request a limit increase. - // For more information, see Limits (https://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html). + // add to an ACM certificate is 100. However, the initial quota is 10 domain + // names. If you need more than 10 names, you must request a quota increase. + // For more information, see Quotas (https://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html). // // The maximum length of a SAN DNS name is 253 octets. The name is made up of // multiple labels separated by periods. No label can be longer than 63 octets. diff --git a/service/acm/api_types.go b/service/acm/api_types.go index 0bdc27b4ad6..db179c838b5 100644 --- a/service/acm/api_types.go +++ b/service/acm/api_types.go @@ -196,6 +196,11 @@ type DomainValidation struct { // Contains the CNAME record that you add to your DNS database for domain validation. // For more information, see Use DNS to Validate Domain Ownership (https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate-dns.html). + // + // Note: The CNAME information that you need does not include the name of your + // domain. If you include your domain name in the DNS database CNAME record, + // validation fails. For example, if the name is "_a79865eb4cd1a6ab990a45779b4e0b96.yourdomain.com", + // only "_a79865eb4cd1a6ab990a45779b4e0b96" must be used. ResourceRecord *ResourceRecord `type:"structure"` // The domain name that ACM used to send domain validation emails. diff --git a/service/apigateway/api_op_CreateApiKey.go b/service/apigateway/api_op_CreateApiKey.go index b7270c83231..cb725d004f6 100644 --- a/service/apigateway/api_op_CreateApiKey.go +++ b/service/apigateway/api_op_CreateApiKey.go @@ -26,7 +26,8 @@ type CreateApiKeyInput struct { Enabled *bool `locationName:"enabled" type:"boolean"` // Specifies whether (true) or not (false) the key identifier is distinct from - // the created API key value. + // the created API key value. This parameter is deprecated and should not be + // used. GenerateDistinctId *bool `locationName:"generateDistinctId" type:"boolean"` // The name of the ApiKey. diff --git a/service/apigateway/api_op_CreateBasePathMapping.go b/service/apigateway/api_op_CreateBasePathMapping.go index 52816290967..55291c484b4 100644 --- a/service/apigateway/api_op_CreateBasePathMapping.go +++ b/service/apigateway/api_op_CreateBasePathMapping.go @@ -31,8 +31,8 @@ type CreateBasePathMappingInput struct { RestApiId *string `locationName:"restApiId" type:"string" required:"true"` // The name of the API's stage that you want to use for this mapping. Specify - // '(none)' if you do not want callers to explicitly specify the stage name - // after any base path name. + // '(none)' if you want callers to explicitly specify the stage name after any + // base path name. 
Stage *string `locationName:"stage" type:"string"` } diff --git a/service/apigateway/api_op_CreateVpcLink.go b/service/apigateway/api_op_CreateVpcLink.go index 73e1de8e32e..9e9b953d362 100644 --- a/service/apigateway/api_op_CreateVpcLink.go +++ b/service/apigateway/api_op_CreateVpcLink.go @@ -30,8 +30,8 @@ type CreateVpcLinkInput struct { // tag value can be up to 256 characters. Tags map[string]string `locationName:"tags" type:"map"` - // [Required] The ARNs of network load balancers of the VPC targeted by the - // VPC link. The network load balancers must be owned by the same AWS account + // [Required] The ARN of the network load balancer of the VPC targeted by the + // VPC link. The network load balancer must be owned by the same AWS account // of the API owner. // // TargetArns is a required field @@ -104,7 +104,7 @@ func (s CreateVpcLinkInput) MarshalFields(e protocol.FieldEncoder) error { return nil } -// A API Gateway VPC link for a RestApi to access resources in an Amazon Virtual +// An API Gateway VPC link for a RestApi to access resources in an Amazon Virtual // Private Cloud (VPC). // // To enable access to a resource in an Amazon Virtual Private Cloud through @@ -138,8 +138,9 @@ type CreateVpcLinkOutput struct { // The collection of tags. Each tag element is associated with a given resource. Tags map[string]string `locationName:"tags" type:"map"` - // The ARNs of network load balancers of the VPC targeted by the VPC link. The - // network load balancers must be owned by the same AWS account of the API owner. + // The ARN of the network load balancer of the VPC targeted by the VPC link. + // The network load balancer must be owned by the same AWS account of the API + // owner. TargetArns []string `locationName:"targetArns" type:"list"` } diff --git a/service/apigateway/api_op_GetTags.go b/service/apigateway/api_op_GetTags.go index e970e7c72bb..ef4f542f588 100644 --- a/service/apigateway/api_op_GetTags.go +++ b/service/apigateway/api_op_GetTags.go @@ -22,8 +22,7 @@ type GetTagsInput struct { // set. Position *string `location:"querystring" locationName:"position" type:"string"` - // [Required] The ARN of a resource that can be tagged. The resource ARN must - // be URL-encoded. + // [Required] The ARN of a resource that can be tagged. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` diff --git a/service/apigateway/api_op_GetVpcLink.go b/service/apigateway/api_op_GetVpcLink.go index d25cffcb95a..8683c6951d8 100644 --- a/service/apigateway/api_op_GetVpcLink.go +++ b/service/apigateway/api_op_GetVpcLink.go @@ -53,7 +53,7 @@ func (s GetVpcLinkInput) MarshalFields(e protocol.FieldEncoder) error { return nil } -// A API Gateway VPC link for a RestApi to access resources in an Amazon Virtual +// An API Gateway VPC link for a RestApi to access resources in an Amazon Virtual // Private Cloud (VPC). // // To enable access to a resource in an Amazon Virtual Private Cloud through @@ -87,8 +87,9 @@ type GetVpcLinkOutput struct { // The collection of tags. Each tag element is associated with a given resource. Tags map[string]string `locationName:"tags" type:"map"` - // The ARNs of network load balancers of the VPC targeted by the VPC link. The - // network load balancers must be owned by the same AWS account of the API owner. + // The ARN of the network load balancer of the VPC targeted by the VPC link. + // The network load balancer must be owned by the same AWS account of the API + // owner. 
TargetArns []string `locationName:"targetArns" type:"list"` } diff --git a/service/apigateway/api_op_TagResource.go b/service/apigateway/api_op_TagResource.go index 9fb4f882b5e..34aeb6664f1 100644 --- a/service/apigateway/api_op_TagResource.go +++ b/service/apigateway/api_op_TagResource.go @@ -15,8 +15,7 @@ import ( type TagResourceInput struct { _ struct{} `type:"structure"` - // [Required] The ARN of a resource that can be tagged. The resource ARN must - // be URL-encoded. + // [Required] The ARN of a resource that can be tagged. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` diff --git a/service/apigateway/api_op_UntagResource.go b/service/apigateway/api_op_UntagResource.go index 4a129bb185b..0cf17be5f0e 100644 --- a/service/apigateway/api_op_UntagResource.go +++ b/service/apigateway/api_op_UntagResource.go @@ -15,8 +15,7 @@ import ( type UntagResourceInput struct { _ struct{} `type:"structure"` - // [Required] The ARN of a resource that can be tagged. The resource ARN must - // be URL-encoded. + // [Required] The ARN of a resource that can be tagged. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` diff --git a/service/apigateway/api_op_UpdateVpcLink.go b/service/apigateway/api_op_UpdateVpcLink.go index dbcb633eed4..8ff60bb0ddc 100644 --- a/service/apigateway/api_op_UpdateVpcLink.go +++ b/service/apigateway/api_op_UpdateVpcLink.go @@ -69,7 +69,7 @@ func (s UpdateVpcLinkInput) MarshalFields(e protocol.FieldEncoder) error { return nil } -// A API Gateway VPC link for a RestApi to access resources in an Amazon Virtual +// An API Gateway VPC link for a RestApi to access resources in an Amazon Virtual // Private Cloud (VPC). // // To enable access to a resource in an Amazon Virtual Private Cloud through @@ -103,8 +103,9 @@ type UpdateVpcLinkOutput struct { // The collection of tags. Each tag element is associated with a given resource. Tags map[string]string `locationName:"tags" type:"map"` - // The ARNs of network load balancers of the VPC targeted by the VPC link. The - // network load balancers must be owned by the same AWS account of the API owner. + // The ARN of the network load balancer of the VPC targeted by the VPC link. + // The network load balancer must be owned by the same AWS account of the API + // owner. TargetArns []string `locationName:"targetArns" type:"list"` } diff --git a/service/apigateway/api_types.go b/service/apigateway/api_types.go index e5b1114847b..875ca3ffdaf 100644 --- a/service/apigateway/api_types.go +++ b/service/apigateway/api_types.go @@ -18,7 +18,9 @@ var _ = awsutil.Prettify type AccessLogSettings struct { _ struct{} `type:"structure"` - // The ARN of the CloudWatch Logs log group to receive access logs. + // The Amazon Resource Name (ARN) of the CloudWatch Logs log group or Kinesis + // Data Firehose delivery stream to receive access logs. If you specify a Kinesis + // Data Firehose delivery stream, the stream name must begin with amazon-apigateway-. DestinationArn *string `locationName:"destinationArn" type:"string"` // A single line format of the access logs of data, as specified by selected @@ -2124,7 +2126,9 @@ type MethodSetting struct { // Specifies the logging level for this method, which affects the log entries // pushed to Amazon CloudWatch Logs. The PATCH path for this setting is /{method_setting_key}/logging/loglevel, - // and the available levels are OFF, ERROR, and INFO. 
+ // and the available levels are OFF, ERROR, and INFO. Choose ERROR to write + // only error-level entries to CloudWatch Logs, or choose INFO to include all + // ERROR events as well as extra informational events. LoggingLevel *string `locationName:"loggingLevel" type:"string"` // Specifies whether Amazon CloudWatch metrics are enabled for this method. @@ -3337,7 +3341,7 @@ func (s UsagePlanKey) MarshalFields(e protocol.FieldEncoder) error { return nil } -// A API Gateway VPC link for a RestApi to access resources in an Amazon Virtual +// An API Gateway VPC link for a RestApi to access resources in an Amazon Virtual // Private Cloud (VPC). // // To enable access to a resource in an Amazon Virtual Private Cloud through @@ -3371,8 +3375,9 @@ type VpcLink struct { // The collection of tags. Each tag element is associated with a given resource. Tags map[string]string `locationName:"tags" type:"map"` - // The ARNs of network load balancers of the VPC targeted by the VPC link. The - // network load balancers must be owned by the same AWS account of the API owner. + // The ARN of the network load balancer of the VPC targeted by the VPC link. + // The network load balancer must be owned by the same AWS account of the API + // owner. TargetArns []string `locationName:"targetArns" type:"list"` } diff --git a/service/apigatewayv2/api_op_CreateIntegration.go b/service/apigatewayv2/api_op_CreateIntegration.go index 158e465f913..801b16b1e66 100644 --- a/service/apigatewayv2/api_op_CreateIntegration.go +++ b/service/apigatewayv2/api_op_CreateIntegration.go @@ -72,7 +72,7 @@ type CreateIntegrationInput struct { // for more information. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // An integer with a value between [50-29000]. + // An integer with a value between [50-30000]. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. If you specify a TLS configuration, @@ -288,7 +288,7 @@ type CreateIntegrationOutput struct { // for more information. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // An integer with a value between [50-29000]. + // An integer with a value between [50-30000]. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. If you specify a TLS configuration, diff --git a/service/apigatewayv2/api_op_ExportApi.go b/service/apigatewayv2/api_op_ExportApi.go new file mode 100644 index 00000000000..4d41a8fb27a --- /dev/null +++ b/service/apigatewayv2/api_op_ExportApi.go @@ -0,0 +1,193 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package apigatewayv2 + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type ExportApiInput struct { + _ struct{} `type:"structure"` + + // ApiId is a required field + ApiId *string `location:"uri" locationName:"apiId" type:"string" required:"true"` + + ExportVersion *string `location:"querystring" locationName:"exportVersion" type:"string"` + + IncludeExtensions *bool `location:"querystring" locationName:"includeExtensions" type:"boolean"` + + // OutputType is a required field + OutputType *string `location:"querystring" locationName:"outputType" type:"string" required:"true"` + + // Specification is a required field + Specification *string `location:"uri" locationName:"specification" type:"string" required:"true"` + + StageName *string `location:"querystring" locationName:"stageName" type:"string"` +} + +// String returns the string representation +func (s ExportApiInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ExportApiInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "ExportApiInput"} + + if s.ApiId == nil { + invalidParams.Add(aws.NewErrParamRequired("ApiId")) + } + + if s.OutputType == nil { + invalidParams.Add(aws.NewErrParamRequired("OutputType")) + } + + if s.Specification == nil { + invalidParams.Add(aws.NewErrParamRequired("Specification")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s ExportApiInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.ApiId != nil { + v := *s.ApiId + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "apiId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.Specification != nil { + v := *s.Specification + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "specification", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.ExportVersion != nil { + v := *s.ExportVersion + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "exportVersion", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.IncludeExtensions != nil { + v := *s.IncludeExtensions + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "includeExtensions", protocol.BoolValue(v), metadata) + } + if s.OutputType != nil { + v := *s.OutputType + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "outputType", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.StageName != nil { + v := *s.StageName + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "stageName", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type ExportApiOutput struct { + _ struct{} `type:"structure" payload:"Body"` + + // Represents an exported definition of an API in a particular output format, + // for example, YAML. The API is serialized to the requested specification, + // for example, OpenAPI 3.0. 
+ Body []byte `locationName:"body" type:"blob"` +} + +// String returns the string representation +func (s ExportApiOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s ExportApiOutput) MarshalFields(e protocol.FieldEncoder) error { + if s.Body != nil { + v := s.Body + + metadata := protocol.Metadata{} + e.SetStream(protocol.PayloadTarget, "body", protocol.BytesStream(v), metadata) + } + return nil +} + +const opExportApi = "ExportApi" + +// ExportApiRequest returns a request value for making API operation for +// AmazonApiGatewayV2. +// +// Exports a definition of an API in a particular output format and specification. +// +// // Example sending a request using ExportApiRequest. +// req := client.ExportApiRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/apigatewayv2-2018-11-29/ExportApi +func (c *Client) ExportApiRequest(input *ExportApiInput) ExportApiRequest { + op := &aws.Operation{ + Name: opExportApi, + HTTPMethod: "GET", + HTTPPath: "/v2/apis/{apiId}/exports/{specification}", + } + + if input == nil { + input = &ExportApiInput{} + } + + req := c.newRequest(op, input, &ExportApiOutput{}) + return ExportApiRequest{Request: req, Input: input, Copy: c.ExportApiRequest} +} + +// ExportApiRequest is the request type for the +// ExportApi API operation. +type ExportApiRequest struct { + *aws.Request + Input *ExportApiInput + Copy func(*ExportApiInput) ExportApiRequest +} + +// Send marshals and sends the ExportApi API request. +func (r ExportApiRequest) Send(ctx context.Context) (*ExportApiResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &ExportApiResponse{ + ExportApiOutput: r.Request.Data.(*ExportApiOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// ExportApiResponse is the response type for the +// ExportApi API operation. +type ExportApiResponse struct { + *ExportApiOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// ExportApi request. +func (r *ExportApiResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/apigatewayv2/api_op_GetIntegration.go b/service/apigatewayv2/api_op_GetIntegration.go index dafd3860ec1..42c78502d19 100644 --- a/service/apigatewayv2/api_op_GetIntegration.go +++ b/service/apigatewayv2/api_op_GetIntegration.go @@ -129,7 +129,7 @@ type GetIntegrationOutput struct { // for more information. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // An integer with a value between [50-29000]. + // An integer with a value between [50-30000]. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. If you specify a TLS configuration, diff --git a/service/apigatewayv2/api_op_UpdateIntegration.go b/service/apigatewayv2/api_op_UpdateIntegration.go index 5e04c496de6..c8ceb58cad8 100644 --- a/service/apigatewayv2/api_op_UpdateIntegration.go +++ b/service/apigatewayv2/api_op_UpdateIntegration.go @@ -73,7 +73,7 @@ type UpdateIntegrationInput struct { // for more information. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // An integer with a value between [50-29000]. 
+ // An integer with a value between [50-30000]. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. If you specify a TLS configuration, @@ -296,7 +296,7 @@ type UpdateIntegrationOutput struct { // for more information. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // An integer with a value between [50-29000]. + // An integer with a value between [50-30000]. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. If you specify a TLS configuration, diff --git a/service/apigatewayv2/api_types.go b/service/apigatewayv2/api_types.go index 93d0db2a8ea..6e83fc110eb 100644 --- a/service/apigatewayv2/api_types.go +++ b/service/apigatewayv2/api_types.go @@ -849,7 +849,7 @@ type Integration struct { // Balancer listener, Network Load Balancer listener, or AWS Cloud Map service. // If you specify the ARN of an AWS Cloud Map service, API Gateway uses DiscoverInstances // to identify resources. You can use query parameters to target specific resources. - // To learn more, see DiscoverInstances (https://alpha-docs-aws.amazon.com/cloud-map/latest/api/API_DiscoverInstances.html). + // To learn more, see DiscoverInstances (https://docs.aws.amazon.com/cloud-map/latest/api/API_DiscoverInstances.html). // For private integrations, all resources must be owned by the same AWS account. IntegrationUri *string `locationName:"integrationUri" type:"string"` @@ -895,9 +895,9 @@ type Integration struct { // WebSocket APIs. TemplateSelectionExpression *string `locationName:"templateSelectionExpression" type:"string"` - // Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 - // milliseconds or 29 seconds for WebSocket APIs. The default value is 5,000 - // milliseconds, or 5 seconds for HTTP APIs. + // Custom timeout between 50 and 29,000 milliseconds for WebSocket APIs and + // between 50 and 30,000 milliseconds for HTTP APIs. The default timeout is + // 29 seconds for WebSocket APIs and 30 seconds for HTTP APIs. TimeoutInMillis *int64 `locationName:"timeoutInMillis" min:"50" type:"integer"` // The TLS configuration for a private integration. 
If you specify a TLS configuration, diff --git a/service/apigatewayv2/apigatewayv2iface/interface.go b/service/apigatewayv2/apigatewayv2iface/interface.go index 0f1f9340e61..92c9f3719b3 100644 --- a/service/apigatewayv2/apigatewayv2iface/interface.go +++ b/service/apigatewayv2/apigatewayv2iface/interface.go @@ -117,6 +117,8 @@ type ClientAPI interface { DeleteVpcLinkRequest(*apigatewayv2.DeleteVpcLinkInput) apigatewayv2.DeleteVpcLinkRequest + ExportApiRequest(*apigatewayv2.ExportApiInput) apigatewayv2.ExportApiRequest + GetApiRequest(*apigatewayv2.GetApiInput) apigatewayv2.GetApiRequest GetApiMappingRequest(*apigatewayv2.GetApiMappingInput) apigatewayv2.GetApiMappingRequest diff --git a/service/appconfig/api_enums.go b/service/appconfig/api_enums.go index a036f972064..6516d6a87e8 100644 --- a/service/appconfig/api_enums.go +++ b/service/appconfig/api_enums.go @@ -2,6 +2,27 @@ package appconfig +type DeploymentEventType string + +// Enum values for DeploymentEventType +const ( + DeploymentEventTypePercentageUpdated DeploymentEventType = "PERCENTAGE_UPDATED" + DeploymentEventTypeRollbackStarted DeploymentEventType = "ROLLBACK_STARTED" + DeploymentEventTypeRollbackCompleted DeploymentEventType = "ROLLBACK_COMPLETED" + DeploymentEventTypeBakeTimeStarted DeploymentEventType = "BAKE_TIME_STARTED" + DeploymentEventTypeDeploymentStarted DeploymentEventType = "DEPLOYMENT_STARTED" + DeploymentEventTypeDeploymentCompleted DeploymentEventType = "DEPLOYMENT_COMPLETED" +) + +func (enum DeploymentEventType) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum DeploymentEventType) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type DeploymentState string // Enum values for DeploymentState @@ -76,6 +97,25 @@ func (enum ReplicateTo) MarshalValueBuf(b []byte) ([]byte, error) { return append(b, enum...), nil } +type TriggeredBy string + +// Enum values for TriggeredBy +const ( + TriggeredByUser TriggeredBy = "USER" + TriggeredByAppconfig TriggeredBy = "APPCONFIG" + TriggeredByCloudwatchAlarm TriggeredBy = "CLOUDWATCH_ALARM" + TriggeredByInternalError TriggeredBy = "INTERNAL_ERROR" +) + +func (enum TriggeredBy) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum TriggeredBy) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type ValidatorType string // Enum values for ValidatorType diff --git a/service/appconfig/api_op_GetDeployment.go b/service/appconfig/api_op_GetDeployment.go index df9a3004372..105eccd44de 100644 --- a/service/appconfig/api_op_GetDeployment.go +++ b/service/appconfig/api_op_GetDeployment.go @@ -118,6 +118,10 @@ type GetDeploymentOutput struct { // The ID of the environment that was deployed. EnvironmentId *string `type:"string"` + // A list containing all events related to a deployment. The most recent events + // are displayed first. + EventLog []DeploymentEvent `type:"list"` + // The amount of time AppConfig monitored for alarms before considering the // deployment to be complete and no longer eligible for automatic roll back. 
FinalBakeTimeInMinutes *int64 `type:"integer"` @@ -213,6 +217,18 @@ func (s GetDeploymentOutput) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "EnvironmentId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.EventLog != nil { + v := s.EventLog + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "EventLog", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } if s.FinalBakeTimeInMinutes != nil { v := *s.FinalBakeTimeInMinutes diff --git a/service/appconfig/api_op_StartDeployment.go b/service/appconfig/api_op_StartDeployment.go index 691256b0ab7..9495f766d6c 100644 --- a/service/appconfig/api_op_StartDeployment.go +++ b/service/appconfig/api_op_StartDeployment.go @@ -177,6 +177,10 @@ type StartDeploymentOutput struct { // The ID of the environment that was deployed. EnvironmentId *string `type:"string"` + // A list containing all events related to a deployment. The most recent events + // are displayed first. + EventLog []DeploymentEvent `type:"list"` + // The amount of time AppConfig monitored for alarms before considering the // deployment to be complete and no longer eligible for automatic roll back. FinalBakeTimeInMinutes *int64 `type:"integer"` @@ -272,6 +276,18 @@ func (s StartDeploymentOutput) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "EnvironmentId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.EventLog != nil { + v := s.EventLog + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "EventLog", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } if s.FinalBakeTimeInMinutes != nil { v := *s.FinalBakeTimeInMinutes diff --git a/service/appconfig/api_op_StopDeployment.go b/service/appconfig/api_op_StopDeployment.go index 338677af1b9..6e3b69c252d 100644 --- a/service/appconfig/api_op_StopDeployment.go +++ b/service/appconfig/api_op_StopDeployment.go @@ -118,6 +118,10 @@ type StopDeploymentOutput struct { // The ID of the environment that was deployed. EnvironmentId *string `type:"string"` + // A list containing all events related to a deployment. The most recent events + // are displayed first. + EventLog []DeploymentEvent `type:"list"` + // The amount of time AppConfig monitored for alarms before considering the // deployment to be complete and no longer eligible for automatic roll back. FinalBakeTimeInMinutes *int64 `type:"integer"` @@ -213,6 +217,18 @@ func (s StopDeploymentOutput) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "EnvironmentId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.EventLog != nil { + v := s.EventLog + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "EventLog", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } if s.FinalBakeTimeInMinutes != nil { v := *s.FinalBakeTimeInMinutes diff --git a/service/appconfig/api_types.go b/service/appconfig/api_types.go index 3bb56dfca3c..2eafcfd10d8 100644 --- a/service/appconfig/api_types.go +++ b/service/appconfig/api_types.go @@ -120,6 +120,64 @@ func (s ConfigurationProfileSummary) MarshalFields(e protocol.FieldEncoder) erro return nil } +// An object that describes a deployment event. 
+type DeploymentEvent struct { + _ struct{} `type:"structure"` + + // A description of the deployment event. Descriptions include, but are not + // limited to, the user account or the CloudWatch alarm ARN that initiated a + // rollback, the percentage of hosts that received the deployment, or in the + // case of an internal error, a recommendation to attempt a new deployment. + Description *string `type:"string"` + + // The type of deployment event. Deployment event types include the start, stop, + // or completion of a deployment; a percentage update; the start or stop of + // a bake period; the start or completion of a rollback. + EventType DeploymentEventType `type:"string" enum:"true"` + + // The date and time the event occurred. + OccurredAt *time.Time `type:"timestamp" timestampFormat:"iso8601"` + + // The entity that triggered the deployment event. Events can be triggered by + // a user, AWS AppConfig, an Amazon CloudWatch alarm, or an internal error. + TriggeredBy TriggeredBy `type:"string" enum:"true"` +} + +// String returns the string representation +func (s DeploymentEvent) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s DeploymentEvent) MarshalFields(e protocol.FieldEncoder) error { + if s.Description != nil { + v := *s.Description + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "Description", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if len(s.EventType) > 0 { + v := s.EventType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "EventType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + if s.OccurredAt != nil { + v := *s.OccurredAt + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "OccurredAt", + protocol.TimeValue{V: v, Format: "iso8601", QuotedFormatTime: true}, metadata) + } + if len(s.TriggeredBy) > 0 { + v := s.TriggeredBy + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "TriggeredBy", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + return nil +} + type DeploymentStrategy struct { _ struct{} `type:"structure"` diff --git a/service/applicationinsights/api_enums.go b/service/applicationinsights/api_enums.go index 4feee704199..056d02400dd 100644 --- a/service/applicationinsights/api_enums.go +++ b/service/applicationinsights/api_enums.go @@ -2,6 +2,24 @@ package applicationinsights +type CloudWatchEventSource string + +// Enum values for CloudWatchEventSource +const ( + CloudWatchEventSourceEc2 CloudWatchEventSource = "EC2" + CloudWatchEventSourceCodeDeploy CloudWatchEventSource = "CODE_DEPLOY" + CloudWatchEventSourceHealth CloudWatchEventSource = "HEALTH" +) + +func (enum CloudWatchEventSource) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum CloudWatchEventSource) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type ConfigurationEventResourceType string // Enum values for ConfigurationEventResourceType diff --git a/service/applicationinsights/api_op_CreateApplication.go b/service/applicationinsights/api_op_CreateApplication.go index 0193a828b29..5969e4d87fc 100644 --- a/service/applicationinsights/api_op_CreateApplication.go +++ b/service/applicationinsights/api_op_CreateApplication.go @@ -13,6 +13,11 @@ import ( type CreateApplicationInput struct { _ struct{} `type:"structure"` + // Indicates whether Application Insights can listen to CloudWatch events for + // the 
application resources, such as instance terminated, failed deployment, + // and others. + CWEMonitorEnabled *bool `type:"boolean"` + // When set to true, creates opsItems for any problems detected on an application. OpsCenterEnabled *bool `type:"boolean"` diff --git a/service/applicationinsights/api_op_UpdateApplication.go b/service/applicationinsights/api_op_UpdateApplication.go index d50f7acd12b..1f4cf4d5a0b 100644 --- a/service/applicationinsights/api_op_UpdateApplication.go +++ b/service/applicationinsights/api_op_UpdateApplication.go @@ -12,6 +12,11 @@ import ( type UpdateApplicationInput struct { _ struct{} `type:"structure"` + // Indicates whether Application Insights can listen to CloudWatch events for + // the application resources, such as instance terminated, failed deployment, + // and others. + CWEMonitorEnabled *bool `type:"boolean"` + // When set to true, creates opsItems for any problems detected on an application. OpsCenterEnabled *bool `type:"boolean"` diff --git a/service/applicationinsights/api_types.go b/service/applicationinsights/api_types.go index 1b26a4155f6..9d899c6937b 100644 --- a/service/applicationinsights/api_types.go +++ b/service/applicationinsights/api_types.go @@ -40,6 +40,11 @@ func (s ApplicationComponent) String() string { type ApplicationInfo struct { _ struct{} `type:"structure"` + // Indicates whether Application Insights can listen to CloudWatch events for + // the application resources, such as instance terminated, failed deployment, + // and others. + CWEMonitorEnabled *bool `type:"boolean"` + // The lifecycle of the application. LifeCycle *string `type:"string"` @@ -129,9 +134,55 @@ func (s LogPattern) String() string { type Observation struct { _ struct{} `type:"structure"` + // The detail type of the CloudWatch Event-based observation, for example, EC2 + // Instance State-change Notification. + CloudWatchEventDetailType *string `type:"string"` + + // The ID of the CloudWatch Event-based observation related to the detected + // problem. + CloudWatchEventId *string `type:"string"` + + // The source of the CloudWatch Event. + CloudWatchEventSource CloudWatchEventSource `type:"string" enum:"true"` + + // The CodeDeploy application to which the deployment belongs. + CodeDeployApplication *string `type:"string"` + + // The deployment group to which the CodeDeploy deployment belongs. + CodeDeployDeploymentGroup *string `type:"string"` + + // The deployment ID of the CodeDeploy-based observation related to the detected + // problem. + CodeDeployDeploymentId *string `type:"string"` + + // The instance group to which the CodeDeploy instance belongs. + CodeDeployInstanceGroupId *string `type:"string"` + + // The status of the CodeDeploy deployment, for example SUCCESS or FAILURE. + CodeDeployState *string `type:"string"` + + // The state of the instance, such as STOPPING or TERMINATING. + Ec2State *string `type:"string"` + // The time when the observation ended, in epoch seconds. EndTime *time.Time `type:"timestamp"` + // The Amazon Resource Name (ARN) of the AWS Health Event-based observation. + HealthEventArn *string `type:"string"` + + // The description of the AWS Health event provided by the service, such as + // Amazon EC2. + HealthEventDescription *string `type:"string"` + + // The category of the AWS Health event, such as issue. + HealthEventTypeCategory *string `type:"string"` + + // The type of the AWS Health event, for example, AWS_EC2_POWER_CONNECTIVITY_ISSUE. 
+ HealthEventTypeCode *string `type:"string"` + + // The service to which the AWS Health Event belongs, such as EC2. + HealthService *string `type:"string"` + // The ID of the observation type. Id *string `min:"38" type:"string"` @@ -168,6 +219,27 @@ type Observation struct { // The value of the source observation metric. Value *float64 `type:"double"` + + // The X-Ray request error percentage for this node. + XRayErrorPercent *int64 `type:"integer"` + + // The X-Ray request fault percentage for this node. + XRayFaultPercent *int64 `type:"integer"` + + // The name of the X-Ray node. + XRayNodeName *string `type:"string"` + + // The type of the X-Ray node. + XRayNodeType *string `type:"string"` + + // The X-Ray node request average latency for this node. + XRayRequestAverageLatency *int64 `type:"long"` + + // The X-Ray request count for this node. + XRayRequestCount *int64 `type:"integer"` + + // The X-Ray request throttle percentage for this node. + XRayThrottlePercent *int64 `type:"integer"` } // String returns the string representation diff --git a/service/athena/api_op_ListNamedQueries.go b/service/athena/api_op_ListNamedQueries.go index c18e3bb27ca..f71cd149a73 100644 --- a/service/athena/api_op_ListNamedQueries.go +++ b/service/athena/api_op_ListNamedQueries.go @@ -19,7 +19,9 @@ type ListNamedQueriesInput struct { // was truncated. NextToken *string `min:"1" type:"string"` - // The name of the workgroup from which the named queries are being returned. + // The name of the workgroup from which the named queries are returned. If a + // workgroup is not specified, the saved queries for the primary workgroup are + // returned. WorkGroup *string `type:"string"` } @@ -62,7 +64,8 @@ const opListNamedQueries = "ListNamedQueries" // Amazon Athena. // // Provides a list of available query IDs only for queries saved in the specified -// workgroup. Requires that you have access to the workgroup. +// workgroup. Requires that you have access to the workgroup. If a workgroup +// is not specified, lists the saved queries for the primary workgroup. // // For code samples using the AWS SDK for Java, see Examples and Code Samples // (http://docs.aws.amazon.com/athena/latest/ug/code-samples.html) in the Amazon diff --git a/service/athena/api_op_ListQueryExecutions.go b/service/athena/api_op_ListQueryExecutions.go index 99feb482a1e..3b1027b86c5 100644 --- a/service/athena/api_op_ListQueryExecutions.go +++ b/service/athena/api_op_ListQueryExecutions.go @@ -19,7 +19,9 @@ type ListQueryExecutionsInput struct { // was truncated. NextToken *string `min:"1" type:"string"` - // The name of the workgroup from which queries are being returned. + // The name of the workgroup from which queries are returned. If a workgroup + // is not specified, a list of available query execution IDs for the queries + // in the primary workgroup is returned. WorkGroup *string `type:"string"` } @@ -62,8 +64,9 @@ const opListQueryExecutions = "ListQueryExecutions" // Amazon Athena. // // Provides a list of available query execution IDs for the queries in the specified -// workgroup. Requires you to have access to the workgroup in which the queries -// ran. +// workgroup. If a workgroup is not specified, returns a list of query execution +// IDs for the primary workgroup. Requires you to have access to the workgroup +// in which the queries ran. 
// // For code samples using the AWS SDK for Java, see Examples and Code Samples // (http://docs.aws.amazon.com/athena/latest/ug/code-samples.html) in the Amazon diff --git a/service/athena/api_types.go b/service/athena/api_types.go index 8b49b5a39fd..698a1cf7389 100644 --- a/service/athena/api_types.go +++ b/service/athena/api_types.go @@ -170,8 +170,9 @@ type QueryExecution struct { // and DML, such as SHOW CREATE TABLE, or DESCRIBE