aws-cdk.aws-stepfunctions-tasks 1.204.0

Tasks for AWS Step Functions
---


AWS CDK v1 has reached End-of-Support on 2023-06-01.
This package is no longer being updated, and users should migrate to AWS CDK v2.
For more information on how to migrate, see the Migrating to AWS CDK v2 guide.



AWS Step Functions is a web service that enables you to coordinate the
components of distributed applications and microservices using visual workflows.
You build applications from individual components that each perform a discrete
function, or task, allowing you to scale and change applications quickly.
A Task state represents a single unit of work performed by a state machine.
All work in your state machine is performed by tasks.
This module is part of the AWS Cloud Development Kit project.
Table Of Contents

- Task
- Paths
  - InputPath
  - OutputPath
  - ResultSelector
  - ResultPath
- Task parameters from the state JSON
- Evaluate Expression
- API Gateway
  - Call REST API Endpoint
  - Call HTTP API Endpoint
- AWS SDK
- Athena
  - StartQueryExecution
  - GetQueryExecution
  - GetQueryResults
  - StopQueryExecution
- Batch
  - SubmitJob
- CodeBuild
  - StartBuild
- DynamoDB
  - GetItem
  - PutItem
  - DeleteItem
  - UpdateItem
- ECS
  - RunTask
    - EC2
    - Fargate
- EMR
  - Create Cluster
  - Termination Protection
  - Terminate Cluster
  - Add Step
  - Cancel Step
  - Modify Instance Fleet
  - Modify Instance Group
- EMR on EKS
  - Create Virtual Cluster
  - Delete Virtual Cluster
  - Start Job Run
- EKS
  - Call
- EventBridge
  - Put Events
- Glue
- Glue DataBrew
- Lambda
- SageMaker
  - Create Training Job
  - Create Transform Job
  - Create Endpoint
  - Create Endpoint Config
  - Create Model
  - Update Endpoint
- SNS
- Step Functions
  - Start Execution
  - Invoke Activity
- SQS

Task
A Task state represents a single unit of work performed by a state machine. In the
CDK, the exact work to be done is determined by a class that implements IStepFunctionsTask.
AWS Step Functions integrates with some AWS services so that you can call API
actions, and coordinate executions directly from the Amazon States Language in
Step Functions. You can directly call and pass parameters to the APIs of those
services.
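For example, a state machine with a single Lambda task can be defined as follows (a minimal sketch; fn is assumed to be an existing lambda.Function):
# fn: lambda.Function

# invoke the function, then transition to a terminal Succeed state
submit = tasks.LambdaInvoke(self, "SubmitJob",
    lambda_function=fn
)

sfn.StateMachine(self, "MyStateMachine",
    definition=submit.next(sfn.Succeed(self, "Done"))
)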
Paths
In the Amazon States Language, a path is a string beginning with $ that you
can use to identify components within JSON text.
Learn more about input and output processing in Step Functions here
InputPath
Both InputPath and Parameters fields provide a way to manipulate JSON as it
moves through your workflow. AWS Step Functions applies the InputPath field first,
and then the Parameters field. You can first filter your raw input to a selection
you want using InputPath, and then apply Parameters to manipulate that input
further, or add new values. If you don't specify an InputPath, a default value
of $ will be used.
The following example provides the field named input as the input to the Task
state that runs a Lambda function.
# fn: lambda.Function

submit_job = tasks.LambdaInvoke(self, "Invoke Handler",
    lambda_function=fn,
    input_path="$.input"
)

OutputPath
Tasks also allow you to select a portion of the state output to pass to the next
state. This enables you to filter out unwanted information, and pass only the
portion of the JSON that you care about. If you don't specify an OutputPath,
a default value of $ will be used. This passes the entire JSON node to the next
state.
The response from a Lambda function includes the response from the function
as well as other metadata.
The following example assigns the output from the Task to a field named result
# fn: lambda.Function

submit_job = tasks.LambdaInvoke(self, "Invoke Handler",
    lambda_function=fn,
    output_path="$.Payload.result"
)

ResultSelector
You can use ResultSelector
to manipulate the raw result of a Task, Map or Parallel state before it is
passed to ResultPath. For service integrations, the raw
result contains metadata in addition to the response payload. You can use
ResultSelector to construct a JSON payload that becomes the effective result
using static values or references to the raw result or context object.
The following example extracts the output payload of a Lambda function Task and combines
it with some static values and the state name from the context object.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke Handler",
    lambda_function=fn,
    result_selector={
        "lambda_output": sfn.JsonPath.string_at("$.Payload"),
        "invoke_request_id": sfn.JsonPath.string_at("$.SdkResponseMetadata.RequestId"),
        "static_value": {
            "foo": "bar"
        },
        "state_name": sfn.JsonPath.string_at("$.State.Name")
    }
)

ResultPath
The output of a state can be a copy of its input, the result it produces (for
example, output from a Task state’s Lambda function), or a combination of its
input and result. Use ResultPath to control which combination of these is
passed to the state output. If you don't specify a ResultPath, a default
value of $ will be used.
The following example adds the result of calling DynamoDB's putItem API to the state
input and passes it to the next state.
# my_table: dynamodb.Table

tasks.DynamoPutItem(self, "PutItem",
    item={
        "MessageId": tasks.DynamoAttributeValue.from_string("message-id")
    },
    table=my_table,
    result_path="$.Item"
)

⚠️ The OutputPath is computed after applying ResultPath. All service integrations
return metadata as part of their response. When using ResultPath, it's not possible to
merge a subset of the task output to the input.
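For example, the following sketch (assuming fn is an existing lambda.Function) merges the Lambda invocation result into the state input at $.taskResult via ResultPath; the OutputPath is then applied to that combined JSON, passing only the function's payload to the next state:
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke and filter",
    lambda_function=fn,
    # ResultPath merges the full invocation result into the state input
    result_path="$.taskResult",
    # OutputPath is computed afterwards, selecting from the merged JSON
    output_path="$.taskResult.Payload"
)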
Task parameters from the state JSON
Most tasks take parameters. Parameter values can either be static, supplied directly
in the workflow definition (by specifying their values), or a value available at runtime
in the state machine's execution (either as its input or an output of a prior state).
Parameter values available at runtime can be specified via the JsonPath class,
using methods such as JsonPath.stringAt().
The following example provides the field named input as the input to the Lambda function
and invokes it asynchronously.
# fn: lambda.Function


submit_job = tasks.LambdaInvoke(self, "Invoke Handler",
    lambda_function=fn,
    payload=sfn.TaskInput.from_json_path_at("$.input"),
    invocation_type=tasks.LambdaInvocationType.EVENT
)

You can also use intrinsic functions available on JsonPath, for example JsonPath.format().
Here is an example of starting an Athena query that is dynamically created using the task input:
start_query_execution_job = tasks.AthenaStartQueryExecution(self, "Athena Start Query",
    query_string=sfn.JsonPath.format("select contacts where year={};", sfn.JsonPath.string_at("$.year")),
    query_execution_context=tasks.QueryExecutionContext(
        database_name="interactions"
    ),
    result_configuration=tasks.ResultConfiguration(
        encryption_configuration=tasks.EncryptionConfiguration(
            encryption_option=tasks.EncryptionOption.S3_MANAGED
        ),
        output_location=s3.Location(
            bucket_name="mybucket",
            object_key="myprefix"
        )
    ),
    integration_pattern=sfn.IntegrationPattern.RUN_JOB
)

Each service integration has its own set of parameters that can be supplied.
Evaluate Expression
Use the EvaluateExpression task to perform simple operations referencing state paths. The
expression referenced in the task will be evaluated in a Lambda function
(eval()). This allows you to avoid writing Lambda code for simple operations.
Example: convert a wait time from milliseconds to seconds, concatenate it into a message, and wait:
convert_to_seconds = tasks.EvaluateExpression(self, "Convert to seconds",
    expression="$.waitMilliseconds / 1000",
    result_path="$.waitSeconds"
)

create_message = tasks.EvaluateExpression(self, "Create message",
    # Note: this is a string inside a string.
    expression="`Now waiting ${$.waitSeconds} seconds...`",
    runtime=lambda_.Runtime.NODEJS_14_X,
    result_path="$.message"
)

publish_message = tasks.SnsPublish(self, "Publish message",
    topic=sns.Topic(self, "cool-topic"),
    message=sfn.TaskInput.from_json_path_at("$.message"),
    result_path="$.sns"
)

wait = sfn.Wait(self, "Wait",
    time=sfn.WaitTime.seconds_path("$.waitSeconds")
)

sfn.StateMachine(self, "StateMachine",
    definition=convert_to_seconds.next(create_message).next(publish_message).next(wait)
)

The EvaluateExpression supports a runtime prop to specify the Lambda
runtime to use to evaluate the expression. Currently, only runtimes
of the Node.js family are supported.
API Gateway
Step Functions supports API Gateway through the service integration pattern.
HTTP APIs are designed for low-latency, cost-effective integrations with AWS services, including AWS Lambda, and HTTP endpoints.
HTTP APIs support OIDC and OAuth 2.0 authorization, and come with built-in support for CORS and automatic deployments.
Previous-generation REST APIs currently offer more features. More details can be found here.
Call REST API Endpoint
The CallApiGatewayRestApiEndpoint calls the REST API endpoint.
import aws_cdk.aws_apigateway as apigateway

rest_api = apigateway.RestApi(self, "MyRestApi")

invoke_task = tasks.CallApiGatewayRestApiEndpoint(self, "Call REST API",
    api=rest_api,
    stage_name="prod",
    method=tasks.HttpMethod.GET
)

Be aware that the header values must be arrays. When passing the Task Token
in the headers field of a WAIT_FOR_TASK_TOKEN integration, use
JsonPath.array() to wrap the token in an array:
import aws_cdk.aws_apigateway as apigateway
# api: apigateway.RestApi


tasks.CallApiGatewayRestApiEndpoint(self, "Endpoint",
    api=api,
    stage_name="Stage",
    method=tasks.HttpMethod.PUT,
    integration_pattern=sfn.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
    headers=sfn.TaskInput.from_object({
        "TaskToken": sfn.JsonPath.array(sfn.JsonPath.task_token)
    })
)

Call HTTP API Endpoint
The CallApiGatewayHttpApiEndpoint calls the HTTP API endpoint.
import aws_cdk.aws_apigatewayv2 as apigatewayv2

http_api = apigatewayv2.HttpApi(self, "MyHttpApi")

invoke_task = tasks.CallApiGatewayHttpApiEndpoint(self, "Call HTTP API",
    api_id=http_api.api_id,
    api_stack=Stack.of(http_api),
    method=tasks.HttpMethod.GET
)

AWS SDK
Step Functions supports calling AWS services' API actions
through the service integration pattern.
You can use Step Functions' AWS SDK integrations to call any of the over two hundred AWS services
directly from your state machine, giving you access to over nine thousand API actions.
# my_bucket: s3.Bucket

get_object = tasks.CallAwsService(self, "GetObject",
    service="s3",
    action="getObject",
    parameters={
        "Bucket": my_bucket.bucket_name,
        "Key": sfn.JsonPath.string_at("$.key")
    },
    iam_resources=[my_bucket.arn_for_objects("*")]
)

Use camelCase for actions and PascalCase for parameter names.
The task automatically adds an IAM statement to the state machine role's policy based on the
service and action called. The resources for this statement must be specified in iamResources.
Use the iamAction prop to manually specify the IAM action name in the case where the IAM
action name does not match with the API service/action name:
list_buckets = tasks.CallAwsService(self, "ListBuckets",
    service="s3",
    action="listBuckets",
    iam_resources=["*"],
    iam_action="s3:ListAllMyBuckets"
)

Athena
Step Functions supports Athena through the service integration pattern.
StartQueryExecution
The StartQueryExecution API runs the SQL query statement.
start_query_execution_job = tasks.AthenaStartQueryExecution(self, "Start Athena Query",
    query_string=sfn.JsonPath.string_at("$.queryString"),
    query_execution_context=tasks.QueryExecutionContext(
        database_name="mydatabase"
    ),
    result_configuration=tasks.ResultConfiguration(
        encryption_configuration=tasks.EncryptionConfiguration(
            encryption_option=tasks.EncryptionOption.S3_MANAGED
        ),
        output_location=s3.Location(
            bucket_name="query-results-bucket",
            object_key="folder"
        )
    )
)

GetQueryExecution
The GetQueryExecution API gets information about a single execution of a query.
get_query_execution_job = tasks.AthenaGetQueryExecution(self, "Get Query Execution",
    query_execution_id=sfn.JsonPath.string_at("$.QueryExecutionId")
)

GetQueryResults
The GetQueryResults API streams the results of a single query execution specified by QueryExecutionId from S3.
get_query_results_job = tasks.AthenaGetQueryResults(self, "Get Query Results",
    query_execution_id=sfn.JsonPath.string_at("$.QueryExecutionId")
)

StopQueryExecution
The StopQueryExecution API stops a query execution.
stop_query_execution_job = tasks.AthenaStopQueryExecution(self, "Stop Query Execution",
    query_execution_id=sfn.JsonPath.string_at("$.QueryExecutionId")
)

Batch
Step Functions supports Batch through the service integration pattern.
SubmitJob
The SubmitJob API submits an AWS Batch job from a job definition.
import aws_cdk.aws_batch as batch
# batch_job_definition: batch.JobDefinition
# batch_queue: batch.JobQueue


task = tasks.BatchSubmitJob(self, "Submit Job",
    job_definition_arn=batch_job_definition.job_definition_arn,
    job_name="MyJob",
    job_queue_arn=batch_queue.job_queue_arn
)

CodeBuild
Step Functions supports CodeBuild through the service integration pattern.
StartBuild
StartBuild starts a CodeBuild Project by Project Name.
import aws_cdk.aws_codebuild as codebuild


codebuild_project = codebuild.Project(self, "Project",
    project_name="MyTestProject",
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2",
        "phases": {
            "build": {
                "commands": ["echo \"Hello, CodeBuild!\""]
            }
        }
    })
)

task = tasks.CodeBuildStartBuild(self, "Task",
    project=codebuild_project,
    integration_pattern=sfn.IntegrationPattern.RUN_JOB,
    environment_variables_override={
        "ZONE": codebuild.BuildEnvironmentVariable(
            type=codebuild.BuildEnvironmentVariableType.PLAINTEXT,
            value=sfn.JsonPath.string_at("$.envVariables.zone")
        )
    }
)

DynamoDB
You can call DynamoDB APIs from a Task state.
Read more about calling DynamoDB APIs here
GetItem
The GetItem operation returns a set of attributes for the item with the given primary key.
# my_table: dynamodb.Table

tasks.DynamoGetItem(self, "Get Item",
    key={"message_id": tasks.DynamoAttributeValue.from_string("message-007")},
    table=my_table
)

PutItem
The PutItem operation creates a new item, or replaces an old item with a new item.
# my_table: dynamodb.Table

tasks.DynamoPutItem(self, "PutItem",
    item={
        "MessageId": tasks.DynamoAttributeValue.from_string("message-007"),
        "Text": tasks.DynamoAttributeValue.from_string(sfn.JsonPath.string_at("$.bar")),
        "TotalCount": tasks.DynamoAttributeValue.from_number(10)
    },
    table=my_table
)

DeleteItem
The DeleteItem operation deletes a single item in a table by primary key.
# my_table: dynamodb.Table

tasks.DynamoDeleteItem(self, "DeleteItem",
    key={"MessageId": tasks.DynamoAttributeValue.from_string("message-007")},
    table=my_table,
    result_path=sfn.JsonPath.DISCARD
)

UpdateItem
The UpdateItem operation edits an existing item's attributes, or adds a new item
to the table if it does not already exist.
# my_table: dynamodb.Table

tasks.DynamoUpdateItem(self, "UpdateItem",
    key={
        "MessageId": tasks.DynamoAttributeValue.from_string("message-007")
    },
    table=my_table,
    expression_attribute_values={
        ":val": tasks.DynamoAttributeValue.number_from_string(sfn.JsonPath.string_at("$.Item.TotalCount.N")),
        ":rand": tasks.DynamoAttributeValue.from_number(20)
    },
    update_expression="SET TotalCount = :val + :rand"
)

ECS
Step Functions supports ECS/Fargate through the service integration pattern.
RunTask
RunTask starts a new task using the specified task definition.
EC2
The EC2 launch type allows you to run your containerized applications on a cluster
of Amazon EC2 instances that you manage.
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where
to place the task based on the requirements specified in the task definition, such as
CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine
which tasks to terminate. You can apply task placement strategies and constraints to
customize how Amazon ECS places and terminates tasks. Learn more about task placement
The latest ACTIVE revision of the passed task definition is used for running the task.
The following example runs a job from a task definition on EC2.
vpc = ec2.Vpc.from_lookup(self, "Vpc",
    is_default=True
)

cluster = ecs.Cluster(self, "Ec2Cluster", vpc=vpc)
cluster.add_capacity("DefaultAutoScalingGroup",
    instance_type=ec2.InstanceType("t2.micro"),
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)

task_definition = ecs.TaskDefinition(self, "TD",
    compatibility=ecs.Compatibility.EC2
)

task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("foo/bar"),
    memory_limit_mib=256
)

run_task = tasks.EcsRunTask(self, "Run",
    integration_pattern=sfn.IntegrationPattern.RUN_JOB,
    cluster=cluster,
    task_definition=task_definition,
    launch_target=tasks.EcsEc2LaunchTarget(
        placement_strategies=[
            ecs.PlacementStrategy.spread_across_instances(),
            ecs.PlacementStrategy.packed_by_cpu(),
            ecs.PlacementStrategy.randomly()
        ],
        placement_constraints=[
            ecs.PlacementConstraint.member_of("blieptuut")
        ]
    )
)

Fargate
AWS Fargate is a serverless compute engine for containers that works with Amazon
Elastic Container Service (ECS). Fargate makes it easy for you to focus on building
your applications. Fargate removes the need to provision and manage servers, lets you
specify and pay for resources per application, and improves security through application
isolation by design. Learn more about Fargate
The Fargate launch type allows you to run your containerized applications without the need
to provision and manage the backend infrastructure. Just register your task definition and
Fargate launches the container for you. The latest ACTIVE revision of the passed
task definition is used for running the task. Learn more about
Fargate Versioning
The following example runs a job from a task definition on Fargate.
vpc = ec2.Vpc.from_lookup(self, "Vpc",
    is_default=True
)

cluster = ecs.Cluster(self, "FargateCluster", vpc=vpc)

task_definition = ecs.TaskDefinition(self, "TD",
    memory_mib="512",
    cpu="256",
    compatibility=ecs.Compatibility.FARGATE
)

container_definition = task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("foo/bar"),
    memory_limit_mib=256
)

run_task = tasks.EcsRunTask(self, "RunFargate",
    integration_pattern=sfn.IntegrationPattern.RUN_JOB,
    cluster=cluster,
    task_definition=task_definition,
    assign_public_ip=True,
    container_overrides=[tasks.ContainerOverride(
        container_definition=container_definition,
        environment=[tasks.TaskEnvironmentVariable(name="SOME_KEY", value=sfn.JsonPath.string_at("$.SomeKey"))]
    )],
    launch_target=tasks.EcsFargateLaunchTarget()
)

EMR
Step Functions supports Amazon EMR through the service integration pattern.
The service integration APIs correspond to Amazon EMR APIs but differ in the
parameters that are used.
Read more about the differences when using these service integrations.
Create Cluster
Creates and starts running a cluster (job flow).
Corresponds to the runJobFlow API in EMR.
cluster_role = iam.Role(self, "ClusterRole",
    assumed_by=iam.ServicePrincipal("ec2.amazonaws.com")
)

service_role = iam.Role(self, "ServiceRole",
    assumed_by=iam.ServicePrincipal("elasticmapreduce.amazonaws.com")
)

auto_scaling_role = iam.Role(self, "AutoScalingRole",
    assumed_by=iam.ServicePrincipal("elasticmapreduce.amazonaws.com")
)

auto_scaling_role.assume_role_policy.add_statements(
    iam.PolicyStatement(
        effect=iam.Effect.ALLOW,
        principals=[
            iam.ServicePrincipal("application-autoscaling.amazonaws.com")
        ],
        actions=["sts:AssumeRole"]
    )
)

tasks.EmrCreateCluster(self, "Create Cluster",
    instances=tasks.EmrCreateCluster.InstancesConfigProperty(),
    cluster_role=cluster_role,
    name=sfn.TaskInput.from_json_path_at("$.ClusterName").value,
    service_role=service_role,
    auto_scaling_role=auto_scaling_role
)

If you want to run multiple steps in parallel,
you can specify the stepConcurrencyLevel property. The concurrency range is between 1
and 256 inclusive, where the default concurrency of 1 means no step concurrency is allowed.
stepConcurrencyLevel requires the EMR release label to be 5.28.0 or above.
tasks.EmrCreateCluster(self, "Create Cluster",
    instances=tasks.EmrCreateCluster.InstancesConfigProperty(),
    name=sfn.TaskInput.from_json_path_at("$.ClusterName").value,
    step_concurrency_level=10
)

Termination Protection
Locks a cluster (job flow) so the EC2 instances in the cluster cannot be
terminated by user intervention, an API call, or a job-flow error.
Corresponds to the setTerminationProtection API in EMR.
tasks.EmrSetClusterTerminationProtection(self, "Task",
    cluster_id="ClusterId",
    termination_protected=False
)

Terminate Cluster
Shuts down a cluster (job flow).
Corresponds to the terminateJobFlows API in EMR.
tasks.EmrTerminateCluster(self, "Task",
    cluster_id="ClusterId"
)

Add Step
Adds a new step to a running cluster.
Corresponds to the addJobFlowSteps API in EMR.
tasks.EmrAddStep(self, "Task",
    cluster_id="ClusterId",
    name="StepName",
    jar="Jar",
    action_on_failure=tasks.ActionOnFailure.CONTINUE
)

Cancel Step
Cancels a pending step in a running cluster.
Corresponds to the cancelSteps API in EMR.
tasks.EmrCancelStep(self, "Task",
    cluster_id="ClusterId",
    step_id="StepId"
)

Modify Instance Fleet
Modifies the target On-Demand and target Spot capacities for the instance
fleet with the specified InstanceFleetName.
Corresponds to the modifyInstanceFleet API in EMR.
tasks.EmrModifyInstanceFleetByName(self, "Task",
    cluster_id="ClusterId",
    instance_fleet_name="InstanceFleetName",
    target_on_demand_capacity=2,
    target_spot_capacity=0
)

Modify Instance Group
Modifies the number of nodes and configuration settings of an instance group.
Corresponds to the modifyInstanceGroups API in EMR.
tasks.EmrModifyInstanceGroupByName(self, "Task",
    cluster_id="ClusterId",
    instance_group_name=sfn.JsonPath.string_at("$.InstanceGroupName"),
    instance_group=tasks.EmrModifyInstanceGroupByName.InstanceGroupModifyConfigProperty(
        instance_count=1
    )
)

EMR on EKS
Step Functions supports Amazon EMR on EKS through the service integration pattern.
The service integration APIs correspond to Amazon EMR on EKS APIs, but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Setting up the EKS cluster is required.
Create Virtual Cluster
The CreateVirtualCluster API creates a single virtual cluster that's mapped to a single Kubernetes namespace.
The EKS cluster containing the Kubernetes namespace where the virtual cluster will be mapped can be passed in from the task input.
tasks.EmrContainersCreateVirtualCluster(self, "Create a Virtual Cluster",
    eks_cluster=tasks.EksClusterInput.from_task_input(sfn.TaskInput.from_text("clusterId"))
)

The EKS cluster can also be passed in directly.
import aws_cdk.aws_eks as eks

# eks_cluster: eks.Cluster


tasks.EmrContainersCreateVirtualCluster(self, "Create a Virtual Cluster",
    eks_cluster=tasks.EksClusterInput.from_cluster(eks_cluster)
)

By default, the Kubernetes namespace that a virtual cluster maps to is "default", but a specific namespace within an EKS cluster can be selected.
tasks.EmrContainersCreateVirtualCluster(self, "Create a Virtual Cluster",
    eks_cluster=tasks.EksClusterInput.from_task_input(sfn.TaskInput.from_text("clusterId")),
    eks_namespace="specified-namespace"
)

Delete Virtual Cluster
The DeleteVirtualCluster API deletes a virtual cluster.
tasks.EmrContainersDeleteVirtualCluster(self, "Delete a Virtual Cluster",
    virtual_cluster_id=sfn.TaskInput.from_json_path_at("$.virtualCluster")
)

Start Job Run
The StartJobRun API starts a job run. A job is a unit of work that you submit to Amazon EMR on EKS for execution. The work performed by the job can be defined by a Spark jar, PySpark script, or SparkSQL query. A job run is an execution of the job on the virtual cluster.
Required setup:

- If not done already, follow the steps to set up EMR on EKS and create an EKS Cluster.
- Enable Cluster access
- Enable IAM Role access

The following actions must be performed if the virtual cluster ID is supplied from the task input. Otherwise, if it is supplied statically in the state machine definition, these actions will be done automatically.

- Create an IAM role
- Update the Role Trust Policy of the Job Execution Role.
The job can be configured with spark submit parameters:
tasks.EmrContainersStartJobRun(self, "EMR Containers Start Job Run",
    virtual_cluster=tasks.VirtualClusterInput.from_virtual_cluster_id("de92jdei2910fwedz"),
    release_label=tasks.ReleaseLabel.EMR_6_2_0,
    job_driver=tasks.JobDriver(
        spark_submit_job_driver=tasks.SparkSubmitJobDriver(
            entry_point=sfn.TaskInput.from_text("local:///usr/lib/spark/examples/src/main/python/pi.py"),
            spark_submit_parameters="--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1"
        )
    )
)

Configuring the job can also be done via application configuration:
tasks.EmrContainersStartJobRun(self, "EMR Containers Start Job Run",
    virtual_cluster=tasks.VirtualClusterInput.from_virtual_cluster_id("de92jdei2910fwedz"),
    release_label=tasks.ReleaseLabel.EMR_6_2_0,
    job_name="EMR-Containers-Job",
    job_driver=tasks.JobDriver(
        spark_submit_job_driver=tasks.SparkSubmitJobDriver(
            entry_point=sfn.TaskInput.from_text("local:///usr/lib/spark/examples/src/main/python/pi.py")
        )
    ),
    application_config=[tasks.ApplicationConfiguration(
        classification=tasks.Classification.SPARK_DEFAULTS,
        properties={
            "spark.executor.instances": "1",
            "spark.executor.memory": "512M"
        }
    )]
)

Job monitoring can be enabled if monitoring.logging is set to true. This automatically generates an S3 bucket and CloudWatch logs.
tasks.EmrContainersStartJobRun(self, "EMR Containers Start Job Run",
    virtual_cluster=tasks.VirtualClusterInput.from_virtual_cluster_id("de92jdei2910fwedz"),
    release_label=tasks.ReleaseLabel.EMR_6_2_0,
    job_driver=tasks.JobDriver(
        spark_submit_job_driver=tasks.SparkSubmitJobDriver(
            entry_point=sfn.TaskInput.from_text("local:///usr/lib/spark/examples/src/main/python/pi.py"),
            spark_submit_parameters="--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1"
        )
    ),
    monitoring=tasks.Monitoring(
        logging=True
    )
)

Otherwise, providing monitoring for jobs with existing log groups and log buckets is also available.
import aws_cdk.aws_logs as logs


log_group = logs.LogGroup(self, "Log Group")
log_bucket = s3.Bucket(self, "S3 Bucket")

tasks.EmrContainersStartJobRun(self, "EMR Containers Start Job Run",
    virtual_cluster=tasks.VirtualClusterInput.from_virtual_cluster_id("de92jdei2910fwedz"),
    release_label=tasks.ReleaseLabel.EMR_6_2_0,
    job_driver=tasks.JobDriver(
        spark_submit_job_driver=tasks.SparkSubmitJobDriver(
            entry_point=sfn.TaskInput.from_text("local:///usr/lib/spark/examples/src/main/python/pi.py"),
            spark_submit_parameters="--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1"
        )
    ),
    monitoring=tasks.Monitoring(
        log_group=log_group,
        log_bucket=log_bucket
    )
)

Users can provide their own existing Job Execution Role.
tasks.EmrContainersStartJobRun(self, "EMR Containers Start Job Run",
    virtual_cluster=tasks.VirtualClusterInput.from_task_input(sfn.TaskInput.from_json_path_at("$.VirtualClusterId")),
    release_label=tasks.ReleaseLabel.EMR_6_2_0,
    job_name="EMR-Containers-Job",
    execution_role=iam.Role.from_role_arn(self, "Job-Execution-Role", "arn:aws:iam::xxxxxxxxxxxx:role/JobExecutionRole"),
    job_driver=tasks.JobDriver(
        spark_submit_job_driver=tasks.SparkSubmitJobDriver(
            entry_point=sfn.TaskInput.from_text("local:///usr/lib/spark/examples/src/main/python/pi.py"),
            spark_submit_parameters="--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1"
        )
    )
)

EKS
Step Functions supports Amazon EKS through the service integration pattern.
The service integration APIs correspond to Amazon EKS APIs.
Read more about the differences when using these service integrations.
Call
Read and write Kubernetes resource objects via a Kubernetes API endpoint.
Corresponds to the call API in Step Functions Connector.
The following code snippet includes a Task state that uses eks:call to list the pods.
import aws_cdk.aws_eks as eks


my_eks_cluster = eks.Cluster(self, "my sample cluster",
    version=eks.KubernetesVersion.V1_18,
    cluster_name="myEksCluster"
)

tasks.EksCall(self, "Call a EKS Endpoint",
    cluster=my_eks_cluster,
    http_method=tasks.HttpMethods.GET,
    http_path="/api/v1/namespaces/default/pods"
)

EventBridge
Step Functions supports Amazon EventBridge through the service integration pattern.
The service integration APIs correspond to Amazon EventBridge APIs.
Read more about the differences when using these service integrations.
Put Events
Send events to an EventBridge bus.
Corresponds to the put-events API in Step Functions Connector.
The following code snippet includes a Task state that uses events:putEvents to send an event to an EventBridge bus.
import aws_cdk.aws_events as events


my_event_bus = events.EventBus(self, "EventBus",
    event_bus_name="MyEventBus1"
)

tasks.EventBridgePutEvents(self, "Send an event to EventBridge",
    entries=[tasks.EventBridgePutEventsEntry(
        detail=sfn.TaskInput.from_object({
            "Message": "Hello from Step Functions!"
        }),
        event_bus=my_event_bus,
        detail_type="MessageFromStepFunctions",
        source="step.functions"
    )]
)

Glue
Step Functions supports AWS Glue through the service integration pattern.
You can call the StartJobRun API from a Task state.
tasks.GlueStartJobRun(self, "Task",
    glue_job_name="my-glue-job",
    arguments=sfn.TaskInput.from_object({
        "key": "value"
    }),
    timeout=Duration.minutes(30),
    notify_delay_after=Duration.minutes(5)
)

Glue DataBrew
Step Functions supports AWS Glue DataBrew through the service integration pattern.
You can call the StartJobRun API from a Task state.
tasks.GlueDataBrewStartJobRun(self, "Task",
    name="databrew-job"
)

Lambda
Invoke a Lambda function.
You can specify the input to your Lambda function through the payload attribute.
By default, Step Functions invokes the Lambda function with the state input
(JSON path '$') as the input.
The following snippet invokes a Lambda Function with the state input as the payload
by referencing the $ path.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke with state input",
    lambda_function=fn
)

When a function is invoked, the Lambda service sends these response
elements
back.
⚠️ The response from the Lambda function is in an attribute called Payload
The following snippet invokes a Lambda Function with the output of a Lambda
executed before it, by referencing the $.Payload path.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke with empty object as payload",
lambda_function=fn,
payload=sfn.TaskInput.from_object({})
)

# use the output of fn as input
tasks.LambdaInvoke(self, "Invoke with payload field in the state input",
lambda_function=fn,
payload=sfn.TaskInput.from_json_path_at("$.Payload")
)

The following snippet invokes a Lambda and sets the task output to only include
the Lambda function response.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke and set function response as task output",
lambda_function=fn,
output_path="$.Payload"
)

If you want to combine the input and the Lambda function response you can use
the payloadResponseOnly property and specify the resultPath. This will put the
Lambda function ARN directly in the "Resource" string, but it conflicts with the
integrationPattern, invocationType, clientContext, and qualifier properties.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke and combine function response with task input",
lambda_function=fn,
payload_response_only=True,
result_path="$.fn"
)

You can have Step Functions pause a task, and wait for an external process to
return a task token. Read more about the callback pattern
To use the callback pattern, set the token property on the task. Call the Step
Functions SendTaskSuccess or SendTaskFailure APIs with the token to
indicate that the task has completed and the state machine should resume execution.
The following snippet invokes a Lambda with the task token as part of the input
to the Lambda.
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke with callback",
lambda_function=fn,
integration_pattern=sfn.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
payload=sfn.TaskInput.from_object({
"token": sfn.JsonPath.task_token,
"input": sfn.JsonPath.string_at("$.someField")
})
)

⚠️ The task will pause until it receives that task token back with a SendTaskSuccess or SendTaskFailure
call. Learn more about Callback with the Task
Token.
AWS Lambda can occasionally experience transient service errors. In this case, invoking Lambda
results in a 500 error, such as ServiceException, AWSLambdaException, or SdkClientException.
As a best practice, the LambdaInvoke task will retry on those errors with an interval of 2 seconds,
a back-off rate of 2 and 6 maximum attempts. Set the retryOnServiceExceptions prop to false to
disable this behavior.
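For example, a minimal sketch that opts out of the built-in retry (fn is assumed to be an existing lambda.Function):
# fn: lambda.Function

tasks.LambdaInvoke(self, "Invoke without retry",
    lambda_function=fn,
    # disable the default retry on transient Lambda service errors
    retry_on_service_exceptions=False
)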
SageMaker
Step Functions supports AWS SageMaker through the service integration pattern.
If your training job or model uses resources from AWS Marketplace,
network isolation is required.
To do so, set the enableNetworkIsolation property to true for SageMakerCreateModel or SageMakerCreateTrainingJob.
To set environment variables for the Docker container use the environment property.
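For example, a minimal sketch of a network-isolated model (the model name, environment variable, and JSON paths below are placeholders; for SageMakerCreateModel, the container's environment is assumed to be set via the environment_variables property of ContainerDefinition):
tasks.SageMakerCreateModel(self, "IsolatedModel",
    model_name="MyIsolatedModel",
    # required when the model uses resources from AWS Marketplace
    enable_network_isolation=True,
    primary_container=tasks.ContainerDefinition(
        image=tasks.DockerImage.from_json_expression(sfn.JsonPath.string_at("$.Model.imageName")),
        mode=tasks.Mode.SINGLE_MODEL,
        # environment variables for the Docker container
        environment_variables=sfn.TaskInput.from_object({"SOME_KEY": "some-value"}),
        model_s3_location=tasks.S3Location.from_json_expression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts")
    )
)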
Create Training Job
You can call the CreateTrainingJob API from a Task state.
tasks.SageMakerCreateTrainingJob(self, "TrainSagemaker",
    training_job_name=sfn.JsonPath.string_at("$.JobName"),
    algorithm_specification=tasks.AlgorithmSpecification(
        algorithm_name="BlazingText",
        training_input_mode=tasks.InputMode.FILE
    ),
    input_data_config=[tasks.Channel(
        channel_name="train",
        data_source=tasks.DataSource(
            s3_data_source=tasks.S3DataSource(
                s3_data_type=tasks.S3DataType.S3_PREFIX,
                s3_location=tasks.S3Location.from_json_expression("$.S3Bucket")
            )
        )
    )],
    output_data_config=tasks.OutputDataConfig(
        s3_output_location=tasks.S3Location.from_bucket(s3.Bucket.from_bucket_name(self, "Bucket", "mybucket"), "myoutputpath")
    ),
    resource_config=tasks.ResourceConfig(
        instance_count=1,
        instance_type=ec2.InstanceType(sfn.JsonPath.string_at("$.InstanceType")),
        volume_size=Size.gibibytes(50)
    ),  # optional: default is 1 instance of EC2 `M4.XLarge` with `10GB` volume
    stopping_condition=tasks.StoppingCondition(
        max_runtime=Duration.hours(2)
    )
)

Create Transform Job
You can call the CreateTransformJob API from a Task state.
tasks.SageMakerCreateTransformJob(self, "Batch Inference",
    transform_job_name="MyTransformJob",
    model_name="MyModelName",
    model_client_options=tasks.ModelClientOptions(
        invocations_max_retries=3,  # default is 0
        invocations_timeout=Duration.minutes(5)
    ),
    transform_input=tasks.TransformInput(
        transform_data_source=tasks.TransformDataSource(
            s3_data_source=tasks.TransformS3DataSource(
                s3_uri="s3://inputbucket/train",
                s3_data_type=tasks.S3DataType.S3_PREFIX
            )
        )
    ),
    transform_output=tasks.TransformOutput(
        s3_output_path="s3://outputbucket/TransformJobOutputPath"
    ),
    transform_resources=tasks.TransformResources(
        instance_count=1,
        instance_type=ec2.InstanceType.of(ec2.InstanceClass.M4, ec2.InstanceSize.XLARGE)
    )
)

Create Endpoint
You can call the CreateEndpoint API from a Task state.
tasks.SageMakerCreateEndpoint(self, "SagemakerEndpoint",
    endpoint_name=sfn.JsonPath.string_at("$.EndpointName"),
    endpoint_config_name=sfn.JsonPath.string_at("$.EndpointConfigName")
)

Create Endpoint Config
You can call the CreateEndpointConfig API from a Task state.
tasks.SageMakerCreateEndpointConfig(self, "SagemakerEndpointConfig",
    endpoint_config_name="MyEndpointConfig",
    production_variants=[tasks.ProductionVariant(
        initial_instance_count=2,
        instance_type=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.XLARGE),
        model_name="MyModel",
        variant_name="awesome-variant"
    )]
)

Create Model
You can call the CreateModel API from a Task state.
tasks.SageMakerCreateModel(self, "Sagemaker",
    model_name="MyModel",
    primary_container=tasks.ContainerDefinition(
        image=tasks.DockerImage.from_json_expression(sfn.JsonPath.string_at("$.Model.imageName")),
        mode=tasks.Mode.SINGLE_MODEL,
        model_s3_location=tasks.S3Location.from_json_expression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts")
    )
)

Update Endpoint
You can call the UpdateEndpoint API from a Task state.
tasks.SageMakerUpdateEndpoint(self, "SagemakerEndpoint",
    endpoint_name=sfn.JsonPath.string_at("$.Endpoint.Name"),
    endpoint_config_name=sfn.JsonPath.string_at("$.Endpoint.EndpointConfig")
)

SNS
Step Functions supports Amazon SNS through the service integration pattern.
You can call the Publish API from a Task state to publish to an SNS topic.
topic = sns.Topic(self, "Topic")

# Use a field from the execution data as message.
task1 = tasks.SnsPublish(self, "Publish1",
    topic=topic,
    integration_pattern=sfn.IntegrationPattern.REQUEST_RESPONSE,
    message=sfn.TaskInput.from_data_at("$.state.message"),
    message_attributes={
        "place": tasks.MessageAttribute(
            value=sfn.JsonPath.string_at("$.place")
        ),
        "pic": tasks.MessageAttribute(
            # BINARY must be explicitly set
            data_type=tasks.MessageAttributeDataType.BINARY,
            value=sfn.JsonPath.string_at("$.pic")
        ),
        "people": tasks.MessageAttribute(
            value=4
        ),
        "handles": tasks.MessageAttribute(
            value=["@kslater", "@jjf", None, "@mfanning"]
        )
    }
)

# Combine a field from the execution data with
# a literal object.
task2 = tasks.SnsPublish(self, "Publish2",
    topic=topic,
    message=sfn.TaskInput.from_object({
        "field1": "somedata",
        "field2": sfn.JsonPath.string_at("$.field2")
    })
)

Step Functions
Start Execution
You can manage AWS Step Functions executions.
AWS Step Functions supports its own StartExecution API as a service integration.
# Define a state machine with one Pass state
child = sfn.StateMachine(self, "ChildStateMachine",
    definition=sfn.Chain.start(sfn.Pass(self, "PassState"))
)

# Include the state machine in a Task state with callback pattern
task = tasks.StepFunctionsStartExecution(self, "ChildTask",
    state_machine=child,
    integration_pattern=sfn.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
    input=sfn.TaskInput.from_object({
        "token": sfn.JsonPath.task_token,
        "foo": "bar"
    }),
    name="MyExecutionName"
)

# Define a second state machine with the Task state above
sfn.StateMachine(self, "ParentStateMachine",
    definition=task
)

You can utilize Associate Workflow Executions
via the associateWithParent property. This allows the Step Functions UI to link child
executions from parent executions, making it easier to trace execution flow across state machines.
# child: sfn.StateMachine

task = tasks.StepFunctionsStartExecution(self, "ChildTask",
    state_machine=child,
    associate_with_parent=True
)

This will add the payload AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$: $$.Execution.Id to the
input property for you, which will pass the execution ID from the context object to the
execution input. It requires input to be an object or not be set at all.
Invoke Activity
You can invoke a Step Functions Activity which enables you to have
a task in your state machine where the work is performed by a worker that can
be hosted on Amazon EC2, Amazon ECS, AWS Lambda, basically anywhere. Activities
are a way to associate code running somewhere (known as an activity worker) with
a specific task in a state machine.
When Step Functions reaches an activity task state, the workflow waits for an
activity worker to poll for a task. An activity worker polls Step Functions by
using GetActivityTask, and sending the ARN for the related activity.
After the activity worker completes its work, it can provide a report of its
success or failure by using SendTaskSuccess or SendTaskFailure. These two
calls use the taskToken provided by GetActivityTask to associate the result
with that task.
The following example creates an activity and creates a task that invokes the activity.
submit_job_activity = sfn.Activity(self, "SubmitJob")

tasks.StepFunctionsInvokeActivity(self, "Submit Job",
    activity=submit_job_activity
)

SQS
Step Functions supports Amazon SQS.
You can call the SendMessage API from a Task state
to send a message to an SQS queue.
queue = sqs.Queue(self, "Queue")

# Use a field from the execution data as message.
task1 = tasks.SqsSendMessage(self, "Send1",
    queue=queue,
    message_body=sfn.TaskInput.from_json_path_at("$.message")
)

# Combine a field from the execution data with
# a literal object.
task2 = tasks.SqsSendMessage(self, "Send2",
    queue=queue,
    message_body=sfn.TaskInput.from_object({
        "field1": "somedata",
        "field2": sfn.JsonPath.string_at("$.field2")
    })
)
