New AWS Lambda controls for stream processing and asynchronous invocations

Today AWS Lambda is introducing new controls for asynchronous and stream processing invocations. These new features allow you to customize responses to Lambda function errors and build more resilient event-driven and stream-processing applications.

Stream processing function invocations

When processing data from event sources such as Amazon Kinesis Data Streams and Amazon DynamoDB Streams, Lambda reads records in batches via shards. A shard is a uniquely identified sequence of data records. Your function is then invoked to process records from the batch “in order.” If an error is returned, Lambda retries the batch until processing succeeds or the data expires. This retry behavior is desirable in many cases, but not all:

Until the issue is resolved, no data in the shard is processed. A single malformed record can prevent processing on an entire shard, since the “in order” guarantee means that failed batches are retried until the data record expires. This is often referred to as a “poison pill” event, in that it prevents the rest of the system from processing data.

In some cases, it may not be helpful to retry if subsequent invocations will also fail.

If the function is not idempotent, retrying might produce unexpected side effects, and may result in duplicate outputs (for example, multiple database entries or business transactions).

Figure 1: Default stream invocation retry logs.

The new Lambda controls for stream processing invocations

With the new customizable controls, you can determine how function errors and retries impact stream processing function invocations. A new event source mapping configuration subresource (shown below) allows developers to specify which controls apply to stream processing invocations.

DestinationConfig: {
    OnFailure: {
        Destination: SNS/SQS arn (String)
    }
}

Figure 2: Stream processing destination configuration.

{
    "MaximumRetryAttempts": integer,
    "BisectBatchOnFunctionError": boolean,
    "MaximumRecordAgeInSeconds": integer
}

Figure 3: Stream processing failure configuration.

MaximumRetryAttempts

Set a maximum number of retry attempts for batches before they can be skipped to unblock processing and prevent duplicate outputs.

Minimum: 0 | maximum: 10,000 | default: 10,000

MaximumRecordAgeInSeconds

Define a maximum record age in seconds, with expired records skipped to allow processing to continue. Data records that are not successfully processed within the defined age of submission are skipped.

Minimum: 60 | default/maximum: 604,800

BisectBatchOnFunctionError

This gives the option to recursively split the failed batch and retry on a smaller subset of records, eventually isolating the record causing the error.

Default: false
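The halving-and-retrying idea behind BisectBatchOnFunctionError can be sketched as follows. This is a simplified illustration, not Lambda's internal implementation: in Lambda the whole batch is passed to your function and the service does the splitting, while here a hypothetical per-record handler stands in for the function.

```javascript
// Sketch of bisect-on-error: when a batch fails, split it in two and retry
// each half, recursing until the failing record(s) are isolated. Note that
// records processed before the failure are reprocessed on retry, just as
// with Lambda's batch retries.
function bisectProcess(records, processRecord, failed = []) {
  try {
    // Attempt the whole batch; any throw fails the batch.
    records.forEach(processRecord);
  } catch (err) {
    if (records.length === 1) {
      // Batch of one: this is the poison record; capture it instead of
      // blocking the shard.
      failed.push(records[0]);
      return failed;
    }
    const mid = Math.ceil(records.length / 2);
    bisectProcess(records.slice(0, mid), processRecord, failed);
    bisectProcess(records.slice(mid), processRecord, failed);
  }
  return failed;
}

// Example: one malformed record in a batch of five.
const batch = ['a', 'b', 'poison', 'd', 'e'];
const bad = bisectProcess(batch, (r) => {
  if (r === 'poison') throw new Error('malformed record');
});
console.log(bad); // -> [ 'poison' ]
```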

On-failure Destination

When either MaximumRetryAttempts or MaximumRecordAgeInSeconds reaches its specified value, a record is skipped. If Destination is set, metadata about the skipped records is sent to the target ARN (an Amazon SQS queue or Amazon SNS topic). If no target is configured, the record is dropped.
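As a sketch, the same settings can be applied from the AWS CLI by updating an existing event source mapping; the UUID and queue ARN below are placeholders to replace with your own values:

```shell
# Placeholders: substitute your event source mapping UUID and queue ARN.
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111" \
    --maximum-retry-attempts 1 \
    --maximum-record-age-in-seconds 60 \
    --bisect-batch-on-function-error \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-east-1:123456789012:my-failure-queue"}}'
```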

Getting started with error handling for stream processing invocations

Here you will see how to use the new controls to create a customized error handling configuration for stream processing invocations. You will set the maximum retry attempts to 1, set the maximum record age to 60 seconds, and send the metadata of skipped records to an Amazon Simple Queue Service (SQS) queue. First, create a Kinesis data stream to put records into, and create an SQS queue to hold the metadata of retry-exhausted or expired data records.

Go to the Lambda console and choose Create function .

Ensure that Author from scratch is selected, and enter a function name such as “customStreamErrorExample.”

For Runtime, select Node.js 12.x .

Choose Create function .

To enable Kinesis as an event trigger and to send metadata of a failed invocation to the SQS standard queue, you must give the Lambda execution role the required permissions.

Scroll down to the Execution Role section and select the view link below the Existing role drop-down.

Choose Add inline policy > JSON , then paste the following into the text box, replacing {yourSQSarn} with the ARN of the SQS queue you created earlier and {yourKinesisarn} with the ARN of the stream you created earlier. Choose Review policy .

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sqs:SendMessage",
                "kinesis:GetShardIterator",
                "kinesis:GetRecords",
                "kinesis:DescribeStream"
            ],
            "Resource": [
                "{yourSQSarn}",
                "{yourKinesisarn}"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "kinesis:ListStreams",
            "Resource": "*"
        }
    ]
}

Give the policy a name such as CustomSQSKinesis and choose Create policy .

Navigate back to your Lambda function and choose Add trigger .

Select Kinesis from the drop-down. Select your stream in the Kinesis stream dropdown.

To see the new control options, choose the drop-down arrow on the Additional settings section.

Paste the ARN of the previously created SQS queue as the on-failure destination.

Set the Maximum retry attempts to 1. Set the Maximum record age to 60. Choose Add .

The Kinesis trigger has now been configured and the Lambda function has the required permissions to write to SQS and CloudWatch Logs.

Choose the Lambda icon in the Designer section. Scroll down to the Function code section and paste the following code:

exports.handler = async (event) => {
    // TODO implement
    console.log(event)
    const response = {
        statusCode: 200,
        body: event,dfh
    };
    return response;
};

This code has a syntax error in order to force a failure response.

Next, you will trigger the Lambda function by adding a record to the Kinesis Stream via the AWS CLI. See how to install the AWS CLI if you have not previously done so.

In your preferred terminal run the following command, replacing {YourStreamName} with the name of the stream you have created.

aws kinesis put-record --stream-name {YourStreamName} --partition-key 123 --data testdata

You should see a response containing a shard ID and a sequence number (your sequence number will be different).

In the AWS Lambda console, choose Monitoring > View logs in CloudWatch and select the most recent log group.

As you can see, the Lambda function failed as expected, but this time with only a single retry attempt. This is because the maximum retry attempts value was set to 1 and the record is younger than the maximum record age of 60 seconds.

In the AWS Management Console navigate to your SQS queue, Services > SQS .

Select your queue and choose Queue Actions > View/Delete Messages > Start Polling for Messages .

Here you will see the metadata of your failed invocation. It has successfully been sent to the SQS destination for further investigation or processing.

Asynchronous function invocations

When a function is invoked asynchronously, Lambda sends the event to a queue before it is processed by your function. Invocations that result in an exception in the function code are retried twice, with a delay of one minute before the first retry and two minutes before the second. Some invocations may never run due to a throttle or service fault: these are retried with exponential backoff until the invocation is six hours old.

This retry behavior works well in situations where every invocation must run at least once. However, in situations with a large volume of errors, subsequent invocations are increasingly delayed as the backlog grows, since each new error places another retry event back into the event queue. If it were possible to skip the event without retrying, that delay would be eliminated and new events could continue to be processed.

In order to avoid this default auto-retry policy, there are several widely used approaches:

Function error handling with AWS Step Functions

Using the event handler for error routing logic.

Third party monitoring services for exception alerts

These workarounds require extra resources, code, or custom logic and add latency to your applications. Retries caused by system failures due to timeouts, lack of memory, early exits, etc. could still occur.

New Lambda controls for asynchronous event invocations

New asynchronous event invoke controls mean that functions can now skip the processing of certain events and discard unwanted or backlogged requests from the asynchronous event queue without the need for “workarounds.” The controls will be accessible from the console and from a new Event Invoke Config subresource, listed in detail below:

{
    "MaximumEventAgeInSeconds": integer,
    "MaximumRetryAttempts": integer
}

Figure 4: Asynchronous event invoke configuration.

MaximumRetryAttempts

Set a maximum number of retry attempts for events before they can be skipped to unblock processing and prevent duplicate outputs.

Minimum: 0 | maximum: 2 | default: 2

MaximumEventAgeInSeconds

Define an event maximum age in seconds, with expired events skipped to allow processing to continue. Events that are not successfully processed within the defined age of submission are written to the function’s configured dead-letter queue and/or on-failure destination for asynchronous invocations. If neither is configured, the event is discarded.

Minimum: 60 | maximum: 21600 (6 hours)
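Taken together, the two settings amount to a simple predicate: an asynchronous event is retried only while both limits hold. A minimal sketch of that decision (a simplification; Lambda's internal queueing is more involved, and the function and config names here are illustrative):

```javascript
// Simplified decision: retry an asynchronous event only while both the
// attempt count and the event age are within the configured limits.
// maxRetryAttempts and maxEventAgeSeconds mirror the new settings.
function shouldRetry(attemptsSoFar, eventAgeSeconds, config) {
  return (
    attemptsSoFar < config.maxRetryAttempts &&
    eventAgeSeconds < config.maxEventAgeSeconds
  );
}

const config = { maxRetryAttempts: 2, maxEventAgeSeconds: 3600 };
console.log(shouldRetry(0, 120, config));  // true: first failure, young event
console.log(shouldRetry(2, 120, config));  // false: retries exhausted
console.log(shouldRetry(1, 4000, config)); // false: event too old
```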

These controls can also be configured from the AWS Lambda console, in the “Failure handling configuration” section:
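They can likewise be set from the AWS CLI; a sketch, where the function name and limits below are placeholder examples to adapt:

```shell
# Placeholder function name; adjust the limits to your application's needs.
aws lambda put-function-event-invoke-config \
    --function-name my-async-function \
    --maximum-retry-attempts 0 \
    --maximum-event-age-in-seconds 3600
```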

When the same Lambda function is run with maximum retry attempts set to 0, the Amazon CloudWatch Logs show that the function is only invoked a single time, and not retried after the initial error.

Conclusion

The addition of these new Lambda controls allows you to handle function failures and retries appropriately for both asynchronous and stream processing. This eliminates the need for custom-built logic, alerts, and added overhead.

Developers understand the needs of their own applications, and these new features allow them to decide how to react to errors on a per-function basis. Whether that calls for real-time stream processing, non-blocking event queues, or more retries depends on the application’s requirements. With the new controls in place, the power is in your hands.

You can get started with the new controls for stream processing and asynchronous invocations via the AWS Management Console, AWS CLI, AWS SAM, AWS CloudFormation, or the AWS SDKs.