Things I learned in my first S3-SQS-Lambda project

Johnny Lai
2 min read · Feb 16, 2023
S3-SQS-Lambda

This is a quite common serverless pattern: when something happens in S3, such as a new file being uploaded or a file being deleted, S3 publishes a message to SQS. On the other side, Lambda keeps polling the SQS queue for new messages and processes them. However, there are several behaviors we should bear in mind.

1. Duplicate S3 events

Some people complain that S3 occasionally publishes the same event more than once.

This is not a bug. The official documentation states:

S3 event notifications are delivered at least once
In some cases they may be duplicated

https://repost.aws/knowledge-center/s3-duplicate-sqs-messages

Resolution

According to the official documentation, we can recognize a duplicate by the “sequencer” key: a duplicate notification carries the same sequencer value as the original event for that object.

sequencer key to identify duplicate events

https://repost.aws/knowledge-center/s3-duplicate-sqs-messages
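As a rough illustration, here is a minimal handler sketch that skips duplicate deliveries by remembering the bucket/key/sequencer combination it has already seen. The in-memory Set and the processObject function are assumptions made for this example; in a real system you would keep the dedupe keys in a durable store such as a DynamoDB table.

```typescript
// Minimal sketch, assuming the Lambda is wired to SQS and each SQS message
// body is an S3 event notification. The in-memory Set is only a stand-in for
// a durable store (e.g. a DynamoDB table); processObject is a hypothetical
// placeholder for the real business logic.
import type { SQSEvent } from "aws-lambda";

const seen = new Set<string>(); // survives only for the lifetime of one warm container

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const message of event.Records) {
    const s3Event = JSON.parse(message.body);
    for (const record of s3Event.Records ?? []) {
      const { bucket, object } = record.s3;
      // bucket + key + sequencer identifies one S3 event, so a duplicate
      // delivery of the same event produces the same dedupe key.
      const dedupeKey = `${bucket.name}/${object.key}#${object.sequencer}`;
      if (seen.has(dedupeKey)) {
        continue; // duplicate delivery of an event we already handled
      }
      seen.add(dedupeKey);
      await processObject(bucket.name, object.key);
    }
  }
};

// Placeholder for the real processing step.
async function processObject(bucket: string, key: string): Promise<void> {
  console.log(`processing s3://${bucket}/${key}`);
}
```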

2. Lambda processes SQS messages in batches by default

If Lambda receives more than one message, it processes them as a batch. If any message in the batch fails, the whole batch fails and goes back to the queue, so there is a chance the same message is processed multiple times.

all messages succeed or fail together

https://docs.aws.amazon.com/en_gb/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting

Resolution: Manually delete the messages that were already processed successfully, so they do not appear in the queue again.
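A rough sketch of that idea, assuming the queue URL is exposed through a QUEUE_URL environment variable and handleMessage stands in for the real per-message logic: each message is deleted with DeleteMessageCommand as soon as it succeeds, so a later failure in the batch only sends the unprocessed messages back to the queue.

```typescript
// Minimal sketch. QUEUE_URL is an assumed environment variable and
// handleMessage is a hypothetical placeholder; it may throw on failure.
import { SQSClient, DeleteMessageCommand } from "@aws-sdk/client-sqs";
import type { SQSEvent } from "aws-lambda";

const sqs = new SQSClient({});

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    await handleMessage(record.body); // process one message; throws on failure

    // Delete the message ourselves so it will not reappear if a later
    // message in the same batch fails and the batch is returned to the queue.
    await sqs.send(
      new DeleteMessageCommand({
        QueueUrl: process.env.QUEUE_URL,
        ReceiptHandle: record.receiptHandle,
      })
    );
  }
};

async function handleMessage(body: string): Promise<void> {
  console.log("processing", body);
}
```

The docs page linked above also describes reporting partial batch failures (ReportBatchItemFailures), which achieves a similar effect without issuing the deletes yourself.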

3. SQS returns fewer messages than expected

We call the ReceiveMessage API from Lambda through the AWS SDK, but it returns fewer messages than the console shows…

This is not a bug. It is likely to happen when the queue holds only a few messages, because a single short-polling ReceiveMessage call queries only a subset of the SQS servers.

Only some messages are returned

https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sqs/classes/receivemessagecommand.html

Resolution: Just repeat the request until no more messages are returned.
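For example, a small helper can keep calling ReceiveMessageCommand until an empty response comes back. The drainQueue name is made up for this sketch, and WaitTimeSeconds simply enables long polling so each request waits briefly for messages instead of returning immediately.

```typescript
// Minimal sketch of repeating ReceiveMessage until nothing more comes back.
import {
  SQSClient,
  ReceiveMessageCommand,
  type Message,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

export async function drainQueue(queueUrl: string): Promise<Message[]> {
  const collected: Message[] = [];

  while (true) {
    const { Messages } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10, // the per-request maximum
        WaitTimeSeconds: 5,      // long polling: wait up to 5s for messages
      })
    );

    if (!Messages || Messages.length === 0) {
      break; // nothing returned this time; stop (or retry a few times in practice)
    }
    collected.push(...Messages);
  }

  return collected;
}
```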
