Your DLQ Is Set to 14 Days. Your Messages Are Still Expiring.
March 10, 2026 · DeadQueue Team
You set your DLQ retention to 14 days. You read the docs; you knew the default is only 4 days. You felt good about it.
Then a message expired anyway, and you lost it permanently.
Here’s what happened.
The mechanic
SQS preserves the original enqueue timestamp when it redrives a message to the DLQ.
The message doesn’t get a fresh clock. It keeps the timestamp from when it first arrived on the source queue.
Say a message lands on your source queue, fails processing, and gets retried over 3 days before SQS gives up and moves it to the DLQ. That message arrives in the DLQ with 3 days already burned off its retention period. If your DLQ retention is 4 days (the default), the message has one day left. You have roughly 24 hours to notice it exists and do something about it.
Most on-call rotations don’t run that fast.
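The arithmetic is simple enough to sketch in a few lines (a plain illustration of the rule above, nothing SQS-specific):

```python
# How much DLQ time a redriven message actually has. SQS keeps the
# original enqueue timestamp, so the clock effectively started when
# the message first hit the source queue.
def dlq_time_left(dlq_retention_days: float, days_retrying: float) -> float:
    return max(0.0, float(dlq_retention_days - days_retrying))

# The scenario above: 4-day retention, 3 days of retries -> 1 day left.
print(dlq_time_left(4, 3))   # 1.0
# Matched 14-day periods, 5 days of retries -> 9 days left.
print(dlq_time_left(14, 5))  # 9.0
```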
Why matching periods doesn’t save you
A common response to this is: “I’ll just set both queues to 14 days.” That’s better, but it doesn’t eliminate the problem.
If your source queue is 14 days and your DLQ is also 14 days, a message that spent 5 days retrying on the source queue arrives in the DLQ with 9 days left. That’s more breathing room, but it’s still not 14 days. The DLQ retention never actually equals 14 days of fresh time for the message.
The DLQ retention needs to be longer than the source queue retention. Not equal. Longer. The only way to maximize your window is to set the DLQ to the maximum (14 days) and keep your source queue well under that.
The default is especially bad
SQS default retention is 4 days. Most teams never change it.
A message that fails after 2 days of retries arrives in the DLQ with 2 days left. If you don’t check the DLQ over the weekend, it’s gone Monday morning.
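To make the weekend failure concrete, here's that timeline as a small datetime sketch (the dates are hypothetical; only the arithmetic matters):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=4)  # the SQS default

# Hypothetical timeline: enqueued Thursday morning, retried for two
# days, redriven to the DLQ on Saturday. Expiry still runs from the
# original Thursday enqueue timestamp.
enqueued = datetime(2026, 3, 5, 9, 0, tzinfo=timezone.utc)  # a Thursday
redriven = enqueued + timedelta(days=2)                     # Saturday
expires = enqueued + RETENTION                              # 4 days after enqueue

print((expires - redriven).days)  # days left once it reaches the DLQ: 2
print(expires.strftime("%A"))     # the day it silently disappears: Monday
```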
This is the scenario that surprises teams the most because everything looks set up correctly: the DLQ exists, the redrive policy is configured, CloudWatch shows messages were sent. The alert fired. But by the time someone looks, the messages are gone.
How to check your setup
List all your queues and compare retention periods:
# Get all queue URLs
aws sqs list-queues --query 'QueueUrls[]'
# Check retention for a specific queue
aws sqs get-queue-attributes \
  --queue-url <your-dlq-url> \
  --attribute-names MessageRetentionPeriod
# Check retention for the source queue
aws sqs get-queue-attributes \
  --queue-url <your-source-queue-url> \
  --attribute-names MessageRetentionPeriod
The response is in seconds. 345600 is 4 days. 1209600 is 14 days.
If your DLQ returns anything less than 1209600, you’re leaving time on the table. If your source queue and DLQ return the same value, you have the matched-periods problem described above.
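Both checks are easy to script. Here's a minimal sketch in Python; it takes plain integer values in seconds, as returned by the commands above (fetching them is left to the CLI or boto3):

```python
# Audit the two retention pitfalls described above. Values are in
# seconds, as returned by `aws sqs get-queue-attributes`.
MAX_RETENTION = 1209600  # 14 days, the SQS maximum

def audit_retention(source_seconds: int, dlq_seconds: int) -> list[str]:
    """Return a warning for each misconfiguration found."""
    problems = []
    if dlq_seconds < MAX_RETENTION:
        problems.append("DLQ retention is below the 14-day maximum")
    if dlq_seconds <= source_seconds:
        problems.append("DLQ retention is not longer than the source queue's")
    return problems

# The untouched-defaults case (4 days everywhere) trips both checks.
print(audit_retention(345600, 345600))
# A 4-day source with a 14-day DLQ passes.
print(audit_retention(345600, 1209600))  # []
```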
The fix
Set DLQ retention to 14 days. It costs nothing extra: SQS charges per request, not for storage time, so messages sitting in a DLQ for 14 days instead of 4 don't change your bill.
Here’s a CloudFormation example with the right configuration:
DeadLetterQueue:
  Type: AWS::SQS::Queue
  Properties:
    MessageRetentionPeriod: 1209600 # 14 days

SourceQueue:
  Type: AWS::SQS::Queue
  Properties:
    MessageRetentionPeriod: 345600 # 4 days
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
      maxReceiveCount: 5
Source queue at 4 days, DLQ at 14 days. A message that exhausts retries on day 4 still lands in the DLQ with 10 days to go, because the original enqueue timestamp carries over and the DLQ's 14 days run from first enqueue, not from redrive. That's the setup you want.
If you use Terraform, the same logic applies: message_retention_seconds = 1209600 on the dead letter queue resource.
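As a sketch, the Terraform equivalent might look like this (resource and queue names are illustrative, not from this post):

```hcl
resource "aws_sqs_queue" "dead_letter_queue" {
  name                      = "orders-dlq"
  message_retention_seconds = 1209600 # 14 days
}

resource "aws_sqs_queue" "source_queue" {
  name                      = "orders"
  message_retention_seconds = 345600 # 4 days
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dead_letter_queue.arn
    maxReceiveCount     = 5
  })
}
```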
The timestamp trap is one of those things that only bites you after a real incident. By then you’ve already lost the message. DeadQueue flags retention mismatches automatically during setup, before the first message fails.