AWS LAMBDA

Study notes & sandbox — built from the interview exercise

OVERVIEW

A study site built on top of a working Lambda + MinIO sandbox. Read the page, run the code, break things on purpose.

What this is

The repo at the root of this site (ethics/) holds a Python AWS Lambda function — lambda_function.py — that lists PDFs in an S3 bucket under a prefix, paginates, generates 15-minute presigned URLs, and writes a JSONL manifest. It runs locally against MinIO via docker compose, with the same handler signature as a real Lambda. This site explains the surrounding mental model in the order you'd want to study it before walking into a Lambda-heavy interview or production rotation.

How it's organised

The sidebar groups topics into four reading orders. Foundations is the picture in your head. Operating covers the day-to-day knobs. Production covers what changes when real users and real money are involved. Reference holds the must-know checklist (Pitfalls), brief orientations on adjacent tools (Glue, Prometheus/Grafana), the hands-on labs (Labs), and the repo tree (Repository).

How to use it

  1. Read top-to-bottom — the order in the sidebar is the recommended study path.
  2. Run the sandbox. make install && make up && SOURCE_DIR=<dir> make seed && make invoke. The handler executes locally against MinIO; you can break it without burning AWS credit.
  3. Do the labs. Each one mutates the existing app: deploy to real AWS, add an S3 trigger, switch to arm64, enable Provisioned Concurrency, fan out across prefixes with Step Functions, and so on.
  4. Skim Pitfalls the night before any interview or design review.

System overview

Caller → handler → MinIO/S3 → manifest write-back. The async producer/consumer overlaps S3 LIST calls with presigning + JSONL writes, so the manifest streams to /tmp rather than buffering in memory.

[Diagram: system overview — legend: real / live · ephemeral / caveat · Lambda boundary · pitfall]

MENTAL MODEL

Lambda is a Linux process whose lifecycle is managed for you. Most of the surprise comes from forgetting that it's still a process.

What Lambda actually is

Each invocation runs inside an execution environment: a Firecracker microVM running the Lambda runtime (e.g. python3.13), with your code unpacked into /var/task and an ephemeral /tmp. AWS owns the VM; you own everything inside the process. The microVM is created on demand, kept warm for a while, then torn down when idle traffic stops feeding it. You don't pick a server, but there is a server, and it has memory, a clock, and a filesystem.

The two phases

Every cold start splits cleanly into two:

  • Init phase — your module-level code runs once: imports, client construction, anything outside the handler function. Capped at 10 s. Billed at full configured memory. The os.environ reads at the top of lambda_function.py happen here.
  • Handler phase — handler(event, context) runs once per invocation. Billed per-millisecond at configured memory. Subsequent invocations on the same environment skip the init phase and go straight here.

This split is the single most useful thing to internalise. Heavy work at module level → pay it once per cold start. Heavy work inside the handler → pay it every invocation.

Globals persist across warm invocations

Anything assigned at module scope survives between handler calls on the same environment. That includes the boto3 client (good — connection reuse, TCP keep-alive, no re-handshake) and any in-memory cache you build (good — but be careful, see Pitfalls). It also includes mutations you didn't mean to keep, like a list you appended to without thinking. The same warm container can serve thousands of invocations in a row, then disappear.

# module level — runs once per cold start, reused across warm invocations
BUCKET   = os.environ["BUCKET_NAME"]
ENDPOINT = os.environ.get("S3_ENDPOINT_URL")

# handler level — runs every invocation
def handler(event, context):
    return asyncio.run(_run())

/tmp is real but local

Each environment has its own /tmp (default 512 MB, configurable to 10 GB). It persists across warm invocations on that environment, so you can stash artefacts you'd rather not rebuild — but it is not shared between concurrent executions, and it's gone when the environment dies. lambda_function.py writes /tmp/<uuid>.jsonl per invocation and uploads it to S3 at the end; the file then becomes garbage, and the next invocation starts fresh.
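
A minimal sketch of the per-invocation naming pattern (the cleanup step is an optional addition, not something the repo's handler necessarily does):

import os
import uuid

def handler(event, context):
    # unique per invocation — a fixed name like /tmp/output.jsonl would collide
    # with files left behind by earlier warm invocations on the same environment
    manifest_path = f"/tmp/{uuid.uuid4()}.jsonl"
    try:
        with open(manifest_path, "w") as fh:
            fh.write('{"example": "line"}\n')   # stream JSONL lines here, then upload to S3
    finally:
        if os.path.exists(manifest_path):
            os.remove(manifest_path)            # keep /tmp from filling up over many warm calls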

Concurrency is horizontal

If two events arrive while one is being processed, AWS spins up a second execution environment. Each environment processes one invocation at a time, single-threaded relative to your handler. The "concurrency" you see in CloudWatch is the count of environments running in parallel. There is no thread pool to tune. There is no shared memory between environments. If you need shared state, externalise it (DynamoDB, Redis, S3).

The reuse window

Idle environments stick around for roughly 5–15 minutes (AWS doesn't promise a number) before being recycled. That's why a function that sees one request a minute almost never cold-starts, and a function that sees one a day always does. Cold Starts covers what that costs and how to mitigate it.

Lifecycle

Init is paid once, handler is paid every time. Freeze/thaw is free. Shutdown happens when nobody's looking.

[Diagram: Lambda execution environment lifecycle]

LIMITS — CHEATSHEET

Every number worth memorising. The "why it matters" column is the part interviews actually probe.

Per-function compute & storage

Limit | Default | Max | Why it matters
Memory | 128 MB | 10 240 MB | CPU scales linearly with memory. More memory ≠ just more headroom — at ~1 769 MB you get a full vCPU; at higher tiers, multiple. Often cheaper to bump memory because duration drops faster than cost rises.
Timeout | 3 s | 900 s (15 min) | The 3 s default is too short for almost anything that talks to S3. Set it explicitly; don't accept the default. API Gateway caps at 29 s no matter what your function says (see below).
Ephemeral storage (/tmp) | 512 MB | 10 240 MB | Persists across warm invocations on the same env, vanishes on cold start. Not shared between concurrent envs. Pay per-invocation for >512 MB.
Init phase | — | 10 s (hard cap) | Module-level code (imports, client construction). Heavy ML model loads, custom JIT warm-ups — measure them or you'll trip this.

Payloads & responses

Limit | Value | Why it matters
Sync invocation request | 6 MB | Hard cap on the event body for RequestResponse invocations.
Sync invocation response | 6 MB | Oversized responses are rejected — your handler "succeeds" but the caller gets a 413. lambda_function.py sidesteps this by returning a manifest URL instead of inlining all presigned URLs.
Async invocation event | 256 KB | For Event invocations and most event-source-mapped triggers (S3, EventBridge, SNS).
Response streaming | 20 MB (soft) / unlimited with a bandwidth cap | Function URLs with response streaming break the 6 MB cap by flushing chunks. Not all clients/SDKs support it.
Environment variables | 4 KB total | Per function, all keys + values combined. Big config → Parameter Store / Secrets Manager.
Event size (SQS, SNS, EventBridge) | 256 KB each | Producer-side limit. Larger payloads → store in S3, send a pointer.

Packaging

Limit | Value | Why it matters
Zip upload (direct) | 50 MB | Above this you must upload via S3 first.
Zip unzipped (function + layers) | 250 MB | Total of /var/task + all layers extracted. aioboto3 + deps is ~50 MB; you have headroom but not infinite.
Container image | 10 GB | Per image. Preferred when you'd otherwise blow the 250 MB zip ceiling — e.g. ML deps with native binaries.
Layers | 5 per function | Ordering matters: later layers overwrite earlier. Layers count toward the 250 MB unzipped cap.

Concurrency & scaling

Limit | Default | Notes
Account concurrent executions | 1 000 / region | Soft quota — request an increase via Service Quotas. The single most common throttling cause in production.
Burst concurrency | 500–3 000 (region-dependent) | How many fresh environments AWS will spin up immediately at a traffic spike. Beyond this, scale-up is +500 envs/min.
Reserved concurrency | 0 to account quota | Carves a slice of the account pool for a function. Setting it to 0 effectively disables the function.
Provisioned concurrency | 0 by default | Pre-warmed envs. Eliminates cold starts at the cost of paying for idle capacity. Bills as PC-seconds + invocation cost.

Time & rate limits at the edges

Surface | Limit | Why it matters
API Gateway integration timeout | 29 s | Caps your effective Lambda timeout when fronted by API GW, regardless of what the Lambda timeout says. Function URLs allow up to 15 min.
Async invocation event age | 6 h | If retries don't succeed in this window, the event is dropped (or sent to a DLQ / on-failure destination).
Async retry attempts | 2 (default) | Total of 3 attempts (initial + 2). Configurable down to 0.
SQS visibility timeout requirement | ≥ 6× function timeout | AWS recommendation. Otherwise messages reappear while still being processed.
Memorisation hack. Three numbers cover most interview questions: 15 minutes (timeout), 10 GB (memory and /tmp ceiling), 6 MB (sync payload). Everything else is a footnote until you hit a specific design.

COLD STARTS

Init Duration vs warm path. Mitigations: Provisioned Concurrency, arm64, lazy imports, smaller packages, SnapStart.

[Diagram: cold vs warm invocation timeline]

What triggers a cold start

A cold start happens whenever Lambda must create a new execution environment: the very first request after a deployment, when traffic spikes beyond the number of warm environments, and after an environment has been idle long enough to be recycled (typically 5–15 minutes, unspecified by AWS). Deployments always cold-start the incoming version — you can't avoid the first one, only reduce how long it takes.

The cold path

AWS provisions a Firecracker microVM, downloads and unpacks your code (or pulls the container image), starts the language runtime, then runs your module-level code. Only after all of that does your handler function get called. The timeline is roughly:

  1. Environment provisioning — microVM boot, network attachment, filesystem mount. Not billed; AWS absorbs this.
  2. Init phase — your module-level code: imports, client construction, config reads. Billed at full configured memory. Capped at 10 s.
  3. Handler phase — handler(event, context) runs. Billed per-ms.

CloudWatch shows this split: the REPORT line includes Init Duration only on cold invocations. Warm invocations have no Init Duration line.

Typical numbers

Runtime | Typical cold start (p50) | Typical cold start (p99)
Python 3.13 (zip, minimal deps) | ~150 ms | ~400 ms
Python 3.13 (zip, aioboto3 + aiofiles) | ~300 ms | ~700 ms
Node.js 22 | ~100 ms | ~300 ms
Java 21 (without SnapStart) | ~1–2 s | ~3–5 s
Java 21 (SnapStart enabled) | ~200 ms | ~600 ms
Container image (any runtime) | +100–300 ms | first pull can be 1–3 s

Mitigations

Provisioned Concurrency (PC) — pre-warms N environments so they're always in the "warm" state. Eliminates cold starts for the provisioned slots. You pay for those slots 24/7 even when idle. Use for latency-sensitive, predictable-traffic paths. Schedule PC changes via Application Auto Scaling for cost efficiency.

arm64 — Graviton2 executes the init phase ~10% faster than x86_64 for CPU-bound init work. Combined with the ~20% price reduction, arm64 is the default choice unless native wheels block you.

Smaller packages — Lambda downloads and unpacks your zip on every cold start. Trimming unused transitive dependencies (audit the dependency tree with pipdeptree) and stripping test/doc files shaves real time. Every MB of extracted code costs a few ms.

Lazy imports — move rarely-used or slow imports inside the handler (or into a lazy-init guard). The most common win is heavy ML libraries only needed for inference: import them on first call, cache the result in a module-level variable.
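
A sketch of the lazy-init pattern; onnxruntime and the model path are placeholders for whatever heavy dependency applies, not something this repo uses:

_session = None

def _get_session():
    global _session
    if _session is None:
        # heavy import deferred to the first invocation that actually needs it
        import onnxruntime
        _session = onnxruntime.InferenceSession("/opt/model.onnx")
    return _session

def handler(event, context):
    if event.get("needs_inference"):
        session = _get_session()   # first call pays the import; later warm calls reuse the cached object
    return {"ok": True}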

SnapStart — takes a snapshot of the initialised runtime state after your init phase, then restores from that snapshot on cold starts. For Java it collapses 1–5 s of JVM startup to ~200 ms. Originally Java-only; AWS has since extended SnapStart to Python 3.12+ and .NET 8, though (unlike the Java flavour) those incur an extra charge.

When cold starts don't matter: batch jobs, async event pipelines, scheduled tasks — nobody is waiting on the p99. Only optimise cold starts when a human is waiting synchronously for the response.

CONCURRENCY

Account quota, reserved, provisioned. The "100 RPS × 200 ms" math.

The fundamental model

Lambda concurrency = the number of execution environments processing requests at the same instant. Each environment handles exactly one invocation at a time. There is no thread pool, no event loop shared across invocations — if two requests arrive simultaneously, AWS spins up two separate environments.

The key formula: concurrency ≈ RPS × average duration (in seconds). At 100 requests/s with a 200 ms average handler duration, you need 100 × 0.2 = 20 concurrent environments. At 500 ms average, you need 50. At 2 s average, 200 — and so on. Latency optimisation directly reduces your concurrency footprint.
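
The same arithmetic as a throwaway helper — nothing Lambda-specific, just the formula:

def required_concurrency(rps: float, avg_duration_ms: float) -> float:
    # concurrency ≈ requests per second × average duration in seconds
    return rps * (avg_duration_ms / 1000)

required_concurrency(100, 200)    # 20 environments
required_concurrency(100, 2000)   # 200 — a 10× slower handler needs a 10× bigger pool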

Account concurrency pool

Every AWS account has a regional concurrency quota — default 1 000 concurrent executions per region, shared across all functions. When the pool is full, new invocations get throttled (sync → HTTP 429 TooManyRequestsException; async → queued and retried). Raising the limit requires a Service Quotas increase request; AWS typically grants up to 10 000 with a business justification.

This is the single most common production surprise: one function spikes and starves all others in the same region. Reserved concurrency is the fix.
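
A sketch of both uses of reserved concurrency with boto3 (the function names are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# carve out a guaranteed slice for the critical function (also caps it at 100)
lambda_client.put_function_concurrency(
    FunctionName="checkout-handler",
    ReservedConcurrentExecutions=100,
)

# circuit breaker: zero blocks every new invocation without deleting anything
lambda_client.put_function_concurrency(
    FunctionName="runaway-batch-job",
    ReservedConcurrentExecutions=0,
)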

Types of concurrency

Type | What it does | Cost | Use for
Unreserved | Draws from the shared regional pool on demand | Invocation + duration only | Most functions
Reserved | Carves a slice of the regional pool exclusively for this function; acts as both a floor and a ceiling | No extra charge | Protecting critical paths from noisy neighbours; capping cost runaways
Provisioned | Pre-warms N environments; they stay initialised 24/7 | PC-hours + invocation | Latency-sensitive functions where cold starts are unacceptable

Reserved concurrency edge cases

  • Setting reserved concurrency to 0 disables the function entirely — useful as a circuit breaker.
  • Reserved concurrency counts against the account pool even when idle. If you set 500 reserved on a function, only 500 remain for all other functions (at default 1 000).
  • Reserved concurrency does not pre-warm. You still cold-start; you just can't scale past the cap.

Burst scaling

When traffic spikes from zero, Lambda can spin up environments quickly — but not infinitely fast. The classic burst limit (region-dependent, typically 500–3 000 immediate) is how many environments AWS will create right now; beyond that it adds 500 new environments per minute. (AWS announced in late 2023 that scaling is now per-function — roughly 1 000 new environments every 10 seconds per function — so check the current docs for exact figures.) Either way, a spike from 0 to 5 000 concurrent requests takes time to fully absorb. Provisioned Concurrency or pre-warming via a ping mechanism is the fix for sudden large spikes.

Interview answer template: "Concurrency = RPS × duration. Default pool is 1 000/region. Reserved carves a slice and prevents both starvation and runaway. Provisioned pre-warms to eliminate cold starts, but you pay for idle capacity."

TRIGGERS

Fan-in catalogue: API GW, Function URL, S3, SQS, SNS, EventBridge, DynamoDB streams, Kinesis, ALB, schedule, Step Functions.

Three invocation models

Every trigger falls into one of three models, and the model determines retry behaviour, error handling, and whether the caller can see the response.

Model | Caller behaviour | Retries on error | Max event size
Synchronous | Blocks for response; gets result or error directly | None — caller decides | 6 MB request + response
Asynchronous | Gets 202 immediately; Lambda queues + retries internally | 2 retries (3 total) over up to 6 h | 256 KB event
Poll-based (ESM) | Lambda polls the source on your behalf; batches records | Keeps retrying until success or the record expires / goes to a DLQ | Depends on source

Trigger catalogue

Trigger | Model | Key notes
API Gateway (REST / HTTP) | Sync | 29 s integration timeout regardless of Lambda timeout. HTTP API is cheaper and lower-latency than REST API. Transforms request/response.
Function URL | Sync | Direct HTTPS endpoint on the function; no API Gateway layer. Supports up to 15 min timeout and response streaming. Simpler, cheaper, fewer features.
ALB (Application Load Balancer) | Sync | Routes at L7 like API GW; useful when Lambda is one target among EC2/ECS targets. 29 s timeout.
S3 event notification | Async | Fires on object create/delete/etc. At-least-once delivery. A PUT creates exactly one event per object, but notifications can duplicate. Common pattern: S3 → SNS → SQS → Lambda for fan-out + replay.
SNS | Async | Fan-out: one message → multiple subscribers. At-least-once. Dead-letter queue on the subscription, not the topic.
EventBridge (CloudWatch Events) | Async | Event bus with content-based routing rules. Also the managed scheduler (cron/rate expressions; timezone-aware via EventBridge Scheduler since 2022). At-least-once.
SQS | Poll-based (ESM) | Lambda polls and batches (up to 10 000 msgs). Standard: at-least-once, unordered. FIFO: ordered per message group, exactly-once with dedup. Visibility timeout should be ≥ 6× function timeout. Partial batch failure via batchItemFailures.
Kinesis Data Streams | Poll-based (ESM) | One concurrent Lambda invocation per shard (by default). Records expire (24 h–1 yr); Lambda retries until success or expiry. Use bisect-on-error and batchItemFailures to avoid one bad record blocking an entire shard.
DynamoDB Streams | Poll-based (ESM) | Captures item-level changes. Ordered per partition key. 24 h retention. Same retry behaviour as Kinesis. Use for CDC (change-data-capture) patterns.
Step Functions | Sync (Task state) | Step Functions calls the function synchronously and waits for the result. Retries and timeouts are defined in the state machine, not Lambda. See the Step Functions section.
Cognito / SES / IoT etc. | Sync or Async | Service-specific; check the docs for each. Cognito triggers (pre-signup, pre-token) are sync and block the auth flow.

Choosing between SQS and SNS+SQS

Use plain SQS → Lambda when you have one consumer and want to buffer, batch, and retry. Use SNS → SQS → Lambda when you need fan-out (multiple independent consumers each get a copy) or when the producer is an AWS service that speaks SNS natively (S3 event notifications, for example). The SNS layer decouples producers from the queue topology.
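
What the nesting looks like to the consumer — a sketch of unwrapping an S3 → SNS → SQS → Lambda event (assumes SNS raw message delivery is off, so the SNS envelope is present):

import json

def handler(event, context):
    for sqs_record in event["Records"]:                  # SQS delivers a batch of records
        sns_envelope = json.loads(sqs_record["body"])    # each body is the SNS envelope
        s3_event = json.loads(sns_envelope["Message"])   # whose Message is the original S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(bucket, key)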

IAM & PERMISSIONS

Execution role vs resource policy. The two policies most people confuse.

Two independent permission layers

Lambda has two separate permission surfaces that must each be correct independently. Confusing them is the most common "it works locally but not in AWS" failure.

Layer | Question it answers | Who creates it
Execution role | What can this Lambda function do once running? (call S3, write to DynamoDB, publish to SNS…) | You — attached at function creation
Resource policy | Who is allowed to invoke this Lambda function? (API Gateway, another account, EventBridge…) | AWS adds it automatically for most triggers; you add it for cross-account or manual grants

Execution role

The execution role is an IAM role that Lambda assumes when running your function. Every Lambda must have one. The role's attached policies determine what AWS API calls the function can make. At minimum, every function needs:

# minimum: write its own logs
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents

Common additions for a function that reads/writes S3:

s3:GetObject
s3:PutObject
s3:ListBucket        # needed for paginator; often forgotten
kms:Decrypt          # if the bucket uses a CMK, this is also required

The AWSLambdaBasicExecutionRole managed policy covers logs only — it is intentionally minimal. AWSLambdaVPCAccessExecutionRole adds the ENI permissions needed when the function is in a VPC.
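
A minimal policy sketch showing the bucket-ARN vs object-ARN split that trips people up (the bucket name is hypothetical); the logging permissions would sit alongside it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-pdf-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-pdf-bucket/*"
    }
  ]
}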

Resource policy

The resource policy is attached to the Lambda function itself (not an IAM identity). When you add an S3 event notification or API Gateway integration in the console, AWS automatically adds a resource policy entry allowing that service to invoke the function. For cross-account invocations you add this manually via aws lambda add-permission.

# grant another account permission to invoke
# (--principal is the other AWS account's ID)
aws lambda add-permission \
  --function-name my-function \
  --principal 123456789012 \
  --action lambda:InvokeFunction \
  --statement-id cross-account-invoke

Common mistakes

  • Missing s3:ListBucket on the bucket resource. ListObjectsV2 requires this on the bucket ARN (not the object ARN). Forgetting it causes AccessDenied on the paginator even when GetObject works fine.
  • Wrong resource ARN scope. s3:GetObject must be on arn:aws:s3:::bucket-name/*; s3:ListBucket must be on arn:aws:s3:::bucket-name. Swapping them is a frequent typo.
  • CMK not in execution role. KMS-encrypted bucket objects require both s3:GetObject and kms:Decrypt. The KMS key policy must also allow the role. Two separate policy documents, two separate denial points.
  • No resource policy for new trigger. If you wire up EventBridge manually (not via the console), the trigger silently fails because there's no resource policy entry granting EventBridge lambda:InvokeFunction.

Diagnosing permission errors

CloudTrail is the ground truth. Filter by errorCode: "AccessDenied" and userIdentity.arn matching the execution role ARN. The event tells you exactly which action on which resource was denied. CloudWatch will show the error in the Lambda log if you let the exception propagate, but CloudTrail shows it even when the call is made from a library that swallows the error.

PACKAGING

Zip vs layers vs container images. arm64 vs x86_64. Native wheels.

Three deployment formats

Format | Size limit | Best for | Caveats
Zip (direct) | 50 MB upload / 250 MB unzipped | Most Python/Node functions with pure-Python or pre-built wheels | Wheels must be built for Lambda's architecture and OS
Zip via S3 | 250 MB unzipped | Same as above but when the zip exceeds 50 MB | S3 bucket must be in the same region
Layers | 250 MB total (function + all layers) | Shared dependencies across functions (e.g. a company-wide logging layer) | Max 5 layers per function; later layers overwrite earlier ones
Container image | 10 GB | ML models, native binary deps, custom runtimes | Slower first cold start (image pull); larger attack surface

Layers in practice

A layer is a zip file that Lambda extracts into /opt before running your function. Your code in /var/task can import from /opt/python (for Python) without any path manipulation. Use cases:

  • Shared internal libraries deployed independently of business logic
  • Large dependencies that change rarely (numpy, pandas) — cache them in a layer so deployments of the business logic are fast
  • AWS-provided layers: Lambda Insights extension, X-Ray SDK

Layers count toward the 250 MB unzipped limit. If you have 5 layers at 40 MB each and your function zip is 50 MB, you're at 250 MB — no room left.

Container images

Container images must be based on AWS-provided base images (public.ecr.aws/lambda/python:3.13) or implement the Lambda Runtime Interface. They must be stored in ECR (Elastic Container Registry) in the same region. The Lambda service caches images on the underlying host after the first pull, so subsequent cold starts on the same host are fast — but the very first invocation after a new image is deployed can be slow for large images.

Container images bypass the 250 MB unzipped limit, which is why they're the standard choice for Python ML workloads that bundle PyTorch or TensorFlow.

arm64 vs x86_64

Graviton2-based arm64 is ~20% cheaper per GB-second than x86_64 and typically faster at compute-heavy work. The decision tree:

  1. Check all your dependencies for arm64 wheels: pip download --platform manylinux2014_aarch64 --only-binary :all: -r requirements.txt. If any fail, you either build from source (needs Dockerfile) or stay on x86.
  2. For pure-Python deps and most modern packages, arm64 works out of the box.
  3. Native extensions (cryptography, numpy, psycopg2) have arm64 wheels on PyPI since ~2022. Check the exact version you need.

Building for Lambda (the common foot-gun)

Lambda runs on Amazon Linux 2023. pip install on macOS produces wheels compiled for macOS, which will segfault or import-error on Lambda. The correct approach:

# build inside the Lambda runtime image
# (--entrypoint "" overrides the image's Lambda entrypoint so pip runs directly)
docker run --rm --entrypoint "" \
  -v "$PWD":/var/task \
  public.ecr.aws/lambda/python:3.13 \
  pip install -r requirements.txt -t python/

zip -r layer.zip python/

This is also where architecture matters: use the :3.13-arm64 tag when building for arm64.

This project uses a zip deployment. aioboto3 and aiofiles are pure-Python and have no native extensions, so they build cleanly on any architecture. The Makefile's install target creates a local .venv for development; a real CI pipeline would build the deployment zip inside the Lambda image.

VPC & NETWORKING

When to put Lambda in a VPC (rarely). ENI cold start cost. NAT money pit.

Default: no VPC

By default, Lambda runs in an AWS-managed network with internet access. It can reach S3, DynamoDB, SQS, and other AWS services via their public endpoints. Do not put Lambda in a VPC unless you have a specific reason. Most applications don't need it.

When you actually need VPC

  • Connecting to RDS or Aurora (which live in a private subnet)
  • ElastiCache (Redis/Memcached) — VPC-only by design
  • Private REST APIs or internal services on private subnets
  • Compliance requirements mandating network isolation

S3, DynamoDB, SQS, SNS, and most AWS managed services do not require VPC placement — they're public services with public endpoints.

ENI attachment and cold start

When Lambda is VPC-attached, each execution environment gets an Elastic Network Interface (ENI) in your VPC. Pre-2019, ENIs were allocated per cold start, adding 10–30 s to init. AWS fixed this in 2019 with hyperplane ENIs shared across environments — today the VPC cold start penalty is ~100–500 ms on the first cold start of a new deployment, then negligible. It's no longer the dealbreaker it used to be, but it's not zero.

Subnet and AZ placement

Specify at least two subnets in different AZs for availability. Lambda will distribute environments across AZs. If a subnet runs out of available ENI slots (IP exhaustion), Lambda scaling fails — size subnets with this in mind. A /24 (251 usable IPs after AWS reserves 5 per subnet) is often too small for high-concurrency functions.

The NAT money pit

VPC Lambda can't reach the internet by default. If your function needs to call an external API or reach an AWS service without a VPC endpoint, you need a NAT gateway in a public subnet. NAT gateways cost:

  • $0.045/hour (~$32/month) just to exist, per AZ
  • $0.045/GB of data processed

A function that sends 100 GB/month through NAT costs $4.50 in data alone, on top of the always-on hourly charge. Two AZs for HA = ~$64/month base cost before a single byte of traffic. This is frequently the largest unexpected cost in VPC Lambda setups.

VPC endpoints: the free alternative

For AWS services, VPC endpoints bypass NAT and the public internet entirely. Two types:

  • Gateway endpoints — S3 and DynamoDB only. Free. Route table entries. No data charge.
  • Interface endpoints (PrivateLink) — any AWS service. $0.01/AZ/hr + $0.01/GB. Expensive for high throughput but often cheaper than NAT for AWS-service-heavy workloads.

For a VPC Lambda that only talks to S3 and DynamoDB: create gateway endpoints for both → no NAT needed → near-zero networking cost.
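
A sketch of creating the S3 gateway endpoint with boto3 (VPC ID, route table ID, and region are placeholders):

import boto3

ec2 = boto3.client("ec2")

# gateway endpoint: VPC Lambdas reach S3 via the route table entry, no NAT involved
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc12345678",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def12345678"],
)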

Security groups

VPC Lambda gets a security group. Outbound rules control where it can connect. The security group of RDS/ElastiCache must allow inbound from the Lambda security group. A common pattern is to create a dedicated Lambda SG and reference it in the database SG's inbound rules — this avoids IP-range rules that break when Lambda ENIs change.

OBSERVABILITY

CloudWatch logs, structured JSON, X-Ray, Lambda Insights, EMF. Brief Prometheus/Grafana orientation.

CloudWatch Logs — what you get for free

Every Lambda function automatically writes to a CloudWatch Log Group named /aws/lambda/<function-name>. Each execution environment gets its own Log Stream. Lambda writes two special lines automatically:

START RequestId: abc-123 Version: $LATEST
END RequestId: abc-123
REPORT RequestId: abc-123  Duration: 312.45 ms  Billed Duration: 313 ms
        Memory Size: 256 MB  Max Memory Used: 89 MB
        Init Duration: 423.12 ms   # only on cold starts

The REPORT line is your free performance telemetry. Init Duration appears only on cold invocations. Max Memory Used helps right-size memory configuration.

Retention: Default is "Never Expire." Set it explicitly — 7, 14, or 30 days covers most needs. Every MB of retained logs costs money.
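
Setting retention is one call per log group — a boto3 sketch (the function name is hypothetical):

import boto3

logs = boto3.client("logs")
logs.put_retention_policy(
    logGroupName="/aws/lambda/pdf-scanner",
    retentionInDays=14,
)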

Structured logging

Emit JSON instead of plain strings. CloudWatch Logs Insights can filter and aggregate JSON fields efficiently; plain strings require regex and are slow. Example:

import json, logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(json.dumps({
        "event": "pdf_scan_start",
        "bucket": BUCKET,
        "prefix": PREFIX,
        "request_id": context.aws_request_id,
    }))

With this, Logs Insights can run: filter event = "pdf_scan_start" | stats count() by bin(5m) in seconds.

X-Ray tracing

X-Ray gives you request traces across services — how long the Lambda itself ran vs how long S3 calls took. Three things must all be true:

  1. Tracing enabled on the function — console toggle or TracingConfig: Active in SAM/CDK
  2. X-Ray SDK instrumented in your code — from aws_xray_sdk.core import patch_all; patch_all() wraps boto3 calls automatically
  3. IAM permission — execution role needs xray:PutTraceSegments and xray:PutTelemetryRecords

Without all three, traces are either absent or incomplete. People flip one and conclude X-Ray is broken.
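
The code half of the checklist, as a sketch (assumes the aws-xray-sdk package is bundled in the deployment artefact):

from aws_xray_sdk.core import patch_all

patch_all()   # at module level: wraps boto3/botocore so S3 calls appear as subsegments

def handler(event, context):
    # the segment for this invocation plus the patched S3 subsegments are reported automatically
    return {"ok": True}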

Lambda Insights

Lambda Insights is a CloudWatch feature (not a separate service) that surfaces system-level metrics: CPU usage, memory utilisation, network I/O, disk I/O — things the REPORT line doesn't include. To enable it:

  • Add the Lambda Insights extension layer (arn:aws:lambda:<region>:580247275435:layer:LambdaInsightsExtension:38)
  • Give the execution role the permissions the extension needs — the CloudWatchLambdaInsightsExecutionRolePolicy managed policy covers them

It's useful when you suspect memory or CPU contention but the REPORT line's "Max Memory Used" isn't granular enough.

EMF — Embedded Metrics Format

EMF lets you emit custom CloudWatch metrics by writing structured JSON to stdout. No PutMetricData API call needed — the Lambda runtime parses the log line and publishes the metric asynchronously. This is far more efficient than calling CloudWatch from inside the handler (which adds latency + cost per invocation).

import json, time

def emit_metric(name, value, unit="Count", **dims):
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "MyApp",
                "Dimensions": [list(dims.keys())],
                "Metrics": [{"Name": name, "Unit": unit}]
            }]
        },
        name: value,
        **dims,
    }))

# usage
emit_metric("PDFsProcessed", count, unit="Count", Function="pdf-scanner")

Prometheus & Grafana (brief)

Prometheus uses a pull model — it scrapes HTTP endpoints. Lambda functions are ephemeral and have no persistent HTTP endpoint, so Prometheus can't scrape them directly. Approaches:

  • EMF → CloudWatch → Grafana CloudWatch plugin — easiest; Grafana queries CW as a data source
  • Amazon Managed Prometheus (AMP) + remote_write — Lambda pushes metrics to AMP via the Prometheus remote write API; Grafana (or Amazon Managed Grafana) reads from AMP
  • Statsd/push gateway — Lambda pushes to a persistent push gateway; Prometheus scrapes the gateway. More infra to manage.

For Lambda-centric dashboards, the CloudWatch → Grafana path is usually the simplest to operate.

ASYNC & ERRORS

Sync vs async invoke. Retries, DLQ, destinations, idempotency, partial-batch failures.

Sync vs async invocation

Aspect | Synchronous (RequestResponse) | Asynchronous (Event)
Caller blocks? | Yes — waits for result | No — gets 202 immediately
Response visible to caller? | Yes | No
Retries on error | None (caller's responsibility) | 2 retries = 3 total attempts
Retry backoff | — | ~1 min then ~2 min
Event age limit | — | 6 hours
Max event size | 6 MB | 256 KB

Async retry flow

When Lambda invokes asynchronously and the function throws an unhandled exception (or is throttled), Lambda retries automatically — twice, with exponential backoff starting at ~1 minute. If all three attempts fail, or if the event ages past 6 hours, Lambda sends the event to the configured failure destination or DLQ. If neither is configured, the event is silently dropped.

DLQ vs Destinations

These are two different mechanisms that overlap in purpose but have different capabilities:

Aspect | Dead-Letter Queue (DLQ) | Event Destinations
Introduced | 2016 (legacy) | 2019 (preferred)
Triggers on | Failure only | Success or failure (separate configs)
Payload | The original event only | Original event + result/error + metadata
Targets | SQS or SNS | SQS, SNS, Lambda, EventBridge

Use Destinations for new code. DLQ remains useful when the downstream consumer must be SQS and you don't need success notifications.
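
Configuring an on-failure destination (and tightening the retry/age defaults) is one API call — a boto3 sketch with hypothetical ARNs:

import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_event_invoke_config(
    FunctionName="pdf-scanner",
    MaximumRetryAttempts=1,            # default 2; 0–2 allowed
    MaximumEventAgeInSeconds=3600,     # drop events after 1 h instead of the 6 h default
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:pdf-scanner-failures"}
    },
)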

Idempotency

Because async invocations retry and most event sources are at-least-once, your handler will occasionally execute more than once for the same logical event. Design handlers to be idempotent — the same input produces the same outcome regardless of how many times it runs.

Standard pattern: use a unique key from the event (S3 ETag + key, SQS MessageId, EventBridge detail.id) as a deduplication key. On first execution, write the key + result to DynamoDB with a TTL. On retry, check DynamoDB first — if already processed, return the cached result without re-running the work.

# pseudo-code
dedup_key = event["Records"][0]["messageId"]
existing = table.get_item(Key={"id": dedup_key})
if existing.get("Item"):
    return existing["Item"]["result"]

result = do_the_work(event)
table.put_item(Item={"id": dedup_key, "result": result, "ttl": now + 86400})
return result

AWS PowerTools for Lambda (Python) has a built-in @idempotent decorator that implements this pattern with DynamoDB.

Partial batch failures (SQS / Kinesis / DynamoDB Streams)

When Lambda processes a batch of records and one record fails, the default behaviour differs by source:

  • SQS (default): if the handler raises an exception, the entire batch is retried. One bad message blocks all others and can cause infinite retry loops.
  • With ReportBatchItemFailures enabled: return a batchItemFailures list containing only the failed message IDs. Lambda re-queues only those; successful messages are deleted.
def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process(record)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

Enable ReportBatchItemFailures in the ESM configuration and always implement partial-batch failure reporting for SQS and Kinesis handlers. A single poison-pill record can otherwise block an entire shard or queue indefinitely.

The idempotency–partial-batch intersection: with partial failures, successful records in the batch are deleted from SQS, but if your function crashes before returning the failure list, the entire batch including the successes gets retried. Idempotency guards must still cover every record, not just the ones in batchItemFailures.

STEP FUNCTIONS

When Lambda alone isn't enough. Standard vs Express. Map state for fan-out. Comparison with Airflow.

When Lambda alone isn't enough

A single Lambda function works well for one discrete task. Problems start when you need to chain multiple tasks, retry selectively, wait on human approval, or fan out across thousands of items. Doing this with Lambda alone means writing orchestration logic inside your functions — tracking state, implementing retry delays, deciding what "done" means. Step Functions externalises that orchestration into a state machine where every state transition is durable, auditable, and resumable.

Reach for Step Functions when you need: sequential steps with state passing, conditional branching, parallel fan-out with join, wait states longer than 15 minutes, or retry-with-exponential-backoff built in.

Standard vs Express workflows

Aspect | Standard | Express
Max duration | 1 year | 5 minutes
Execution semantics | Exactly-once per state | At-least-once
Execution history | Full audit trail in the AWS console | CloudWatch Logs only
Pricing | $0.025 per 1 000 state transitions | Per request (~$1 per 1M) + duration (GB-s)
Use for | Long-running business workflows, human approvals, compliance audit trails | High-volume, short-duration event processing (IoT, streaming)

For most application orchestration, Standard is the right choice — the exactly-once semantic matters when steps have side effects (charging a card, sending an email). Express is for high-throughput pipelines where at-least-once is acceptable and cost per transition is a concern.

Map state for fan-out

The Map state runs the same workflow branch for every item in an array, in parallel. This is the core fan-out primitive. For this project's use case, a Step Functions version could fan out across S3 prefixes — run one Lambda per prefix, collect results in a fan-in step:

{
  "Type": "Map",
  "ItemsPath": "$.prefixes",
  "MaxConcurrency": 10,       // cap parallelism
  "Iterator": {
    "StartAt": "ScanPrefix",
    "States": {
      "ScanPrefix": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:...:function:pdf-scanner",
        "End": true
      }
    }
  }
}

MaxConcurrency: 0 means unlimited — bounded only by the Lambda concurrency pool. Set an explicit cap to avoid saturating the account concurrency quota.

Other useful states

  • Wait — pause for a duration or until a timestamp. The only way to implement delays longer than 15 minutes without polling.
  • Choice — conditional branching on input values. Replaces if/else logic that would otherwise live inside a Lambda.
  • Parallel — run multiple independent branches simultaneously and join their results.
  • Task (SDK integrations) — Step Functions can call DynamoDB, SQS, ECS, Glue, etc. directly without a Lambda wrapper, reducing cost and latency for simple operations.

Step Functions vs Airflow

Aspect | Step Functions | Apache Airflow (MWAA)
DAG definition | JSON/YAML state machine (ASL) | Python code (DAG files)
Scheduling | Event-driven / on-demand; cron via EventBridge | Built-in rich scheduler (cron, data-interval-aware)
Backfill | Manual / custom | First-class, built-in
Operators | AWS services + Lambda (AWS ecosystem only) | 600+ providers: Spark, BigQuery, dbt, Kubernetes… |
Infrastructure | Serverless — zero infra | Managed Airflow (MWAA) starts at ~$400/month
Debugging | Console execution graph; CloudWatch for logs | Airflow UI with task logs, Gantt charts, retries

Step Functions is the right choice when your workflow is AWS-native, event-driven, and you want zero infrastructure. Airflow is the right choice when you need complex scheduling, data-interval backfill, cross-cloud operators, or a data-engineering team that already knows Python DAGs.

COST

Pricing model, memory/cost trade-off, x86 vs arm64, free tier, common surprises.

The pricing formula

Lambda billing has two components, each with a permanent free tier:

Component | x86_64 | arm64 | Free tier (permanent)
Requests | $0.20 / 1M | $0.20 / 1M | 1M / month
Duration | $0.0000166667 / GB-s | $0.0000133334 / GB-s | 400 000 GB-s / month

GB-seconds = memory configured (GB) × duration (seconds). A 512 MB function running for 300 ms = 0.5 × 0.3 = 0.15 GB-s. At 1 million invocations, that's 150 000 GB-s — well inside the free tier.

Duration is billed in 1 ms increments. The old 100 ms minimum is gone (removed in 2020).
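
The same arithmetic as a quick sketch (x86_64 prices from the table above; ignores the free tier):

GB_SECOND_PRICE = 0.0000166667        # x86_64 duration price
REQUEST_PRICE   = 0.20 / 1_000_000    # per request

def monthly_cost(invocations, avg_ms, memory_mb):
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

monthly_cost(1_000_000, 300, 512)     # ≈ $2.70 — and the free tier would absorb all of it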

Memory vs cost: more can be cheaper

CPU scales linearly with memory. A function configured at 1 769 MB gets a full vCPU; below that it's a fraction. Doubling memory often more than halves duration for CPU-bound work, which means the total GB-s cost stays the same or decreases — while latency drops.

AWS Lambda Power Tuning is a Step Functions state machine that automatically benchmarks your function at multiple memory sizes and produces a cost/performance curve. Run it before guessing at the right memory setting. The optimal point is almost never the default 128 MB.

arm64 saves ~20%

arm64 duration pricing is 20% cheaper than x86. Same request price. If your function is compute-bound (not I/O-bound sleeping on S3 calls), arm64 also runs faster, compounding the saving. For I/O-bound functions (like lambda_function.py, which spends most of its time waiting on S3), the duration difference is smaller but the 20% price reduction still applies.

Provisioned Concurrency billing

PC is billed separately: $0.0000097222 per GB-s of provisioned time (x86) — even when idle. If you have 10 × 512 MB environments provisioned for 24 hours: 10 × 0.5 GB × 86 400 s = 432 000 GB-s/day = ~$4.20/day = ~$126/month just for the warm slots, before counting actual invocation cost on top. PC is for latency, not cost — it always increases your bill.

Hidden costs (the real bill)

  • NAT Gateway — $0.045/hr per AZ (~$32/month) + $0.045/GB data. Often the largest line item for VPC Lambda.
  • API Gateway — REST API: $3.50/1M calls. HTTP API: $1/1M. Can dwarf Lambda cost at high RPS.
  • CloudWatch Logs — $0.50/GB ingestion + $0.03/GB storage/month. Verbose Lambda logs accumulate fast; set retention.
  • Lambda Insights — additional CW Logs + custom metrics charges.
  • X-Ray — $5/million traces (after free 100K/month).
  • Data transfer — traffic leaving a region or going through a NAT has per-GB charges.
  • S3 API calls — LIST and GET requests are billed per 1 000. A function that does 10 000 LIST calls/invocation at 1M invocations = 10B API calls = real money.
For this project's function: at 1 000 invocations/day with 500 ms average duration and 256 MB memory, cost is ~$0.002/day — essentially free. Lambda's economics only require attention above ~100K invocations/day with non-trivial memory or duration.

LOCAL DEV

SAM CLI, Lambda RIE, LocalStack, MinIO — when to reach for which.

The local dev problem

Lambda has no local runtime by default. Your only loop without tooling is: zip, upload, invoke, read CloudWatch logs, repeat — minutes per cycle. The tools below collapse that to seconds, with different trade-offs between fidelity, setup cost, and scope.

SAM CLI

What it is: AWS's official local Lambda emulator. Wraps Docker to run your function inside a container that matches the Lambda runtime environment exactly. Also emulates API Gateway.

Commands:

sam local invoke -e event.json          # invoke once
sam local start-api                       # spin up local HTTP API gateway
sam local invoke --debug-port 5858       # attach debugger

Fidelity: high — same Amazon Linux image, same runtime, same filesystem layout. Catches architecture issues (x86 wheel on arm64) that a plain venv misses.

Downsides: requires Docker, slow to start (pulls image on first run), no MinIO/SQS/DynamoDB emulation built in. You wire those up separately.

Lambda Runtime Interface Emulator (RIE)

A lightweight binary embedded in all AWS-provided Lambda base images. When you run the image locally, RIE exposes a local HTTP endpoint that accepts invocations in the Lambda API format. You don't need SAM CLI — just Docker:

docker build -t my-fn .
docker run -p 9000:8080 my-fn
curl -XPOST http://localhost:9000/2015-03-31/functions/function/invocations \
  -d '{"key": "value"}'

Use RIE when you're building container-image Lambdas and want to test them without SAM overhead.

LocalStack

A full AWS mock that emulates Lambda, S3, SQS, DynamoDB, API Gateway, and dozens more services in a single container. Community edition is free; Pro ($35/month) adds more services and persistent state.

When to use: integration tests that span multiple AWS services (e.g. an EventBridge rule that triggers a Lambda that writes to DynamoDB). Without LocalStack you'd need a real AWS account for these tests.

When to avoid: if you only need one service (just S3 → use MinIO; just Lambda → use SAM/RIE). LocalStack's Lambda emulation has occasional edge-case differences from the real runtime.

docker run --rm -p 4566:4566 localstack/localstack
AWS_DEFAULT_REGION=us-east-1 \
  AWS_ACCESS_KEY_ID=test \
  AWS_SECRET_ACCESS_KEY=test \
  aws --endpoint-url=http://localhost:4566 s3 ls

MinIO (this project)

MinIO is an S3-compatible object store that runs locally in Docker. It implements the S3 API precisely enough that boto3/aioboto3 needs only an endpoint_url override to work against it. It is not a Lambda emulator — it replaces S3 only.

make up           # starts MinIO on :9000 (API) and :9001 (console)
SOURCE_DIR=~/pdfs make seed   # uploads PDFs to MinIO
make invoke       # runs lambda_function.py against MinIO via invoke.py

This is the lightest possible local setup: no Docker-in-Docker, no SAM overhead, minimal latency. The function handler runs in your local Python process against a real S3-compatible store. Differences from real Lambda (no execution environment lifecycle, no /tmp isolation between runs) are acceptable for the development loop but not for environment-fidelity tests.
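
The only S3-client change MinIO needs is the endpoint override. A sync-boto3 sketch of the idea (the repo's handler does the equivalent with aioboto3; env var names match those used above, and the credentials default to the MinIO console values):

import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT_URL"),   # http://localhost:9000 for MinIO; unset in real AWS
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID", "minioadmin"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY", "minioadmin"),
)
print(s3.list_objects_v2(Bucket=os.environ["BUCKET_NAME"], MaxKeys=5))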

Decision matrix

Need | Reach for
Fast iteration on handler logic | MinIO + python invoke.py (this project's setup)
Emulate the Lambda runtime + API Gateway locally | SAM CLI
Test a container-image Lambda | Lambda RIE via Docker
Integration test across multiple AWS services | LocalStack
Full-fidelity staging before prod | Real AWS account, separate environment

CI/CD

Aliases, versions, traffic shifting, blue/green. Plain CLI → SAM → CDK → Terraform.

Versions and aliases

Versions are immutable snapshots of a function's code and configuration. When you publish a version (aws lambda publish-version), AWS creates an immutable ARN like arn:…:function:my-fn:7. $LATEST is the only mutable version — always reflects the most recent code upload.

Aliases are named pointers to a version. prod might point to version 7; staging might point to version 8. Event source mappings, API Gateway integrations, and Step Functions tasks should target aliases, not version ARNs — this decouples deployment (publishing a new version) from promotion (updating the alias).

Traffic shifting (blue/green)

An alias can split traffic across two versions with weighted routing:

aws lambda update-alias \
  --function-name my-fn \
  --name prod \
  --function-version 8 \
  --routing-config 'AdditionalVersionWeights={"7"=0.9}'
# result: 10% of prod traffic goes to v8, 90% still to v7

Start at 10% canary, watch error rates in CloudWatch, shift to 50%, then 100%. Rollback is instant: point the alias back to the stable version. No instance drain, no connection draining — Lambda is stateless, cutover is atomic.

CodeDeploy integration

SAM and CDK can wire up CodeDeploy for automatic traffic shifting with automatic rollback on CloudWatch alarms. You declare the deployment preference in the template:

# SAM template.yaml
DeploymentPreference:
  Type: Canary10Percent5Minutes   # 10% for 5 min, then 100%
  Alarms:
    - !Ref ErrorRateAlarm          # rolls back if alarm triggers

CodeDeploy manages the alias weight changes and calls the rollback if the alarm fires — fully automated blue/green without manual traffic management.

Deployment tooling progression

Tool | Good for | Caveats
AWS CLI / SDK | One-off deployments, scripting, deep control | Verbose; no state management; drift-prone at scale
SAM (CloudFormation extension) | Lambda-first projects; built-in local testing; CodeDeploy integration | CloudFormation speed; YAML verbosity; AWS-only
CDK | Complex infra in TypeScript/Python; reusable constructs; type safety | Still compiles to CloudFormation; learning curve; bootstrapping required
Terraform (AWS provider) | Multi-cloud orgs; large existing Terraform estate; strong community modules | No built-in Lambda local testing; plan/apply cycle slower than SAM deploy
Serverless Framework | Multi-cloud serverless; plugin ecosystem | V3 → V4 became paid for teams; community plugins vary in quality

CI pipeline skeleton

# GitHub Actions example
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build zip
        run: |
          docker run --rm --entrypoint "" -v "$PWD":/var/task \
            public.ecr.aws/lambda/python:3.13 \
            pip install -r requirements.txt -t package/
          cd package && zip -r ../function.zip . && cd ..
          zip function.zip lambda_function.py
      - name: Deploy
        run: |
          aws lambda update-function-code \
            --function-name my-fn --zip-file fileb://function.zip
          aws lambda wait function-updated --function-name my-fn
          VERSION=$(aws lambda publish-version --function-name my-fn \
            --query Version --output text)
          aws lambda update-alias --function-name my-fn \
            --name prod --function-version "$VERSION"

The wait function-updated call is important — update-function-code is asynchronous and publish-version must wait for it to complete.

PITFALLS — THE MUST-KNOWS

The list to skim before the next interview or design review. Each item has bitten someone in production.

Execution model

  1. Module-level state leaks across invocations. A list you append to in the handler grows forever on warm calls. A counter you increment is wrong by the second request. If it's mutable and lives at module scope, treat it as either a deliberate cache or a bug.
  2. Handler globals are shared by every invocation on that env, but not across envs. "I cached the result" works locally; in production half your traffic gets the cached value, the other half doesn't, depending on which warm container they hit. Externalise (Redis, DynamoDB) or accept the variance.
  3. /tmp is per-environment, not per-invocation. If you write /tmp/output.json with a fixed name, the next warm invocation finds yesterday's file. Always use a per-invocation suffix (UUID, request ID).
  4. Init phase has a hard 10 s cap. If you import TensorFlow, hydrate a 500 MB model, or do a network call at module scope, you can blow this budget on cold start. Defer expensive work until the first handler call (lazy init), or load large artefacts on demand from S3, EFS, or /tmp.
  5. Async asyncio.run in a sync handler creates a fresh event loop per invocation. Acceptable, but means async clients can't be shared across invocations the way sync boto3 clients can. Profile before assuming async is faster.

Payload & size limits

  1. 6 MB sync response cap is silent. Returning a JSON list of 50 000 items "works" in the function but the API GW caller gets 413. The fix in lambda_function.py — return a presigned URL to a manifest file rather than the full list — is the standard pattern.
  2. API Gateway caps integration time at 29 s. Doesn't matter if your Lambda timeout is 15 minutes. For longer work, return a job ID and poll, or use Function URLs (15 min) with response streaming.
  3. Environment variables max 4 KB total. Big secrets (RSA keys, JSON config blobs) blow this. Parameter Store / Secrets Manager and read on init.

Concurrency & throttling

  1. Default account concurrency is 1 000 per region. Most teams hit this before they realise. Sets a hard ceiling on RPS — at 100 ms latency, that's 10 000 RPS account-wide; at 1 s, 1 000 RPS.
  2. Reserved concurrency = 0 disables the function. Looks weird, used as a circuit breaker.
  3. Provisioned concurrency double-bills. You pay for the warm slots and for invocations against them. Worth it for latency-sensitive paths; wasteful for batch.
  4. Burst limit is regional and finite. A traffic spike from 0 to 5 000 RPS will throttle until AWS scales up at +500 envs/min. Provisioned concurrency or pre-warming is the fix.

Triggers, retries, idempotency

  1. Async invocation retries 2 times by default. Total 3 attempts. If your handler isn't idempotent, you can charge a card three times.
  2. S3, SNS, EventBridge invoke async — at-least-once. Plan for duplicates. SQS standard is also at-least-once. SQS FIFO and Kinesis are exactly-once-ish per shard but with their own quirks.
  3. SQS visibility timeout must be ≥ 6× function timeout. Otherwise the message comes back while you're still processing it, and you do the work twice (or more).
  4. Partial batch failures need explicit signalling. Returning batchItemFailures for SQS/Kinesis tells AWS which records to retry; otherwise the entire batch retries or none does.
  5. API Gateway masks unhandled errors. Throw an unhandled exception behind a proxy integration and the client gets a generic 502 Internal server error; the {"errorMessage": "...", "errorType": "..."} detail shows up only in direct invokes and in the logs. Map errors to proper status codes and response bodies yourself.

Networking, IAM, observability

  1. Putting Lambda in a VPC adds an ENI cold-start penalty (improved a lot in 2019, but still real for first invocation). Only do it if you genuinely need private-subnet resources. Outbound internet from VPC Lambda needs NAT, which costs money 24/7.
  2. S3 access from a VPC Lambda needs a VPC gateway endpoint or NAT. Without one, your S3 calls hang and time out — looks like a code bug, isn't.
  3. CloudWatch log groups default to "Never expire" retention. Verbose Lambdas can rack up real cost in CW Logs alone — set retention (7/14/30 days) on every log group you create.
  4. Lambda execution role is implicit on every action. Forgetting s3:GetObject or kms:Decrypt on the bucket's CMK is the most common "but it works locally" failure. CloudTrail tells you what was denied.
  5. Resource policy vs execution role are different layers. Resource policy says "who can invoke this Lambda"; execution role says "what this Lambda can do". Both must allow.
  6. X-Ray needs an SDK call and tracing enabled on the function and IAM permission. Three switches. People flip one and conclude X-Ray is broken.

Deployment, dependencies, runtimes

  1. The boto3 in the Python runtime lags pip. If you need a recent API (e.g. new S3 features), bundle current boto3 in your zip. The runtime version is "good enough" for stable APIs, "sometimes wrong" for fresh ones.
  2. Native wheels must match Lambda's runtime architecture. pip install on a Mac and zip-uploading cryptography is a classic foot-gun. Build in a Docker image matching public.ecr.aws/lambda/python:3.13.
  3. arm64 saves ~20 % at the same memory but some wheels are still x86-only. Audit your deps before flipping the architecture.
  4. Layers are merge-ordered; later layers overwrite earlier. A "base" layer for your shared dependencies works; conflicting layers silently shadow each other.
  5. Container-image deploys are cached on the Lambda host. First cold start can be slow (image pull); subsequent are normal. Keep images small even though the limit is 10 GB.

Time, scheduling, secrets

  1. EventBridge schedule (cron/rate) is always UTC. "9 AM" in your local time means something different in production. Use the new EventBridge Scheduler (2022) for time-zone-aware schedules.
  2. Async invocations have a 6-hour event age. If retries fail past that, the event is silently dropped unless you've set a DLQ or on-failure destination.
  3. Secrets in env vars are visible to anyone with lambda:GetFunctionConfiguration. Encrypted at rest, plaintext in the console. Use Secrets Manager / Parameter Store for actual secrets.
Skim test: if you can re-state the cold-start split (Init / Handler), the 6 MB / 256 KB / 4 KB / 250 MB / 10 GB constants, and the difference between resource policy and execution role from memory, you'll handle most "tell me about Lambda" interview questions.

ADJACENT

Brief orientation on AWS Glue and Prometheus/Grafana — the secondary gaps from the interview.

AWS Glue

Glue is a managed Spark-based ETL service. Lambda and Glue solve different problems:

Aspect | Lambda | Glue
Runtime model | Serverless; up to 15 min; one handler at a time per env | Managed Spark cluster; hours-long jobs; distributed compute
Data scale | Up to a few GB comfortably | TB to PB natively
Language | Python, Node, Java, Go, custom runtime | PySpark, Scala; Glue Studio for no-code
Startup time | Milliseconds (warm) | 1–2 minutes to provision a Spark cluster
Cost model | Per request + per ms | Per DPU-hour (1 DPU = $0.44/hr); 10-minute minimum billing
Use for | Light transforms, event reactions, API backends | Large-scale joins, aggregations, schema inference on a data lake

Key Glue concepts to know: DynamicFrame (Glue's DataFrame variant with schema flexibility), Glue Catalog (centralised metadata store for table schemas — also used by Athena), Job Bookmarks (Glue tracks processed S3 partitions to avoid reprocessing on incremental runs).

The decision is usually straightforward: if the data fits in Lambda's memory and the job finishes in under 15 minutes, use Lambda. If you're joining multiple large S3 datasets or transforming daily partition files, use Glue.

Prometheus

Prometheus is a pull-based time-series metrics system. It scrapes HTTP /metrics endpoints on a schedule. The fundamental tension with Lambda: Lambda functions are ephemeral — there's no persistent HTTP endpoint to scrape, and the function may be at zero concurrency between invocations.

Options for Lambda → Prometheus:

  • EMF → CloudWatch → Grafana CloudWatch plugin — no Prometheus involved. Grafana reads directly from CloudWatch. Easiest for AWS-native stacks.
  • Remote write to Amazon Managed Prometheus (AMP) — the function pushes metrics to AMP via the Prometheus remote_write API at the end of each invocation. Grafana or Amazon Managed Grafana reads from AMP. Requires building the remote_write payload (snappy-compressed protobuf) and SigV4-signing the request, so it's the most involved option.
  • Push gateway — a persistent intermediate that Lambda pushes to; Prometheus scrapes the gateway. More infrastructure to manage, stale metric risk if the push gateway isn't flushed between invocations.

Grafana

Grafana is a dashboarding layer — it doesn't store data, it queries data sources. Relevant data sources for Lambda observability:

  • CloudWatch — built-in Grafana plugin; queries CW Metrics and CW Logs Insights. Zero extra infrastructure. The standard choice for Lambda metrics (invocations, errors, duration, throttles, concurrent executions).
  • Amazon Managed Prometheus — query via PromQL if you've pushed custom metrics.
  • Amazon Managed Grafana (AMG) — Grafana-as-a-service; integrates with AWS IAM; auto-discovers CW namespaces. Avoids self-hosting Grafana.

For a Lambda-only stack with no existing Prometheus investment, the practical answer is: use EMF for custom metrics, use CloudWatch for the built-in Lambda metrics, and connect Grafana to CloudWatch. It requires no extra infrastructure and gives you dashboards in an hour.
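For the custom-metric half of that, EMF is just a JSON log line in a particular shape. A minimal sketch of what the handler could print for the manifest count; the namespace, dimension, and metric names here are illustrative, not something the repo defines:

import json, time

def emit_pdf_count(count, prefix):
    # CloudWatch Embedded Metric Format: one JSON object on stdout.
    # CloudWatch Logs parses the _aws block and materialises a real metric from it.
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "PdfScanner",                      # illustrative namespace
                "Dimensions": [["Prefix"]],
                "Metrics": [{"Name": "PdfsProcessed", "Unit": "Count"}],
            }],
        },
        "Prefix": prefix,
        "PdfsProcessed": count,
    }))

No API call and no PutMetricData quota: the metric rides along with a log line the function was going to write anyway.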

LABS

Hands-on walkthroughs that modify the existing app. Each mutates what you already have — no throw-away exercises.

Lab 0 — Local sandbox (start here)

Goal: run the full stack locally against MinIO with real PDFs.

  1. make install — creates .venv and installs deps
  2. make up — starts MinIO on :9000 (API) and :9001 (console)
  3. SOURCE_DIR=~/path/to/pdfs make seed — uploads PDFs to MinIO bucket
  4. make invoke — runs invoke.py which calls handler() with a minimal event
  5. Open http://localhost:9001 (minioadmin/minioadmin) and find the generated manifest in the manifests/ prefix

What you can break: set PREFIX to a non-existent prefix and observe the handler returns count=0. Set QUEUE_MAX=1 and observe the backpressure on the producer. Remove S3_ENDPOINT_URL and watch it fail to connect.
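If you'd rather poke at those failure modes from a REPL than edit the Makefile, here is a rough sketch of what make invoke does (the real invoke.py may differ in detail):

import os

# the module reads its config at import time (init phase), so set env vars first
os.environ["S3_ENDPOINT_URL"] = "http://localhost:9000"   # talk to MinIO, not real S3
os.environ["PREFIX"] = "does/not/exist/"                  # provoke the count=0 case

from lambda_function import handler
print(handler({}, None))   # context is unused by this handler, so None is fine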

Lab 1 — Deploy to real AWS

Goal: package and deploy the function to AWS Lambda, invoke it against a real S3 bucket.

  1. Create an S3 bucket and upload sample PDFs to 2026/04/ prefix
  2. Create an IAM execution role with s3:GetObject, s3:PutObject, s3:ListBucket, and logs:*
  3. Build the deployment zip inside the Lambda image:
    docker run --rm --entrypoint pip -v "$PWD":/var/task public.ecr.aws/lambda/python:3.13 install -r requirements.txt -t package/
  4. Create the function: aws lambda create-function --handler lambda_function.handler …
  5. Invoke: aws lambda invoke --function-name pdf-scanner --payload '{}' out.json
  6. Verify the manifest appeared in S3 and the presigned URL works

What you can break: invoke without s3:ListBucket on the bucket (not the object ARN) — observe AccessDenied. Watch CloudTrail to see the denied call.
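Once the happy path works, a quick way to check step 6 from your laptop; a sketch with boto3 and urllib, with the bucket name as a placeholder for whatever you created in step 1:

import json, urllib.request
import boto3

s3 = boto3.client("s3")
bucket = "your-bucket-name"   # placeholder

# newest manifest under the manifests/ prefix
objs = s3.list_objects_v2(Bucket=bucket, Prefix="manifests/")["Contents"]
latest = max(objs, key=lambda o: o["LastModified"])["Key"]

# read the JSONL and try the first presigned URL: no AWS credentials needed for the URL itself
lines = s3.get_object(Bucket=bucket, Key=latest)["Body"].read().decode().splitlines()
first = json.loads(lines[0])
with urllib.request.urlopen(first["url"]) as resp:
    print(first["key"], resp.status, resp.headers.get("Content-Length"))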

Lab 2 — Add an S3 trigger

Goal: make the function fire automatically when a PDF is uploaded.

  1. Add a resource policy entry granting S3 lambda:InvokeFunction
  2. Configure an S3 event notification on the bucket for s3:ObjectCreated:* filtered to *.pdf
  3. Upload a PDF and check CloudWatch Logs for the invocation
  4. Notice the event structure differs from the manual invoke — update the handler to extract the key from event["Records"][0]["s3"]["object"]["key"]
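A minimal sketch of that change; the helper name is illustrative, and note that keys arrive URL-encoded in S3 notifications, so they need unquoting:

from urllib.parse import unquote_plus

def keys_from_event(event):
    # S3 notification: one or more Records, each naming the object that fired it.
    # Manual invoke: no Records key, so fall back to the LIST-based scan.
    if "Records" not in event:
        return None
    return [unquote_plus(r["s3"]["object"]["key"]) for r in event["Records"]]

In the handler, a None return means "scan the prefix as before"; a non-empty list means "presign just these keys".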

What you can break: upload a non-PDF to the same prefix and verify the filter prevents invocation. Remove the resource policy and verify the trigger silently stops firing (no error to the uploader — this is the async invocation model).

Lab 3 — Switch to arm64

Goal: migrate to Graviton2 (arm64) and confirm the ~20 % lower price per GB-second.

  1. Rebuild the zip using the arm64 Lambda image: public.ecr.aws/lambda/python:3.13-arm64
  2. Deploy the arm64 zip and switch the architecture in one call (the --architectures flag lives on update-function-code, not update-function-configuration): aws lambda update-function-code --function-name pdf-scanner --zip-file fileb://<arm64 zip> --architectures arm64
  3. Invoke and compare REPORT duration and billed duration in CloudWatch

What you can break: try deploying the x86 zip against the arm64 architecture — the function will import-error on any C-extension wheels.
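Before flipping the architecture, it helps to know which of your installed packages ship compiled extensions at all; a tiny sketch over the package/ directory built in Lab 1, so you can check each one has an aarch64 wheel available:

import pathlib

# any .so here is a native extension: verify an arm64 (aarch64) build exists for it
for so in sorted(pathlib.Path("package").rglob("*.so")):
    print(so)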

Lab 4 — Enable Provisioned Concurrency

Goal: eliminate cold starts on the production alias.

  1. Publish version 1: aws lambda publish-version --function-name pdf-scanner
  2. Create alias prod pointing to version 1
  3. Enable PC: aws lambda put-provisioned-concurrency-config --function-name pdf-scanner --qualifier prod --provisioned-concurrent-executions 2 (allocation is asynchronous; a status-check sketch follows this list)
  4. Invoke via the alias ARN and confirm Init Duration is absent from REPORT lines
  5. Check your AWS bill after 1 hour — note the PC charges
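Step 3 allocates environments in the background, so before invoking it's worth confirming the allocation is READY. A sketch with boto3, using the function name and alias from this lab:

import boto3

lam = boto3.client("lambda")
cfg = lam.get_provisioned_concurrency_config(
    FunctionName="pdf-scanner",
    Qualifier="prod",
)
# Status moves from IN_PROGRESS to READY once the two environments are initialised
print(cfg["Status"], cfg["AvailableProvisionedConcurrentExecutions"])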

Lab 5 — Add X-Ray tracing

Goal: see a trace with S3 subsegments in the X-Ray console.

  1. Add aws-xray-sdk to requirements.txt and rebuild the zip
  2. Add to lambda_function.py: from aws_xray_sdk.core import patch_all; patch_all()
  3. Enable active tracing on the function and add X-Ray permissions to the execution role
  4. Invoke and open X-Ray → Traces in the console, and verify the S3 API calls (the ListObjectsV2 pages and the manifest PutObject) appear as subsegments. generate_presigned_url won't show up: it's a local computation with no network call.

Lab 6 — Fan out with Step Functions

Goal: process multiple S3 prefixes in parallel using a Map state.

  1. Update the handler to accept a prefix key in the event instead of reading it from the env var (see the sketch after this list)
  2. Create a Step Functions state machine with a Map state that iterates over a list of prefixes and invokes the Lambda for each
  3. Start an execution with input: {"prefixes": ["2026/01/", "2026/02/", "2026/03/"]}
  4. Observe parallel Lambda invocations in the execution graph and CloudWatch
  5. Add error handling: configure the Map state to catch Lambda errors and continue rather than fail the whole execution
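A sketch of step 1's handler change, assuming _run is also updated to take the prefix as a parameter rather than reading the module-level PREFIX:

def handler(event, context):
    # Map state iterations pass {"prefix": "2026/01/"}; fall back to the env var otherwise
    prefix = (event or {}).get("prefix", PREFIX)
    result = asyncio.run(_run(prefix))
    return {"statusCode": 200, "body": json.dumps(result)}

_run(prefix) then threads the value through to the paginator call in place of PREFIX, so one deployed function can serve any prefix the state machine hands it.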

REPOSITORY

Tree of eth/ — the sandbox plus this study site.

eth/
├── lambda_function.py     — handler: async PDF scan → presigned URLs → JSONL manifest
├── invoke.py              — local runner: calls handler() with a minimal event, prints result
├── seed.py                — uploads PDFs from a local directory to MinIO
├── requirements.txt       — aioboto3, aiofiles (+ transitive: aiobotocore, botocore…)
├── docker-compose.yml     — runs MinIO on :9000 (S3 API) and :9001 (web console)
├── Makefile               — install / up / down / seed / invoke / graphs / docs
├── def/
│   └── task.md            — original interview exercise specification
└── docs/
    ├── index.html         — this study site (single-page, no build step)
    ├── viewer.html        — pan/zoom SVG viewer (opened by graph links)
    └── graphs/
        ├── system_overview.dot / .svg    — caller → handler → MinIO/S3 → manifest
        ├── lifecycle.dot / .svg          — init / handler / freeze / thaw / shutdown
        └── cold_warm_timeline.dot / .svg — cold vs warm invocation timeline

What the function does, end to end

The function lists every PDF inside an S3 prefix. For each one, it generates a presigned download URL that expires in 15 minutes. It writes those (key, URL) pairs into a JSONL file in /tmp as it goes. When the listing is done, it uploads the JSONL to S3 as a manifest, generates one more presigned URL pointing to the manifest itself, deletes the local file, and returns the manifest URL plus the count.

The use case: you want to ship a batch of files to someone who isn't on your AWS account. Send them one URL. They open it, get back a list of links, every link works for 15 minutes, then everything dies.

Imports and module-scope config

import asyncio, json, os, uuid
import aioboto3
import aiofiles

BUCKET   = os.environ.get("BUCKET_NAME", "my-company-reports-bucket")
PREFIX   = os.environ.get("PREFIX", "2026/04/")
EXPIRY   = int(os.environ.get("URL_EXPIRY_SECONDS", "900"))
ENDPOINT = os.environ.get("S3_ENDPOINT_URL") or None
QUEUE_MAX = int(os.environ.get("QUEUE_MAX", "2000"))
_DONE = object()

Five environment reads at module scope — init phase. They run once per cold start and every warm invocation reuses them for free. ENDPOINT is the MinIO trick: on real Lambda the var is unset, value is None, aioboto3 talks to real S3. Locally, set it to http://localhost:9000 and the same code talks to MinIO with no other changes. _DONE is a sentinel: an object() instance whose identity is unique and can't collide with any real S3 key — comparing with is (not ==) is unambiguous.

The handler — minimal on purpose

def handler(event, context):
    result = asyncio.run(_run())
    return {"statusCode": 200, "body": json.dumps(result)}

The handler is sync because Lambda's contract is sync. asyncio.run opens a fresh event loop per invocation, which means async clients can't be shared across invocations the way sync boto3 clients can; that's why the S3 client lives inside _run. The API Gateway response shape is a habit: harmless for direct invoke, required if you later front this with API Gateway.

Why async at all? Lambda bills per millisecond of wall-clock time. Anything you can overlap, you save money on. S3 LIST calls overlap with presigning and file writes. That overlap directly reduces duration and cost.
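A toy demonstration of that claim, not part of the repo: two awaits that overlap finish in roughly the time of the slower one, not the sum.

import asyncio, time

async def fake_list_page():          # stand-in for an S3 LIST round trip
    await asyncio.sleep(0.3)

async def fake_presign_and_write():  # stand-in for presign + JSONL write
    await asyncio.sleep(0.2)

async def main():
    t0 = time.perf_counter()
    await asyncio.gather(fake_list_page(), fake_presign_and_write())
    print(f"overlapped: {time.perf_counter() - t0:.2f}s")   # ~0.3 s, not 0.5 s

asyncio.run(main())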

_run() — the actual work

async def _run():
    session = aioboto3.Session()
    async with session.client("s3", endpoint_url=ENDPOINT) as s3:
        queue = asyncio.Queue(maxsize=QUEUE_MAX)
        manifest_path = f"/tmp/{uuid.uuid4()}.jsonl"

Session created inside _run (not module scope) because aioboto3 async clients are tied to the event loop — and each invocation gets a fresh loop. The queue bound gives backpressure: when full, await queue.put(...) blocks until the consumer takes something off. Without the bound, a million-file bucket would OOM before the first URL is presigned. UUID in the manifest path prevents collision between back-to-back warm invocations sharing the same /tmp.

The producer

        async def producer():
            paginator = s3.get_paginator("list_objects_v2")
            async for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
                for obj in page.get("Contents", []) or []:
                    key = obj["Key"]
                    if key.lower().endswith(".pdf"):
                        await queue.put(key)
            await queue.put(_DONE)

Defined as a closure inside _run — captures s3 and queue without arguments; signals it's a private implementation detail. The paginator transparently fetches subsequent pages (S3 returns ≤1000 per page). await queue.put(key) blocks when the queue is full — that's the backpressure. After all pages, it puts _DONE to signal the consumer to stop (asyncio.Queue has no close method; the sentinel is the standard pattern).

The consumer

        async def consumer():
            count = 0
            async with aiofiles.open(manifest_path, "w") as f:
                while True:
                    item = await queue.get()
                    if item is _DONE:
                        break
                    url = await s3.generate_presigned_url(
                        "get_object",
                        Params={"Bucket": BUCKET, "Key": item},
                        ExpiresIn=EXPIRY,
                    )
                    await f.write(json.dumps({"key": item, "url": url}) + "\n")
                    count += 1
            return count

Same closure pattern. generate_presigned_url is a local computation — no network call. It uses your credentials, bucket, key, and expiry to produce a signed URL deterministically. Fast. JSONL (one JSON object per line) instead of a JSON array because it streams: write one line at a time without buffering the whole array, read one line at a time. Stays usable even at gigabyte scale.

Running them together

        prod_task = asyncio.create_task(producer())
        count = await consumer()
        await prod_task

create_task schedules the producer on the event loop and returns immediately — producer runs in the background. await consumer() runs in the foreground until it sees the sentinel. await prod_task makes the guarantee explicit and propagates any producer exceptions. The overlap: while S3 prepares the next LIST page (network), the consumer presigns and writes the previous page. Sequential would stack list latency + presign latency. Async pays only the larger of the two.

Upload, presign, clean up

        manifest_key = f"manifests/{uuid.uuid4()}.jsonl"
        async with aiofiles.open(manifest_path, "rb") as f:
            body = await f.read()
        await s3.put_object(Bucket=BUCKET, Key=manifest_key, Body=body,
                            ContentType="application/x-ndjson")
        manifest_url = await s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": manifest_key},
            ExpiresIn=EXPIRY,
        )
        os.unlink(manifest_path)
        return {"count": count, "manifest_key": manifest_key, "manifest_url": manifest_url}

put_object with the file read into memory, rather than upload_file, because for a manifest in the KB–MB range a single PUT is simpler than async multipart handling. Content type application/x-ndjson is the conventional MIME type for newline-delimited JSON. os.unlink is required: /tmp persists across warm invocations, and a thousand runs without cleanup would fill it and crash the next.

Why this design?

  • Presigned URLs, not raw data. Recipient needs no AWS account. URL expires automatically. No egress from Lambda.
  • Manifest in S3, not inline. The 6 MB sync response cap is silent — function succeeds, caller gets 413 with no warning. Manifest in S3 has no upper bound.
  • Bounded queue. Backpressure prevents producer from outrunning consumer and exhausting memory regardless of bucket size.
  • Sentinel _DONE = object(). asyncio.Queue has no close. An object() instance can't collide with any S3 key; is comparison is unambiguous.
  • Nested functions as closures. Capture s3, queue, manifest_path from the enclosing scope without arguments. Scope is explicit — nobody outside _run can call them.
  • UUID in /tmp. /tmp persists across warm invocations. Fixed filename = race condition between back-to-back runs on the same environment.

Cold start vs warm — CloudWatch REPORT line

# Cold start
REPORT RequestId: ...  Duration: 312.45 ms  Billed Duration: 313 ms
       Memory Size: 256 MB  Max Memory Used: 89 MB
       Init Duration: 423.12 ms

# Warm (next invocation within ~30 s)
REPORT RequestId: ...  Duration: 287.91 ms  Billed Duration: 288 ms
       Memory Size: 256 MB  Max Memory Used: 91 MB

Init Duration of ~400 ms covers importing aioboto3 → aiobotocore → botocore, a heavy import chain. Warm runs have no Init Duration line at all: that ~400 ms of latency simply isn't paid again. For a function that runs once a day, every invocation is effectively cold. For one that runs every few seconds, init is irrelevant.

What happens if it times out

Default timeout is 3 s — too short. Set it explicitly to 30–60 s for a small prefix, up to 900 s (15 min) for large ones. On timeout, Lambda kills the process. The /tmp file may not have been deleted; the manifest may not have been uploaded. Re-running produces a fresh manifest with new UUIDs — no dedup, so two manifests for the same job can coexist in S3. If "exactly one manifest per job" is required, add a DynamoDB dedup table keyed on request ID.
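If that guarantee matters, the dedup table is a conditional write keyed on the job's identifier. A sketch with boto3; the table name is illustrative and the table must already exist with pk as its partition key:

import boto3

ddb = boto3.client("dynamodb")

def claim_job(job_id):
    # Returns True exactly once per job_id; a re-run of the same job sees False.
    try:
        ddb.put_item(
            TableName="pdf-scan-dedup",                       # illustrative table name
            Item={"pk": {"S": job_id}},
            ConditionExpression="attribute_not_exists(pk)",
        )
        return True
    except ddb.exceptions.ConditionalCheckFailedException:
        return False

The handler would call claim_job(...) with the job's identifier before starting the scan and return early when it gets False.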

How would you scale this

Fan out by prefix. Wrap in a Step Functions Map state. Pass a list of prefixes; each iteration runs one Lambda for one prefix. MaxConcurrency controls parallelism without saturating the account concurrency quota.

Go event-driven. Subscribe to S3 ObjectCreated filtered to *.pdf. The function fires once per upload, handles one file at a time — no producer/consumer needed. Simpler, but semantically different: "process new files as they arrive" vs "scan the existing bucket."

What I'd change before production

  1. Move BUCKET and PREFIX to the event payload. Currently set at deploy time (one function per prefix). Event-driven config lets one function serve many prefixes.
  2. Structured logging. JSON to stdout with request_id, bucket, prefix, count. Logs Insights can aggregate without regex.
  3. EMF metric for count. Free CloudWatch metric, no additional API call. Dashboard "PDFs processed per invocation" over time.
  4. Producer error handling. If paginator.paginate raises, the producer task fails but the consumer blocks on queue.get() forever and the function times out. Wrap the producer body in try/finally that always puts _DONE so the consumer exits cleanly (items 4 and 5 are sketched after this list).
  5. Explicit timeout on queue.get(). asyncio.wait_for(queue.get(), timeout=X) prevents the consumer hanging indefinitely if the producer dies without putting the sentinel.
  6. Consider sync boto3. aioboto3 adds ~200 ms to the cold start. If cold start matters and file counts are small, sync boto3 with threading is simpler and starts faster. Async pays off only when file counts are large enough that overlap is significant.
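A sketch of items 4 and 5 applied to the existing producer and consumer:

        async def producer():
            try:
                paginator = s3.get_paginator("list_objects_v2")
                async for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
                    for obj in page.get("Contents", []) or []:
                        key = obj["Key"]
                        if key.lower().endswith(".pdf"):
                            await queue.put(key)
            finally:
                await queue.put(_DONE)   # consumer always gets the sentinel, even if listing blew up

        # in the consumer loop, bound the wait so a dead producer can't hang the whole invocation
        item = await asyncio.wait_for(queue.get(), timeout=30)   # raises TimeoutError instead of blocking forever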

Makefile targets

  • make install: creates .venv, installs requirements.txt
  • make up: starts MinIO via docker compose up -d
  • make down: stops MinIO (keeps volumes)
  • make clean: stops MinIO and deletes volumes (wipes bucket data)
  • SOURCE_DIR=path make seed: uploads all files from path to MinIO
  • make invoke: runs invoke.py (calls handler() directly)
  • make graphs: renders docs/graphs/*.dot to .svg via Graphviz dot
  • make docs: renders graphs, then opens docs/index.html