<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AWS Lambda — Study notes &amp; sandbox</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600&family=JetBrains+Mono:wght@400;500&display=swap');
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
background: #0a0e17;
color: #e8eaf0;
font-family: 'Inter', sans-serif;
line-height: 1.6;
height: 100vh;
overflow: hidden;
display: flex;
flex-direction: column;
}
header {
padding: 16px 24px;
border-bottom: 1px solid #1e2a4a;
display: flex;
align-items: baseline;
gap: 16px;
flex-shrink: 0;
}
header h1 {
font-family: 'JetBrains Mono', monospace;
font-size: 22px;
font-weight: 600;
letter-spacing: 3px;
color: #0066ff;
}
header .subtitle {
font-size: 13px;
color: #4a5568;
letter-spacing: 1px;
text-transform: uppercase;
}
.layout {
display: flex;
flex: 1;
min-height: 0;
}
nav {
display: flex;
flex-direction: column;
gap: 0;
width: 220px;
flex-shrink: 0;
background: #121829;
border-right: 1px solid #1e2a4a;
padding: 8px 0;
overflow-y: auto;
scrollbar-width: none;
}
nav::-webkit-scrollbar { display: none; }
nav a {
padding: 10px 20px;
font-family: 'JetBrains Mono', monospace;
font-size: 12px;
color: #8892a8;
text-decoration: none;
border-left: 2px solid transparent;
transition: all 0.15s;
cursor: pointer;
}
nav a:hover { color: #e8eaf0; background: #1a2340; }
nav a.active { color: #0066ff; border-left-color: #0066ff; background: #0d1a33; }
nav .nav-group {
font-family: 'JetBrains Mono', monospace;
font-size: 10px;
color: #4a5568;
letter-spacing: 1.5px;
text-transform: uppercase;
padding: 14px 20px 6px;
pointer-events: none;
}
main {
flex: 1;
overflow: auto;
padding: 32px 48px;
}
.graph-section {
display: none;
animation: fadeIn 0.2s ease;
}
.graph-section.active { display: block; }
@keyframes fadeIn {
from { opacity: 0; }
to { opacity: 1; }
}
.graph-section h2 {
font-family: 'JetBrains Mono', monospace;
font-size: 15px;
font-weight: 500;
color: #8892a8;
margin-bottom: 8px;
letter-spacing: 1px;
}
.graph-section p.lead {
font-size: 13px;
color: #4a5568;
margin-bottom: 24px;
max-width: 800px;
}
.graph-container {
background: #0a0e17;
border: 1px solid #1e2a4a;
padding: 24px;
overflow: auto;
}
.graph-container img {
max-width: 100%;
height: auto;
}
.legend {
display: flex;
gap: 24px;
margin-top: 16px;
font-size: 11px;
font-family: 'JetBrains Mono', monospace;
color: #4a5568;
}
.legend span::before {
content: '';
display: inline-block;
width: 8px;
height: 8px;
margin-right: 6px;
border-radius: 50%;
}
.legend .live::before { background: #00c853; }
.legend .mock::before { background: #ffc107; }
.legend .mcp::before { background: #0066ff; }
.legend .ops::before { background: #ff3d00; }
.graph-container a { display: block; }
/* Tree (repo structure) */
.tree-container {
background: #0a0e17;
border: 1px solid #1e2a4a;
padding: 24px;
overflow: auto;
}
.repo-tree {
font-family: 'JetBrains Mono', monospace;
font-size: 13px;
line-height: 1.7;
color: #8892a8;
}
.t-root { color: #0066ff; font-weight: 600; font-size: 15px; }
.t-dir { color: #e8eaf0; font-weight: 500; }
.t-comment { color: #4a5568; }
/* Prose sections */
.graph-section h3 {
font-family: 'JetBrains Mono', monospace;
font-size: 13px;
font-weight: 500;
color: #e8eaf0;
letter-spacing: 1px;
margin: 32px 0 10px;
text-transform: uppercase;
}
.graph-section h3:first-child { margin-top: 0; }
.prose { max-width: 820px; }
.prose p {
font-size: 14px;
color: #b4bccf;
margin-bottom: 14px;
line-height: 1.7;
}
.prose p b { color: #e8eaf0; font-weight: 600; }
.prose code {
font-family: 'JetBrains Mono', monospace;
font-size: 12px;
color: #7ab0ff;
background: #121829;
padding: 1px 5px;
border-radius: 3px;
}
.prose a { color: #0066ff; text-decoration: none; }
.prose a:hover { text-decoration: underline; }
.prose ul, .prose ol {
margin: 8px 0 16px 22px;
font-size: 14px;
color: #b4bccf;
line-height: 1.7;
}
.prose ul li, .prose ol li { margin-bottom: 8px; }
.prose ul li b, .prose ol li b { color: #e8eaf0; font-weight: 600; }
/* Pre / code blocks */
.prose pre {
background: #121829;
border: 1px solid #1e2a4a;
padding: 14px 16px;
border-radius: 4px;
overflow-x: auto;
margin: 12px 0 18px;
font-family: 'JetBrains Mono', monospace;
font-size: 12px;
color: #b4bccf;
line-height: 1.6;
}
.prose pre code { background: transparent; padding: 0; color: inherit; }
/* Tables */
.cmp-table {
width: 100%;
border-collapse: collapse;
font-size: 13px;
margin: 8px 0 20px;
border: 1px solid #1e2a4a;
}
.cmp-table th {
text-align: left;
background: #121829;
color: #8892a8;
font-family: 'JetBrains Mono', monospace;
font-size: 11px;
letter-spacing: 1px;
padding: 10px 14px;
border-bottom: 1px solid #1e2a4a;
}
.cmp-table td {
padding: 10px 14px;
color: #b4bccf;
border-bottom: 1px solid #1e2a4a;
vertical-align: top;
}
.cmp-table td.num {
font-family: 'JetBrains Mono', monospace;
color: #7ab0ff;
white-space: nowrap;
}
.cmp-table td.warn { color: #ffc107; }
.cmp-table td.bad { color: #ff3d00; }
.cmp-table td.ok { color: #00c853; }
.cmp-table tr:last-child td { border-bottom: none; }
/* Callouts */
.callout {
border-left: 3px solid #0066ff;
background: #0d1a33;
padding: 12px 16px;
margin: 16px 0;
font-size: 13px;
color: #b4bccf;
border-radius: 0 4px 4px 0;
}
.callout.warn { border-left-color: #ffc107; background: #2a1f0a; }
.callout.bad { border-left-color: #ff3d00; background: #2a0f0a; }
.callout.ok { border-left-color: #00c853; background: #0a2410; }
.callout b { color: #e8eaf0; }
.placeholder {
color: #4a5568;
font-style: italic;
font-size: 13px;
border: 1px dashed #1e2a4a;
padding: 32px;
text-align: center;
border-radius: 4px;
}
/* Mobile menu toggle */
.menu-toggle {
display: none;
background: transparent;
border: 1px solid #1e2a4a;
color: #e8eaf0;
padding: 6px 10px;
font-family: 'JetBrains Mono', monospace;
font-size: 14px;
cursor: pointer;
line-height: 1;
margin-left: auto;
}
.menu-toggle:hover { background: #1a2340; }
.nav-backdrop {
display: none;
position: absolute;
inset: 0;
background: rgba(0, 0, 0, 0.5);
z-index: 10;
}
.layout.nav-open .nav-backdrop { display: block; }
@media (max-width: 720px) {
header { padding: 10px 12px; gap: 8px; }
header h1 { font-size: 16px; letter-spacing: 1px; }
header .subtitle { display: none; }
.menu-toggle { display: inline-block; }
.layout { position: relative; }
nav {
position: absolute;
left: 0; top: 0; bottom: 0;
width: 240px;
z-index: 20;
transform: translateX(-100%);
transition: transform 0.2s ease;
box-shadow: 2px 0 8px rgba(0, 0, 0, 0.5);
}
.layout.nav-open nav { transform: translateX(0); }
main { padding: 16px; }
.graph-section h2 { font-size: 13px; }
.prose p, .prose ul, .prose ol { font-size: 13px; }
.cmp-table { font-size: 12px; }
.cmp-table th, .cmp-table td { padding: 6px 8px; }
}
</style>
</head>
<body>
<header>
<h1>AWS LAMBDA</h1>
<span class="subtitle">Study notes &amp; sandbox — built from the interview exercise</span>
<button class="menu-toggle" onclick="toggleNav()" aria-label="Toggle navigation"></button>
</header>
<div class="layout">
<div class="nav-backdrop" onclick="toggleNav()"></div>
<nav>
<span class="nav-group">Foundations</span>
<a class="active" onclick="show('overview')">Overview</a>
<a onclick="show('mental')">Mental Model</a>
<a onclick="show('limits')">Limits</a>
<span class="nav-group">Operating</span>
<a onclick="show('coldstarts')">Cold Starts</a>
<a onclick="show('concurrency')">Concurrency</a>
<a onclick="show('triggers')">Triggers</a>
<a onclick="show('iam')">IAM</a>
<a onclick="show('packaging')">Packaging</a>
<a onclick="show('vpc')">VPC &amp; Networking</a>
<span class="nav-group">Production</span>
<a onclick="show('observability')">Observability</a>
<a onclick="show('async')">Async &amp; Errors</a>
<a onclick="show('stepfns')">Step Functions</a>
<a onclick="show('cost')">Cost</a>
<a onclick="show('localdev')">Local Dev</a>
<a onclick="show('cicd')">CI/CD</a>
<span class="nav-group">Reference</span>
<a onclick="show('pitfalls')">Pitfalls</a>
<a onclick="show('adjacent')">Adjacent</a>
<a onclick="show('labs')">Labs</a>
<a onclick="show('repo')">Repository</a>
</nav>
<main>
<!-- ===================================================================== -->
<!-- OVERVIEW -->
<!-- ===================================================================== -->
<section id="overview" class="graph-section active">
<h2>OVERVIEW</h2>
<p class="lead">A study site built on top of a working Lambda + MinIO sandbox. Read the page, run the code, break things on purpose.</p>
<div class="prose">
<h3>What this is</h3>
<p>The repo at the root of this site (<code>ethics/</code>) holds a Python AWS Lambda function — <code>lambda_function.py</code> — that lists PDFs in an S3 bucket under a prefix, paginates, generates 15-minute presigned URLs, and writes a JSONL manifest. It runs locally against MinIO via <code>docker compose</code>, with the same handler signature as a real Lambda. This site explains the surrounding mental model in the order you'd want to study it before walking into a Lambda-heavy interview or production rotation.</p>
<h3>How it's organised</h3>
<p>The sidebar groups topics into four reading orders. <b>Foundations</b> is the picture in your head. <b>Operating</b> covers the day-to-day knobs. <b>Production</b> covers what changes when real users and real money are involved. <b>Reference</b> holds the must-know checklist (<a onclick="show('pitfalls')">Pitfalls</a>), brief orientations on adjacent tools (<a onclick="show('adjacent')">Glue, Prometheus/Grafana</a>), the hands-on labs (<a onclick="show('labs')">Labs</a>), and the repo tree (<a onclick="show('repo')">Repository</a>).</p>
<h3>How to use it</h3>
<ol>
<li><b>Read top-to-bottom</b> — the order in the sidebar is the recommended study path.</li>
<li><b>Run the sandbox.</b> <code>make install &amp;&amp; make up &amp;&amp; SOURCE_DIR=&lt;dir&gt; make seed &amp;&amp; make invoke</code>. The handler executes locally against MinIO; you can break it without burning AWS credit.</li>
<li><b>Do the labs.</b> Each one mutates the existing app: deploy to real AWS, add an S3 trigger, switch to arm64, enable Provisioned Concurrency, fan out across prefixes with Step Functions, and so on.</li>
<li><b>Skim Pitfalls</b> the night before any interview or design review.</li>
</ol>
</div>
<h3>System overview</h3>
<p class="lead">Caller → handler → MinIO/S3 → manifest write-back. The async producer/consumer overlaps S3 LIST calls with presigning + JSONL writes, so the manifest streams to <code>/tmp</code> rather than buffering in memory.</p>
<div class="graph-container">
<a href="viewer.html?src=graphs/system_overview.svg"><img src="graphs/system_overview.svg" alt="System overview"></a>
</div>
<div class="legend">
<span class="live">Real / live</span>
<span class="mock">Ephemeral / caveat</span>
<span class="mcp">Lambda boundary</span>
<span class="ops">Pitfall</span>
</div>
</section>
<!-- ===================================================================== -->
<!-- MENTAL MODEL -->
<!-- ===================================================================== -->
<section id="mental" class="graph-section">
<h2>MENTAL MODEL</h2>
<p class="lead">Lambda is a Linux process whose lifecycle is managed for you. Most of the surprise comes from forgetting that it's still a process.</p>
<div class="prose">
<h3>What Lambda actually is</h3>
<p>Each invocation runs inside an <b>execution environment</b>: a Firecracker microVM running the Lambda runtime (e.g. <code>python3.13</code>), with your code unpacked into <code>/var/task</code> and an ephemeral <code>/tmp</code>. AWS owns the VM; you own everything inside the process. The microVM is created on demand, kept warm for a while, then torn down when idle traffic stops feeding it. You don't pick a server, but there <i>is</i> a server, and it has memory, a clock, and a filesystem.</p>
<h3>The two phases</h3>
<p>Every cold start splits cleanly into two:</p>
<ul>
<li><b>Init phase</b> — your module-level code runs once: imports, client construction, anything outside the handler function. Capped at 10 s. Billed at full configured memory. The <code>os.environ</code> reads at the top of <code>lambda_function.py</code> happen here.</li>
<li><b>Handler phase</b> — <code>handler(event, context)</code> runs once per invocation. Billed per-millisecond at configured memory. Subsequent invocations on the same environment skip the init phase and go straight here.</li>
</ul>
<p>This split is the single most useful thing to internalise. Heavy work at module level → pay it once per cold start. Heavy work inside the handler → pay it every invocation.</p>
<h3>Globals persist across warm invocations</h3>
<p>Anything assigned at module scope survives between handler calls on the same environment. That includes the boto3 client (good — connection reuse, TCP keep-alive, no re-handshake) and any in-memory cache you build (good — but be careful, see Pitfalls). It also includes mutations you didn't mean to keep, like a list you appended to without thinking. The same warm container can serve thousands of invocations in a row, then disappear.</p>
<pre><code># module level — runs once per cold start, reused across warm invocations
BUCKET = os.environ["BUCKET_NAME"]
ENDPOINT = os.environ.get("S3_ENDPOINT_URL")

# handler level — runs every invocation
def handler(event, context):
    return asyncio.run(_run())</code></pre>
<h3>/tmp is real but local</h3>
<p>Each environment has its own <code>/tmp</code> (default 512 MB, configurable to 10 GB). It persists across warm invocations on that environment, so you can stash artefacts you'd rather not rebuild — but it is <b>not</b> shared between concurrent executions, and it's gone when the environment dies. <code>lambda_function.py</code> writes <code>/tmp/&lt;uuid&gt;.jsonl</code> per invocation and uploads it to S3 at the end; the file then becomes garbage, and the next invocation starts fresh.</p>
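<p>A minimal sketch of that pattern — not the repo's actual code; the helper name and row shape are illustrative:</p>
<pre><code>import json
import os
import uuid

def write_manifest(rows):
    # fresh name per invocation; /tmp persists across warm invocations,
    # so files from earlier runs on this environment may still be present
    path = os.path.join("/tmp", f"{uuid.uuid4()}.jsonl")
    with open(path, "w") as fh:
        for row in rows:
            fh.write(json.dumps(row) + "\n")
    return path  # upload to S3, then the file is disposable</code></pre>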
<h3>Concurrency is horizontal</h3>
<p>If two events arrive while one is being processed, AWS spins up a second execution environment. Each environment processes one invocation at a time, single-threaded relative to your handler. The "concurrency" you see in CloudWatch is the count of environments running in parallel. There is no thread pool to tune. There is no shared memory between environments. If you need shared state, externalise it (DynamoDB, Redis, S3).</p>
<h3>The reuse window</h3>
<p>Idle environments stick around for roughly 5–15 minutes (AWS doesn't promise a number) before being recycled. That's why a function that sees one request a minute almost never cold-starts, and a function that sees one a day always does. <a onclick="show('coldstarts')">Cold Starts</a> covers what that costs and how to mitigate it.</p>
</div>
<h3>Lifecycle</h3>
<p class="lead">Init is paid once, handler is paid every time. Freeze/thaw is free. Shutdown happens when nobody's looking.</p>
<div class="graph-container">
<a href="viewer.html?src=graphs/lifecycle.svg"><img src="graphs/lifecycle.svg" alt="Lambda execution environment lifecycle"></a>
</div>
</section>
<!-- ===================================================================== -->
<!-- LIMITS -->
<!-- ===================================================================== -->
<section id="limits" class="graph-section">
<h2>LIMITS — CHEATSHEET</h2>
<p class="lead">Every number worth memorising. The "why it matters" column is the part interviews actually probe.</p>
<div class="prose">
<h3>Per-function compute &amp; storage</h3>
<table class="cmp-table">
<thead><tr><th>Limit</th><th>Default</th><th>Max</th><th>Why it matters</th></tr></thead>
<tbody>
<tr><td>Memory</td><td class="num">128 MB</td><td class="num">10 240 MB</td><td>CPU scales linearly with memory. More memory ≠ just more headroom — at 1 769 MB you get a full vCPU; at higher tiers, multiple. Often <i>cheaper</i> to bump memory because duration drops faster than cost rises.</td></tr>
<tr><td>Timeout</td><td class="num">3 s</td><td class="num">900 s (15 min)</td><td>3 s default is too short for almost anything that talks to S3. Set explicitly; don't accept the default. API Gateway caps at 29 s no matter what your function says (see below).</td></tr>
<tr><td>Ephemeral storage (/tmp)</td><td class="num">512 MB</td><td class="num">10 240 MB</td><td>Persists across warm invocations on the same env, vanishes on cold start. Not shared between concurrent envs. Pay per-invocation for &gt;512 MB.</td></tr>
<tr><td>Init phase</td><td colspan="2" class="num">10 s hard cap</td><td>Module-level code (imports, client construction). Heavy ML model loads, custom JIT warmups — measure them or you'll trip this.</td></tr>
</tbody>
</table>
<h3>Payloads &amp; responses</h3>
<table class="cmp-table">
<thead><tr><th>Limit</th><th>Value</th><th>Why it matters</th></tr></thead>
<tbody>
<tr><td>Sync invocation request</td><td class="num">6 MB</td><td>Hard cap on the event body for <code>RequestResponse</code> invocations.</td></tr>
<tr><td>Sync invocation response</td><td class="num">6 MB</td><td>Above this the invocation fails with an oversized-payload error (the caller sees a 413) even though your handler ran to completion. <code>lambda_function.py</code> sidesteps this by returning a manifest URL instead of inlining all presigned URLs.</td></tr>
<tr><td>Async invocation event</td><td class="num">256 KB</td><td>For <code>Event</code> invocations and most event-source-mapped triggers (S3, EventBridge, SNS).</td></tr>
<tr><td>Response streaming</td><td class="num">20 MB (soft) / unlimited (with bandwidth cap)</td><td>Function URLs with response streaming break the 6 MB cap by flushing chunks. Not all clients/SDKs support it.</td></tr>
<tr><td>Environment variables</td><td class="num">4 KB total</td><td>Per function, all keys+values combined. Big config → Parameter Store / Secrets Manager.</td></tr>
<tr><td>Event size (SQS, SNS, EventBridge)</td><td class="num">256 KB each</td><td>Producer-side limit. Larger payloads → store in S3, send a pointer.</td></tr>
</tbody>
</table>
<h3>Packaging</h3>
<table class="cmp-table">
<thead><tr><th>Limit</th><th>Value</th><th>Why it matters</th></tr></thead>
<tbody>
<tr><td>Zip upload (direct)</td><td class="num">50 MB</td><td>Above this you must upload via S3 first.</td></tr>
<tr><td>Zip unzipped (function + layers)</td><td class="num">250 MB</td><td>Total of <code>/var/task</code> + all layers extracted. <code>aioboto3</code>+deps is ~50 MB; you have headroom but not infinite.</td></tr>
<tr><td>Container image</td><td class="num">10 GB</td><td>Per image. Preferred when you'd otherwise blow the 250 MB zip ceiling — e.g. ML deps with native binaries.</td></tr>
<tr><td>Layers</td><td class="num">5 per function</td><td>Ordering matters: later layers overwrite earlier. Layers count toward the 250 MB unzipped cap.</td></tr>
</tbody>
</table>
<h3>Concurrency &amp; scaling</h3>
<table class="cmp-table">
<thead><tr><th>Limit</th><th>Default</th><th>Notes</th></tr></thead>
<tbody>
<tr><td>Account concurrent executions</td><td class="num">1 000 / region</td><td>Soft quota — request increase via Service Quotas. The single most common throttling cause in production.</td></tr>
<tr><td>Burst concurrency</td><td class="num">500–3 000 (region-dependent)</td><td>How many fresh environments AWS will spin up immediately at traffic spike. Beyond this, scale-up is +500 envs / min.</td></tr>
<tr><td>Reserved concurrency</td><td class="num">0 to account quota</td><td>Carves a slice of the account pool for a function. Setting it to 0 effectively disables the function.</td></tr>
<tr><td>Provisioned concurrency</td><td class="num">0 by default</td><td>Pre-warmed envs. Eliminates cold starts at the cost of paying for idle capacity. Bills as PC-seconds + invocation cost.</td></tr>
</tbody>
</table>
<h3>Time &amp; rate limits at the edges</h3>
<table class="cmp-table">
<thead><tr><th>Surface</th><th>Limit</th><th>Why it matters</th></tr></thead>
<tbody>
<tr><td>API Gateway integration timeout</td><td class="num">29 s</td><td>Caps your effective Lambda timeout when fronted by API GW, regardless of what the Lambda timeout says. Function URLs allow up to 15 min.</td></tr>
<tr><td>Async invocation event age</td><td class="num">6 h</td><td>If retries don't succeed in this window, the event is dropped (or sent to DLQ / on-failure destination).</td></tr>
<tr><td>Async retry attempts</td><td class="num">2 (default)</td><td>Total of 3 attempts (initial + 2). Configurable down to 0.</td></tr>
<tr><td>SQS visibility timeout requirement</td><td class="num">≥ 6× function timeout</td><td>AWS recommendation. Otherwise messages reappear while still being processed.</td></tr>
</tbody>
</table>
<div class="callout">
<b>Memorisation hack.</b> Three numbers cover most interview questions: <b>15 minutes</b> (timeout), <b>10 GB</b> (memory and /tmp ceiling), <b>6 MB</b> (sync payload). Everything else is a footnote until you hit a specific design.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- COLD STARTS — placeholder -->
<!-- ===================================================================== -->
<section id="coldstarts" class="graph-section">
<h2>COLD STARTS</h2>
<p class="lead">Init Duration vs warm path. Mitigations: Provisioned Concurrency, arm64, lazy imports, smaller packages, SnapStart.</p>
<div class="graph-container">
<a href="viewer.html?src=graphs/cold_warm_timeline.svg"><img src="graphs/cold_warm_timeline.svg" alt="Cold vs warm timeline"></a>
</div>
<div class="prose">
<h3>What triggers a cold start</h3>
<p>A cold start happens whenever Lambda must create a new execution environment: the very first request after a deployment, when traffic spikes beyond the number of warm environments, and after an environment has been idle long enough to be recycled (typically 5–15 minutes, unspecified by AWS). Deployments always cold-start the incoming version — you can't avoid the first one, only reduce how long it takes.</p>
<h3>The cold path</h3>
<p>AWS provisions a Firecracker microVM, downloads and unpacks your code (or pulls the container image), starts the language runtime, then runs your module-level code. Only after all of that does your handler function get called. The timeline is roughly:</p>
<ol>
<li><b>Environment provisioning</b> — microVM boot, network attachment, filesystem mount. Not billed; AWS absorbs this.</li>
<li><b>Init phase</b> — your module-level code: imports, client construction, config reads. Billed at full configured memory. Capped at 10 s.</li>
<li><b>Handler phase</b> — <code>handler(event, context)</code> runs. Billed per-ms.</li>
</ol>
<p>CloudWatch shows this split: the <code>REPORT</code> line includes <code>Init Duration</code> only on cold invocations. Warm invocations have no <code>Init Duration</code> line.</p>
<h3>Typical numbers</h3>
<table class="cmp-table">
<thead><tr><th>Runtime</th><th>Typical cold start (p50)</th><th>Typical cold start (p99)</th></tr></thead>
<tbody>
<tr><td>Python 3.13 (zip, minimal deps)</td><td class="num">~150 ms</td><td class="num">~400 ms</td></tr>
<tr><td>Python 3.13 (zip, aioboto3 + aiofiles)</td><td class="num">~300 ms</td><td class="num">~700 ms</td></tr>
<tr><td>Node.js 22</td><td class="num">~100 ms</td><td class="num">~300 ms</td></tr>
<tr><td>Java 21 (without SnapStart)</td><td class="num">~1–2 s</td><td class="num">~3–5 s</td></tr>
<tr><td>Java 21 (SnapStart enabled)</td><td class="num">~200 ms</td><td class="num">~600 ms</td></tr>
<tr><td>Container image (any runtime)</td><td class="num">+100–300 ms</td><td class="num">first pull can be 1–3 s</td></tr>
</tbody>
</table>
<h3>Mitigations</h3>
<p><b>Provisioned Concurrency (PC)</b> — pre-warms N environments so they're always in the "warm" state. Eliminates cold starts for the provisioned slots. You pay for those slots 24/7 even when idle. Use for latency-sensitive, predictable-traffic paths. Schedule PC changes via Application Auto Scaling for cost efficiency.</p>
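<p>A sketch of enabling PC with boto3 — function name and alias are hypothetical; PC must target a published version or alias, never <code>$LATEST</code>:</p>
<pre><code>import boto3

lam = boto3.client("lambda")

lam.put_provisioned_concurrency_config(
    FunctionName="pdf-scanner",          # hypothetical
    Qualifier="live",                    # alias or version number
    ProvisionedConcurrentExecutions=10,  # pre-warmed environments
)</code></pre>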
<p><b>arm64</b> — Graviton2 executes the init phase ~10% faster than x86_64 for CPU-bound init work. Combined with the ~20% price reduction, arm64 is the default choice unless native wheels block you.</p>
<p><b>Smaller packages</b> — Lambda downloads and unpacks your zip on every cold start. Trimming unused transitive dependencies (audit the tree with <code>pipdeptree</code>, or install with <code>pip install --no-deps</code> and add back only what you actually use) and stripping test/doc files shaves real time. Every MB of extracted code costs a few ms.</p>
<p><b>Lazy imports</b> — move rarely-used or slow imports inside the handler (or into a lazy-init guard). The most common win is heavy ML libraries only needed for inference: import them on first call, cache the result in a module-level variable.</p>
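<p>The shape of that pattern — a sketch assuming a hypothetical slow dependency:</p>
<pre><code>_model = None  # module-level cache survives warm invocations

def handler(event, context):
    global _model
    if _model is None:
        # deferred to first invocation instead of the init phase;
        # warm calls skip both the import and the load
        import heavy_ml_lib              # hypothetical heavy dep
        _model = heavy_ml_lib.load("model.bin")
    return _model.predict(event["input"])</code></pre>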
<p><b>SnapStart</b> — takes a snapshot of the initialised execution environment after your init phase, then restores from that snapshot on cold starts. Collapses 1–5 s JVM startup to ~200 ms. Originally Java-only; AWS has since extended it to Python and .NET, where (unlike Java) it carries an extra charge for the cached snapshot and each restore.</p>
<div class="callout">
<b>When cold starts don't matter:</b> batch jobs, async event pipelines, scheduled tasks — nobody is waiting on the p99. Only optimise cold starts when a human is waiting synchronously for the response.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- CONCURRENCY — placeholder -->
<!-- ===================================================================== -->
<section id="concurrency" class="graph-section">
<h2>CONCURRENCY</h2>
<p class="lead">Account quota, reserved, provisioned. The "100 RPS × 200 ms" math.</p>
<div class="prose">
<h3>The fundamental model</h3>
<p>Lambda concurrency = the number of execution environments processing requests at the same instant. Each environment handles exactly one invocation at a time. There is no thread pool, no event loop shared across invocations — if two requests arrive simultaneously, AWS spins up two separate environments.</p>
<p>The key formula: <b>concurrency ≈ RPS × average duration (in seconds)</b>. At 100 requests/s with a 200 ms average handler duration, you need 100 × 0.2 = <b>20 concurrent environments</b>. At 500 ms average, you need 50. At 2 s average, 200 — and so on. Latency optimisation directly reduces your concurrency footprint.</p>
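<p>The arithmetic, spelled out:</p>
<pre><code>def required_concurrency(rps: float, avg_duration_s: float) -&gt; float:
    # Little's law: environments in flight = arrival rate x time in system
    return rps * avg_duration_s

required_concurrency(100, 0.2)  # 20 environments
required_concurrency(100, 2.0)  # 200 -- a 10x latency regression
                                # needs 10x the concurrency pool</code></pre>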
<h3>Account concurrency pool</h3>
<p>Every AWS account has a regional concurrency quota — default <b>1 000 concurrent executions</b> per region, shared across all functions. When the pool is full, new invocations get throttled (sync → HTTP 429 TooManyRequestsException; async → queued and retried). Raising the limit requires a Service Quotas increase request; AWS typically grants up to 10 000 with a business justification.</p>
<p>This is the single most common production surprise: one function spikes and starves all others in the same region. Reserved concurrency is the fix.</p>
<h3>Types of concurrency</h3>
<table class="cmp-table">
<thead><tr><th>Type</th><th>What it does</th><th>Cost</th><th>Use for</th></tr></thead>
<tbody>
<tr><td><b>Unreserved</b></td><td>Draws from the shared regional pool on demand</td><td>Invocation + duration only</td><td>Most functions</td></tr>
<tr><td><b>Reserved</b></td><td>Carves a slice of the regional pool exclusively for this function; acts as both a floor and a ceiling</td><td>No extra charge</td><td>Protecting critical paths from noisy neighbours; throttling cost runaway</td></tr>
<tr><td><b>Provisioned</b></td><td>Pre-warms N environments; they stay initialised 24/7</td><td>PC-hours + invocation</td><td>Latency-sensitive functions where cold starts are unacceptable</td></tr>
</tbody>
</table>
<h3>Reserved concurrency edge cases</h3>
<ul>
<li>Setting reserved concurrency to <b>0</b> disables the function entirely — useful as a circuit breaker (see the sketch after this list).</li>
<li>Reserved concurrency counts against the account pool even when idle. If you set 500 reserved on a function, only 500 remain for all other functions (at default 1 000).</li>
<li>Reserved concurrency does <b>not</b> pre-warm. You still cold-start; you just can't scale past the cap.</li>
</ul>
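<p>The circuit-breaker sketch promised above, in boto3 with a hypothetical function name:</p>
<pre><code>import boto3

lam = boto3.client("lambda")

# reserved concurrency of 0: nothing can invoke the function
lam.put_function_concurrency(
    FunctionName="pdf-scanner",
    ReservedConcurrentExecutions=0,
)

# restore by deleting the reservation entirely
lam.delete_function_concurrency(FunctionName="pdf-scanner")</code></pre>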
<h3>Burst scaling</h3>
<p>When traffic spikes from zero, Lambda can spin up environments quickly — but not infinitely fast. The burst limit (region-dependent, typically 500–3 000 immediate) is how many environments AWS will create right now. Beyond that, it adds <b>500 new environments per minute</b>. A spike from 0 to 5 000 concurrent requests takes several minutes to fully absorb. Provisioned Concurrency or pre-warming via a ping mechanism is the fix for sudden large spikes.</p>
<div class="callout">
<b>Interview answer template:</b> "Concurrency = RPS × duration. Default pool is 1 000/region. Reserved carves a slice and prevents both starvation and runaway. Provisioned pre-warms to eliminate cold starts, but you pay for idle capacity."
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- TRIGGERS — placeholder -->
<!-- ===================================================================== -->
<section id="triggers" class="graph-section">
<h2>TRIGGERS</h2>
<p class="lead">Fan-in catalogue: API GW, Function URL, S3, SQS, SNS, EventBridge, DynamoDB streams, Kinesis, ALB, schedule, Step Functions.</p>
<div class="prose">
<h3>Three invocation models</h3>
<p>Every trigger falls into one of three models, and the model determines retry behaviour, error handling, and whether the caller can see the response.</p>
<table class="cmp-table">
<thead><tr><th>Model</th><th>Caller behaviour</th><th>Retries on error</th><th>Max event size</th></tr></thead>
<tbody>
<tr><td><b>Synchronous</b></td><td>Blocks for response; gets result or error directly</td><td>None — caller decides</td><td class="num">6 MB request + response</td></tr>
<tr><td><b>Asynchronous</b></td><td>Gets 202 immediately; Lambda queues + retries internally</td><td>2 retries (3 total) over up to 6 h</td><td class="num">256 KB event</td></tr>
<tr><td><b>Poll-based (ESM)</b></td><td>Lambda polls the source on your behalf; batches records</td><td>Keeps retrying until success or record expires/goes to DLQ</td><td class="num">Depends on source</td></tr>
</tbody>
</table>
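<p>The first two models are just a flag on the same API call. A boto3 sketch, with a hypothetical function name:</p>
<pre><code>import json
import boto3

lam = boto3.client("lambda")
payload = json.dumps({"prefix": "reports/2024/"}).encode()

# synchronous: blocks until the handler returns; response body available
resp = lam.invoke(FunctionName="pdf-scanner",
                  InvocationType="RequestResponse",
                  Payload=payload)
print(json.load(resp["Payload"]))

# asynchronous: HTTP 202 immediately; Lambda queues and retries internally
resp = lam.invoke(FunctionName="pdf-scanner",
                  InvocationType="Event",
                  Payload=payload)
print(resp["StatusCode"])  # 202</code></pre>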
<h3>Trigger catalogue</h3>
<table class="cmp-table">
<thead><tr><th>Trigger</th><th>Model</th><th>Key notes</th></tr></thead>
<tbody>
<tr><td><b>API Gateway (REST / HTTP)</b></td><td>Sync</td><td>29 s integration timeout regardless of Lambda timeout. HTTP API is cheaper and lower-latency than REST API. Transforms request/response.</td></tr>
<tr><td><b>Function URL</b></td><td>Sync</td><td>Direct HTTPS endpoint on the function; no API Gateway layer. Supports up to 15 min timeout and response streaming. Simpler, cheaper, fewer features.</td></tr>
<tr><td><b>ALB (Application Load Balancer)</b></td><td>Sync</td><td>Like API GW but routes at L7; useful when Lambda is one target among EC2/ECS targets. 29 s timeout.</td></tr>
<tr><td><b>S3 event notification</b></td><td>Async</td><td>Fires on object create/delete/etc. Delivery is at-least-once: each object yields one logical event, but duplicates can arrive. Common pattern: S3 → SNS → SQS → Lambda for fan-out + replay.</td></tr>
<tr><td><b>SNS</b></td><td>Async</td><td>Fan-out: one message → multiple subscribers. At-least-once. Dead-letter queue on the subscription, not the topic.</td></tr>
<tr><td><b>EventBridge (CloudWatch Events)</b></td><td>Async</td><td>Event bus with content-based routing rules. Also the managed scheduler (cron/rate expressions, timezone-aware since 2022). At-least-once.</td></tr>
<tr><td><b>SQS</b></td><td>Poll-based (ESM)</td><td>Lambda polls and batches (up to 10 000 msg). Standard: at-least-once, unordered. FIFO: ordered per message group, exactly-once with dedup. Visibility timeout must be ≥ 6× function timeout. Partial batch failure via <code>batchItemFailures</code>.</td></tr>
<tr><td><b>Kinesis Data Streams</b></td><td>Poll-based (ESM)</td><td>One concurrent invocation per shard (more with a parallelisation factor). Records expire (24 h–1 yr); Lambda retries until success or expiry. Use bisect-on-error and <code>batchItemFailures</code> to avoid one bad record blocking an entire shard.</td></tr>
<tr><td><b>DynamoDB Streams</b></td><td>Poll-based (ESM)</td><td>Captures item-level changes. Ordered per partition key. 24 h retention. Same retry behaviour as Kinesis. Use for CDC (change-data-capture) patterns.</td></tr>
<tr><td><b>Step Functions</b></td><td>Sync (Task state)</td><td>Step Functions calls the function synchronously and waits for the result. Retries and timeouts are defined in the state machine, not Lambda. See the <a onclick="show('stepfns')">Step Functions</a> section.</td></tr>
<tr><td><b>Cognito / SES / IoT etc.</b></td><td>Sync or Async</td><td>Service-specific; check the docs for each. Cognito triggers (pre-signup, pre-token) are sync and block the auth flow.</td></tr>
</tbody>
</table>
<h3>Choosing between SQS and SNS+SQS</h3>
<p>Use plain <b>SQS → Lambda</b> when you have one consumer and want to buffer, batch, and retry. Use <b>SNS → SQS → Lambda</b> when you need fan-out (multiple independent consumers each get a copy) or when the producer is an AWS service that speaks SNS natively (S3 event notifications, for example). The SNS layer decouples producers from the queue topology.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- IAM — placeholder -->
<!-- ===================================================================== -->
<section id="iam" class="graph-section">
<h2>IAM &amp; PERMISSIONS</h2>
<p class="lead">Execution role vs resource policy. The two policies most people confuse.</p>
<div class="prose">
<h3>Two independent permission layers</h3>
<p>Lambda has two separate permission surfaces that must each be correct independently. Confusing them is the most common "it works locally but not in AWS" failure.</p>
<table class="cmp-table">
<thead><tr><th>Layer</th><th>Question it answers</th><th>Who creates it</th></tr></thead>
<tbody>
<tr><td><b>Execution role</b></td><td>What can <i>this Lambda function do</i> once running? (call S3, write to DynamoDB, publish to SNS…)</td><td>You — attached at function creation</td></tr>
<tr><td><b>Resource policy</b></td><td>Who is <i>allowed to invoke</i> this Lambda function? (API Gateway, another account, EventBridge…)</td><td>AWS adds it automatically for most triggers; you add it for cross-account or manual grants</td></tr>
</tbody>
</table>
<h3>Execution role</h3>
<p>The execution role is an IAM role that Lambda assumes when running your function. Every Lambda must have one. The role's attached policies determine what AWS API calls the function can make. At minimum, every function needs:</p>
<pre><code># minimum: write its own logs
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents</code></pre>
<p>Common additions for a function that reads/writes S3:</p>
<pre><code>s3:GetObject
s3:PutObject
s3:ListBucket # needed for paginator; often forgotten
kms:Decrypt # if the bucket uses a CMK, this is also required</code></pre>
<p>The <code>AWSLambdaBasicExecutionRole</code> managed policy covers logs only — it is intentionally minimal. <code>AWSLambdaVPCAccessExecutionRole</code> adds the ENI permissions needed when the function is in a VPC.</p>
<h3>Resource policy</h3>
<p>The resource policy is attached to the Lambda function itself (not an IAM identity). When you add an S3 event notification or API Gateway integration in the console, AWS automatically adds a resource policy entry allowing that service to invoke the function. For cross-account invocations you add this manually via <code>aws lambda add-permission</code>.</p>
<pre><code># grant another AWS account (here 123456789012) permission to invoke
aws lambda add-permission \
  --function-name my-function \
  --principal 123456789012 \
  --action lambda:InvokeFunction \
  --statement-id cross-account-invoke</code></pre>
<h3>Common mistakes</h3>
<ul>
<li><b>Missing <code>s3:ListBucket</code> on the bucket resource.</b> <code>ListObjectsV2</code> requires this on the <i>bucket ARN</i> (not the object ARN). Forgetting it causes AccessDenied on the paginator even when GetObject works fine.</li>
<li><b>Wrong resource ARN scope.</b> <code>s3:GetObject</code> must be on <code>arn:aws:s3:::bucket-name/*</code>; <code>s3:ListBucket</code> must be on <code>arn:aws:s3:::bucket-name</code>. Swapping them is a frequent typo (see the policy sketch after this list).</li>
<li><b>CMK not in execution role.</b> KMS-encrypted bucket objects require both <code>s3:GetObject</code> and <code>kms:Decrypt</code>. The KMS key policy must also allow the role. Two separate policy documents, two separate denial points.</li>
<li><b>No resource policy for new trigger.</b> If you wire up EventBridge manually (not via the console), the trigger silently fails because there's no resource policy entry granting EventBridge <code>lambda:InvokeFunction</code>.</li>
</ul>
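<p>A sketch of the correctly scoped policy from the ARN-scope bullet, attached inline with boto3 — bucket and role names are hypothetical:</p>
<pre><code>import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # object-level actions go on the /* resource
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-bucket/*",
        },
        {   # ListBucket goes on the bucket ARN itself
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
        },
    ],
}

boto3.client("iam").put_role_policy(
    RoleName="pdf-scanner-role",
    PolicyName="s3-scoped-access",
    PolicyDocument=json.dumps(policy),
)</code></pre>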
<h3>Diagnosing permission errors</h3>
<p>CloudTrail is the ground truth. Filter by <code>errorCode: "AccessDenied"</code> and <code>userIdentity.arn</code> matching the execution role ARN. The event tells you exactly which action on which resource was denied. CloudWatch will show the error in the Lambda log if you let the exception propagate, but CloudTrail shows it even when the call is made from a library that swallows the error.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- PACKAGING — placeholder -->
<!-- ===================================================================== -->
<section id="packaging" class="graph-section">
<h2>PACKAGING</h2>
<p class="lead">Zip vs layers vs container images. arm64 vs x86_64. Native wheels.</p>
<div class="prose">
<h3>Three deployment formats</h3>
<table class="cmp-table">
<thead><tr><th>Format</th><th>Size limit</th><th>Best for</th><th>Caveats</th></tr></thead>
<tbody>
<tr><td><b>Zip (direct)</b></td><td class="num">50 MB upload / 250 MB unzipped</td><td>Most Python/Node functions with pure-Python or pre-built wheels</td><td>Must match Lambda's architecture; no custom runtime</td></tr>
<tr><td><b>Zip via S3</b></td><td class="num">250 MB unzipped</td><td>Same as above but when zip exceeds 50 MB</td><td>S3 bucket must be in the same region</td></tr>
<tr><td><b>Layers</b></td><td class="num">250 MB total (function + all layers)</td><td>Shared dependencies across functions (e.g. a company-wide logging layer)</td><td>Max 5 layers per function; later layers overwrite earlier ones</td></tr>
<tr><td><b>Container image</b></td><td class="num">10 GB</td><td>ML models, native binary deps, custom runtimes</td><td>Slower first cold start (image pull); larger attack surface</td></tr>
</tbody>
</table>
<h3>Layers in practice</h3>
<p>A layer is a zip file that Lambda extracts into <code>/opt</code> before running your function. Your code in <code>/var/task</code> can import from <code>/opt/python</code> (for Python) without any path manipulation. Use cases:</p>
<ul>
<li>Shared internal libraries deployed independently of business logic</li>
<li>Large dependencies that change rarely (numpy, pandas) — cache them in a layer so deployments of the business logic are fast</li>
<li>AWS-provided layers: Lambda Insights extension, X-Ray SDK</li>
</ul>
<p>Layers count toward the 250 MB unzipped limit. If you have 5 layers at 40 MB each and your function zip is 50 MB, you're at 250 MB — no room left.</p>
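<p>Publishing and attaching a layer with boto3 — names are hypothetical, and the zip is assumed to be built the way the Building for Lambda section below describes:</p>
<pre><code>import boto3

lam = boto3.client("lambda")

with open("layer.zip", "rb") as fh:
    layer = lam.publish_layer_version(
        LayerName="shared-deps",
        Content={"ZipFile": fh.read()},
        CompatibleRuntimes=["python3.13"],
    )

# attaching replaces the function's whole layer list
lam.update_function_configuration(
    FunctionName="pdf-scanner",
    Layers=[layer["LayerVersionArn"]],
)</code></pre>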
<h3>Container images</h3>
<p>Container images must be based on AWS-provided base images (<code>public.ecr.aws/lambda/python:3.13</code>) or implement the Lambda Runtime Interface. They must be stored in ECR (Elastic Container Registry) in the same region. The Lambda service caches images on the underlying host after the first pull, so subsequent cold starts on the same host are fast — but the very first invocation after a new image is deployed can be slow for large images.</p>
<p>Container images bypass the 250 MB unzipped limit, which is why they're the standard choice for Python ML workloads that bundle PyTorch or TensorFlow.</p>
<h3>arm64 vs x86_64</h3>
<p>Graviton2-based arm64 is ~20% cheaper per GB-second than x86_64 and typically faster at compute-heavy work. The decision tree:</p>
<ol>
<li>Check all your dependencies for arm64 wheels: <code>pip download --platform manylinux2014_aarch64 --only-binary :all: -r requirements.txt</code>. If any fail, you either build from source (needs Dockerfile) or stay on x86.</li>
<li>For pure-Python deps and most modern packages, arm64 works out of the box.</li>
<li>Native extensions (cryptography, numpy, psycopg2) have arm64 wheels on PyPI since ~2022. Check the exact version you need.</li>
</ol>
<h3>Building for Lambda (the common foot-gun)</h3>
<p>Lambda runs on Amazon Linux 2023. <code>pip install</code> on macOS produces wheels compiled for macOS, which will segfault or import-error on Lambda. The correct approach:</p>
<pre><code># build inside the Lambda runtime image
# (override the image's runtime entrypoint so pip runs directly)
docker run --rm --entrypoint pip \
  -v "$PWD":/var/task \
  public.ecr.aws/lambda/python:3.13 \
  install -r requirements.txt -t python/
zip -r layer.zip python/</code></pre>
<p>This is also where architecture matters: use the <code>:3.13-arm64</code> tag when building for arm64.</p>
<div class="callout">
<b>This project</b> uses a zip deployment. <code>aioboto3</code> and <code>aiofiles</code> are pure-Python and have no native extensions, so they build cleanly on any architecture. The Makefile's <code>install</code> target creates a local <code>.venv</code> for development; a real CI pipeline would build the deployment zip inside the Lambda image.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- VPC — placeholder -->
<!-- ===================================================================== -->
<section id="vpc" class="graph-section">
<h2>VPC &amp; NETWORKING</h2>
<p class="lead">When to put Lambda in a VPC (rarely). ENI cold start cost. NAT money pit.</p>
<div class="prose">
<h3>Default: no VPC</h3>
<p>By default, Lambda runs in an AWS-managed network with internet access. It can reach S3, DynamoDB, SQS, and other AWS services via their public endpoints. <b>Do not put Lambda in a VPC unless you have a specific reason.</b> Most applications don't need it.</p>
<h3>When you actually need VPC</h3>
<ul>
<li>Connecting to RDS or Aurora (which live in a private subnet)</li>
<li>ElastiCache (Redis/Memcached) — VPC-only by design</li>
<li>Private REST APIs or internal services on private subnets</li>
<li>Compliance requirements mandating network isolation</li>
</ul>
<p>S3, DynamoDB, SQS, SNS, and most AWS managed services do <b>not</b> require VPC placement — they're public services with public endpoints.</p>
<h3>ENI attachment and cold start</h3>
<p>When Lambda is VPC-attached, each execution environment gets an Elastic Network Interface (ENI) in your VPC. Pre-2019, ENIs were allocated per cold start, adding 10–30 s to init. AWS fixed this in 2019 with hyperplane ENIs shared across environments — today the VPC cold start penalty is ~100–500 ms on the first cold start of a new deployment, then negligible. It's no longer the dealbreaker it used to be, but it's not zero.</p>
<h3>Subnet and AZ placement</h3>
<p>Specify at least two subnets in different AZs for availability. Lambda will distribute environments across AZs. If a subnet runs out of available ENI slots (IP exhaustion), Lambda scaling fails — size subnets with this in mind. /24 (254 IPs) is often too small for high-concurrency functions.</p>
<h3>The NAT money pit</h3>
<p>VPC Lambda can't reach the internet by default. If your function needs to call an external API or reach an AWS service without a VPC endpoint, you need a NAT gateway in a public subnet. NAT gateways cost:</p>
<ul>
<li><b>$0.045/hour</b> (~$32/month) just to exist, per AZ</li>
<li><b>$0.045/GB</b> of data processed</li>
</ul>
<p>A function that sends 100 GB/month through NAT costs $4.50 in data alone, on top of the always-on hourly charge. Two AZs for HA = ~$64/month base cost before a single byte of traffic. This is frequently the largest unexpected cost in VPC Lambda setups.</p>
<h3>VPC endpoints: the free alternative</h3>
<p>For AWS services, VPC endpoints bypass NAT and the public internet entirely. Two types:</p>
<ul>
<li><b>Gateway endpoints</b> — S3 and DynamoDB only. Free. Route table entries. No data charge.</li>
<li><b>Interface endpoints (PrivateLink)</b> — any AWS service. $0.01/AZ/hr + $0.01/GB. Expensive for high throughput but often cheaper than NAT for AWS-service-heavy workloads.</li>
</ul>
<p>For a VPC Lambda that only talks to S3 and DynamoDB: create gateway endpoints for both → no NAT needed → near-zero networking cost.</p>
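<p>Creating the S3 gateway endpoint is one call; the IDs here are hypothetical:</p>
<pre><code>import boto3

ec2 = boto3.client("ec2")

# gateway endpoint for S3: free, implemented as route-table entries
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc123",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def456"],
)</code></pre>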
<h3>Security groups</h3>
<p>VPC Lambda gets a security group. Outbound rules control where it can connect. The security group of RDS/ElastiCache must allow inbound from the Lambda security group. A common pattern is to create a dedicated Lambda SG and reference it in the database SG's inbound rules — this avoids IP-range rules that break when Lambda ENIs change.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- OBSERVABILITY — placeholder -->
<!-- ===================================================================== -->
<section id="observability" class="graph-section">
<h2>OBSERVABILITY</h2>
<p class="lead">CloudWatch logs, structured JSON, X-Ray, Lambda Insights, EMF. Brief Prometheus/Grafana orientation.</p>
<div class="prose">
<h3>CloudWatch Logs — what you get for free</h3>
<p>Every Lambda function automatically writes to a CloudWatch Log Group named <code>/aws/lambda/&lt;function-name&gt;</code>. Each execution environment gets its own Log Stream. Lambda writes two special lines automatically:</p>
<pre><code>START RequestId: abc-123 Version: $LATEST
END RequestId: abc-123
REPORT RequestId: abc-123 Duration: 312.45 ms Billed Duration: 313 ms
Memory Size: 256 MB Max Memory Used: 89 MB
Init Duration: 423.12 ms # only on cold starts</code></pre>
<p>The REPORT line is your free performance telemetry. <code>Init Duration</code> appears only on cold invocations. <code>Max Memory Used</code> helps right-size memory configuration.</p>
<p><b>Retention:</b> Default is "Never Expire." Set it explicitly — 7, 14, or 30 days covers most needs. Every MB of retained logs costs money.</p>
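<p>Setting retention is one call per log group — the group name below is hypothetical:</p>
<pre><code>import boto3

logs = boto3.client("logs")

# default is Never Expire; cap it explicitly
logs.put_retention_policy(
    logGroupName="/aws/lambda/pdf-scanner",
    retentionInDays=14,
)</code></pre>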
<h3>Structured logging</h3>
<p>Emit JSON instead of plain strings. CloudWatch Logs Insights can filter and aggregate JSON fields efficiently; plain strings require regex and are slow. Example:</p>
<pre><code>import json, logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(json.dumps({
        "event": "pdf_scan_start",
        "bucket": BUCKET,
        "prefix": PREFIX,
        "request_id": context.aws_request_id,
    }))</code></pre>
<p>With this, Logs Insights can run: <code>filter event = "pdf_scan_start" | stats count() by bin(5m)</code> in seconds.</p>
<h3>X-Ray tracing</h3>
<p>X-Ray gives you request traces across services — how long the Lambda itself ran vs how long S3 calls took. Three things must all be true:</p>
<ol>
<li><b>Tracing enabled on the function</b> — console toggle or <code>TracingConfig: Active</code> in SAM/CDK</li>
<li><b>X-Ray SDK instrumented in your code</b> — <code>from aws_xray_sdk.core import patch_all; patch_all()</code> wraps boto3 calls automatically</li>
<li><b>IAM permission</b> — execution role needs <code>xray:PutTraceSegments</code> and <code>xray:PutTelemetryRecords</code></li>
</ol>
<p>Without all three, traces are either absent or incomplete. People flip one and conclude X-Ray is broken.</p>
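<p>A minimal sketch of the code half, assuming the X-Ray SDK is packaged with the function:</p>
<pre><code>from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # module level: wraps boto3/requests so every AWS call
             # shows up as a subsegment on the trace

def handler(event, context):
    # optional: a custom subsegment around your own logic
    with xray_recorder.in_subsegment("build_manifest"):
        ...  # do the work</code></pre>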
<h3>Lambda Insights</h3>
<p>Lambda Insights is a CloudWatch feature (not a separate service) that surfaces system-level metrics: CPU usage, memory utilisation, network I/O, disk I/O — things the REPORT line doesn't include. To enable it:</p>
<ul>
<li>Add the Lambda Insights extension layer (<code>arn:aws:lambda:&lt;region&gt;:580247275435:layer:LambdaInsightsExtension:38</code>)</li>
<li>Add <code>cloudwatch:PutMetricData</code> to the execution role</li>
</ul>
<p>It's useful when you suspect memory or CPU contention but the REPORT line's "Max Memory Used" isn't granular enough.</p>
<h3>EMF — Embedded Metrics Format</h3>
<p>EMF lets you emit custom CloudWatch metrics by writing structured JSON to stdout. No <code>PutMetricData</code> API call needed — the Lambda runtime parses the log line and publishes the metric asynchronously. This is far more efficient than calling CloudWatch from inside the handler (which adds latency + cost per invocation).</p>
<pre><code>import json, time

def emit_metric(name, value, unit="Count", **dims):
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "MyApp",
                "Dimensions": [list(dims.keys())],
                "Metrics": [{"Name": name, "Unit": unit}]
            }]
        },
        name: value,
        **dims,
    }))

# usage — note the unit is the third positional/keyword arg, not a dimension
emit_metric("PDFsProcessed", count, Function="pdf-scanner")</code></pre>
<h3>Prometheus &amp; Grafana (brief)</h3>
<p>Prometheus uses a <b>pull model</b> — it scrapes HTTP endpoints. Lambda functions are ephemeral and have no persistent HTTP endpoint, so Prometheus can't scrape them directly. Approaches:</p>
<ul>
<li><b>EMF → CloudWatch → Grafana CloudWatch plugin</b> — easiest; Grafana queries CW as a data source</li>
<li><b>Amazon Managed Prometheus (AMP) + remote_write</b> — Lambda pushes metrics to AMP via the Prometheus remote write API; Grafana (or Amazon Managed Grafana) reads from AMP</li>
<li><b>Statsd/push gateway</b> — Lambda pushes to a persistent push gateway; Prometheus scrapes the gateway. More infra to manage.</li>
</ul>
<p>For Lambda-centric dashboards, the CloudWatch → Grafana path is usually the simplest to operate.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- ASYNC & ERRORS — placeholder -->
<!-- ===================================================================== -->
<section id="async" class="graph-section">
<h2>ASYNC &amp; ERRORS</h2>
<p class="lead">Sync vs async invoke. Retries, DLQ, destinations, idempotency, partial-batch failures.</p>
<div class="prose">
<h3>Sync vs async invocation</h3>
<table class="cmp-table">
<thead><tr><th></th><th>Synchronous (RequestResponse)</th><th>Asynchronous (Event)</th></tr></thead>
<tbody>
<tr><td><b>Caller blocks?</b></td><td class="ok">Yes — waits for result</td><td class="ok">No — gets 202 immediately</td></tr>
<tr><td><b>Response visible to caller?</b></td><td class="ok">Yes</td><td class="bad">No</td></tr>
<tr><td><b>Retries on error</b></td><td>None (caller's responsibility)</td><td>2 retries = 3 total attempts</td></tr>
<tr><td><b>Retry backoff</b></td><td>n/a</td><td>~1 min then ~2 min</td></tr>
<tr><td><b>Event age limit</b></td><td>n/a</td><td>6 hours</td></tr>
<tr><td><b>Max event size</b></td><td class="num">6 MB</td><td class="num">256 KB</td></tr>
</tbody>
</table>
<h3>Async retry flow</h3>
<p>When Lambda invokes asynchronously and the function throws an unhandled exception (or is throttled), Lambda retries automatically — twice, with exponential backoff starting at ~1 minute. If all three attempts fail, or if the event ages past 6 hours, Lambda sends the event to the configured failure destination or DLQ. If neither is configured, the event is silently dropped.</p>
<h3>DLQ vs Destinations</h3>
<p>These are two different mechanisms that overlap in purpose but have different capabilities:</p>
<table class="cmp-table">
<thead><tr><th></th><th>Dead-Letter Queue (DLQ)</th><th>Event Destinations</th></tr></thead>
<tbody>
<tr><td><b>Introduced</b></td><td>2016 (legacy)</td><td>2019 (preferred)</td></tr>
<tr><td><b>Triggers on</b></td><td>Failure only</td><td>Success or failure (separate configs)</td></tr>
<tr><td><b>Payload</b></td><td>The original event only</td><td>Original event + result/error + metadata</td></tr>
<tr><td><b>Targets</b></td><td>SQS or SNS</td><td>SQS, SNS, Lambda, EventBridge</td></tr>
</tbody>
</table>
<p>Use Destinations for new code. DLQ remains useful when the downstream consumer must be SQS and you don't need success notifications.</p>
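<p>Configuring destinations is a single boto3 call — function name and target ARNs are hypothetical:</p>
<pre><code>import boto3

lam = boto3.client("lambda")

lam.put_function_event_invoke_config(
    FunctionName="pdf-scanner",
    MaximumRetryAttempts=2,         # 0-2
    MaximumEventAgeInSeconds=3600,  # tighter than the 6 h default
    DestinationConfig={
        "OnFailure": {"Destination":
            "arn:aws:sqs:us-east-1:111122223333:failed-events"},
        "OnSuccess": {"Destination":
            "arn:aws:events:us-east-1:111122223333:event-bus/default"},
    },
)</code></pre>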
<h3>Idempotency</h3>
<p>Because async invocations retry and most event sources are at-least-once, your handler will occasionally execute more than once for the same logical event. Design handlers to be idempotent — the same input produces the same outcome regardless of how many times it runs.</p>
<p>Standard pattern: use a unique key from the event (S3 ETag + key, SQS MessageId, EventBridge detail.id) as a deduplication key. On first execution, write the key + result to DynamoDB with a TTL. On retry, check DynamoDB first — if already processed, return the cached result without re-running the work.</p>
<pre><code># pseudo-code
dedup_key = event["Records"][0]["messageId"]
existing = table.get_item(Key={"id": dedup_key})
if existing.get("Item"):
    return existing["Item"]["result"]

result = do_the_work(event)
table.put_item(Item={"id": dedup_key, "result": result, "ttl": now + 86400})
return result</code></pre>
<p>AWS PowerTools for Lambda (Python) has a built-in <code>@idempotent</code> decorator that implements this pattern with DynamoDB.</p>
<h3>Partial batch failures (SQS / Kinesis / DynamoDB Streams)</h3>
<p>When Lambda processes a batch of records and one record fails, the default behaviour differs by source:</p>
<ul>
<li><b>SQS (default)</b>: if the handler raises an exception, the entire batch is retried. One bad message blocks all others and can cause infinite retry loops.</li>
<li><b>With <code>ReportBatchItemFailures</code> enabled</b>: return a <code>batchItemFailures</code> list containing only the failed message IDs. Lambda re-queues only those; successful messages are deleted.</li>
</ul>
<pre><code>def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process(record)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}</code></pre>
<p>Enable <code>ReportBatchItemFailures</code> in the ESM configuration and always implement partial-batch failure reporting for SQS and Kinesis handlers. A single poison-pill record can otherwise block an entire shard or queue indefinitely.</p>
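<p>Enabling it at mapping-creation time looks like this — queue ARN and function name are hypothetical:</p>
<pre><code>import boto3

lam = boto3.client("lambda")

lam.create_event_source_mapping(
    FunctionName="pdf-scanner",
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:jobs",
    BatchSize=10,
    FunctionResponseTypes=["ReportBatchItemFailures"],
)</code></pre>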
<div class="callout warn">
<b>The idempotency/partial-batch intersection:</b> with partial failures, successful records in the batch are deleted from SQS, but if your function crashes before returning the failure list, the entire batch including the successes gets retried. Idempotency guards must still cover every record, not just the ones in <code>batchItemFailures</code>.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- STEP FUNCTIONS — placeholder -->
<!-- ===================================================================== -->
<section id="stepfns" class="graph-section">
<h2>STEP FUNCTIONS</h2>
<p class="lead">When Lambda alone isn't enough. Standard vs Express. Map state for fan-out. Comparison with Airflow.</p>
<div class="prose">
<h3>When Lambda alone isn't enough</h3>
<p>A single Lambda function works well for one discrete task. Problems start when you need to chain multiple tasks, retry selectively, wait on human approval, or fan out across thousands of items. Doing this with Lambda alone means writing orchestration logic inside your functions — tracking state, implementing retry delays, deciding what "done" means. Step Functions externalises that orchestration into a state machine where every state transition is durable, auditable, and resumable.</p>
<p>Reach for Step Functions when you need: sequential steps with state passing, conditional branching, parallel fan-out with join, wait states longer than 15 minutes, or retry-with-exponential-backoff built in.</p>
<h3>Standard vs Express workflows</h3>
<table class="cmp-table">
<thead><tr><th></th><th>Standard</th><th>Express</th></tr></thead>
<tbody>
<tr><td><b>Max duration</b></td><td class="num">1 year</td><td class="num">5 minutes</td></tr>
<tr><td><b>Execution semantics</b></td><td>Exactly-once per state</td><td>At-least-once</td></tr>
<tr><td><b>Execution history</b></td><td>Full audit trail in AWS console</td><td>CloudWatch Logs only</td></tr>
<tr><td><b>Pricing</b></td><td>$0.025 per 1 000 state transitions</td><td>$1.00 per 1M requests + memory × duration</td></tr>
<tr><td><b>Use for</b></td><td>Long-running business workflows, human approvals, compliance audit trails</td><td>High-volume, short-duration event processing (IoT, streaming)</td></tr>
</tbody>
</table>
<p>For most application orchestration, Standard is the right choice — the exactly-once semantic matters when steps have side effects (charging a card, sending an email). Express is for high-throughput pipelines where at-least-once is acceptable and cost per transition is a concern.</p>
<h3>Map state for fan-out</h3>
<p>The Map state runs the same workflow branch for every item in an array, in parallel. This is the core fan-out primitive. For this project's use case, a Step Functions version could fan out across S3 prefixes — run one Lambda per prefix, collect results in a fan-in step:</p>
<pre><code>{
"Type": "Map",
"ItemsPath": "$.prefixes",
"MaxConcurrency": 10, // cap parallelism
"Iterator": {
"StartAt": "ScanPrefix",
"States": {
"ScanPrefix": {
"Type": "Task",
"Resource": "arn:aws:lambda:...:function:pdf-scanner",
"End": true
}
}
}
}</code></pre>
<p><code>MaxConcurrency: 0</code> means unlimited — bounded only by the Lambda concurrency pool. Set an explicit cap to avoid saturating the account concurrency quota.</p>
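<p>Something still has to start the execution with the prefix list as input. A minimal sketch with boto3; the state-machine ARN and the prefixes are placeholders:</p>
<pre><code>import boto3, json

sfn = boto3.client("stepfunctions")

# Input shape matches the ItemsPath above; ARN is hypothetical.
resp = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:pdf-fanout",
    input=json.dumps({"prefixes": ["2026/01/", "2026/02/", "2026/03/"]}),
)
print(resp["executionArn"])</code></pre>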
<h3>Other useful states</h3>
<ul>
<li><b>Wait</b> — pause for a duration or until a timestamp. The only way to implement delays longer than 15 minutes without polling.</li>
<li><b>Choice</b> — conditional branching on input values. Replaces <code>if/else</code> logic that would otherwise live inside a Lambda.</li>
<li><b>Parallel</b> — run multiple independent branches simultaneously and join their results.</li>
<li><b>Task (SDK integrations)</b> — Step Functions can call DynamoDB, SQS, ECS, Glue, etc. directly without a Lambda wrapper, reducing cost and latency for simple operations.</li>
</ul>
<h3>Step Functions vs Airflow</h3>
<table class="cmp-table">
<thead><tr><th></th><th>Step Functions</th><th>Apache Airflow (MWAA)</th></tr></thead>
<tbody>
<tr><td><b>DAG definition</b></td><td>JSON/YAML state machine (ASL)</td><td>Python code (DAG files)</td></tr>
<tr><td><b>Scheduling</b></td><td>Event-driven / on-demand; cron via EventBridge</td><td>Built-in rich scheduler (cron, data-interval-aware)</td></tr>
<tr><td><b>Backfill</b></td><td>Manual / custom</td><td>First-class, built-in</td></tr>
<tr><td><b>Operators</b></td><td>AWS services + Lambda (AWS ecosystem only)</td><td>600+ providers: Spark, BigQuery, dbt, Kubernetes…</td></tr>
<tr><td><b>Infrastructure</b></td><td>Serverless — zero infra</td><td>Managed Airflow (MWAA) starts at ~$400/month</td></tr>
<tr><td><b>Debugging</b></td><td>Console execution graph; CloudWatch for logs</td><td>Airflow UI with task logs, Gantt charts, retries</td></tr>
</tbody>
</table>
<p>Step Functions is the right choice when your workflow is AWS-native, event-driven, and you want zero infrastructure. Airflow is the right choice when you need complex scheduling, data-interval backfill, cross-cloud operators, or a data-engineering team that already knows Python DAGs.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- COST — placeholder -->
<!-- ===================================================================== -->
<section id="cost" class="graph-section">
<h2>COST</h2>
<p class="lead">Pricing model, memory/cost trade-off, x86 vs arm64, free tier, common surprises.</p>
<div class="prose">
<h3>The pricing formula</h3>
<p>Lambda billing has two components, each with a permanent free tier:</p>
<table class="cmp-table">
<thead><tr><th>Component</th><th>x86_64</th><th>arm64</th><th>Free tier (permanent)</th></tr></thead>
<tbody>
<tr><td><b>Requests</b></td><td class="num">$0.20 / 1M</td><td class="num">$0.20 / 1M</td><td class="num">1M / month</td></tr>
<tr><td><b>Duration</b></td><td class="num">$0.0000166667 / GB-s</td><td class="num">$0.0000133334 / GB-s</td><td class="num">400 000 GB-s / month</td></tr>
</tbody>
</table>
<p>GB-seconds = memory configured (GB) × duration (seconds). A 512 MB function running for 300 ms = 0.5 × 0.3 = 0.15 GB-s. At 1 million invocations, that's 150 000 GB-s — well inside the free tier.</p>
<p>Duration is billed in <b>1 ms increments</b>. The old 100 ms minimum is gone (removed in 2020).</p>
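<p>The formula is small enough to sanity-check in a few lines. A sketch using the us-east-1 prices above and ignoring the free tier:</p>
<pre><code>def lambda_cost(invocations: int, avg_ms: float, memory_mb: int,
                arm: bool = False) -> float:
    """Monthly request + duration cost in USD, ignoring the free tier."""
    gb_s = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    duration_rate = 0.0000133334 if arm else 0.0000166667  # per GB-s
    return invocations / 1_000_000 * 0.20 + gb_s * duration_rate

# 1M invocations x 300 ms x 512 MB = 150 000 GB-s:
# $0.20 requests + $2.50 duration
print(round(lambda_cost(1_000_000, 300, 512), 2))  # 2.7</code></pre>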
<h3>Memory vs cost: more can be cheaper</h3>
<p>CPU scales linearly with memory. A function configured at 1 769 MB gets a full vCPU; below that it's a fraction. Doubling memory often more than halves duration for CPU-bound work, which means the total GB-s cost stays the same or decreases — while latency drops.</p>
<p><b>AWS Lambda Power Tuning</b> is a Step Functions state machine that automatically benchmarks your function at multiple memory sizes and produces a cost/performance curve. Run it before guessing at the right memory setting. The optimal point is almost never the default 128 MB.</p>
<h3>arm64 saves ~20%</h3>
<p>arm64 duration pricing is 20% cheaper than x86. Same request price. If your function is compute-bound (not I/O-bound sleeping on S3 calls), arm64 also runs faster, compounding the saving. For I/O-bound functions (like <code>lambda_function.py</code>, which spends most of its time waiting on S3), the duration difference is smaller but the 20% price reduction still applies.</p>
<h3>Provisioned Concurrency billing</h3>
<p>PC is billed separately at $0.0000041667 per GB-s of provisioned time (x86), even when idle; invocations that land on the warm slots are billed at a discounted duration rate of $0.0000097222 per GB-s. If you have 10 × 512 MB environments provisioned for 24 hours: 10 × 0.5 GB × 86 400 s = 432 000 GB-s/day ≈ $1.80/day ≈ $54/month just for the warm slots, before counting actual invocation cost on top. PC is for latency, not cost: it always increases your bill.</p>
<h3>Hidden costs (the real bill)</h3>
<ul>
<li><b>NAT Gateway</b> — $0.045/hr per AZ (~$32/month) + $0.045/GB data. Often the largest line item for VPC Lambda.</li>
<li><b>API Gateway</b> — REST API: $3.50/1M calls. HTTP API: $1/1M. Can dwarf Lambda cost at high RPS.</li>
<li><b>CloudWatch Logs</b> — $0.50/GB ingestion + $0.03/GB storage/month. Verbose Lambda logs accumulate fast; set retention.</li>
<li><b>Lambda Insights</b> — additional CW Logs + custom metrics charges.</li>
<li><b>X-Ray</b> — $5/million traces (after free 100K/month).</li>
<li><b>Data transfer</b> — traffic leaving a region or going through a NAT has per-GB charges.</li>
<li><b>S3 API calls</b> — LIST and GET requests are billed per 1 000. A function that does 10 000 LIST calls/invocation at 1M invocations = 10B API calls = real money.</li>
</ul>
<div class="callout ok">
<b>For this project's function:</b> at 1 000 invocations/day with 500 ms average duration and 256 MB memory, cost is ~$0.002/day — essentially free. Lambda's economics only require attention above ~100K invocations/day with non-trivial memory or duration.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- LOCAL DEV — placeholder -->
<!-- ===================================================================== -->
<section id="localdev" class="graph-section">
<h2>LOCAL DEV</h2>
<p class="lead">SAM CLI, Lambda RIE, LocalStack, MinIO — when to reach for which.</p>
<div class="prose">
<h3>The local dev problem</h3>
<p>Lambda has no local runtime by default. Your only loop without tooling is: zip, upload, invoke, read CloudWatch logs, repeat — minutes per cycle. The tools below collapse that to seconds, with different trade-offs between fidelity, setup cost, and scope.</p>
<h3>SAM CLI</h3>
<p><b>What it is:</b> AWS's official local Lambda emulator. Wraps Docker to run your function inside a container that matches the Lambda runtime environment exactly. Also emulates API Gateway.</p>
<p><b>Commands:</b></p>
<pre><code>sam local invoke -e event.json # invoke once
sam local start-api # spin up local HTTP API gateway
sam local invoke --debug-port 5858 # attach debugger</code></pre>
<p><b>Fidelity:</b> high — same Amazon Linux image, same runtime, same filesystem layout. Catches architecture issues (x86 wheel on arm64) that a plain venv misses.</p>
<p><b>Downsides:</b> requires Docker, slow to start (pulls image on first run), no MinIO/SQS/DynamoDB emulation built in. You wire those up separately.</p>
<h3>Lambda Runtime Interface Emulator (RIE)</h3>
<p>A lightweight binary embedded in all AWS-provided Lambda base images. When you run the image locally, RIE exposes a local HTTP endpoint that accepts invocations in the Lambda API format. You don't need SAM CLI — just Docker:</p>
<pre><code>docker build -t my-fn .
docker run -p 9000:8080 my-fn
curl -XPOST http://localhost:9000/2015-03-31/functions/function/invocations \
-d '{"key": "value"}'</code></pre>
<p>Use RIE when you're building container-image Lambdas and want to test them without SAM overhead.</p>
<h3>LocalStack</h3>
<p>A full AWS mock that emulates Lambda, S3, SQS, DynamoDB, API Gateway, and dozens more services in a single container. Community edition is free; Pro ($35/month) adds more services and persistent state.</p>
<p><b>When to use:</b> integration tests that span multiple AWS services (e.g. an EventBridge rule that triggers a Lambda that writes to DynamoDB). Without LocalStack you'd need a real AWS account for these tests.</p>
<p><b>When to avoid:</b> if you only need one service (just S3 → use MinIO; just Lambda → use SAM/RIE). LocalStack's Lambda emulation has occasional edge-case differences from the real runtime.</p>
<pre><code>docker run --rm -p 4566:4566 localstack/localstack
AWS_DEFAULT_REGION=us-east-1 \
AWS_ACCESS_KEY_ID=test \
AWS_SECRET_ACCESS_KEY=test \
aws --endpoint-url=http://localhost:4566 s3 ls</code></pre>
<h3>MinIO (this project)</h3>
<p>MinIO is an S3-compatible object store that runs locally in Docker. It implements the S3 API precisely enough that <code>boto3</code>/<code>aioboto3</code> needs only an <code>endpoint_url</code> override to work against it. It is <b>not</b> a Lambda emulator — it replaces S3 only.</p>
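<p>A plain boto3 client pointed at the compose stack shows how small the override is (default <code>minioadmin</code> credentials, same as the console login used in the labs):</p>
<pre><code>import boto3

# endpoint_url is the only change from production; None would mean real S3.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])</code></pre>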
<pre><code>make up # starts MinIO on :9000 (API) and :9001 (console)
SOURCE_DIR=~/pdfs make seed # uploads PDFs to MinIO
make invoke # runs lambda_function.py against MinIO via invoke.py</code></pre>
<p>This is the lightest possible local setup: no Docker-in-Docker, no SAM overhead, minimal latency. The function handler runs in your local Python process against a real S3-compatible store. Differences from real Lambda (no execution environment lifecycle, no /tmp isolation between runs) are acceptable for the development loop but not for environment-fidelity tests.</p>
<h3>Decision matrix</h3>
<table class="cmp-table">
<thead><tr><th>Need</th><th>Reach for</th></tr></thead>
<tbody>
<tr><td>Fast iteration on handler logic</td><td>MinIO + <code>python invoke.py</code> (this project's setup)</td></tr>
<tr><td>Emulate Lambda runtime + API Gateway locally</td><td>SAM CLI</td></tr>
<tr><td>Test a container-image Lambda</td><td>Lambda RIE via Docker</td></tr>
<tr><td>Integration test across multiple AWS services</td><td>LocalStack</td></tr>
<tr><td>Full-fidelity staging before prod</td><td>Real AWS account, separate environment</td></tr>
</tbody>
</table>
</div>
</section>
<!-- ===================================================================== -->
<!-- CI/CD — placeholder -->
<!-- ===================================================================== -->
<section id="cicd" class="graph-section">
<h2>CI/CD</h2>
<p class="lead">Aliases, versions, traffic shifting, blue/green. Plain CLI → SAM → CDK → Terraform.</p>
<div class="prose">
<h3>Versions and aliases</h3>
<p><b>Versions</b> are immutable snapshots of a function's code and configuration. When you publish a version (<code>aws lambda publish-version</code>), AWS creates an immutable ARN like <code>arn:…:function:my-fn:7</code>. <code>$LATEST</code> is the only mutable version — always reflects the most recent code upload.</p>
<p><b>Aliases</b> are named pointers to a version. <code>prod</code> might point to version 7; <code>staging</code> might point to version 8. Event source mappings, API Gateway integrations, and Step Functions tasks should target aliases, not version ARNs — this decouples deployment (publishing a new version) from promotion (updating the alias).</p>
<h3>Traffic shifting (blue/green)</h3>
<p>An alias can split traffic across two versions with weighted routing:</p>
<pre><code>aws lambda update-alias \
--function-name my-fn \
--name prod \
--function-version 8 \
--routing-config AdditionalVersionWeights={"7"=0.9}
# result: 10% of prod traffic goes to v8, 90% still to v7</code></pre>
<p>Start at 10% canary, watch error rates in CloudWatch, shift to 50%, then 100%. Rollback is instant: point the alias back to the stable version. No instance drain, no connection draining — Lambda is stateless, cutover is atomic.</p>
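<p>The same progression can be scripted with boto3. A sketch mirroring the CLI example above (function name and version numbers are assumed; the CloudWatch check is left as a placeholder):</p>
<pre><code>import boto3

lam = boto3.client("lambda")

# v8 is primary on the alias; v7 keeps the listed weight.
# old_weight 0.9 means a 10% canary on v8; 0.0 clears the split entirely.
for old_weight in (0.9, 0.5, 0.0):
    lam.update_alias(
        FunctionName="my-fn",
        Name="prod",
        FunctionVersion="8",
        RoutingConfig={"AdditionalVersionWeights": {"7": old_weight} if old_weight else {}},
    )
    # ...watch error rates in CloudWatch here before widening the canary...</code></pre>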
<h3>CodeDeploy integration</h3>
<p>SAM and CDK can wire up CodeDeploy for automatic traffic shifting with automatic rollback on CloudWatch alarms. You declare the deployment preference in the template:</p>
<pre><code># SAM template.yaml
DeploymentPreference:
Type: Canary10Percent5Minutes # 10% for 5 min, then 100%
Alarms:
- !Ref ErrorRateAlarm # rolls back if alarm triggers</code></pre>
<p>CodeDeploy manages the alias weight changes and calls the rollback if the alarm fires — fully automated blue/green without manual traffic management.</p>
<h3>Deployment tooling progression</h3>
<table class="cmp-table">
<thead><tr><th>Tool</th><th>Good for</th><th>Caveats</th></tr></thead>
<tbody>
<tr><td><b>AWS CLI / SDK</b></td><td>One-off deployments, scripting, deep control</td><td>Verbose; no state management; drift-prone at scale</td></tr>
<tr><td><b>SAM (CloudFormation extension)</b></td><td>Lambda-first projects; built-in local testing; CodeDeploy integration</td><td>CloudFormation speed; YAML verbosity; AWS-only</td></tr>
<tr><td><b>CDK</b></td><td>Complex infra in TypeScript/Python; reusable constructs; type safety</td><td>Still compiles to CloudFormation; learning curve; bootstrapping required</td></tr>
<tr><td><b>Terraform (AWS provider)</b></td><td>Multi-cloud orgs; large existing Terraform estate; strong community modules</td><td>No built-in Lambda local testing; plan/apply cycle slower than SAM deploy</td></tr>
<tr><td><b>Serverless Framework</b></td><td>Multi-cloud serverless; plugin ecosystem</td><td>V3 → V4 became paid for teams; community plugins vary in quality</td></tr>
</tbody>
</table>
<h3>CI pipeline skeleton</h3>
<pre><code># GitHub Actions example
jobs:
deploy:
steps:
- uses: actions/checkout@v4
- name: Build zip
run: |
docker run --rm -v $PWD:/var/task \
public.ecr.aws/lambda/python:3.13 \
pip install -r requirements.txt -t package/
cd package && zip -r ../function.zip . && cd ..
zip function.zip lambda_function.py
- name: Deploy
run: |
aws lambda update-function-code \
--function-name my-fn --zip-file fileb://function.zip
aws lambda wait function-updated --function-name my-fn
VERSION=$(aws lambda publish-version --function-name my-fn --query Version --output text)
aws lambda update-alias --function-name my-fn \
--name prod --function-version $VERSION</code></pre>
<p>The <code>wait function-updated</code> call is important — <code>update-function-code</code> is asynchronous and <code>publish-version</code> must wait for it to complete.</p>
</div>
</section>
<!-- ===================================================================== -->
<!-- PITFALLS -->
<!-- ===================================================================== -->
<section id="pitfalls" class="graph-section">
<h2>PITFALLS — THE MUST-KNOWS</h2>
<p class="lead">The list to skim before the next interview or design review. Each item has bitten someone in production.</p>
<div class="prose">
<h3>Execution model</h3>
<ol>
<li><b>Module-level state leaks across invocations.</b> A list you append to in the handler grows forever on warm calls. A counter you increment is wrong by the second request. If it's mutable and lives at module scope, treat it as either a deliberate cache or a bug.</li>
<li><b>Handler globals are shared by every invocation on that env, but not across envs.</b> "I cached the result" works locally; in production half your traffic gets the cached value, the other half doesn't, depending on which warm container they hit. Externalise (Redis, DynamoDB) or accept the variance.</li>
<li><b>/tmp is per-environment, not per-invocation.</b> If you write <code>/tmp/output.json</code> with a fixed name, the next warm invocation finds yesterday's file. Always use a per-invocation suffix (UUID, request ID).</li>
<li><b>Init phase has a hard 10 s cap.</b> If you import TensorFlow, hydrate a 500 MB model, or do a network call at module scope, you can blow this budget on cold start. Defer expensive work until the first handler call (lazy init; see the sketch after this list), or pay for init ahead of traffic with provisioned concurrency.</li>
<li><b>Async <code>asyncio.run</code> in a sync handler creates a fresh event loop per invocation.</b> Acceptable, but means async clients can't be shared across invocations the way sync boto3 clients can. Profile before assuming async is faster.</li>
</ol>
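<p>A minimal lazy-init sketch for item 4; <code>heavy_lib</code> stands in for any expensive import or model load:</p>
<pre><code>_model = None

def _get_model():
    # Deferred: the first handler call pays the load, warm calls reuse it.
    # Keeping this out of module scope keeps it out of the 10 s init budget.
    global _model
    if _model is None:
        import heavy_lib  # hypothetical heavy dependency
        _model = heavy_lib.load("model.bin")
    return _model

def handler(event, context):
    return _get_model().predict(event["input"])</code></pre>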
<h3>Payload &amp; size limits</h3>
<ol start="6">
<li><b>6 MB sync response cap is silent.</b> Returning a JSON list of 50 000 items "works" in the function but the API GW caller gets 413. The fix in <code>lambda_function.py</code> — return a presigned URL to a manifest file rather than the full list — is the standard pattern.</li>
<li><b>API Gateway caps integration time at 29 s.</b> Doesn't matter if your Lambda timeout is 15 minutes. For longer work, return a job ID and poll, or use Function URLs (15 min) with response streaming.</li>
<li><b>Environment variables max 4 KB total.</b> Big secrets (RSA keys, JSON config blobs) blow this. Parameter Store / Secrets Manager and read on init.</li>
</ol>
<h3>Concurrency &amp; throttling</h3>
<ol start="9">
<li><b>Default account concurrency is 1 000 per region.</b> Most teams hit this before they realise. Sets a hard ceiling on RPS — at 100 ms latency, that's 10 000 RPS account-wide; at 1 s, 1 000 RPS.</li>
<li><b>Reserved concurrency = 0 disables the function.</b> Looks weird, used as a circuit breaker.</li>
<li><b>Provisioned concurrency double-bills.</b> You pay for the warm slots <i>and</i> for invocations against them. Worth it for latency-sensitive paths; wasteful for batch.</li>
<li><b>Scale-up rate is per-function and finite.</b> Since late 2023, Lambda adds up to 1 000 concurrent executions every 10 seconds per function; a spike steeper than that throttles until scaling catches up. Provisioned concurrency or pre-warming is the fix.</li>
</ol>
<h3>Triggers, retries, idempotency</h3>
<ol start="13">
<li><b>Async invocation retries 2 times by default.</b> Total 3 attempts. If your handler isn't idempotent, you can charge a card three times.</li>
<li><b>S3, SNS, EventBridge invoke async — at-least-once.</b> Plan for duplicates. SQS standard is also at-least-once. SQS FIFO and Kinesis are exactly-once-ish per shard but with their own quirks.</li>
<li><b>SQS visibility timeout must be ≥ 6× function timeout.</b> Otherwise the message comes back while you're still processing it, and you do the work twice (or more).</li>
<li><b>Partial batch failures need explicit signalling.</b> Returning <code>batchItemFailures</code> for SQS/Kinesis tells AWS which records to retry; otherwise the entire batch retries or none does.</li>
<li><b>API Gateway error responses are JSON-shaped if you don't say otherwise.</b> Throw an unhandled exception and the client sees <code>{"errorMessage": "...", "errorType": "..."}</code> with status 502. Map errors yourself.</li>
</ol>
<h3>Networking, IAM, observability</h3>
<ol start="18">
<li><b>Putting Lambda in a VPC adds an ENI cold-start penalty</b> (improved a lot in 2019, but still real for first invocation). Only do it if you genuinely need private-subnet resources. Outbound internet from VPC Lambda needs NAT, which costs money 24/7.</li>
<li><b>S3 access from a VPC Lambda needs a VPC gateway endpoint or NAT.</b> Without one, your S3 calls hang and time out — looks like a code bug, isn't.</li>
<li><b>CloudWatch log groups default to "Never expire" retention.</b> Verbose Lambdas can rack up real cost in CW Logs alone — set retention (7/14/30 days) on every log group you create.</li>
<li><b>Lambda execution role is implicit on every action.</b> Forgetting <code>s3:GetObject</code> or <code>kms:Decrypt</code> on the bucket's CMK is the most common "but it works locally" failure. CloudTrail tells you what was denied.</li>
<li><b>Resource policy vs execution role are different layers.</b> Resource policy says "who can <i>invoke</i> this Lambda"; execution role says "what this Lambda can <i>do</i>". Both must allow.</li>
<li><b>X-Ray needs an SDK call <i>and</i> tracing enabled on the function <i>and</i> IAM permission.</b> Three switches. People flip one and conclude X-Ray is broken.</li>
</ol>
<h3>Deployment, dependencies, runtimes</h3>
<ol start="24">
<li><b>The boto3 in the Python runtime lags pip.</b> If you need a recent API (e.g. new S3 features), bundle current boto3 in your zip. The runtime version is "good enough" for stable APIs, "sometimes wrong" for fresh ones.</li>
<li><b>Native wheels must match Lambda's runtime architecture.</b> <code>pip install</code> on a Mac and zip-uploading <code>cryptography</code> is a classic foot-gun. Build in a Docker image matching <code>public.ecr.aws/lambda/python:3.13</code>.</li>
<li><b>arm64 saves ~20 % at the same memory</b> but <i>some</i> wheels are still x86-only. Audit your deps before flipping the architecture.</li>
<li><b>Layers are merge-ordered; later layers overwrite earlier.</b> A "base" layer for your shared dependencies works; conflicting layers silently shadow each other.</li>
<li><b>Container-image deploys are cached on the Lambda host.</b> First cold start can be slow (image pull); subsequent are normal. Keep images small even though the limit is 10 GB.</li>
</ol>
<h3>Time, scheduling, secrets</h3>
<ol start="29">
<li><b>EventBridge schedule (cron/rate) is always UTC.</b> "9 AM" in your local time means something different in production. Use the new EventBridge Scheduler (2022) for time-zone-aware schedules.</li>
<li><b>Async invocations have a 6-hour event age.</b> If retries fail past that, the event is silently dropped unless you've set a DLQ or on-failure destination.</li>
<li><b>Secrets in env vars are visible to anyone with <code>lambda:GetFunctionConfiguration</code>.</b> Encrypted at rest, plaintext in the console. Use Secrets Manager / Parameter Store for actual secrets (see the sketch after this list).</li>
</ol>
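<p>A sketch of the Parameter Store pattern from item 31. The parameter name is hypothetical; the read runs once per cold start because it sits at module scope:</p>
<pre><code>import boto3

ssm = boto3.client("ssm")

# Read once at init; every warm invocation reuses the cached value.
DB_PASSWORD = ssm.get_parameter(
    Name="/app/db-password", WithDecryption=True
)["Parameter"]["Value"]</code></pre>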
<div class="callout warn">
<b>Skim test:</b> if you can re-state the cold-start split (Init / Handler), the 6 MB / 256 KB / 4 KB / 250 MB / 10 GB constants, and the difference between resource policy and execution role from memory, you'll handle most "tell me about Lambda" interview questions.
</div>
</div>
</section>
<!-- ===================================================================== -->
<!-- ADJACENT — placeholder -->
<!-- ===================================================================== -->
<section id="adjacent" class="graph-section">
<h2>ADJACENT</h2>
<p class="lead">Brief orientation on AWS Glue and Prometheus/Grafana — the secondary gaps from the interview.</p>
<div class="prose">
<h3>AWS Glue</h3>
<p>Glue is a managed Spark-based ETL service. Lambda and Glue solve different problems:</p>
<table class="cmp-table">
<thead><tr><th></th><th>Lambda</th><th>Glue</th></tr></thead>
<tbody>
<tr><td><b>Runtime model</b></td><td>Serverless; up to 15 min; one handler at a time per env</td><td>Managed Spark cluster; hours-long jobs; distributed compute</td></tr>
<tr><td><b>Data scale</b></td><td>Up to a few GB comfortably</td><td>TB to PB natively</td></tr>
<tr><td><b>Language</b></td><td>Python, Node, Java, Go, custom runtime</td><td>PySpark, Scala; Glue Studio for no-code</td></tr>
<tr><td><b>Startup time</b></td><td>Milliseconds (warm)</td><td>1–2 minutes to provision Spark cluster</td></tr>
<tr><td><b>Cost model</b></td><td>Per request + per ms</td><td>Per DPU-hour (1 DPU = $0.44/hr); 1-minute minimum billing on Glue 2.0+</td></tr>
<tr><td><b>Use for</b></td><td>Light transforms, event reactions, API backends</td><td>Large-scale joins, aggregations, schema inference on data lake</td></tr>
</tbody>
</table>
<p>Key Glue concepts to know: <b>DynamicFrame</b> (Glue's DataFrame variant with schema flexibility), <b>Glue Catalog</b> (centralised metadata store for table schemas — also used by Athena), <b>Job Bookmarks</b> (Glue tracks processed S3 partitions to avoid reprocessing on incremental runs).</p>
<p>The decision is usually straightforward: if the data fits in Lambda's memory and the job finishes in under 15 minutes, use Lambda. If you're joining multiple large S3 datasets or transforming daily partition files, use Glue.</p>
<h3>Prometheus</h3>
<p>Prometheus is a pull-based time-series metrics system. It scrapes HTTP <code>/metrics</code> endpoints on a schedule. The fundamental tension with Lambda: Lambda functions are ephemeral — there's no persistent HTTP endpoint to scrape, and the function may be at zero concurrency between invocations.</p>
<p>Options for Lambda → Prometheus:</p>
<ul>
<li><b>EMF → CloudWatch → Grafana CloudWatch plugin</b> — no Prometheus involved. Grafana reads directly from CloudWatch. Easiest for AWS-native stacks.</li>
<li><b>Remote write to Amazon Managed Prometheus (AMP)</b> — the function pushes metrics to AMP via the Prometheus remote_write API at the end of each invocation. Grafana or Amazon Managed Grafana reads from AMP. Requires the <code>prometheus_client</code> library and SigV4 signing on the remote_write request.</li>
<li><b>Push gateway</b> — a persistent intermediate that Lambda pushes to; Prometheus scrapes the gateway. More infrastructure to manage, stale metric risk if the push gateway isn't flushed between invocations.</li>
</ul>
<h3>Grafana</h3>
<p>Grafana is a dashboarding layer — it doesn't store data, it queries data sources. Relevant data sources for Lambda observability:</p>
<ul>
<li><b>CloudWatch</b> — built-in Grafana plugin; queries CW Metrics and CW Logs Insights. Zero extra infrastructure. The standard choice for Lambda metrics (invocations, errors, duration, throttles, concurrent executions).</li>
<li><b>Amazon Managed Prometheus</b> — query via PromQL if you've pushed custom metrics.</li>
<li><b>Amazon Managed Grafana (AMG)</b> — Grafana-as-a-service; integrates with AWS IAM; auto-discovers CW namespaces. Avoids self-hosting Grafana.</li>
</ul>
<p>For a Lambda-only stack with no existing Prometheus investment, the practical answer is: use EMF for custom metrics, use CloudWatch for the built-in Lambda metrics, and connect Grafana to CloudWatch. It requires no extra infrastructure and gives you dashboards in an hour.</p>
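<p>EMF is just a JSON log line in a specific shape; CloudWatch extracts the metric from it asynchronously. A minimal sketch (the namespace and dimension names are assumptions):</p>
<pre><code>import json
import time

def emit_pdf_count(count):
    # One EMF record per print(); CloudWatch turns it into a metric.
    # No PutMetricData API call, no extra latency in the handler.
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "PdfScanner",            # assumed namespace
                "Dimensions": [["FunctionName"]],
                "Metrics": [{"Name": "PdfCount", "Unit": "Count"}],
            }],
        },
        "FunctionName": "pdf-scanner",
        "PdfCount": count,
    }))</code></pre>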
</div>
</section>
<!-- ===================================================================== -->
<!-- LABS — placeholder -->
<!-- ===================================================================== -->
<section id="labs" class="graph-section">
<h2>LABS</h2>
<p class="lead">Hands-on walkthroughs that modify the existing app. Each mutates what you already have — no throw-away exercises.</p>
<div class="prose">
<h3>Lab 0 — Local sandbox (start here)</h3>
<p><b>Goal:</b> run the full stack locally against MinIO with real PDFs.</p>
<ol>
<li><code>make install</code> — creates <code>.venv</code> and installs deps</li>
<li><code>make up</code> — starts MinIO on :9000 (API) and :9001 (console)</li>
<li><code>SOURCE_DIR=~/path/to/pdfs make seed</code> — uploads PDFs to MinIO bucket</li>
<li><code>make invoke</code> — runs <code>invoke.py</code> which calls <code>handler()</code> with a minimal event</li>
<li>Open <code>http://localhost:9001</code> (minioadmin/minioadmin) and find the generated manifest in the <code>manifests/</code> prefix</li>
</ol>
<p><b>What you can break:</b> set <code>PREFIX</code> to a non-existent prefix and observe the handler returns count=0. Set <code>QUEUE_MAX=1</code> and observe the backpressure on the producer. Remove <code>S3_ENDPOINT_URL</code> and watch it fail to connect.</p>
<h3>Lab 1 — Deploy to real AWS</h3>
<p><b>Goal:</b> package and deploy the function to AWS Lambda, invoke it against a real S3 bucket.</p>
<ol>
<li>Create an S3 bucket and upload sample PDFs to <code>2026/04/</code> prefix</li>
<li>Create an IAM execution role with <code>s3:GetObject</code>, <code>s3:PutObject</code>, <code>s3:ListBucket</code>, and <code>logs:*</code></li>
<li>Build the deployment zip inside the Lambda image:<br><code>docker run --rm -v $PWD:/var/task public.ecr.aws/lambda/python:3.13 pip install -r requirements.txt -t package/</code></li>
<li>Create the function: <code>aws lambda create-function --handler lambda_function.handler …</code></li>
<li>Invoke: <code>aws lambda invoke --function-name pdf-scanner --payload '{}' out.json</code></li>
<li>Verify the manifest appeared in S3 and the presigned URL works</li>
</ol>
<p><b>What you can break:</b> invoke without <code>s3:ListBucket</code> on the bucket (not the object ARN) — observe AccessDenied. Watch CloudTrail to see the denied call.</p>
<h3>Lab 2 — Add an S3 trigger</h3>
<p><b>Goal:</b> make the function fire automatically when a PDF is uploaded.</p>
<ol>
<li>Add a resource policy entry granting S3 <code>lambda:InvokeFunction</code></li>
<li>Configure an S3 event notification on the bucket for <code>s3:ObjectCreated:*</code> filtered to <code>*.pdf</code></li>
<li>Upload a PDF and check CloudWatch Logs for the invocation</li>
<li>Notice the event structure differs from the manual invoke — update the handler to extract the key from <code>event["Records"][0]["s3"]["object"]["key"]</code> (see the sketch after this list)</li>
</ol>
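<p>A sketch of the step-4 handler change. Note the URL-decoding: S3 event notifications URL-encode object keys, so a key with a space arrives as <code>my+file.pdf</code>:</p>
<pre><code>from urllib.parse import unquote_plus

def handler(event, context):
    # S3 event object keys are URL-encoded; decode before any S3 call.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = unquote_plus(record["s3"]["object"]["key"])
    return {"statusCode": 200, "body": f"{bucket}/{key}"}</code></pre>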
<p><b>What you can break:</b> upload a non-PDF to the same prefix and verify the filter prevents invocation. Remove the resource policy and verify the trigger silently stops firing (no error to the uploader — this is the async invocation model).</p>
<h3>Lab 3 — Switch to arm64</h3>
<p><b>Goal:</b> migrate to Graviton2 and verify 20% cost reduction.</p>
<ol>
<li>Rebuild the zip using the arm64 Lambda image: <code>public.ecr.aws/lambda/python:3.13-arm64</code></li>
<li>Update the function architecture: <code>aws lambda update-function-configuration --architectures arm64</code></li>
<li>Update the function code with the arm64 zip</li>
<li>Invoke and compare REPORT duration and billed duration in CloudWatch</li>
</ol>
<p><b>What you can break:</b> try deploying the x86 zip against the arm64 architecture — the function will import-error on any C-extension wheels.</p>
<h3>Lab 4 — Enable Provisioned Concurrency</h3>
<p><b>Goal:</b> eliminate cold starts on the production alias.</p>
<ol>
<li>Publish version 1: <code>aws lambda publish-version --function-name pdf-scanner</code></li>
<li>Create alias <code>prod</code> pointing to version 1</li>
<li>Enable PC: <code>aws lambda put-provisioned-concurrency-config --function-name pdf-scanner --qualifier prod --provisioned-concurrent-executions 2</code></li>
<li>Invoke via the alias ARN and confirm <code>Init Duration</code> is absent from REPORT lines</li>
<li>Check your AWS bill after 1 hour — note the PC charges</li>
</ol>
<h3>Lab 5 — Add X-Ray tracing</h3>
<p><b>Goal:</b> see a trace with S3 subsegments in the X-Ray console.</p>
<ol>
<li>Add <code>aws-xray-sdk</code> to <code>requirements.txt</code> and rebuild the zip</li>
<li>Add to <code>lambda_function.py</code>: <code>from aws_xray_sdk.core import patch_all; patch_all()</code></li>
<li>Enable active tracing on the function and add X-Ray permissions to the execution role</li>
<li>Invoke and open X-Ray → Traces in the console — verify S3 <code>list_objects_v2</code> and <code>generate_presigned_url</code> appear as subsegments</li>
</ol>
<h3>Lab 6 — Fan out with Step Functions</h3>
<p><b>Goal:</b> process multiple S3 prefixes in parallel using a Map state.</p>
<ol>
<li>Update the handler to accept a <code>prefix</code> key in the event instead of reading from the env var (see the sketch after this list)</li>
<li>Create a Step Functions state machine with a Map state that iterates over a list of prefixes and invokes the Lambda for each</li>
<li>Start an execution with input: <code>{"prefixes": ["2026/01/", "2026/02/", "2026/03/"]}</code></li>
<li>Observe parallel Lambda invocations in the execution graph and CloudWatch</li>
<li>Add error handling: configure the Map state to catch Lambda errors and continue rather than fail the whole execution</li>
</ol>
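<p>A sketch of the step-1 handler change, assuming <code>_run</code> is also updated to take the prefix as a parameter:</p>
<pre><code># lambda_function.py: handler takes the prefix from the event
def handler(event, context):
    # The Map state passes one item per invocation, e.g. {"prefix": "2026/01/"}
    prefix = event.get("prefix", PREFIX)   # fall back to the env default
    result = asyncio.run(_run(prefix))     # assumes _run(prefix) signature change
    return {"statusCode": 200, "body": json.dumps(result)}</code></pre>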
</div>
</section>
<!-- ===================================================================== -->
<!-- REPOSITORY — placeholder -->
<!-- ===================================================================== -->
<section id="repo" class="graph-section">
<h2>REPOSITORY</h2>
<p class="lead">Tree of <code>eth/</code> — the sandbox plus this study site.</p>
<div class="tree-container">
<pre class="repo-tree"><span class="t-root">eth/</span>
├── <span class="t-dir">lambda_function.py</span> <span class="t-comment">— handler: async PDF scan → presigned URLs → JSONL manifest</span>
├── <span class="t-dir">invoke.py</span> <span class="t-comment">— local runner: calls handler() with a minimal event, prints result</span>
├── <span class="t-dir">seed.py</span> <span class="t-comment">— uploads PDFs from a local directory to MinIO</span>
├── <span class="t-dir">requirements.txt</span> <span class="t-comment">— aioboto3, aiofiles (+ transitive: aiobotocore, botocore…)</span>
├── <span class="t-dir">docker-compose.yml</span> <span class="t-comment">— runs MinIO on :9000 (S3 API) and :9001 (web console)</span>
├── <span class="t-dir">Makefile</span> <span class="t-comment">— install / up / down / seed / invoke / graphs / docs</span>
├── <span class="t-dir">def/</span>
│ └── <span class="t-dir">task.md</span> <span class="t-comment">— original interview exercise specification</span>
└── <span class="t-dir">docs/</span>
├── <span class="t-dir">index.html</span> <span class="t-comment">— this study site (single-page, no build step)</span>
├── <span class="t-dir">viewer.html</span> <span class="t-comment">— pan/zoom SVG viewer (opened by graph links)</span>
└── <span class="t-dir">graphs/</span>
├── system_overview.dot / .svg <span class="t-comment">— caller → handler → MinIO/S3 → manifest</span>
├── lifecycle.dot / .svg <span class="t-comment">— init / handler / freeze / thaw / shutdown</span>
└── cold_warm_timeline.dot / .svg <span class="t-comment">— cold vs warm invocation timeline</span></pre>
</div>
<div class="prose" style="margin-top: 24px;">
<h3>What the function does, end to end</h3>
<p>The function lists every PDF inside an S3 prefix. For each one, it generates a presigned download URL that expires in 15 minutes. It writes those (key, URL) pairs into a JSONL file in <code>/tmp</code> as it goes. When the listing is done, it uploads the JSONL to S3 as a manifest, generates one more presigned URL pointing to the manifest itself, deletes the local file, and returns the manifest URL plus the count.</p>
<p>The use case: you want to ship a batch of files to someone who isn't on your AWS account. Send them one URL. They open it, get back a list of links, every link works for 15 minutes, then everything dies.</p>
<h3>Imports and module-scope config</h3>
<pre><code>import asyncio, json, os, uuid
import aioboto3
import aiofiles
BUCKET = os.environ.get("BUCKET_NAME", "my-company-reports-bucket")
PREFIX = os.environ.get("PREFIX", "2026/04/")
EXPIRY = int(os.environ.get("URL_EXPIRY_SECONDS", "900"))
ENDPOINT = os.environ.get("S3_ENDPOINT_URL") or None
QUEUE_MAX = int(os.environ.get("QUEUE_MAX", "2000"))
_DONE = object()</code></pre>
<p>Five environment reads at module scope — <b>init phase</b>. They run once per cold start and every warm invocation reuses them for free. <code>ENDPOINT</code> is the MinIO trick: on real Lambda the var is unset, value is <code>None</code>, aioboto3 talks to real S3. Locally, set it to <code>http://localhost:9000</code> and the same code talks to MinIO with no other changes. <code>_DONE</code> is a sentinel: an <code>object()</code> instance whose identity is unique and can't collide with any real S3 key — comparing with <code>is</code> (not <code>==</code>) is unambiguous.</p>
<h3>The handler — minimal on purpose</h3>
<pre><code>def handler(event, context):
result = asyncio.run(_run())
return {"statusCode": 200, "body": json.dumps(result)}</code></pre>
<p>Sync because Lambda's contract is sync. <code>asyncio.run</code> opens a fresh event loop per invocation — means async clients can't be shared across invocations the way sync boto3 clients could, which is why the S3 client lives inside <code>_run</code>. The API-Gateway response shape is a habit: harmless for direct invoke, required if you later front this with API Gateway.</p>
<p>Why async at all? Lambda bills per millisecond of wall-clock time. Anything you can overlap, you save money on. S3 LIST calls overlap with presigning and file writes. That overlap directly reduces duration and cost.</p>
<h3><code>_run()</code> — the actual work</h3>
<pre><code>async def _run():
session = aioboto3.Session()
async with session.client("s3", endpoint_url=ENDPOINT) as s3:
queue = asyncio.Queue(maxsize=QUEUE_MAX)
manifest_path = f"/tmp/{uuid.uuid4()}.jsonl"</code></pre>
<p>Session created inside <code>_run</code> (not module scope) because aioboto3 async clients are tied to the event loop — and each invocation gets a fresh loop. The queue bound gives backpressure: when full, <code>await queue.put(...)</code> blocks until the consumer takes something off. Without the bound, a million-file bucket would OOM before the first URL is presigned. UUID in the manifest path prevents collision between back-to-back warm invocations sharing the same <code>/tmp</code>.</p>
<h3>The producer</h3>
<pre><code> async def producer():
paginator = s3.get_paginator("list_objects_v2")
async for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
for obj in page.get("Contents", []) or []:
key = obj["Key"]
if key.lower().endswith(".pdf"):
await queue.put(key)
await queue.put(_DONE)</code></pre>
<p>Defined as a closure inside <code>_run</code> — captures <code>s3</code> and <code>queue</code> without arguments; signals it's a private implementation detail. The paginator transparently fetches subsequent pages (S3 returns ≤1000 per page). <code>await queue.put(key)</code> blocks when the queue is full — that's the backpressure. After all pages, it puts <code>_DONE</code> to signal the consumer to stop (<code>asyncio.Queue</code> has no close method; the sentinel is the standard pattern).</p>
<h3>The consumer</h3>
<pre><code> async def consumer():
count = 0
async with aiofiles.open(manifest_path, "w") as f:
while True:
item = await queue.get()
if item is _DONE:
break
url = await s3.generate_presigned_url(
"get_object",
Params={"Bucket": BUCKET, "Key": item},
ExpiresIn=EXPIRY,
)
await f.write(json.dumps({"key": item, "url": url}) + "\n")
count += 1
return count</code></pre>
<p>Same closure pattern. <code>generate_presigned_url</code> is a <b>local computation</b> — no network call. It uses your credentials, bucket, key, and expiry to produce a signed URL deterministically. Fast. JSONL (one JSON object per line) instead of a JSON array because it streams: write one line at a time without buffering the whole array, read one line at a time. Stays usable even at gigabyte scale.</p>
<h3>Running them together</h3>
<pre><code> prod_task = asyncio.create_task(producer())
count = await consumer()
await prod_task</code></pre>
<p><code>create_task</code> schedules the producer on the event loop and returns immediately — producer runs in the background. <code>await consumer()</code> runs in the foreground until it sees the sentinel. <code>await prod_task</code> makes the guarantee explicit and propagates any producer exceptions. The overlap: while S3 prepares the next LIST page (network), the consumer presigns and writes the previous page. Sequential would stack list latency + presign latency. Async pays only the larger of the two.</p>
<h3>Upload, presign, clean up</h3>
<pre><code>    manifest_key = f"manifests/{uuid.uuid4()}.jsonl"
    async with aiofiles.open(manifest_path, "rb") as f:
        body = await f.read()
    await s3.put_object(Bucket=BUCKET, Key=manifest_key, Body=body,
                        ContentType="application/x-ndjson")
    manifest_url = await s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": manifest_key},
        ExpiresIn=EXPIRY,
    )
    os.unlink(manifest_path)
    return {"count": count, "manifest_key": manifest_key, "manifest_url": manifest_url}</code></pre>
<p><code>put_object</code> over <code>upload_file</code> because aioboto3's async multipart handling is simpler this way for files in the KB–MB range. Content type <code>application/x-ndjson</code> is the conventional MIME type for newline-delimited JSON. <code>os.unlink</code> is required — <code>/tmp</code> persists across warm invocations; a thousand runs without cleanup would fill it and crash the next.</p>
<h3>Why this design?</h3>
<ul>
<li><b>Presigned URLs, not raw data.</b> Recipient needs no AWS account. URL expires automatically. No egress from Lambda.</li>
<li><b>Manifest in S3, not inline.</b> The 6 MB sync response cap is silent — function succeeds, caller gets 413 with no warning. Manifest in S3 has no upper bound.</li>
<li><b>Bounded queue.</b> Backpressure prevents producer from outrunning consumer and exhausting memory regardless of bucket size.</li>
<li><b>Sentinel <code>_DONE = object()</code>.</b> <code>asyncio.Queue</code> has no close. An <code>object()</code> instance can't collide with any S3 key; <code>is</code> comparison is unambiguous.</li>
<li><b>Nested functions as closures.</b> Capture <code>s3</code>, <code>queue</code>, <code>manifest_path</code> from the enclosing scope without arguments. Scope is explicit — nobody outside <code>_run</code> can call them.</li>
<li><b>UUID in <code>/tmp</code>.</b> <code>/tmp</code> persists across warm invocations. Fixed filename = race condition between back-to-back runs on the same environment.</li>
</ul>
<h3>Cold start vs warm — CloudWatch REPORT line</h3>
<pre><code># Cold start
REPORT RequestId: ... Duration: 312.45 ms Billed Duration: 313 ms
Memory Size: 256 MB Max Memory Used: 89 MB
Init Duration: 423.12 ms
# Warm (next invocation within ~30 s)
REPORT RequestId: ... Duration: 287.91 ms Billed Duration: 288 ms
Memory Size: 256 MB Max Memory Used: 91 MB</code></pre>
<p><code>Init Duration</code> ~400 ms covers importing aioboto3 → aiobotocore → botocore (heavy chain). No <code>Init Duration</code> on warm runs, and the handler duration itself drops ~25 ms (312 → 288 ms) thanks to reused module state. For a function that runs once a day, every invocation is cold. For one that runs every few seconds, init is irrelevant.</p>
<h3>What happens if it times out</h3>
<p>Default timeout is 3 s — too short. Set it explicitly to 30–60 s for a small prefix, up to 900 s (15 min) for large ones. On timeout, Lambda kills the process. The <code>/tmp</code> file may not have been deleted; the manifest may not have been uploaded. Re-running produces a fresh manifest with new UUIDs — no dedup, so two manifests for the same job can coexist in S3. If "exactly one manifest per job" is required, add a DynamoDB dedup table keyed on request ID.</p>
<h3>How would you scale this</h3>
<p><b>Fan out by prefix.</b> Wrap in a Step Functions Map state. Pass a list of prefixes; each iteration runs one Lambda for one prefix. <code>MaxConcurrency</code> controls parallelism without saturating the account concurrency quota.</p>
<p><b>Go event-driven.</b> Subscribe to S3 <code>ObjectCreated</code> filtered to <code>*.pdf</code>. The function fires once per upload, handles one file at a time — no producer/consumer needed. Simpler, but semantically different: "process new files as they arrive" vs "scan the existing bucket."</p>
<h3>What I'd change before production</h3>
<ol>
<li><b>Move <code>BUCKET</code> and <code>PREFIX</code> to the event payload.</b> Currently set at deploy time (one function per prefix). Event-driven config lets one function serve many prefixes.</li>
<li><b>Structured logging.</b> JSON to stdout with <code>request_id</code>, <code>bucket</code>, <code>prefix</code>, <code>count</code>. Logs Insights can aggregate without regex.</li>
<li><b>EMF metric for <code>count</code>.</b> Free CloudWatch metric, no additional API call. Dashboard "PDFs processed per invocation" over time.</li>
<li><b>Producer error handling.</b> If <code>paginator.paginate</code> raises, the producer task fails but the consumer blocks on <code>queue.get()</code> forever — function times out. Wrap producer body in <code>try/finally</code> that always puts <code>_DONE</code> so the consumer exits cleanly (see the sketch after this list).</li>
<li><b>Explicit timeout on <code>queue.get()</code>.</b> <code>asyncio.wait_for(queue.get(), timeout=X)</code> prevents the consumer hanging indefinitely if the producer dies without putting the sentinel.</li>
<li><b>Consider sync boto3.</b> <code>aioboto3</code> adds ~200 ms to the cold start. If cold start matters and file counts are small, sync boto3 with threading is simpler and starts faster. Async pays off only when file counts are large enough that overlap is significant.</li>
</ol>
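<p>A sketch covering items 4 and 5 together; the 30 s timeout is illustrative:</p>
<pre><code>    async def producer():
        try:
            paginator = s3.get_paginator("list_objects_v2")
            async for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
                for obj in page.get("Contents", []) or []:
                    key = obj["Key"]
                    if key.lower().endswith(".pdf"):
                        await queue.put(key)
        finally:
            await queue.put(_DONE)  # consumer always unblocks, even on error

    # consumer side: bound the wait so the consumer can't hang forever
    # if the producer dies before putting the sentinel
    item = await asyncio.wait_for(queue.get(), timeout=30)</code></pre>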
<h3>Design Q&amp;A</h3>
<p>Architectural questions that came up and are worth keeping so they don't have to be re-derived.</p>
<p><b>Why not split producer and consumer into separate Lambda functions?</b><br>
<code>generate_presigned_url</code> is local computation — no network call, just CPU. The consumer isn't blocked on anything external. The async coroutine pattern already provides the overlap benefit (S3 LIST wait ↔ presign + write) without any infrastructure overhead. Splitting into two Lambdas would mean: in-process <code>asyncio.Queue</code> → SQS (latency + cost per message), two cold starts, two IAM roles, coordination logic — and the actual bottleneck (S3 LIST pagination) would be unchanged. Split producer/consumer into separate Lambdas when the consumer does real per-item I/O (external API calls, content downloads), per-item processing takes seconds not milliseconds, or you need independent retry semantics. None of those apply here. Scale-out for this function belongs one level up: Step Functions Map state across prefixes (one Lambda per prefix), not within-prefix producer/consumer separation.</p>
<h3>Makefile targets</h3>
<table class="cmp-table">
<thead><tr><th>Target</th><th>What it does</th></tr></thead>
<tbody>
<tr><td><code>make install</code></td><td>Creates <code>.venv</code>, installs <code>requirements.txt</code></td></tr>
<tr><td><code>make up</code></td><td>Starts MinIO via <code>docker compose up -d</code></td></tr>
<tr><td><code>make down</code></td><td>Stops MinIO (keeps volumes)</td></tr>
<tr><td><code>make clean</code></td><td>Stops MinIO and deletes volumes (wipes bucket data)</td></tr>
<tr><td><code>SOURCE_DIR=path make seed</code></td><td>Uploads all files from <code>path</code> to MinIO</td></tr>
<tr><td><code>make invoke</code></td><td>Runs <code>invoke.py</code> (calls <code>handler()</code> directly)</td></tr>
<tr><td><code>make graphs</code></td><td>Renders all <code>docs/graphs/*.dot</code> → <code>.svg</code> via Graphviz <code>dot</code></td></tr>
<tr><td><code>make docs</code></td><td>Renders graphs then opens <code>docs/index.html</code></td></tr>
</tbody>
</table>
</div>
</section>
</main>
</div>
<script>
function show(id) {
document.querySelectorAll('.graph-section').forEach(s => s.classList.remove('active'));
document.querySelectorAll('nav a').forEach(a => a.classList.remove('active'));
var section = document.getElementById(id);
if (section) section.classList.add('active');
var navLink = document.querySelector('nav a[onclick="show(\'' + id + '\')"]');
if (navLink) navLink.classList.add('active');
document.querySelector('.layout').classList.remove('nav-open');
// scroll main back to top so reading position resets per section
document.querySelector('main').scrollTop = 0;
}
function toggleNav() {
document.querySelector('.layout').classList.toggle('nav-open');
}
</script>
</body>
</html>