update docs
@@ -1432,24 +1432,22 @@ jobs:
<p class="lead">Tree of <code>eth/</code> — the sandbox plus this study site.</p>
<div class="tree-container">
<pre class="repo-tree"><span class="t-root">eth/</span>
├── <span class="t-dir">lambda_function.py</span> <span class="t-comment">— handler: async PDF scan → presigned URLs → JSONL manifest</span>
├── <span class="t-dir">invoke.py</span> <span class="t-comment">— local runner: calls handler() with a minimal event, prints result</span>
├── <span class="t-dir">seed.py</span> <span class="t-comment">— uploads PDFs from a local directory to MinIO</span>
├── <span class="t-dir">requirements.txt</span> <span class="t-comment">— aioboto3, aiofiles (+ transitive: aiobotocore, botocore…)</span>
├── <span class="t-dir">docker-compose.yml</span> <span class="t-comment">— runs MinIO on :9000 (S3 API) and :9001 (web console)</span>
├── <span class="t-dir">Makefile</span> <span class="t-comment">— install / up / down / seed / invoke / graphs / docs</span>
├── <span class="t-dir">def/</span>
│   └── <span class="t-dir">task.md</span> <span class="t-comment">— original interview exercise specification</span>
└── <span class="t-dir">docs/</span>
    ├── <span class="t-dir">index.html</span> <span class="t-comment">— this study site (single-page, no build step)</span>
    ├── <span class="t-dir">viewer.html</span> <span class="t-comment">— pan/zoom SVG viewer (opened by graph links)</span>
    └── <span class="t-dir">graphs/</span>
        ├── system_overview.dot / .svg <span class="t-comment">— caller → handler → MinIO/S3 → manifest</span>
        ├── lifecycle.dot / .svg <span class="t-comment">— init / handler / freeze / thaw / shutdown</span>
        └── cold_warm_timeline.dot / .svg <span class="t-comment">— cold vs warm invocation timeline</span></pre>
</div>
<div class="prose" style="margin-top: 24px;">
@@ -1569,6 +1567,12 @@ REPORT RequestId: ... Duration: 287.91 ms Billed Duration: 288 ms
<li><b>Consider sync boto3.</b> <code>aioboto3</code> adds ~200 ms to the cold start. If cold start matters and file counts are small, sync boto3 with threading is simpler and starts faster. Async pays off only when file counts are large enough that overlap is significant.</li>
</ol>
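<p>A minimal sketch of the sync alternative described above. The boto3 calls (the paginated LIST and <code>generate_presigned_url</code>) are replaced by labeled stand-ins so the control flow is runnable without AWS credentials; only the structure is the point here.</p>
<pre><code class="language-python">from concurrent.futures import ThreadPoolExecutor

# Stand-in for boto3's list_objects_v2 paginator (hypothetical keys).
def list_keys(bucket: str, prefix: str) -> list[str]:
    return [f"{prefix}doc{i}.pdf" for i in range(5)]

# Stand-in for client.generate_presigned_url(): local computation only.
def presign(key: str) -> str:
    return f"https://example.invalid/bucket/{key}?sig=stub"

def scan_sync(bucket: str = "pdfs", prefix: str = "in/") -> list[str]:
    keys = list_keys(bucket, prefix)
    # Threads only help when per-item work does real I/O; for pure
    # presigning a plain loop is equivalent, but the pool keeps the
    # structure ready for per-item downloads or HEAD calls.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(presign, keys))

urls = scan_sync()
</code></pre>
<p>The whole thing imports and starts in single-digit milliseconds, which is the cold-start argument for the sync path when file counts are small.</p>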
<h3>Design Q&A</h3>
<p>Architectural questions that came up along the way, recorded here so they don't have to be re-derived.</p>
<p><b>Why not split producer and consumer into separate Lambda functions?</b><br>
<code>generate_presigned_url</code> is local computation — no network call, just CPU. The consumer isn't blocked on anything external. The async coroutine pattern already provides the overlap benefit (S3 LIST wait ↔ presign + write) without any infrastructure overhead. Splitting into two Lambdas would mean: in-process <code>asyncio.Queue</code> → SQS (latency + cost per message), two cold starts, two IAM roles, coordination logic — and the actual bottleneck (S3 LIST pagination) would be unchanged. Split producer/consumer into separate Lambdas when the consumer does real per-item I/O (external API calls, content downloads), per-item processing takes seconds not milliseconds, or you need independent retry semantics. None of those apply here. Scale-out for this function belongs one level up: Step Functions Map state across prefixes (one Lambda per prefix), not within-prefix producer/consumer separation.</p>
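<p>The in-process overlap can be sketched with an <code>asyncio.Queue</code>. The S3 paginator and the presigner below are hypothetical stand-ins (not the real aioboto3 calls) so the pattern runs without MinIO or AWS; the shape of the producer/consumer split is what matters.</p>
<pre><code class="language-python">import asyncio

async def fake_list_pages():
    # Stand-in for aioboto3's list_objects_v2 paginator (hypothetical keys).
    for page in ([f"pdfs/a{i}.pdf" for i in range(3)],
                 [f"pdfs/b{i}.pdf" for i in range(3)]):
        await asyncio.sleep(0)  # yield point: simulates waiting on S3 LIST
        yield page

def presign(key):
    # Stand-in for generate_presigned_url(): pure CPU, no network call.
    return f"https://example.invalid/{key}?sig=stub"

async def scan():
    queue = asyncio.Queue(maxsize=64)

    async def producer():
        async for page in fake_list_pages():
            for key in page:
                await queue.put(key)
        await queue.put(None)  # sentinel: listing finished

    async def consumer():
        lines = []
        while (key := await queue.get()) is not None:
            lines.append(presign(key))  # overlaps with the next LIST wait
        return lines

    # Both coroutines share one event loop: while the producer awaits the
    # next LIST page, the consumer drains the queue.
    _, manifest = await asyncio.gather(producer(), consumer())
    return manifest

manifest = asyncio.run(scan())
</code></pre>
<p>Swapping the queue for SQS buys nothing here: the consumer never blocks on anything external, so the split would only add latency, cost, and a second cold start.</p>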
<h3>Makefile targets</h3>
<table class="cmp-table">
<thead><tr><th>Target</th><th>What it does</th></tr></thead>