improve frontend and celery task architecture
This commit is contained in:
46
def/k8splan/envsink8s
Normal file
@@ -0,0 +1,46 @@
You're absolutely right! In Kubernetes, environment variables are handled at the deployment level through ConfigMaps and Secrets, not individual service .env files.

Kubernetes approach:
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: media-analyzer-config
data:
  DB_HOST: "postgres-service"
  HLS_BASE_URL: "http://nginx-hls-service"

# k8s/deployment.yaml
env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: media-analyzer-config
        key: DB_HOST

Better structure:
- Development: Single .env for docker-compose
- Production: K8s ConfigMaps/Secrets per environment
- No service-specific .env files

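As a sketch of the production side of that split, sensitive values such as the database password would live in a Secret rather than any .env file, and be referenced the same way as the ConfigMap keys. Names here are illustrative, chosen to match the ConfigMap above:

```yaml
# k8s/secret.yaml -- hypothetical; pair with media-analyzer-config
apiVersion: v1
kind: Secret
metadata:
  name: media-analyzer-secrets
type: Opaque
stringData:                  # stringData avoids manual base64 encoding
  DB_PASSWORD: media_pass
```

In the deployment's container spec it is consumed via `secretKeyRef` instead of `configMapKeyRef`, with `name: media-analyzer-secrets` and `key: DB_PASSWORD`.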
Recommended approach:

● Write(.env.development)
  ⎿ Wrote 8 lines to .env.development
     # Development environment variables for docker-compose
     DB_HOST=postgres
     DB_NAME=media_analyzer
     DB_USER=media_user
     DB_PASSWORD=media_pass
     RTMP_PORT=1935
     HLS_BASE_URL=http://localhost:8081
     DEBUG=1

● Then K8s handles it via:
- k8s/base/configmap.yaml - Non-secret config
- k8s/base/secret.yaml - Passwords, keys
- k8s/overlays/staging/ - Staging overrides
- k8s/overlays/production/ - Production overrides

This way each deployment environment controls its own config, not the services themselves.

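One common way to wire up that base/overlay layout is Kustomize. A minimal sketch, assuming the file names listed above (the patch file and its contents are illustrative):

```yaml
# k8s/base/kustomization.yaml
resources:
  - configmap.yaml
  - secret.yaml
  - deployment.yaml
---
# k8s/overlays/staging/kustomization.yaml
resources:
  - ../../base
patches:
  - path: configmap-patch.yaml   # e.g. a staging HLS_BASE_URL override
```

Then `kubectl apply -k k8s/overlays/staging/` renders the base with the staging overrides applied.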
43
def/k8splan/moving_to_k8s
Normal file
@@ -0,0 +1,43 @@
> just keep k8s in mind but I think it's better to dedicate a session for k8s. anything I should know now? I understand the concepts well and have some practical experience (on the job and running a cluster locally) but it's been a while

● Good to plan ahead! Here are the key things to keep in mind for the K8s session:

K8s-Specific Considerations for Your App

Stateful Components:
- PostgreSQL (needs StatefulSet + PVC)
- Redis (can be a Deployment, but consider persistence)
- Media files storage (shared PVC across workers)

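For the PostgreSQL piece, a minimal StatefulSet sketch with a per-replica volume claim (names, image tag, and storage size are placeholders, not from the existing setup):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service     # headless Service gives stable DNS
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:             # one PVC per replica, survives pod restarts
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```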
Service Architecture:
- Django ASGI (HTTP + WebSocket)
- Celery workers (separate deployment)
- Nginx (HLS file serving)
- External services (PostgreSQL, Redis)

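The Celery workers translate to their own Deployment that runs the worker command against the same image as the web pods. A sketch, where the image name, Celery app module, and PVC name are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  replicas: 2
  selector:
    matchLabels: {app: celery-worker}
  template:
    metadata:
      labels: {app: celery-worker}
    spec:
      containers:
        - name: worker
          image: media-analyzer:latest          # same image as the Django pods
          command: ["celery", "-A", "media_analyzer", "worker", "-l", "info"]
          envFrom:
            - configMapRef:
                name: media-analyzer-config     # the ConfigMap from the env discussion
          volumeMounts:
            - name: media
              mountPath: /media                 # shared volume for HLS segments
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media-pvc                # hypothetical shared PVC
```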
Scaling Patterns:
- HPA for Celery workers based on queue depth
- Video processing is CPU/memory intensive
- WebSocket connections need session affinity
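Scaling on queue depth requires an external-metrics adapter (e.g. KEDA or a Prometheus adapter), since queue length is not a built-in metric. As a starting point, here is the plain CPU-based HPA shape, assuming a Deployment named celery-worker; replica bounds and the threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: celery-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: celery-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # transcoding is CPU-bound, so CPU is a usable proxy
```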
Storage:
- Shared volume for HLS segments (/media)
- Database persistence
- Consider GCP Cloud Storage integration

Networking:
- Ingress for external access
- Internal service-to-service communication
- WebSocket support in ingress
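With the nginx ingress controller, WebSockets ride on ordinary HTTP/1.1 upgrades; what usually needs tuning is session affinity and proxy timeouts, which are set via annotations. A sketch, assuming ingress-nginx and illustrative host/service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: media-analyzer
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"          # sticky sessions for WebSocket clients
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"  # keep long-lived connections open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: media.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django-asgi-service   # hypothetical ASGI Service name
                port:
                  number: 8000
```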
GCP Integration Points:
- GKE cluster
- Cloud Storage for media
- Cloud SQL (optional PostgreSQL upgrade)
- Container Registry for images

The current Docker Compose setup translates well to K8s, but you'll need persistent volumes and proper service discovery. Your video processing workload is perfect for demonstrating K8s scaling capabilities.

We can tackle the architecture cleanup first, then dedicate a full session to the K8s deployment!