media-analyzer/CLAUDE.md
buenosairesam 05224c712b init layout
2025-08-15 16:17:59 -03:00


# Real-Time Video Analysis Platform

## Project Overview

A scalable real-time video streaming and AI analysis platform that demonstrates modern cloud-native architecture and machine learning integration. The system ingests RTMP video streams (from sources such as OBS), processes them with computer vision AI models, and serves live analysis results through a responsive web dashboard.

## Core Functionality

- **Video Ingestion**: Accept RTMP streams and convert them to HLS for web playback
- **AI Processing**: Real-time object detection (YOLO) and scene analysis (CLIP) on video segments
- **Live Dashboard**: Angular frontend with WebSocket-powered real-time analysis overlays
- **Scalable Architecture**: Kubernetes-deployed microservices with configurable processing modes
- **Cloud Integration**: GCP services integration while maintaining a platform-agnostic design

## Technical Stack

- **Backend**: Django + Django Channels, PostgreSQL, Redis, Celery
- **AI/ML**: OpenCV, YOLO, CLIP, Whisper (Hugging Face Transformers)
- **Frontend**: Angular 17+ with the HLS.js video player and Canvas overlays
- **Infrastructure**: Docker containers, Kubernetes, NGINX
- **Cloud**: Google Cloud Platform integration (Storage, Vision API, Build, Logging)
- **Streaming**: FFmpeg for RTMP→HLS conversion, WebSocket for real-time data

## Key Features

- Dual processing modes (real-time vs. batch) with runtime switching
- Live video analysis overlays (bounding boxes, object labels, confidence scores)
- OBS Studio integration for testing with various video sources
- Kubernetes auto-scaling based on processing queue depth
- Performance monitoring and benchmarking for 1080p30 video streams
- Platform-agnostic design with cloud-specific optimizations

## Architecture Goals

- Demonstrate event-driven microservices architecture
- Showcase AI model deployment and inference at scale
- Implement real-time data streaming and WebSocket communication
- Show proficiency with modern web frameworks and container orchestration
- Prove understanding of video processing pipelines and streaming protocols

This project serves as a technical showcase for backend development, AI/ML integration, cloud platform expertise, and modern web application architecture, directly addressing the requirements of a streaming-media and AI-focused development role.

## Master Implementation Checklist

### Core Infrastructure Setup

- [ ] **Django project structure** - Create apps: streaming, ai_processing, api, dashboard
- [ ] **Database models** - Stream, MediaSegment, VideoAnalysis, ObjectDetection tables
- [ ] **Docker containers** - Django app, PostgreSQL, Redis, Nginx
- [ ] **Basic Kubernetes manifests** - Deployments, services, configmaps, PVCs
- [ ] **RTMP ingestion endpoint** - Accept OBS streams, trigger FFmpeg HLS conversion
- [ ] **HLS segment monitoring** - File watcher to detect new video segments
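The RTMP→HLS conversion step above reduces to a single FFmpeg invocation. A minimal sketch of the command builder follows; the codec passthrough, segment duration, and rolling-playlist flags are illustrative defaults, not tuned production values:

```python
import shlex

def hls_command(rtmp_url: str, out_dir: str, segment_seconds: int = 4) -> list[str]:
    """Build the ffmpeg argv for RTMP -> HLS conversion (sketch)."""
    return [
        "ffmpeg",
        "-i", rtmp_url,                  # e.g. rtmp://localhost/live/streamkey
        "-c:v", "copy", "-c:a", "copy",  # pass through; re-encode only if needed
        "-f", "hls",
        "-hls_time", str(segment_seconds),  # target segment duration
        "-hls_list_size", "6",              # keep a rolling window of segments
        "-hls_flags", "delete_segments",    # drop stale segments from disk
        f"{out_dir}/playlist.m3u8",
    ]

print(shlex.join(hls_command("rtmp://localhost/live/test", "/media/segments")))
```

Building the argv as a list (rather than a shell string) keeps the Django view that triggers conversion safe to pass straight to `subprocess.Popen`.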

### AI Processing Pipeline

- [ ] **Video analysis models setup** - CLIP for scene understanding, YOLO for object detection
- [ ] **Frame extraction service** - Extract keyframes from HLS segments
- [ ] **Real-time vs. batch processing abstraction** - Strategy pattern implementation
- [ ] **AI processing worker** - Celery tasks for video analysis
- [ ] **Results storage** - Store detection results with timestamps and confidence scores
- [ ] **Processing queue management** - Handle backlog and prioritization
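The strategy-pattern item above can be sketched as a small interface with one implementation per mode; the class and method names here are illustrative, and the `submit` bodies are stubs standing in for real inference or Celery enqueueing:

```python
from abc import ABC, abstractmethod

class ProcessingStrategy(ABC):
    """Interface for the real-time vs. batch processing abstraction."""

    @abstractmethod
    def submit(self, segment_path: str) -> str: ...

class RealtimeStrategy(ProcessingStrategy):
    def submit(self, segment_path: str) -> str:
        # Real pipeline: run inference immediately, or enqueue to a
        # high-priority Celery queue.
        return f"realtime:{segment_path}"

class BatchStrategy(ProcessingStrategy):
    def __init__(self) -> None:
        self.pending: list[str] = []

    def submit(self, segment_path: str) -> str:
        # Accumulate segments; a periodic Celery beat task drains the backlog.
        self.pending.append(segment_path)
        return f"batched:{len(self.pending)}"

def get_strategy(mode: str) -> ProcessingStrategy:
    # Runtime switching: `mode` would come from a config flag or the API.
    return {"realtime": RealtimeStrategy, "batch": BatchStrategy}[mode]()
```

Keeping mode selection behind one factory lets the dashboard's runtime toggle swap strategies without touching the ingestion code.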

### Real-Time Video Effects Pipeline

- [ ] **GPU-accelerated shader pipeline** - OpenGL/CUDA integration with FFmpeg
- [ ] **AI-triggered effects system** - Apply shaders based on detection results
- [ ] **Custom filter development** - Face blur, object highlighting, scene-based color grading
- [ ] **WebGL frontend effects** - Client-side shader preview and control
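The AI-triggered effects item above is essentially a mapping from detection labels to shaders. A hypothetical sketch: the labels, shader names (mirroring the planned `blur.glsl`, `highlight.glsl`, `color_grade.glsl` files), and confidence thresholds are all placeholder values:

```python
# Hypothetical detection-label -> (shader, min confidence) rules.
EFFECT_TRIGGERS = {
    "face":   ("blur", 0.5),        # privacy: blur detected faces
    "person": ("highlight", 0.6),   # draw attention to people
    "sunset": ("color_grade", 0.4), # scene-based color grading
}

def effects_for(detections: list[dict]) -> list[str]:
    """Map AI detections to the shaders to apply on upcoming frames."""
    active = []
    for d in detections:
        rule = EFFECT_TRIGGERS.get(d["label"])
        if rule and d["confidence"] >= rule[1]:
            active.append(rule[0])
    return active
```

Thresholding per effect (rather than globally) matters here: a false-positive face blur is cheap, while a spurious full-frame color grade is visually disruptive.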

### Angular Frontend

- [ ] **Angular 17+ project setup** - Standalone components, signals, new control flow
- [ ] **WebSocket service** - Real-time communication with Django Channels
- [ ] **Video player component** - HLS.js integration for stream playback
- [ ] **Analysis overlay component** - Canvas-based bounding boxes and labels
- [ ] **Stream control dashboard** - Start/stop streams, processing mode toggle
- [ ] **Real-time results panel** - Live object detection and scene analysis display

### Real-Time Communication

- [ ] **Django Channels setup** - WebSocket consumers for live data streaming
- [ ] **WebSocket message protocols** - Define message types for analysis results
- [ ] **Frontend WebSocket integration** - Angular service for real-time updates
- [ ] **Overlay synchronization** - Match analysis results with the video timeline
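The message-protocol item above benefits from a single envelope shared by the Channels consumer and the Angular service. A minimal sketch, with assumed message-type names and fields (the `sent_at` timestamp is what lets the frontend align overlays with the HLS timeline):

```python
import json
import time

# Assumed message types; the real protocol is up to the consumer design.
ANALYSIS_RESULT = "analysis.result"
STREAM_STATUS = "stream.status"

def make_message(msg_type: str, stream_id: str, payload: dict) -> str:
    """Serialize one WebSocket frame the Channels consumer would send."""
    return json.dumps({
        "type": msg_type,
        "stream_id": stream_id,
        "sent_at": time.time(),  # lets the client align results with video time
        "payload": payload,
    })

def parse_message(raw: str) -> dict:
    """Validate and decode a frame; reject unknown message types early."""
    msg = json.loads(raw)
    if msg.get("type") not in (ANALYSIS_RESULT, STREAM_STATUS):
        raise ValueError(f"unknown message type: {msg.get('type')}")
    return msg
```

Validating `type` on both ends keeps protocol drift between the Django consumer and the Angular WebSocket service loud instead of silent.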

### GCP Integration Features

- [ ] **Cloud Storage integration** - Store processed video segments and analysis results
- [ ] **Cloud Vision API comparison** - Benchmark against local models (within the free tier)
- [ ] **Cloud Logging** - Structured logging for monitoring and debugging
- [ ] **Cloud Build CI/CD** - Automated Docker builds and deployments
- [ ] **GKE-specific configurations** - Node selectors, resource requests/limits
- [ ] **Cloud Load Balancer** - External access to the application

### Video Processing & Effects

- [ ] **FFmpeg integration** - HLS generation with proper segment duration
- [ ] **Video quality optimization** - Handle 1080p streams efficiently
- [ ] **Frame rate analysis** - Detect motion and activity levels
- [ ] **Color analysis** - Dominant colors and scene brightness
- [ ] **Performance monitoring** - Track processing latency and throughput
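The performance-monitoring item above has a natural core metric: the real-time factor, i.e. processing time divided by segment duration. A minimal sketch (window size and semantics are illustrative):

```python
from collections import deque

class LatencyMonitor:
    """Track whether the AI pipeline keeps up with real time.

    A segment is processed "in real time" when processing takes less than
    the segment's wall-clock duration (e.g. under 4 s for a 4 s HLS segment).
    """
    def __init__(self, segment_seconds: float, window: int = 30) -> None:
        self.segment_seconds = segment_seconds
        self.samples: deque[float] = deque(maxlen=window)  # rolling window

    def record(self, processing_seconds: float) -> None:
        self.samples.append(processing_seconds)

    def realtime_factor(self) -> float:
        # < 1.0: keeping up; > 1.0: the processing queue will grow unboundedly.
        if not self.samples:
            return 0.0
        return (sum(self.samples) / len(self.samples)) / self.segment_seconds
```

This same number is a reasonable signal for the auto-scaling described later: a sustained factor above 1.0 means more workers are needed.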

### Kubernetes Production Setup

- [ ] **Resource management** - CPU/memory limits for AI processing
- [ ] **Auto-scaling configuration** - HPA based on processing queue length
- [ ] **Persistent storage** - Claims for video segments and the database
- [ ] **Service mesh setup** - Istio for traffic management (optional)
- [ ] **Monitoring stack** - Prometheus and Grafana integration
- [ ] **Ingress configuration** - NGINX ingress with SSL termination
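The queue-based HPA item above could look roughly like the manifest below. Scaling on Celery queue length requires an external-metrics adapter (e.g. prometheus-adapter) exposing the metric; the deployment name, metric name `celery_queue_depth`, and target value are all assumptions:

```yaml
# Sketch only: assumes an external-metrics adapter is installed.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ai-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ai-worker          # hypothetical worker Deployment
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: External
      external:
        metric:
          name: celery_queue_depth   # hypothetical exported metric
        target:
          type: AverageValue
          averageValue: "10"  # scale up above ~10 queued segments per worker
```

Scaling on queue depth rather than CPU fits this workload: GPU-bound inference can saturate the queue while CPU utilization stays misleadingly low.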

### Testing & Validation

- [ ] **OBS integration testing** - Verify RTMP stream ingestion
- [ ] **AI model accuracy testing** - Validate object detection results
- [ ] **Performance benchmarking** - Process 1080p30 streams in real time
- [ ] **WebSocket stress testing** - Multiple concurrent viewers
- [ ] **End-to-end pipeline testing** - OBS → Processing → Angular display

### Documentation & Deployment

- [ ] **API documentation** - OpenAPI/Swagger specs
- [ ] **Kubernetes deployment guide** - Step-by-step cluster setup
- [ ] **GCP setup instructions** - Service account, IAM, and resource creation
- [ ] **Architecture diagrams** - System overview and data flow
- [ ] **Performance metrics documentation** - Benchmarks and optimization notes

## Suggested Implementation Order

1. **Phase 1 (Foundation)**: Infrastructure Setup → Django + Models → Docker Setup
2. **Phase 2 (Core Features)**: RTMP Ingestion → AI Processing Pipeline → Results Storage
3. **Phase 3 (Frontend)**: Angular Setup → WebSocket Service → Video Player
4. **Phase 4 (Integration)**: Real-Time Communication → Analysis Overlays → Stream Control
5. **Phase 5 (Cloud)**: GCP Services → Kubernetes Production → Monitoring
6. **Phase 6 (Polish)**: Testing → Documentation → Performance Optimization

This order ensures each component builds on the previous ones and lets you test functionality incrementally. The GCP integration is spread throughout to demonstrate platform knowledge without being overwhelming.

Ready to start with the foundation phase?

## Media Analyzer - Complete Project Structure

```
media-analyzer/
├── README.md
├── docker-compose.yml              # Development environment
├── requirements.txt                # Python dependencies
├── manage.py                       # Django management
│
├── backend/                        # Django application root
│   ├── media_analyzer/             # Main Django project
│   │   ├── __init__.py
│   │   ├── settings/
│   │   │   ├── __init__.py
│   │   │   ├── base.py             # Base settings
│   │   │   ├── development.py      # Dev overrides
│   │   │   └── production.py       # Prod overrides
│   │   ├── urls.py                 # Main URL routing
│   │   ├── wsgi.py                 # WSGI application
│   │   └── asgi.py                 # ASGI for Django Channels
│   │
│   ├── streaming/                  # RTMP/HLS handling app
│   │   ├── __init__.py
│   │   ├── models.py               # Stream, MediaSegment models
│   │   ├── views.py                # RTMP endpoints
│   │   ├── consumers.py            # WebSocket consumers
│   │   ├── rtmp_handler.py         # RTMP server logic
│   │   ├── hls_monitor.py          # File system watcher
│   │   └── urls.py
│   │
│   ├── ai_processing/              # AI analysis app
│   │   ├── __init__.py
│   │   ├── models.py               # Analysis results models
│   │   ├── processors/
│   │   │   ├── __init__.py
│   │   │   ├── base.py             # Processing strategy interface
│   │   │   ├── realtime.py         # Real-time processor
│   │   │   ├── batch.py            # Batch processor
│   │   │   └── video_analyzer.py   # CLIP/YOLO integration
│   │   ├── tasks.py                # Celery tasks
│   │   └── apps.py                 # App configuration
│   │
│   ├── effects/                    # Real-time video effects app
│   │   ├── __init__.py
│   │   ├── models.py               # Effect presets, shader configs
│   │   ├── processors/
│   │   │   ├── __init__.py
│   │   │   ├── gpu_pipeline.py     # OpenGL/CUDA processing
│   │   │   ├── ffmpeg_filters.py   # FFmpeg custom filters
│   │   │   └── effect_engine.py    # Main effects orchestrator
│   │   ├── shaders/                # GLSL shader files
│   │   │   ├── vertex/             # Vertex shaders
│   │   │   ├── fragment/           # Fragment shaders
│   │   │   ├── blur.glsl           # Face blur shader
│   │   │   ├── highlight.glsl      # Object highlight shader
│   │   │   └── color_grade.glsl    # Scene-based color grading
│   │   ├── triggers/
│   │   │   ├── __init__.py
│   │   │   ├── ai_triggers.py      # AI detection → effect mapping
│   │   │   └── manual_triggers.py  # User-controlled effects
│   │   ├── tasks.py                # GPU processing Celery tasks
│   │   └── apps.py
│   │
│   └── api/                        # REST API app
│       ├── __init__.py
│       ├── serializers.py          # DRF serializers
│       ├── views.py                # API endpoints
│       └── urls.py
│
├── frontend/                       # Angular application
│   ├── package.json
│   ├── angular.json
│   ├── src/
│   │   ├── app/
│   │   │   ├── app.component.ts
│   │   │   ├── app.config.ts       # Angular 17+ config
│   │   │   │
│   │   │   ├── components/
│   │   │   │   ├── stream-viewer/
│   │   │   │   │   ├── stream-viewer.component.ts
│   │   │   │   │   ├── stream-viewer.component.html
│   │   │   │   │   └── stream-viewer.component.scss
│   │   │   │   ├── analysis-panel/
│   │   │   │   │   └── analysis-panel.component.ts
│   │   │   │   ├── stream-control/
│   │   │   │   │   └── stream-control.component.ts
│   │   │   │   └── effects-panel/
│   │   │   │       ├── effects-panel.component.ts
│   │   │   │       ├── effects-panel.component.html
│   │   │   │       └── shader-editor.component.ts
│   │   │   │
│   │   │   ├── services/
│   │   │   │   ├── websocket.service.ts
│   │   │   │   ├── stream.service.ts
│   │   │   │   ├── analysis.service.ts
│   │   │   │   └── effects.service.ts
│   │   │   │
│   │   │   ├── webgl/              # Client-side effects engine
│   │   │   │   ├── shader.service.ts
│   │   │   │   ├── effects-engine.ts
│   │   │   │   ├── webgl-utils.ts
│   │   │   │   └── shaders/        # WebGL shaders
│   │   │   │       ├── overlay.vert.glsl
│   │   │   │       ├── overlay.frag.glsl
│   │   │   │       ├── particle.vert.glsl
│   │   │   │       └── particle.frag.glsl
│   │   │   │
│   │   │   └── models/
│   │   │       ├── stream.interface.ts
│   │   │       ├── analysis.interface.ts
│   │   │       └── effects.interface.ts
│   │   │
│   │   └── main.ts                 # Angular bootstrap
│   └── ...
│
├── docker/                         # Docker configurations
│   ├── Dockerfile.django           # Django app container
│   ├── Dockerfile.nginx            # Nginx for HLS serving
│   └── docker-compose.override.yml
│
├── k8s/                            # Kubernetes manifests
│   ├── base/                       # Base configurations
│   │   ├── namespace.yaml
│   │   ├── django-deployment.yaml
│   │   ├── nginx-deployment.yaml
│   │   ├── postgres-statefulset.yaml
│   │   ├── redis-deployment.yaml
│   │   ├── services.yaml
│   │   └── configmaps.yaml
│   │
│   ├── overlays/                   # Environment-specific
│   │   ├── development/
│   │   │   └── kustomization.yaml
│   │   └── production/
│   │       ├── kustomization.yaml
│   │       └── gcp-specific.yaml
│   │
│   └── scripts/
│       ├── deploy.sh
│       └── setup-cluster.sh
│
├── media/                          # Media storage (mounted volume)
│   ├── segments/                   # HLS video segments
│   ├── playlists/                  # HLS playlists
│   └── uploads/                    # Uploaded content
│
├── scripts/                        # Helper scripts
│   ├── setup-dev.sh                # Development environment setup
│   ├── start-rtmp-server.sh        # RTMP server startup
│   └── load-ai-models.py           # Download and setup AI models
│
└── docs/                           # Documentation
    ├── api.md                      # API documentation
    ├── deployment.md               # Deployment guide
    └── architecture.md             # System architecture
```

## Vertical Separation Instead of Phases

### Slice 1: Basic Stream Ingestion

- RTMP endpoint (basic)
- HLS conversion (minimal)
- Stream model + API
- Simple Angular video player
- Basic K8s deployment

### Slice 2: AI Processing Foundation

- Video segment detection
- Single AI model integration (YOLO)
- Processing strategy abstraction
- Results storage + API
- Analysis display component

### Slice 3: Real-Time Communication

- WebSocket setup
- Live analysis streaming
- Frontend overlay system
- Processing queue monitoring

### Slice 4: GCP Integration

- Cloud Storage for segments
- GKE deployment
- Monitoring integration