System Architecture
Understanding the FIG ecosystem's distributed architecture and data flow
Requests flow from the Browser through mitmproxy and the WebSocket Bridge to the Processing Backend, which integrates with Yellow Network and IPFS Storage:

- Browser (User Interface)
- mitmproxy (Traffic Interception)
- WebSocket Bridge (Communication Layer)
- Processing Backend (Request Processing)
- Yellow Network (Blockchain Integration)
- IPFS Storage (Decentralized Storage)
Core Components
Detailed breakdown of each component in the FIG ecosystem, their responsibilities, and technical specifications
Proxy Layer (app/)
Python + mitmproxy. Intercepts and processes HTTP traffic from browsers.
Key Features
- HTTP/HTTPS traffic interception
- Request serialization to JSON format
- Concurrent processing (up to 50 requests)
- WebSocket communication with backend
- Response processing and forwarding
- IPFS integration for large payloads
Key Files
- app.py - Main proxy logic and request handling
- ws-code.py - WebSocket bridge server
- ipfs_utils.py - IPFS storage utilities
- requirements.txt - Python dependencies
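The interception hook in app.py can be sketched as follows. This is an illustrative model, not the actual implementation: the `FigInterceptor` class name and the `send` callback are assumptions, but the `request()` hook and the `flow.request` fields mirror mitmproxy's addon interface.

```python
import base64
import json

class FigInterceptor:
    """Sketch of a mitmproxy-style addon: serialize each request to JSON."""

    def __init__(self, send):
        # `send` forwards a JSON string to the WebSocket bridge (illustrative).
        self.send = send

    def request(self, flow):
        # mitmproxy calls request() for every intercepted HTTP request.
        req = flow.request
        payload = {
            "method": req.method,
            "url": req.url,
            "headers": list(req.headers.items()),
            # Binary bodies are base64-encoded so they survive JSON transport.
            "body": base64.b64encode(req.content).decode("ascii")
                    if req.content else None,
        }
        self.send(json.dumps(payload))
```

In a real deployment this class would be registered in mitmproxy's `addons` list; here it only illustrates the request-to-JSON step.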
Processing Backend (miner/)
TypeScript + Node.js. Handles request processing and blockchain integration.
Key Features
- ERC-7824 Nitrolite authentication
- HTTP request execution via curl
- High-concurrency processing (up to 100 requests)
- Connection pooling optimization
- App session management
- Blockchain state updates
Key Files
- listener.ts - Main service entry point
- utils/auth.ts - Authentication utilities
- utils/session.ts - Session management
- utils/websocket.ts - WebSocket service
WebSocket Bridge
Python WebSocket server. Facilitates real-time communication between components.
Key Features
- Real-time bidirectional communication
- Message broadcasting to multiple clients
- Large message support (up to 100MB)
- IPFS integration for large payloads
- Client connection management
- Error handling and recovery
Key Files
- ws-code.py - WebSocket server implementation, covering connection management and broadcasting, IPFS payload handling, and message serialization/deserialization
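The broadcasting behavior of the bridge can be modeled in a few lines. This is a simplified sketch, assuming each client exposes an async `send()` method; the real ws-code.py also handles IPFS offload and reconnection.

```python
import asyncio

class Bridge:
    """Minimal model of the bridge: track clients, broadcast to all of them."""

    MAX_MESSAGE_BYTES = 100 * 1024 * 1024  # 100MB message cap from the spec

    def __init__(self):
        self.clients = set()

    def register(self, client):
        self.clients.add(client)

    def unregister(self, client):
        self.clients.discard(client)

    async def broadcast(self, message: str):
        if len(message.encode()) > self.MAX_MESSAGE_BYTES:
            raise ValueError("message exceeds 100MB limit")
        clients = list(self.clients)
        # Send concurrently; drop clients whose send fails (error recovery).
        results = await asyncio.gather(
            *(c.send(message) for c in clients), return_exceptions=True
        )
        for client, result in zip(clients, results):
            if isinstance(result, Exception):
                self.unregister(client)
```

The `gather(..., return_exceptions=True)` pattern keeps one failed client from aborting delivery to the others, which matches the error-handling goal listed above.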
IPFS Storage Layer
Lighthouse Protocol + IPFS. Decentralized storage for large content and data.
Key Features
- Automatic large payload detection (>512KB)
- Lighthouse API integration
- Content addressing via CIDs
- Distributed storage network
- Automatic content retrieval
- Bandwidth optimization
Key Files
- ipfs_utils.py - IPFS utilities, covering Lighthouse API integration, CID generation and resolution, and content upload/download handling
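The offload decision in ipfs_utils.py can be sketched like this. The upload function below is a stand-in for the Lighthouse API call (an assumption, not the real client); real CIDs come from IPFS content addressing, which the sha256-based stub only mimics.

```python
import hashlib

IPFS_THRESHOLD = 512 * 1024  # 512KB: payloads above this go to IPFS

def fake_upload_to_lighthouse(data: bytes) -> str:
    # Stand-in for the Lighthouse upload call; real CIDs are derived from a
    # multihash of the content, so we mimic content addressing with sha256.
    return "bafy-" + hashlib.sha256(data).hexdigest()[:16]

def prepare_payload(data: bytes) -> dict:
    """Return either the inline payload or a CID reference to IPFS."""
    if len(data) > IPFS_THRESHOLD:
        cid = fake_upload_to_lighthouse(data)
        return {"type": "ipfs", "cid": cid}   # only the CID travels
    return {"type": "inline", "data": data}
```

Because the CID is a function of the content, uploading the same payload twice yields the same identifier, which is what enables the deduplication noted above.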
Data Flow Process
Step-by-step breakdown of how data flows through the FIG ecosystem from browser request to response delivery
Traffic Interception
mitmproxy intercepts HTTP requests from the browser
Process Details:
- Browser sends HTTP/HTTPS request
- mitmproxy captures the request
- Request is parsed and validated
- Headers, body, and metadata are extracted
- Request is queued for processing
Request Serialization
Request data is serialized into structured JSON format
Process Details:
- Request converted to JSON format
- Headers array structure created
- Body data encoded (base64 for binary)
- Query parameters extracted
- Metadata added (timestamp, session ID)
WebSocket Forwarding
Request is sent via WebSocket to the backend service
Process Details:
- WebSocket connection established
- JSON payload sent to backend
- Message queued in processing pipeline
- Concurrent request handling (up to 100)
- Real-time status updates
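The concurrency cap described above can be modeled with a semaphore. The real pipeline lives in listener.ts (TypeScript), so this Python version is only an illustrative sketch of the queuing discipline, not the backend code.

```python
import asyncio

MAX_CONCURRENT = 100  # backend concurrency cap from the spec

async def process_all(requests, handler, limit=MAX_CONCURRENT):
    """Run handler over all requests, never more than `limit` at once."""
    sem = asyncio.Semaphore(limit)
    peak = 0     # track the highest observed concurrency, for demonstration
    active = 0

    async def guarded(req):
        nonlocal active, peak
        async with sem:
            active += 1
            peak = max(peak, active)
            try:
                return await handler(req)
            finally:
                active -= 1

    results = await asyncio.gather(*(guarded(r) for r in requests))
    return results, peak
```

Requests beyond the limit simply wait on the semaphore, which is the "queued in processing pipeline" behavior listed above.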
Request Processing
Backend processes the request using curl for actual HTTP execution
Process Details:
- ERC-7824 Nitrolite authentication
- curl command constructed from request data
- HTTP request executed to target server
- Response captured and processed
- Blockchain session management updated
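Constructing the curl invocation from a serialized request might look like the sketch below. The flag choices are assumptions for illustration, not the exact logic in listener.ts; only the idea of translating the JSON request into an argument vector is taken from the text.

```python
def build_curl_args(req: dict) -> list:
    """Translate a serialized request into a curl argument vector."""
    args = ["curl", "--silent", "--show-error",
            "--request", req["method"], req["url"]]
    for name, value in req.get("headers", []):
        args += ["--header", f"{name}: {value}"]
    if req.get("body") is not None:
        # --data-binary preserves the body byte-for-byte.
        args += ["--data-binary", req["body"]]
    return args
```

The resulting list would be passed to a subprocess runner rather than a shell, which avoids quoting and injection issues.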
IPFS Storage (if needed)
Large payloads are automatically stored in IPFS
Process Details:
- Payload size checked (>512KB threshold)
- Large data uploaded to IPFS via Lighthouse
- Content ID (CID) generated
- Only CID transmitted instead of full data
- Bandwidth usage optimized
Response Delivery
Modified response is sent back through the WebSocket chain
Process Details:
- Response serialized to JSON format
- IPFS CIDs resolved if present
- WebSocket message sent to proxy
- mitmproxy reconstructs HTTP response
- Response delivered to browser
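Reconstruction on the proxy side is essentially the inverse of request serialization. The field names below are illustrative assumptions consistent with the serialization steps described earlier, not the actual wire format.

```python
import base64
import json

def deserialize_response(message: str) -> tuple:
    """Decode a JSON response message back into (status, headers, body)."""
    data = json.loads(message)
    body = base64.b64decode(data["body"]) if data.get("body") else b""
    return data["status"], dict(data.get("headers", [])), body
```

mitmproxy would then build an HTTP response from these three pieces and hand it back to the browser.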
Security Architecture
Multi-layered security approach ensuring complete privacy and data protection in the FIG ecosystem
End-to-End Encryption
All data in transit is encrypted using hybrid public-key cryptography
- RSA-OAEP with 2048-bit keys for asymmetric encryption
- AES-GCM for symmetric encryption of large data
- Hybrid encryption approach for optimal performance
- Client-side key generation and management
- No server-side key exposure or storage
TEE-Based Processing
Mining nodes run in Trusted Execution Environment hardware
- Hardware-level security guarantees
- Zero-knowledge processing capabilities
- Tamper-proof execution environment
- Prevents data access even by miners
- Privacy by design architecture
Blockchain Authentication
Uses ERC-7824 Nitrolite protocol for secure authentication
- ECDSA signing for all blockchain interactions
- Decentralized identity management
- Session-based authentication system
- Private key security in environment variables
- Cryptographic proof of authenticity
Zero-Knowledge Browsing
No third party can observe the text or content of your browsing
- Complete data protection from source to destination
- No centralized data collection or storage
- Anonymous browsing capabilities
- No tracking or analytics collection
- Privacy-first design principles
Decentralized Storage
IPFS-based storage with content addressing
- Content addressing via CIDs (Content Identifiers)
- Distributed storage across multiple nodes
- No single point of failure
- Automatic content deduplication
- Censorship-resistant storage
State Channel Security
Yellow Network state channels ensure secure user interactions
- Off-chain transaction processing
- Reduced transaction costs and complexity
- Real-time secure messaging
- Scalable architecture for high volume
- Efficient user interaction model
Security Comparison
Traditional VPNs
- ISP sees VPN usage
- VPN company sees all data
- Centralized infrastructure
- Single point of failure
Our Solution
Complete anonymity with no third-party visibility
Tor Network
- Entry and exit nodes can monitor traffic
- Slow performance
- Limited scalability
- Known vulnerabilities
Our Solution
TEE-based processing prevents any monitoring
Traditional Browsers
- All data flows through centralized servers
- User tracking and analytics
- Data collection for ads
- Privacy violations
Our Solution
Decentralized, encrypted, privacy-preserving architecture
Security Benefits
Privacy Protection
- Complete data encryption end-to-end
- Zero-knowledge browsing capabilities
- No tracking or data collection
- Anonymous browsing experience
Technical Security
- TEE-based hardware security
- Blockchain-based authentication
- Decentralized architecture
- No single point of failure
Performance Specifications
Detailed performance metrics, optimization features, and system capabilities of the FIG ecosystem
Concurrency
- Backend Processing: maximum concurrent request processing capacity (up to 100 requests)
- Proxy Layer: concurrent requests handled by the proxy layer (up to 50 requests)
- WebSocket Connections: maximum WebSocket client connections

Storage & Data
- IPFS Threshold: payloads larger than 512KB are stored in IPFS automatically
- Max Message Size: maximum supported WebSocket message size (up to 100MB)
- Connection Pool: HTTP connection pool size used for optimization

Network Performance
- Request Latency: average request processing time
- Throughput: maximum requests per minute
- Bandwidth Optimization: bandwidth savings with IPFS integration (up to 90%)

System Resources
- Memory Usage: typical memory consumption per instance
- CPU Usage: CPU utilization under normal load
- Startup Time: time to initialize all services
Performance Optimizations
Connection Pooling
Reuses HTTP connections for better performance and reduced latency
- Reduced connection overhead
- Faster request processing
- Lower resource usage
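A connection pool of the kind described can be sketched with a bounded queue. This is a generic model of the pattern, not the actual backend code; `factory` stands in for whatever opens a real HTTP connection.

```python
import queue

class ConnectionPool:
    """Reuse up to `size` connections instead of opening one per request."""

    def __init__(self, factory, size=10):
        self.factory = factory              # opens a new connection on demand
        self.idle = queue.Queue(maxsize=size)
        self.created = 0
        self.size = size

    def acquire(self):
        try:
            return self.idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self.created < self.size:
                self.created += 1
                return self.factory()       # pool not full: open a new one
            return self.idle.get()          # pool full: wait for a release

    def release(self, conn):
        self.idle.put(conn)
```

Acquire-then-release cycles hit the idle queue instead of the factory, which is where the reduced connection overhead comes from.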
Intelligent Queuing
Smart request queuing when at capacity to prevent system overload
- Prevents system crashes
- Maintains performance under load
- Graceful degradation
IPFS Integration
Automatic large payload storage in IPFS to reduce bandwidth usage
- 90% bandwidth reduction
- Faster transfers
- Decentralized storage
Binary Data Handling
Efficient base64 encoding/decoding for binary content processing
- Optimized memory usage
- Faster binary processing
- Reduced CPU overhead
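The base64 round-trip for binary bodies is a standard-library one-liner in each direction; the point of the encoding is to make arbitrary bytes safe to embed in JSON strings.

```python
import base64

def encode_body(raw: bytes) -> str:
    # base64 maps arbitrary bytes onto a JSON-safe ASCII alphabet.
    return base64.b64encode(raw).decode("ascii")

def decode_body(encoded: str) -> bytes:
    return base64.b64decode(encoded)
```

Note that base64 inflates payloads by roughly a third, which is part of why large bodies are offloaded to IPFS rather than embedded in messages.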
System Requirements
Minimum Requirements
- Python 3.8+ with mitmproxy
- Node.js 18+ with TypeScript
- 2GB RAM minimum
- 1GB available disk space
Recommended Setup
- Python 3.12+ with latest mitmproxy
- Node.js 20+ with latest TypeScript
- 8GB RAM for optimal performance
- SSD storage for faster I/O