Clustering Overview
ApiCharge supports horizontal scaling through clustered deployment, allowing you to distribute load across multiple instances for improved performance, redundancy, and high availability.
Key Benefits of Clustering
- Horizontal Scalability - Add instances to handle higher loads without vertical scaling
- High Availability - Maintain service during instance failures or maintenance
- Geographic Distribution - Deploy instances closer to users for reduced latency
- Load Distribution - Spread traffic across multiple instances
Clustering Architecture
ApiCharge's clustering architecture has several key components:
- Load Balancer - Distributes incoming client requests across instances
- ApiCharge Instances - Multiple identical ApiCharge deployments
- Redis Cache - Shared cache for distributed state (rate limiters, sessions)
- Stellar RPC Servers - For blockchain interactions (can be shared or per-instance)
- Backend Services - Your API services that ApiCharge proxies to
+------------------+
| |
| Load Balancer |
| |
+--------+---------+
|
+------------+------------+
| | |
+---------v----+ +------v------+ +---v----------+
| | | | | |
| ApiCharge #1 | | ApiCharge #2| | ApiCharge #3 |
| | | | | |
+-----+--------+ +------+------+ +--------+-----+
| | |
| | |
+-----v-----------------v-----------------v-----+
| |
| Redis Cache |
| |
+---------------+---------------+---------------+
| |
+------------v----+ +-----v---------+
| | | |
| Stellar RPC | | Your Backend |
| | | Services |
+-----------------+ +---------------+
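As a minimal sketch of how these pieces fit together, a docker-compose file for a three-instance cluster might look like the following. The apicharge image name, the Redis__ConnectionString environment override, and all ports are assumptions for illustration; Stellar RPC servers and your backend services would be added in the same way:

# docker-compose.yml - illustrative sketch only
version: "3.8"

services:
  loadbalancer:
    image: nginx:alpine
    ports:
      - "443:443"
    volumes:
      # Reuse the NGINX configuration shown later on this page
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on: [apicharge1, apicharge2, apicharge3]

  apicharge1: &apicharge
    image: apicharge:latest            # assumed image name
    environment:
      # Double underscore maps to the Redis:ConnectionString setting
      # in appsettings.json (standard .NET configuration convention)
      Redis__ConnectionString: "redis:6379,abortConnect=false"
      Redis__InstanceName: "ApiCharge-"

  apicharge2: *apicharge
  apicharge3: *apicharge

  redis:
    image: redis:7
    command: redis-server --appendonly yes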
Redis Configuration
Redis is a critical component for clustered ApiCharge deployments, providing shared state across instances. Here's how to configure Redis integration:
Redis Connection Configuration
Add the Redis connection settings to your appsettings.json:
{
"Redis": {
"ConnectionString": "redis-server:6379,abortConnect=false,connectTimeout=5000",
"InstanceName": "ApiCharge-"
}
}
Key Redis configuration options:
Setting | Description |
---|---|
ConnectionString | The Redis server connection string in hostname:port format. Additional StackExchange.Redis connection options can be appended as comma-separated values. |
InstanceName | A prefix for Redis keys to avoid conflicts with other applications using the same Redis instance. |
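For example, a connection string that layers a few common StackExchange.Redis options onto the basic host:port (the password value is a placeholder):

{
  "Redis": {
    "ConnectionString": "redis-server:6379,password=<your-password>,ssl=true,abortConnect=false,connectTimeout=5000",
    "InstanceName": "ApiCharge-"
  }
}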
Redis Persistence for Rate Limiters
Enable Redis-based rate limiter persistence to ensure consistent rate limiting across the cluster:
{
"RateLimiterPersistence": {
"UseRedis": true,
"PersistenceIntervalSeconds": 30,
"RedisExpiryMultiplier": 2
}
}
Setting | Description |
---|---|
UseRedis | Set to true to use Redis for rate limiter state persistence. |
PersistenceIntervalSeconds | How often (in seconds) rate limiter state is synchronized with Redis. |
RedisExpiryMultiplier | Multiplier applied to the ApiCharge Subscription duration to determine the Redis key expiry time. For example, with a multiplier of 2, state for a 1-hour Subscription expires from Redis after 2 hours. |
Redis High Availability Options
For production deployments, consider these Redis high-availability configurations:
- Redis Sentinel

{
  "Redis": {
    "ConnectionString": "sentinel-1:26379,sentinel-2:26379,sentinel-3:26379,serviceName=mymaster,abortConnect=false",
    "InstanceName": "ApiCharge-"
  }
}

- Redis Cluster (note that serviceName is a Sentinel-only option and is omitted here)

{
  "Redis": {
    "ConnectionString": "redis-1:6379,redis-2:6379,redis-3:6379,redis-4:6379,redis-5:6379,redis-6:6379,abortConnect=false",
    "InstanceName": "ApiCharge-"
  }
}
Blockchain Connection Architectures
When deploying ApiCharge in a clustered environment, you need to choose how your instances will connect to the Stellar blockchain. Each approach offers different benefits in terms of performance, reliability, and operational simplicity.
Single Connection Architecture
In this simple model, all ApiCharge instances connect to a single blockchain service.
+----------------+ +----------------+ +----------------+
| | | | | |
| ApiCharge #1 | | ApiCharge #2 | | ApiCharge #3 |
| | | | | |
+--------+-------+ +-------+--------+ +--------+-------+
         |                  |                    |
         +------------------+--------------------+
                            |
                            v
                       +-----------+
                       |           |
                       |  Stellar  |
                       |  Network  |
                       |           |
                       +-----------+
Benefits:
- Simplest configuration with minimal setup
- Good for small to medium deployments
- Consistent blockchain state across all instances
- Efficient resource utilization
- Minimal operational overhead
Best For:
- Development and testing environments
- Small production deployments (1-5 instances)
- Teams with limited operational resources
- Standard transaction volumes
Configuration:
This is the default configuration. Simply set the same StellarEndpointRPC value for all instances.
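For instance, every node's appsettings.json carries the same value (the URL below, and placing the key at the top level, are illustrative assumptions):

{
  "StellarEndpointRPC": "https://stellar-rpc.example.com"
}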
Dedicated Blockchain Connection
For larger deployments, a dedicated blockchain connection service provides better performance and reliability.
+----------------+ +----------------+ +----------------+
| | | | | |
| ApiCharge #1 | | ApiCharge #2 | | ApiCharge #3 |
| | | | | |
+--------+-------+ +-------+--------+ +--------+-------+
| | |
| | |
| v |
| +------------------------+ |
+---->| |<------+
| Blockchain Connection |
| Service |
| |
+----------+-------------+
|
+-----v-----+
| |
| Stellar |
| Network |
| |
+-----------+
Benefits:
- Better performance for high-volume deployments
- Resource optimization - payment processing separated from API handling
- Consistent transaction state across all ApiCharge instances
- Independent scaling of blockchain components
- Better reliability and fault tolerance
Best For:
- Medium to large deployments (6+ instances)
- High transaction volume environments
- Production deployments with strict performance requirements
- Environments where blockchain availability is critical
Regional Connection Model
For global deployments, a regional model provides lower latency and better disaster recovery.
US Region EU Region
+----------------+ +----------------+
| | | |
| ApiCharge US | | ApiCharge EU |
| Instances | | Instances |
+--------+-------+ +-------+--------+
| |
v v
+----------------+ +----------------+
| | | |
| US Blockchain | | EU Blockchain |
| Connection | | Connection |
+--------+-------+ +-------+--------+
| |
| |
+-----------------+-----------------+
|
+------v------+
| |
| Stellar |
| Network |
| |
+-------------+
Benefits:
- Lower latency for geographically distributed users
- Better regional availability during network issues
- Improved disaster recovery capabilities
- Tailored capacity for regional user loads
- Compliance with regional data requirements
Best For:
- Global deployments with users across multiple regions
- Enterprise environments requiring geo-redundancy
- Applications with strict latency requirements
- Deployments needing regional disaster recovery
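Configuration-wise, the regional model uses the same mechanism as the single-connection model, just with per-region values: US instances point StellarEndpointRPC at the US connection service, EU instances at the EU one (hostnames are illustrative):

US instances:
{
  "StellarEndpointRPC": "https://us-rpc.internal.example.com"
}

EU instances:
{
  "StellarEndpointRPC": "https://eu-rpc.internal.example.com"
}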
Blockchain Integration Performance Guidelines
For optimal blockchain connection performance in any architecture:
- Use high-performance infrastructure - SSD storage and modern CPUs provide better transaction processing
- Ensure sufficient resources - Allocate adequate memory and storage based on expected transaction volume
- Monitor performance metrics - Track response times and success rates to identify bottlenecks
- Implement redundancy - Deploy multiple connection points for critical production environments
- Consider managed services - For production, managed blockchain connection services reduce operational overhead
Load Balancer Configuration
The load balancer is a critical component in clustered deployments. It should be configured to support:
- Session Affinity (Sticky Sessions) - Important for WebSocket connections
- Health Checks - For detecting and removing unhealthy instances
- SSL Termination - Handling HTTPS connections
- Connection Draining - For graceful instance replacement
Example NGINX Load Balancer Configuration
upstream apicharge_backend {
    # Use ip_hash for session affinity (sticky sessions)
    ip_hash;

    server apicharge1.example.com:80 max_fails=3 fail_timeout=30s;
    server apicharge2.example.com:80 max_fails=3 fail_timeout=30s;
    server apicharge3.example.com:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    # Other SSL settings...

    location / {
        proxy_pass http://apicharge_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Long read timeout for WebSocket support
        proxy_read_timeout 86400;

        # Active health checks require NGINX Plus; open-source NGINX
        # relies on the passive max_fails/fail_timeout checks above
        health_check interval=10 fails=3 passes=2;
    }
}
HAProxy Configuration Example
For production environments, HAProxy provides excellent performance and flexibility:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    # Keep upgraded (WebSocket) connections open longer than plain HTTP
    timeout tunnel 86400s

frontend apicharge_frontend
    bind *:443 ssl crt /etc/ssl/certs/apicharge.pem
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend apicharge_backend

backend apicharge_backend
    mode http
    balance source
    option httpchk GET /health
    http-check expect status 200

    # WebSocket support
    option http-server-close
    option http-pretend-keepalive

    # Cookie-based stickiness; balance source acts as an IP-hash fallback
    cookie SERVERID insert indirect nocache
    server apicharge1 apicharge1.internal:80 cookie s1 check inter 5s
    server apicharge2 apicharge2.internal:80 cookie s2 check inter 5s
    server apicharge3 apicharge3.internal:80 cookie s3 check inter 5s
Load Balancer Health Checks
Configure health checks to monitor the status of ApiCharge instances:
- Basic Health Check - Use the /health endpoint for simple availability checks
- Deep Health Check - Use /health/deep to verify database and RPC connectivity
- Custom Checks - Implement custom health checks for specific requirements
Proper health check configuration ensures that traffic is only routed to healthy instances, improving overall reliability.
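A quick way to exercise these endpoints by hand, for example when tuning load balancer probe settings (the hostname is illustrative):

# -f makes curl exit non-zero on HTTP errors, mirroring a failed probe
curl -f http://apicharge1.internal/health

# Deep check: also verifies database and RPC connectivity
curl -f http://apicharge1.internal/health/deep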
Rate Limiting in Clustered Environments
Different rate limiting strategies have specific requirements in clustered deployments:
Rate Limiter Type | Clustering Requirements |
---|---|
Call Count | Requires shared state across instances to maintain accurate counts |
Data Limits | Requires shared state across instances to track total data transfer |
Stream Rate | Requires session affinity (sticky sessions) for consistent streaming experience |
Credits | Requires shared state across instances to maintain accurate credit balance |
Redis Cache Configuration for Different Scales
The Redis configuration needs to scale with your deployment size. Here are recommendations for different deployment scales:
Scale | ApiCharge Instances | Redis Configuration | Hardware Recommendations |
---|---|---|---|
Small | 2-3 | Single Redis instance | 2 vCPUs, 4GB RAM, SSD |
Medium | 4-10 | Redis Sentinel (1 master, 2 replicas) | 4 vCPUs, 8GB RAM, SSD |
Large | 11-30 | Redis Cluster (3+ shards) | 8+ vCPUs, 16+ GB RAM, SSD |
Enterprise | 31+ | Multi-region Redis Cluster | Custom sizing based on load testing |
Important Redis configuration parameters for production:
# Example redis.conf for medium deployment
maxmemory 4gb
maxmemory-policy allkeys-lru
appendonly yes
appendfsync everysec
save 900 1
save 300 10
save 60 10000
tcp-keepalive 60
timeout 300
tcp-backlog 511
databases 16
State Sharing Requirements
ApiCharge requires shared state and connection persistence for proper functionality in a clustered environment:
- Authentication State - Token information must be accessible by any instance
- Rate Limiter Tracking - Usage data must be consistent across all instances
- WebSocket Connections - Require routing to the same instance throughout the session
- Server-Sent Events - Require consistent routing for uninterrupted streaming
Configure state sharing in your deployment by enabling Redis integration in appsettings.json.
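Putting the earlier snippets together, a clustered instance's appsettings.json enables both shared caching and rate limiter persistence:

{
  "Redis": {
    "ConnectionString": "redis-server:6379,abortConnect=false,connectTimeout=5000",
    "InstanceName": "ApiCharge-"
  },
  "RateLimiterPersistence": {
    "UseRedis": true,
    "PersistenceIntervalSeconds": 30,
    "RedisExpiryMultiplier": 2
  }
}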
Load Balancer Session Management
For streaming protocols, your load balancer must support "sticky sessions" that route clients consistently to the same instance:
Protocol | Load Balancer Requirement |
---|---|
WebSockets | Sticky sessions + WebSocket protocol support |
Server-Sent Events (SSE) | Sticky sessions + long connection support |
Standard HTTP streaming | Sticky sessions for optimal performance |
gRPC streaming | Sticky sessions + HTTP/2 support |
Scaling Considerations
When scaling ApiCharge in clustered environments, consider these factors:
Vertical Scaling
- Each ApiCharge instance benefits from increased CPU cores for request processing
- Memory requirements scale with the number of concurrent connections and rate limiter state
- SSD storage improves Redis and RPC server performance
Horizontal Scaling
- ApiCharge instances can be added with virtually no upper limit
- Redis may become a bottleneck under extremely high loads
- Consider Redis Cluster for very large deployments
- RPC servers may need independent scaling for high transaction volumes
Regional Deployment
For multi-region deployments, consider these approaches:
- Separate Clusters - Independent ApiCharge clusters per region, with region-local Redis and RPC servers
- Global Redis - Region-local ApiCharge instances with a global Redis cluster (higher latency but consistent state)
- Hybrid Approach - Regional ApiCharge and Redis instances with a global RPC server cluster
Network Topology Recommendations
Optimize your network topology for performance and security:
- Public Subnet - Load balancers and public-facing components
- Private Subnet - ApiCharge instances and application servers
- Database Subnet - Redis and other persistent storage
- Blockchain Subnet - RPC servers with controlled internet access
   +--------------------+
   |   Public Subnet    |
   |                    |
   |  +-------------+   |
   |  |Load Balancer|   |
   |  +------+------+   |
   +---------|----------+
             |
   +---------|----------+
   |  Private Subnet    |
   |         |          |
   |  +------v------+   |
   |  |  ApiCharge  |   |
   |  |  Instances  |   |
   |  +------+------+   |
   +---------|----------+
             |
      +------+-------------+
      |                    |
+-----v------+    +--------v--------+
| DB Subnet  |    | Blockchain Net  |
|            |    |                 |
|  +------+  |    |  +-----------+  |
|  |Redis |  |    |  |RPC Server |  |
|  +------+  |    |  +-----------+  |
+------------+    +-----------------+
Monitoring Clustered Deployments
Effective monitoring is essential for clustered ApiCharge deployments:
- Health Checks - Monitor the health endpoint at /health
- Redis Metrics - Track Redis memory usage, hit rate, and connection count
- Instance Metrics - Monitor CPU, memory, and network usage per instance
- RPC Server Metrics - Track RPC request latency and failure rates
- Rate Limiter Metrics - Monitor rate limiter hit rates and near-limit warnings
ApiCharge exposes metrics through a Prometheus-compatible endpoint at /metrics.
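A minimal Prometheus scrape configuration for that endpoint might look like this (job name, targets, and interval are illustrative):

# prometheus.yml
scrape_configs:
  - job_name: "apicharge"
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets:
          - "apicharge1.internal:80"
          - "apicharge2.internal:80"
          - "apicharge3.internal:80"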
Key Metrics to Monitor
Metric Category | Important Metrics | Warning Thresholds |
---|---|---|
System | CPU usage, Memory usage, Disk I/O | CPU > 80%, Memory > 85%, Disk latency > 10ms |
Application | Request rate, Error rate, Response time | Error rate > 1%, P95 response time > 500ms |
Redis | Memory usage, CPU usage, Evictions | Memory > 80%, Evictions > 0, Connected clients > 90% max |
RPC Server | Transaction throughput, Block height lag, Error rate | Height lag > 5 blocks, Transaction failure rate > 0.1% |
For comprehensive monitoring, consider integrating with monitoring platforms like Prometheus, Grafana, or cloud-native monitoring services.
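As one sketch of turning the thresholds above into alerts, the Prometheus rule below fires when the error rate stays over 1% for five minutes. The metric names are hypothetical placeholders; substitute whatever ApiCharge actually exports at /metrics:

# alert-rules.yml - metric names below are hypothetical placeholders
groups:
  - name: apicharge
    rules:
      - alert: ApiChargeHighErrorRate
        expr: |
          sum(rate(apicharge_requests_errors_total[5m]))
            / sum(rate(apicharge_requests_total[5m])) > 0.01
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "ApiCharge error rate above 1% for 5 minutes"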
Backup and Recovery
For production deployments, implement backup and recovery procedures:
- Regularly back up Redis data using redis-cli --rdb /path/to/dump.rdb (see the script sketch after this list)
- Backup RPC server databases and configuration
- Document recovery procedures and test them regularly
- Consider Redis replication for real-time backup
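A minimal backup script built around that redis-cli --rdb command, suitable for a cron job (paths, hostname, and retention handling are illustrative):

#!/bin/sh
# Dump the dataset from the Redis server into a timestamped RDB file
BACKUP_DIR=/var/backups/redis
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"
redis-cli -h redis-server --rdb "$BACKUP_DIR/dump-$TIMESTAMP.rdb"

# Ship the file to offsite storage here, per your retention policy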
Backup Strategy Recommendations
Design a comprehensive backup strategy for production environments:
- Regular RDB Snapshots - Configure Redis to create periodic RDB snapshots
- AOF Persistence - Enable AOF for more granular persistence between snapshots
- Offsite Backups - Move backup files to secure offsite storage
- Backup Retention - Keep hourly backups for 24 hours, daily for 30 days, monthly for 1 year
- Recovery Testing - Regularly test the recovery process to ensure backups are valid
Sample Redis backup configuration:
# Redis backup settings in redis.conf
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
dir /var/lib/redis/backup
Disaster Recovery Planning
Prepare for potential failures with a disaster recovery plan:
- Define RPO/RTO - Establish Recovery Point Objective and Recovery Time Objective
- Document Procedures - Create step-by-step recovery procedures for different scenarios
- Assign Responsibilities - Define who is responsible for each recovery task
- Regular Testing - Conduct disaster recovery drills to validate processes
- Multi-region Failover - For critical systems, consider multi-region active-active or active-passive setups
Next Steps
After setting up your clustered deployment, consider:
- Implementing authentication flows for your clients
- Setting up purchase flows for ApiCharge Subscriptions
- Reviewing troubleshooting for common clustering issues
- Exploring advanced rate limiter configurations for different client tiers