Clustering Overview

ApiCharge supports horizontal scaling through clustered deployment, allowing you to distribute load across multiple instances for improved performance, redundancy, and high availability.

Note: Clustered deployments are recommended for production environments with high traffic volumes or where high availability is required.

Key Benefits of Clustering

  1. Improved Performance - Incoming requests are distributed across multiple instances
  2. Redundancy - Remaining instances continue serving traffic if one instance fails
  3. High Availability - Supports production environments with high traffic volumes or strict uptime requirements

Clustering Architecture

ApiCharge's clustering architecture has several key components:

  1. Load Balancer - Distributes incoming client requests across instances
  2. ApiCharge Instances - Multiple identical ApiCharge deployments
  3. Redis Cache - Shared cache for distributed state (rate limiters, sessions)
  4. Stellar RPC Servers - For blockchain interactions (can be shared or per-instance)
  5. Backend Services - Your API services that ApiCharge proxies to

                           +------------------+
                           |                  |
                           |  Load Balancer   |
                           |                  |
                           +---------+--------+
                                     |
                  +------------------+------------------+
                  |                  |                  |
         +--------v-------+ +--------v-------+ +--------v-------+
         |                | |                | |                |
         |  ApiCharge #1  | |  ApiCharge #2  | |  ApiCharge #3  |
         |                | |                | |                |
         +--------+-------+ +--------+-------+ +--------+-------+
                  |                  |                  |
                  |                  |                  |
         +--------v------------------v------------------v-------+
         |                                                      |
         |                     Redis Cache                      |
         |                                                      |
         +----------------+-------------------+-----------------+
                          |                   |
                 +--------v--------+ +--------v--------+
                 |                 | |                 |
                 |   Stellar RPC   | |  Your Backend   |
                 |                 | |    Services     |
                 +-----------------+ +-----------------+

Redis Configuration

Redis is a critical component for clustered ApiCharge deployments, providing shared state across instances. Here's how to configure Redis integration:

Redis Connection Configuration

Add the Redis connection settings to your appsettings.json:

{
  "Redis": {
    "ConnectionString": "redis-server:6379,abortConnect=false,connectTimeout=5000",
    "InstanceName": "ApiCharge-"
  }
}

Key Redis configuration options:

Setting            Description
-----------------  ----------------------------------------------------------
ConnectionString   The Redis server connection string in hostname:port
                   format. Additional StackExchange.Redis connection options
                   can be appended as comma-separated values.
InstanceName       A prefix for Redis keys to avoid conflicts with other
                   applications using the same Redis instance.
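To make the format concrete, here is a rough sketch of how such a connection string decomposes into endpoints and options (illustrative only; it does not replicate every rule of the actual StackExchange.Redis parser):

```python
def parse_redis_connection_string(conn_str):
    """Split a StackExchange.Redis-style string into endpoints and options.

    Tokens containing '=' are treated as key=value options; everything
    else is taken as a host:port endpoint. This mirrors the general
    format only, not the full parser behaviour.
    """
    endpoints, options = [], {}
    for token in conn_str.split(","):
        token = token.strip()
        if "=" in token:
            key, _, value = token.partition("=")
            options[key] = value
        else:
            endpoints.append(token)
    return endpoints, options

endpoints, options = parse_redis_connection_string(
    "redis-server:6379,abortConnect=false,connectTimeout=5000"
)
# endpoints == ["redis-server:6379"]
# options == {"abortConnect": "false", "connectTimeout": "5000"}
```

The same shape applies to the Sentinel and Cluster strings shown later: multiple host:port endpoints followed by comma-separated options.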

Redis Persistence for Rate Limiters

Enable Redis-based rate limiter persistence to ensure consistent rate limiting across the cluster:

{
  "RateLimiterPersistence": {
    "UseRedis": true,
    "PersistenceIntervalSeconds": 30,
    "RedisExpiryMultiplier": 2
  }
}

Setting                     Description
--------------------------  ----------------------------------------------
UseRedis                    Set to true to use Redis for rate limiter
                            state persistence.
PersistenceIntervalSeconds  How often rate limiter state is synchronized
                            with Redis, in seconds.
RedisExpiryMultiplier       Multiplier applied to the ApiCharge
                            subscription duration to determine the Redis
                            key expiry time.

Important: Redis persistence is crucial for clustered deployments. Without it, each instance would track rate limiters independently, allowing clients to exceed their quotas by spreading requests across instances.
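To illustrate how RedisExpiryMultiplier determines key lifetime, here is a minimal sketch (the helper function is illustrative, not ApiCharge's actual code):

```python
def redis_key_ttl_seconds(subscription_duration_seconds, expiry_multiplier=2):
    """Derive the Redis expiry for a rate-limiter key.

    Keys outlive the subscription by the configured multiplier, so state
    survives brief gaps (restarts, delayed persistence) but is still
    reclaimed automatically once it can no longer matter.
    """
    return subscription_duration_seconds * expiry_multiplier

# A 1-hour subscription with the default multiplier of 2 keeps its
# rate-limiter key in Redis for 2 hours.
assert redis_key_ttl_seconds(3600) == 7200
```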

Redis High Availability Options

For production deployments, consider these Redis high-availability configurations:

  1. Redis Sentinel
    {
      "Redis": {
        "ConnectionString": "sentinel-1:26379,sentinel-2:26379,sentinel-3:26379,serviceName=mymaster,abortConnect=false",
        "InstanceName": "ApiCharge-"
      }
    }
  2. Redis Cluster
    {
      "Redis": {
        "ConnectionString": "redis-1:6379,redis-2:6379,redis-3:6379,redis-4:6379,redis-5:6379,redis-6:6379,serviceName=mymaster,abortConnect=false",
        "InstanceName": "ApiCharge-"
      }
    }

Blockchain Connection Architectures

When deploying ApiCharge in a clustered environment, you need to choose how your instances will connect to the Stellar blockchain. Each approach offers different benefits in terms of performance, reliability, and operational simplicity.

Single Connection Architecture

In this simple model, all ApiCharge instances connect to a single blockchain service.

  +----------------+  +----------------+  +----------------+
  |                |  |                |  |                |
  |  ApiCharge #1  |  |  ApiCharge #2  |  |  ApiCharge #3  |
  |                |  |                |  |                |
  +--------+-------+  +--------+-------+  +--------+-------+
           |                   |                   |
           |                   |                   |
           +-------------------+-------------------+
                               |
                          +----v-----+
                          |          |
                          | Stellar  |
                          | Network  |
                          |          |
                          +----------+

Benefits:

Best For:

Configuration:

This is the default configuration. Simply set the same StellarEndpointRPC value for all instances.

Dedicated Blockchain Connection

For larger deployments, a dedicated blockchain connection service provides better performance and reliability.

  +----------------+  +----------------+  +----------------+
  |                |  |                |  |                |
  |  ApiCharge #1  |  |  ApiCharge #2  |  |  ApiCharge #3  |
  |                |  |                |  |                |
  +--------+-------+  +--------+-------+  +--------+-------+
           |                   |                   |
           |                   v                   |
           |       +------------------------+      |
           +------>|                        |<-----+
                   |  Blockchain Connection |
                   |        Service         |
                   |                        |
                   +-----------+------------+
                               |
                         +-----v-----+
                         |           |
                         |  Stellar  |
                         |  Network  |
                         |           |
                         +-----------+

Benefits:

Best For:

Regional Connection Model

For global deployments, a regional model provides lower latency and better disaster recovery.

  US Region                             EU Region
  +----------------+                   +----------------+
  |                |                   |                |
  |  ApiCharge US  |                   |  ApiCharge EU  |
  |  Instances     |                   |  Instances     |
  +--------+-------+                   +-------+--------+
           |                                   |
           v                                   v
  +----------------+                   +----------------+
  |                |                   |                |
  | US Blockchain  |                   | EU Blockchain  |
  | Connection     |                   | Connection     |
  +--------+-------+                   +-------+--------+
           |                                   |
           |                                   |
           +-----------------+-----------------+
                             |
                      +------v------+
                      |             |
                      |   Stellar   |
                      |   Network   |
                      |             |
                      +-------------+

Benefits:

Best For:

Blockchain Integration Performance Guidelines

For optimal blockchain connection performance in any architecture:

Load Balancer Configuration

The load balancer is a critical component in clustered deployments. It should be configured to support:

Example NGINX Load Balancer Configuration

upstream apicharge_backend {
    # Use ip_hash for session affinity
    ip_hash;
    
    server apicharge1.example.com:80 max_fails=3 fail_timeout=30s;
    server apicharge2.example.com:80 max_fails=3 fail_timeout=30s;
    server apicharge3.example.com:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    
    # Other SSL settings...
    
    location / {
        proxy_pass http://apicharge_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # WebSocket support
        proxy_read_timeout 86400;
        
        # Active health checks (NGINX Plus only; open-source NGINX relies
        # on the passive max_fails/fail_timeout settings in the upstream)
        health_check interval=10 fails=3 passes=2;
    }
}
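The ip_hash directive pins each client to one upstream instance. A rough model of that behaviour in Python (the hash function and server names are illustrative; NGINX's real ip_hash algorithm differs in detail):

```python
import hashlib

SERVERS = ["apicharge1", "apicharge2", "apicharge3"]

def pick_server(client_ip, servers=SERVERS):
    """Map a client IP to a fixed upstream, as sticky load balancing does.

    A stable hash of the address means the same client always lands on
    the same instance, which streaming sessions and WebSocket
    connections rely on.
    """
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same IP always resolves to the same instance.
assert pick_server("203.0.113.7") == pick_server("203.0.113.7")
```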

HAProxy Configuration Example

For production environments, HAProxy provides excellent performance and flexibility:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend apicharge_frontend
    bind *:443 ssl crt /etc/ssl/certs/apicharge.pem
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend apicharge_backend

backend apicharge_backend
    mode http
    balance source
    option httpchk GET /health
    http-check expect status 200
    cookie SERVERID insert indirect nocache
    
    server apicharge1 apicharge1.internal:80 cookie s1 check inter 5s
    server apicharge2 apicharge2.internal:80 cookie s2 check inter 5s
    server apicharge3 apicharge3.internal:80 cookie s3 check inter 5s
    
    # WebSocket support: HAProxy tunnels upgraded connections automatically;
    # raise the tunnel timeout so long-lived streams are not cut off
    timeout tunnel 1h

Load Balancer Health Checks

Configure health checks to monitor the status of ApiCharge instances:

Proper health check configuration ensures that traffic is only routed to healthy instances, improving overall reliability.
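The max_fails/fail_timeout and fails=3 passes=2 parameters in the examples above implement hysteresis: an instance is marked down only after several consecutive failed checks and marked up again only after several consecutive successes. A sketch of that logic (the class itself is illustrative):

```python
class HealthTracker:
    """Mark an instance down after `fails` consecutive failed checks
    and up again only after `passes` consecutive successful ones."""

    def __init__(self, fails=3, passes=2):
        self.fails, self.passes = fails, passes
        self.fail_streak = self.pass_streak = 0
        self.healthy = True

    def record(self, check_ok):
        if check_ok:
            self.pass_streak += 1
            self.fail_streak = 0
            if not self.healthy and self.pass_streak >= self.passes:
                self.healthy = True
        else:
            self.fail_streak += 1
            self.pass_streak = 0
            if self.healthy and self.fail_streak >= self.fails:
                self.healthy = False
        return self.healthy

t = HealthTracker()
for ok in [False, False, False]:   # three failures: marked down
    t.record(ok)
assert not t.healthy
t.record(True)                     # one success is not enough
assert not t.healthy
t.record(True)                     # second success: marked up again
assert t.healthy
```

The hysteresis prevents a briefly slow instance from flapping in and out of the pool on every check.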

Rate Limiting in Clustered Environments

Different rate limiting strategies have specific requirements in clustered deployments:

Rate Limiter Type  Clustering Requirements
-----------------  ----------------------------------------------------
Call Count         Requires shared state across instances to maintain
                   accurate counts
Data Limits        Requires shared state across instances to track
                   total data transfer
Stream Rate        Requires session affinity (sticky sessions) for
                   consistent streaming experience
Credits            Requires shared state across instances to maintain
                   accurate credit balance

Important: Stream Rate limiters require sticky sessions to function properly in clustered deployments. Ensure your load balancer configuration supports session affinity.
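The need for shared state is easiest to see with a Call Count limiter. In this sketch (illustrative, not ApiCharge code), a client with a quota of 10 calls sends 10 requests to each of three instances; per-instance counters admit all 30 requests, while a shared counter, such as one kept in Redis, stops at 10:

```python
def admitted_with_per_instance_counters(requests_per_instance, instances, quota):
    """Each instance tracks its own count, so each admits up to `quota`."""
    return sum(min(requests_per_instance, quota) for _ in range(instances))

def admitted_with_shared_counter(requests_per_instance, instances, quota):
    """All instances increment one shared counter (e.g. in Redis)."""
    return min(requests_per_instance * instances, quota)

# Quota of 10 calls, 10 requests sent to each of 3 instances:
assert admitted_with_per_instance_counters(10, 3, 10) == 30  # 3x over quota
assert admitted_with_shared_counter(10, 3, 10) == 10         # quota enforced
```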

Redis Cache Configuration for Different Scales

The Redis configuration needs to scale with your deployment size. Here are recommendations for different deployment scales:

Scale       ApiCharge Instances  Redis Configuration                    Hardware Recommendations
----------  -------------------  -------------------------------------  -------------------------
Small       2-3                  Single Redis instance                  2 vCPUs, 4GB RAM, SSD
Medium      4-10                 Redis Sentinel (1 master, 2 replicas)  4 vCPUs, 8GB RAM, SSD
Large       11-30                Redis Cluster (3+ shards)              8+ vCPUs, 16+ GB RAM, SSD
Enterprise  31+                  Multi-region Redis Cluster             Custom sizing based on
                                                                        load testing
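Read as a lookup, the sizing table maps instance count to a recommended Redis topology (a sketch of the table above, not a sizing tool):

```python
def recommended_redis_topology(instance_count):
    """Return the Redis topology the sizing table suggests for a
    given number of ApiCharge instances."""
    if instance_count <= 3:
        return "Single Redis instance"
    if instance_count <= 10:
        return "Redis Sentinel (1 master, 2 replicas)"
    if instance_count <= 30:
        return "Redis Cluster (3+ shards)"
    return "Multi-region Redis Cluster"

assert recommended_redis_topology(5) == "Redis Sentinel (1 master, 2 replicas)"
```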

Important Redis configuration parameters for production:

# Example redis.conf for medium deployment
maxmemory 4gb
# volatile-lru evicts only keys that carry a TTL; ApiCharge rate-limiter
# keys are written with expiries, so non-expiring state is protected
maxmemory-policy volatile-lru
appendonly yes
appendfsync everysec
save 900 1
save 300 10
save 60 10000
tcp-keepalive 60
timeout 300
tcp-backlog 511
databases 16

Note: In production environments, consider using managed Redis services from cloud providers, which handle the operational complexity of Redis clustering and high availability.

State Sharing Requirements

ApiCharge requires shared state and connection persistence for proper functionality in a clustered environment:

Configure state sharing in your deployment by enabling Redis integration in appsettings.json.

Load Balancer Session Management

For streaming protocols, your load balancer must support "sticky sessions" that route clients consistently to the same instance:

Protocol                  Load Balancer Requirement
------------------------  -----------------------------------------------
WebSockets                Sticky sessions + WebSocket protocol support
Server-Sent Events (SSE)  Sticky sessions + long-lived connection support
Standard HTTP streaming   Sticky sessions for optimal performance
gRPC streaming            Sticky sessions + HTTP/2 support

Important: Most load balancers require specific configuration to support WebSockets and other streaming protocols. Ensure your load balancer is properly configured to handle these connection types.

Scaling Considerations

When scaling ApiCharge in clustered environments, consider these factors:

Vertical Scaling

Horizontal Scaling

Regional Deployment

For multi-region deployments, consider these approaches:

  1. Separate Clusters - Independent ApiCharge clusters per region, with region-local Redis and RPC servers
  2. Global Redis - Region-local ApiCharge instances with a global Redis cluster (higher latency but consistent state)
  3. Hybrid Approach - Regional ApiCharge and Redis instances with a global RPC server cluster

Network Topology Recommendations

Optimize your network topology for performance and security:

  1. Public Subnet - Load balancers and public-facing components
  2. Private Subnet - ApiCharge instances and application servers
  3. Database Subnet - Redis and other persistent storage
  4. Blockchain Subnet - RPC servers with controlled internet access

  +-------------------+
  |   Public Subnet   |
  |                   |
  |  +-------------+  |
  |  |Load Balancer|  |
  |  +------+------+  |
  +---------|---------+
            |
  +---------|---------+
  |  Private Subnet   |
  |         |         |
  |  +------v------+  |
  |  |  ApiCharge  |  |
  |  |  Instances  |  |
  |  +------+------+  |
  |         |         |
  +---------|---------+
            |
      +-----|-----+      +----------------+
      | DB Subnet |      | Blockchain Net |
      |     |     |      |       |        |
      |  +--v---+ |      |  +----v-----+  |
      |  |Redis | |      |  |RPC Server|  |
      |  +------+ |      |  +----------+  |
      +-----------+      +----------------+

Monitoring Clustered Deployments

Effective monitoring is essential for clustered ApiCharge deployments:

ApiCharge exposes metrics through a Prometheus-compatible endpoint at /metrics.

Key Metrics to Monitor

Metric Category  Important Metrics                     Warning Thresholds
---------------  ------------------------------------  --------------------------------
System           CPU usage, Memory usage, Disk I/O     CPU > 80%, Memory > 85%,
                                                       Disk latency > 10ms
Application      Request rate, Error rate,             Error rate > 1%,
                 Response time                         P95 response time > 500ms
Redis            Memory usage, CPU usage, Evictions    Memory > 80%, Evictions > 0,
                                                       Connected clients > 90% max
RPC Server       Transaction throughput, Block height  Height lag > 5 blocks,
                 lag, Error rate                       Transaction failure rate > 0.1%
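The warning thresholds above lend themselves to mechanical checks. A minimal sketch (the metric names are illustrative; the threshold values follow the table):

```python
WARNING_THRESHOLDS = {
    "cpu_percent": 80,
    "memory_percent": 85,
    "error_rate_percent": 1,
    "p95_response_ms": 500,
    "redis_evictions": 0,
    "block_height_lag": 5,
}

def warnings_for(metrics):
    """Return the names of metrics that exceed their warning thresholds."""
    return sorted(
        name for name, limit in WARNING_THRESHOLDS.items()
        if metrics.get(name, 0) > limit
    )

sample = {"cpu_percent": 92, "error_rate_percent": 0.4, "block_height_lag": 7}
assert warnings_for(sample) == ["block_height_lag", "cpu_percent"]
```

In practice, an alerting system such as Prometheus Alertmanager would evaluate rules like these against the /metrics endpoint.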

For comprehensive monitoring, consider integrating with monitoring platforms like Prometheus, Grafana, or cloud-native monitoring services.

Backup and Recovery

For production deployments, implement backup and recovery procedures:

Backup Strategy Recommendations

Design a comprehensive backup strategy for production environments:

  1. Regular RDB Snapshots - Configure Redis to create periodic RDB snapshots
  2. AOF Persistence - Enable AOF for more granular persistence between snapshots
  3. Offsite Backups - Move backup files to secure offsite storage
  4. Backup Retention - Keep hourly backups for 24 hours, daily for 30 days, monthly for 1 year
  5. Recovery Testing - Regularly test the recovery process to ensure backups are valid
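The retention schedule in step 4 can be expressed as a predicate over a backup's age (a sketch under the stated retention rules; the helper and its flags are illustrative):

```python
from datetime import datetime, timedelta

def should_retain(backup_time, now, is_daily=False, is_monthly=False):
    """Apply the schedule: hourly backups are kept for 24 hours,
    daily backups for 30 days, monthly backups for 1 year."""
    age = now - backup_time
    if age <= timedelta(hours=24):
        return True                      # every backup within 24h
    if is_daily and age <= timedelta(days=30):
        return True
    if is_monthly and age <= timedelta(days=365):
        return True
    return False

now = datetime(2024, 6, 1)
assert should_retain(now - timedelta(hours=3), now)                  # hourly
assert not should_retain(now - timedelta(days=2), now)               # stale hourly
assert should_retain(now - timedelta(days=2), now, is_daily=True)    # daily
assert should_retain(now - timedelta(days=200), now, is_monthly=True)
```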

Sample Redis backup configuration:

# Redis backup settings in redis.conf
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
dir /var/lib/redis/backup

Disaster Recovery Planning

Prepare for potential failures with a disaster recovery plan:

  1. Define RPO/RTO - Establish Recovery Point Objective and Recovery Time Objective
  2. Document Procedures - Create step-by-step recovery procedures for different scenarios
  3. Assign Responsibilities - Define who is responsible for each recovery task
  4. Regular Testing - Conduct disaster recovery drills to validate processes
  5. Multi-region Failover - For critical systems, consider multi-region active-active or active-passive setups

Next Steps

After setting up your clustered deployment, consider: