Basic Configuration
This guide covers the essential configuration settings for AI security systems: initial setup, security policies, and optimization techniques for robust deployments across development, staging, and production environments.
Configuration Framework
Infrastructure
- Server and container configuration
- Network security settings
- Load balancing and scaling
- Monitoring and logging setup
Security
- Authentication and authorization
- Encryption and key management
- Access controls and permissions
- Security policy enforcement
Performance
- Resource allocation and limits
- Caching and optimization
- Performance monitoring
- Auto-scaling configuration
Initial System Setup
Configuration File Structure
Recommended Directory Structure
```
ai-security-system/
├── config/
│   ├── base.yaml              # Base configuration
│   ├── development.yaml       # Development overrides
│   ├── staging.yaml           # Staging overrides
│   ├── production.yaml        # Production overrides
│   └── secrets/
│       ├── api-keys.yaml      # API keys (encrypted)
│       └── certificates/      # SSL certificates
├── policies/
│   ├── security.yaml          # Security policies
│   ├── validation.yaml        # Data validation rules
│   └── monitoring.yaml        # Monitoring configuration
├── docker/
│   ├── Dockerfile
│   └── docker-compose.yaml
└── scripts/
    ├── setup.sh               # Initial setup script
    └── deploy.sh              # Deployment script
```
Base Configuration Template
```yaml
# config/base.yaml
application:
  name: ai-security-system
  version: "1.0.0"
  environment: development
  debug: false

server:
  host: "0.0.0.0"
  port: 8080
  workers: 4
  timeout: 30
  max_request_size: "10MB"

security:
  jwt:
    algorithm: "HS256"
    expiration: 3600  # 1 hour
  rate_limiting:
    enabled: true
    requests_per_minute: 100
    burst_size: 20
  cors:
    enabled: true
    allowed_origins: []
    allowed_methods: ["GET", "POST", "PUT", "DELETE"]

ai_models:
  default_model: "gpt-3.5-turbo"
  max_tokens: 4096
  temperature: 0.7
  timeout: 30
  validation:
    input_max_length: 10000
    output_max_length: 5000

database:
  type: "postgresql"
  host: "localhost"
  port: 5432
  name: "ai_security"
  pool_size: 10
  ssl_mode: "require"

logging:
  level: "INFO"
  format: "json"
  file: "/var/log/ai-security.log"
  max_size: "100MB"
  backup_count: 5

monitoring:
  metrics:
    enabled: true
    port: 9090
  health_check:
    enabled: true
    path: "/health"
    timeout: 5

cache:
  type: "redis"
  host: "localhost"
  port: 6379
  ttl: 3600  # 1 hour
  max_memory: "1GB"
```
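The `rate_limiting` block above pairs a steady rate (`requests_per_minute`) with a `burst_size` allowance. One common way those two knobs interact is a token bucket; the sketch below is illustrative (the class and method names are not part of the system above), assuming tokens refill continuously at the per-minute rate:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: refills at requests_per_minute / 60
    tokens per second and holds at most burst_size tokens."""

    def __init__(self, requests_per_minute=100, burst_size=20):
        self.rate = requests_per_minute / 60.0  # tokens per second
        self.capacity = burst_size
        self.tokens = float(burst_size)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(requests_per_minute=100, burst_size=5)
results = [bucket.allow() for _ in range(6)]
# The first five calls drain the burst; the sixth is throttled
# until tokens refill.
```

A production deployment would track one bucket per client key (usually in Redis, so limits hold across workers); this in-process version only shows the accounting.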
Security Configuration
Authentication & Authorization
JWT Configuration
```yaml
# config/security.yaml
authentication:
  jwt:
    secret_key: "${JWT_SECRET_KEY}"  # From environment
    algorithm: "HS256"
    access_token_expire: 3600      # 1 hour
    refresh_token_expire: 604800   # 7 days
    issuer: "ai-security-system"
    audience: "api-users"
  oauth:
    enabled: true
    providers:
      google:
        client_id: "${GOOGLE_CLIENT_ID}"
        client_secret: "${GOOGLE_CLIENT_SECRET}"
        redirect_uri: "/auth/google/callback"

authorization:
  rbac:
    enabled: true
    default_role: "user"
    roles:
      admin:
        permissions:
          - "system:manage"
          - "users:manage"
          - "models:manage"
          - "security:configure"
      user:
        permissions:
          - "ai:query"
          - "data:validate"
          - "reports:view"
      readonly:
        permissions:
          - "reports:view"
          - "status:check"
```

```python
# Python implementation
import datetime
from functools import wraps

import jwt
from flask import request, jsonify

class AuthManager:
    # Role-to-permission map mirroring the RBAC section of security.yaml
    ROLE_PERMISSIONS = {
        'admin': {'system:manage', 'users:manage', 'models:manage', 'security:configure'},
        'user': {'ai:query', 'data:validate', 'reports:view'},
        'readonly': {'reports:view', 'status:check'},
    }

    def __init__(self, app_config):
        self.secret_key = app_config['JWT_SECRET_KEY']
        self.algorithm = app_config.get('JWT_ALGORITHM', 'HS256')
        self.expiration = app_config.get('JWT_EXPIRATION', 3600)

    def generate_token(self, user_id, role='user'):
        """Generate a JWT token for a user."""
        now = datetime.datetime.utcnow()
        payload = {
            'user_id': user_id,
            'role': role,
            'exp': now + datetime.timedelta(seconds=self.expiration),
            'iat': now,
            'iss': 'ai-security-system'
        }
        return jwt.encode(payload, self.secret_key, algorithm=self.algorithm)

    def verify_token(self, token):
        """Verify and decode a JWT token."""
        try:
            payload = jwt.decode(token, self.secret_key, algorithms=[self.algorithm])
            return {'valid': True, 'payload': payload}
        except jwt.ExpiredSignatureError:
            return {'valid': False, 'error': 'Token expired'}
        except jwt.InvalidTokenError:
            return {'valid': False, 'error': 'Invalid token'}

    def has_permission(self, role, permission):
        """Check whether a role grants the given permission."""
        return permission in self.ROLE_PERMISSIONS.get(role, set())

    def require_auth(self, required_permission=None):
        """Decorator for protecting endpoints."""
        def decorator(f):
            @wraps(f)
            def decorated_function(*args, **kwargs):
                token = request.headers.get('Authorization')
                if not token or not token.startswith('Bearer '):
                    return jsonify({'error': 'Missing or invalid authorization header'}), 401

                token = token.split(' ')[1]
                verification = self.verify_token(token)
                if not verification['valid']:
                    return jsonify({'error': verification['error']}), 401

                # Check permissions if required
                if required_permission:
                    user_role = verification['payload'].get('role')
                    if not self.has_permission(user_role, required_permission):
                        return jsonify({'error': 'Insufficient permissions'}), 403

                request.user = verification['payload']
                return f(*args, **kwargs)
            return decorated_function
        return decorator
```
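The `AuthManager` above relies on the PyJWT library. To make the HS256 mechanics concrete, here is a stdlib-only sketch of what signing and verifying such a token involves: base64url-encode the header and payload, then HMAC-SHA256 the joined string with the shared secret. The function names are illustrative; real deployments should keep using a vetted JWT library, which also handles expiry, audience, and algorithm checks:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Build a header.payload.signature token signed with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> bool:
    """Recompute the signature over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition('.')
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_hs256({"user_id": "u1", "role": "user"}, b"dev-secret")
assert verify_hs256(token, b"dev-secret")
assert not verify_hs256(token, b"wrong-secret")
```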
Encryption & Data Protection
Encryption Configuration
```yaml
# Encryption settings
encryption:
  at_rest:
    algorithm: "AES-256-GCM"
    key_rotation_days: 90
    backup_encryption: true
  in_transit:
    tls_version: "1.3"
    cipher_suites:
      - "TLS_AES_256_GCM_SHA384"
      - "TLS_CHACHA20_POLY1305_SHA256"
    certificate_path: "/etc/ssl/certs/"
    private_key_path: "/etc/ssl/private/"
  api_keys:
    encryption_key: "${API_KEY_ENCRYPTION_KEY}"
    storage: "encrypted_database"
```

```python
# Python encryption utility
import os
import base64

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

class EncryptionManager:
    def __init__(self, master_key=None):
        if master_key:
            self.key = master_key.encode()
        else:
            self.key = os.environ.get('ENCRYPTION_KEY', '').encode()
        if not self.key:
            raise ValueError("Encryption key not provided")

        # Derive key using PBKDF2
        salt = b'stable_salt_for_consistency'  # In production, use a dynamic salt
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            iterations=100000,
        )
        key = base64.urlsafe_b64encode(kdf.derive(self.key))
        self.cipher_suite = Fernet(key)

    def encrypt_data(self, data: str) -> str:
        """Encrypt sensitive data."""
        encrypted_data = self.cipher_suite.encrypt(data.encode())
        return base64.urlsafe_b64encode(encrypted_data).decode()

    def decrypt_data(self, encrypted_data: str) -> str:
        """Decrypt sensitive data."""
        encrypted_bytes = base64.urlsafe_b64decode(encrypted_data.encode())
        decrypted_data = self.cipher_suite.decrypt(encrypted_bytes)
        return decrypted_data.decode()

    def encrypt_api_keys(self, api_keys: dict) -> dict:
        """Encrypt API keys for storage."""
        return {service: self.encrypt_data(key) for service, key in api_keys.items()}
```
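The `EncryptionManager` above flags its fixed salt as a production weakness. A common fix is to generate a random salt per derivation and store it alongside the ciphertext so the key can be re-derived later. The sketch below uses only the standard library's `hashlib.pbkdf2_hmac`; the function name is illustrative:

```python
import hashlib
import os

def derive_key(master_secret: bytes, salt=None, iterations: int = 100_000):
    """Derive a 32-byte key from a master secret with PBKDF2-HMAC-SHA256.

    When no salt is supplied, a fresh 16-byte random salt is generated;
    the caller must store it next to the ciphertext so the same key can
    be re-derived for decryption."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac('sha256', master_secret, salt, iterations, dklen=32)
    return key, salt

key1, salt = derive_key(b'master-secret')
key2, _ = derive_key(b'master-secret', salt=salt)  # same salt -> same key
key3, _ = derive_key(b'master-secret')             # fresh salt -> different key
assert key1 == key2
assert key1 != key3
```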
Performance Optimization
Resource Management
Memory Configuration
```yaml
# Resource limits
resources:
  memory:
    max_heap_size: "2GB"
    model_cache_size: "1GB"
    request_buffer_size: "256MB"
  cpu:
    max_workers: 8
    worker_timeout: 30
    max_requests_per_worker: 1000
  connections:
    max_concurrent: 1000
    keepalive_timeout: 75
    connection_pool_size: 20
```

```yaml
# Docker resource limits (docker-compose.yaml)
version: '3.8'
services:
  ai-security:
    image: ai-security:latest
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 2G
```
Caching Strategy
```yaml
# Caching configuration
cache:
  redis:
    host: "redis-cluster"
    port: 6379
    db: 0
    password: "${REDIS_PASSWORD}"
    cluster_mode: true
  layers:
    model_responses:
      ttl: 3600  # 1 hour
      max_size: "500MB"
    validation_results:
      ttl: 1800  # 30 minutes
      max_size: "100MB"
    user_sessions:
      ttl: 86400  # 24 hours
      max_size: "50MB"
```
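The `ttl` values above bound how long each cache layer may serve a stale entry. When Redis is not available (for example in unit tests), the same semantics can be mimicked in-process; this is a minimal sketch, not a replacement for the Redis layers:

```python
import time

class TTLCache:
    """Minimal in-process stand-in for a TTL cache layer:
    entries expire ttl_seconds after they are written and are
    evicted lazily on read."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and miss
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("model_response", {"answer": "ok"})
assert cache.get("model_response") == {"answer": "ok"}
time.sleep(0.06)
assert cache.get("model_response") is None  # expired after the TTL
```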
Monitoring & Logging
Comprehensive Monitoring Setup
Monitoring Configuration
```yaml
# monitoring.yaml
monitoring:
  prometheus:
    enabled: true
    port: 9090
    metrics_path: "/metrics"
    scrape_interval: "15s"

logging:
  level: "INFO"
  format: "json"
  outputs:
    - type: "file"
      path: "/var/log/ai-security/app.log"
      rotation: "daily"
      retention: "30d"
    - type: "elasticsearch"
      hosts: ["elasticsearch:9200"]
      index: "ai-security-logs"

alerts:
  slack:
    webhook_url: "${SLACK_WEBHOOK_URL}"
    channel: "#ai-security-alerts"
  email:
    smtp_server: "smtp.company.com"
    from: "alerts@company.com"
    to: ["security-team@company.com"]
  rules:
    high_error_rate:
      condition: "error_rate > 0.05"
      duration: "5m"
      severity: "critical"
    high_response_time:
      condition: "response_time_p95 > 2000"
      duration: "10m"
      severity: "warning"
    model_failure:
      condition: "model_error_rate > 0.01"
      duration: "1m"
      severity: "critical"
```

```python
# Python monitoring setup
import logging
import time
from functools import wraps

from prometheus_client import Counter, Histogram, Gauge, start_http_server

class MonitoringManager:
    def __init__(self, metrics_port=9090):
        # Prometheus metrics
        self.request_count = Counter('requests_total', 'Total requests',
                                     ['method', 'endpoint', 'status'])
        self.request_duration = Histogram('request_duration_seconds', 'Request duration')
        self.active_connections = Gauge('active_connections', 'Active connections')
        self.model_response_time = Histogram('model_response_time_seconds', 'Model response time')

        # Expose the /metrics endpoint for Prometheus scraping
        start_http_server(metrics_port)

        # Setup logging
        self.setup_logging()

    def setup_logging(self):
        """Configure application logging."""
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('/var/log/ai-security.log'),
                logging.StreamHandler()
            ]
        )
        self.logger = logging.getLogger('ai-security')

    def monitor_endpoint(self, endpoint_name):
        """Decorator to monitor endpoint performance."""
        def decorator(f):
            @wraps(f)
            def decorated_function(*args, **kwargs):
                start_time = time.time()
                status = 'success'
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    status = 'error'
                    self.logger.error(f"Endpoint {endpoint_name} failed: {e}")
                    raise
                finally:
                    duration = time.time() - start_time
                    self.request_duration.observe(duration)
                    self.request_count.labels(
                        method='POST',
                        endpoint=endpoint_name,
                        status=status
                    ).inc()
            return decorated_function
        return decorator
```
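The decorator in `MonitoringManager` depends on `prometheus_client`. The underlying pattern (time every call, count outcomes by status, re-raise errors after recording them) can be shown with the standard library alone; the names below are illustrative and the module-level counters stand in for Prometheus metrics:

```python
import time
from collections import Counter
from functools import wraps

call_counts = Counter()  # (endpoint, status) -> number of calls
durations = []           # observed wall-clock durations in seconds

def monitored(endpoint_name):
    """Record status and duration for every call, success or failure."""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            status = 'success'
            try:
                return f(*args, **kwargs)
            except Exception:
                status = 'error'
                raise  # recording must never swallow the error
            finally:
                durations.append(time.monotonic() - start)
                call_counts[(endpoint_name, status)] += 1
        return wrapper
    return decorator

@monitored('validate')
def validate(text):
    if not text:
        raise ValueError("empty input")
    return True

validate("hello")
try:
    validate("")
except ValueError:
    pass
assert call_counts[('validate', 'success')] == 1
assert call_counts[('validate', 'error')] == 1
```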
Environment-Specific Settings
Development
- Debug mode enabled
- Detailed error messages
- Hot reloading active
- Mock external services
- Relaxed security for testing
```yaml
# development.yaml
debug: true
log_level: DEBUG
security:
  rate_limiting:
    enabled: false
database:
  name: "ai_security_dev"
```
Staging
- Production-like settings
- Full security enabled
- Performance testing
- Load testing ready
- Integration testing
```yaml
# staging.yaml
debug: false
log_level: INFO
security:
  rate_limiting:
    enabled: true
database:
  name: "ai_security_staging"
```
Production
- Maximum security settings
- Error logging only
- Performance optimized
- High availability setup
- Disaster recovery ready
```yaml
# production.yaml
debug: false
log_level: ERROR
security:
  rate_limiting:
    requests_per_minute: 60
database:
  name: "ai_security_prod"
  ssl_mode: "require"
```
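Each environment file above overrides only a few keys of the base configuration, including keys nested inside `security` and `database`, so a shallow `{**base, **env}` merge would clobber sibling settings. A recursive merge is the usual fix; this sketch uses plain dicts in place of parsed YAML:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base: nested dicts are merged
    key-by-key, while any non-dict value in override wins outright."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {
    "debug": False,
    "security": {"rate_limiting": {"enabled": True, "requests_per_minute": 100}},
    "database": {"name": "ai_security", "port": 5432},
}
production = {
    "security": {"rate_limiting": {"requests_per_minute": 60}},
    "database": {"name": "ai_security_prod"},
}
config = deep_merge(base, production)
# Sibling keys survive: rate_limiting keeps enabled=True while the
# per-minute limit drops to 60, and database keeps its port.
assert config["security"]["rate_limiting"] == {"enabled": True, "requests_per_minute": 60}
assert config["database"] == {"name": "ai_security_prod", "port": 5432}
```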
Configuration Management
Version Control & Deployment
Configuration Management Script
```bash
#!/bin/bash
# setup.sh - Configuration management script

CONFIG_DIR="/opt/ai-security/config"
ENVIRONMENT=${1:-development}

echo "Setting up AI Security System for ${ENVIRONMENT} environment..."

# Create directory structure
mkdir -p ${CONFIG_DIR}/{secrets,policies,logs}
mkdir -p /var/log/ai-security

# Set proper permissions
chmod 700 ${CONFIG_DIR}/secrets
chown -R ai-security:ai-security ${CONFIG_DIR}

# Copy environment-specific configuration
cp config/base.yaml ${CONFIG_DIR}/
cp config/${ENVIRONMENT}.yaml ${CONFIG_DIR}/environment.yaml

# Generate secrets if they don't exist
if [ ! -f ${CONFIG_DIR}/secrets/jwt-secret ]; then
    openssl rand -hex 32 > ${CONFIG_DIR}/secrets/jwt-secret
    chmod 600 ${CONFIG_DIR}/secrets/jwt-secret
fi

# Setup SSL certificates for production
if [ "${ENVIRONMENT}" = "production" ]; then
    echo "Setting up SSL certificates..."
    certbot certonly --webroot -w /var/www/html -d yourdomain.com
    ln -sf /etc/letsencrypt/live/yourdomain.com/fullchain.pem ${CONFIG_DIR}/ssl/cert.pem
    ln -sf /etc/letsencrypt/live/yourdomain.com/privkey.pem ${CONFIG_DIR}/ssl/key.pem
fi

# Validate configuration
python -c "
import yaml
import sys

try:
    with open('${CONFIG_DIR}/base.yaml', 'r') as f:
        base_config = yaml.safe_load(f)
    with open('${CONFIG_DIR}/environment.yaml', 'r') as f:
        env_config = yaml.safe_load(f)

    # Merge configurations (shallow: top-level keys only)
    merged_config = {**base_config, **env_config}

    # Validate required keys
    required_keys = ['server', 'security', 'database', 'logging']
    for key in required_keys:
        if key not in merged_config:
            print(f'Missing required configuration key: {key}')
            sys.exit(1)
    print('Configuration validation successful')
except Exception as e:
    print(f'Configuration validation failed: {e}')
    sys.exit(1)
"

echo "Configuration setup completed successfully!"

# Start services
if [ "${ENVIRONMENT}" = "production" ]; then
    systemctl enable ai-security
    systemctl start ai-security
else
    echo "Run 'docker-compose up' to start the development environment"
fi
```
Configuration Best Practices
Security Best Practices
- Store secrets in environment variables or secret management systems
- Use encryption for configuration files containing sensitive data
- Implement proper file permissions (600 for secrets, 644 for config)
- Regularly rotate API keys and certificates
- Use separate configurations for each environment
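The first practice above (secrets from environment variables) works best with fail-fast validation: check every required secret at startup rather than discovering a missing one on the first authenticated request. A minimal sketch, with illustrative secret names drawn from the configs in this guide:

```python
import os

# Illustrative: the secrets referenced by the configuration examples above
REQUIRED_SECRETS = ["JWT_SECRET_KEY", "REDIS_PASSWORD"]

def load_secrets(env=os.environ):
    """Return required secrets from the environment, raising at startup
    if any are missing or empty."""
    missing = [name for name in REQUIRED_SECRETS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required secrets: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_SECRETS}

# Passing a plain dict makes the loader easy to unit-test
secrets = load_secrets({"JWT_SECRET_KEY": "abc", "REDIS_PASSWORD": "xyz"})
assert secrets["JWT_SECRET_KEY"] == "abc"
try:
    load_secrets({})
except RuntimeError as e:
    assert "JWT_SECRET_KEY" in str(e)
```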
Operational Best Practices
- Version control all configuration files (except secrets)
- Implement configuration validation and testing
- Document all configuration options and their purposes
- Monitor configuration changes and their impact
- Maintain backup copies of working configurations
Quick Setup Guide
1. Download Configuration Templates: Get the base configuration files and customize them for your environment.
2. Set Environment Variables: Configure secrets, API keys, and environment-specific settings.
3. Run Configuration Validation: Validate your configuration files before deployment.
4. Deploy and Monitor: Deploy your configured system and monitor for any issues.