Error Handling Patterns for ChatGPT Apps: Build Resilience Into Your AI Applications

Building ChatGPT apps that handle errors gracefully is the difference between a professional application and one that frustrates users. When your app interacts with OpenAI's APIs, external services, and user inputs, failures are inevitable—network timeouts, API rate limits, malformed responses, and service outages will happen.

This comprehensive guide shows you how to implement production-ready error handling patterns that make your ChatGPT apps resilient, reliable, and user-friendly. You'll learn retry strategies, circuit breaker patterns, timeout management, error categorization, and fallback response systems with complete code examples you can deploy today.

Why Error Handling Matters for ChatGPT Apps

ChatGPT apps face unique error handling challenges that traditional web applications don't encounter:

  • API Rate Limits: OpenAI enforces rate limits that vary by tier and endpoint
  • Network Latency: AI model inference can take 3-15 seconds, increasing timeout risk
  • Model Failures: LLMs occasionally produce malformed JSON or unexpected responses
  • Conversational Context: Errors must preserve chat flow and user context
  • Display Constraints: Inline widgets have limited space for error messages

Apps that fail to handle errors gracefully are far more likely to be rejected during OpenAI's app review. Professional error handling isn't optional; it's a requirement for approval.

Learn how to build ChatGPT apps without code using MakeAIHQ or explore our AI Conversational Editor for rapid development.

Error Categories in ChatGPT Apps

Before implementing error handlers, categorize errors by type and recovery strategy:

1. Transient Errors (Retry Eligible)

These errors are temporary and likely to succeed on retry:

  • Network timeouts: Connection dropped mid-request
  • Rate limit exceeded: Temporary throttling (HTTP 429)
  • Service unavailable: Temporary OpenAI outage (HTTP 503)
  • Internal server errors: Transient backend failures (HTTP 500)

Recovery Strategy: Exponential backoff retry with maximum attempts
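The backoff schedule behind this strategy reduces to a pure function. Here's a simplified sketch (the full retry orchestrator later in this guide adds jitter and configurable caps):

```typescript
// Sketch: exponential backoff schedule without jitter.
// delay(attempt) = initialDelayMs * multiplier^(attempt - 1), capped at maxDelayMs.
function backoffDelayMs(
  attempt: number, // 1-based attempt number
  initialDelayMs = 1000,
  multiplier = 2,
  maxDelayMs = 30000
): number {
  const raw = initialDelayMs * Math.pow(multiplier, attempt - 1);
  return Math.min(raw, maxDelayMs);
}

// Attempts 1..5 → 1000, 2000, 4000, 8000, 16000 ms
const schedule = [1, 2, 3, 4, 5].map(a => backoffDelayMs(a));
```

Doubling delays keeps early retries fast while backing off quickly enough to let a throttled or degraded service recover.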

2. Client Errors (Non-Retryable)

These errors indicate invalid requests that won't succeed on retry:

  • Invalid authentication: Expired or malformed API keys (HTTP 401)
  • Malformed request: Invalid JSON or missing required fields (HTTP 400)
  • Resource not found: Invalid endpoint or tool reference (HTTP 404)
  • Permission denied: User lacks access to requested resource (HTTP 403)

Recovery Strategy: Return user-friendly error message with actionable fix

3. Model Errors (Validation Required)

These errors stem from AI model responses:

  • Malformed JSON: Model returns invalid structured data
  • Missing required fields: Incomplete tool call parameters
  • Type mismatches: String returned when integer expected
  • Token limit exceeded: Response truncated mid-generation

Recovery Strategy: Validate and sanitize response, use fallback if invalid
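A minimal sketch of that validate-and-fallback step (the `query` and `limit` fields are illustrative, not part of any real schema):

```typescript
// Sketch: parse a model response as JSON; fall back to a safe default when
// the payload is malformed or required fields are missing or mistyped.
interface ToolCallArgs {
  query: string;
  limit: number;
}

function parseToolArgs(raw: string, fallback: ToolCallArgs): ToolCallArgs {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.query === "string" && typeof parsed.limit === "number") {
      return { query: parsed.query, limit: parsed.limit };
    }
    return fallback; // required fields missing or wrong type
  } catch {
    return fallback; // malformed JSON
  }
}
```

In production you'd typically validate against a full schema (e.g. with a validation library), but the shape is the same: parse, check, and degrade to a known-good default instead of crashing.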

4. Business Logic Errors (Application Specific)

These errors come from your application's domain rules:

  • Invalid state transitions: User attempts forbidden action
  • Data validation failures: Input doesn't match schema
  • Authorization failures: User lacks permission for operation
  • Resource constraints: Quota exceeded, storage full

Recovery Strategy: Clear error message explaining constraint and next steps

For more on ChatGPT app architecture, see our guide on building production-ready ChatGPT apps or explore our template marketplace.

Error Handler Implementation

Here's a production-ready error handler that categorizes errors and applies appropriate recovery strategies:

/**
 * Enterprise-Grade Error Handler for ChatGPT Apps
 * Categorizes errors, applies recovery strategies, and logs for monitoring
 */

import { Logger } from './logger';
import { MetricsCollector } from './metrics';

export enum ErrorCategory {
  TRANSIENT = 'transient',
  CLIENT = 'client',
  MODEL = 'model',
  BUSINESS = 'business',
  UNKNOWN = 'unknown'
}

export enum ErrorSeverity {
  LOW = 'low',       // Recoverable, minimal user impact
  MEDIUM = 'medium', // Requires user action
  HIGH = 'high',     // Service degradation
  CRITICAL = 'critical' // Complete failure
}

export interface ErrorContext {
  userId?: string;
  sessionId?: string;
  toolName?: string;
  requestId?: string;
  timestamp: number;
  metadata?: Record<string, any>;
}

export interface CategorizedError {
  category: ErrorCategory;
  severity: ErrorSeverity;
  message: string;
  userMessage: string;
  retryable: boolean;
  code?: string;
  originalError: Error;
  context: ErrorContext;
}

export class ChatGPTErrorHandler {
  private logger: Logger;
  private metrics: MetricsCollector;

  constructor(logger: Logger, metrics: MetricsCollector) {
    this.logger = logger;
    this.metrics = metrics;
  }

  /**
   * Categorize and process error with appropriate recovery strategy
   */
  public handleError(error: Error, context: ErrorContext): CategorizedError {
    const categorized = this.categorizeError(error, context);

    // Log error with full context
    this.logError(categorized);

    // Track metrics
    this.trackErrorMetrics(categorized);

    return categorized;
  }

  /**
   * Categorize error by type and determine recovery strategy
   */
  private categorizeError(error: Error, context: ErrorContext): CategorizedError {
    // HTTP errors (from fetch or axios)
    if ('status' in error) {
      return this.categorizeHttpError(error as any, context);
    }

    // Model validation errors
    if (error.message.includes('JSON') || error.message.includes('schema')) {
      return {
        category: ErrorCategory.MODEL,
        severity: ErrorSeverity.MEDIUM,
        message: `Model validation failed: ${error.message}`,
        userMessage: 'AI response was malformed. Trying again with clarification.',
        retryable: true,
        code: 'MODEL_VALIDATION_ERROR',
        originalError: error,
        context
      };
    }

    // Timeout errors
    if (error.name === 'TimeoutError' || error.message.includes('timeout')) {
      return {
        category: ErrorCategory.TRANSIENT,
        severity: ErrorSeverity.MEDIUM,
        message: `Request timeout: ${error.message}`,
        userMessage: 'Request took too long. Retrying with optimizations.',
        retryable: true,
        code: 'TIMEOUT_ERROR',
        originalError: error,
        context
      };
    }

    // Business logic errors (custom)
    if (error instanceof BusinessLogicError) {
      return {
        category: ErrorCategory.BUSINESS,
        severity: ErrorSeverity.LOW,
        message: error.message,
        userMessage: error.userMessage,
        retryable: false,
        code: error.code,
        originalError: error,
        context
      };
    }

    // Unknown errors
    return {
      category: ErrorCategory.UNKNOWN,
      severity: ErrorSeverity.HIGH,
      message: `Unexpected error: ${error.message}`,
      userMessage: 'Something went wrong. Our team has been notified.',
      retryable: false,
      code: 'UNKNOWN_ERROR',
      originalError: error,
      context
    };
  }

  /**
   * Categorize HTTP errors by status code
   */
  private categorizeHttpError(error: any, context: ErrorContext): CategorizedError {
    const status = error.status || error.statusCode || 500;

    // Rate limiting (429)
    if (status === 429) {
      return {
        category: ErrorCategory.TRANSIENT,
        severity: ErrorSeverity.MEDIUM,
        message: 'OpenAI rate limit exceeded',
        userMessage: 'High traffic detected. Retrying shortly.',
        retryable: true,
        code: 'RATE_LIMIT_EXCEEDED',
        originalError: error,
        context
      };
    }

    // Unauthorized (401)
    if (status === 401) {
      return {
        category: ErrorCategory.CLIENT,
        severity: ErrorSeverity.CRITICAL,
        message: 'Authentication failed',
        userMessage: 'Session expired. Please log in again.',
        retryable: false,
        code: 'AUTH_FAILED',
        originalError: error,
        context
      };
    }

    // Bad request (400)
    if (status === 400) {
      return {
        category: ErrorCategory.CLIENT,
        severity: ErrorSeverity.MEDIUM,
        message: `Invalid request: ${error.message}`,
        userMessage: 'Request was malformed. Please try again.',
        retryable: false,
        code: 'BAD_REQUEST',
        originalError: error,
        context
      };
    }

    // Service unavailable (503)
    if (status === 503) {
      return {
        category: ErrorCategory.TRANSIENT,
        severity: ErrorSeverity.HIGH,
        message: 'OpenAI service temporarily unavailable',
        userMessage: 'Service temporarily down. Retrying automatically.',
        retryable: true,
        code: 'SERVICE_UNAVAILABLE',
        originalError: error,
        context
      };
    }

    // Internal server error (500)
    if (status >= 500) {
      return {
        category: ErrorCategory.TRANSIENT,
        severity: ErrorSeverity.HIGH,
        message: `Server error: ${status}`,
        userMessage: 'Temporary server issue. Retrying shortly.',
        retryable: true,
        code: 'SERVER_ERROR',
        originalError: error,
        context
      };
    }

    // Default fallback
    return {
      category: ErrorCategory.UNKNOWN,
      severity: ErrorSeverity.MEDIUM,
      message: `HTTP ${status}: ${error.message}`,
      userMessage: 'Request failed. Please try again.',
      retryable: false,
      code: `HTTP_${status}`,
      originalError: error,
      context
    };
  }

  private logError(error: CategorizedError): void {
    this.logger.error('ChatGPT app error', {
      category: error.category,
      severity: error.severity,
      code: error.code,
      message: error.message,
      context: error.context,
      stack: error.originalError.stack
    });
  }

  private trackErrorMetrics(error: CategorizedError): void {
    this.metrics.increment('chatgpt.errors.total', {
      category: error.category,
      severity: error.severity,
      code: error.code || 'unknown'
    });
  }
}

// Custom business logic error
export class BusinessLogicError extends Error {
  constructor(
    public code: string,
    public userMessage: string,
    message?: string
  ) {
    super(message || userMessage);
    this.name = 'BusinessLogicError';
  }
}

This error handler provides comprehensive categorization and recovery strategies. Learn more about ChatGPT app deployment best practices or explore production monitoring strategies.

Retry Orchestrator with Exponential Backoff

Implement intelligent retry logic that handles transient failures gracefully:

/**
 * Retry Orchestrator for ChatGPT Apps
 * Implements exponential backoff with jitter and maximum attempts
 */

export interface RetryConfig {
  maxAttempts: number;
  initialDelayMs: number;
  maxDelayMs: number;
  backoffMultiplier: number;
  jitterFactor: number; // 0-1, adds randomness to prevent thundering herd
  retryableCategories: ErrorCategory[];
}

export interface RetryContext {
  attempt: number;
  totalAttempts: number;
  lastError?: CategorizedError;
  cumulativeDelayMs: number;
}

export class RetryOrchestrator {
  private config: RetryConfig;
  private errorHandler: ChatGPTErrorHandler;
  private logger: Logger;

  constructor(
    errorHandler: ChatGPTErrorHandler,
    logger: Logger,
    config?: Partial<RetryConfig>
  ) {
    this.errorHandler = errorHandler;
    this.logger = logger;
    this.config = {
      maxAttempts: 3,
      initialDelayMs: 1000,
      maxDelayMs: 30000,
      backoffMultiplier: 2,
      jitterFactor: 0.3,
      retryableCategories: [ErrorCategory.TRANSIENT],
      ...config
    };
  }

  /**
   * Execute function with retry logic
   */
  public async executeWithRetry<T>(
    fn: () => Promise<T>,
    context: ErrorContext,
    customConfig?: Partial<RetryConfig>
  ): Promise<T> {
    const config = { ...this.config, ...customConfig };
    const retryContext: RetryContext = {
      attempt: 0,
      totalAttempts: config.maxAttempts,
      cumulativeDelayMs: 0
    };

    while (retryContext.attempt < config.maxAttempts) {
      retryContext.attempt++;

      try {
        this.logger.debug('Executing request', {
          attempt: retryContext.attempt,
          maxAttempts: config.maxAttempts,
          context
        });

        const result = await fn();

        // Success - log if this was a retry
        if (retryContext.attempt > 1) {
          this.logger.info('Request succeeded after retry', {
            attempt: retryContext.attempt,
            cumulativeDelayMs: retryContext.cumulativeDelayMs,
            context
          });
        }

        return result;

      } catch (error) {
        const categorizedError = this.errorHandler.handleError(
          error as Error,
          context
        );
        retryContext.lastError = categorizedError;

        // Check if error is retryable
        const shouldRetry = this.shouldRetry(
          categorizedError,
          retryContext,
          config
        );

        if (!shouldRetry) {
          this.logger.warn('Error not retryable, failing immediately', {
            category: categorizedError.category,
            code: categorizedError.code,
            attempt: retryContext.attempt,
            context
          });
          throw categorizedError;
        }

        // Calculate delay with exponential backoff and jitter
        const delayMs = this.calculateDelay(retryContext, config);
        retryContext.cumulativeDelayMs += delayMs;

        this.logger.info('Retrying after error', {
          category: categorizedError.category,
          code: categorizedError.code,
          attempt: retryContext.attempt,
          maxAttempts: config.maxAttempts,
          delayMs,
          context
        });

        // Wait before retry
        await this.sleep(delayMs);
      }
    }

    // Defensive fallback: in practice the final failed attempt throws above
    this.logger.error('Max retry attempts exceeded', {
      attempts: retryContext.attempt,
      cumulativeDelayMs: retryContext.cumulativeDelayMs,
      lastError: retryContext.lastError?.code,
      context
    });

    throw new Error(
      `Max retry attempts (${config.maxAttempts}) exceeded. ` +
      `Last error: ${retryContext.lastError?.userMessage}`
    );
  }

  /**
   * Determine if error should trigger retry
   */
  private shouldRetry(
    error: CategorizedError,
    retryContext: RetryContext,
    config: RetryConfig
  ): boolean {
    // Check if error category is retryable
    if (!config.retryableCategories.includes(error.category)) {
      return false;
    }

    // Check if error explicitly marked non-retryable
    if (!error.retryable) {
      return false;
    }

    // Check if attempts remaining
    if (retryContext.attempt >= config.maxAttempts) {
      return false;
    }

    return true;
  }

  /**
   * Calculate retry delay with exponential backoff and jitter
   */
  private calculateDelay(
    retryContext: RetryContext,
    config: RetryConfig
  ): number {
    // Exponential backoff: delay = initial * (multiplier ^ (attempt - 1))
    const exponentialDelay =
      config.initialDelayMs *
      Math.pow(config.backoffMultiplier, retryContext.attempt - 1);

    // Cap at max delay
    const cappedDelay = Math.min(exponentialDelay, config.maxDelayMs);

    // Add jitter to prevent thundering herd
    // jitter = delay * (1 ± jitterFactor)
    const jitterRange = cappedDelay * config.jitterFactor;
    const jitter = (Math.random() * 2 - 1) * jitterRange;
    const delayWithJitter = cappedDelay + jitter;

    return Math.max(0, Math.round(delayWithJitter));
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage example (assumes `logger` and `metrics` instances exist in your app)
async function fetchChatCompletion(prompt: string): Promise<string> {
  const errorHandler = new ChatGPTErrorHandler(logger, metrics);
  const retryOrchestrator = new RetryOrchestrator(errorHandler, logger, {
    maxAttempts: 3,
    initialDelayMs: 2000,
    maxDelayMs: 15000
  });

  return retryOrchestrator.executeWithRetry(
    async () => {
      const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          model: 'gpt-4',
          messages: [{ role: 'user', content: prompt }]
        })
      });

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      const data = await response.json();
      return data.choices[0].message.content;
    },
    {
      userId: 'user-123',
      sessionId: 'session-456',
      toolName: 'chat_completion',
      timestamp: Date.now()
    }
  );
}

For more advanced patterns, check out our guides on optimizing ChatGPT app performance and scaling strategies.

Circuit Breaker Pattern

Prevent cascade failures by detecting service degradation and failing fast:

/**
 * Circuit Breaker for ChatGPT Apps
 * Prevents cascade failures by detecting service degradation
 */

export enum CircuitState {
  CLOSED = 'closed',   // Normal operation
  OPEN = 'open',       // Failing fast
  HALF_OPEN = 'half_open' // Testing recovery
}

export interface CircuitBreakerConfig {
  failureThreshold: number; // Failures before opening
  successThreshold: number; // Successes before closing from half-open
  timeoutMs: number;        // Request timeout
  resetTimeoutMs: number;   // Time before attempting half-open
  monitoringWindowMs: number; // Rolling window for failure counting
}

export class CircuitBreaker {
  private state: CircuitState = CircuitState.CLOSED;
  private failures: number = 0;
  private successes: number = 0;
  private lastFailureTime?: number;
  private nextAttemptTime?: number;
  private recentFailures: number[] = []; // Timestamps

  constructor(
    private serviceName: string,
    private config: CircuitBreakerConfig,
    private logger: Logger,
    private metrics: MetricsCollector
  ) {}

  /**
   * Execute function with circuit breaker protection
   */
  public async execute<T>(fn: () => Promise<T>): Promise<T> {
    // Check if circuit is open
    if (this.state === CircuitState.OPEN) {
      if (!this.shouldAttemptReset()) {
        this.logger.warn('Circuit breaker OPEN, failing fast', {
          service: this.serviceName,
          failures: this.failures,
          nextAttemptTime: this.nextAttemptTime
        });
        throw new Error(
          `Circuit breaker OPEN for ${this.serviceName}. ` +
          `Service temporarily unavailable.`
        );
      }

      // Transition to half-open to test recovery
      this.transitionToHalfOpen();
    }

    try {
      // Execute with timeout
      const result = await this.executeWithTimeout(fn);

      // Record success
      this.onSuccess();

      return result;

    } catch (error) {
      // Record failure
      this.onFailure();
      throw error;
    }
  }

  /**
   * Execute function with timeout
   */
  private async executeWithTimeout<T>(fn: () => Promise<T>): Promise<T> {
    let timeoutId: ReturnType<typeof setTimeout> | undefined;

    const timeout = new Promise<never>((_, reject) => {
      timeoutId = setTimeout(() => {
        reject(new Error(`Request timeout after ${this.config.timeoutMs}ms`));
      }, this.config.timeoutMs);
    });

    try {
      // Race the work against the timeout; whichever settles first wins
      return await Promise.race([fn(), timeout]);
    } finally {
      clearTimeout(timeoutId);
    }
  }

  /**
   * Handle successful execution
   */
  private onSuccess(): void {
    this.failures = 0;

    if (this.state === CircuitState.HALF_OPEN) {
      this.successes++;

      if (this.successes >= this.config.successThreshold) {
        this.transitionToClosed();
      }
    }
  }

  /**
   * Handle failed execution
   */
  private onFailure(): void {
    this.failures++;
    this.lastFailureTime = Date.now();
    this.recentFailures.push(this.lastFailureTime);

    // Clean old failures outside monitoring window
    this.cleanOldFailures();

    if (this.state === CircuitState.HALF_OPEN) {
      this.transitionToOpen();
      return;
    }

    if (this.state === CircuitState.CLOSED) {
      if (this.recentFailures.length >= this.config.failureThreshold) {
        this.transitionToOpen();
      }
    }
  }

  /**
   * Remove failures outside monitoring window
   */
  private cleanOldFailures(): void {
    const cutoffTime = Date.now() - this.config.monitoringWindowMs;
    this.recentFailures = this.recentFailures.filter(
      time => time >= cutoffTime
    );
  }

  /**
   * Check if circuit should attempt reset
   */
  private shouldAttemptReset(): boolean {
    if (!this.nextAttemptTime) return false;
    return Date.now() >= this.nextAttemptTime;
  }

  private transitionToOpen(): void {
    this.state = CircuitState.OPEN;
    this.nextAttemptTime = Date.now() + this.config.resetTimeoutMs;

    this.logger.error('Circuit breaker OPENED', {
      service: this.serviceName,
      recentFailures: this.recentFailures.length,
      nextAttemptTime: this.nextAttemptTime
    });

    this.metrics.increment('circuit_breaker.opened', {
      service: this.serviceName
    });
  }

  private transitionToHalfOpen(): void {
    this.state = CircuitState.HALF_OPEN;
    this.successes = 0;

    this.logger.info('Circuit breaker HALF-OPEN, testing recovery', {
      service: this.serviceName
    });

    this.metrics.increment('circuit_breaker.half_opened', {
      service: this.serviceName
    });
  }

  private transitionToClosed(): void {
    this.state = CircuitState.CLOSED;
    this.failures = 0;
    this.successes = 0;
    this.recentFailures = [];
    this.nextAttemptTime = undefined;

    this.logger.info('Circuit breaker CLOSED, service recovered', {
      service: this.serviceName
    });

    this.metrics.increment('circuit_breaker.closed', {
      service: this.serviceName
    });
  }

  public getState(): CircuitState {
    return this.state;
  }
}
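The transition rules above can be reduced to a small pure function for illustration. This is a simplified sketch that collapses the success threshold to a single half-open probe and ignores the reset timer; the full class handles both:

```typescript
// Sketch: circuit breaker state transitions as a pure function.
// open → half_open (reset timer) and success counting are omitted for brevity.
type State = "closed" | "open" | "half_open";

function nextState(
  state: State,
  success: boolean,
  recentFailures: number, // failures within the monitoring window
  failureThreshold: number
): State {
  if (state === "half_open") {
    // A single probe decides: success closes the circuit, failure reopens it
    return success ? "closed" : "open";
  }
  if (state === "closed" && !success && recentFailures >= failureThreshold) {
    return "open"; // too many recent failures: start failing fast
  }
  return state; // otherwise stay put (open waits for its reset timeout)
}
```

Modeling the transitions as a pure function like this also makes the state machine easy to unit test in isolation.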

Learn how to monitor circuit breaker metrics in production or explore ChatGPT app reliability patterns.

Timeout Manager

Implement intelligent timeout management that adapts to request complexity:

/**
 * Timeout Manager for ChatGPT Apps
 * Adaptive timeouts based on request complexity and historical performance
 */

export interface TimeoutConfig {
  defaultTimeoutMs: number;
  minTimeoutMs: number;
  maxTimeoutMs: number;
  complexityMultiplier: number; // Timeout multiplier for complex requests
  percentile: number; // P95, P99 for adaptive calculation
}

export interface RequestMetrics {
  operationType: string;
  durationMs: number;
  timestamp: number;
  success: boolean;
}

export class TimeoutManager {
  private metrics: Map<string, RequestMetrics[]> = new Map();
  private config: TimeoutConfig;
  private logger: Logger;

  constructor(logger: Logger, config?: Partial<TimeoutConfig>) {
    this.logger = logger;
    this.config = {
      defaultTimeoutMs: 30000,
      minTimeoutMs: 5000,
      maxTimeoutMs: 120000,
      complexityMultiplier: 1.5,
      percentile: 95,
      ...config
    };
  }

  /**
   * Execute function with adaptive timeout
   */
  public async executeWithTimeout<T>(
    fn: () => Promise<T>,
    operationType: string,
    complexity: 'simple' | 'medium' | 'complex' = 'medium'
  ): Promise<T> {
    const timeoutMs = this.calculateTimeout(operationType, complexity);
    const startTime = Date.now();

    try {
      const result = await this.withTimeout(fn(), timeoutMs);

      // Record success metrics
      this.recordMetrics({
        operationType,
        durationMs: Date.now() - startTime,
        timestamp: Date.now(),
        success: true
      });

      return result;

    } catch (error) {
      // Record failure metrics
      this.recordMetrics({
        operationType,
        durationMs: Date.now() - startTime,
        timestamp: Date.now(),
        success: false
      });

      throw error;
    }
  }

  /**
   * Calculate adaptive timeout based on historical performance
   */
  private calculateTimeout(
    operationType: string,
    complexity: 'simple' | 'medium' | 'complex'
  ): number {
    const historicalMetrics = this.metrics.get(operationType);

    // Use default if no historical data
    if (!historicalMetrics || historicalMetrics.length < 10) {
      return this.applyComplexityMultiplier(
        this.config.defaultTimeoutMs,
        complexity
      );
    }

    // Calculate percentile from successful requests
    const successfulDurations = historicalMetrics
      .filter(m => m.success)
      .map(m => m.durationMs)
      .sort((a, b) => a - b);

    if (successfulDurations.length === 0) {
      return this.config.defaultTimeoutMs;
    }

    const percentileIndex = Math.min(
      successfulDurations.length - 1,
      Math.floor((this.config.percentile / 100) * successfulDurations.length)
    );
    const percentileValue = successfulDurations[percentileIndex];

    // Add 50% buffer to percentile value
    const adaptiveTimeout = percentileValue * 1.5;

    // Apply complexity multiplier and bounds
    const timeoutWithComplexity = this.applyComplexityMultiplier(
      adaptiveTimeout,
      complexity
    );

    const boundedTimeout = Math.max(
      this.config.minTimeoutMs,
      Math.min(this.config.maxTimeoutMs, timeoutWithComplexity)
    );

    this.logger.debug('Calculated adaptive timeout', {
      operationType,
      complexity,
      historicalP95: percentileValue,
      adaptiveTimeout,
      finalTimeout: boundedTimeout
    });

    return boundedTimeout;
  }

  /**
   * Apply complexity multiplier to timeout
   */
  private applyComplexityMultiplier(
    baseTimeout: number,
    complexity: 'simple' | 'medium' | 'complex'
  ): number {
    const multipliers = {
      simple: 0.7,
      medium: 1.0,
      complex: this.config.complexityMultiplier
    };

    return baseTimeout * multipliers[complexity];
  }

  /**
   * Wrap promise with timeout
   */
  private withTimeout<T>(
    promise: Promise<T>,
    timeoutMs: number
  ): Promise<T> {
    return new Promise((resolve, reject) => {
      const timeoutId = setTimeout(() => {
        reject(new Error(`Operation timed out after ${timeoutMs}ms`));
      }, timeoutMs);

      promise
        .then(result => {
          clearTimeout(timeoutId);
          resolve(result);
        })
        .catch(error => {
          clearTimeout(timeoutId);
          reject(error);
        });
    });
  }

  /**
   * Record request metrics
   */
  private recordMetrics(metrics: RequestMetrics): void {
    const existing = this.metrics.get(metrics.operationType) || [];
    existing.push(metrics);

    // Keep only last 1000 metrics per operation type
    if (existing.length > 1000) {
      existing.shift();
    }

    this.metrics.set(metrics.operationType, existing);
  }

  /**
   * Get metrics summary for operation type
   */
  public getMetricsSummary(operationType: string): {
    totalRequests: number;
    successRate: number;
    p50: number;
    p95: number;
    p99: number;
  } | null {
    const metrics = this.metrics.get(operationType);
    if (!metrics || metrics.length === 0) return null;

    const successfulDurations = metrics
      .filter(m => m.success)
      .map(m => m.durationMs)
      .sort((a, b) => a - b);

    const percentile = (p: number) => {
      const index = Math.min(
        successfulDurations.length - 1,
        Math.floor((p / 100) * successfulDurations.length)
      );
      return successfulDurations[index] ?? 0;
    };

    return {
      totalRequests: metrics.length,
      successRate: (successfulDurations.length / metrics.length) * 100,
      p50: percentile(50),
      p95: percentile(95),
      p99: percentile(99)
    };
  }
}
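The adaptive calculation above boils down to: take a percentile of recent successful durations, add a buffer, and clamp to bounds. As a standalone sketch:

```typescript
// Sketch: adaptive timeout = clamp(pN(durations) * buffer, min, max).
function adaptiveTimeoutMs(
  successDurationsMs: number[],
  minMs = 5000,
  maxMs = 120000,
  percentile = 95,
  buffer = 1.5 // 50% headroom over the historical percentile
): number {
  const sorted = [...successDurationsMs].sort((a, b) => a - b);
  const idx = Math.min(
    sorted.length - 1,
    Math.floor((percentile / 100) * sorted.length)
  );
  return Math.max(minMs, Math.min(maxMs, sorted[idx] * buffer));
}
```

The floor keeps a handful of fast samples from producing an unrealistically tight timeout, and the ceiling bounds worst-case user wait time.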

Explore more about performance monitoring for ChatGPT apps or learn optimization techniques.

Fallback Response Generator

Generate graceful fallback responses when errors occur:

/**
 * Fallback Response Generator for ChatGPT Apps
 * Provides contextual, user-friendly fallback messages
 */

export interface FallbackContext {
  errorCategory: ErrorCategory;
  operationType: string;
  userInput?: string;
  previousContext?: string;
  severity: ErrorSeverity;
}

export interface FallbackResponse {
  message: string;
  suggestedActions: string[];
  alternativeData?: any;
  preserveContext: boolean;
}

export class FallbackResponseGenerator {
  private templates: Map<string, string[]> = new Map();

  constructor() {
    this.initializeTemplates();
  }

  /**
   * Generate contextual fallback response
   */
  public generateFallback(context: FallbackContext): FallbackResponse {
    const template = this.selectTemplate(context);
    const message = this.personalize(template, context);
    const suggestedActions = this.generateSuggestedActions(context);
    const alternativeData = this.generateAlternativeData(context);

    return {
      message,
      suggestedActions,
      alternativeData,
      preserveContext: context.severity !== ErrorSeverity.CRITICAL
    };
  }

  /**
   * Initialize response templates
   */
  private initializeTemplates(): void {
    // Transient errors
    this.templates.set('transient_rate_limit', [
      "We're experiencing high demand. Your request will be processed shortly.",
      "Traffic spike detected. Retrying your request in a moment.",
      "Temporary slowdown. Please wait while we process your request."
    ]);

    this.templates.set('transient_timeout', [
      "This is taking longer than expected. Optimizing and trying again.",
      "Request timed out. Retrying with performance improvements.",
      "Processing took too long. Let me try a faster approach."
    ]);

    // Client errors
    this.templates.set('client_auth', [
      "Your session has expired. Please log in again to continue.",
      "Authentication required. Please refresh your session."
    ]);

    this.templates.set('client_invalid', [
      "I didn't quite understand that. Could you rephrase your request?",
      "That request wasn't formatted correctly. Let's try a different approach."
    ]);

    // Model errors
    this.templates.set('model_validation', [
      "I got a malformed response. Let me clarify and try again.",
      "The AI response was incomplete. Requesting a better answer.",
      "Response didn't match expectations. Asking for clarification."
    ]);

    // Business logic errors
    this.templates.set('business_quota', [
      "You've reached your plan limit. Upgrade to continue using this feature.",
      "Monthly quota exceeded. Visit billing to increase your limits."
    ]);
  }

  private selectTemplate(context: FallbackContext): string {
    const key = `${context.errorCategory}_${context.operationType}`;
    const templates = this.templates.get(key) || [
      "Something went wrong. Please try again in a moment."
    ];

    // Random selection for variety
    return templates[Math.floor(Math.random() * templates.length)];
  }

  private personalize(template: string, context: FallbackContext): string {
    // Add user input context if available, truncating long inputs
    if (context.userInput && context.userInput.length > 0) {
      const snippet =
        context.userInput.length > 50
          ? `${context.userInput.substring(0, 50)}...`
          : context.userInput;
      return `${template} (Regarding: "${snippet}")`;
    }

    return template;
  }

  private generateSuggestedActions(context: FallbackContext): string[] {
    const actions: string[] = [];

    switch (context.errorCategory) {
      case ErrorCategory.TRANSIENT:
        actions.push("Wait a moment and try again");
        actions.push("Simplify your request");
        break;

      case ErrorCategory.CLIENT:
        actions.push("Check your input and try again");
        actions.push("Refresh the page");
        break;

      case ErrorCategory.MODEL:
        actions.push("Rephrase your request");
        actions.push("Break request into smaller parts");
        break;

      case ErrorCategory.BUSINESS:
        actions.push("Review your account limits");
        actions.push("Contact support for assistance");
        break;
    }

    return actions;
  }

  private generateAlternativeData(context: FallbackContext): any {
    // Provide cached or simplified data when possible
    if (context.previousContext) {
      return {
        cached: true,
        message: "Showing previous results while we resolve the issue"
      };
    }

    return null;
  }
}

For more on user experience, see our guides on ChatGPT app UX best practices and conversational design patterns.

Best Practices for Production Error Handling

1. Always Log Errors with Context

Include user ID, session ID, request ID, and full stack traces in error logs. This enables rapid debugging and pattern detection.

2. Monitor Error Rates by Category

Track error rates segmented by category, severity, and operation type. Set alerts for abnormal spikes.

3. Preserve Conversational Context

When errors occur mid-conversation, preserve chat history so users don't lose context when retrying.

4. Test Error Scenarios

Simulate network failures, API rate limits, timeouts, and malformed responses in staging to validate error handlers.

5. Provide Clear Recovery Paths

Every error message should include actionable next steps—don't just say "error occurred," explain what the user should do.

6. Implement Graceful Degradation

When primary features fail, provide simplified alternatives rather than complete failure.
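A minimal sketch of this pattern: try the live source, and serve the last cached result when it fails (the function and parameter names here are illustrative):

```typescript
// Sketch: graceful degradation via a read-through cache.
// On success, refresh the cache; on failure, serve stale data if we have it.
async function getWithFallback<T>(
  fetchLive: () => Promise<T>,
  cache: Map<string, T>,
  key: string
): Promise<{ data: T; degraded: boolean }> {
  try {
    const data = await fetchLive();
    cache.set(key, data); // keep the cache warm for future failures
    return { data, degraded: false };
  } catch (err) {
    const cached = cache.get(key);
    if (cached !== undefined) {
      return { data: cached, degraded: true }; // stale but usable
    }
    throw err; // nothing to degrade to: surface the error
  }
}
```

Pair the `degraded` flag with your fallback response generator so the user knows they're seeing cached results.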

7. Rate Limit Error Notifications

Don't spam users with error messages—batch errors and notify once per timeframe.
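One way to sketch this: track the last notification time per user and suppress repeats inside a window (the window length is illustrative):

```typescript
// Sketch: allow at most one error notification per user per time window.
class ErrorNotificationLimiter {
  private lastNotified = new Map<string, number>();

  constructor(private windowMs: number = 60_000) {}

  // Returns true if a notification should be shown; records the timestamp.
  shouldNotify(userId: string, now: number = Date.now()): boolean {
    const last = this.lastNotified.get(userId);
    if (last !== undefined && now - last < this.windowMs) {
      return false; // still within the quiet window: suppress
    }
    this.lastNotified.set(userId, now);
    return true;
  }
}
```

Errors suppressed from the UI should still be logged and counted in metrics; the limiter only throttles what the user sees.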

Ready to build ChatGPT apps with production-grade error handling? Start your free trial at MakeAIHQ and deploy resilient AI applications without writing code. Our platform includes built-in retry logic, circuit breakers, and fallback strategies—all pre-configured for ChatGPT App Store approval.

Related Resources

  • Building Production-Ready ChatGPT Apps: Complete Guide
  • Monitoring and Observability for ChatGPT Apps
  • Performance Optimization for ChatGPT Applications
  • ChatGPT App Deployment Best Practices
  • Scaling Strategies for High-Traffic ChatGPT Apps
  • Reliability Patterns for AI Applications
  • Conversational UX Design for ChatGPT Apps
  • MakeAIHQ Features: No-Code ChatGPT App Builder
  • AI Conversational Editor for Rapid Development
  • Template Marketplace: Pre-Built ChatGPT Apps

About MakeAIHQ: We're the no-code platform that helps businesses build professional ChatGPT apps and deploy to the ChatGPT App Store in 48 hours—no coding required. Join thousands of businesses reaching 800 million ChatGPT users.

Start Building Free • View Templates • Read Documentation