Widget Touch Gesture Support for ChatGPT Apps

Mobile devices account for over 60% of ChatGPT usage, making touch gesture support critical for widget success. Users expect intuitive swipe, pinch, drag, and tap interactions that feel native to their platform. This guide demonstrates how to implement production-ready touch gesture support for ChatGPT widgets using modern web APIs and React patterns.

Touch gestures transform static widgets into interactive experiences. A fitness app widget benefits from swipeable workout cards, a map widget needs pinch-to-zoom, and a task manager requires drag-and-drop. Proper gesture implementation enhances usability while maintaining performance on mobile devices with limited processing power.

This article covers touch event handling, gesture recognition algorithms, performance optimization techniques, multi-touch support, and accessibility considerations. Each section includes production-ready TypeScript and React code examples tested across iOS, Android, and desktop browsers. You'll learn to build gesture systems that respond instantly to user input while consuming minimal battery and CPU resources.

Understanding Touch Event Handling

Touch events differ fundamentally from mouse events. While mouse interactions track a single pointer, touch events must handle multiple simultaneous contact points. The web platform provides four core touch event types: touchstart fires when fingers contact the screen, touchmove tracks finger movement, touchend detects when fingers lift, and touchcancel fires when the browser interrupts a gesture (an incoming call or a system dialog, for example). Each event carries TouchList collections (touches, targetTouches, and changedTouches) describing the active contact points with screen coordinates.

Modern browsers also support Pointer Events, a unified API handling mouse, touch, and stylus input through a single interface. Pointer Events simplify cross-device development by abstracting input differences. However, some gesture patterns require direct touch event access for multi-touch scenarios like pinch-to-zoom.

Event delegation improves performance for widgets with many interactive elements. Rather than attaching listeners to individual buttons or cards, attach a single listener to a parent container. The listener examines event.target to determine which child element received the touch, reducing memory overhead and per-element listener setup cost.
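The delegation lookup itself can be sketched independently of the DOM. In this sketch the node shape and the `action` field are hypothetical stand-ins for DOM elements carrying data-action attributes; the dispatcher walks from the touched node up through its ancestors until it finds one that declares an action:

```typescript
// Minimal delegation sketch. WidgetNode and `action` are illustrative
// stand-ins for DOM elements with data-action attributes.
interface WidgetNode {
  action?: string;     // e.g. the value of a data-action attribute
  parent?: WidgetNode; // ancestor chain, like element.parentElement
}

type ActionHandler = (node: WidgetNode) => string;

// One dispatcher serves the whole container instead of one listener per child.
function dispatchDelegated(
  target: WidgetNode | undefined,
  handlers: Record<string, ActionHandler>
): string | null {
  for (let node = target; node; node = node.parent) {
    const handler = node.action !== undefined ? handlers[node.action] : undefined;
    if (handler) return handler(node);
  }
  return null; // touch landed outside any actionable child
}
```

In a real widget, the same upward walk is what event.target.closest('[data-action]') performs inside a single container-level touch listener.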

Passive event listeners prevent touch interactions from blocking scrolling. By default, browsers must wait for touch event handlers to complete before scrolling, creating lag. Marking listeners as passive with { passive: true } tells the browser the handler won't call preventDefault(), enabling smooth scrolling even during gesture processing.

Here's a production-ready touch event handler implementing these concepts:

interface TouchPoint {
  id: number;
  x: number;
  y: number;
  timestamp: number;
}

interface TouchHandlerConfig {
  element: HTMLElement;
  onTouchStart?: (points: TouchPoint[]) => void;
  onTouchMove?: (points: TouchPoint[]) => void;
  onTouchEnd?: (points: TouchPoint[]) => void;
  passive?: boolean;
  preventScroll?: boolean;
}

class TouchEventHandler {
  private element: HTMLElement;
  private config: TouchHandlerConfig;
  private activeTouches: Map<number, TouchPoint> = new Map();
  private isPassive: boolean;

  constructor(config: TouchHandlerConfig) {
    this.element = config.element;
    this.config = config;
    // preventScroll requires a non-passive listener, otherwise the browser
    // silently ignores preventDefault().
    this.isPassive = (config.passive ?? true) && !config.preventScroll;
    // Bind once and reassign so addEventListener and removeEventListener
    // (in destroy) receive identical function references.
    this.handleTouchStart = this.handleTouchStart.bind(this);
    this.handleTouchMove = this.handleTouchMove.bind(this);
    this.handleTouchEnd = this.handleTouchEnd.bind(this);
    this.attachListeners();
  }

  private attachListeners(): void {
    const options = { passive: this.isPassive };

    this.element.addEventListener('touchstart', this.handleTouchStart, options);
    this.element.addEventListener('touchmove', this.handleTouchMove, options);
    this.element.addEventListener('touchend', this.handleTouchEnd, options);
    this.element.addEventListener('touchcancel', this.handleTouchEnd, options);
  }

  private handleTouchStart(event: TouchEvent): void {
    if (this.config.preventScroll && !this.isPassive) {
      event.preventDefault();
    }

    const points: TouchPoint[] = [];
    for (let i = 0; i < event.changedTouches.length; i++) {
      const touch = event.changedTouches[i];
      const point: TouchPoint = {
        id: touch.identifier,
        x: touch.clientX,
        y: touch.clientY,
        timestamp: Date.now()
      };
      this.activeTouches.set(touch.identifier, point);
      points.push(point);
    }

    this.config.onTouchStart?.(points);
  }

  private handleTouchMove(event: TouchEvent): void {
    if (this.config.preventScroll && !this.isPassive) {
      event.preventDefault();
    }

    const points: TouchPoint[] = [];
    for (let i = 0; i < event.changedTouches.length; i++) {
      const touch = event.changedTouches[i];
      const point: TouchPoint = {
        id: touch.identifier,
        x: touch.clientX,
        y: touch.clientY,
        timestamp: Date.now()
      };
      this.activeTouches.set(touch.identifier, point);
      points.push(point);
    }

    this.config.onTouchMove?.(points);
  }

  private handleTouchEnd(event: TouchEvent): void {
    const points: TouchPoint[] = [];
    for (let i = 0; i < event.changedTouches.length; i++) {
      const touch = event.changedTouches[i];
      const point = this.activeTouches.get(touch.identifier);
      if (point) {
        points.push(point);
        this.activeTouches.delete(touch.identifier);
      }
    }

    this.config.onTouchEnd?.(points);
  }

  public getActiveTouches(): TouchPoint[] {
    return Array.from(this.activeTouches.values());
  }

  public destroy(): void {
    this.element.removeEventListener('touchstart', this.handleTouchStart);
    this.element.removeEventListener('touchmove', this.handleTouchMove);
    this.element.removeEventListener('touchend', this.handleTouchEnd);
    this.element.removeEventListener('touchcancel', this.handleTouchEnd);
    this.activeTouches.clear();
  }
}

// Usage example
const handler = new TouchEventHandler({
  element: document.getElementById('widget-container')!,
  onTouchStart: (points) => {
    console.log(`Touch started with ${points.length} finger(s)`);
  },
  onTouchMove: (points) => {
    console.log(`Moving ${points.length} finger(s)`);
  },
  onTouchEnd: (points) => {
    console.log(`Touch ended for ${points.length} finger(s)`);
  },
  passive: true
});

This foundation supports all gesture patterns while maintaining performance and compatibility.

Implementing Gesture Recognition

Gesture recognition transforms raw touch events into meaningful user actions. A swipe gesture requires detecting directional movement exceeding a minimum distance threshold. Users expect swipes to feel responsive, typically requiring 50-75 pixels of movement within 300 milliseconds. Too sensitive a threshold creates accidental swipes; too strict a threshold makes the interface feel sluggish.

Velocity calculations improve swipe detection accuracy. By measuring pixels traveled per millisecond, your gesture recognizer can distinguish deliberate swipes from slow dragging. A swipe velocity threshold of 0.3 pixels per millisecond works well across devices. Calculate velocity by dividing distance by time elapsed between touchstart and touchend.

Direction detection requires comparing horizontal and vertical movement. If horizontal distance exceeds vertical distance by a threshold (typically 2:1 ratio), classify as a horizontal swipe. This prevents diagonal movements from triggering unwanted swipes. Lock the dominant axis after the first 10 pixels of movement to prevent oscillation.
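Put together, the thresholds above reduce to a small pure classifier. The numbers are the illustrative defaults from this section, not fixed constants:

```typescript
type Direction = 'left' | 'right' | 'up' | 'down' | null;

// Classify a completed touch using distance, duration, velocity, and the
// 2:1 dominant-axis ratio described above. Returns null for taps, slow
// drags, and ambiguous diagonal movement.
function classifySwipe(
  deltaX: number,
  deltaY: number,
  durationMs: number,
  opts = { minDistance: 60, minVelocity: 0.3, maxDuration: 300, axisRatio: 2 }
): Direction {
  if (durationMs > opts.maxDuration) return null;

  const distance = Math.hypot(deltaX, deltaY);
  if (distance < opts.minDistance) return null;
  if (distance / durationMs < opts.minVelocity) return null;

  const absX = Math.abs(deltaX);
  const absY = Math.abs(deltaY);
  if (absX / (absY || 1) > opts.axisRatio) return deltaX > 0 ? 'right' : 'left';
  if (absY / (absX || 1) > opts.axisRatio) return deltaY > 0 ? 'down' : 'up';
  return null; // diagonal movement: deliberately unclassified
}
```

For example, 100 pixels of rightward movement in 150ms classifies as 'right' (velocity ~0.67 px/ms), while the same movement over 500ms is rejected as a slow drag.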

Here's a production-ready swipe gesture detector:

import { useState, useEffect, useRef, useCallback } from 'react';

interface SwipeGestureConfig {
  minDistance?: number;
  minVelocity?: number;
  maxDuration?: number;
  directionThreshold?: number;
}

type SwipeDirection = 'left' | 'right' | 'up' | 'down';

interface SwipeResult {
  direction: SwipeDirection;
  distance: number;
  velocity: number;
  duration: number;
}

const DEFAULT_CONFIG: Required<SwipeGestureConfig> = {
  minDistance: 60,
  minVelocity: 0.3,
  maxDuration: 300,
  directionThreshold: 2
};

export function useSwipeGesture(
  elementRef: React.RefObject<HTMLElement>,
  config: SwipeGestureConfig = {}
) {
  const [swipeResult, setSwipeResult] = useState<SwipeResult | null>(null);
  const configRef = useRef({ ...DEFAULT_CONFIG, ...config });
  const touchStartRef = useRef<{
    x: number;
    y: number;
    timestamp: number;
  } | null>(null);

  const handleTouchStart = useCallback((event: TouchEvent) => {
    const touch = event.touches[0];
    touchStartRef.current = {
      x: touch.clientX,
      y: touch.clientY,
      timestamp: Date.now()
    };
    setSwipeResult(null);
  }, []);

  const handleTouchMove = useCallback((event: TouchEvent) => {
    if (!touchStartRef.current) return;

    // Prevent default only if we detect intentional swipe
    const touch = event.touches[0];
    const deltaX = Math.abs(touch.clientX - touchStartRef.current.x);
    const deltaY = Math.abs(touch.clientY - touchStartRef.current.y);

    if (deltaX > 10 || deltaY > 10) {
      const cfg = configRef.current;
      const isHorizontalSwipe = deltaX / (deltaY || 1) > cfg.directionThreshold;
      const isVerticalSwipe = deltaY / (deltaX || 1) > cfg.directionThreshold;

      if (isHorizontalSwipe || isVerticalSwipe) {
        event.preventDefault();
      }
    }
  }, []);

  const handleTouchEnd = useCallback((event: TouchEvent) => {
    if (!touchStartRef.current) return;

    const touch = event.changedTouches[0];
    const startPos = touchStartRef.current;
    const cfg = configRef.current;

    const deltaX = touch.clientX - startPos.x;
    const deltaY = touch.clientY - startPos.y;
    const duration = Date.now() - startPos.timestamp;

    const absDeltaX = Math.abs(deltaX);
    const absDeltaY = Math.abs(deltaY);

    // Check if swipe meets criteria
    if (duration > cfg.maxDuration) {
      touchStartRef.current = null;
      return;
    }

    const distance = Math.sqrt(deltaX * deltaX + deltaY * deltaY);
    const velocity = distance / duration;

    if (distance < cfg.minDistance || velocity < cfg.minVelocity) {
      touchStartRef.current = null;
      return;
    }

    // Determine direction
    let direction: SwipeDirection;
    const isHorizontalSwipe = absDeltaX / (absDeltaY || 1) > cfg.directionThreshold;
    const isVerticalSwipe = absDeltaY / (absDeltaX || 1) > cfg.directionThreshold;

    if (isHorizontalSwipe) {
      direction = deltaX > 0 ? 'right' : 'left';
    } else if (isVerticalSwipe) {
      direction = deltaY > 0 ? 'down' : 'up';
    } else {
      // Diagonal swipe - not supported
      touchStartRef.current = null;
      return;
    }

    setSwipeResult({
      direction,
      distance,
      velocity,
      duration
    });

    touchStartRef.current = null;
  }, []);

  useEffect(() => {
    const element = elementRef.current;
    if (!element) return;

    element.addEventListener('touchstart', handleTouchStart, { passive: true });
    element.addEventListener('touchmove', handleTouchMove, { passive: false });
    element.addEventListener('touchend', handleTouchEnd, { passive: true });

    return () => {
      element.removeEventListener('touchstart', handleTouchStart);
      element.removeEventListener('touchmove', handleTouchMove);
      element.removeEventListener('touchend', handleTouchEnd);
    };
  }, [elementRef, handleTouchStart, handleTouchMove, handleTouchEnd]);

  return swipeResult;
}

// Usage example
function SwipeableCard() {
  const cardRef = useRef<HTMLDivElement>(null);
  const swipe = useSwipeGesture(cardRef, {
    minDistance: 75,
    minVelocity: 0.4
  });

  useEffect(() => {
    if (swipe) {
      console.log(`Swiped ${swipe.direction} at ${swipe.velocity.toFixed(2)} px/ms`);

      if (swipe.direction === 'left') {
        // Handle swipe left (e.g., next card)
      } else if (swipe.direction === 'right') {
        // Handle swipe right (e.g., previous card)
      }
    }
  }, [swipe]);

  return (
    <div
      ref={cardRef}
      style={{
        width: '100%',
        height: '200px',
        background: '#f0f0f0',
        borderRadius: '8px',
        touchAction: 'pan-y' // Allow vertical scroll, prevent horizontal
      }}
    >
      Swipe me left or right
    </div>
  );
}

This swipe detector handles edge cases like diagonal movement and accidental touches while maintaining smooth performance.

Optimizing Touch Performance

Performance optimization prevents janky interactions that frustrate users. Touch event handlers execute on the main thread, competing with rendering and JavaScript execution. A slow handler blocks scrolling, creating stuttering that makes widgets feel broken. The key optimization techniques are passive listeners, debouncing, throttling, and requestAnimationFrame scheduling.

Passive listeners, as demonstrated earlier, eliminate scroll blocking by declaring handlers won't prevent default browser behavior. This single change can improve scroll performance by 30-50% on mobile devices. Use passive listeners unless you explicitly need to prevent scrolling, such as in drag-and-drop or drawing interfaces.

The requestAnimationFrame API synchronizes gesture processing with browser rendering cycles. Rather than updating UI state immediately in a touchmove handler (which fires at unpredictable intervals), schedule updates for the next frame. This prevents multiple redundant updates between frames and keeps animations at a smooth 60fps.

Throttling reduces processing overhead for rapid-fire events. A touchmove event can fire 60-120 times per second during fast gestures. If your handler performs expensive calculations, throttle to process at most once per 16ms (one frame at 60fps). Use a timestamp check rather than setTimeout for more reliable timing, and reserve debouncing (waiting for events to stop entirely) for work that should only run after a gesture settles.
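A timestamp-based throttle fits in a few lines. This sketch takes an injectable clock purely so the behavior is easy to verify; in production the default Date.now is fine:

```typescript
// Invoke fn at most once per intervalMs, judged by a timestamp comparison
// rather than setTimeout. `now` defaults to Date.now but is injectable.
function createThrottle<Args extends unknown[]>(
  fn: (...args: Args) => void,
  intervalMs = 16,
  now: () => number = () => Date.now()
): (...args: Args) => boolean {
  let lastRun = -Infinity;
  return (...args: Args): boolean => {
    const t = now();
    if (t - lastRun < intervalMs) return false; // dropped: too soon
    lastRun = t;
    fn(...args);
    return true; // executed
  };
}
```

Returning a boolean lets callers distinguish executed calls from dropped ones, which helps when debugging gesture pipelines.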

Here's a performance-optimized touch handler:

interface PerformanceOptimizedHandlerConfig {
  element: HTMLElement;
  onGestureUpdate: (state: GestureState) => void;
  throttleMs?: number;
}

interface GestureState {
  x: number;
  y: number;
  deltaX: number;
  deltaY: number;
  velocity: number;
  timestamp: number;
}

class PerformanceOptimizedTouchHandler {
  private element: HTMLElement;
  private onUpdate: (state: GestureState) => void;
  private throttleMs: number;
  private lastProcessedTime: number = 0;
  private rafId: number | null = null;
  private pendingState: GestureState | null = null;
  private startPos: { x: number; y: number; timestamp: number } | null = null;
  private lastPos: { x: number; y: number; timestamp: number } | null = null;

  constructor(config: PerformanceOptimizedHandlerConfig) {
    this.element = config.element;
    this.onUpdate = config.onGestureUpdate;
    this.throttleMs = config.throttleMs ?? 16; // 60fps default
    // Bind once and reassign so removeEventListener (in destroy) receives
    // the same function references that addEventListener did.
    this.handleTouchStart = this.handleTouchStart.bind(this);
    this.handleTouchMove = this.handleTouchMove.bind(this);
    this.handleTouchEnd = this.handleTouchEnd.bind(this);
    this.attachListeners();
  }

  private attachListeners(): void {
    this.element.addEventListener('touchstart', this.handleTouchStart, {
      passive: true
    });
    this.element.addEventListener('touchmove', this.handleTouchMove, {
      passive: false // Need preventDefault for custom gestures
    });
    this.element.addEventListener('touchend', this.handleTouchEnd, {
      passive: true
    });
  }

  private handleTouchStart(event: TouchEvent): void {
    const touch = event.touches[0];
    const now = performance.now();

    this.startPos = {
      x: touch.clientX,
      y: touch.clientY,
      timestamp: now
    };
    this.lastPos = { ...this.startPos };
    this.lastProcessedTime = now;
  }

  private handleTouchMove(event: TouchEvent): void {
    if (!this.startPos || !this.lastPos) return;

    const touch = event.touches[0];
    const now = performance.now();

    // Throttle processing
    if (now - this.lastProcessedTime < this.throttleMs) {
      return;
    }

    const deltaX = touch.clientX - this.lastPos.x;
    const deltaY = touch.clientY - this.lastPos.y;
    const timeDelta = now - this.lastPos.timestamp;
    const distance = Math.sqrt(deltaX * deltaX + deltaY * deltaY);
    const velocity = timeDelta > 0 ? distance / timeDelta : 0;

    this.pendingState = {
      x: touch.clientX,
      y: touch.clientY,
      deltaX,
      deltaY,
      velocity,
      timestamp: now
    };

    this.lastPos = {
      x: touch.clientX,
      y: touch.clientY,
      timestamp: now
    };

    // Schedule update on next animation frame
    if (this.rafId === null) {
      this.rafId = requestAnimationFrame(this.processUpdate.bind(this));
    }

    this.lastProcessedTime = now;
  }

  private processUpdate(): void {
    this.rafId = null;

    if (this.pendingState) {
      this.onUpdate(this.pendingState);
      this.pendingState = null;
    }
  }

  private handleTouchEnd(): void {
    if (this.rafId !== null) {
      cancelAnimationFrame(this.rafId);
      this.rafId = null;
    }

    this.startPos = null;
    this.lastPos = null;
    this.pendingState = null;
  }

  public destroy(): void {
    if (this.rafId !== null) {
      cancelAnimationFrame(this.rafId);
    }

    this.element.removeEventListener('touchstart', this.handleTouchStart);
    this.element.removeEventListener('touchmove', this.handleTouchMove);
    this.element.removeEventListener('touchend', this.handleTouchEnd);
  }
}

These optimizations maintain 60fps even during complex gestures on low-end mobile devices.

Supporting Multi-Touch Gestures

Multi-touch gestures enable advanced interactions like pinch-to-zoom, rotation, and multi-finger swipes. The challenge lies in tracking multiple simultaneous touch points and computing aggregate metrics. Pinch-to-zoom requires measuring the distance between two fingers and detecting expansion or contraction. Rotation gestures measure angle changes between two touch points.

Touch point identification uses the identifier property to track fingers across events. Each touch maintains a unique identifier for its lifecycle, enabling you to match touchmove events to their originating touchstart events. Store active touches in a Map keyed by identifier for O(1) lookup performance.

Distance calculation for pinch detection uses the Euclidean distance formula between two points. Track the initial distance on the first touchmove after detecting two fingers, then compare subsequent distances to compute scale factors. A distance increase indicates zoom in; a decrease indicates zoom out. Apply exponential smoothing to prevent jittery scaling.
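The scale arithmetic and the smoothing step reduce to two small pure functions. This is a sketch; the hook below omits smoothing for brevity:

```typescript
// Scale factor from finger distances: spreading fingers grows the scale,
// pinching them together shrinks it.
function pinchScale(
  initialDistance: number,
  currentDistance: number,
  initialScale: number
): number {
  return initialScale * (currentDistance / initialDistance);
}

// Exponential smoothing: blend each raw reading with the previous smoothed
// value. alpha in (0, 1]; smaller alpha is smoother but lags more.
function smoothScale(previous: number, raw: number, alpha = 0.4): number {
  return previous + alpha * (raw - previous);
}
```

Feeding each frame's pinchScale result through smoothScale damps the finger jitter that otherwise makes the zoomed content tremble.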

Here's a production-ready pinch-to-zoom implementation:

import { useState, useEffect, useRef, useCallback } from 'react';

interface PinchGestureConfig {
  minScale?: number;
  maxScale?: number;
  scaleSensitivity?: number;
}

interface PinchGestureState {
  scale: number;
  centerX: number;
  centerY: number;
  isActive: boolean;
}

const DEFAULT_PINCH_CONFIG: Required<PinchGestureConfig> = {
  minScale: 0.5,
  maxScale: 4,
  scaleSensitivity: 1
};

export function usePinchZoom(
  elementRef: React.RefObject<HTMLElement>,
  config: PinchGestureConfig = {}
) {
  const [gestureState, setGestureState] = useState<PinchGestureState>({
    scale: 1,
    centerX: 0,
    centerY: 0,
    isActive: false
  });

  const configRef = useRef({ ...DEFAULT_PINCH_CONFIG, ...config });
  const initialDistanceRef = useRef<number | null>(null);
  const initialScaleRef = useRef<number>(1);
  const touchCacheRef = useRef<Map<number, Touch>>(new Map());

  const getDistance = useCallback((touch1: Touch, touch2: Touch): number => {
    const dx = touch1.clientX - touch2.clientX;
    const dy = touch1.clientY - touch2.clientY;
    return Math.sqrt(dx * dx + dy * dy);
  }, []);

  const getCenter = useCallback((touch1: Touch, touch2: Touch) => {
    return {
      x: (touch1.clientX + touch2.clientX) / 2,
      y: (touch1.clientY + touch2.clientY) / 2
    };
  }, []);

  const handleTouchStart = useCallback((event: TouchEvent) => {
    // Update touch cache
    for (let i = 0; i < event.touches.length; i++) {
      const touch = event.touches[i];
      touchCacheRef.current.set(touch.identifier, touch);
    }

    // Only process pinch with exactly 2 fingers
    if (event.touches.length === 2) {
      event.preventDefault();

      const touch1 = event.touches[0];
      const touch2 = event.touches[1];

      initialDistanceRef.current = getDistance(touch1, touch2);
      initialScaleRef.current = gestureState.scale;

      // Convert the midpoint from viewport to element-local coordinates
      // so it can be used directly as a transform-origin.
      const center = getCenter(touch1, touch2);
      const rect = elementRef.current?.getBoundingClientRect();

      setGestureState(prev => ({
        ...prev,
        centerX: center.x - (rect?.left ?? 0),
        centerY: center.y - (rect?.top ?? 0),
        isActive: true
      }));
    }
  }, [getDistance, getCenter, gestureState.scale, elementRef]);

  const handleTouchMove = useCallback((event: TouchEvent) => {
    if (event.touches.length !== 2 || initialDistanceRef.current === null) {
      return;
    }

    event.preventDefault();

    const touch1 = event.touches[0];
    const touch2 = event.touches[1];

    const currentDistance = getDistance(touch1, touch2);
    // Element-local midpoint, usable directly as a transform-origin
    const center = getCenter(touch1, touch2);
    const rect = elementRef.current?.getBoundingClientRect();

    // Calculate scale change
    const distanceRatio = currentDistance / initialDistanceRef.current;
    const cfg = configRef.current;
    const rawScale = initialScaleRef.current * distanceRatio * cfg.scaleSensitivity;

    // Clamp scale to min/max bounds
    const clampedScale = Math.max(
      cfg.minScale,
      Math.min(cfg.maxScale, rawScale)
    );

    setGestureState({
      scale: clampedScale,
      centerX: center.x - (rect?.left ?? 0),
      centerY: center.y - (rect?.top ?? 0),
      isActive: true
    });
  }, [getDistance, getCenter, elementRef]);

  const handleTouchEnd = useCallback((event: TouchEvent) => {
    // Update touch cache by removing ended touches
    for (let i = 0; i < event.changedTouches.length; i++) {
      const touch = event.changedTouches[i];
      touchCacheRef.current.delete(touch.identifier);
    }

    // If we no longer have 2 fingers, end pinch gesture
    if (event.touches.length < 2) {
      initialDistanceRef.current = null;
      setGestureState(prev => ({
        ...prev,
        isActive: false
      }));
    }
  }, []);

  useEffect(() => {
    const element = elementRef.current;
    if (!element) return;

    element.addEventListener('touchstart', handleTouchStart, { passive: false });
    element.addEventListener('touchmove', handleTouchMove, { passive: false });
    element.addEventListener('touchend', handleTouchEnd, { passive: true });
    element.addEventListener('touchcancel', handleTouchEnd, { passive: true });

    return () => {
      element.removeEventListener('touchstart', handleTouchStart);
      element.removeEventListener('touchmove', handleTouchMove);
      element.removeEventListener('touchend', handleTouchEnd);
      element.removeEventListener('touchcancel', handleTouchEnd);
    };
  }, [elementRef, handleTouchStart, handleTouchMove, handleTouchEnd]);

  return gestureState;
}

// Usage example with image zoom
function ZoomableImage({ src }: { src: string }) {
  const containerRef = useRef<HTMLDivElement>(null);
  const pinchState = usePinchZoom(containerRef, {
    minScale: 1,
    maxScale: 5,
    scaleSensitivity: 1
  });

  return (
    <div
      ref={containerRef}
      style={{
        width: '100%',
        height: '400px',
        overflow: 'hidden',
        touchAction: 'none',
        position: 'relative'
      }}
    >
      <img
        src={src}
        alt="Zoomable content"
        style={{
          width: '100%',
          height: '100%',
          objectFit: 'contain',
          transform: `scale(${pinchState.scale})`,
          transformOrigin: `${pinchState.centerX}px ${pinchState.centerY}px`,
          transition: pinchState.isActive ? 'none' : 'transform 0.2s ease-out'
        }}
      />
    </div>
  );
}

This implementation handles multi-touch tracking, distance calculations, and smooth scaling with proper bounds checking.

Additional Gesture Patterns

Drag-and-drop gestures enable card reordering, customizable dashboards, and interactive layouts. Implement drag by tracking touch position in touchmove and translating elements with CSS transforms. Use transform: translate3d() instead of left/top for hardware-accelerated performance.
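The transform itself is a one-liner. A sketch of the style value a drag handler would apply on each frame:

```typescript
// Build a GPU-composited transform for a dragged element. translate3d
// runs on the compositor; animating left/top would trigger layout and
// paint on every frame.
function dragTransform(
  startX: number,
  startY: number,
  currentX: number,
  currentY: number
): string {
  return `translate3d(${currentX - startX}px, ${currentY - startY}px, 0)`;
}
```

Assigning the result to element.style.transform inside a requestAnimationFrame callback keeps the drag smooth even while other JavaScript runs.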

Long press gestures trigger contextual menus or alternative actions. Detect a long press by starting a timer on touchstart and firing if the touch hasn't moved more than 10 pixels after 500ms. Cancel the timer on touchmove or touchend. Provide visual feedback, such as an expanding circle, to indicate progress.

Here's a reusable drag-and-drop hook:

import { useState, useEffect, useRef, useCallback } from 'react';

interface DragState {
  isDragging: boolean;
  currentX: number;
  currentY: number;
  offsetX: number;
  offsetY: number;
}

export function useDragGesture(elementRef: React.RefObject<HTMLElement>) {
  const [dragState, setDragState] = useState<DragState>({
    isDragging: false,
    currentX: 0,
    currentY: 0,
    offsetX: 0,
    offsetY: 0
  });

  const startPosRef = useRef<{ x: number; y: number } | null>(null);
  const hasMoved = useRef(false);

  const handleTouchStart = useCallback((event: TouchEvent) => {
    const touch = event.touches[0];
    const element = elementRef.current;
    if (!element) return;

    const rect = element.getBoundingClientRect();

    startPosRef.current = {
      x: touch.clientX,
      y: touch.clientY
    };
    hasMoved.current = false;

    setDragState({
      isDragging: true,
      currentX: touch.clientX,
      currentY: touch.clientY,
      offsetX: touch.clientX - rect.left,
      offsetY: touch.clientY - rect.top
    });
  }, [elementRef]);

  const handleTouchMove = useCallback((event: TouchEvent) => {
    if (!startPosRef.current) return;

    const touch = event.touches[0];
    const deltaX = Math.abs(touch.clientX - startPosRef.current.x);
    const deltaY = Math.abs(touch.clientY - startPosRef.current.y);

    // Prevent scroll if dragging
    if (deltaX > 5 || deltaY > 5) {
      event.preventDefault();
      hasMoved.current = true;
    }

    setDragState(prev => ({
      ...prev,
      currentX: touch.clientX,
      currentY: touch.clientY
    }));
  }, []);

  const handleTouchEnd = useCallback(() => {
    startPosRef.current = null;
    setDragState(prev => ({
      ...prev,
      isDragging: false
    }));
  }, []);

  useEffect(() => {
    const element = elementRef.current;
    if (!element) return;

    element.addEventListener('touchstart', handleTouchStart, { passive: true });
    element.addEventListener('touchmove', handleTouchMove, { passive: false });
    element.addEventListener('touchend', handleTouchEnd, { passive: true });

    return () => {
      element.removeEventListener('touchstart', handleTouchStart);
      element.removeEventListener('touchmove', handleTouchMove);
      element.removeEventListener('touchend', handleTouchEnd);
    };
  }, [elementRef, handleTouchStart, handleTouchMove, handleTouchEnd]);

  return dragState;
}

And a long press detector:

import { useEffect, useRef, useCallback } from 'react';

interface LongPressConfig {
  duration?: number;
  threshold?: number;
  onLongPress: () => void;
}

export function useLongPress(
  elementRef: React.RefObject<HTMLElement>,
  config: LongPressConfig
) {
  const timerRef = useRef<number | null>(null);
  const startPosRef = useRef<{ x: number; y: number } | null>(null);
  const duration = config.duration ?? 500;
  const threshold = config.threshold ?? 10;

  const clearTimer = useCallback(() => {
    if (timerRef.current !== null) {
      window.clearTimeout(timerRef.current);
      timerRef.current = null;
    }
  }, []);

  const handleTouchStart = useCallback((event: TouchEvent) => {
    const touch = event.touches[0];
    startPosRef.current = {
      x: touch.clientX,
      y: touch.clientY
    };

    timerRef.current = window.setTimeout(() => {
      config.onLongPress();
      timerRef.current = null;
    }, duration);
  }, [config, duration]);

  const handleTouchMove = useCallback((event: TouchEvent) => {
    if (!startPosRef.current) return;

    const touch = event.touches[0];
    const deltaX = Math.abs(touch.clientX - startPosRef.current.x);
    const deltaY = Math.abs(touch.clientY - startPosRef.current.y);

    // Cancel if moved beyond threshold
    if (deltaX > threshold || deltaY > threshold) {
      clearTimer();
      startPosRef.current = null;
    }
  }, [threshold, clearTimer]);

  const handleTouchEnd = useCallback(() => {
    clearTimer();
    startPosRef.current = null;
  }, [clearTimer]);

  useEffect(() => {
    const element = elementRef.current;
    if (!element) return;

    element.addEventListener('touchstart', handleTouchStart, { passive: true });
    element.addEventListener('touchmove', handleTouchMove, { passive: true });
    element.addEventListener('touchend', handleTouchEnd, { passive: true });
    element.addEventListener('touchcancel', handleTouchEnd, { passive: true });

    return () => {
      clearTimer();
      element.removeEventListener('touchstart', handleTouchStart);
      element.removeEventListener('touchmove', handleTouchMove);
      element.removeEventListener('touchend', handleTouchEnd);
      element.removeEventListener('touchcancel', handleTouchEnd);
    };
  }, [elementRef, handleTouchStart, handleTouchMove, handleTouchEnd, clearTimer]);
}

Ensuring Touch Accessibility

Touch accessibility ensures users with motor impairments can interact successfully with widgets. WCAG 2.2 requires touch targets of at least 24x24 CSS pixels at Level AA (Success Criterion 2.5.8, Target Size Minimum), while the Level AAA criterion and most platform guidelines call for 44x44 pixels with spacing between adjacent targets. Small buttons clustered together frustrate users and create accidental taps.

Provide alternatives to complex gestures. Not all users can perform pinch-to-zoom or multi-finger swipes. Include zoom buttons as an alternative to pinching, and provide arrow buttons alongside swipe gestures. Screen reader users benefit from ARIA labels describing gesture-activated actions.

Visual feedback confirms touches registered successfully. Add a subtle scale animation or background color change on touchstart to provide immediate feedback. This prevents users from tapping repeatedly when the interface feels unresponsive. Use :active pseudo-class styling for simple feedback, or implement custom touch feedback for more control.

Reduced motion preferences must be respected. Some users experience vertigo or nausea from animations. Check prefers-reduced-motion media query and disable or reduce animation intensity for users who need it. This applies to swipe transitions, scale animations, and scroll effects.
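A small helper can centralize that decision. In the browser the flag comes from window.matchMedia('(prefers-reduced-motion: reduce)').matches; this sketch takes it as a parameter so the logic stays independent of the DOM:

```typescript
// Pick a transition value based on the user's reduced-motion preference.
// The boolean would be read from matchMedia in the browser; passing it in
// keeps the decision logic testable outside a browser environment.
function transitionFor(
  prefersReducedMotion: boolean,
  normal = 'transform 0.2s ease-out'
): string {
  return prefersReducedMotion ? 'none' : normal;
}
```

The same pattern extends to swipe transitions and scroll effects: route every animated style through one preference-aware helper rather than scattering matchMedia checks.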

// Accessible touch target component
import React, { useState, useRef } from 'react';

interface TouchTargetProps {
  children: React.ReactNode;
  onTap: () => void;
  ariaLabel: string;
  minSize?: number;
}

function AccessibleTouchTarget({
  children,
  onTap,
  ariaLabel,
  minSize = 44
}: TouchTargetProps) {
  const [isPressed, setIsPressed] = useState(false);
  const elementRef = useRef<HTMLButtonElement>(null);
  const touchStartPosRef = useRef<{ x: number; y: number } | null>(null);

  const handleTouchStart = (event: React.TouchEvent) => {
    const touch = event.touches[0];
    touchStartPosRef.current = {
      x: touch.clientX,
      y: touch.clientY
    };
    setIsPressed(true);
  };

  const handleTouchEnd = (event: React.TouchEvent) => {
    if (!touchStartPosRef.current) return;

    const touch = event.changedTouches[0];
    const deltaX = Math.abs(touch.clientX - touchStartPosRef.current.x);
    const deltaY = Math.abs(touch.clientY - touchStartPosRef.current.y);

    // Significant movement means the gesture was a swipe, not a tap;
    // suppress the synthetic click that would otherwise follow. Activation
    // itself is left to onClick, which fires after touchend for taps and
    // also covers mouse and keyboard input, so onTap never fires twice.
    if (deltaX >= 10 || deltaY >= 10) {
      event.preventDefault();
    }

    setIsPressed(false);
    touchStartPosRef.current = null;
  };

  const handleTouchCancel = () => {
    setIsPressed(false);
    touchStartPosRef.current = null;
  };

  return (
    <button
      ref={elementRef}
      onTouchStart={handleTouchStart}
      onTouchEnd={handleTouchEnd}
      onTouchCancel={handleTouchCancel}
      onClick={onTap} // Fires after touchend for taps; also covers mouse and keyboard
      aria-label={ariaLabel}
      style={{
        minWidth: `${minSize}px`,
        minHeight: `${minSize}px`,
        padding: '8px',
        border: 'none',
        background: isPressed ? '#e0e0e0' : '#f5f5f5',
        borderRadius: '8px',
        cursor: 'pointer',
        transform: isPressed ? 'scale(0.95)' : 'scale(1)',
        transition: 'transform 0.1s ease-out',
        touchAction: 'manipulation' // Prevents double-tap zoom
      }}
    >
      {children}
    </button>
  );
}

Conclusion

Touch gesture support transforms ChatGPT widgets from static displays into interactive mobile experiences. This guide covered the complete gesture implementation stack: touch event handling with passive listeners, gesture recognition algorithms for swipes and pinches, performance optimization through requestAnimationFrame, multi-touch support for advanced interactions, and accessibility considerations for inclusive design.

Production-ready code examples demonstrated real-world patterns: swipe detection with velocity thresholds, pinch-to-zoom with scale bounds, drag-and-drop with position tracking, and long press with movement thresholds. Each implementation balances responsiveness with performance, ensuring smooth 60fps interactions on mobile devices.

Ready to build gesture-driven ChatGPT widgets without writing complex touch handling code? MakeAIHQ provides a visual builder with built-in gesture support, accessibility compliance, and performance optimization. Create professional ChatGPT apps with swipe navigation, pinch-to-zoom galleries, and drag-and-drop interfaces through an intuitive no-code platform. Start building your gesture-enabled ChatGPT app today and deploy to 800 million ChatGPT users in 48 hours.

Related Resources

  • Complete Guide to Building ChatGPT Applications - Master ChatGPT app development from concept to deployment
  • Mobile-First Widget Design for ChatGPT - Design principles for mobile ChatGPT widgets
  • Widget Performance Optimization for ChatGPT - Advanced performance techniques for widgets
  • Widget Accessibility and WCAG Compliance - Build accessible ChatGPT widgets
  • React Hooks for ChatGPT Widget Development - Reusable React patterns for widgets

External References