Vulnerability Management for ChatGPT Apps
Proactive vulnerability management is the cornerstone of maintaining secure ChatGPT applications in production. Unlike reactive security approaches that address breaches after they occur, vulnerability management creates a continuous lifecycle of detection, assessment, remediation, and verification that prevents security incidents before they materialize.
The vulnerability lifecycle follows a structured path: detection identifies potential weaknesses through automated scanning, assessment evaluates severity using CVE/CVSS scoring systems, remediation applies patches or implements mitigations, and verification confirms the fix without introducing regressions. For ChatGPT apps interfacing with OpenAI's infrastructure, this lifecycle must account for both your application code and the entire dependency chain—from npm packages and Python libraries to Docker base images and runtime environments.
Modern vulnerability management relies on Service Level Agreement (SLA) targets that align remediation timelines with risk severity. A common baseline requires critical vulnerabilities (CVSS 9.0-10.0) to be patched within 24 hours, high-severity issues (CVSS 7.0-8.9) within 7 days, medium-severity vulnerabilities (CVSS 4.0-6.9) within 30 days, and low-severity findings (CVSS 0.1-3.9) within 90 days. These targets ensure that the most dangerous exploits—such as remote code execution or authentication bypasses—receive immediate attention while lower-risk issues still follow a systematic schedule.
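To make these targets actionable, the CVSS-to-severity mapping and the corresponding deadlines can be encoded directly in your tooling. The sketch below illustrates one way to do this; the function and file names are illustrative rather than part of any particular library.
// sla-deadlines.ts
type Severity = 'critical' | 'high' | 'medium' | 'low';

// SLA targets from above, expressed in hours
const SLA_HOURS: Record<Severity, number> = {
  critical: 24,   // 24 hours
  high: 168,      // 7 days
  medium: 720,    // 30 days
  low: 2160       // 90 days
};

// Map a CVSS base score to its severity band
function severityForCvss(cvss: number): Severity {
  if (cvss >= 9.0) return 'critical';
  if (cvss >= 7.0) return 'high';
  if (cvss >= 4.0) return 'medium';
  return 'low';
}

// Deadline by which a finding detected at `detectedAt` must be remediated
export function remediationDeadline(cvss: number, detectedAt: Date): Date {
  const hours = SLA_HOURS[severityForCvss(cvss)];
  return new Date(detectedAt.getTime() + hours * 60 * 60 * 1000);
}

console.log(remediationDeadline(9.8, new Date()).toISOString()); // critical: due within 24 hours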
For ChatGPT applications that process user data, integrate with external APIs, and expose web interfaces, vulnerability management extends beyond traditional web application security. Your MCP servers handle sensitive authentication tokens, your widget code executes in user browsers, and your backend services interact with OpenAI's GPT models—each representing distinct attack surfaces requiring specialized scanning and remediation strategies.
This guide provides production-ready code for dependency scanning, container security, static/dynamic application security testing (SAST/DAST), and automated patch management pipelines that integrate seamlessly into ChatGPT app development workflows.
Dependency Scanning
Dependency vulnerabilities are among the most common security risks in modern applications. ChatGPT apps typically include hundreds of transitive dependencies through npm (Node.js), pip (Python), or other package managers—each potentially harboring known vulnerabilities tracked in public CVE databases.
Automated dependency scanning must run continuously across three critical checkpoints: pre-commit hooks that prevent vulnerable code from entering version control, CI/CD pipeline gates that block deployments with unresolved high/critical issues, and scheduled scans that detect newly disclosed vulnerabilities in already-deployed code.
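The same scanner can enforce different thresholds at each checkpoint. The following sketch shows one possible wiring, using the DependencyScanner class from the next section; the CHECKPOINT environment variable and the per-checkpoint thresholds are illustrative choices, not fixed requirements.
// scan-checkpoints.ts
import { DependencyScanner } from './vulnerability-scanner';

type Checkpoint = 'pre-commit' | 'ci' | 'scheduled';

// Stricter gates apply as code moves closer to production (illustrative thresholds)
const BLOCKING_SEVERITIES: Record<Checkpoint, Array<'critical' | 'high'>> = {
  'pre-commit': ['critical'],     // fast local gate: block only on critical findings
  'ci': ['critical', 'high'],     // pipeline gate: block on critical and high
  'scheduled': []                 // nightly rescan: report and alert, never block
};

async function runCheckpoint(checkpoint: Checkpoint): Promise<void> {
  const scanner = new DependencyScanner();
  const result = await scanner.scanNpmDependencies();
  const blocking = BLOCKING_SEVERITIES[checkpoint];
  const blocked =
    (blocking.includes('critical') && result.criticalCount > 0) ||
    (blocking.includes('high') && result.highCount > 0);
  if (blocked) {
    console.error(`❌ ${checkpoint} gate failed: blocking vulnerabilities found`);
    process.exit(1);
  }
  console.log(`✅ ${checkpoint} gate passed`);
}

runCheckpoint((process.env.CHECKPOINT as Checkpoint) || 'ci').catch(console.error);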
Beyond vulnerability detection, dependency scanning should enforce license compliance policies. Many organizations prohibit GPL-licensed dependencies in commercial applications, while others restrict copyleft licenses that could impose source code disclosure requirements. Integrating license checks alongside vulnerability scans creates a unified gatekeeper for third-party code.
Production-Ready Dependency Scanner (TypeScript)
// vulnerability-scanner.ts
import { exec } from 'child_process';
import { promisify } from 'util';
import { readFile } from 'fs/promises';
const execAsync = promisify(exec);
interface VulnerabilityReport {
package: string;
version: string;
severity: 'critical' | 'high' | 'medium' | 'low';
cvss: number;
cve: string;
fixedIn?: string;
exploitable: boolean;
}
interface ScanResult {
vulnerabilities: VulnerabilityReport[];
licenses: LicenseViolation[];
totalVulnerabilities: number;
criticalCount: number;
highCount: number;
mediumCount: number;
lowCount: number;
scanTimestamp: string;
}
interface LicenseViolation {
package: string;
license: string;
reason: string;
}
export class DependencyScanner {
private readonly prohibitedLicenses = ['GPL-3.0', 'AGPL-3.0', 'LGPL-3.0'];
private readonly slaHours = { critical: 24, high: 168, medium: 720, low: 2160 };
async scanNpmDependencies(): Promise<ScanResult> {
console.log('🔍 Scanning npm dependencies...');
    // npm audit exits non-zero when vulnerabilities are found, so capture
    // stdout from the failure path instead of letting the scan crash
    let auditOutput: string;
    try {
      const result = await execAsync('npm audit --json', { cwd: process.cwd() });
      auditOutput = result.stdout;
    } catch (error: any) {
      if (!error.stdout) throw error;
      auditOutput = error.stdout;
    }
    const auditData = JSON.parse(auditOutput);
    const vulnerabilities: VulnerabilityReport[] = [];
    // Parse npm audit results (npm reports 'moderate', which maps to 'medium' here)
    for (const [name, vuln] of Object.entries(auditData.vulnerabilities || {})) {
      const vulnData = vuln as any;
      const severity = vulnData.severity === 'moderate' ? 'medium' : vulnData.severity;
      vulnerabilities.push({
        package: name,
        version: vulnData.range || 'unknown',
        severity: severity as 'critical' | 'high' | 'medium' | 'low',
        cvss: this.calculateCVSS(severity),
        cve: vulnData.via?.[0]?.source?.toString() || 'N/A',
        fixedIn: vulnData.fixAvailable?.version,
        exploitable: severity === 'critical' || severity === 'high'
      });
    }
// Run license check
const licenses = await this.checkLicenses();
return this.generateReport(vulnerabilities, licenses);
}
async scanPythonDependencies(): Promise<ScanResult> {
console.log('🔍 Scanning Python dependencies...');
    // pip-audit uses the -r/--requirement flag and exits non-zero when
    // vulnerabilities are found, so capture stdout from the failure path
    let auditOutput: string;
    try {
      const result = await execAsync(
        'pip-audit --format json -r requirements.txt',
        { cwd: process.cwd() }
      );
      auditOutput = result.stdout;
    } catch (error: any) {
      if (!error.stdout) throw error;
      auditOutput = error.stdout;
    }
    const auditData = JSON.parse(auditOutput);
    const vulnerabilities: VulnerabilityReport[] = [];
    // pip-audit groups findings under each dependency and does not report
    // severity or CVSS, so treat every known vulnerability as high until triaged
    for (const dep of auditData.dependencies || []) {
      for (const vuln of dep.vulns || []) {
        vulnerabilities.push({
          package: dep.name,
          version: dep.version,
          severity: 'high',
          cvss: 7.5,
          cve: vuln.aliases?.[0] || vuln.id,
          fixedIn: vuln.fix_versions?.[0],
          exploitable: true
        });
      }
    }
const licenses = await this.checkPythonLicenses();
return this.generateReport(vulnerabilities, licenses);
}
  private async checkLicenses(): Promise<LicenseViolation[]> {
    // npm list --json does not include license metadata, so read each installed
    // package's manifest from node_modules instead
    const { stdout } = await execAsync('npm list --json --depth=0');
    const packageData = JSON.parse(stdout);
    const violations: LicenseViolation[] = [];
    for (const name of Object.keys(packageData.dependencies || {})) {
      let license = 'UNKNOWN';
      try {
        const manifest = JSON.parse(
          await readFile(`node_modules/${name}/package.json`, 'utf-8')
        );
        if (typeof manifest.license === 'string') license = manifest.license;
      } catch {
        // Package not installed locally; leave the license as UNKNOWN
      }
      if (this.prohibitedLicenses.includes(license)) {
        violations.push({
          package: name,
          license: license,
          reason: `Prohibited copyleft license: ${license}`
        });
      }
    }
return violations;
}
private async checkPythonLicenses(): Promise<LicenseViolation[]> {
try {
const { stdout } = await execAsync('pip-licenses --format=json');
const licenses = JSON.parse(stdout);
const violations: LicenseViolation[] = [];
for (const pkg of licenses) {
if (this.prohibitedLicenses.includes(pkg.License)) {
violations.push({
package: pkg.Name,
license: pkg.License,
reason: `Prohibited copyleft license: ${pkg.License}`
});
}
}
return violations;
} catch (error) {
console.warn('⚠️ pip-licenses not installed, skipping license check');
return [];
}
}
private calculateCVSS(severity: string): number {
const scores = { critical: 9.5, high: 7.5, medium: 5.0, low: 2.5 };
return scores[severity as keyof typeof scores] || 0;
}
private generateReport(vulnerabilities: VulnerabilityReport[], licenses: LicenseViolation[]): ScanResult {
const criticalCount = vulnerabilities.filter(v => v.severity === 'critical').length;
const highCount = vulnerabilities.filter(v => v.severity === 'high').length;
const mediumCount = vulnerabilities.filter(v => v.severity === 'medium').length;
const lowCount = vulnerabilities.filter(v => v.severity === 'low').length;
return {
vulnerabilities,
licenses,
totalVulnerabilities: vulnerabilities.length,
criticalCount,
highCount,
mediumCount,
lowCount,
scanTimestamp: new Date().toISOString()
};
}
async enforcePolicy(result: ScanResult): Promise<boolean> {
console.log('\n📊 Vulnerability Scan Report:');
console.log(` Critical: ${result.criticalCount}`);
console.log(` High: ${result.highCount}`);
console.log(` Medium: ${result.mediumCount}`);
console.log(` Low: ${result.lowCount}`);
console.log(` License Violations: ${result.licenses.length}`);
// Block deployment if critical/high vulnerabilities exist
if (result.criticalCount > 0 || result.highCount > 0) {
console.error('\n❌ Deployment blocked: Unresolved critical/high vulnerabilities');
return false;
}
if (result.licenses.length > 0) {
console.error('\n❌ Deployment blocked: License violations detected');
return false;
}
console.log('\n✅ Vulnerability policy passed');
return true;
}
}
// Usage in CI/CD pipeline
async function main() {
const scanner = new DependencyScanner();
// Scan both npm and Python dependencies
const npmResult = await scanner.scanNpmDependencies();
const pythonResult = await scanner.scanPythonDependencies();
// Enforce policy
const npmPassed = await scanner.enforcePolicy(npmResult);
const pythonPassed = await scanner.enforcePolicy(pythonResult);
// Exit with error code if policy failed
if (!npmPassed || !pythonPassed) {
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
This dependency scanner integrates with npm audit and pip-audit to detect vulnerabilities across both Node.js and Python ecosystems. The enforcePolicy() method blocks deployments when critical or high-severity vulnerabilities exist, ensuring that only secure code reaches production.
Container Security
Docker containers introduce additional attack surfaces through base image vulnerabilities, misconfigurations, and runtime exploits. ChatGPT apps deployed in containerized environments must scan images for known CVEs, harden base images to minimal attack surfaces, and monitor runtime behavior for anomalous activity.
Container security scanning should occur at three stages: build time (when creating Docker images), push time (before uploading to registries), and runtime (continuous monitoring of running containers). Tools like Trivy and Clair excel at build/push-time scanning, while Falco provides runtime threat detection.
Base image hardening follows the principle of least privilege: use distroless or Alpine images with minimal packages, run containers as non-root users, disable unnecessary capabilities, and implement read-only root filesystems where possible. For ChatGPT MCP servers, this might mean running Node.js applications under a dedicated node user with restricted permissions.
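These hardening rules can be enforced mechanically alongside vulnerability scanning. The sketch below adds a few illustrative checks (missing USER directive, ADD instead of COPY, missing HEALTHCHECK) that complement the Dockerfile scanner shown later in this section; the rule list is a starting point, not an exhaustive policy.
// dockerfile-hardening.ts
import { readFile } from 'fs/promises';

interface HardeningRule {
  id: string;
  failsWhen: (dockerfile: string) => boolean;
  advice: string;
}

// Illustrative rules; extend to match your organization's baseline
const RULES: HardeningRule[] = [
  {
    id: 'missing-user',
    // No USER directive at all means the container runs as root by default
    failsWhen: content => !/^\s*USER\s+\S+/m.test(content),
    advice: 'Add a non-root USER (e.g. USER node or USER 1001)'
  },
  {
    id: 'add-instead-of-copy',
    // ADD can fetch remote URLs and auto-extract archives; COPY is more predictable
    failsWhen: content => /^\s*ADD\s+/m.test(content),
    advice: 'Prefer COPY unless ADD features are explicitly required'
  },
  {
    id: 'missing-healthcheck',
    failsWhen: content => !/^\s*HEALTHCHECK\s+/m.test(content),
    advice: 'Add a HEALTHCHECK so orchestrators can detect unhealthy containers'
  }
];

async function auditDockerfile(path: string): Promise<number> {
  const content = await readFile(path, 'utf-8');
  const failures = RULES.filter(rule => rule.failsWhen(content));
  for (const rule of failures) {
    console.warn(`⚠️ ${rule.id}: ${rule.advice}`);
  }
  return failures.length;
}

auditDockerfile('./Dockerfile')
  .then(count => process.exit(count > 0 ? 1 : 0))
  .catch(console.error);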
Container Security Scanner (TypeScript)
// container-scanner.ts
import { exec } from 'child_process';
import { promisify } from 'util';
import { readFile } from 'fs/promises';
const execAsync = promisify(exec);
interface ContainerVulnerability {
imageId: string;
imageName: string;
vulnerabilityId: string;
severity: 'CRITICAL' | 'HIGH' | 'MEDIUM' | 'LOW' | 'UNKNOWN';
packageName: string;
installedVersion: string;
fixedVersion?: string;
description: string;
}
interface ScanReport {
image: string;
digest: string;
vulnerabilities: ContainerVulnerability[];
totalVulnerabilities: number;
criticalCount: number;
highCount: number;
mediumCount: number;
lowCount: number;
scanDate: string;
passed: boolean;
}
export class ContainerSecurityScanner {
async scanImage(imageName: string): Promise<ScanReport> {
console.log(`🔍 Scanning container image: ${imageName}`);
// Run Trivy scan with JSON output
    // Trivy JSON output can exceed Node's default 1 MB exec buffer, so raise it
    const { stdout } = await execAsync(
      `trivy image --format json --severity CRITICAL,HIGH,MEDIUM,LOW ${imageName}`,
      { maxBuffer: 50 * 1024 * 1024 }
    );
const scanData = JSON.parse(stdout);
const vulnerabilities: ContainerVulnerability[] = [];
// Extract image digest
const { stdout: inspectOutput } = await execAsync(`docker inspect ${imageName}`);
const imageData = JSON.parse(inspectOutput)[0];
const digest = imageData.RepoDigests?.[0]?.split('@')[1] || 'unknown';
// Parse Trivy results
for (const result of scanData.Results || []) {
for (const vuln of result.Vulnerabilities || []) {
vulnerabilities.push({
imageId: imageData.Id,
imageName: imageName,
vulnerabilityId: vuln.VulnerabilityID,
severity: vuln.Severity as 'CRITICAL' | 'HIGH' | 'MEDIUM' | 'LOW',
packageName: vuln.PkgName,
installedVersion: vuln.InstalledVersion,
fixedVersion: vuln.FixedVersion,
description: vuln.Description || 'No description available'
});
}
}
return this.generateReport(imageName, digest, vulnerabilities);
}
async scanDockerfile(dockerfilePath: string): Promise<string[]> {
const issues: string[] = [];
const content = await readFile(dockerfilePath, 'utf-8');
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
// Check for root user
if (line.startsWith('USER root')) {
issues.push(`Line ${i + 1}: Running as root user (use USER node or USER 1001)`);
}
// Check for latest tag
if (line.startsWith('FROM') && line.includes(':latest')) {
issues.push(`Line ${i + 1}: Using :latest tag (pin specific version)`);
}
// Check for exposed privileged ports
if (line.startsWith('EXPOSE') && this.isPrivilegedPort(line)) {
issues.push(`Line ${i + 1}: Exposing privileged port (<1024)`);
}
// Check for secret in ENV
if (line.startsWith('ENV') && this.containsSecret(line)) {
issues.push(`Line ${i + 1}: Potential secret in ENV variable`);
}
}
return issues;
}
private isPrivilegedPort(exposeLine: string): boolean {
const portMatch = exposeLine.match(/EXPOSE\s+(\d+)/);
if (portMatch) {
const port = parseInt(portMatch[1], 10);
return port < 1024;
}
return false;
}
private containsSecret(envLine: string): boolean {
const secretKeywords = ['password', 'secret', 'api_key', 'token', 'credential'];
const lowerLine = envLine.toLowerCase();
return secretKeywords.some(keyword => lowerLine.includes(keyword));
}
private generateReport(imageName: string, digest: string, vulnerabilities: ContainerVulnerability[]): ScanReport {
const criticalCount = vulnerabilities.filter(v => v.severity === 'CRITICAL').length;
const highCount = vulnerabilities.filter(v => v.severity === 'HIGH').length;
const mediumCount = vulnerabilities.filter(v => v.severity === 'MEDIUM').length;
const lowCount = vulnerabilities.filter(v => v.severity === 'LOW').length;
const passed = criticalCount === 0 && highCount === 0;
return {
image: imageName,
digest,
vulnerabilities,
totalVulnerabilities: vulnerabilities.length,
criticalCount,
highCount,
mediumCount,
lowCount,
scanDate: new Date().toISOString(),
passed
};
}
async enforcePolicy(report: ScanReport): Promise<boolean> {
console.log('\n📊 Container Security Report:');
console.log(` Image: ${report.image}`);
console.log(` Digest: ${report.digest}`);
console.log(` Critical: ${report.criticalCount}`);
console.log(` High: ${report.highCount}`);
console.log(` Medium: ${report.mediumCount}`);
console.log(` Low: ${report.lowCount}`);
if (!report.passed) {
console.error('\n❌ Container scan failed: Unresolved critical/high vulnerabilities');
// Print critical/high vulnerabilities
const criticalVulns = report.vulnerabilities.filter(
v => v.severity === 'CRITICAL' || v.severity === 'HIGH'
);
for (const vuln of criticalVulns) {
console.error(` ${vuln.severity}: ${vuln.vulnerabilityId} in ${vuln.packageName}`);
if (vuln.fixedVersion) {
console.error(` Fix: Upgrade to ${vuln.fixedVersion}`);
}
}
return false;
}
console.log('\n✅ Container security policy passed');
return true;
}
}
// Usage in CI/CD pipeline
async function main() {
const scanner = new ContainerSecurityScanner();
const imageName = process.argv[2] || 'my-chatgpt-app:latest';
// Scan Dockerfile for misconfigurations
const dockerfileIssues = await scanner.scanDockerfile('./Dockerfile');
if (dockerfileIssues.length > 0) {
console.warn('\n⚠️ Dockerfile issues detected:');
dockerfileIssues.forEach(issue => console.warn(` ${issue}`));
}
// Scan container image
const report = await scanner.scanImage(imageName);
const passed = await scanner.enforcePolicy(report);
if (!passed) {
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
This container scanner uses Trivy to detect vulnerabilities in Docker images and performs static analysis on Dockerfiles to identify misconfigurations. The scanner blocks deployments when critical or high-severity container vulnerabilities exist.
For detailed guidance on securing ChatGPT app infrastructure, see our comprehensive ChatGPT App Security Best Practices pillar guide.
Static Application Security Testing (SAST)
Static Application Security Testing analyzes source code without executing it, identifying vulnerabilities like SQL injection, cross-site scripting (XSS), insecure deserialization, and authentication flaws. For ChatGPT apps, SAST tools must understand TypeScript/JavaScript patterns used in MCP servers and React/vanilla JS patterns in widget code.
Modern SAST tools like Semgrep use pattern-based rules to detect security anti-patterns specific to your codebase. For example, a custom rule might flag any code that passes user input directly to eval() or Function() constructors—a critical vulnerability in ChatGPT widgets that execute in user browsers.
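A custom rule like that can live alongside your code and be passed to Semgrep via --config. The sketch below writes two illustrative rules (flagging eval() and the Function constructor) to a local rules file and runs them; the rule IDs and file name are placeholders you would adapt to your own policy.
// custom-semgrep-rules.ts
import { writeFile } from 'fs/promises';
import { exec } from 'child_process';
import { promisify } from 'util';

const execAsync = promisify(exec);

// Semgrep rules are written in YAML; embedding them here keeps the example self-contained
const CUSTOM_RULES = `rules:
  - id: no-eval
    pattern: eval(...)
    message: "Avoid eval() - user-controlled input reaching eval() enables code injection"
    languages: [javascript, typescript]
    severity: ERROR
  - id: no-function-constructor
    pattern: new Function(...)
    message: "Avoid the Function constructor - it behaves like eval()"
    languages: [javascript, typescript]
    severity: ERROR
`;

async function runCustomRules(targetPath: string = '.'): Promise<void> {
  await writeFile('custom-rules.yaml', CUSTOM_RULES);
  // --config accepts a local rules file in addition to registry rulesets;
  // Semgrep may exit non-zero when findings exist, so read stdout from either path
  let output = '{}';
  try {
    const { stdout } = await execAsync(`semgrep --config custom-rules.yaml --json ${targetPath}`);
    output = stdout;
  } catch (error: any) {
    output = error.stdout || '{}';
  }
  const findings = JSON.parse(output).results ?? [];
  console.log(`Custom rule findings: ${findings.length}`);
}

runCustomRules().catch(console.error);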
Integrating SAST into pre-commit hooks and CI/CD pipelines creates quality gates that prevent vulnerable code from reaching production. Tools like SonarQube provide IDE plugins that highlight security issues in real-time, enabling developers to fix vulnerabilities before committing code.
SAST Integration (TypeScript)
// sast-scanner.ts
import { exec } from 'child_process';
import { promisify } from 'util';
const execAsync = promisify(exec);
interface SASTFinding {
rule: string;
severity: 'error' | 'warning' | 'info';
message: string;
file: string;
line: number;
column: number;
code: string;
cwe?: string;
}
interface SASTReport {
findings: SASTFinding[];
errorCount: number;
warningCount: number;
infoCount: number;
scanDate: string;
passed: boolean;
}
export class SASTScanner {
  async scanWithSemgrep(targetPath: string = '.'): Promise<SASTReport> {
    console.log('🔍 Running Semgrep SAST scan...');
    // Semgrep can exit non-zero when findings exist, so capture stdout from
    // either the success or the failure path and parse it once
    let output: string;
    try {
      const { stdout } = await execAsync(
        `semgrep --config=auto --json ${targetPath}`,
        { maxBuffer: 50 * 1024 * 1024 }
      );
      output = stdout;
    } catch (error: any) {
      if (!error.stdout) throw error;
      output = error.stdout;
    }
    const semgrepData = JSON.parse(output);
    const findings: SASTFinding[] = (semgrepData.results || []).map((result: any) => ({
      rule: result.check_id,
      severity: this.mapSemgrepSeverity(result.extra.severity),
      message: result.extra.message,
      file: result.path,
      line: result.start.line,
      column: result.start.col,
      code: result.extra.lines,
      cwe: result.extra.metadata?.cwe?.[0]
    }));
    return this.generateReport(findings);
  }
private mapSemgrepSeverity(severity: string): 'error' | 'warning' | 'info' {
if (severity === 'ERROR' || severity === 'error') return 'error';
if (severity === 'WARNING' || severity === 'warning') return 'warning';
return 'info';
}
private generateReport(findings: SASTFinding[]): SASTReport {
const errorCount = findings.filter(f => f.severity === 'error').length;
const warningCount = findings.filter(f => f.severity === 'warning').length;
const infoCount = findings.filter(f => f.severity === 'info').length;
return {
findings,
errorCount,
warningCount,
infoCount,
scanDate: new Date().toISOString(),
passed: errorCount === 0
};
}
async enforcePolicy(report: SASTReport): Promise<boolean> {
console.log('\n📊 SAST Scan Report:');
console.log(` Errors: ${report.errorCount}`);
console.log(` Warnings: ${report.warningCount}`);
console.log(` Info: ${report.infoCount}`);
if (report.errorCount > 0) {
console.error('\n❌ SAST scan failed: Security vulnerabilities detected');
const errors = report.findings.filter(f => f.severity === 'error');
for (const finding of errors.slice(0, 10)) {
console.error(`\n ${finding.file}:${finding.line}:${finding.column}`);
console.error(` ${finding.rule}: ${finding.message}`);
if (finding.cwe) {
console.error(` CWE: ${finding.cwe}`);
}
}
if (errors.length > 10) {
console.error(`\n ... and ${errors.length - 10} more errors`);
}
return false;
}
console.log('\n✅ SAST policy passed');
return true;
}
}
// Usage in pre-commit hook
async function main() {
const scanner = new SASTScanner();
const report = await scanner.scanWithSemgrep('.');
const passed = await scanner.enforcePolicy(report);
if (!passed) {
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
This SAST scanner integrates Semgrep to detect security vulnerabilities in source code before deployment. The scanner blocks commits when security errors are detected, ensuring only secure code enters your repository.
Learn more about building secure development pipelines in our DevOps CI/CD for ChatGPT Apps guide.
Dynamic Application Security Testing (DAST)
Dynamic Application Security Testing analyzes running applications by simulating attacks against live endpoints. Unlike SAST which examines code statically, DAST tools like OWASP ZAP and Burp Suite send malicious payloads to your ChatGPT app's API endpoints, testing for vulnerabilities that only manifest at runtime.
DAST excels at detecting authentication bypasses, injection flaws, insecure API configurations, and session management vulnerabilities. For ChatGPT apps, DAST should test your MCP server endpoints, widget API calls, and OAuth flows to ensure proper input validation and authorization checks.
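Targeted probes can complement full ZAP scans for the checks that matter most to ChatGPT apps. The sketch below verifies that an MCP endpoint rejects unauthenticated requests; the endpoint URL is a placeholder for your staging environment, and Node 18+ is assumed for the global fetch API.
// auth-probe.ts
async function probeUnauthenticatedAccess(endpoint: string): Promise<boolean> {
  // Deliberately omit the Authorization header; anything other than 401/403 is a finding
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({})
  });
  const rejected = response.status === 401 || response.status === 403;
  if (!rejected) {
    console.error(`❌ ${endpoint} returned ${response.status} without credentials`);
  } else {
    console.log(`✅ ${endpoint} correctly rejected the unauthenticated request`);
  }
  return rejected;
}

probeUnauthenticatedAccess('https://staging.example.com/mcp')
  .then(ok => process.exit(ok ? 0 : 1))
  .catch(console.error);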
Continuous DAST scanning in staging environments catches vulnerabilities introduced by new code deployments. Automated scans should run after every deployment, with results integrated into your CI/CD pipeline to block production releases when critical vulnerabilities are detected.
DAST Automation (TypeScript)
// dast-scanner.ts
import { exec } from 'child_process';
import { promisify } from 'util';
import { writeFile } from 'fs/promises';
const execAsync = promisify(exec);
interface DASTVulnerability {
url: string;
risk: 'High' | 'Medium' | 'Low' | 'Informational';
confidence: 'High' | 'Medium' | 'Low';
name: string;
description: string;
solution: string;
cweid?: string;
wascid?: string;
}
interface DASTReport {
targetUrl: string;
vulnerabilities: DASTVulnerability[];
highRiskCount: number;
mediumRiskCount: number;
lowRiskCount: number;
scanDate: string;
passed: boolean;
}
export class DASTScanner {
async scanWithOWASPZAP(targetUrl: string, apiKey: string): Promise<DASTReport> {
console.log(`🔍 Running OWASP ZAP DAST scan on ${targetUrl}...`);
const zapBaseUrl = 'http://localhost:8080';
// Start spider scan
console.log(' Starting spider scan...');
const spiderScanId = await this.startSpider(zapBaseUrl, apiKey, targetUrl);
await this.waitForSpider(zapBaseUrl, apiKey, spiderScanId);
// Start active scan
console.log(' Starting active scan...');
const activeScanId = await this.startActiveScan(zapBaseUrl, apiKey, targetUrl);
await this.waitForActiveScan(zapBaseUrl, apiKey, activeScanId);
// Retrieve alerts
const vulnerabilities = await this.getAlerts(zapBaseUrl, apiKey, targetUrl);
return this.generateReport(targetUrl, vulnerabilities);
}
private async startSpider(zapBaseUrl: string, apiKey: string, targetUrl: string): Promise<string> {
const { stdout } = await execAsync(
`curl -s "${zapBaseUrl}/JSON/spider/action/scan/?apikey=${apiKey}&url=${encodeURIComponent(targetUrl)}"`
);
const response = JSON.parse(stdout);
return response.scan;
}
private async waitForSpider(zapBaseUrl: string, apiKey: string, scanId: string): Promise<void> {
while (true) {
const { stdout } = await execAsync(
`curl -s "${zapBaseUrl}/JSON/spider/view/status/?apikey=${apiKey}&scanId=${scanId}"`
);
const response = JSON.parse(stdout);
const progress = parseInt(response.status, 10);
if (progress >= 100) {
break;
}
await new Promise(resolve => setTimeout(resolve, 2000));
}
}
private async startActiveScan(zapBaseUrl: string, apiKey: string, targetUrl: string): Promise<string> {
const { stdout } = await execAsync(
`curl -s "${zapBaseUrl}/JSON/ascan/action/scan/?apikey=${apiKey}&url=${encodeURIComponent(targetUrl)}"`
);
const response = JSON.parse(stdout);
return response.scan;
}
private async waitForActiveScan(zapBaseUrl: string, apiKey: string, scanId: string): Promise<void> {
while (true) {
const { stdout } = await execAsync(
`curl -s "${zapBaseUrl}/JSON/ascan/view/status/?apikey=${apiKey}&scanId=${scanId}"`
);
const response = JSON.parse(stdout);
const progress = parseInt(response.status, 10);
if (progress >= 100) {
break;
}
await new Promise(resolve => setTimeout(resolve, 5000));
}
}
private async getAlerts(zapBaseUrl: string, apiKey: string, targetUrl: string): Promise<DASTVulnerability[]> {
const { stdout } = await execAsync(
`curl -s "${zapBaseUrl}/JSON/core/view/alerts/?apikey=${apiKey}&baseurl=${encodeURIComponent(targetUrl)}"`
);
const response = JSON.parse(stdout);
return response.alerts.map((alert: any) => ({
url: alert.url,
      risk: alert.risk as 'High' | 'Medium' | 'Low' | 'Informational',
confidence: alert.confidence as 'High' | 'Medium' | 'Low',
name: alert.name,
description: alert.description,
solution: alert.solution,
cweid: alert.cweid,
wascid: alert.wascid
}));
}
private generateReport(targetUrl: string, vulnerabilities: DASTVulnerability[]): DASTReport {
const highRiskCount = vulnerabilities.filter(v => v.risk === 'High').length;
const mediumRiskCount = vulnerabilities.filter(v => v.risk === 'Medium').length;
const lowRiskCount = vulnerabilities.filter(v => v.risk === 'Low').length;
return {
targetUrl,
vulnerabilities,
highRiskCount,
mediumRiskCount,
lowRiskCount,
scanDate: new Date().toISOString(),
passed: highRiskCount === 0
};
}
async enforcePolicy(report: DASTReport): Promise<boolean> {
console.log('\n📊 DAST Scan Report:');
console.log(` Target: ${report.targetUrl}`);
console.log(` High Risk: ${report.highRiskCount}`);
console.log(` Medium Risk: ${report.mediumRiskCount}`);
console.log(` Low Risk: ${report.lowRiskCount}`);
if (report.highRiskCount > 0) {
console.error('\n❌ DAST scan failed: High-risk vulnerabilities detected');
const highRiskVulns = report.vulnerabilities.filter(v => v.risk === 'High');
for (const vuln of highRiskVulns) {
console.error(`\n ${vuln.name} (${vuln.confidence} confidence)`);
console.error(` URL: ${vuln.url}`);
console.error(` Solution: ${vuln.solution}`);
if (vuln.cweid) {
console.error(` CWE: ${vuln.cweid}`);
}
}
return false;
}
console.log('\n✅ DAST policy passed');
return true;
}
}
// Usage in CI/CD pipeline
async function main() {
const scanner = new DASTScanner();
const targetUrl = process.env.TARGET_URL || 'https://staging.example.com';
const apiKey = process.env.ZAP_API_KEY || '';
const report = await scanner.scanWithOWASPZAP(targetUrl, apiKey);
const passed = await scanner.enforcePolicy(report);
// Save report for audit trail
await writeFile('dast-report.json', JSON.stringify(report, null, 2));
if (!passed) {
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
This DAST scanner automates OWASP ZAP to test running applications for runtime vulnerabilities. The scanner performs spider and active scans, then blocks deployments when high-risk vulnerabilities are detected.
For comprehensive penetration testing strategies, see our Penetration Testing for ChatGPT Apps cluster article.
Patch Management
Effective patch management balances security with stability—applying critical patches quickly while ensuring updates don't introduce regressions. For ChatGPT apps, patch management must coordinate dependency updates, container base image updates, and infrastructure patches across development, staging, and production environments.
Automated patch deployment pipelines should include multi-stage testing: unit tests verify individual components, integration tests ensure dependencies work together correctly, smoke tests confirm critical paths function in staging, and canary deployments gradually roll out patches to production while monitoring for errors.
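A canary gate can be expressed as a simple comparison between canary and baseline error rates before promotion. The sketch below assumes a metrics endpoint that reports an error rate per deployment track; the URLs, metric shape, and 1.5x threshold are illustrative assumptions, not part of any specific stack.
// canary-gate.ts
interface TrackMetrics {
  errorRate: number; // errors per request over the observation window
}

async function fetchMetrics(url: string): Promise<TrackMetrics> {
  const response = await fetch(url);
  return (await response.json()) as TrackMetrics;
}

// Promote the canary only if its error rate stays within 1.5x of the stable baseline
async function canaryHealthy(baselineUrl: string, canaryUrl: string): Promise<boolean> {
  const [baseline, canary] = await Promise.all([
    fetchMetrics(baselineUrl),
    fetchMetrics(canaryUrl)
  ]);
  return canary.errorRate <= baseline.errorRate * 1.5;
}

canaryHealthy('https://metrics.example.com/stable', 'https://metrics.example.com/canary')
  .then(healthy => process.exit(healthy ? 0 : 1))
  .catch(console.error);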
Rollback strategies are essential for patch management. When a patch introduces regressions, automated rollback mechanisms should revert to the previous stable version within minutes. For containerized ChatGPT apps, this typically means reverting to a previous Docker image tag and redeploying pods/containers.
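For Kubernetes-based deployments, the rollback itself can be a thin wrapper around kubectl. The sketch below assumes kubectl is already configured for the target cluster; the deployment name is a placeholder.
// rollback.ts
import { exec } from 'child_process';
import { promisify } from 'util';

const execAsync = promisify(exec);

async function rollbackDeployment(deployment: string): Promise<void> {
  // kubectl rollout undo reverts to the previous ReplicaSet (the previous image tag)
  await execAsync(`kubectl rollout undo deployment/${deployment}`);
  // Wait for the rollback to finish rolling out before declaring success
  await execAsync(`kubectl rollout status deployment/${deployment} --timeout=120s`);
  console.log(`✅ Rolled back ${deployment} to the previous revision`);
}

rollbackDeployment('chatgpt-app').catch(err => {
  console.error('❌ Rollback failed:', err.message);
  process.exit(1);
});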
Patch Management Pipeline (TypeScript)
// patch-manager.ts
import { exec } from 'child_process';
import { promisify } from 'util';
import { writeFile } from 'fs/promises';
import * as semver from 'semver';
const execAsync = promisify(exec);
interface PatchMetadata {
package: string;
currentVersion: string;
targetVersion: string;
severity: 'critical' | 'high' | 'medium' | 'low';
cve?: string;
breakingChange: boolean;
}
interface PatchResult {
package: string;
success: boolean;
oldVersion: string;
newVersion: string;
testsPassed: boolean;
error?: string;
}
interface PatchReport {
patches: PatchResult[];
totalPatches: number;
successfulPatches: number;
failedPatches: number;
patchDate: string;
}
export class PatchManager {
async identifyPatches(ecosystem: 'npm' | 'python'): Promise<PatchMetadata[]> {
console.log(`🔍 Identifying available patches for ${ecosystem}...`);
if (ecosystem === 'npm') {
return this.identifyNpmPatches();
} else {
return this.identifyPythonPatches();
}
}
private async identifyNpmPatches(): Promise<PatchMetadata[]> {
    // npm outdated exits with code 1 when outdated packages exist, so read
    // stdout from the failure path instead of treating it as an error
    let outdatedOutput = '{}';
    try {
      const { stdout } = await execAsync('npm outdated --json');
      outdatedOutput = stdout || '{}';
    } catch (error: any) {
      outdatedOutput = error.stdout || '{}';
    }
    const outdated = JSON.parse(outdatedOutput);
const patches: PatchMetadata[] = [];
for (const [packageName, info] of Object.entries(outdated)) {
const pkgInfo = info as any;
const currentVersion = pkgInfo.current;
const latestVersion = pkgInfo.latest;
// Check if security update
const isSecurityUpdate = await this.isSecurityUpdate(packageName, currentVersion);
patches.push({
package: packageName,
currentVersion: currentVersion,
targetVersion: latestVersion,
severity: isSecurityUpdate ? 'high' : 'medium',
breakingChange: semver.major(latestVersion) > semver.major(currentVersion)
});
}
return patches;
}
private async identifyPythonPatches(): Promise<PatchMetadata[]> {
try {
const { stdout } = await execAsync('pip list --outdated --format json');
const outdated = JSON.parse(stdout);
const patches: PatchMetadata[] = [];
for (const pkg of outdated) {
patches.push({
package: pkg.name,
currentVersion: pkg.version,
targetVersion: pkg.latest_version,
severity: 'medium',
breakingChange: false // Would need additional logic to detect breaking changes
});
}
return patches;
} catch (error) {
return [];
}
}
  private async isSecurityUpdate(packageName: string, currentVersion: string): Promise<boolean> {
    // npm audit exits non-zero when vulnerabilities exist, which is exactly the
    // case we care about, so parse stdout from both the success and failure paths
    let auditOutput: string;
    try {
      const { stdout } = await execAsync('npm audit --json');
      auditOutput = stdout;
    } catch (error: any) {
      if (!error.stdout) return false;
      auditOutput = error.stdout;
    }
    const auditData = JSON.parse(auditOutput);
    return Object.prototype.hasOwnProperty.call(auditData.vulnerabilities || {}, packageName);
  }
async applyPatches(patches: PatchMetadata[], autoApprove: boolean = false): Promise<PatchReport> {
console.log(`📦 Applying ${patches.length} patches...`);
const results: PatchResult[] = [];
for (const patch of patches) {
// Skip breaking changes unless auto-approved
if (patch.breakingChange && !autoApprove) {
console.warn(`⚠️ Skipping ${patch.package}: Breaking change detected`);
results.push({
package: patch.package,
success: false,
oldVersion: patch.currentVersion,
newVersion: patch.targetVersion,
testsPassed: false,
error: 'Breaking change requires manual review'
});
continue;
}
try {
// Apply patch
console.log(` Patching ${patch.package}...`);
await execAsync(`npm install ${patch.package}@${patch.targetVersion}`);
// Run tests
console.log(` Running tests...`);
const testsPassed = await this.runTests();
if (testsPassed) {
results.push({
package: patch.package,
success: true,
oldVersion: patch.currentVersion,
newVersion: patch.targetVersion,
testsPassed: true
});
console.log(` ✅ ${patch.package} patched successfully`);
} else {
// Rollback on test failure
console.error(` ❌ Tests failed, rolling back ${patch.package}`);
await execAsync(`npm install ${patch.package}@${patch.currentVersion}`);
results.push({
package: patch.package,
success: false,
oldVersion: patch.currentVersion,
newVersion: patch.targetVersion,
testsPassed: false,
error: 'Tests failed after patch'
});
}
} catch (error: any) {
results.push({
package: patch.package,
success: false,
oldVersion: patch.currentVersion,
newVersion: patch.targetVersion,
testsPassed: false,
error: error.message
});
}
}
return this.generateReport(results);
}
private async runTests(): Promise<boolean> {
try {
await execAsync('npm test', { timeout: 300000 }); // 5 minute timeout
return true;
} catch (error) {
return false;
}
}
private generateReport(results: PatchResult[]): PatchReport {
const successfulPatches = results.filter(r => r.success).length;
const failedPatches = results.filter(r => !r.success).length;
return {
patches: results,
totalPatches: results.length,
successfulPatches,
failedPatches,
patchDate: new Date().toISOString()
};
}
async generatePatchSummary(report: PatchReport): Promise<void> {
console.log('\n📊 Patch Management Report:');
console.log(` Total Patches: ${report.totalPatches}`);
console.log(` Successful: ${report.successfulPatches}`);
console.log(` Failed: ${report.failedPatches}`);
if (report.failedPatches > 0) {
console.warn('\n⚠️ Failed Patches:');
const failed = report.patches.filter(p => !p.success);
for (const patch of failed) {
console.warn(` ${patch.package}: ${patch.error}`);
}
}
// Save report
await writeFile('patch-report.json', JSON.stringify(report, null, 2));
}
}
// Usage in automated patch pipeline
async function main() {
const manager = new PatchManager();
// Identify patches
const npmPatches = await manager.identifyPatches('npm');
const pythonPatches = await manager.identifyPatches('python');
// Prioritize critical/high severity patches
const criticalPatches = npmPatches.filter(
p => p.severity === 'critical' || p.severity === 'high'
);
  // Apply critical patches; breaking changes are skipped and left for manual review
  const report = await manager.applyPatches(criticalPatches, false);
await manager.generatePatchSummary(report);
// Exit with error if any critical patches failed
if (report.failedPatches > 0) {
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
This patch management pipeline identifies outdated dependencies, applies patches, runs tests, and automatically rolls back on failures. The pipeline prioritizes critical/high-severity patches while skipping breaking changes that require manual review.
For secure container deployment strategies, see our Docker Container Security guide.
CVE Tracking and Remediation Workflow
Comprehensive vulnerability management requires tracking CVEs from detection through remediation and verification. A structured workflow ensures that every vulnerability receives appropriate attention based on severity and exploitability.
// cve-tracker.ts
import { readFile, writeFile } from 'fs/promises';
interface CVERecord {
id: string;
cve: string;
severity: 'critical' | 'high' | 'medium' | 'low';
cvss: number;
component: string;
version: string;
status: 'detected' | 'triaged' | 'in_progress' | 'remediated' | 'verified';
detectedDate: string;
remediationDate?: string;
verifiedDate?: string;
assignedTo?: string;
notes: string[];
}
export class CVETracker {
private records: CVERecord[] = [];
async loadRecords(filePath: string = 'cve-records.json'): Promise<void> {
try {
const data = await readFile(filePath, 'utf-8');
this.records = JSON.parse(data);
} catch (error) {
this.records = [];
}
}
async saveRecords(filePath: string = 'cve-records.json'): Promise<void> {
await writeFile(filePath, JSON.stringify(this.records, null, 2));
}
addCVE(cve: Omit<CVERecord, 'id' | 'detectedDate' | 'status' | 'notes'>): CVERecord {
const record: CVERecord = {
...cve,
id: this.generateId(),
detectedDate: new Date().toISOString(),
status: 'detected',
notes: []
};
this.records.push(record);
return record;
}
updateStatus(cveId: string, status: CVERecord['status'], note?: string): void {
const record = this.records.find(r => r.id === cveId);
if (!record) throw new Error(`CVE record not found: ${cveId}`);
record.status = status;
if (status === 'remediated') {
record.remediationDate = new Date().toISOString();
}
if (status === 'verified') {
record.verifiedDate = new Date().toISOString();
}
if (note) {
record.notes.push(`[${new Date().toISOString()}] ${note}`);
}
}
  private generateId(): string {
    // Internal tracking ID, prefixed VULN- to avoid confusion with real CVE identifiers
    return `VULN-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
  }
getOverdueCVEs(): CVERecord[] {
const now = Date.now();
const slaHours = { critical: 24, high: 168, medium: 720, low: 2160 };
return this.records.filter(record => {
if (record.status === 'remediated' || record.status === 'verified') {
return false;
}
const detectedTime = new Date(record.detectedDate).getTime();
const hoursElapsed = (now - detectedTime) / (1000 * 60 * 60);
const sla = slaHours[record.severity];
return hoursElapsed > sla;
});
}
}
This CVE tracker maintains a structured record of all detected vulnerabilities, tracking their remediation status and ensuring SLA compliance.
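A short usage sketch shows the intended flow: register a finding, move it through triage, and check SLA status. The CVE details below are illustrative.
// cve-tracker-usage.ts
import { CVETracker } from './cve-tracker';

async function trackExample(): Promise<void> {
  const tracker = new CVETracker();
  await tracker.loadRecords();
  // Register a new finding (details are illustrative)
  const record = tracker.addCVE({
    cve: 'CVE-2024-0000',
    severity: 'high',
    cvss: 8.1,
    component: 'example-package',
    version: '1.2.3'
  });
  // Move it through the workflow and check SLA status
  tracker.updateStatus(record.id, 'triaged', 'Assigned to on-call engineer');
  console.log('Overdue CVEs:', tracker.getOverdueCVEs().length);
  await tracker.saveRecords();
}

trackExample().catch(console.error);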
For compliance requirements, see our SOC 2 Certification for ChatGPT Apps guide and Security Auditing and Logging article.
Remediation Workflow Automation
// remediation-workflow.ts
import { CVETracker } from './cve-tracker';
import { PatchManager } from './patch-manager';
export class RemediationWorkflow {
private cveTracker: CVETracker;
private patchManager: PatchManager;
constructor() {
this.cveTracker = new CVETracker();
this.patchManager = new PatchManager();
}
async executeWorkflow(): Promise<void> {
console.log('🚀 Starting vulnerability remediation workflow...');
// Load existing CVE records
await this.cveTracker.loadRecords();
// Identify overdue CVEs
const overdue = this.cveTracker.getOverdueCVEs();
if (overdue.length > 0) {
console.warn(`⚠️ ${overdue.length} overdue CVEs detected`);
for (const cve of overdue) {
console.warn(` ${cve.cve} (${cve.severity}): ${cve.component}`);
}
}
// Identify patches
const patches = await this.patchManager.identifyPatches('npm');
// Apply critical/high patches
const criticalPatches = patches.filter(
p => p.severity === 'critical' || p.severity === 'high'
);
if (criticalPatches.length > 0) {
console.log(`\n📦 Applying ${criticalPatches.length} critical/high patches...`);
const report = await this.patchManager.applyPatches(criticalPatches);
// Update CVE records
for (const result of report.patches) {
if (result.success) {
// Find matching CVE record
const cveRecord = this.cveTracker['records'].find(
r => r.component === result.package && r.status !== 'verified'
);
if (cveRecord) {
this.cveTracker.updateStatus(
cveRecord.id,
'remediated',
`Patched from ${result.oldVersion} to ${result.newVersion}`
);
}
}
}
}
// Save updated CVE records
await this.cveTracker.saveRecords();
console.log('\n✅ Remediation workflow complete');
}
}
This remediation workflow orchestrates the entire vulnerability lifecycle from detection through verification.
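A minimal entry point runs the workflow from a scheduled CI job, mirroring the pattern used by the other scanners in this guide.
// remediation-runner.ts
import { RemediationWorkflow } from './remediation-workflow';

async function main() {
  const workflow = new RemediationWorkflow();
  await workflow.executeWorkflow();
}

if (require.main === module) {
  main().catch(err => {
    console.error('Remediation workflow failed:', err);
    process.exit(1);
  });
}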
Conclusion
Vulnerability management for ChatGPT apps requires a comprehensive, automated approach spanning dependency scanning, container security, static/dynamic testing, and patch management. By implementing the production-ready code examples in this guide, you establish a proactive security posture that detects vulnerabilities early, remediates them quickly, and verifies fixes without introducing regressions.
The vulnerability lifecycle—detection → assessment → remediation → verification—becomes a continuous loop that maintains security as your ChatGPT app evolves. Automated scanners catch new vulnerabilities within hours of disclosure, SLA-based remediation ensures critical issues receive immediate attention, and comprehensive testing prevents security patches from breaking functionality.
For ChatGPT apps processing sensitive data and serving users at scale, vulnerability management is not optional—it's the foundation of trust that enables your application to operate securely in production environments.
Ready to build secure ChatGPT apps with automated vulnerability management? Start your free trial with MakeAIHQ and deploy production-ready security pipelines in minutes, not weeks.
Related Resources
- ChatGPT App Security Best Practices - Comprehensive security pillar guide
- Penetration Testing for ChatGPT Apps - Advanced security testing
- Security Auditing and Logging - Compliance and monitoring
- Incident Response Planning - Security incident procedures
- SOC 2 Certification for ChatGPT Apps - Compliance requirements
- DevOps CI/CD for ChatGPT Apps - Secure deployment pipelines
- Docker Container Security - Container hardening strategies
External References
- NIST National Vulnerability Database - Official CVE database
- OWASP Dependency Check - Dependency scanning tools
- Snyk Vulnerability Database - Comprehensive vulnerability intelligence