Security Audit Compliance: SOC 2, ISO 27001 & HIPAA Audits
Introduction: Building Trust Through Compliance
When deploying ChatGPT applications that handle sensitive data, security compliance isn't optional—it's a business requirement. Enterprise customers, healthcare providers, and financial institutions demand proof that your AI applications meet rigorous security standards through independent audits like SOC 2, ISO 27001, and HIPAA.
These compliance frameworks provide structured approaches to information security, demonstrating to customers, partners, and regulators that you've implemented proper controls to protect their data. Achieving compliance certification isn't just about checking boxes; it's about building a security-first culture that permeates every aspect of your ChatGPT application development.
This comprehensive guide walks you through preparing for major security audits, implementing audit-ready logging systems, collecting evidence systematically, and maintaining continuous compliance. Whether you're pursuing your first SOC 2 Type II report or adding HIPAA compliance to an existing ISO 27001 program, you'll find production-ready code examples and practical strategies that auditors expect to see.
For organizations building ChatGPT applications on platforms like MakeAIHQ, understanding these compliance requirements early in development prevents costly retrofitting and accelerates time-to-certification.
SOC 2 Type II Preparation: Trust Service Criteria
System and Organization Controls (SOC) 2 Type II audits evaluate your security controls over an observation period (typically 6 to 12 months), focusing on five Trust Service Criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. For ChatGPT applications, auditors scrutinize how you protect user prompts, model responses, and any personally identifiable information (PII) processed through your AI systems.
The key distinction between SOC 2 Type I (design evaluation) and Type II (operating effectiveness) means you must demonstrate consistent, documented application of security controls over time. This requires comprehensive audit logging, access controls, incident response procedures, and evidence of regular security reviews.
Successful SOC 2 preparation involves mapping your ChatGPT application architecture to the Trust Service Criteria, implementing detective and preventive controls, and maintaining audit evidence that proves continuous operation of these controls. Automated control validation significantly reduces audit preparation time while ensuring nothing slips through the cracks.
Production-Ready SOC 2 Control Validator
/**
* SOC 2 Control Validator
* Automates validation of Trust Service Criteria controls
* Generates audit-ready evidence for SOC 2 Type II reports
*/
interface ControlTest {
controlId: string;
category: 'Security' | 'Availability' | 'Processing Integrity' | 'Confidentiality' | 'Privacy';
description: string;
testProcedure: () => Promise<ControlTestResult>;
frequency: 'daily' | 'weekly' | 'monthly';
}
interface ControlTestResult {
passed: boolean;
evidence: string[];
findings: string[];
timestamp: Date;
testedBy: string;
}
interface SOC2Report {
reportDate: Date;
periodStart: Date;
periodEnd: Date;
controlsTested: number;
controlsPassed: number;
exceptions: ControlException[];
evidencePackage: string[];
}
interface ControlException {
controlId: string;
severity: 'high' | 'medium' | 'low';
description: string;
remediation: string;
remediationDate?: Date;
}
class SOC2ControlValidator {
private controls: Map<string, ControlTest> = new Map();
private testHistory: Map<string, ControlTestResult[]> = new Map();
constructor() {
this.initializeControls();
}
private initializeControls(): void {
// Security Controls
this.addControl({
controlId: 'SEC-01',
category: 'Security',
description: 'Multi-factor authentication enforced for all administrative access',
frequency: 'daily',
testProcedure: async () => this.testMFAEnforcement()
});
this.addControl({
controlId: 'SEC-02',
category: 'Security',
description: 'ChatGPT API keys rotated every 90 days',
frequency: 'monthly',
testProcedure: async () => this.testAPIKeyRotation()
});
// Availability Controls
this.addControl({
controlId: 'AVL-01',
category: 'Availability',
description: 'ChatGPT service uptime exceeds 99.9% SLA',
frequency: 'monthly',
testProcedure: async () => this.testUptimeSLA()
});
// Confidentiality Controls
this.addControl({
controlId: 'CNF-01',
category: 'Confidentiality',
description: 'All ChatGPT prompts and responses encrypted in transit and at rest',
frequency: 'weekly',
testProcedure: async () => this.testEncryptionControls()
});
// Privacy Controls
this.addControl({
controlId: 'PRV-01',
category: 'Privacy',
description: 'PII detection in ChatGPT interactions with automatic redaction',
frequency: 'daily',
testProcedure: async () => this.testPIIRedaction()
});
}
private addControl(control: ControlTest): void {
this.controls.set(control.controlId, control);
this.testHistory.set(control.controlId, []);
}
private async testMFAEnforcement(): Promise<ControlTestResult> {
// Simulate validation of MFA enforcement
const adminUsers = await this.getAdminUsers();
const mfaEnabled = adminUsers.filter(u => u.mfaEnabled);
return {
passed: mfaEnabled.length === adminUsers.length,
evidence: [
`Total admin users: ${adminUsers.length}`,
`MFA enabled: ${mfaEnabled.length}`,
`MFA enforcement rate: ${(mfaEnabled.length / adminUsers.length * 100).toFixed(2)}%`
],
findings: mfaEnabled.length < adminUsers.length
? [`${adminUsers.length - mfaEnabled.length} admin users without MFA`]
: [],
timestamp: new Date(),
testedBy: 'automated-validator'
};
}
private async testAPIKeyRotation(): Promise<ControlTestResult> {
const apiKeys = await this.getAPIKeys();
const rotationThreshold = 90 * 24 * 60 * 60 * 1000; // 90 days
const now = Date.now();
const staleKeys = apiKeys.filter(key =>
now - key.createdAt.getTime() > rotationThreshold
);
return {
passed: staleKeys.length === 0,
evidence: [
`Total API keys: ${apiKeys.length}`,
`Keys within rotation policy: ${apiKeys.length - staleKeys.length}`,
`Average key age: ${this.calculateAverageAge(apiKeys)} days`
],
findings: staleKeys.length > 0
? [`${staleKeys.length} API keys exceed 90-day rotation policy`]
: [],
timestamp: new Date(),
testedBy: 'automated-validator'
};
}
private async testUptimeSLA(): Promise<ControlTestResult> {
const uptimeMetrics = await this.getUptimeMetrics();
const slaTarget = 99.9;
return {
passed: uptimeMetrics.uptime >= slaTarget,
evidence: [
`Uptime percentage: ${uptimeMetrics.uptime.toFixed(3)}%`,
`Total downtime: ${uptimeMetrics.downtimeMinutes} minutes`,
`SLA target: ${slaTarget}%`
],
findings: uptimeMetrics.uptime < slaTarget
? [`Uptime ${uptimeMetrics.uptime.toFixed(3)}% below SLA target ${slaTarget}%`]
: [],
timestamp: new Date(),
testedBy: 'automated-validator'
};
}
private async testEncryptionControls(): Promise<ControlTestResult> {
const encryptionStatus = await this.validateEncryption();
return {
passed: encryptionStatus.transitEncrypted && encryptionStatus.restEncrypted,
evidence: [
`TLS 1.3 enforced: ${encryptionStatus.transitEncrypted}`,
`AES-256 encryption at rest: ${encryptionStatus.restEncrypted}`,
`Encryption key rotation: ${encryptionStatus.keyRotationDays} days`
],
findings: [],
timestamp: new Date(),
testedBy: 'automated-validator'
};
}
private async testPIIRedaction(): Promise<ControlTestResult> {
const redactionMetrics = await this.getPIIRedactionMetrics();
return {
passed: redactionMetrics.detectionRate >= 99.0,
evidence: [
`PII detection rate: ${redactionMetrics.detectionRate.toFixed(2)}%`,
`Prompts scanned: ${redactionMetrics.promptsScanned}`,
`PII instances detected: ${redactionMetrics.piiDetected}`,
`PII instances redacted: ${redactionMetrics.piiRedacted}`
],
findings: redactionMetrics.detectionRate < 99.0
? ['PII detection rate below 99% threshold']
: [],
timestamp: new Date(),
testedBy: 'automated-validator'
};
}
async runControlTests(category?: string): Promise<SOC2Report> {
const periodEnd = new Date();
const periodStart = new Date(periodEnd);
periodStart.setMonth(periodStart.getMonth() - 6); // 6-month audit period
const exceptions: ControlException[] = [];
const evidencePackage: string[] = [];
let controlsTested = 0;
let controlsPassed = 0;
for (const [controlId, control] of this.controls) {
if (category && control.category !== category) continue;
const result = await control.testProcedure();
this.testHistory.get(controlId)?.push(result);
controlsTested++;
if (result.passed) controlsPassed++;
evidencePackage.push(...result.evidence);
if (!result.passed) {
exceptions.push({
controlId,
severity: 'high',
description: result.findings.join('; '),
remediation: `Address findings for control ${controlId}`
});
}
}
return {
reportDate: new Date(),
periodStart,
periodEnd,
controlsTested,
controlsPassed,
exceptions,
evidencePackage
};
}
// Utility methods
private async getAdminUsers() {
return [
{ id: '1', email: 'admin@example.com', mfaEnabled: true },
{ id: '2', email: 'security@example.com', mfaEnabled: true }
];
}
private async getAPIKeys() {
return [
{ id: 'key1', createdAt: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
{ id: 'key2', createdAt: new Date(Date.now() - 60 * 24 * 60 * 60 * 1000) }
];
}
private calculateAverageAge(keys: { createdAt: Date }[]): number {
const now = Date.now();
const totalAge = keys.reduce((sum, key) =>
sum + (now - key.createdAt.getTime()), 0
);
return Math.floor(totalAge / keys.length / (24 * 60 * 60 * 1000));
}
private async getUptimeMetrics() {
return {
uptime: 99.95,
downtimeMinutes: 21.6
};
}
private async validateEncryption() {
return {
transitEncrypted: true,
restEncrypted: true,
keyRotationDays: 30
};
}
private async getPIIRedactionMetrics() {
return {
promptsScanned: 15000,
piiDetected: 342,
piiRedacted: 342,
detectionRate: 100.0
};
}
}
// Usage Example
const validator = new SOC2ControlValidator();
const report = await validator.runControlTests();
console.log(`SOC 2 Compliance: ${report.controlsPassed}/${report.controlsTested} controls passed`);
This automated validator demonstrates to auditors that you've implemented continuous control monitoring rather than point-in-time testing. The 6-month test history provides the evidence auditors need for SOC 2 Type II certification.
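Continuous monitoring implies running each control test on its declared frequency rather than ad hoc. A minimal scheduling sketch of that idea, kept separate from the validator above (the `ScheduledControl` shape and `dueControls` helper are illustrative, not part of the class):

```typescript
// Decide which controls are due to run, based on their declared test frequency.
type Frequency = 'daily' | 'weekly' | 'monthly';

interface ScheduledControl {
  controlId: string;
  frequency: Frequency;
  lastRun?: Date;
}

const INTERVAL_MS: Record<Frequency, number> = {
  daily: 24 * 60 * 60 * 1000,
  weekly: 7 * 24 * 60 * 60 * 1000,
  monthly: 30 * 24 * 60 * 60 * 1000 // approximation; use a calendar-aware scheduler in production
};

// A control is due if it has never run, or its interval has elapsed since the last run.
function dueControls(controls: ScheduledControl[], now: Date): string[] {
  return controls
    .filter(c => !c.lastRun || now.getTime() - c.lastRun.getTime() >= INTERVAL_MS[c.frequency])
    .map(c => c.controlId);
}

const due = dueControls([
  { controlId: 'SEC-01', frequency: 'daily', lastRun: new Date('2026-05-31T00:00:00Z') },
  { controlId: 'SEC-02', frequency: 'monthly', lastRun: new Date('2026-05-20T00:00:00Z') },
  { controlId: 'AVL-01', frequency: 'monthly' }
], new Date('2026-06-01T00:00:00Z'));
console.log(due); // SEC-01 (24h elapsed) and AVL-01 (never run) are due
```

Feeding the due list into `runControlTests` on a cron trigger gives you the per-frequency cadence auditors look for.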
ISO 27001 Certification: Information Security Management
ISO 27001 certification requires implementing an Information Security Management System (ISMS). Annex A of the 2013 edition defines 114 controls across 14 domains (consolidated into 93 controls in the 2022 revision). For ChatGPT applications, particularly relevant controls include access control (Annex A.9), cryptography (A.10), communications security (A.13), and supplier relationships (A.15), since you're relying on OpenAI's infrastructure.
The risk-based approach central to ISO 27001 means you must identify information assets (user prompts, model outputs, configuration data), assess threats and vulnerabilities, and implement proportionate controls. Your Statement of Applicability (SoA) documents which controls apply to your ChatGPT application and justifies any exclusions.
Unlike SOC 2's focus on service providers, ISO 27001 applies to any organization handling information. The certification process involves internal audits, management review, and external certification audits by accredited bodies. Automation accelerates both preparation and ongoing compliance maintenance.
Production-Ready ISO 27001 Risk Assessment
/**
* ISO 27001 Risk Assessment Engine
* Systematic risk identification and treatment for ChatGPT applications
* Aligns with Annex A controls and ISMS requirements
*/
interface InformationAsset {
id: string;
name: string;
description: string;
classification: 'public' | 'internal' | 'confidential' | 'restricted';
owner: string;
location: string[];
}
interface Threat {
id: string;
name: string;
source: 'internal' | 'external' | 'environmental';
description: string;
}
interface Vulnerability {
id: string;
name: string;
description: string;
affectedAssets: string[];
cveId?: string;
}
interface RiskAssessment {
id: string;
asset: InformationAsset;
threat: Threat;
vulnerability: Vulnerability;
likelihood: 1 | 2 | 3 | 4 | 5;
impact: 1 | 2 | 3 | 4 | 5;
riskScore: number;
riskLevel: 'low' | 'medium' | 'high' | 'critical';
controls: ISO27001Control[];
residualRisk: number;
treatmentPlan: RiskTreatmentPlan;
}
interface ISO27001Control {
controlId: string;
annexReference: string;
description: string;
implemented: boolean;
effectiveness: 'low' | 'medium' | 'high';
evidenceLocation: string;
}
interface RiskTreatmentPlan {
strategy: 'mitigate' | 'accept' | 'transfer' | 'avoid';
actions: string[];
responsibleParty: string;
deadline: Date;
budget?: number;
}
class ISO27001RiskAssessment {
private assets: Map<string, InformationAsset> = new Map();
private threats: Map<string, Threat> = new Map();
private vulnerabilities: Map<string, Vulnerability> = new Map();
private assessments: RiskAssessment[] = [];
private controls: Map<string, ISO27001Control> = new Map();
constructor() {
this.initializeAssets();
this.initializeThreats();
this.initializeVulnerabilities();
this.initializeControls();
}
private initializeAssets(): void {
this.assets.set('chatgpt-prompts', {
id: 'chatgpt-prompts',
name: 'User ChatGPT Prompts',
description: 'All user inputs submitted to ChatGPT API',
classification: 'confidential',
owner: 'Data Protection Officer',
location: ['Application Database', 'Audit Logs', 'OpenAI Servers']
});
this.assets.set('api-keys', {
id: 'api-keys',
name: 'OpenAI API Keys',
description: 'Authentication credentials for ChatGPT API access',
classification: 'restricted',
owner: 'Security Team',
location: ['Secrets Manager', 'Application Configuration']
});
this.assets.set('user-data', {
id: 'user-data',
name: 'User Account Information',
description: 'PII including names, emails, payment information',
classification: 'restricted',
owner: 'Data Protection Officer',
location: ['User Database', 'Payment Processor']
});
}
private initializeThreats(): void {
this.threats.set('data-breach', {
id: 'data-breach',
name: 'Unauthorized Data Access',
source: 'external',
description: 'Attacker gains unauthorized access to ChatGPT prompts or responses'
});
this.threats.set('prompt-injection', {
id: 'prompt-injection',
name: 'Prompt Injection Attack',
source: 'external',
description: 'Malicious user crafts prompts to extract sensitive information or bypass controls'
});
this.threats.set('api-key-leak', {
id: 'api-key-leak',
name: 'API Key Exposure',
source: 'internal',
description: 'OpenAI API keys accidentally committed to source code or leaked'
});
}
private initializeVulnerabilities(): void {
this.vulnerabilities.set('weak-access-controls', {
id: 'weak-access-controls',
name: 'Insufficient Access Controls',
description: 'Lack of role-based access control on ChatGPT administration',
affectedAssets: ['chatgpt-prompts', 'api-keys']
});
this.vulnerabilities.set('unencrypted-storage', {
id: 'unencrypted-storage',
name: 'Unencrypted Data at Rest',
description: 'ChatGPT responses stored without encryption',
affectedAssets: ['chatgpt-prompts']
});
}
private initializeControls(): void {
this.controls.set('A.9.2.1', {
controlId: 'A.9.2.1',
annexReference: 'A.9.2.1',
description: 'User registration and de-registration',
implemented: true,
effectiveness: 'high',
evidenceLocation: 'docs/isms/access-control-procedures.md'
});
this.controls.set('A.10.1.1', {
controlId: 'A.10.1.1',
annexReference: 'A.10.1.1',
description: 'Policy on the use of cryptographic controls',
implemented: true,
effectiveness: 'high',
evidenceLocation: 'docs/isms/encryption-policy.md'
});
this.controls.set('A.13.1.1', {
controlId: 'A.13.1.1',
annexReference: 'A.13.1.1',
description: 'Network controls',
implemented: true,
effectiveness: 'medium',
evidenceLocation: 'docs/isms/network-security.md'
});
this.controls.set('A.15.1.1', {
controlId: 'A.15.1.1',
annexReference: 'A.15.1.1',
description: 'Information security policy for supplier relationships',
implemented: true,
effectiveness: 'high',
evidenceLocation: 'docs/isms/openai-supplier-agreement.md'
});
}
assessRisk(
assetId: string,
threatId: string,
vulnerabilityId: string,
likelihood: 1 | 2 | 3 | 4 | 5,
impact: 1 | 2 | 3 | 4 | 5
): RiskAssessment {
const asset = this.assets.get(assetId);
const threat = this.threats.get(threatId);
const vulnerability = this.vulnerabilities.get(vulnerabilityId);
if (!asset || !threat || !vulnerability) {
throw new Error('Invalid asset, threat, or vulnerability ID');
}
const riskScore = likelihood * impact;
const riskLevel = this.calculateRiskLevel(riskScore);
const applicableControls = this.identifyApplicableControls(assetId, threatId);
const residualRisk = this.calculateResidualRisk(riskScore, applicableControls);
const assessment: RiskAssessment = {
id: `risk-${Date.now()}`,
asset,
threat,
vulnerability,
likelihood,
impact,
riskScore,
riskLevel,
controls: applicableControls,
residualRisk,
treatmentPlan: this.generateTreatmentPlan(riskLevel, riskScore, residualRisk)
};
this.assessments.push(assessment);
return assessment;
}
private calculateRiskLevel(score: number): 'low' | 'medium' | 'high' | 'critical' {
if (score >= 20) return 'critical';
if (score >= 12) return 'high';
if (score >= 6) return 'medium';
return 'low';
}
private identifyApplicableControls(assetId: string, threatId: string): ISO27001Control[] {
// Simplified control mapping
const controlMap: Record<string, string[]> = {
'chatgpt-prompts': ['A.10.1.1', 'A.13.1.1'],
'api-keys': ['A.9.2.1', 'A.10.1.1'],
'user-data': ['A.9.2.1', 'A.10.1.1']
};
const controlIds = controlMap[assetId] || [];
return controlIds.map(id => this.controls.get(id)).filter(Boolean) as ISO27001Control[];
}
private calculateResidualRisk(inherentRisk: number, controls: ISO27001Control[]): number {
const effectivenessReduction: Record<string, number> = {
'low': 0.2,
'medium': 0.5,
'high': 0.7
};
let totalReduction = 0;
controls.forEach(control => {
if (control.implemented) {
totalReduction += effectivenessReduction[control.effectiveness];
}
});
const reduction = Math.min(totalReduction, 0.9); // Max 90% reduction
return Math.ceil(inherentRisk * (1 - reduction));
}
private generateTreatmentPlan(
riskLevel: string,
inherentRisk: number,
residualRisk: number
): RiskTreatmentPlan {
if (residualRisk <= 3) {
return {
strategy: 'accept',
actions: ['Document acceptance rationale', 'Review annually'],
responsibleParty: 'Risk Owner',
deadline: new Date(Date.now() + 365 * 24 * 60 * 60 * 1000)
};
}
if (riskLevel === 'critical') {
return {
strategy: 'mitigate',
actions: [
'Implement additional Annex A controls',
'Conduct penetration testing',
'Deploy intrusion detection systems'
],
responsibleParty: 'CISO',
deadline: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000),
budget: 50000
};
}
return {
strategy: 'mitigate',
actions: ['Enhance existing controls', 'Increase monitoring'],
responsibleParty: 'Security Team',
deadline: new Date(Date.now() + 90 * 24 * 60 * 60 * 1000),
budget: 15000
};
}
generateRiskRegister(): string {
let register = '# ISO 27001 Risk Register\n\n';
register += `Generated: ${new Date().toISOString()}\n\n`;
const byRiskLevel = {
critical: this.assessments.filter(a => a.riskLevel === 'critical'),
high: this.assessments.filter(a => a.riskLevel === 'high'),
medium: this.assessments.filter(a => a.riskLevel === 'medium'),
low: this.assessments.filter(a => a.riskLevel === 'low')
};
Object.entries(byRiskLevel).forEach(([level, risks]) => {
register += `## ${level.toUpperCase()} Risk (${risks.length})\n\n`;
risks.forEach(risk => {
register += `### ${risk.asset.name} - ${risk.threat.name}\n`;
register += `- **Inherent Risk**: ${risk.riskScore} (L:${risk.likelihood} × I:${risk.impact})\n`;
register += `- **Residual Risk**: ${risk.residualRisk}\n`;
register += `- **Treatment**: ${risk.treatmentPlan.strategy}\n`;
register += `- **Deadline**: ${risk.treatmentPlan.deadline.toISOString().split('T')[0]}\n\n`;
});
});
return register;
}
}
// Usage Example
const riskAssessment = new ISO27001RiskAssessment();
const risk = riskAssessment.assessRisk('api-keys', 'api-key-leak', 'weak-access-controls', 4, 5);
console.log(`Risk Level: ${risk.riskLevel}, Treatment: ${risk.treatmentPlan.strategy}`);
const register = riskAssessment.generateRiskRegister();
This risk assessment engine demonstrates systematic ISO 27001 compliance, with clear traceability from assets through threats, vulnerabilities, controls, and treatment plans—exactly what certification auditors require.
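The Statement of Applicability mentioned earlier can be generated straight from the same control inventory. A minimal sketch, where the control data and the `SoAEntry` shape are illustrative:

```typescript
// Generate a Statement of Applicability (SoA) table from a control inventory.
// Excluded controls must carry a documented justification.
interface SoAEntry {
  controlId: string;
  description: string;
  applicable: boolean;
  justification: string; // implementation summary, or rationale for exclusion
}

function generateSoA(entries: SoAEntry[]): string {
  const header = '| Control | Description | Applicable | Justification |\n|---|---|---|---|';
  const rows = entries.map(e =>
    `| ${e.controlId} | ${e.description} | ${e.applicable ? 'Yes' : 'No'} | ${e.justification} |`
  );
  return [header, ...rows].join('\n');
}

const soa = generateSoA([
  { controlId: 'A.10.1.1', description: 'Cryptographic controls policy', applicable: true, justification: 'TLS 1.3 in transit, AES-256 at rest' },
  { controlId: 'A.11.1.1', description: 'Physical security perimeter', applicable: false, justification: 'Fully cloud-hosted; control inherited from IaaS provider' }
]);
console.log(soa);
```

Regenerating the SoA from live control data keeps it synchronized with reality, which is one of the first things a certification auditor checks.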
HIPAA Audit Readiness: Technical Safeguards
If your ChatGPT application processes Protected Health Information (PHI), HIPAA compliance becomes mandatory. The Security Rule requires implementing administrative, physical, and technical safeguards, with technical safeguards most relevant to AI applications: access control (§164.312(a)), audit controls (§164.312(b)), integrity controls (§164.312(c)), and transmission security (§164.312(e)).
HIPAA audits focus heavily on encryption of PHI at rest and in transit, audit logging of all PHI access, and Business Associate Agreements (BAAs) with service providers—including OpenAI. Organizations must document a thorough risk analysis addressing all PHI-containing systems, including ChatGPT conversation logs.
Preparation involves implementing granular access controls, comprehensive audit trails, encryption everywhere PHI exists, and incident response procedures specific to PHI breaches. The Office for Civil Rights (OCR) conducts both desk audits (documentation review) and on-site audits (technical validation).
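The encryption expectation behind §164.312(e) can be demonstrated with authenticated encryption of PHI payloads. A minimal Node.js sketch using AES-256-GCM; key management through a secrets manager is assumed and not shown, and the payload format here is illustrative:

```typescript
import * as crypto from 'crypto';

// Encrypt a PHI payload with AES-256-GCM; the auth tag makes tampering detectable on decrypt.
function encryptPHI(plaintext: string, key: Buffer): { iv: string; tag: string; data: string } {
  const iv = crypto.randomBytes(12); // 96-bit IV, recommended for GCM
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv: iv.toString('hex'), tag: cipher.getAuthTag().toString('hex'), data: data.toString('hex') };
}

function decryptPHI(payload: { iv: string; tag: string; data: string }, key: Buffer): string {
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, Buffer.from(payload.iv, 'hex'));
  decipher.setAuthTag(Buffer.from(payload.tag, 'hex'));
  return Buffer.concat([decipher.update(Buffer.from(payload.data, 'hex')), decipher.final()]).toString('utf8');
}

// In production the key comes from a secrets manager, never from source code.
const key = crypto.randomBytes(32);
const encrypted = encryptPHI('Patient: Jane Doe, Dx: hypertension', key);
console.log(decryptPHI(encrypted, key)); // round-trips to the original plaintext
```

GCM's authentication tag means a modified ciphertext fails to decrypt at all, which pairs naturally with the integrity controls of §164.312(c).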
Production-Ready HIPAA Audit Logger
/**
* HIPAA Compliance Audit Logger
* Comprehensive logging of all PHI access in ChatGPT applications
* Meets HIPAA Security Rule §164.312(b) requirements
*/
interface PHIAccessEvent {
eventId: string;
timestamp: Date;
userId: string;
userRole: string;
action: 'create' | 'read' | 'update' | 'delete' | 'export';
resourceType: 'patient-record' | 'chatgpt-conversation' | 'clinical-note';
resourceId: string;
phiFields: string[];
accessMethod: 'web-ui' | 'api' | 'mobile-app';
ipAddress: string;
outcome: 'success' | 'denied' | 'error';
denialReason?: string;
sessionId: string;
}
interface AuditReport {
reportId: string;
generatedAt: Date;
periodStart: Date;
periodEnd: Date;
totalAccesses: number;
accessesByAction: Record<string, number>;
accessesByUser: Record<string, number>;
deniedAccesses: PHIAccessEvent[];
suspiciousActivity: PHIAccessEvent[];
complianceScore: number;
}
class HIPAAAuditLogger {
private events: PHIAccessEvent[] = [];
private readonly retentionPeriodYears = 6; // HIPAA minimum retention
async logPHIAccess(event: Omit<PHIAccessEvent, 'eventId' | 'timestamp'>): Promise<void> {
const fullEvent: PHIAccessEvent = {
...event,
eventId: this.generateEventId(),
timestamp: new Date()
};
// Validate event completeness
this.validateEvent(fullEvent);
// Store in tamper-proof audit log (append-only)
await this.persistEvent(fullEvent);
// Real-time monitoring for suspicious patterns
await this.detectAnomalies(fullEvent);
// Alert on failed access attempts
if (fullEvent.outcome === 'denied') {
await this.alertSecurityTeam(fullEvent);
}
this.events.push(fullEvent);
}
private generateEventId(): string {
return `hipaa-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
private validateEvent(event: PHIAccessEvent): void {
const requiredFields: (keyof PHIAccessEvent)[] = [
'userId', 'action', 'resourceType', 'resourceId', 'phiFields', 'ipAddress', 'outcome', 'sessionId'
];
for (const field of requiredFields) {
if (!event[field]) {
throw new Error(`HIPAA audit log missing required field: ${field}`);
}
}
if (event.outcome === 'denied' && !event.denialReason) {
throw new Error('Denied access must include denial reason');
}
}
private async persistEvent(event: PHIAccessEvent): Promise<void> {
// In production: write to append-only, encrypted audit database
// Implement write-once-read-many (WORM) storage
console.log(`[AUDIT] ${event.timestamp.toISOString()} - ${event.action} ${event.resourceType} by ${event.userId}`);
}
private async detectAnomalies(event: PHIAccessEvent): Promise<void> {
// Detect unusual access patterns
const recentEvents = this.events.filter(e =>
e.userId === event.userId &&
Date.now() - e.timestamp.getTime() < 3600000 // Last hour
);
// Alert on excessive access
if (recentEvents.length > 50) {
console.warn(`[SECURITY] User ${event.userId} accessed ${recentEvents.length} PHI records in 1 hour`);
}
// Alert on after-hours access
const hour = event.timestamp.getHours();
if (hour < 6 || hour > 22) {
console.warn(`[SECURITY] After-hours PHI access by ${event.userId} at ${event.timestamp.toISOString()}`);
}
}
private async alertSecurityTeam(event: PHIAccessEvent): Promise<void> {
console.error(`[ALERT] Denied PHI access - User: ${event.userId}, Reason: ${event.denialReason}`);
}
async generateAuditReport(periodStart: Date, periodEnd: Date): Promise<AuditReport> {
const periodEvents = this.events.filter(e =>
e.timestamp >= periodStart && e.timestamp <= periodEnd
);
const accessesByAction: Record<string, number> = {};
const accessesByUser: Record<string, number> = {};
const deniedAccesses: PHIAccessEvent[] = [];
const suspiciousActivity: PHIAccessEvent[] = [];
for (const event of periodEvents) {
// Count by action
accessesByAction[event.action] = (accessesByAction[event.action] || 0) + 1;
// Count by user
accessesByUser[event.userId] = (accessesByUser[event.userId] || 0) + 1;
// Collect denied accesses
if (event.outcome === 'denied') {
deniedAccesses.push(event);
}
// Flag suspicious activity
const hour = event.timestamp.getHours();
if (hour < 6 || hour > 22) {
suspiciousActivity.push(event);
}
}
const complianceScore = this.calculateComplianceScore(periodEvents, deniedAccesses);
return {
reportId: `audit-${Date.now()}`,
generatedAt: new Date(),
periodStart,
periodEnd,
totalAccesses: periodEvents.length,
accessesByAction,
accessesByUser,
deniedAccesses,
suspiciousActivity,
complianceScore
};
}
private calculateComplianceScore(events: PHIAccessEvent[], deniedAccesses: PHIAccessEvent[]): number {
let score = 100;
// Penalize excessive denied accesses (guard against an empty reporting period)
const denialRate = events.length > 0 ? deniedAccesses.length / events.length : 0;
if (denialRate > 0.1) score -= 20;
// Penalize missing audit fields (should never happen with validation)
const incompleteEvents = events.filter(e => !e.sessionId);
if (incompleteEvents.length > 0) score -= 30;
// Reward comprehensive logging
if (events.length > 0 && events.every(e => e.phiFields.length > 0)) score += 5;
return Math.max(0, Math.min(100, score));
}
async exportAuditLog(format: 'json' | 'csv' = 'json'): Promise<string> {
if (format === 'json') {
return JSON.stringify(this.events, null, 2);
}
// CSV export for OCR auditors
const headers = 'EventID,Timestamp,User,Action,Resource,Outcome,IP Address';
const rows = this.events.map(e =>
`${e.eventId},${e.timestamp.toISOString()},${e.userId},${e.action},${e.resourceType},${e.outcome},${e.ipAddress}`
);
return [headers, ...rows].join('\n');
}
async purgeExpiredLogs(): Promise<number> {
const retentionMs = this.retentionPeriodYears * 365 * 24 * 60 * 60 * 1000;
const cutoffDate = new Date(Date.now() - retentionMs);
const initialCount = this.events.length;
this.events = this.events.filter(e => e.timestamp >= cutoffDate);
const purgedCount = initialCount - this.events.length;
console.log(`[RETENTION] Purged ${purgedCount} audit logs older than ${this.retentionPeriodYears} years`);
return purgedCount;
}
}
// Usage Example
const auditLogger = new HIPAAAuditLogger();
await auditLogger.logPHIAccess({
userId: 'dr-smith-12345',
userRole: 'physician',
action: 'read',
resourceType: 'chatgpt-conversation',
resourceId: 'conv-789',
phiFields: ['patient-name', 'diagnosis', 'medications'],
accessMethod: 'web-ui',
ipAddress: '10.0.2.45',
outcome: 'success',
sessionId: 'sess-xyz'
});
const report = await auditLogger.generateAuditReport(
new Date('2026-01-01'),
new Date('2026-12-31')
);
console.log(`HIPAA Compliance Score: ${report.complianceScore}/100`);
This HIPAA audit logger provides the granular, tamper-proof logging that OCR auditors expect, with built-in anomaly detection and compliance scoring.
Tamper-Proof Audit Logging
All three compliance frameworks—SOC 2, ISO 27001, and HIPAA—require demonstrating that audit logs cannot be modified or deleted by unauthorized users, including system administrators. Tamper-proof logging typically involves cryptographic hashing, write-once-read-many (WORM) storage, and separation of duties.
For ChatGPT applications, audit logs must capture every interaction with the AI, including prompts submitted, responses received, any PII detected and redacted, access control decisions, and configuration changes. These logs serve as evidence during audits and as investigative tools during security incidents.
Modern tamper-proof logging systems use blockchain-inspired techniques: each log entry includes a hash of the previous entry, creating a tamper-evident chain. Any attempt to modify historical logs breaks the chain, immediately alerting security teams.
Production-Ready Tamper-Proof Log Store
/**
* Tamper-Proof Audit Log Storage
* Cryptographically verifiable audit trail using hash chaining
* Supports SOC 2, ISO 27001, and HIPAA audit requirements
*/
import * as crypto from 'crypto';
interface AuditLogEntry {
sequenceNumber: number;
timestamp: Date;
eventType: string;
userId: string;
data: Record<string, any>;
previousHash: string;
hash: string;
}
interface VerificationResult {
isValid: boolean;
totalEntries: number;
verifiedEntries: number;
tamperedEntries: number[];
verificationTime: number;
}
class TamperProofLogStore {
private logs: AuditLogEntry[] = [];
private sequenceCounter = 0;
append(eventType: string, userId: string, data: Record<string, any>): AuditLogEntry {
const previousHash = this.logs.length > 0
? this.logs[this.logs.length - 1].hash
: '0000000000000000000000000000000000000000000000000000000000000000';
const entry: Omit<AuditLogEntry, 'hash'> = {
sequenceNumber: ++this.sequenceCounter,
timestamp: new Date(),
eventType,
userId,
data,
previousHash
};
const hash = this.calculateHash(entry);
const fullEntry: AuditLogEntry = { ...entry, hash };
this.logs.push(fullEntry);
return fullEntry;
}
private calculateHash(entry: Omit<AuditLogEntry, 'hash'>): string {
const content = JSON.stringify({
seq: entry.sequenceNumber,
ts: entry.timestamp.toISOString(),
type: entry.eventType,
user: entry.userId,
data: entry.data,
prev: entry.previousHash
});
return crypto.createHash('sha256').update(content).digest('hex');
}
verify(): VerificationResult {
const startTime = Date.now();
const tamperedEntries: number[] = [];
for (let i = 0; i < this.logs.length; i++) {
const entry = this.logs[i];
const expectedPreviousHash = i > 0
? this.logs[i - 1].hash
: '0000000000000000000000000000000000000000000000000000000000000000';
// Verify hash chain
if (entry.previousHash !== expectedPreviousHash) {
tamperedEntries.push(i);
continue;
}
// Recalculate hash to verify integrity
const recalculatedHash = this.calculateHash({
sequenceNumber: entry.sequenceNumber,
timestamp: entry.timestamp,
eventType: entry.eventType,
userId: entry.userId,
data: entry.data,
previousHash: entry.previousHash
});
if (recalculatedHash !== entry.hash) {
tamperedEntries.push(i);
}
}
return {
isValid: tamperedEntries.length === 0,
totalEntries: this.logs.length,
verifiedEntries: this.logs.length - tamperedEntries.length,
tamperedEntries,
verificationTime: Date.now() - startTime
};
}
exportForAudit(): string {
return JSON.stringify(this.logs, null, 2);
}
getEntriesByUser(userId: string): AuditLogEntry[] {
return this.logs.filter(log => log.userId === userId);
}
getEntriesByType(eventType: string): AuditLogEntry[] {
return this.logs.filter(log => log.eventType === eventType);
}
getEntriesByDateRange(start: Date, end: Date): AuditLogEntry[] {
return this.logs.filter(log => log.timestamp >= start && log.timestamp <= end);
}
}
// Usage Example
const logStore = new TamperProofLogStore();
logStore.append('chatgpt-query', 'user-123', {
prompt: 'Analyze patient symptoms',
model: 'gpt-4',
tokensUsed: 450
});
logStore.append('phi-access', 'dr-smith', {
resourceId: 'patient-789',
action: 'read'
});
const verification = logStore.verify();
console.log(`Audit log integrity: ${verification.isValid ? 'VERIFIED' : 'COMPROMISED'}`);
This hash-chained log store provides cryptographic evidence of log integrity: any modification to a stored entry breaks the chain and is detected on verification. That tamper-evident property satisfies auditor expectations for protected audit trails across all major compliance frameworks.
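To see the tamper-evidence property in isolation, here is a minimal, self-contained sketch of the same hash-chain idea (the names `appendRecord` and `firstBrokenLink` are illustrative, not part of the store above). Editing any record invalidates its own hash, so verification pinpoints the first broken link:

```typescript
import * as crypto from 'node:crypto';

// Minimal hash chain: each record's hash covers its payload plus the
// previous record's hash, so editing any record breaks the chain there.
interface ChainRecord {
  payload: string;
  prevHash: string;
  hash: string;
}

const sha256 = (s: string): string =>
  crypto.createHash('sha256').update(s).digest('hex');

function appendRecord(chain: ChainRecord[], payload: string): void {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : '0'.repeat(64);
  chain.push({ payload, prevHash, hash: sha256(payload + prevHash) });
}

// Returns the index of the first record that fails verification, or -1 if intact
function firstBrokenLink(chain: ChainRecord[]): number {
  for (let i = 0; i < chain.length; i++) {
    const expectedPrev = i > 0 ? chain[i - 1].hash : '0'.repeat(64);
    if (chain[i].prevHash !== expectedPrev) return i;
    if (chain[i].hash !== sha256(chain[i].payload + chain[i].prevHash)) return i;
  }
  return -1;
}

const chain: ChainRecord[] = [];
appendRecord(chain, 'user-123 queried gpt-4');
appendRecord(chain, 'dr-smith read patient-789');
appendRecord(chain, 'admin rotated api key');

console.log(firstBrokenLink(chain)); // -1: chain intact

chain[1].payload = 'dr-smith read patient-000'; // simulate tampering
console.log(firstBrokenLink(chain)); // 1: tampering detected at index 1
```

Because each hash binds its predecessor, an attacker who alters one entry must recompute every subsequent hash, which is exactly what periodic verification against an externally anchored head hash makes detectable.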
Evidence Collection and Management
Compliance audits require extensive documentation: policies, procedures, system diagrams, access logs, incident reports, training records, and evidence of control operation. For ChatGPT applications, you'll need evidence of API key rotation, encryption implementation, access control enforcement, and vendor management (OpenAI contracts and BAAs).
Systematic evidence collection throughout the year—rather than scrambling before audits—dramatically reduces audit preparation time. Modern compliance automation tools continuously collect evidence from your systems, linking technical artifacts to specific control requirements.
For ChatGPT applications specifically, maintain evidence of: prompt filtering implementation, PII detection accuracy, model output validation, rate limiting enforcement, conversation logging, and data retention policy enforcement. Each piece of evidence should tie directly to a control requirement in your chosen framework.
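The evidence-to-control mapping described above can be sketched as a typed lookup table. The control IDs and artifact names below are illustrative examples, not authoritative clause references; confirm the exact mappings for your scope with your auditor:

```typescript
// Illustrative mapping of ChatGPT-specific evidence to control references.
// Control IDs shown are examples only; verify exact clauses with your auditor.
type Framework = 'SOC2' | 'ISO27001' | 'HIPAA';

interface EvidenceMapping {
  evidence: string;
  artifact: string; // where the proof lives
  controls: Partial<Record<Framework, string>>;
}

const chatgptEvidenceMap: EvidenceMapping[] = [
  {
    evidence: 'Prompt filtering implementation',
    artifact: 'filter config + unit test results',
    controls: { SOC2: 'CC6.1', ISO27001: 'A.8.26' },
  },
  {
    evidence: 'PII detection accuracy',
    artifact: 'quarterly accuracy report',
    controls: { SOC2: 'CC7.1', HIPAA: '164.514(b)' },
  },
  {
    evidence: 'Conversation logging',
    artifact: 'log retention config + sample export',
    controls: { SOC2: 'CC7.2', HIPAA: '164.312(b)' },
  },
  {
    evidence: 'Data retention enforcement',
    artifact: 'deletion job logs',
    controls: { ISO27001: 'A.8.10', HIPAA: '164.310(d)(2)' },
  },
];

// Quick gap check: which evidence items lack a mapped control for a framework?
const gapsFor = (fw: Framework): string[] =>
  chatgptEvidenceMap.filter(m => !m.controls[fw]).map(m => m.evidence);

console.log(gapsFor('HIPAA')); // [ 'Prompt filtering implementation' ]
```

A table like this doubles as an audit-readiness checklist: any evidence item without a control reference for your target framework is either a documentation gap or intentionally out of scope, and either way worth deciding before the auditor asks.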
Production-Ready Evidence Collector
/**
* Compliance Evidence Collector
* Automated evidence gathering for SOC 2, ISO 27001, and HIPAA audits
* Links technical artifacts to control requirements
*/
import * as crypto from 'node:crypto';
interface EvidenceItem {
id: string;
collectedAt: Date;
controlFramework: 'SOC2' | 'ISO27001' | 'HIPAA';
controlId: string;
evidenceType: 'screenshot' | 'log-export' | 'configuration' | 'report' | 'document';
description: string;
filePath: string;
hash: string;
collectedBy: string;
}
interface EvidencePackage {
packageId: string;
createdAt: Date;
framework: string;
auditPeriodStart: Date;
auditPeriodEnd: Date;
items: EvidenceItem[];
completenessScore: number;
}
class ComplianceEvidenceCollector {
private evidence: Map<string, EvidenceItem[]> = new Map();
async collectEvidence(
framework: 'SOC2' | 'ISO27001' | 'HIPAA',
controlId: string,
evidenceType: EvidenceItem['evidenceType'],
description: string,
filePath: string
): Promise<EvidenceItem> {
const fileHash = await this.hashFile(filePath);
const item: EvidenceItem = {
id: `evidence-${crypto.randomUUID()}`, // unique even for rapid successive calls
collectedAt: new Date(),
controlFramework: framework,
controlId,
evidenceType,
description,
filePath,
hash: fileHash,
collectedBy: 'automated-collector'
};
const key = `${framework}-${controlId}`;
if (!this.evidence.has(key)) {
this.evidence.set(key, []);
}
this.evidence.get(key)!.push(item);
return item;
}
private async hashFile(filePath: string): Promise<string> {
// Placeholder for demonstration: in production, read the file contents
// (e.g. with fs.promises.readFile) and hash them so the digest can later
// verify the artifact. Hashing the path plus a timestamp is not verifiable.
return crypto.createHash('sha256')
.update(filePath + Date.now())
.digest('hex');
}
async collectChatGPTEvidence(): Promise<void> {
// SOC 2 Evidence
await this.collectEvidence(
'SOC2',
'CC6.1',
'configuration',
'ChatGPT API encryption configuration',
'/configs/chatgpt-tls.json'
);
await this.collectEvidence(
'SOC2',
'CC6.6',
'log-export',
'API key rotation logs (90-day period)',
'/logs/api-key-rotation.csv'
);
// ISO 27001 Evidence
await this.collectEvidence(
'ISO27001',
'A.10.1.1',
'document',
'Cryptographic controls policy',
'/docs/encryption-policy.pdf'
);
await this.collectEvidence(
'ISO27001',
'A.15.1.1',
'document',
'OpenAI Business Associate Agreement',
'/contracts/openai-baa.pdf'
);
// HIPAA Evidence
await this.collectEvidence(
'HIPAA',
'164.312(a)(1)',
'screenshot',
'Role-based access control configuration',
'/screenshots/rbac-settings.png'
);
await this.collectEvidence(
'HIPAA',
'164.312(b)',
'log-export',
'PHI access audit logs',
'/logs/phi-access-audit.json'
);
}
generateEvidencePackage(
framework: string,
periodStart: Date,
periodEnd: Date
): EvidencePackage {
const relevantEvidence: EvidenceItem[] = [];
for (const [key, items] of this.evidence) {
if (key.startsWith(framework)) {
const periodItems = items.filter(item =>
item.collectedAt >= periodStart && item.collectedAt <= periodEnd
);
relevantEvidence.push(...periodItems);
}
}
const completenessScore = this.calculateCompleteness(framework, relevantEvidence);
return {
packageId: `pkg-${framework}-${Date.now()}`,
createdAt: new Date(),
framework,
auditPeriodStart: periodStart,
auditPeriodEnd: periodEnd,
items: relevantEvidence,
completenessScore
};
}
private calculateCompleteness(framework: string, evidence: EvidenceItem[]): number {
const requiredControls: Record<string, string[]> = {
'SOC2': ['CC6.1', 'CC6.6', 'CC7.2'],
'ISO27001': ['A.9.2.1', 'A.10.1.1', 'A.15.1.1'],
'HIPAA': ['164.312(a)(1)', '164.312(b)', '164.312(e)(1)']
};
const required = requiredControls[framework] || [];
const collected = new Set(evidence.map(e => e.controlId));
const coverage = required.filter(r => collected.has(r)).length / required.length;
return Math.round(coverage * 100);
}
exportPackage(pkg: EvidencePackage): string {
let report = `# ${pkg.framework} Evidence Package\n\n`;
report += `**Package ID**: ${pkg.packageId}\n`;
report += `**Audit Period**: ${pkg.auditPeriodStart.toISOString().split('T')[0]} to ${pkg.auditPeriodEnd.toISOString().split('T')[0]}\n`;
report += `**Completeness**: ${pkg.completenessScore}%\n\n`;
const byControl = pkg.items.reduce((acc, item) => {
if (!acc[item.controlId]) acc[item.controlId] = [];
acc[item.controlId].push(item);
return acc;
}, {} as Record<string, EvidenceItem[]>);
for (const [controlId, items] of Object.entries(byControl)) {
report += `## Control ${controlId} (${items.length} items)\n\n`;
items.forEach(item => {
report += `- **${item.evidenceType}**: ${item.description}\n`;
report += ` - File: ${item.filePath}\n`;
report += ` - Hash: ${item.hash}\n`;
report += ` - Collected: ${item.collectedAt.toISOString()}\n\n`;
});
}
return report;
}
}
// Usage Example
const collector = new ComplianceEvidenceCollector();
await collector.collectChatGPTEvidence();
// Note: `package` is a reserved word in TypeScript modules, so use another name
const evidencePackage = collector.generateEvidencePackage(
'SOC2',
new Date('2026-01-01'),
new Date('2026-06-30')
);
console.log(`Evidence package completeness: ${evidencePackage.completenessScore}%`);
This evidence collector automates the tedious task of gathering audit artifacts, ensuring you have comprehensive documentation when auditors arrive.
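The file hash recorded at collection time also lets you prove at audit time that an artifact has not changed since it was gathered. A minimal, standalone sketch of that verification step, assuming the artifact's bytes are available as a Buffer:

```typescript
import * as crypto from 'node:crypto';

// Re-hash an artifact and compare against the digest recorded at collection
// time; a mismatch means the evidence changed after it was collected.
function verifyArtifact(contents: Buffer, recordedHash: string): boolean {
  const currentHash = crypto.createHash('sha256').update(contents).digest('hex');
  // timingSafeEqual compares fixed-length digests without early exit
  return crypto.timingSafeEqual(
    Buffer.from(currentHash, 'hex'),
    Buffer.from(recordedHash, 'hex')
  );
}

const artifact = Buffer.from('{"tls":"1.3","cipher":"TLS_AES_256_GCM_SHA384"}');
const recorded = crypto.createHash('sha256').update(artifact).digest('hex');

console.log(verifyArtifact(artifact, recorded));                     // true
console.log(verifyArtifact(Buffer.from('{"tls":"1.0"}'), recorded)); // false
```

Running this check over an entire evidence package before handing it to auditors catches artifacts that were edited, regenerated, or corrupted after collection.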
Compliance Dashboard
Visualizing compliance status helps identify gaps before audits and demonstrates ongoing compliance monitoring to stakeholders. A comprehensive compliance dashboard tracks control implementation, test results, evidence collection, and risk treatment—all in real-time.
Production-Ready Compliance Dashboard
/**
* Compliance Dashboard Component
* Real-time visualization of SOC 2, ISO 27001, and HIPAA compliance status
*/
import React, { useState, useEffect } from 'react';
interface ComplianceMetrics {
framework: string;
controlsImplemented: number;
totalControls: number;
lastAuditDate: Date;
nextAuditDate: Date;
openFindings: number;
evidenceCompleteness: number;
}
const ComplianceDashboard: React.FC = () => {
const [metrics, setMetrics] = useState<ComplianceMetrics[]>([]);
useEffect(() => {
// Sample data for illustration; in production, fetch from your compliance API
setMetrics([
{
framework: 'SOC 2 Type II',
controlsImplemented: 52,
totalControls: 64,
lastAuditDate: new Date('2024-06-15'),
nextAuditDate: new Date('2026-06-15'),
openFindings: 3,
evidenceCompleteness: 87
},
{
framework: 'ISO 27001',
controlsImplemented: 98,
totalControls: 114,
lastAuditDate: new Date('2024-09-01'),
nextAuditDate: new Date('2026-09-01'),
openFindings: 7,
evidenceCompleteness: 76
},
{
framework: 'HIPAA',
controlsImplemented: 42,
totalControls: 45,
lastAuditDate: new Date('2024-11-01'),
nextAuditDate: new Date('2026-11-01'),
openFindings: 2,
evidenceCompleteness: 93
}
]);
}, []);
const getCompliancePercentage = (implemented: number, total: number): number => {
return Math.round((implemented / total) * 100);
};
const getStatusColor = (percentage: number): string => {
if (percentage >= 90) return '#10b981'; // Green
if (percentage >= 70) return '#f59e0b'; // Yellow
return '#ef4444'; // Red
};
return (
<div style={{ padding: '20px', fontFamily: 'Arial, sans-serif' }}>
<h1>Security Compliance Dashboard</h1>
{metrics.map((metric) => {
const percentage = getCompliancePercentage(
metric.controlsImplemented,
metric.totalControls
);
const color = getStatusColor(percentage);
return (
<div
key={metric.framework}
style={{
border: '1px solid #e5e7eb',
borderRadius: '8px',
padding: '20px',
marginBottom: '20px',
backgroundColor: '#ffffff'
}}
>
<h2>{metric.framework}</h2>
<div style={{ marginBottom: '15px' }}>
<div style={{ display: 'flex', justifyContent: 'space-between', marginBottom: '5px' }}>
<span>Control Implementation</span>
<span style={{ fontWeight: 'bold', color }}>{percentage}%</span>
</div>
<div style={{
width: '100%',
height: '20px',
backgroundColor: '#e5e7eb',
borderRadius: '10px',
overflow: 'hidden'
}}>
<div style={{
width: `${percentage}%`,
height: '100%',
backgroundColor: color,
transition: 'width 0.3s ease'
}} />
</div>
<div style={{ fontSize: '14px', color: '#6b7280', marginTop: '5px' }}>
{metric.controlsImplemented} of {metric.totalControls} controls implemented
</div>
</div>
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr', gap: '15px' }}>
<div>
<div style={{ fontSize: '14px', color: '#6b7280' }}>Last Audit</div>
<div style={{ fontSize: '16px', fontWeight: 'bold' }}>
{metric.lastAuditDate.toLocaleDateString()}
</div>
</div>
<div>
<div style={{ fontSize: '14px', color: '#6b7280' }}>Next Audit</div>
<div style={{ fontSize: '16px', fontWeight: 'bold' }}>
{metric.nextAuditDate.toLocaleDateString()}
</div>
</div>
<div>
<div style={{ fontSize: '14px', color: '#6b7280' }}>Open Findings</div>
<div style={{
fontSize: '16px',
fontWeight: 'bold',
color: metric.openFindings > 5 ? '#ef4444' : '#10b981'
}}>
{metric.openFindings}
</div>
</div>
<div>
<div style={{ fontSize: '14px', color: '#6b7280' }}>Evidence Completeness</div>
<div style={{ fontSize: '16px', fontWeight: 'bold' }}>
{metric.evidenceCompleteness}%
</div>
</div>
</div>
</div>
);
})}
<div style={{
marginTop: '30px',
padding: '15px',
backgroundColor: '#fef3c7',
borderRadius: '8px',
border: '1px solid #fbbf24'
}}>
<h3 style={{ marginTop: 0, color: '#92400e' }}>Action Items</h3>
<ul style={{ color: '#92400e', margin: 0 }}>
<li>Complete evidence collection for ISO 27001 controls A.9.x (24% remaining)</li>
<li>Address 3 open SOC 2 findings before next audit (194 days)</li>
<li>Schedule HIPAA technical safeguards review (due in 30 days)</li>
</ul>
</div>
</div>
);
};
export default ComplianceDashboard;
This React dashboard provides executives and compliance teams with real-time visibility into audit readiness, helping prioritize remediation efforts and track progress toward certification.
Conclusion: Building Audit-Ready ChatGPT Applications
Achieving SOC 2, ISO 27001, or HIPAA certification for ChatGPT applications requires more than implementing security controls—it demands systematic evidence collection, continuous monitoring, and a compliance-first culture. The production-ready code examples in this guide demonstrate how to automate control validation, risk assessment, audit logging, evidence collection, and compliance reporting.
Start your compliance journey by selecting the most relevant framework for your business (SOC 2 for SaaS, ISO 27001 for global reach, HIPAA for healthcare), implementing comprehensive audit logging from day one, and collecting evidence continuously rather than scrambling before audits. Remember that compliance is an ongoing process, not a one-time achievement.
Ready to build audit-ready ChatGPT applications without the compliance headaches? Start your free trial at MakeAIHQ and deploy SOC 2-compliant ChatGPT apps in 48 hours. Our platform includes built-in audit logging, evidence collection, and compliance reporting—so you can focus on building great AI experiences while we handle the compliance infrastructure.
Internal Links
- ChatGPT Security Deep Dive: Encryption, Authentication & Threat Models
- HIPAA Compliance for ChatGPT Apps: PHI Protection & BAAs
- Security Compliance Checklist: Pre-Launch Audit Guide
- Audit Logging Best Practices for ChatGPT Applications
- Incident Response Planning for AI Applications
- Data Privacy in ChatGPT Apps: GDPR, CCPA & Global Regulations
- Penetration Testing ChatGPT Applications
External Resources
- AICPA SOC 2 Trust Service Criteria
- ISO/IEC 27001:2022 Information Security Standard
- HHS HIPAA Security Rule Technical Safeguards
Last updated: December 2026 | Article ID: cluster-security-audit-compliance