Slack Bot Development for ChatGPT Apps: Complete Guide
Building Slack bots that integrate with ChatGPT apps opens new possibilities for workplace automation. This comprehensive guide covers everything from Bolt framework fundamentals to advanced integration patterns, helping you create production-ready Slack bots that leverage ChatGPT's conversational AI capabilities.
Understanding Slack Bot Architecture for ChatGPT Integration
Slack bots that work with ChatGPT apps require a solid architectural foundation. The Bolt framework provides the most efficient path to building robust, scalable Slack applications that can communicate with ChatGPT's API and handle complex conversational workflows.
Why Bolt Framework for ChatGPT-Powered Slack Bots
The Bolt framework simplifies Slack app development by abstracting complex API interactions into intuitive, event-driven patterns. When building ChatGPT-powered applications, Bolt's middleware architecture allows seamless integration with OpenAI's API while maintaining clean, maintainable code.
Unlike custom HTTP handlers or older Slack SDKs, Bolt provides built-in support for the following (a minimal listener example follows the list):
- Event subscriptions with automatic verification
- Interactive components (buttons, modals, select menus)
- Slash commands with type-safe handlers
- OAuth flows for workspace installation
- Rate limiting and retry mechanisms
- WebSocket connections for Socket Mode development
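Before wiring in ChatGPT, it helps to see the shape of a bare Bolt listener. The sketch below is a minimal, self-contained example of the event-driven pattern (the reply text is illustrative):
// minimal-bot.js - a minimal Bolt app, shown only to illustrate the listener pattern
const { App } = require('@slack/bolt');
const app = new App({
token: process.env.SLACK_BOT_TOKEN,
signingSecret: process.env.SLACK_SIGNING_SECRET,
});
// React whenever the bot is @-mentioned
app.event('app_mention', async ({ event, say }) => {
await say({ text: `Hi <@${event.user}>! Ask me something with /chatgpt.`, thread_ts: event.ts });
});
(async () => {
await app.start(process.env.PORT || 3000);
console.log('⚡️ Minimal bot running');
})();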
Setting Up Your Bolt-Based Slack Bot
Here's a complete Bolt application configured for ChatGPT integration with proper error handling, logging, and security:
// slack-bot-app.js
const { App, LogLevel } = require('@slack/bolt');
// Note: this guide uses the legacy openai v3 SDK (Configuration/OpenAIApi);
// the v4+ SDK replaces it with `new OpenAI()` and `client.chat.completions.create()`
const { Configuration, OpenAIApi } = require('openai');
// Initialize OpenAI client
const openaiConfig = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(openaiConfig);
// Initialize Bolt app with proper configuration
const app = new App({
token: process.env.SLACK_BOT_TOKEN,
signingSecret: process.env.SLACK_SIGNING_SECRET,
socketMode: process.env.NODE_ENV === 'development',
appToken: process.env.SLACK_APP_TOKEN, // For Socket Mode
logLevel: process.env.LOG_LEVEL || LogLevel.INFO,
// Custom route for health checks
customRoutes: [
{
path: '/health',
method: ['GET'],
handler: (req, res) => {
res.writeHead(200);
res.end('OK');
},
},
],
});
// Conversation context storage (use Redis in production)
const conversationContexts = new Map();
// Middleware: Rate limiting per user
const rateLimitMap = new Map();
const RATE_LIMIT_WINDOW = 60000; // 1 minute
const MAX_REQUESTS_PER_WINDOW = 20;
app.use(async ({ context, next, body }) => {
const userId = body.user_id || body.user?.id || body.event?.user; // slash commands, interactive payloads, and events
if (userId) {
const now = Date.now();
const userRateData = rateLimitMap.get(userId) || { count: 0, resetTime: now + RATE_LIMIT_WINDOW };
if (now > userRateData.resetTime) {
// Reset window
rateLimitMap.set(userId, { count: 1, resetTime: now + RATE_LIMIT_WINDOW });
} else if (userRateData.count >= MAX_REQUESTS_PER_WINDOW) {
// Note: this propagates to Bolt's global error handler (app.error); the user is not notified automatically
throw new Error('Rate limit exceeded. Please wait a minute.');
} else {
userRateData.count++;
rateLimitMap.set(userId, userRateData);
}
}
await next();
});
// Middleware: Logging
app.use(async ({ body, next, logger }) => {
const startTime = Date.now();
logger.info(`Incoming request: ${JSON.stringify(body, null, 2)}`);
await next();
const duration = Date.now() - startTime;
logger.info(`Request completed in ${duration}ms`);
});
// Helper: Call ChatGPT with conversation context
async function callChatGPT(userId, channelId, userMessage, systemPrompt = null) {
const contextKey = `${userId}-${channelId}`;
let context = conversationContexts.get(contextKey) || [];
// Build the messages array from a copy of the context, so pushing the user turn doesn't mutate stored history
const messages = systemPrompt
? [{ role: 'system', content: systemPrompt }, ...context]
: [...context];
messages.push({ role: 'user', content: userMessage });
try {
const response = await openai.createChatCompletion({
model: 'gpt-4',
messages: messages,
max_tokens: 500,
temperature: 0.7,
user: userId, // OpenAI user tracking
});
const assistantMessage = response.data.choices[0].message.content;
// Update context (keep the last 10 exchanges, i.e. 20 messages)
context.push(
{ role: 'user', content: userMessage },
{ role: 'assistant', content: assistantMessage }
);
if (context.length > 20) {
context = context.slice(-20);
}
conversationContexts.set(contextKey, context);
return {
content: assistantMessage,
usage: response.data.usage,
};
} catch (error) {
console.error('OpenAI API Error:', error.response?.data || error.message);
throw new Error('Failed to get ChatGPT response. Please try again.');
}
}
// Export the app, the ChatGPT helper, and the context store (other modules clear contexts through it)
module.exports = { app, callChatGPT, conversationContexts };
// Start server (only if not imported as module)
if (require.main === module) {
(async () => {
const port = process.env.PORT || 3000;
await app.start(port);
console.log(`⚡️ Slack bot is running on port ${port}`);
})();
}
This Bolt application provides a production-ready foundation for a ChatGPT-powered Slack bot, with error handling, rate limiting, and conversation context management built in, the same plumbing that no-code ChatGPT apps take care of automatically.
Implementing Slash Commands for ChatGPT Interactions
Slash commands provide the most intuitive way for users to interact with your ChatGPT-powered Slack bot. Here's a comprehensive implementation with advanced features:
// slash-commands.js
const { app, callChatGPT, conversationContexts } = require('./slack-bot-app');
// /chatgpt command - Main conversation interface
app.command('/chatgpt', async ({ command, ack, client, logger }) => {
await ack();
const { text, user_id, channel_id } = command;
if (!text || text.trim().length === 0) {
await client.chat.postEphemeral({
channel: channel_id,
user: user_id,
text: '❌ Please provide a message. Usage: `/chatgpt your question here`',
});
return;
}
try {
// Post loading message (chat.postMessage has no `user` argument; the placeholder is visible to the channel)
const loadingMsg = await client.chat.postMessage({
channel: channel_id,
text: '🤔 Thinking...',
});
// Call ChatGPT
const response = await callChatGPT(user_id, channel_id, text);
// Update with response
await client.chat.update({
channel: channel_id,
ts: loadingMsg.ts,
text: response.content,
blocks: [
{
type: 'section',
text: {
type: 'mrkdwn',
text: response.content,
},
},
{
type: 'context',
elements: [
{
type: 'mrkdwn',
text: `💬 Tokens used: ${response.usage.total_tokens} | <@${user_id}>`,
},
],
},
{
type: 'actions',
elements: [
{
type: 'button',
text: { type: 'plain_text', text: '🔄 Regenerate' },
action_id: 'regenerate_response',
value: JSON.stringify({ original_prompt: text, channel_id, user_id }),
},
{
type: 'button',
text: { type: 'plain_text', text: '📋 Copy' },
action_id: 'copy_response',
// Button values are capped at 2,000 characters, so truncate long responses
value: response.content.slice(0, 2000),
},
{
type: 'button',
text: { type: 'plain_text', text: '🗑️ Clear Context' },
action_id: 'clear_context',
value: JSON.stringify({ channel_id, user_id }),
style: 'danger',
},
],
},
],
});
logger.info(`ChatGPT response sent to user ${user_id} in channel ${channel_id}`);
} catch (error) {
logger.error('Error handling /chatgpt command:', error);
await client.chat.postEphemeral({
channel: channel_id,
user: user_id,
text: `❌ Error: ${error.message}`,
});
}
});
// /chatgpt-reset command - Clear conversation context
app.command('/chatgpt-reset', async ({ command, ack, client }) => {
await ack();
const { user_id, channel_id } = command;
const contextKey = `${user_id}-${channel_id}`;
conversationContexts.delete(contextKey);
await client.chat.postEphemeral({
channel: channel_id,
user: user_id,
text: '✅ Conversation context cleared. Starting fresh!',
});
});
// /chatgpt-help command - Show usage instructions
app.command('/chatgpt-help', async ({ command, ack, client }) => {
await ack();
await client.chat.postEphemeral({
channel: command.channel_id,
user: command.user_id,
blocks: [
{
type: 'header',
text: { type: 'plain_text', text: '🤖 ChatGPT Bot Help' },
},
{
type: 'section',
text: {
type: 'mrkdwn',
text: '*Available Commands:*',
},
},
{
type: 'section',
text: {
type: 'mrkdwn',
text: '• `/chatgpt <message>` - Chat with GPT-4\n• `/chatgpt-reset` - Clear conversation history\n• `/chatgpt-help` - Show this help message',
},
},
{
type: 'divider',
},
{
type: 'section',
text: {
type: 'mrkdwn',
text: '*Tips:*\n• Your conversation context is maintained per channel\n• Use the "Regenerate" button to get alternative responses\n• Clear context if you want to start a new topic',
},
},
],
});
});
module.exports = { app };
These slash commands integrate seamlessly with AI-powered app builders, providing users with an intuitive conversational interface.
Building Interactive Message Handlers
Interactive messages transform static bot responses into dynamic, actionable interfaces. Here's how to handle buttons, select menus, and other interactive components:
// interactive-handlers.js
const { app, callChatGPT, conversationContexts } = require('./slack-bot-app');
// Handle "Regenerate" button clicks
app.action('regenerate_response', async ({ ack, body, action, client, logger }) => {
await ack();
// `action` comes from Bolt's listener arguments; the payload body only carries an `actions` array
const { channel, message, user } = body;
const { original_prompt, channel_id, user_id } = JSON.parse(action.value);
try {
// Update message to show loading state
await client.chat.update({
channel: channel.id,
ts: message.ts,
text: '🔄 Regenerating response...',
blocks: [
{
type: 'section',
text: { type: 'mrkdwn', text: '🔄 Regenerating response...' },
},
],
});
// Call ChatGPT again with same prompt
const response = await callChatGPT(user_id, channel_id, original_prompt);
// Update with new response
await client.chat.update({
channel: channel.id,
ts: message.ts,
text: response.content,
blocks: [
{
type: 'section',
text: { type: 'mrkdwn', text: response.content },
},
{
type: 'context',
elements: [
{
type: 'mrkdwn',
text: `💬 Tokens: ${response.usage.total_tokens} | 🔄 Regenerated | <@${user_id}>`,
},
],
},
{
type: 'actions',
elements: [
{
type: 'button',
text: { type: 'plain_text', text: '🔄 Regenerate Again' },
action_id: 'regenerate_response',
value: action.value,
},
{
type: 'button',
text: { type: 'plain_text', text: '📋 Copy' },
action_id: 'copy_response',
// Button values are capped at 2,000 characters, so truncate long responses
value: response.content.slice(0, 2000),
},
],
},
],
});
} catch (error) {
logger.error('Error regenerating response:', error);
await client.chat.postEphemeral({
channel: channel.id,
user: user.id,
text: `❌ Failed to regenerate: ${error.message}`,
});
}
});
// Handle "Copy" button clicks
app.action('copy_response', async ({ ack, body, action, client }) => {
await ack();
const { user } = body;
const responseText = action.value;
// Send ephemeral message with copyable text
await client.chat.postEphemeral({
channel: body.channel.id,
user: user.id,
text: `📋 *Response copied:*\n\`\`\`\n${responseText}\n\`\`\``,
});
});
// Handle "Clear Context" button clicks
app.action('clear_context', async ({ ack, body, action, client }) => {
await ack();
const { user } = body;
const { channel_id, user_id } = JSON.parse(action.value);
const contextKey = `${user_id}-${channel_id}`;
conversationContexts.delete(contextKey);
await client.chat.postEphemeral({
channel: body.channel.id,
user: user.id,
text: '✅ Conversation context cleared!',
});
});
module.exports = { app };
These interactive handlers create a responsive user experience similar to professional ChatGPT app templates.
Creating Advanced Modal Workflows
Modals provide rich form interfaces for complex interactions. Here's a modal manager for ChatGPT prompt configuration:
// modal-manager.js
const { app, callChatGPT } = require('./slack-bot-app');
// Open configuration modal (registered as a message shortcut, so shortcut.channel is available below)
app.shortcut('configure_chatgpt', async ({ shortcut, ack, client }) => {
await ack();
try {
await client.views.open({
trigger_id: shortcut.trigger_id,
view: {
type: 'modal',
callback_id: 'chatgpt_config_modal',
title: { type: 'plain_text', text: '⚙️ ChatGPT Config' },
submit: { type: 'plain_text', text: 'Generate' },
close: { type: 'plain_text', text: 'Cancel' },
blocks: [
{
type: 'input',
block_id: 'prompt_block',
label: { type: 'plain_text', text: 'Your Prompt' },
element: {
type: 'plain_text_input',
action_id: 'prompt_input',
multiline: true,
placeholder: { type: 'plain_text', text: 'Enter your prompt here...' },
},
},
{
type: 'input',
block_id: 'system_block',
label: { type: 'plain_text', text: 'System Instructions (Optional)' },
optional: true,
element: {
type: 'plain_text_input',
action_id: 'system_input',
multiline: true,
placeholder: { type: 'plain_text', text: 'You are a helpful assistant...' },
},
},
{
type: 'input',
block_id: 'temperature_block',
label: { type: 'plain_text', text: 'Temperature (0.0 - 2.0)' },
optional: true,
element: {
type: 'plain_text_input',
action_id: 'temperature_input',
placeholder: { type: 'plain_text', text: '0.7' },
},
},
],
private_metadata: JSON.stringify({
channel_id: shortcut.channel.id,
user_id: shortcut.user.id,
}),
},
});
} catch (error) {
console.error('Error opening modal:', error);
}
});
// Handle modal submission
app.view('chatgpt_config_modal', async ({ ack, body, view, client, logger }) => {
const { channel_id, user_id } = JSON.parse(view.private_metadata);
const prompt = view.state.values.prompt_block.prompt_input.value;
const systemPrompt = view.state.values.system_block.system_input.value || null;
const temperatureStr = view.state.values.temperature_block.temperature_input.value;
// Validate temperature
const temperature = temperatureStr ? parseFloat(temperatureStr) : 0.7;
if (isNaN(temperature) || temperature < 0 || temperature > 2) {
await ack({
response_action: 'errors',
errors: {
temperature_block: 'Temperature must be between 0.0 and 2.0',
},
});
return;
}
await ack();
try {
// Post loading message
const loadingMsg = await client.chat.postMessage({
channel: channel_id,
text: '🤔 Generating response with custom configuration...',
});
// Call ChatGPT with the custom system prompt
// (callChatGPT above uses a fixed temperature; extend its signature if you want to pass `temperature` through)
const response = await callChatGPT(user_id, channel_id, prompt, systemPrompt);
// Update with response
await client.chat.update({
channel: channel_id,
ts: loadingMsg.ts,
text: response.content,
blocks: [
{
type: 'section',
text: { type: 'mrkdwn', text: `*Generated Response:*\n${response.content}` },
},
{
type: 'context',
elements: [
{
type: 'mrkdwn',
text: `⚙️ Custom config | Tokens: ${response.usage.total_tokens}`,
},
],
},
],
});
} catch (error) {
logger.error('Error in modal submission:', error);
await client.chat.postEphemeral({
channel: channel_id,
user: user_id,
text: `❌ Error: ${error.message}`,
});
}
});
module.exports = { app };
Modals enable sophisticated workflows similar to those in instant ChatGPT app builders.
Implementing Thread-Based Conversations
Thread handlers maintain conversation context within Slack threads, creating isolated ChatGPT conversations:
// thread-handler.js
const { app, callChatGPT } = require('./slack-bot-app');
// Listen for messages in threads where bot is mentioned
app.event('message', async ({ event, client, logger, context }) => {
// Ignore bot messages and message subtypes (edits, joins, etc.) to avoid reply loops
if (event.bot_id || event.subtype) return;
// Only process threaded messages mentioning the bot
if (!event.thread_ts || !event.text) return;
const botUserId = context.botUserId; // provided by Bolt; no extra environment variable needed
if (!event.text.includes(`<@${botUserId}>`)) return;
// Remove bot mention from text
const cleanText = event.text.replace(/<@[A-Z0-9]+>/g, '').trim();
if (!cleanText) return;
try {
// thread_ts is folded into the channel portion of the context key below, so each thread keeps its own history
// Post a placeholder reply in the thread, then replace it once ChatGPT responds
const thinkingMsg = await client.chat.postMessage({
channel: event.channel,
thread_ts: event.thread_ts,
text: '💭 Thinking...',
});
// Call ChatGPT (context is thread-specific)
const response = await callChatGPT(
event.user,
`${event.channel}-${event.thread_ts}`,
cleanText
);
// Replace the placeholder with the actual reply
await client.chat.update({
channel: event.channel,
ts: thinkingMsg.ts,
text: response.content,
blocks: [
{
type: 'section',
text: { type: 'mrkdwn', text: response.content },
},
{
type: 'context',
elements: [
{
type: 'mrkdwn',
text: `🧵 Thread conversation | Tokens: ${response.usage.total_tokens}`,
},
],
},
],
});
} catch (error) {
logger.error('Error handling threaded message:', error);
await client.chat.postMessage({
channel: event.channel,
thread_ts: event.thread_ts,
text: `❌ Error: ${error.message}`,
});
}
});
module.exports = { app };
Thread-based conversations provide isolated contexts for different topics, similar to how ChatGPT app analytics track separate conversation flows.
OAuth Implementation for Workspace Installation
Proper OAuth 2.0 implementation enables secure workspace installations. Here's how to implement the complete OAuth flow:
// oauth-handler.js
const { InstallProvider } = require('@slack/oauth');
const { app } = require('./slack-bot-app');
// Initialize OAuth provider
const installer = new InstallProvider({
clientId: process.env.SLACK_CLIENT_ID,
clientSecret: process.env.SLACK_CLIENT_SECRET,
stateSecret: process.env.SLACK_STATE_SECRET,
// Installation store (use database in production)
installationStore: {
storeInstallation: async (installation) => {
// Store installation data
const teamId = installation.team.id;
// In production, save to database
// await db.installations.create({ teamId, data: installation });
console.log(`Installation stored for team ${teamId}`);
return;
},
fetchInstallation: async (installQuery) => {
// Fetch installation data
const { teamId } = installQuery;
// In production, fetch from database
// return await db.installations.findOne({ teamId });
console.log(`Fetching installation for team ${teamId}`);
return null;
},
deleteInstallation: async (installQuery) => {
// Delete installation
const { teamId } = installQuery;
// In production, delete from database
// await db.installations.destroy({ where: { teamId } });
console.log(`Installation deleted for team ${teamId}`);
return;
},
},
});
// OAuth initiation route
// (these routes assume the Bolt app was created with an ExpressReceiver, whose Express instance
// is exposed as `receiver.app`; with Bolt's default receiver, register them via `customRoutes` instead)
app.receiver.app.get('/slack/install', async (req, res) => {
try {
const url = await installer.generateInstallUrl({
scopes: [
'chat:write',
'commands',
'channels:history',
'groups:history',
'im:history',
'mpim:history',
],
userScopes: [],
});
res.redirect(url);
} catch (error) {
console.error('Error generating install URL:', error);
res.status(500).send('Error initiating installation');
}
});
// OAuth callback route
// handleCallback exchanges the code, stores the installation, and writes the HTTP response itself;
// success and failure behavior is customized through callback options rather than a return value
app.receiver.app.get('/slack/oauth_redirect', async (req, res) => {
await installer.handleCallback(req, res, {
success: async (installation, installOptions, cbReq, cbRes) => {
// Send a welcome DM to the installing user
const { WebClient } = require('@slack/web-api');
const client = new WebClient(installation.bot.token);
await client.chat.postMessage({
channel: installation.user.id,
text: '🎉 ChatGPT Bot successfully installed! Use `/chatgpt-help` to get started.',
});
cbRes.writeHead(200);
cbRes.end('✅ Installation successful! You can close this window and return to Slack.');
},
failure: (error, installOptions, cbReq, cbRes) => {
console.error('Error handling OAuth callback:', error);
cbRes.writeHead(500);
cbRes.end('❌ Installation failed. Please try again.');
},
});
});
module.exports = { installer };
This OAuth implementation provides secure workspace authentication, essential for multi-tenant ChatGPT apps.
Rate Limiting and Error Handling Best Practices
Production Slack bots require robust rate limiting and error handling:
Rate Limiting Strategies
- Per-User Rate Limits: Prevent individual users from overwhelming your ChatGPT API quota
- Per-Workspace Limits: Control costs across entire Slack workspaces
- Adaptive Throttling: Slow down requests during high load periods
- Token Bucket Algorithm: Allow burst traffic while maintaining average limits (a sketch follows this list)
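For illustration, here is a minimal token bucket sketch; the capacity and refill rate are placeholder values you would tune to your OpenAI quota:
// token-bucket.js - a simple per-user token bucket (values are illustrative)
const BUCKET_CAPACITY = 10;      // maximum burst size
const REFILL_PER_SECOND = 0.5;   // sustained rate: one request every two seconds
const buckets = new Map();
function allowRequest(userId) {
const now = Date.now();
const bucket = buckets.get(userId) || { tokens: BUCKET_CAPACITY, lastRefill: now };
// Refill proportionally to elapsed time, capped at capacity
const elapsedSeconds = (now - bucket.lastRefill) / 1000;
bucket.tokens = Math.min(BUCKET_CAPACITY, bucket.tokens + elapsedSeconds * REFILL_PER_SECOND);
bucket.lastRefill = now;
if (bucket.tokens < 1) {
buckets.set(userId, bucket);
return false; // over the limit
}
bucket.tokens -= 1;
buckets.set(userId, bucket);
return true;
}
module.exports = { allowRequest };
In the rate-limiting middleware shown earlier, `allowRequest(userId)` could replace the fixed-window counter, allowing short bursts without raising the sustained rate.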
Error Handling Patterns
// Graceful error handling with user-friendly messages
// (for the status checks below to match, callChatGPT should rethrow the original API error rather than a generic one)
async function safeCallChatGPT(userId, channelId, message) {
try {
return await callChatGPT(userId, channelId, message);
} catch (error) {
if (error.response?.status === 429) {
throw new Error('ChatGPT is experiencing high demand. Please try again in a moment.');
} else if (error.response?.status === 401) {
throw new Error('Authentication error. Please contact your workspace admin.');
} else if (error.code === 'ECONNABORTED') {
throw new Error('Request timed out. Please try a shorter message.');
} else {
throw new Error('An unexpected error occurred. Please try again.');
}
}
}
Event Subscriptions and Real-Time Processing
Event subscriptions enable real-time bot interactions. Configure these events in your Slack app settings:
- `message.channels` - Process channel messages
- `message.groups` - Handle private channel messages
- `message.im` - Respond to direct messages
- `app_mention` - React when the bot is mentioned
- `team_join` - Welcome new team members (see the example below)
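As an example of an event the earlier modules don't cover, here is a sketch of a `team_join` welcome handler built on the exported app (the welcome text is illustrative; it assumes the team_join event subscription and the im:write scope):
const { app } = require('./slack-bot-app');
// Welcome new members with a DM when they join the workspace
app.event('team_join', async ({ event, client, logger }) => {
try {
// Open (or reuse) a DM channel with the new member, then post the welcome message
const dm = await client.conversations.open({ users: event.user.id });
await client.chat.postMessage({
channel: dm.channel.id,
text: `👋 Welcome <@${event.user.id}>! Try \`/chatgpt <your question>\` or \`/chatgpt-help\` to get started.`,
});
} catch (error) {
logger.error('Error welcoming new member:', error);
}
});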
Learn more about building scalable ChatGPT applications with event-driven architectures.
Deployment and Production Considerations
Environment Configuration
# .env
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_SIGNING_SECRET=your-signing-secret
SLACK_APP_TOKEN=xapp-your-app-token
SLACK_CLIENT_ID=your-client-id
SLACK_CLIENT_SECRET=your-client-secret
SLACK_STATE_SECRET=your-state-secret
OPENAI_API_KEY=sk-your-openai-key
NODE_ENV=production
PORT=3000
LOG_LEVEL=info
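A small fail-fast check at startup catches missing configuration before the bot boots. This sketch assumes the `dotenv` package for loading the .env file in development:
// config-check.js - exit early if required environment variables are missing
require('dotenv').config(); // load .env in development (assumes the dotenv package is installed)
const REQUIRED_VARS = [
'SLACK_BOT_TOKEN',
'SLACK_SIGNING_SECRET',
'OPENAI_API_KEY',
];
const missing = REQUIRED_VARS.filter((name) => !process.env[name]);
if (missing.length > 0) {
console.error(`Missing required environment variables: ${missing.join(', ')}`);
process.exit(1);
}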
Monitoring and Logging
Implement comprehensive logging for production environments:
const winston = require('winston');
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' }),
],
});
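If you also want Bolt's own internal logging to flow through winston, one option is a thin adapter implementing Bolt's Logger interface (debug, info, warn, and error methods plus setLevel, getLevel, and setName); a sketch continuing from the `logger` defined above:
// Continuing from the winston `logger` above: an adapter so Bolt's internal logs flow through winston
const boltLogger = {
debug: (...msgs) => logger.debug(msgs.join(' ')),
info: (...msgs) => logger.info(msgs.join(' ')),
warn: (...msgs) => logger.warn(msgs.join(' ')),
error: (...msgs) => logger.error(msgs.join(' ')),
setLevel: (level) => { logger.level = level; }, // Bolt's level names match winston's
getLevel: () => logger.level,
setName: () => {}, // winston has no per-logger name; safe to ignore
};
// Pass it when constructing the app: new App({ token, signingSecret, logger: boltLogger })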
Scaling Strategies
For high-traffic Slack bots:
- Horizontal Scaling: Deploy multiple instances behind a load balancer
- Queue-Based Processing: Use Redis or RabbitMQ for async message handling (a minimal sketch follows this list)
- Connection Pooling: Reuse OpenAI API connections
- Caching: Cache frequent responses to reduce API calls
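To illustrate the queue-based pattern, here is a deliberately naive in-memory sketch: the handler acknowledges Slack immediately and a worker loop drains ChatGPT calls one at a time. In production a Redis-backed queue (for example BullMQ or RabbitMQ) would replace the array so jobs survive restarts:
// A naive in-memory job queue; swap in a durable queue for production
const { callChatGPT } = require('./slack-bot-app');
const jobQueue = [];
let draining = false;
function enqueueChatJob(job) {
jobQueue.push(job);
if (!draining) drainQueue();
}
async function drainQueue() {
draining = true;
while (jobQueue.length > 0) {
const { userId, channelId, text, client } = jobQueue.shift();
try {
const response = await callChatGPT(userId, channelId, text);
await client.chat.postMessage({ channel: channelId, text: response.content });
} catch (error) {
console.error('Queued job failed:', error);
}
}
draining = false;
}
// In a command handler: await ack(); enqueueChatJob({ userId, channelId, text, client });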
Testing Your Slack Bot Integration
Before deploying to production:
- Test Socket Mode: Use Socket Mode for local development without exposing public endpoints
- Validate Event Signatures: Ensure all incoming requests are genuinely from Slack
- Mock OpenAI Responses: Test error handling without consuming API quota (see the Jest sketch below)
- Load Testing: Simulate concurrent users to verify rate limiting works
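One way to exercise handlers without real API calls, assuming Jest as the test runner, is to stub the exported callChatGPT and capture the handler that slash-commands.js registers; a sketch:
// __tests__/chatgpt-command.test.js (assumes Jest)
jest.mock('../slack-bot-app', () => ({
app: { command: jest.fn(), action: jest.fn(), view: jest.fn(), event: jest.fn(), shortcut: jest.fn(), use: jest.fn() },
callChatGPT: jest.fn().mockResolvedValue({ content: 'stubbed reply', usage: { total_tokens: 42 } }),
conversationContexts: new Map(),
}));
require('../slash-commands'); // importing registers the handlers on the mocked app
const { app } = require('../slack-bot-app');
test('/chatgpt replies with the stubbed ChatGPT response', async () => {
// Grab the handler that was registered for the /chatgpt command
const handler = app.command.mock.calls.find(([name]) => name === '/chatgpt')[1];
const client = {
chat: {
postMessage: jest.fn().mockResolvedValue({ ts: '123.456' }),
update: jest.fn().mockResolvedValue({}),
postEphemeral: jest.fn().mockResolvedValue({}),
},
};
const ack = jest.fn();
await handler({ command: { text: 'hello', user_id: 'U1', channel_id: 'C1' }, ack, client, logger: { info: jest.fn(), error: jest.fn() } });
expect(ack).toHaveBeenCalled();
expect(client.chat.update).toHaveBeenCalledWith(expect.objectContaining({ text: 'stubbed reply' }));
});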
Explore our ChatGPT app builder features to accelerate your development process.
Advanced Integration Patterns
Multi-Channel Broadcasting
Broadcast ChatGPT responses across multiple channels:
// `client` is a Bolt client or WebClient instance available in the calling scope
async function broadcastToChannels(channels, message) {
const promises = channels.map(channel =>
client.chat.postMessage({
channel: channel,
text: message,
})
);
await Promise.allSettled(promises); // allSettled so one failing channel does not abort the rest
}
Scheduled Messages with ChatGPT
Combine Slack's scheduled messages with ChatGPT-generated content:
const scheduleTime = Math.floor(Date.now() / 1000) + 3600; // 1 hour from now
await client.chat.scheduleMessage({
channel: channelId,
text: generatedContent,
post_at: scheduleTime,
});
Integration with External APIs
Chain ChatGPT responses with external data:
// Get ChatGPT analysis of external data
const apiData = await fetch('https://api.example.com/data').then(r => r.json());
const analysis = await callChatGPT(
userId,
channelId,
`Analyze this data: ${JSON.stringify(apiData)}`
);
Common Pitfalls and Solutions
Issue: Rate Limit Errors
Solution: Implement exponential backoff and request queuing:
async function retryWithBackoff(fn, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (i === maxRetries - 1) throw error;
await new Promise(resolve => setTimeout(resolve, Math.pow(2, i) * 1000));
}
}
}
Issue: Context Management
Solution: Use Redis for distributed context storage:
const redis = require('redis');
const client = redis.createClient();
client.connect(); // node-redis v4 requires connecting before issuing commands
async function saveContext(key, messages) {
await client.setEx(key, 3600, JSON.stringify(messages)); // 1 hour TTL
}
async function getContext(key) {
const data = await client.get(key);
return data ? JSON.parse(data) : [];
}
Security Best Practices
- Validate All Signatures: Always verify Slack request signatures
- Environment Variables: Never hardcode tokens or secrets
- Input Sanitization: Clean user input before sending to ChatGPT (see the sketch after this list)
- Scope Minimization: Request only necessary OAuth scopes
- Token Rotation: Regularly rotate API keys and tokens
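For input sanitization, a small helper like the sketch below (the 2,000-character cap is an arbitrary example) strips Slack markup and control characters before the text reaches ChatGPT:
// Strip Slack-specific markup and cap length before forwarding user input to ChatGPT
function sanitizeForChatGPT(rawText, maxLength = 2000) {
return rawText
.replace(/<@[A-Z0-9]+>/g, '')                    // user mentions
.replace(/<#[A-Z0-9]+\|?[^>]*>/g, '')            // channel references
.replace(/<(https?:\/\/[^|>]+)\|[^>]*>/g, '$1')  // keep the URL, drop the label
.replace(/[\u0000-\u001F\u007F]/g, '')           // control characters
.trim()
.slice(0, maxLength);
}
// Usage: const cleanText = sanitizeForChatGPT(command.text);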
Learn about pricing and security features for enterprise deployments.
Conclusion
Building Slack bots that integrate with ChatGPT apps requires understanding Bolt framework fundamentals, implementing proper OAuth flows, and handling complex interactive workflows. By following the patterns and code examples in this guide, you can create production-ready Slack bots that leverage ChatGPT's conversational AI capabilities while maintaining security, performance, and user experience best practices.
Start building your ChatGPT-powered Slack bot today with MakeAIHQ's no-code platform, and deploy to the ChatGPT App Store in 48 hours without writing complex integration code.
Frequently Asked Questions
Q: How do I handle Slack rate limits when using ChatGPT?
A: Implement per-user rate limiting (for example, 20 requests per minute), use exponential backoff for retries, and queue requests during high-traffic periods.
Q: Can I use Slack modals with ChatGPT responses?
A: Yes, use the modal submission handler to collect user input, then call ChatGPT and display the results in the modal's view update.
Q: How do I maintain conversation context across multiple Slack threads?
A: Use a unique context key combining userId-channelId-threadTs and store conversation history in Redis or a database.
Q: What's the best way to deploy a production Slack bot?
A: Use Socket Mode for development, then deploy to a cloud platform (AWS Lambda, Google Cloud Functions, or Heroku) with proper environment variable management.
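For the AWS Lambda option specifically, Bolt ships an AwsLambdaReceiver; a minimal sketch of the handler wiring (the file name and deployment framework are up to you):
// lambda.js - run the Bolt app behind API Gateway / AWS Lambda
const { App, AwsLambdaReceiver } = require('@slack/bolt');
const awsLambdaReceiver = new AwsLambdaReceiver({
signingSecret: process.env.SLACK_SIGNING_SECRET,
});
const app = new App({
token: process.env.SLACK_BOT_TOKEN,
receiver: awsLambdaReceiver,
});
// Register commands and events on `app` here, as in the earlier modules
module.exports.handler = async (event, context, callback) => {
const handler = await awsLambdaReceiver.start();
return handler(event, context, callback);
};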
Q: How do I handle OpenAI API errors gracefully?
A: Catch specific error codes (429, 401, 500), provide user-friendly error messages, and implement retry logic with exponential backoff.
Ready to build your Slack bot integration? Start with MakeAIHQ's templates and deploy your first ChatGPT-powered Slack app today.