Performance Optimization
Build lightning-fast bots that can handle thousands of users. Learn caching strategies, database optimization, and async processing techniques.
Key Performance Metrics
Response Time
< 200ms
Time from user message to bot response. Users expect near-instant replies.
Throughput
> 1000 msg/s
Messages your bot can handle per second. Critical during peak usage.
Memory Usage
< 512MB
RAM consumption under normal operation. Affects hosting costs and stability.
CPU Utilization
< 70%
Processor usage during peak load. High CPU utilization can cause delayed replies.
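To know whether you are hitting the response-time target, time each update as it flows through the bot. A minimal sketch, assuming a Telegraf-style bot.use() middleware like the examples later in this guide; where you send the measurement (logs, metrics backend) is up to you.
// Response-time tracking middleware (sketch)
bot.use(async (ctx, next) => {
  const start = Date.now();
  await next(); // run all downstream handlers
  const elapsed = Date.now() - start;
  if (elapsed > 200) {
    console.warn(`Slow update ${ctx.update.update_id}: ${elapsed}ms`);
  }
});
Note that this measures only your handler's processing time; network latency and Telegram's own delivery add to what the user actually experiences.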
Optimization Strategies
Caching Strategies
Response Caching
Cache frequently requested data to reduce database queries and API calls.
// Redis caching example
const cacheKey = `user:${userId}:preferences`;
const cached = await redis.get(cacheKey);
let preferences;
if (cached) {
  // Cache hit: the value was stored as JSON, so parse it back
  preferences = JSON.parse(cached);
} else {
  // Cache miss: load from the database and cache it for one hour
  preferences = await db.getUserPreferences(userId);
  await redis.setex(cacheKey, 3600, JSON.stringify(preferences));
}
Session State Management
Store user session data in memory for quick access during conversations.
// Session middleware
const sessions = new Map();
bot.use((ctx, next) => {
  const userId = ctx.from?.id;
  if (!userId) return next(); // some updates (e.g. channel posts) have no sender
  ctx.session = sessions.get(userId) || {};
  return next().then(() => {
    sessions.set(userId, ctx.session);
  });
});
Database Optimization
Index Your Queries
Add database indexes for frequently queried columns to speed up lookups.
-- PostgreSQL index examples
CREATE INDEX idx_users_telegram_id ON users(telegram_id);
CREATE INDEX idx_messages_chat_id ON messages(chat_id);
CREATE INDEX idx_orders_user_id_status ON orders(user_id, status);
Batch Operations
Group multiple database operations into a single batch insert or transaction instead of issuing them one by one; a transaction sketch follows the example below.
// Instead of individual inserts
for (const msg of messages) {
  await db.insert('messages', msg); // Slow!
}
// Use batch insert
await db.insertMany('messages', messages); // Fast!
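When the grouped writes must also succeed or fail together, wrap them in an explicit transaction. A minimal sketch, assuming the node-postgres (pg) driver and a hypothetical messages(chat_id, text) table; the generic db helper above is not a real client, so adapt the pattern to whatever driver you use. Here pool is a shared pg Pool like the one sketched under Connection Pooling below.
// Transaction sketch with node-postgres (assumed driver)
const client = await pool.connect();
try {
  await client.query('BEGIN');
  for (const msg of messages) {
    await client.query(
      'INSERT INTO messages (chat_id, text) VALUES ($1, $2)',
      [msg.chatId, msg.text]
    );
  }
  await client.query('COMMIT'); // one commit instead of one per row
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release(); // return the connection to the pool
}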
Async Processing
Message Queues
Offload heavy processing to background workers using message queues.
// Using BullMQ for background jobs
import { Queue } from 'bullmq';
const imageQueue = new Queue('image-processing');
bot.on('photo', async (ctx) => {
  await ctx.reply('Processing your image...');
  // photo[] is ordered smallest to largest; queue the largest size
  const photo = ctx.message.photo.at(-1);
  await imageQueue.add('process', {
    fileId: photo.file_id,
    userId: ctx.from.id
  });
});
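The snippet above only enqueues work; a separate worker process drains the queue so the bot process stays responsive. A minimal consumer sketch, assuming BullMQ's Worker and a hypothetical processImage() helper:
// worker.js - runs as its own process (sketch)
import { Worker } from 'bullmq';

const worker = new Worker('image-processing', async (job) => {
  const { fileId, userId } = job.data;
  await processImage(fileId, userId); // hypothetical heavy-lifting function
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err);
});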
Non-blocking I/O
Use async/await properly to prevent blocking the event loop.
// Parallel execution for independent operations
const [user, settings, stats] = await Promise.all([
  getUserById(userId),
  getSettings(userId),
  getStats(userId)
]);
// Instead of sequential
const user = await getUserById(userId);
const settings = await getSettings(userId);
const stats = await getStats(userId);
Network Optimization
Connection Pooling
Reuse database connections instead of creating new ones for each request.
// Prisma connection pooling
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // Connection pool settings go on the connection string:
  // ?connection_limit=20&pool_timeout=30
}
Webhook vs Polling
Use webhooks in production for better performance and lower latency.
// Development - Long polling
if (process.env.NODE_ENV === 'development') {
  bot.launch(); // Telegraf starts long polling with launch()
}
// Production - Webhooks
if (process.env.NODE_ENV === 'production') {
  app.use(bot.webhookCallback('/webhook'));
  bot.telegram.setWebhook(WEBHOOK_URL);
}
Quick Performance Tips
Minimize API calls
Combine multiple Telegram API calls where possible. Use sendMediaGroup for multiple photos.
Use connection pooling
Maintain a pool of database connections to avoid connection overhead.
Set appropriate timeouts
Configure timeouts for external services to prevent hanging requests.
Implement rate limiting
Protect your bot from abuse and stay within Telegram API limits.
Monitor and profile
Use APM tools to identify bottlenecks and track performance over time.
Cache static content
Store file_ids for media you send frequently instead of uploading each time (see the sketch after this list).
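A minimal sketch of the file_id trick, assuming a Telegraf-style ctx and a hypothetical welcome image bundled with the bot: the first send uploads the file, and the Message that Telegram returns carries file_ids that later sends can reuse without re-uploading.
// Re-use Telegram file_ids for static media (sketch)
const fileIdCache = new Map();

async function sendWelcomeImage(ctx) {
  const cached = fileIdCache.get('welcome');
  if (cached) {
    return ctx.replyWithPhoto(cached); // send by file_id: no upload
  }
  // First send uploads the file from disk
  const sent = await ctx.replyWithPhoto({ source: 'assets/welcome.jpg' });
  // photo[] lists sizes smallest to largest; cache the largest file_id
  fileIdCache.set('welcome', sent.photo.at(-1).file_id);
  return sent;
}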
Anti-Patterns to Avoid
Synchronous File Processing
// Bad: Blocks the event loop
const data = fs.readFileSync('large-file.json');
const result = processData(JSON.parse(data));
// Good: Non-blocking
const data = await fs.promises.readFile('large-file.json');
const result = await processDataAsync(JSON.parse(data));
N+1 Query Problem
// Bad: N+1 queries
const users = await getUsers();
for (const user of users) {
  user.orders = await getOrdersByUser(user.id); // one extra query per user
}
// Good: Single query with a join
const users = await getUsersWithOrders(); // uses a JOIN or an ORM include
Memory Leaks
// Bad: Growing array
const allMessages = [];
bot.on('message', (ctx) => {
  allMessages.push(ctx.message); // never cleaned up, so it grows without bound
});
// Good: Use a bounded cache or a database
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 1000 }); // oldest entries are evicted beyond 1000
bot.on('message', (ctx) => {
  cache.set(ctx.message.message_id, ctx.message);
});