Elysia and Hono.js are two of the fastest and most developer-friendly web frameworks in the TypeScript ecosystem. When combined with flashQ's high-performance job queue, you get a stack that can handle millions of background jobs while maintaining sub-millisecond API response times.
In this comprehensive guide, we'll build production-ready background job systems with both frameworks, showing you patterns that scale from prototype to millions of users.
Why Elysia & Hono.js with flashQ?
Both frameworks share a philosophy of performance and developer experience:
| Feature | Elysia | Hono.js | flashQ |
|---|---|---|---|
| Runtime | Bun-native | Multi-runtime | Rust-powered |
| Performance | ~2.5M req/sec | ~1.5M req/sec | ~1.9M jobs/sec |
| Type Safety | End-to-end | Full TypeScript | Typed SDK |
| DX | Excellent | Excellent | BullMQ-compatible |
Together, they form a stack where your API responds instantly while heavy work happens asynchronously in the background.
Architecture Overview
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   Elysia/Hono   │      │     flashQ      │      │     Worker      │
│      API        │─────▶│     Server      │◀─────│    Process      │
│                 │      │                 │      │                 │
│  - Routes       │      │  - Job Queue    │      │  - AI Tasks     │
│  - Validation   │      │  - Persistence  │      │  - Emails       │
│  - Auth         │      │  - Scheduling   │      │  - Processing   │
└─────────────────┘      └─────────────────┘      └─────────────────┘
         │                                                │
         └────────────────────────────────────────────────┘
                     Same Bun process (optional)
The beauty of Bun is that you can run both your API and workers in the same process for simplicity, or separate them for scale.
Elysia Integration
Project Setup
# Create new Elysia project
bun create elysia flashq-elysia
cd flashq-elysia
# Install flashQ
bun add flashq
Queue Configuration
// src/queue.ts
import { FlashQ } from 'flashq';
// Singleton client instance
let client: FlashQ | null = null;
export async function getClient(): Promise<FlashQ> {
if (!client) {
client = new FlashQ({
host: process.env.FLASHQ_HOST || 'localhost',
port: parseInt(process.env.FLASHQ_PORT || '6789'),
token: process.env.FLASHQ_TOKEN,
});
await client.connect();
}
return client;
}
// Queue names as constants for type safety
export const QUEUES = {
EMAIL: 'email',
AI_PROCESSING: 'ai-processing',
WEBHOOKS: 'webhooks',
NOTIFICATIONS: 'notifications',
} as const;
// Type-safe job data interfaces
export interface EmailJob {
to: string;
subject: string;
template: string;
data: Record<string, any>;
}
export interface AIJob {
prompt: string;
model: 'gpt-4' | 'gpt-4-turbo' | 'claude-3';
userId: string;
maxTokens?: number;
}
Elysia Plugin for flashQ
// src/plugins/flashq.ts
import { Elysia } from 'elysia';
import { getClient, QUEUES } from '../queue';
export const flashqPlugin = new Elysia({ name: 'flashq' })
.decorate('queue', {
// Push a job to any queue
async push<T>(queue: string, data: T, options?: any) {
const client = await getClient();
return client.push(queue, data, options);
},
// Get job status
async getJob(jobId: string) {
const client = await getClient();
return client.getJob(jobId);
},
// Get job result (for finished() pattern)
async waitForResult(jobId: string, timeout = 30000) {
const client = await getClient();
return client.finished(jobId, timeout);
},
// Queue stats
async stats() {
const client = await getClient();
return client.stats();
},
QUEUES,
});
API Routes with Elysia
// src/index.ts
import { Elysia, t } from 'elysia';
import { getClient } from './queue';
import { flashqPlugin } from './plugins/flashq';
const app = new Elysia()
.use(flashqPlugin)
// Health check with queue stats
.get('/health', async ({ queue }) => {
const stats = await queue.stats();
return {
status: 'healthy',
queue: stats,
timestamp: new Date().toISOString(),
};
})
// Send email endpoint
.post('/api/email', async ({ body, queue }) => {
const job = await queue.push(queue.QUEUES.EMAIL, body, {
attempts: 5,
backoff: 5000,
});
return {
success: true,
jobId: job.id,
message: 'Email queued for delivery',
};
}, {
body: t.Object({
to: t.String({ format: 'email' }),
subject: t.String({ minLength: 1 }),
template: t.String(),
data: t.Record(t.String(), t.Any()),
}),
})
// AI generation with sync response option
.post('/api/generate', async ({ body, query, queue }) => {
const job = await queue.push(queue.QUEUES.AI_PROCESSING, {
prompt: body.prompt,
model: body.model || 'gpt-4-turbo',
userId: body.userId,
maxTokens: body.maxTokens,
}, {
priority: body.priority || 5,
timeout: 120000, // 2 min for AI
});
// Optional: wait for result (sync mode)
if (query.sync === 'true') {
const result = await queue.waitForResult(job.id, 60000);
return { success: true, result };
}
return {
success: true,
jobId: job.id,
statusUrl: `/api/jobs/${job.id}`,
};
}, {
body: t.Object({
prompt: t.String({ minLength: 1 }),
model: t.Optional(t.Union([
t.Literal('gpt-4'),
t.Literal('gpt-4-turbo'),
t.Literal('claude-3'),
])),
userId: t.String(),
maxTokens: t.Optional(t.Number({ minimum: 1, maximum: 4096 })),
priority: t.Optional(t.Number({ minimum: 1, maximum: 100 })),
}),
query: t.Object({
sync: t.Optional(t.String()),
}),
})
// Get job status
.get('/api/jobs/:id', async ({ params, queue }) => {
const job = await queue.getJob(params.id);
if (!job) {
return { error: 'Job not found' };
}
return job;
})
// Batch job creation
.post('/api/batch/emails', async ({ body, queue }) => {
const client = await getClient();
const jobs = await client.pushBatch(
queue.QUEUES.EMAIL,
body.emails.map((email: any) => ({
data: email,
options: { attempts: 3 },
}))
);
return {
success: true,
count: jobs.length,
jobIds: jobs.map(j => j.id),
};
}, {
body: t.Object({
emails: t.Array(t.Object({
to: t.String({ format: 'email' }),
subject: t.String(),
template: t.String(),
data: t.Record(t.String(), t.Any()),
})),
}),
})
.listen(3000);
console.log(`Elysia running at ${app.server?.hostname}:${app.server?.port}`);
Elysia's schema validation using t provides end-to-end type safety. Your TypeScript types are automatically inferred from your schema definitions.
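The email route above retries with `attempts: 5, backoff: 5000`. Whether flashQ spaces retries with a fixed or an exponential delay depends on its backoff configuration; as a rough sketch, here is what an exponential doubling schedule would look like for those options (the doubling strategy and cap are our assumption, not flashQ's documented default):

```typescript
// Compute retry delays for `attempts` total tries from a base delay (ms).
// Exponential doubling: base, 2*base, 4*base, ... capped at maxMs.
function retryDelays(attempts: number, baseMs: number, maxMs = 300_000): number[] {
  const delays: number[] = [];
  // attempts includes the first try, so there are attempts - 1 retries.
  for (let i = 0; i < attempts - 1; i++) {
    delays.push(Math.min(baseMs * 2 ** i, maxMs));
  }
  return delays;
}

// 5 attempts with a 5s base => 4 retries: 5s, 10s, 20s, 40s.
```

Spacing retries out like this keeps a transient outage in your email provider from burning through all five attempts in a few seconds.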
Hono.js Integration
Project Setup
# Create new Hono project
bun create hono@latest flashq-hono
cd flashq-hono
# Install flashQ and Zod for validation
bun add flashq zod @hono/zod-validator
Queue Middleware for Hono
// src/middleware/queue.ts
import { createMiddleware } from 'hono/factory';
import { FlashQ } from 'flashq';
let client: FlashQ | null = null;
export async function getClient(): Promise<FlashQ> {
if (!client) {
client = new FlashQ({
host: process.env.FLASHQ_HOST || 'localhost',
port: parseInt(process.env.FLASHQ_PORT || '6789'),
token: process.env.FLASHQ_TOKEN,
});
await client.connect();
}
return client;
}
export const QUEUES = {
EMAIL: 'email',
AI: 'ai-processing',
WEBHOOKS: 'webhooks',
} as const;
type QueueName = typeof QUEUES[keyof typeof QUEUES];
export type QueueVariables = {
queue: {
push: <T>(name: QueueName, data: T, options?: any) => Promise<any>;
getJob: (id: string) => Promise<any>;
finished: (id: string, timeout?: number) => Promise<any>;
stats: () => Promise<any>;
};
};
export const queueMiddleware = createMiddleware<{ Variables: QueueVariables }>(
async (c, next) => {
const flashq = await getClient();
c.set('queue', {
push: (name, data, options) => flashq.push(name, data, options),
getJob: (id) => flashq.getJob(id),
finished: (id, timeout) => flashq.finished(id, timeout),
stats: () => flashq.stats(),
});
await next();
}
);
Hono Routes
// src/index.ts
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { logger } from 'hono/logger';
import { zValidator } from '@hono/zod-validator';
import { z } from 'zod';
import { queueMiddleware, QUEUES, type QueueVariables } from './middleware/queue';
const app = new Hono<{ Variables: QueueVariables }>();
// Global middleware
app.use('*', cors());
app.use('*', logger());
app.use('/api/*', queueMiddleware);
// Schemas
const emailSchema = z.object({
to: z.string().email(),
subject: z.string().min(1),
template: z.string(),
data: z.record(z.any()),
});
const generateSchema = z.object({
prompt: z.string().min(1),
model: z.enum(['gpt-4', 'gpt-4-turbo', 'claude-3']).default('gpt-4-turbo'),
userId: z.string(),
maxTokens: z.number().min(1).max(4096).optional(),
priority: z.number().min(1).max(100).default(5),
});
// Health endpoint
app.get('/health', async (c) => {
const queue = c.get('queue');
const stats = queue ? await queue.stats() : null;
return c.json({
status: 'healthy',
queue: stats,
timestamp: new Date().toISOString(),
});
});
// Email endpoint
app.post(
'/api/email',
zValidator('json', emailSchema),
async (c) => {
const body = c.req.valid('json');
const queue = c.get('queue');
const job = await queue.push(QUEUES.EMAIL, body, {
attempts: 5,
backoff: 5000,
});
return c.json({
success: true,
jobId: job.id,
message: 'Email queued',
});
}
);
// AI Generation endpoint
app.post(
'/api/generate',
zValidator('json', generateSchema),
async (c) => {
const body = c.req.valid('json');
const queue = c.get('queue');
const sync = c.req.query('sync') === 'true';
const job = await queue.push(QUEUES.AI, body, {
priority: body.priority,
timeout: 120000,
});
// Sync mode: wait for result
if (sync) {
const result = await queue.finished(job.id, 60000);
return c.json({ success: true, result });
}
return c.json({
success: true,
jobId: job.id,
statusUrl: `/api/jobs/${job.id}`,
});
}
);
// Job status endpoint
app.get('/api/jobs/:id', async (c) => {
const queue = c.get('queue');
const job = await queue.getJob(c.req.param('id'));
if (!job) {
return c.json({ error: 'Job not found' }, 404);
}
return c.json(job);
});
// Webhook handler with idempotency
app.post('/api/webhooks/:provider', async (c) => {
const provider = c.req.param('provider');
const body = await c.req.json();
const queue = c.get('queue');
// Use webhook ID for idempotency
const webhookId = body.id || c.req.header('x-webhook-id');
const job = await queue.push(QUEUES.WEBHOOKS, {
provider,
payload: body,
}, {
jobId: `webhook-${provider}-${webhookId}`, // Idempotent
attempts: 3,
});
return c.json({ received: true, jobId: job.id });
});
export default {
port: 3000,
fetch: app.fetch,
};
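The webhook route above derives an idempotent job id from the provider and the provider-supplied webhook id, so redelivered webhooks dedupe to a single job. When a provider sends no id at all, hashing the payload is one reasonable fallback; the hashing part is our addition, not something the route above does:

```typescript
import { createHash } from 'node:crypto';

// Build a deterministic job id so webhook redeliveries map to the same job.
// Prefers the provider-supplied id; falls back to a hash of the payload.
function webhookJobId(
  provider: string,
  webhookId: string | undefined,
  payload: unknown,
): string {
  const key =
    webhookId ??
    createHash('sha256')
      .update(JSON.stringify(payload))
      .digest('hex')
      .slice(0, 16);
  return `webhook-${provider}-${key}`;
}
```

Note the fallback only dedupes byte-identical redeliveries; providers that mutate the payload between attempts (timestamps, delivery counters) still need a real id.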
Worker Implementation
The worker can be the same for both Elysia and Hono since it uses the flashQ SDK directly.
// worker/index.ts
import { Worker } from 'flashq';
import OpenAI from 'openai';
import { Resend } from 'resend';
const openai = new OpenAI();
const resend = new Resend(process.env.RESEND_API_KEY);
// Email Worker
const emailWorker = new Worker('email', async (job) => {
const { to, subject, template, data } = job.data;
// Render template -- renderTemplate is your own helper; plug in any engine
const html = renderTemplate(template, data);
const { data: sent, error } = await resend.emails.send({
from: 'noreply@yourdomain.com',
to,
subject,
html,
});
if (error) throw new Error(error.message);
return { emailId: sent?.id, sentAt: new Date().toISOString() };
}, {
connection: {
host: process.env.FLASHQ_HOST || 'localhost',
port: parseInt(process.env.FLASHQ_PORT || '6789'),
},
concurrency: 10,
});
// AI Worker
const aiWorker = new Worker('ai-processing', async (job) => {
const { prompt, model, maxTokens, userId } = job.data;
await job.updateProgress(10, 'Connecting to AI...');
let response;
if (model.startsWith('gpt')) {
response = await openai.chat.completions.create({
model,
messages: [{ role: 'user', content: prompt }],
max_tokens: maxTokens || 1000,
});
await job.updateProgress(90, 'Processing complete');
return {
content: response.choices[0].message.content,
model,
tokens: response.usage?.total_tokens,
userId,
};
}
// Claude handling would go here
throw new Error(`Unsupported model: ${model}`);
}, {
connection: {
host: process.env.FLASHQ_HOST || 'localhost',
port: parseInt(process.env.FLASHQ_PORT || '6789'),
},
concurrency: 3, // Limit AI concurrency
});
// Webhook Worker
const webhookWorker = new Worker('webhooks', async (job) => {
const { provider, payload } = job.data;
// handleStripeWebhook / handleGithubWebhook are your own integration handlers
switch (provider) {
case 'stripe':
return await handleStripeWebhook(payload);
case 'github':
return await handleGithubWebhook(payload);
default:
throw new Error(`Unknown provider: ${provider}`);
}
}, {
connection: {
host: process.env.FLASHQ_HOST || 'localhost',
port: parseInt(process.env.FLASHQ_PORT || '6789'),
},
concurrency: 20,
});
// Event handlers
[emailWorker, aiWorker, webhookWorker].forEach(worker => {
worker.on('completed', (job, result) => {
console.log(`[${worker.name}] Job ${job.id} completed`);
});
worker.on('failed', (job, error) => {
console.error(`[${worker.name}] Job ${job.id} failed: ${error.message}`);
});
});
console.log('Workers started: email, ai-processing, webhooks');
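In production you will also want these workers to drain in-flight jobs on SIGTERM instead of dying mid-job. The sketch below assumes each worker exposes a `close()` method that resolves once active jobs finish (BullMQ-style; flashQ advertises BullMQ compatibility, but check the SDK for the exact API):

```typescript
// Anything with an async close() -- workers, clients, servers.
interface Closable {
  close(): Promise<void>;
}

// Wait for every worker to close, but never longer than timeoutMs.
// Resolves true on a clean drain, false if the deadline hit first.
async function shutdown(workers: Closable[], timeoutMs = 30_000): Promise<boolean> {
  const allClosed = Promise.all(workers.map((w) => w.close())).then(() => true);
  const deadline = new Promise<boolean>((resolve) =>
    setTimeout(() => resolve(false), timeoutMs),
  );
  return Promise.race([allClosed, deadline]);
}

// Wire it to the signal your orchestrator sends, e.g.:
// process.on('SIGTERM', async () => {
//   const clean = await shutdown([emailWorker, aiWorker, webhookWorker]);
//   process.exit(clean ? 0 : 1);
// });
```

Pick a timeout shorter than your orchestrator's kill grace period (Kubernetes defaults to 30s) so a hung job can't block the pod from terminating.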
Advanced Patterns
Rate Limiting per User
// Elysia example with per-user rate limiting
.post('/api/generate', async ({ body, set }) => {
const client = await getClient();
// Check user's current job count
const userQueueName = `ai-${body.userId}`;
const counts = await client.getJobCounts(userQueueName);
if (counts.active + counts.waiting > 5) {
set.status = 429;
return {
error: 'Rate limit exceeded',
message: 'Maximum 5 pending jobs allowed',
};
}
// Use user-specific queue with rate limiting
await client.setRateLimit(userQueueName, {
max: 10, // 10 jobs
duration: 60000, // per minute
});
const job = await client.push(userQueueName, body);
return { success: true, jobId: job.id };
})
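The `max: 10, duration: 60000` pair above means at most 10 jobs per minute. To make the semantics concrete, here is a minimal in-memory sliding-window check, purely illustrative since flashQ enforces its limits server-side:

```typescript
// Sliding-window limiter: allow at most `max` events per `windowMs` per key.
function makeLimiter(max: number, windowMs: number) {
  // key -> timestamps of recent allowed events
  const hits = new Map<string, number[]>();
  return function allow(key: string, now = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    const recent = (hits.get(key) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= max) {
      hits.set(key, recent);
      return false;
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```

A sliding window avoids the burst-at-the-boundary problem of fixed windows (20 jobs in two seconds straddling a minute boundary), at the cost of keeping per-key timestamps in memory.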
Job Workflows with Dependencies
// Hono example: Multi-step AI pipeline
app.post('/api/pipeline', async (c) => {
const body = await c.req.json();
// assumes getClient is exported from ./middleware/queue
const client = await getClient();
// Step 1: Extract text from document
const extractJob = await client.push('extract', {
documentUrl: body.documentUrl,
});
// Step 2: Summarize (depends on extraction)
const summarizeJob = await client.push('summarize', {
sourceJobId: extractJob.id,
}, {
depends_on: [extractJob.id],
});
// Step 3: Generate embeddings (depends on extraction)
const embedJob = await client.push('embed', {
sourceJobId: extractJob.id,
}, {
depends_on: [extractJob.id],
});
// Step 4: Store in vector DB (depends on embeddings)
const storeJob = await client.push('store-vectors', {
sourceJobId: embedJob.id,
}, {
depends_on: [embedJob.id],
});
return c.json({
pipelineId: extractJob.id,
jobs: {
extract: extractJob.id,
summarize: summarizeJob.id,
embed: embedJob.id,
store: storeJob.id,
},
});
});
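The pipeline above is a small DAG: summarize and embed fan out from extract, and store fans in from embed. A topological sort makes the execution order explicit, which is handy for validating a pipeline definition before enqueueing anything. This helper is generic and not tied to the flashQ API:

```typescript
// Kahn's algorithm: order steps so every dependency runs before its
// dependents. `deps` maps each step to the steps it depends on.
function topoOrder(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [step, parents] of Object.entries(deps)) {
    indegree.set(step, parents.length);
    for (const p of parents) {
      dependents.set(p, [...(dependents.get(p) ?? []), step]);
    }
  }
  // Start from steps with no dependencies.
  const queue = [...indegree].filter(([, d]) => d === 0).map(([s]) => s);
  const order: string[] = [];
  while (queue.length) {
    const step = queue.shift()!;
    order.push(step);
    for (const child of dependents.get(step) ?? []) {
      const d = indegree.get(child)! - 1;
      indegree.set(child, d);
      if (d === 0) queue.push(child);
    }
  }
  // If any step never reached indegree 0, the graph has a cycle.
  if (order.length !== indegree.size) throw new Error('cycle in pipeline');
  return order;
}
```

Running it over the pipeline's dependency map also catches accidental cycles, which would otherwise leave jobs waiting on each other forever.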
Real-time Progress with SSE
// Elysia SSE for job progress
import { Elysia } from 'elysia';
new Elysia()
.get('/api/jobs/:id/stream', async function* ({ params }) {
const client = await getClient();
let lastState = '';
let lastProgress = -1;
while (true) {
const job = await client.getJob(params.id);
if (!job) {
yield { event: 'error', data: 'Job not found' };
break;
}
const state = await client.getState(params.id);
const progress = await client.getProgress(params.id);
// Only emit on changes (treat missing progress as -1 so it compares stably)
const percent = progress?.percent ?? -1;
if (state !== lastState || percent !== lastProgress) {
yield {
event: 'update',
data: JSON.stringify({ state, progress }),
};
lastState = state;
lastProgress = percent;
}
if (state === 'completed' || state === 'failed') {
const result = await client.getResult(params.id);
yield {
event: state,
data: JSON.stringify({ result }),
};
break;
}
await Bun.sleep(500);
}
});
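Conceptually, each yielded `{ event, data }` pair maps to one Server-Sent Events frame on the wire: an `event:` line, one `data:` line per line of payload, and a terminating blank line. If you ever need to build frames by hand (outside Elysia's generator support), the format per the SSE spec is:

```typescript
// Serialize one SSE frame per the EventSource wire format:
// "event: <name>\n" + one "data: <line>\n" per payload line + blank line.
function sseFrame(event: string, data: string): string {
  const dataLines = data
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n');
  return `event: ${event}\n${dataLines}\n\n`;
}
```

On the browser side, `new EventSource('/api/jobs/123/stream')` with `addEventListener('update', ...)` consumes these frames, and multi-line payloads are reassembled automatically.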
For high-throughput scenarios, use flashQ's binary protocol (MessagePack) by setting useBinary: true in the client options. This reduces payload size by 40% and speeds up serialization by 3-5x.
Deployment
Docker Compose Setup
# docker-compose.yml
version: '3.8'
services:
flashq:
image: ghcr.io/egeominotti/flashq:latest
ports:
- "6789:6789"
- "6790:6790"
environment:
- DATABASE_URL=postgres://flashq:flashq@postgres:5432/flashq
- HTTP=1
- AUTH_TOKENS=your-secret-token
depends_on:
- postgres
postgres:
image: postgres:16-alpine
environment:
- POSTGRES_USER=flashq
- POSTGRES_PASSWORD=flashq
- POSTGRES_DB=flashq
volumes:
- postgres_data:/var/lib/postgresql/data
api:
build: .
ports:
- "3000:3000"
environment:
- FLASHQ_HOST=flashq
- FLASHQ_PORT=6789
- FLASHQ_TOKEN=your-secret-token
depends_on:
- flashq
worker:
build:
context: .
dockerfile: Dockerfile.worker
environment:
- FLASHQ_HOST=flashq
- FLASHQ_PORT=6789
- FLASHQ_TOKEN=your-secret-token
- OPENAI_API_KEY=${OPENAI_API_KEY}
- RESEND_API_KEY=${RESEND_API_KEY}
depends_on:
- flashq
deploy:
replicas: 3
volumes:
postgres_data:
Dockerfile for Bun
# Dockerfile
FROM oven/bun:1 AS base
WORKDIR /app
# Install dependencies
FROM base AS deps
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
# Build
FROM base AS build
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN bun build ./src/index.ts --outdir ./dist --target bun
# Production
FROM base AS production
COPY --from=build /app/dist ./dist
COPY --from=deps /app/node_modules ./node_modules
EXPOSE 3000
CMD ["bun", "run", "dist/index.js"]
Conclusion
Elysia and Hono.js combined with flashQ create an exceptionally fast stack for building modern APIs with background job processing:
- Elysia: Best for Bun-native applications with end-to-end type safety
- Hono.js: Perfect for multi-runtime support (Bun, Node, Cloudflare Workers)
- flashQ: Handles your background jobs with 1.9M jobs/sec throughput
Both frameworks integrate seamlessly with flashQ's TypeScript SDK, giving you type-safe job queues with minimal configuration.