BullMQ is the most popular job queue for Node.js, and for good reason. It's battle-tested, feature-rich, and has an excellent API. We used it ourselves for years before building flashQ.
So why build an alternative? The short answer: Redis.
Redis is an incredible piece of software, but for job queues—especially AI workloads—it introduces friction that we wanted to eliminate. This article explores the key differences between flashQ and BullMQ, and helps you decide which is right for your project.
Quick Comparison
| Feature | flashQ | BullMQ |
|---|---|---|
| External dependency | None | Redis required |
| API compatibility | BullMQ-compatible | - |
| Max payload size | 10 MB | ~5 MB practical |
| Push throughput | 1.9M/sec | ~50K/sec |
| Processing throughput | 280K/sec | ~30K/sec |
| Latency (p99) | <1ms | 5-10ms |
| Job dependencies | ✅ | ✅ |
| Rate limiting | ✅ Built-in | ✅ |
| Priorities | ✅ | ✅ |
| Delayed jobs | ✅ | ✅ |
| Cron jobs | ✅ | ✅ (repeatable) |
| Written in | Rust | TypeScript + Redis |
| Persistence | PostgreSQL (optional) | Redis |
| License | MIT | MIT |
The Redis Problem
Don't get us wrong—Redis is amazing. It's fast, reliable, and incredibly versatile. But using it as a job queue backend comes with costs:
1. Operational Overhead
Running Redis in production means:
- Provisioning and configuring instances
- Setting up persistence (RDB snapshots, AOF logs)
- Monitoring memory usage and eviction
- Managing high availability (Sentinel or Cluster)
- Handling failovers and split-brain scenarios
- Tuning maxmemory and eviction policies
For a startup or small team, this is significant overhead just to run background jobs.
2. Memory Costs
Redis stores everything in RAM. At scale, this gets expensive:
- 1 million jobs with 1KB payloads = 1GB+ of Redis memory
- AI workloads with embeddings = 10-100x larger payloads
- Cloud Redis pricing: $0.10-0.50 per GB/hour
A moderately sized AI queue can easily cost $500-1000/month just in Redis.
3. Network Latency
Every job operation requires a network round-trip:
```
Your App → Network → Redis → Network → Your App
            ~0.5ms   ~0.1ms   ~0.5ms

Total: ~1-2ms per operation
```
This adds up when you're processing thousands of jobs per second.
4. Payload Limitations
While Redis technically supports values up to 512MB, performance degrades significantly above a few MB. For AI workloads that need to pass embeddings (1536+ floats), images, or long context windows, this becomes a problem.
How flashQ Solves These Problems
Zero External Dependencies
flashQ is a single binary. Download it, run it, done:
```bash
# That's it. No Redis, no Docker, no configuration.
./flashq-server
```
For persistence, you can optionally connect to PostgreSQL. But for development or smaller workloads, the in-memory mode works perfectly.
Native Performance
flashQ is written in Rust and optimized for job queue workloads:
- 32 shards for lock-free parallel access
- Binary protocol (MessagePack) for fast serialization
- Efficient data structures (indexed priority queues)
- Zero-copy operations where possible
The result: 10x higher throughput and sub-millisecond latency.
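The sharding idea is worth unpacking: by hashing each queue onto one of 32 shards, unrelated queues never contend on the same lock. The sketch below is purely illustrative; the FNV-1a hash and routing logic here are assumptions, not flashQ's actual Rust implementation:

```typescript
// Illustrative only: route each queue to one of 32 shards so pushes to
// unrelated queues proceed in parallel without a shared lock.
// The hash function is an assumption; flashQ's real routing may differ.

const SHARD_COUNT = 32;

// FNV-1a: a simple, fast, deterministic string hash.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

function shardFor(queueName: string): number {
  return fnv1a(queueName) % SHARD_COUNT;
}

// Different queues usually land on different shards.
console.log(shardFor("emails"), shardFor("image-resize"));
```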
Large Payloads
flashQ natively supports payloads up to 10MB, making it perfect for:
- Embedding vectors (1536 floats = ~6KB per embedding)
- Images for processing
- Long text contexts for LLMs
- Batch data for inference
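The "1536 floats ≈ 6 KB" figure follows from float32 being 4 bytes wide. A quick sanity check (illustrative helper, not a flashQ API):

```typescript
// A float32 is 4 bytes, so a 1536-dimension embedding serializes to
// 1536 × 4 = 6144 bytes before any message envelope.

function embeddingBytes(dims: number, bytesPerFloat = 4): number {
  return dims * bytesPerFloat;
}

const one = embeddingBytes(1536);          // 6144 bytes ≈ 6 KB
const batch = embeddingBytes(1536) * 1000; // 1000-vector batch ≈ 6 MB
console.log(one, batch);
```

A 6 MB batch fits within flashQ's 10 MB limit but sits past the ~5 MB practical ceiling cited for Redis-backed queues above.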
API Compatibility
We designed flashQ's API to be compatible with BullMQ. If you're migrating, most code works unchanged:
```typescript
// BullMQ
import { Queue, Worker } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis();
const queue = new Queue('my-queue', { connection });
```

```typescript
// flashQ
import { Queue, Worker } from 'flashq';

const queue = new Queue('my-queue'); // No connection needed!
```
The Queue and Worker APIs are nearly identical:
```typescript
// Adding jobs (same API)
await queue.add('task-name', { data: 'value' }, {
  priority: 1,
  delay: 5000,
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 }
});

// Processing jobs (same API)
const worker = new Worker('my-queue', async (job) => {
  console.log(job.name, job.data);
  return { result: 'done' };
});
```
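The `backoff: { type: 'exponential', delay: 1000 }` option retries failed jobs with exponentially growing delays. BullMQ computes the wait as `delay × 2^(attemptsMade − 1)`; this sketch assumes flashQ mirrors that formula, which you should verify against its docs:

```typescript
// Exponential backoff delay, as BullMQ computes it:
// delay × 2^(attemptsMade − 1). Assuming flashQ matches this formula.

function backoffDelay(baseMs: number, attemptsMade: number): number {
  return baseMs * 2 ** (attemptsMade - 1);
}

// With delay: 1000, successive retries wait 1s, 2s, 4s, ...
console.log([1, 2, 3].map(a => backoffDelay(1000, a)));
```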
Feature Parity
flashQ supports most BullMQ features:
| Feature | flashQ | Notes |
|---|---|---|
| Job priorities | ✅ | Same API |
| Delayed jobs | ✅ | Same API |
| Job retries | ✅ | Exponential backoff |
| Dead letter queue | ✅ | Automatic after max attempts |
| Job dependencies | ✅ | `depends_on` option |
| Rate limiting | ✅ | Token bucket per queue |
| Concurrency control | ✅ | Per-queue limits |
| Progress tracking | ✅ | `job.updateProgress()` |
| Cron/Repeatable | ✅ | 6-field cron expressions |
| Pause/Resume | ✅ | Same API |
| Events | ✅ | `completed`, `failed`, `progress` |
| Flows | ✅ | Parent-child job relationships |
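The table describes rate limiting as a token bucket per queue. For readers unfamiliar with the algorithm, here is a minimal sketch of a token bucket itself; it illustrates the mechanism, not flashQ's actual implementation:

```typescript
// Minimal token bucket: up to `capacity` tokens, refilled at `ratePerSec`.
// Each dispatched job consumes one token; an empty bucket means the job
// waits. Sketches the algorithm named in the table, not flashQ's code.

class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private ratePerSec: number, now = 0) {
    this.tokens = capacity;
    this.last = now;
  }

  // `now` is in seconds, injected to keep the sketch deterministic.
  tryTake(now: number): boolean {
    const elapsed = now - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.ratePerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(2, 1); // burst of 2, then 1 job/sec
console.log(bucket.tryTake(0), bucket.tryTake(0), bucket.tryTake(0)); // true true false
```

A bucket like this lets a queue absorb short bursts up to `capacity` while holding the long-run dispatch rate at `ratePerSec`.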
When to Choose BullMQ
BullMQ is still a great choice if:
- You're already running Redis for caching or sessions
- You need Redis-specific features like pub/sub or streams
- Your team has Redis expertise and established operations
- You want the BullMQ Pro features like groups and rate limiting per group
When to Choose flashQ
flashQ is the better choice if:
- You want zero infrastructure to manage
- You're building AI/ML applications with large payloads
- You need high throughput (100K+ jobs/sec)
- You're a startup or small team wanting to move fast
- You're in development and don't want to spin up Redis
Migration Guide
Migrating from BullMQ to flashQ is straightforward:
1. Install flashQ
```bash
npm install flashq
```
2. Start the flashQ server
```bash
docker run -d -p 6789:6789 flashq/flashq
```
3. Update your imports
```typescript
// Before
import { Queue, Worker } from 'bullmq';

// After
import { Queue, Worker } from 'flashq';
```
4. Remove Redis connection
```typescript
// Before
const queue = new Queue('my-queue', { connection: redis });

// After
const queue = new Queue('my-queue');
```
That's it! Most of your code should work unchanged.
Some advanced BullMQ features like job groups (Pro) and rate limiting per worker don't have direct equivalents in flashQ. Check the documentation for the full feature list.
Conclusion
Both BullMQ and flashQ are excellent job queues. BullMQ is mature, well-documented, and has a large community. flashQ offers simplicity, performance, and is purpose-built for modern AI workloads.
If you're starting a new project—especially one involving AI—we encourage you to give flashQ a try. The lack of Redis ops alone might make your life significantly easier.