If you've built anything in Web3, you know the pain: failed transactions, rate-limited RPC calls, stuck airdrops, and that dreaded "nonce too low" error at 3 AM. Most teams pay $500+/month for OpenZeppelin Defender or Gelato and still hit scaling walls. There's a better way.
flashQ handles 1.9M jobs/sec with sub-millisecond latency. It's self-hosted, open-source, and provides everything blockchain apps need: priority queues, rate limiting, retry logic, job dependencies, and persistence. No vendor lock-in, no per-transaction fees.
The Hidden Infrastructure Problem in Web3
Every blockchain application eventually needs to:
- Send transactions reliably (and handle failures gracefully)
- Process events at scale (without losing any)
- Rate-limit RPC calls (or get banned by Alchemy)
- Coordinate complex operations (mint → transfer → notify)
- Schedule recurring tasks (vesting releases, keeper operations)
These aren't blockchain problems. They're queue problems. Yet most Web3 teams treat infrastructure as an afterthought.
How flashQ Maps to Blockchain Needs
| flashQ Feature | Blockchain Problem It Solves |
|---|---|
| Priority Queue | TX ordering, MEV, gas bidding |
| Rate Limiting | RPC rate limits, anti-spam |
| Retry + Backoff | Failed TX, network congestion |
| Delayed Jobs | Vesting, time-locks, scheduled ops |
| Job Dependencies | Multi-step TX flows, approvals |
| DLQ | Failed TX investigation |
| Cron | Keeper automation, oracle updates |
| Batch Operations | Airdrops, mass minting |
| Progress Tracking | Long-running operations |
| Persistence | Crash recovery, audit trail |
1. Transaction Relayer
The problem: OpenZeppelin Defender costs $500+/month and still has scaling limits. Gelato charges per execution. Both create vendor lock-in.
The solution: Build your own relayer with flashQ as the backbone.
```javascript
// Submit a transaction to the queue
await flashq.push('relayer:submit', {
  to: contractAddress,
  data: encodedFunctionCall,
  value: 0,
  chainId: 1,
  gasStrategy: 'aggressive'
}, {
  priority: isUrgent ? 1000 : 100,
  maxAttempts: 5,
  backoff: 2000,
  timeout: 60000,
  uniqueKey: `tx:${idempotencyKey}`
});

// The worker handles signing, nonce management, and submission,
// with automatic retry on reverts, gas estimation, etc.
```
Why flashQ:
- Retry logic handles network failures and reverts
- Priority ensures critical transactions execute first
- Unique keys prevent duplicate submissions
- DLQ captures failed transactions for investigation
- 10-100x cheaper than managed solutions
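flashQ leaves signing, nonce tracking, and submission to your worker code. As a rough sketch of the state such a worker has to keep, here is a minimal in-memory nonce allocator (illustrative only, not a flashQ API; a production relayer would persist this state and reconcile it against the chain after restarts):

```javascript
// Minimal in-memory nonce allocator (illustrative only).
// Hands out the lowest free nonce and reclaims nonces from
// transactions that were dropped or replaced before inclusion.
class NonceManager {
  constructor(startingNonce) {
    this.next = startingNonce; // next never-used nonce
    this.gaps = [];            // nonces freed by dropped txs
  }

  // Hand out the lowest available nonce.
  allocate() {
    if (this.gaps.length > 0) {
      this.gaps.sort((a, b) => a - b);
      return this.gaps.shift();
    }
    return this.next++;
  }

  // Return a nonce whose transaction was dropped before inclusion.
  release(nonce) {
    this.gaps.push(nonce);
  }
}
```

Reusing freed nonces first matters: a gap in the nonce sequence blocks every later transaction from the same address.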
2. NFT Minting Queue
The problem: High-demand mints crash servers, create gas wars, and frustrate users with failed transactions.
The solution: Queue-based minting with fairness and rate limiting.
```javascript
// User requests a mint
await flashq.push('nft:mint', {
  wallet: userAddress,
  quantity: 2,
  proof: merkleProof
}, {
  uniqueKey: `mint:${userAddress}`, // One request per wallet
  priority: -requestTimestamp,      // Higher priority runs first, so negate the timestamp for FIFO fairness
  maxAttempts: 5,
  timeout: 120000
});

// Rate limit to prevent gas wars
await flashq.setRateLimit('nft:mint', {
  max: 10,      // 10 mints per second
  window: 1000
});

// Concurrency limit for gas management
await flashq.setConcurrency('nft:mint', 5);
```
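With the 10-per-second rate limit above, you can also give waiting users an honest ETA from their position in the queue. A hypothetical helper (the name and signature are illustrative, not part of flashQ):

```javascript
// Rough wait-time estimate for a queued mint, derived from the queue
// position ahead of the request and the configured rate limit.
function estimateWaitMs(positionInQueue, maxPerWindow, windowMs) {
  const ratePerMs = maxPerWindow / windowMs; // jobs drained per ms
  return Math.ceil(positionInQueue / ratePerMs);
}
```

For example, 100 requests ahead at 10 mints per second works out to about a 10 second wait.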
3. Airdrop Distribution
The problem: Sending tokens to 50,000 wallets is slow, expensive, and error-prone. One failed transaction can derail the entire process.
The solution: Batch processing with progress tracking and automatic recovery.
```javascript
// Queue all recipients in batches
for (const batch of chunk(recipients, 1000)) {
  await flashq.pushBatch('airdrop:send',
    batch.map(r => ({
      data: {
        wallet: r.address,
        amount: r.amount,
        tokenAddress: TOKEN
      },
      options: {
        uniqueKey: `airdrop:${campaignId}:${r.address}`,
        maxAttempts: 10,
        backoff: 5000
      }
    }))
  );
}

// Control spend rate
await flashq.setConcurrency('airdrop:send', 3);

// Monitor progress
const counts = await flashq.getJobCounts('airdrop:send');
console.log(`Completed: ${counts.completed}/${counts.total}`);
```
Why flashQ:
- Resume after crashes thanks to PostgreSQL persistence
- Progress tracking for transparency
- Concurrency limits control gas spending
- Batch operations for efficiency
- DLQ captures failed sends for retry
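The `chunk` helper in the loop above is assumed rather than shown; a minimal version looks like this:

```javascript
// Split an array into consecutive slices of at most `size` elements,
// as used by the airdrop batching loop.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```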
4. Keeper / Automation
The problem: Chainlink Automation and Gelato charge per execution. Complex conditions require custom logic.
The solution: Self-hosted automation with flashQ cron jobs.
```javascript
// Compound-style yield harvesting
await flashq.addCron('keeper:harvest', {
  schedule: '0 */4 * * *', // Every 4 hours
  queue: 'keeper:execute',
  data: {
    action: 'harvest',
    vaults: ['0x...', '0x...']
  }
});

// Liquidation monitoring (every 30 seconds, 6-field schedule)
await flashq.addCron('keeper:liquidations', {
  schedule: '*/30 * * * * *',
  queue: 'keeper:execute',
  data: { action: 'checkLiquidations' }
});

// Price oracle updates
await flashq.addCron('keeper:oracle', {
  schedule: '*/5 * * * *', // Every 5 minutes
  queue: 'keeper:execute',
  data: { action: 'updatePriceFeeds' }
});
```
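All three crons above feed the single 'keeper:execute' queue, so the worker has to route on the `action` field. A sketch of that dispatch (the handler names and bodies are illustrative, not a flashQ API):

```javascript
// Dispatch table for the shared 'keeper:execute' queue: each cron tags
// its job with an `action`, and the worker routes on it.
const keeperHandlers = {
  harvest: async (data) => { /* harvest each vault in data.vaults */ },
  checkLiquidations: async (data) => { /* scan open positions */ },
  updatePriceFeeds: async (data) => { /* refresh on-chain feeds */ }
};

// Look up the handler for a job's action; fail loudly on unknown
// actions so they land in the DLQ instead of being silently dropped.
function resolveKeeperHandler(action) {
  const handler = keeperHandlers[action];
  if (!handler) throw new Error(`unknown keeper action: ${action}`);
  return handler;
}
```

Throwing on an unknown action is deliberate: an acked no-op hides misconfigured crons, while a failed job surfaces in the dead-letter queue.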
5. Arbitrage Execution
The problem: Arbitrage opportunities are time-sensitive. Failed executions mean missed profits. Exchange rate limits cause rejections.
The solution: Priority queue with LIFO processing and built-in rate limiting.
```javascript
// Opportunity detected
const spread = 0.15; // percent spread between venues
await flashq.push('arb:execute', {
  type: 'cross-exchange',
  buyExchange: 'binance',
  sellExchange: 'okx',
  symbol: 'ETH/USDT',
  spread,
  deadline: Date.now() + 2000
}, {
  priority: Math.floor(spread * 10000), // Higher spread = higher priority
  lifo: true,    // Newest opportunities first
  timeout: 3000, // Fast timeout
  ttl: 5000      // Expire quickly
});

// Rate limits per exchange
await flashq.setRateLimit('arb:binance', { max: 1200, window: 60000 });
await flashq.setRateLimit('arb:okx', { max: 300, window: 1000 });
```
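The spread-to-priority formula above deserves one guard: clamping keeps a single anomalous quote from permanently outranking everything else in the queue. A hypothetical clamped version:

```javascript
// Map a percent spread to an integer priority, capped so one bad quote
// (a fat-finger print, a stale feed) can't starve all other jobs.
function spreadPriority(spreadPct, cap = 100000) {
  return Math.min(Math.floor(spreadPct * 10000), cap);
}
```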
6. Cross-Chain Bridge
The problem: Bridge operations require coordination across chains. Message delivery must be reliable. Failures need manual intervention.
The solution: Job dependencies for multi-step workflows.
```javascript
// Step 1: Verify on the source chain
const verifyJob = await flashq.push('bridge:verify', {
  sourceChain: 'ethereum',
  txHash: sourceTxHash,
  messageHash
});

// Step 2: Wait for confirmations
const confirmJob = await flashq.push('bridge:confirm', {
  txHash: sourceTxHash,
  requiredConfirmations: 12
}, {
  dependsOn: [verifyJob.id]
});

// Step 3: Execute on the destination chain
const executeJob = await flashq.push('bridge:execute', {
  destChain: 'arbitrum',
  message: encodedMessage
}, {
  dependsOn: [confirmJob.id],
  maxAttempts: 10,
  timeout: 300000
});

// Wait for completion
const result = await flashq.finished(executeJob.id);
```
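Step 2's confirmation check reduces to simple block arithmetic. A sketch, assuming the worker tracks the latest head block it has seen (the helper name is illustrative):

```javascript
// True once a source-chain transaction has enough confirmations.
// `headBlock` is the latest block number seen; `txBlock` is the block
// the transaction was included in. Inclusion itself counts as the
// first confirmation.
function hasEnoughConfirmations(headBlock, txBlock, required) {
  return headBlock - txBlock + 1 >= required;
}
```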
7. Event Indexing
The problem: Processing blockchain events at scale requires handling backpressure, retries, and parallel processing without losing data.
The solution: Event-driven pipeline with flashQ.
```javascript
// Webhook from Alchemy/QuickNode
app.post('/webhook/events', async (req, res) => {
  const events = req.body.events;
  await flashq.pushBatch('indexer:process',
    events.map(e => ({
      data: {
        blockNumber: e.blockNumber,
        txHash: e.transactionHash,
        logIndex: e.logIndex,
        event: e.decoded
      },
      options: {
        priority: -e.blockNumber, // Higher priority runs first, so negate for block order
        uniqueKey: `${e.transactionHash}:${e.logIndex}`
      }
    }))
  );
  res.status(200).send('OK');
});

// Workers process events in parallel:
// database writes, notifications, analytics, etc.
```
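Webhook providers redeliver events after timeouts and reorgs, so the worker side should deduplicate too, keyed the same way as the `uniqueKey` above. An illustrative in-worker filter (the `seen` set stands in for whatever persistent store you use):

```javascript
// Drop events that were already processed, keyed by txHash:logIndex.
// `seen` is any Set-like store of processed keys; here an in-memory
// Set, in production more likely a database table.
function dedupeEvents(events, seen) {
  const fresh = [];
  for (const e of events) {
    const key = `${e.transactionHash}:${e.logIndex}`;
    if (!seen.has(key)) {
      seen.add(key);
      fresh.push(e);
    }
  }
  return fresh;
}
```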
8. Oracle Price Feeds
```javascript
// Multi-source price aggregation; keep the returned jobs so the
// aggregation step can depend on them
const fetchJobs = await flashq.pushBatch('oracle:fetch', [
  { data: { source: 'binance', pair: 'ETH/USD' } },
  { data: { source: 'coinbase', pair: 'ETH/USD' } },
  { data: { source: 'kraken', pair: 'ETH/USD' } }
]);

// Aggregate and push on-chain once all sources have reported
await flashq.push('oracle:update', {
  pair: 'ETH/USD'
}, {
  dependsOn: fetchJobs.map(j => j.id),
  delay: 5000 // Extra grace period on top of the dependencies
});
```
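The aggregation worker typically takes the median rather than the mean, since the median tolerates one wildly wrong source. A minimal sketch:

```javascript
// Median of the fetched prices: robust to a single bad source,
// which is why on-chain oracles prefer it over the mean.
function medianPrice(prices) {
  const sorted = [...prices].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

With three sources, one exchange reporting a flash-crash print simply gets ignored.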
9. DEX Order Routing
```javascript
// Smart order routing: split the order across DEXes
const routes = calculateOptimalRoutes(order);

await flashq.pushBatch('dex:execute',
  routes.map(route => ({
    data: {
      dex: route.dex,
      path: route.path,
      amountIn: route.amountIn,
      minAmountOut: route.minAmountOut
    },
    options: {
      priority: route.expectedOutput,
      timeout: 30000,
      maxAttempts: 3
    }
  }))
);
```
10. Liquidation Bots
```javascript
// Monitor positions via cron (every 10 seconds, 6-field schedule)
await flashq.addCron('liquidation:monitor', {
  schedule: '*/10 * * * * *',
  queue: 'liquidation:check',
  data: { action: 'scanPositions' }
});

// When a position is at risk
await flashq.push('liquidation:execute', {
  protocol: 'aave',
  positionId: position.id,
  collateral: position.collateral,
  debt: position.debt,
  healthFactor: position.healthFactor
}, {
  priority: Math.floor((1 / position.healthFactor) * 10000), // Lower health = higher priority
  timeout: 5000,
  maxAttempts: 3
});
```
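The inverse-health-factor priority above blows up as the health factor approaches zero, so a clamped version is safer in practice. An illustrative variant:

```javascript
// Map a health factor to a priority: positions closest to liquidation
// get the highest values, capped so a near-zero health factor can't
// produce an unbounded number.
function liquidationPriority(healthFactor, cap = 1000000) {
  if (healthFactor <= 0) return cap;
  return Math.min(Math.floor((1 / healthFactor) * 10000), cap);
}
```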
11. Token Vesting
```javascript
// Monthly vesting releases
await flashq.addCron('vesting:release', {
  schedule: '0 0 1 * *', // 1st of each month
  queue: 'vesting:process',
  data: { vestingContract: '0x...' }
});

// Process a vesting release
async function processVesting(job) {
  const { vestingContract } = job.data;
  const beneficiaries = await getBeneficiaries(vestingContract);
  const period = new Date().toISOString().slice(0, 7); // e.g. '2025-01'
  await flashq.pushBatch('vesting:transfer',
    beneficiaries.map(b => ({
      data: {
        recipient: b.address,
        amount: b.vestedAmount
      },
      options: {
        // Key on the release period, not Date.now(): a timestamp is
        // unique on every retry and would never deduplicate
        uniqueKey: `vesting:${b.address}:${period}`,
        maxAttempts: 5
      }
    }))
  );
}
```
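The idempotency key is the crux here: it must stay stable across retries within one release period, or deduplication never fires and a crashed run can pay twice. A sketch of a period-scoped key builder (an illustrative helper, not a flashQ API):

```javascript
// Build a vesting idempotency key scoped to the calendar month of the
// release, so re-running the same release is a no-op for already-paid
// beneficiaries.
function vestingKey(address, date) {
  const year = date.getUTCFullYear();
  const month = String(date.getUTCMonth() + 1).padStart(2, '0');
  return `vesting:${address}:${year}-${month}`;
}
```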
12. DAO Governance
```javascript
// Timelock execution queue
await flashq.push('dao:execute', {
  proposalId: 42,
  targets: ['0x...'],
  calldatas: ['0x...']
}, {
  delay: 48 * 60 * 60 * 1000, // 48h timelock
  uniqueKey: 'proposal:42'
});

// Queue proposal notifications
await flashq.push('dao:notify', {
  proposalId: 42,
  event: 'queued',
  channels: ['discord', 'telegram', 'email']
});
```
13. Webhook Processing (Alchemy/QuickNode)
```javascript
// Receive webhooks with backpressure handling
app.post('/webhook/alchemy', async (req, res) => {
  // Acknowledge immediately so the provider doesn't retry
  res.status(200).send('OK');

  // Queue for processing
  await flashq.push('webhook:process', {
    source: 'alchemy',
    payload: req.body,
    receivedAt: Date.now()
  }, {
    priority: req.body.blockNumber || 0
  });
});

// Rate limit webhook processing
await flashq.setRateLimit('webhook:process', {
  max: 100,
  window: 1000
});
```
14. Wallet Notifications
```javascript
// Fan out notifications on a large transfer
await flashq.push('notify:large-transfer', {
  wallet: '0x...',
  amount: '1000000',
  token: 'USDC',
  txHash: '0x...'
});

// Worker handles multi-channel delivery
async function processNotification(job) {
  const { wallet, amount, token } = job.data;
  const user = await getUserByWallet(wallet);
  const channels = user.notificationSettings;

  await flashq.pushBatch('notify:deliver',
    channels.map(channel => ({
      data: {
        channel: channel.type,
        destination: channel.address,
        message: `Large transfer: ${amount} ${token}`
      },
      options: {
        maxAttempts: 3,
        backoff: 1000
      }
    }))
  );
}
```
Comparison: flashQ vs. Alternatives
| Feature | flashQ | OZ Defender | Gelato | Redis + BullMQ |
|---|---|---|---|---|
| Throughput | 1.9M/sec | Limited | Limited | ~50K/sec |
| Self-hosted | Yes | No | No | Yes |
| Cost | Infra only | $500+/mo | Per execution | Infra |
| Priority queues | Native | Limited | No | Yes |
| Rate limiting | Built-in | Manual | N/A | Manual |
| Job dependencies | Native | No | No | Yes |
| Cron jobs | 6-field | Limited | Yes | Plugin |
| Clustering/HA | Native | Yes | Yes | Complex |
| Vendor lock-in | None | High | High | Low |
Architecture: flashQ Blockchain Layer
```
flashq-blockchain/
├── engine/                   # Core flashQ server
├── blockchain/
│   ├── src/
│   │   ├── relayer/          # Transaction submission
│   │   │   ├── nonce.rs      # Nonce management
│   │   │   ├── gas.rs        # Gas estimation & EIP-1559
│   │   │   └── signer.rs     # Key management (KMS, Vault)
│   │   ├── indexer/          # Event processing
│   │   ├── keeper/           # Automation tasks
│   │   ├── oracle/           # Price feeds
│   │   └── chains/           # Multi-chain configs
│   │       ├── ethereum.rs
│   │       ├── polygon.rs
│   │       ├── arbitrum.rs
│   │       └── base.rs
│   └── Cargo.toml
└── sdk/
    └── typescript/
        └── src/
            └── blockchain.ts # Blockchain-specific SDK
```
Getting Started
```bash
# Quick start with Docker
docker run -p 6789:6789 -p 6790:6790 flashq/flashq

# Or with PostgreSQL persistence
docker-compose up -d
```
```javascript
import { FlashQ } from 'flashq-sdk';

const client = new FlashQ({ host: 'localhost', port: 6789 });
await client.connect();

// Push your first blockchain job
await client.push('relayer:submit', {
  to: '0x...',
  data: '0x...',
  chainId: 1
}, {
  priority: 100,
  maxAttempts: 5,
  backoff: 2000
});

// Process it
const job = await client.pull('relayer:submit');
// Sign and submit the transaction...
await client.ack(job.id, { txHash: '0x...' });
```
Conclusion
Blockchain applications have unique infrastructure needs: reliability, speed, and cost efficiency. Most teams either overpay for managed solutions or build fragile custom systems.
flashQ offers a third path: production-grade infrastructure you own and control.
Whether you're building a transaction relayer, NFT platform, DeFi protocol, or bridge—the queue is the foundation. Make it solid.
- OpenZeppelin Defender: $500-2000/month
- Gelato: $0.01-0.10 per execution
- flashQ: ~$20/month (small VPS) for unlimited executions
Build Your Blockchain Infrastructure
Start building self-hosted blockchain infrastructure with flashQ today.