Note:

HeliumTS is in pre-beta and under active development. Expect bugs and breaking changes. If you find any issues, please report them on our GitHub.

A stable release is planned for early December 2025.

Background Workers

Overview

HeliumTS provides defineWorker for creating long-running background processes that start when the server starts and continue running until the server shuts down. This is ideal for:

  • Queue consumers (Redis, RabbitMQ, SQS, etc.)
  • Background task processors
  • Scheduled jobs and cron-like tasks
  • Real-time data synchronization
  • Cache warmers and data pre-loaders
  • WebSocket connection managers

Workers eliminate the need for separate microservices or monorepo setups like Turborepo: everything runs in the same process, sharing the same code, services, types, and models.

Basic Usage

Create a worker file in your src/server directory:

Server (src/server/workers/queueConsumer.ts):

import { defineWorker } from "heliumts/server";

export const queueConsumer = defineWorker(
  async (ctx) => {
    console.log("Queue consumer started");

    while (true) {
      // Poll for jobs
      const job = await queue.pop();

      if (job) {
        await processJob(job);
      }

      // Wait before polling again
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  },
  { name: "queueConsumer" }
);

When the server starts, you'll see:

Starting worker 'queueConsumer'

Worker Options

interface WorkerOptions {
  /**
   * The name of the worker, used for logging and identification.
   * If not provided, the handler function name will be used.
   */
  name?: string;

  /**
   * Whether the worker should automatically restart if it crashes.
   * Default: true
   */
  autoRestart?: boolean;

  /**
   * Delay in milliseconds before restarting the worker after a crash.
   * Default: 5000 (5 seconds)
   */
  restartDelayMs?: number;

  /**
   * Maximum number of restart attempts before giving up.
   * Set to 0 for unlimited restarts.
   * Default: 0 (unlimited)
   */
  maxRestarts?: number;

  /**
   * Whether to start the worker automatically on server startup.
   * Default: true
   */
  autoStart?: boolean;
}

Example with Options

import { defineWorker } from "heliumts/server";

export const dataSync = defineWorker(
  async (ctx) => {
    while (true) {
      await syncDataFromExternalAPI();
      await new Promise((resolve) => setTimeout(resolve, 30000)); // Every 30 seconds
    }
  },
  {
    name: "dataSync",
    autoRestart: true,
    restartDelayMs: 10000, // Wait 10 seconds before restarting
    maxRestarts: 5, // Give up after 5 restart attempts
  }
);

Context Access

Workers receive a HeliumContext object, similar to RPC methods:

import { defineWorker } from "heliumts/server";

export const contextExample = defineWorker(
  async (ctx) => {
    // Access context properties
    console.log("Worker context:", ctx);

    // You can add custom properties via middleware
    const db = ctx.db; // If set by middleware

    while (true) {
      await performTask(db);
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  },
  { name: "contextExample" }
);

Error Handling

Workers automatically handle errors with configurable restart behavior:

import { defineWorker } from "heliumts/server";

export const resilientWorker = defineWorker(
  async (ctx) => {
    while (true) {
      try {
        await riskyOperation();
      } catch (error) {
        console.error("Operation failed:", error);
        // The worker continues running
      }

      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  },
  { name: "resilientWorker" }
);

// If the entire worker crashes, it will restart automatically
export const crashingWorker = defineWorker(
  async (ctx) => {
    // This will crash and restart up to 3 times
    throw new Error("Something went wrong!");
  },
  {
    name: "crashingWorker",
    autoRestart: true,
    maxRestarts: 3,
    restartDelayMs: 5000,
  }
);
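
Because a crashed worker can be restarted automatically, any resources it opened (connections, file handles) should be released before the body exits, or each restart may leak another connection. The following is a minimal sketch of that pattern using try/finally; connectToQueue, its pop/close methods, and processJob are illustrative placeholders, not part of HeliumTS.

import { defineWorker } from "heliumts/server";

export const cleanupWorker = defineWorker(
  async (ctx) => {
    // Hypothetical helper that opens a connection; replace with your own client
    const connection = await connectToQueue();

    try {
      while (true) {
        const job = await connection.pop();
        if (job) {
          await processJob(job); // Your own job handler
        }
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    } finally {
      // Runs if the loop throws, so the next restart begins from a clean state
      await connection.close();
    }
  },
  { name: "cleanupWorker", autoRestart: true, restartDelayMs: 5000 }
);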

Graceful Shutdown

Workers are automatically stopped when the server shuts down (SIGINT/SIGTERM):

^C
Shutting down...
Stopped 3 worker(s)
Server closed

Multiple Workers

You can define multiple workers in the same file or across different files:

Server (src/server/workers/index.ts):

import { defineWorker } from "heliumts/server";

export const worker1 = defineWorker(
  async (ctx) => {
    // Worker 1 logic
  },
  { name: "worker1" }
);

export const worker2 = defineWorker(
  async (ctx) => {
    // Worker 2 logic
  },
  { name: "worker2" }
);

export const worker3 = defineWorker(
  async (ctx) => {
    // Worker 3 logic
  },
  { name: "worker3" }
);

Startup output:

Starting worker 'worker1'
Starting worker 'worker2'
Starting worker 'worker3'

Best Practices

  1. Use descriptive names: Give workers meaningful names for easy identification in logs
  2. Handle errors gracefully: Catch errors within your worker loop to prevent unnecessary restarts
  3. Use appropriate restart settings: Set maxRestarts to prevent infinite restart loops
  4. Clean up resources: If your worker allocates resources (connections, file handles), clean them up on errors
  5. Log important events: Add logging for visibility into worker behavior
  6. Use long polling: For queue consumers, block on the queue instead of polling in a tight loop to reduce CPU usage (see the sketch after this list)
  7. Monitor worker health: Use getWorkerStatus() to monitor worker states
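
To illustrate item 6, here is a minimal long-polling consumer sketch. It assumes a Redis list is used as the queue and the ioredis client is installed; blpop blocks for up to the given timeout rather than spinning, so the loop stays cheap while the queue is empty. processJob is an illustrative placeholder for your own handler.

import Redis from "ioredis";
import { defineWorker } from "heliumts/server";

// Shared client, created once when the module loads; configure host/port as needed
const redis = new Redis();

export const longPollingConsumer = defineWorker(
  async (ctx) => {
    while (true) {
      // Block for up to 5 seconds waiting for a job instead of polling every second
      const result = await redis.blpop("jobs", 5);

      if (result) {
        const [, payload] = result; // blpop returns [listName, value]
        await processJob(JSON.parse(payload)); // Your own job handler
      }
      // No extra sleep needed: blpop already waited while the queue was empty
    }
  },
  { name: "longPollingConsumer" }
);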

TypeScript Support

Workers are fully typed:

import { defineWorker, WorkerOptions, HeliumWorkerDef } from "heliumts/server";

const options: WorkerOptions = {
  name: "typedWorker",
  autoRestart: true,
  maxRestarts: 5,
};

export const typedWorker: HeliumWorkerDef = defineWorker(async (ctx) => {
  // Fully typed worker
}, options);

Why Workers Instead of Monorepos?

Traditional approaches require:

  • Separate repositories or monorepo tools (Turborepo, Nx)
  • Separate deployment pipelines
  • Code duplication or complex package sharing
  • Multiple running processes

With Helium workers:

  • Single codebase: Everything in one place
  • Shared code: Workers use the same services, types, and models as your RPC methods
  • Single deployment: Deploy once, run everything
  • Simplified architecture: No inter-service communication needed
  • Type safety: Full TypeScript support across the entire application