dbos-sqs 2.1.6-preview
Install from the command line:
$ npm install @dbos-inc/dbos-sqs@2.1.6-preview
Install via package.json:
"@dbos-inc/dbos-sqs": "2.1.6-preview"
About this version
Message queues are a common building block for distributed systems. Message queues allow processing to occur at a different place or time, perhaps in another programming environment. Due to its flexibility, robustness, integration, and low cost, Amazon Simple Queue Service is the most popular message queuing service underpinning distributed systems in AWS.
This package includes a DBOS communicator for sending messages using SQS, as well as an event receiver for exactly-once processing of incoming messages (even when using standard queues).
In order to send and receive messages with SQS, it is necessary to register with AWS, create a queue, and create access keys for the queue. (See Send Messages Between Distributed Applications in AWS documentation.)
First, ensure that the DBOS SQS package is installed into the application:
npm install --save @dbos-inc/dbos-sqs
Second, place appropriate configuration into the dbos-config.yaml file; the following example will pull the AWS information from the environment:
application:
  aws_sqs_configuration: aws_config # Optional if the section is called `aws_config`
  aws_config:
    aws_region: ${AWS_REGION}
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
If a different configuration file section should be used for SQS, change aws_sqs_configuration to name the section to use. If multiple configurations are to be used, the application code is responsible for naming them.
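For example, a section with the illustrative name sqs_receiver could be selected like this:

application:
  aws_sqs_configuration: sqs_receiver # use the `sqs_receiver` section for SQS
  sqs_receiver:
    aws_region: ${AWS_REGION}
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}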
First, ensure that the communicator is imported:
import { DBOS_SQS } from "@dbos-inc/dbos-sqs";
DBOS_SQS is a configured class. This means that the configuration (or config file key name) must be provided when a class instance is created, for example:
const sqsCfg = configureInstance(DBOS_SQS, 'default', {awscfgname: 'aws_config'});
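For completeness, the surrounding imports might look like the following (assuming configureInstance and DBOS are exported by the @dbos-inc/dbos-sdk package):

import { DBOS, configureInstance } from "@dbos-inc/dbos-sdk"; // assumed SDK import path
import { DBOS_SQS } from "@dbos-inc/dbos-sqs";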
Within a DBOS Transact workflow, call the sendMessage method on the configured DBOS_SQS instance:
const sendRes = await sqsCfg.sendMessage(
  {
    MessageBody: "{/*app data goes here*/}",
  },
);
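For context, a brief sketch of such a call from inside a workflow method (the class, method name, and logging line are illustrative; MessageId is assumed to come from the underlying AWS SDK response):

class OrderEvents {
  @DBOS.workflow()
  static async notifyOrderPlaced(orderId: string) {
    // The configured instance performs the SQS send as a step within this workflow.
    const sendRes = await sqsCfg.sendMessage({
      MessageBody: JSON.stringify({ orderId }),
    });
    DBOS.logger.info(`Sent SQS message ${sendRes.MessageId}`); // MessageId assumed from the AWS SDK output
  }
}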
Sending to SQS FIFO queues is the same as with standard queues, except that FIFO queues need a MessageDeduplicationId (or content-based deduplication) and can be sharded by a MessageGroupId.
const sendRes = await sqsCfg.sendMessage(
  {
    MessageBody: "{/*app data goes here*/}",
    MessageDeduplicationId: "Message key goes here",
    MessageGroupId: "Message grouping key goes here",
  },
);
The DBOS SQS receiver provides the capability of running DBOS Transact workflows exactly once per SQS message, even on standard "at-least-once" SQS queues.
The package uses decorators to configure message receipt and identify the functions that will be invoked during message dispatch.
First, ensure that the method decorators are imported:
import { SQSMessageConsumer, SQSConfigure } from "@dbos-inc/dbos-sqs";
The @SQSConfigure decorator should be applied at the class level to identify the credentials used by receiver functions in the class:
interface SQSConfig {
  awscfgname?: string;
  awscfg?: AWSServiceConfig;
  queueUrl?: string;
  getWFKey?: (m: Message) => string; // Calculate workflow OAOO key for each message
  workflowQueueName?: string;
}
@SQSConfigure({awscfgname: 'sqs_receiver'})
class SQSEventProcessor {
  ...
}
Then, within the class, one or more methods should be decorated with @SQSMessageConsumer to handle SQS messages:
@SQSConfigure({awscfgname: 'sqs_receiver'})
class SQSEventProcessor {
  @SQSMessageConsumer({queueUrl: process.env['SQS_QUEUE_URL']})
  @DBOS.workflow()
  static async recvMessage(msg: Message) {
    // Workflow code goes here...
  }
}
NOTE: The DBOS @SQSMessageConsumer decorator should be applied to a method decorated with @DBOS.workflow. It is also possible to decorate an old-style DBOS workflow method, which is decorated with @Workflow and requires a first argument of type WorkflowContext.
By default, @SQSMessageConsumer workflows are started immediately after message receipt. If workflowQueueName is specified in the SQSConfig at either the method or class level, then the workflow will be enqueued in a workflow queue.
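A sketch combining these options, assuming workflow queues are declared via the DBOS SDK's WorkflowQueue and that the SQS MessageId is an acceptable deduplication key (the queue name and key choice here are illustrative):

import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk"; // assumed SDK imports
import { Message } from "@aws-sdk/client-sqs";
import { SQSConfigure, SQSMessageConsumer } from "@dbos-inc/dbos-sqs";

const sqsWorkflowQueue = new WorkflowQueue("sqs_consumer_queue"); // illustrative queue name

@SQSConfigure({awscfgname: 'sqs_receiver'})
class QueuedSQSEventProcessor {
  @SQSMessageConsumer({
    queueUrl: process.env['SQS_QUEUE_URL'],
    workflowQueueName: 'sqs_consumer_queue', // enqueue instead of starting immediately
    getWFKey: (m: Message) => m.MessageId ?? '', // OAOO key computed from each message
  })
  @DBOS.workflow()
  static async recvQueuedMessage(msg: Message) {
    // Workflow code goes here...
  }
}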
Typical application processing for standard SQS queues implements "at least once" processing of the message:
- Receive the message from the SQS queue
- If necessary, extend the visibility timeout of the message during the course of processing
- After all processing is complete, delete the message from the queue

If there are any failures, the message will remain in the queue and be redelivered to another consumer. (A conventional loop along these lines is sketched below.)
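A hedged sketch of that conventional loop, using the AWS SDK for JavaScript v3 directly (the handler and polling parameters are illustrative):

import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand, Message } from "@aws-sdk/client-sqs";

const client = new SQSClient({});

async function handle(msg: Message) {
  // Application processing goes here; long-running work may need visibility-timeout extensions.
}

async function pollOnce(queueUrl: string) {
  const resp = await client.send(new ReceiveMessageCommand({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 20, // long polling
  }));
  for (const msg of resp.Messages ?? []) {
    await handle(msg); // if this throws, the message stays visible and is redelivered
    await client.send(new DeleteMessageCommand({
      QueueUrl: queueUrl,
      ReceiptHandle: msg.ReceiptHandle,
    }));
  }
}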
The DBOS receiver proceeds differently:
- Receive the message from the SQS queue
- Start a workflow (using an OAOO key computed from the message)
- Quickly delete the message
This means that, instead of the SQS service redelivering the message in the case of a transient failure, it is up to DBOS to restart any interrupted workflows. Also, since DBOS workflows execute to completion exactly once, it is not necessary to use an SQS FIFO queue for exactly-once processing.
The sqs.test.ts file included in the source repository demonstrates sending and processing SQS messages. Before running, set the following environment variables:
- SQS_QUEUE_URL: SQS queue URL with access for sending and receiving messages
- AWS_REGION: AWS region to use
- AWS_ACCESS_KEY_ID: The access key with permission to use the SQS service
- AWS_SECRET_ACCESS_KEY: The secret access key corresponding to AWS_ACCESS_KEY_ID
- To start a DBOS app from a template, visit our quickstart.
- For DBOS Transact programming tutorials, check out our programming guide.
- To learn more about DBOS, take a look at our documentation or our source code.