How do you choose a decoupling service that suits your use case? In this article, we'll compare the AWS services that allow you to decouple sending and receiving data – Kinesis vs. SNS vs. SQS – and show examples in Python to help you pick the right one for your use case.
Decoupling offers a myriad of advantages, but choosing the right tool for the job can be challenging. AWS alone provides several services that allow us to decouple sending and receiving data. While on the surface those services seem to provide similar functionality, they are designed for different use cases, and each of them can be useful if applied properly to the problem at hand.

SQS

As one of the oldest services at AWS, SQS has a track record of providing an extremely simple and effective decoupling mechanism. The entire service is based on sending messages to a queue and allowing applications (e.g., ECS containers, Lambda functions) to poll for messages and process them. A message stays in the queue until some application picks it up, processes it, and deletes it when it's done.
The most important distinction between SQS and other decoupling services is that it's NOT a publish-subscribe service. SQS has no concept of producers, consumers, topics, or subscribers. All it does is provide a distributed queue that allows:

- producers to send messages to the queue,
- consumers to poll for those messages, process them, and delete them once processed.

SQS does not push messages to any applications. Instead, once a message is sent to SQS, an application must actively poll for messages to receive and process them. Also, it's not enough to pick up a message from the queue to make it disappear — the message stays in the queue until:

- the application that received it explicitly deletes it after processing, or
- the message retention period expires.
Example showing SQS usage in Python — image by the author
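A minimal sketch of this flow with boto3 could look as follows (the queue name and message body are placeholders used for illustration):

import boto3

sqs = boto3.client('sqs')
queue_url = sqs.create_queue(QueueName='demo-queue')['QueueUrl']

# Producer: send a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody='hello from the producer')

# Consumer: poll for messages, process them, and delete them explicitly
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in response.get('Messages', []):
    print(message['Body'])  # process the message
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])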
By default, SQS does not guarantee that messages will be processed in the same order in which they were sent. If ordering matters, you can choose a FIFO queue, which can be easily configured when creating the queue:

sqs.create_queue(
    QueueName=queue_name,  # FIFO queue names must end with '.fifo'
    Attributes={'VisibilityTimeout': '3600', 'FifoQueue': 'true'},
)
SNS

Even though SNS stands for Simple Notification Service, it provides much more functionality than just the ability to send push notifications (emails, SMS, and mobile push).
In fact, it's a serverless publish-subscribe messaging system that allows you to send events to multiple applications (subscribers) at the same time (fan-out), including SQS queues, Lambda functions, Kinesis Data Streams, and generic HTTP endpoints.
In order to use the service, we only need to:

- create an SNS topic,
- subscribe endpoints (e.g., SQS queues, Lambda functions, or HTTP endpoints) to that topic,
- publish messages to the topic.

How do you decide whether you need SQS or SNS? Anytime multiple services need to receive the same event, you should consider SNS rather than SQS. A message from an SQS queue can only be successfully processed by a single worker node or process.
Therefore, if you need a fan-out mechanism, you need to create an SNS topic and implement queues for all applications that need to receive the specific event or data. Multiple queues can then subscribe to this SNS topic and receive the messages simultaneously.

For instance, imagine a scenario as simple as publishing the same event (message) to both the development (staging) and production environments:
Using SNS to implement a fan-out mechanism that distinguishes between DEV and PROD resources — image by the author
Again, the example below demonstrates how to create an SNS topic, subscribe the DEV and PROD queues to it, and publish a message that reaches both environments at once.

Example showing SNS usage in Python — image by the author
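A sketch of that fan-out setup with boto3 might look like the following (the topic and queue names are placeholders; note that in practice each queue also needs an access policy that allows the topic to deliver messages to it):

import json
import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

# One topic, plus one queue per environment (names are placeholders)
topic_arn = sns.create_topic(Name='orders')['TopicArn']
dev_queue_url = sqs.create_queue(QueueName='orders-dev')['QueueUrl']
prod_queue_url = sqs.create_queue(QueueName='orders-prod')['QueueUrl']

# Subscribe each queue to the topic so both receive every published message
for queue_url in (dev_queue_url, prod_queue_url):
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=['QueueArn']
    )['Attributes']['QueueArn']
    sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

# Publish once: SNS fans the message out to both queues
sns.publish(TopicArn=topic_arn, Message=json.dumps({'event': 'new_order', 'id': 123}))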
Kinesis Data Streams

AWS provides an entire suite of services under the Kinesis family. When people say Kinesis, they typically refer to Kinesis Data Streams — a service that lets you process large amounts of streaming data in near real-time by leveraging producers and consumers operating on shards of data records.
To demonstrate how Kinesis Data Streams can be used, we will request the current cryptocurrency market prices (data producer) and ingest them into a Kinesis data stream.
To create a data stream in the AWS console, we need to provide a stream name and configure the number of shards:

Create a data stream — image by the author
Then, we can start sending live market prices into the stream. In the example below, we send a new record every 10 seconds.

Example showing Kinesis Data Streams usage in Python — image by the author
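A producer sketch along those lines could look as follows (the stream name is a placeholder and the price lookup is stubbed out; in the real script it would call a cryptocurrency market data API):

import json
import time
import boto3

kinesis = boto3.client('kinesis')
STREAM_NAME = 'crypto-prices'  # placeholder stream name

def get_market_prices():
    # Stub standing in for a real call to a crypto market data API
    return {'BTC': 43000.0, 'ETH': 3000.0, 'timestamp': time.time()}

while True:
    prices = get_market_prices()
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(prices).encode('utf-8'),
        PartitionKey='crypto',  # determines which shard receives the record
    )
    time.sleep(10)  # send a new record every 10 seconds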
The script will run indefinitely until we manually stop it.

To persist the ingested records, we can configure Kinesis Data Firehose to send the data to S3 directly in the AWS console. We need to select our previously created data stream as the source; for everything else, we can apply the defaults.
Create a delivery stream — image by the author
The most important part is to configure the destination — in our use case, we choose S3 and select a specific bucket:

Create a delivery stream — image by the author
We need to configure how frequently the micro-batches of data should be sent to S3:

Create a delivery stream — image by the author
We can then confirm the settings and create the delivery stream:

Create a delivery stream — image by the author
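The same setup could also be created programmatically; below is a rough sketch using boto3 (the stream, role, and bucket ARNs are placeholders, and the IAM role must already grant Firehose access to the stream and the bucket):

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='crypto-prices-to-s3',  # placeholder name
    DeliveryStreamType='KinesisStreamAsSource',
    KinesisStreamSourceConfiguration={
        'KinesisStreamARN': 'arn:aws:kinesis:us-east-1:123456789012:stream/crypto-prices',
        'RoleARN': 'arn:aws:iam::123456789012:role/firehose-delivery-role',
    },
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::123456789012:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::my-crypto-bucket',
        # Flush a micro-batch every 60 seconds or every 5 MB, whichever comes first
        'BufferingHints': {'IntervalInSeconds': 60, 'SizeInMBs': 5},
    },
)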
Shortly after the delivery stream's creation, we should be able to see new data arriving every minute in our S3 bucket:

Data from a delivery stream in S3 — image by the author
To see what this data looks like, we can download one file from S3 and inspect its content:

Data from the delivery stream — image by the author
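One way to do that with boto3 (the bucket name and object key below are placeholders; in practice you would copy the key of one of the objects Firehose created):

import boto3

s3 = boto3.client('s3')

# Placeholders: use your bucket and the key of an object written by Firehose
obj = s3.get_object(
    Bucket='my-crypto-bucket',
    Key='2021/06/01/12/some-firehose-object',
)
print(obj['Body'].read().decode('utf-8'))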
Finally, we can set up alerts that notify us about potential issues with the data stream, such as write throttles:

Alerts for Kinesis Data Streams — image by the author
This is how the alert could look if triggered:

Alerts for Kinesis Data Streams — image by the author
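If you prefer to define such an alert as code, one option (not necessarily the tool shown above) is a CloudWatch alarm on the stream's write-throttling metric, shown here as a sketch with placeholder names:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm whenever any writes to the stream get throttled (names/ARNs are placeholders)
cloudwatch.put_metric_alarm(
    AlarmName='crypto-prices-write-throttles',
    Namespace='AWS/Kinesis',
    MetricName='WriteProvisionedThroughputExceeded',
    Dimensions=[{'Name': 'StreamName', 'Value': 'crypto-prices'}],
    Statistic='Sum',
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:alerts'],  # e.g. notify an SNS topic
)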
SNS vs. SQS vs. Kinesis Data Streams — image by the author

Conclusion
In this article, we looked at the differences between SNS, SQS, and Kinesis Data Streams. We built a simple demo showing how to send data to Kinesis from data producers, how a delivery stream can consume the data, and how to monitor potential write throttles. For each service, we demonstrated how it can be used in Python and concluded with a table comparing the service characteristics.