The acceleration of your business's digital aspirations depends on fast, connected and reliable data flows. This need will be amplified in the post-COVID era, where the transition to digital customer channels, business models and remote working is expected to accelerate.
In the first of this two-part series, we explored the value proposition of a cloud-native approach to enabling integration and how it overcomes the cost, time and operational inertia of traditional approaches. In part two, we will demonstrate the adoption of three native AWS services – Amazon SQS, Amazon SNS and Amazon Kinesis Data Streams – to enable point-to-point messaging, publish-subscribe messaging and real-time streaming.
Cloud-Native Integration Solutions in Action
The original Enterprise Integration Patterns were published in 2003 by Gregor Hohpe and Bobby Woolf to help architects tackle complex integration challenges in the enterprise using proven, composable and reusable patterns.
Whilst the technology landscape has changed drastically since that time, evolving from point-to-point and pub-sub messaging to real-time event stream processing, the foundational patterns have proven to be durable.
As highlighted in part one of this series, AWS, as a platform-of-best-fit, allows business stakeholders, architects and builders to compose the right patterns with the right tools and to accelerate the delivery of their innovations and business value.
SCENARIO 1: POINT-TO-POINT, ASYNCHRONOUS MESSAGING
Testament to the timeless nature of the EAI patterns, the simple point-to-point messaging scenario of getting information from A to B continues to be relevant in enterprises large and small.
Business Use Cases

One-way data push from one source system to one destination system
Event notification from one microservice to another
Request-response data exchange between two applications or application components (SaaS, on-premise or microservice)

Enterprise Application Integration Patterns
A point-to-point solution typically involves:

A Point-to-Point Channel with ordered messaging
A Polling Consumer that makes calls to the channel when it is ready to consume a message
A Dead Letter Channel to collect messages that cannot be delivered

Figure 2: A point-to-point messaging integration with a dead letter channel handling messages that cannot be delivered.
Diagram courtesy of Enterprise Integration Patterns – Dead Letter Channel

AWS Implementation
A common architecture to implement this pattern in AWS involves:

Point-to-Point Channel implemented using Amazon SQS (Simple Queue Service), with the option of a FIFO queue if sequence preservation is needed
Polling Consumer (and dead letter queue handler) using AWS Lambda serverless functions
Dead Letter Channel using SQS Dead Letter Queues

Figure 3: A point-to-point messaging integration with a dead letter channel implemented with native AWS services

AWS alternatives to Amazon SQS include:

Amazon MQ, a managed message broker service for Apache ActiveMQ and RabbitMQ that is useful for customers who are already invested in that technology suite
Amazon AppFlow, a fully managed integration service for securely transferring data between Software-as-a-Service (SaaS) applications like Salesforce, Slack and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift

Key Business Benefits

Full parity with the enterprise integration pattern and applicability of existing team skills and knowledge
Eliminates the undifferentiated heavy lifting of managing legacy message brokers by taking advantage of the serverless architecture of SQS (no infrastructure or software to provision, configure, patch, scale or back up)
Amazon SQS is offered with an SLA of 99.9%
Pay-as-you-go pricing of $0.40 per million SQS requests (with a generous free tier of 1 million requests per month)
Security posture that includes data encryption at rest and in flight, PCI-DSS certification and HIPAA eligibility
Virtually unlimited throughput and number of messages, with single-digit millisecond latency
Fully automated and repeatable architecture using CloudFormation infrastructure as code

For more details see the Amazon Simple Queue Service Documentation.
SCENARIO 2: PUBLISH-SUBSCRIBE, ASYNCHRONOUS MESSAGING
Similar to a radio broadcast, the idea behind
publish-subscribe, commonly called pub-sub, is to send information once and have it received by multiple subscribers listening to a "topic". Unlike point-to-point messaging, where consumers poll for messages, this pattern pushes messages to interested receivers (subscribers) so that they may process and act upon the data immediately. This is the crux of an "event-driven" architecture.
Business Use Cases

One-way event notification from an event producer to multiple consumers
Publication of data from a source to multiple interested targets

Enterprise Application Integration Patterns
A publish-subscribe pattern typically involves:

A Publish-Subscribe Channel featuring at least one message producer and multiple consumers
An Event-Driven Consumer that automatically receives messages as they arrive on the channel

Figure 4: A publish-subscribe integration with push-enabled consumers. Diagram courtesy of Enterprise Integration Patterns – Publish-Subscribe Channel

AWS Implementation
A common architecture to implement this pattern in AWS involves:

Publish-Subscribe Channel using Amazon SNS (Simple Notification Service), with support for FIFO pub/sub messaging if sequence preservation is needed
Multiple Event-Driven Consumer endpoints, including AWS services (such as Lambda, SQS and Kinesis) and external services (such as HTTP endpoints, email and mobile push notifications)
Optional Dead Letter Channel using Amazon SQS (Simple Queue Service) for messages that fail delivery

Figure 5: An AWS-native publish-subscribe messaging architecture with multiple AWS and non-AWS receivers

Amazon EventBridge is a serverless event bus that can be used as an alternative to SNS for publish-subscribe scenarios.
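The mechanics of the publish-subscribe channel with a dead letter channel can be sketched in a few lines of plain Python. This is illustrative only: in the AWS architecture described above, Amazon SNS performs the fan-out to subscribers as a managed service and Amazon SQS holds undeliverable messages; the topic and subscriber names here are hypothetical.

```python
# Illustrative pub-sub sketch -- Amazon SNS provides this fan-out and
# dead-letter routing as a managed, serverless service.
class Topic:
    def __init__(self):
        self.subscribers = []   # event-driven consumers (callables)
        self.dead_letters = []  # messages that could not be delivered

    def subscribe(self, consumer):
        self.subscribers.append(consumer)

    def publish(self, message):
        # Push the message to every subscriber as it arrives;
        # route failed deliveries to the dead letter channel.
        for consumer in self.subscribers:
            try:
                consumer(message)
            except Exception:
                self.dead_letters.append(message)

# Hypothetical example: one publish reaches every subscriber.
orders = Topic()
received = []
orders.subscribe(lambda msg: received.append(("billing", msg)))
orders.subscribe(lambda msg: received.append(("shipping", msg)))

orders.publish({"orderId": "12345", "status": "CREATED"})
# Both the billing and shipping consumers now hold the order event.
```

The key design point the sketch illustrates is that the producer publishes once and remains unaware of who is listening; adding a consumer is a subscription change, not a producer change.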
There are three main points of differentiation between the two:

EventBridge provides out-of-the-box integration with SaaS applications such as Salesforce, Slack, New Relic, Datadog, Shopify and PagerDuty
EventBridge enables archival and replay of events
EventBridge has a schema registry feature and is capable of discovering the OpenAPI schema of events routed through the bus

Key Business Benefits

Serverless, fully managed messaging broker that mitigates the undifferentiated heavy lifting of managing infrastructure, software, capacity, backups and availability
Commitment-free, pay-as-you-go pricing of $0.50 per million SNS requests (with a free tier of 1 million requests per month) plus delivery charges based on endpoint type (such as $0.60 per million HTTP deliveries or $0.50 per million mobile push notifications)
High throughput and elastically scalable, supporting a virtually unlimited number of messages per second across up to 100,000 topics and 12.5 million subscriptions per topic
Conforms with the established enterprise integration pattern and enables transfer of existing skills
Data durability provided by redundant storage of messages across multiple, geographically distributed facilities, and integration with SQS for dead-letter channel capability
Security posture that includes data encryption at rest and in flight, private message flow (using VPC endpoints) and compliance with major regimes including HIPAA, FedRAMP, PCI-DSS, SOC and IRAP

For more details see the Amazon Simple Notification Service Documentation.
SCENARIO 3: LOW-LATENCY DATA STREAMING
Data and events generated in modern systems lose value over time. The earlier an insight can be derived to inform a decision or course of action, the sooner organisations can respond, intervene and act to preempt or correct an outcome.
Business Use Cases
Applications of data streaming technology are broad and varied.
They include:

Detection of fraudulent financial activities and transactions
Live status and telemetry readings from connected (IoT) devices and sensors for digital twin applications
Analysis of website clickstream (user behavior) data for personalization and recommendation
Processing of equipment and asset health indicators to enable preventative maintenance

Enterprise Application Integration Patterns
Data streaming can be seen as a modern evolution of the Message Bus pattern, specialised for high-volume, time-sensitive data delivery. It enables a customer to ingest, process and analyse large volumes of high-velocity data from a variety of sources, concurrently and in real time.
A data streaming solution is unique in that it combines messaging, storage and processing of events all in one place:

Data streams handle high concurrency by distributing messages into partitions, or shards, which are effectively lightweight Publish-Subscribe Channels that can be consumed by multiple consumers.
Data stream storage allows consumers to go back in time and "resume" or "replay" events they have missed using a single source of truth, which is useful for reconstructing state, temporal data analysis and event sourcing applications.
A Competing Consumers pattern allows consumers to read from partitions concurrently and independently whilst keeping an offset to track their progress.
Real-time processing of events as they are ingested accelerates the analysis of, and insights from, data as it arrives.

Figure 6: A Message Bus pattern forms the basis of a modern low-latency data streaming platform.
Diagram courtesy of Enterprise Integration Patterns – Message Bus

AWS Implementation
A common architecture to implement this pattern in AWS involves:

Amazon Kinesis Data Streams, which caters for data stream ingestion and distribution into multiple partitions called shards
Kinesis shards, which support both push and pull consumer semantics and enable message order preservation
Amazon Kinesis Data Analytics, which can be used for in-flight processing of the data payload

Figure 7: Real-time event streaming using Amazon Kinesis, a native AWS stream ingestion and storage service

AWS alternatives to Amazon Kinesis Data Streams include:

Amazon MSK (Managed Streaming for Apache Kafka), a service for customers who are invested in the Apache Kafka ecosystem and tools and wish to continue using them, but want to offload the undifferentiated heavy lifting of managing Kafka clusters. MSK makes it easier for customers to build and run production applications on Apache Kafka without needing infrastructure and cluster management expertise
Kinesis Data Firehose, an adjacent capability that can supplement or replace Kinesis Data Streams where there is some tolerance for latency (Firehose buffers messages based on a configurable buffer size or interval, with a minimum of 1MB or 60 seconds) or where data needs to be streamed to select targets such as S3, Redshift, Splunk or Elasticsearch, and the native integration between Firehose and these services is a value-add accelerant

Business Benefits

Managed service with high availability, strong data durability and simple administration.
The undifferentiated heavy lifting of provisioning infrastructure, installing software, performing backups and managing availability is avoided
Simple, elastically resizable capacity with zero downtime
Pay-as-you-go pricing of $0.014 per million events (payload size conditions apply) plus $0.015 per hour of data ingestion (at a rate of 1MB/second)
Amazon Kinesis offers an SLA of 99.9%
Extensive integration options (APIs, SDKs, client libraries and agents) as well as deep, native integration with a large number of AWS services
In-order data persistence (up to 365 days), allowing message replay and event sourcing architectures
Fully automated and repeatable architecture using CloudFormation infrastructure as code
Verified compliance with SOC, PCI, FedRAMP and HIPAA regimes

For more details see the Amazon Kinesis Documentation.
Conclusion
Integration is diverse, complex and business-critical. Given the wide spectrum of potential use cases, from simple intra-organizational file transfers to real-time telemetry ingestion and processing from IoT sensors, a feature-rich integration capability is a business imperative. Traditional platforms-of-all-fit cannot evolve and refresh fast enough to keep pace with the rate of technology innovation. This is why AWS, with its breadth and depth of functionality and a rate of innovation second to none, is the platform-of-best-fit for connected businesses of this generation.