
Simulating Synchronous Operations with Asynchronous Code in Distributed Systems


Ensuring real-time status updates for end users in web applications can be challenging, particularly when working with Databricks, which lacks native support for synchronous updates. This means that changes made in Databricks may not be immediately reflected to end users, impacting the real-time nature of status updates. In this technical blog post, we will explore the limitations of Databricks regarding synchronous updates, introduce the pattern of “Simulating Synchronous Operations with Asynchronous Code,” and compare it with the widely adopted event-driven architecture.

Before evaluating either of these alternatives, make sure you understand the business reasons behind the true upper bound of acceptable latency. The performance characteristics of a distributed system such as Databricks will differ from those of a traditional relational database depending on the data characteristics and access patterns, but the business impact may not justify doing anything at all.

By understanding these alternatives, developers can make informed decisions to achieve real-time status updates while considering factors such as response time, cost, and technical complexity. While the acceptable amount of latency depends on the specific business requirements and use case, in general, minimizing latency is crucial for business web applications to provide a responsive and engaging user experience. Keep in mind the importance of understanding the true financial impact of average response time as well as the potential frequency of excessive delays. There is a big difference between engineering away a significant delay that demonstrably occurs multiple times a day and chasing an occasional hiccup. Assuming there is a strong business case for providing consistently near-real-time responses, there are different options available. We will compare and contrast two: event-driven architecture and a code-based approach.

Event-Driven Architecture


Event-driven architecture is a popular approach that facilitates seamless communication and responsiveness in distributed systems. In this architecture, events are produced and consumed by different components, enabling real-time data flow and immediate reaction to changes.

Key Components of Event-Driven Architecture:

  1. Event Producers: These components generate events when changes occur, such as updates to the status field in Databricks. The events are then published to a message broker or event bus.
  2. Message Broker/Event Bus: This intermediary component receives the events and dispatches them to interested event consumers. Popular options include Apache Kafka, Amazon Kinesis, or cloud-native services like AWS EventBridge.
  3. Event Consumers: These components subscribe to specific event types and react accordingly. For example, a consumer can listen for status update events and update the corresponding records in PostgreSQL, ensuring real-time updates are propagated.
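The three roles above can be sketched in a few lines of Node.js. This is a minimal illustration, not production code: an in-memory bus stands in for Kafka/Kinesis/EventBridge, a `Map` stands in for the PostgreSQL status column, and the `status.updated` topic name and event payload shape are assumptions made for the example.

```javascript
// Minimal in-memory event bus standing in for a real broker
// (Kafka, Kinesis, EventBridge). Delivery here is synchronous and
// in-process, unlike a real broker.
class EventBus {
  constructor() {
    this.subscribers = new Map(); // topic -> [handler, ...]
  }
  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }
  publish(topic, event) {
    (this.subscribers.get(topic) || []).forEach((handler) => handler(event));
  }
}

const bus = new EventBus();
const statusTable = new Map(); // stand-in for the PostgreSQL status column

// Event consumer: reacts to status-update events and writes them through.
bus.subscribe('status.updated', (event) => {
  statusTable.set(event.uuid, event.status);
});

// Event producer: a Databricks job would publish an event like this
// when a status field changes.
bus.publish('status.updated', { uuid: 'rec-123', status: 'COMPLETE' });

console.log(statusTable.get('rec-123')); // COMPLETE
```

The point of the indirection is that the producer never touches PostgreSQL directly; swapping the toy bus for a managed broker changes the transport, not the roles.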

Event-Driven Architecture Implementation:

Implementing the event-driven architecture approach in AWS involves utilizing various AWS services. Here’s a high-level overview of how you can leverage AWS services to achieve the desired functionality:

  1. AWS Lambda: Use AWS Lambda as the compute service to process the events and perform the necessary updates in Databricks or other AWS services.
  2. Amazon API Gateway: Configure Amazon API Gateway as the entry point for your web UI. API Gateway can handle incoming requests from the web UI and trigger the Lambda function for further processing.
  3. Amazon Kinesis Data Streams or Amazon Simple Queue Service (SQS): Utilize Kinesis Data Streams or SQS to capture and store the events from the web UI. These services act as the message queues that decouple the web UI from the processing system.
  4. AWS Lambda Trigger: Set up the event source mapping or trigger between the Kinesis Data Streams or SQS and the Lambda function. This ensures that the Lambda function is invoked whenever new events are available in the queue or stream.
  5. AWS Glue or AWS DMS: If you need to synchronize data between PostgreSQL and Databricks, you can use AWS Glue or AWS Database Migration Service (DMS) to handle the data replication and synchronization tasks. These services can help keep the data consistent between the two systems.
  6. AWS Database Services: Leverage AWS database services like Amazon RDS (for PostgreSQL) or Amazon Redshift (for analytics) as the backend data storage systems. These services can handle the synchronous operations and provide the necessary data to the web UI.

By combining these AWS services, you can establish the event-driven architecture flow in AWS. The web UI interacts with API Gateway, which triggers the Lambda function to process the events. The Lambda function can then update the data in Databricks or synchronize it with PostgreSQL using AWS Glue or DMS. The data from AWS database services can be fetched and displayed in the web UI.

Simulating Synchronous Operations with Asynchronous Code

To address the challenge of real-time status updates in Databricks, the “Simulating Synchronous Operations with Asynchronous Code” pattern can be employed. This pattern involves leveraging technologies like Redis, Node.js, and Scala to achieve near-real-time synchronization of updates. Assuming each record has a unique ID (UUID) and we are updating a rapidly changing dimension like status, the pattern could be implemented as follows:

Components of the Pattern:

  1. Redis: A memory-resident database, Redis serves as an intermediary storage for status updates. When Databricks modifies the status, it writes the updated value to Redis, associating it with a UUID.
  2. Node.js: The Node.js application, running in the web server, captures status updates asynchronously. When a change occurs, Node.js writes the new status to Redis using the corresponding UUID.
  3. Scala: A backend service implemented in Scala periodically checks Redis for new status updates. It retrieves the updated statuses using the UUIDs and synchronizes them with the primary data store, such as PostgreSQL, ensuring real-time updates.
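The write and poll halves of this pattern can be sketched as follows. To keep the example self-contained, an in-memory `Map` stands in for both Redis and PostgreSQL, and the polling service (Scala in the pattern above) is sketched in JavaScript; with a real Redis client, the staged writes would be `SET pending:<uuid> <status>` and the drain would use `SCAN`. The `pending:` key prefix is an assumption made for the example.

```javascript
// In-memory stand-ins so the sketch runs anywhere; a real implementation
// would use a Redis client and a PostgreSQL client instead.
const redis = new Map();
const postgres = new Map();

// Write side: capture a status change and stage it in Redis under its UUID.
function writeStatus(uuid, status) {
  redis.set(`pending:${uuid}`, status);
}

// Poll side: periodically drain staged updates and synchronize them with
// the primary data store, removing each staged entry once it is applied.
function syncPendingUpdates() {
  let synced = 0;
  for (const [key, status] of redis) {
    if (!key.startsWith('pending:')) continue;
    const uuid = key.slice('pending:'.length);
    postgres.set(uuid, status); // upsert into PostgreSQL
    redis.delete(key);          // clear the staged update
    synced += 1;
  }
  return synced;
}

writeStatus('7f9c-uuid', 'RUNNING');
console.log(syncPendingUpdates());      // 1
console.log(postgres.get('7f9c-uuid')); // RUNNING
```

The UI reads the freshest value from Redis immediately, while the poller guarantees the primary store converges shortly after, which is what makes the asynchronous flow feel synchronous to the end user.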

Comparisons and Recommendations

Now let’s compare the two approaches based on important considerations:

  1. Response Time: Both approaches aim to achieve near-real-time status updates. However, “Simulating Synchronous Operations with Asynchronous Code” might offer faster response times as updates are directly written to Redis, whereas event-driven architecture introduces some overhead due to message passing.
  2. Cost: Event-driven architecture typically involves the use of managed services like message brokers, which may incur additional costs. On the other hand, “Simulating Synchronous Operations with Asynchronous Code” relies on self-managed components like Redis, potentially offering cost savings.
  3. Technical Complexity: Event-driven architecture requires understanding and configuration of message brokers, event schemas, and event handling. Implementing the “Simulating Synchronous Operations with Asynchronous Code” pattern involves building custom components in Node.js and Scala, requiring familiarity with these technologies.

My $0.02 comes down to practical implementation considerations. If your enterprise already has a robust event-driven architecture in place, it makes sense to use it. It's well documented on the internet and, if it's in wide use internally, it is hopefully well supported and maintained. If not, a good memory-resident database is almost always a good tool to have and is fairly simple to manage.


David Callaghan, Solutions Architect

As a solutions architect with Perficient, I bring twenty years of development experience and I'm currently hands-on with Hadoop/Spark, blockchain and cloud, coding in Java, Scala and Go. I'm certified in and work extensively with Hadoop, Cassandra, Spark, AWS, MongoDB and Pentaho. Most recently, I've been bringing integrated blockchain (particularly Hyperledger and Ethereum) and big data solutions to the cloud with an emphasis on integrating modern data products such as HBase, Cassandra and Neo4j as the off-blockchain repository.
