Reducing the Expense of Cloud-based Kafka Deployments

Cloud computing has changed how we operate and think about software: we can elastically scale our applications on demand and have quick access to a seemingly infinite pool of computational resources at any time. Because we are only charged for the resources we actually use, we no longer need to purchase physical machines and pay for them up front. Although usage-based pricing sounds great at first, it complicates cost planning and can lead to unexpectedly high cloud bills. This has been documented in numerous blog posts by authors who woke up to find invoices several orders of magnitude higher than they had anticipated.

This talk examines the primary cost components of Kafka workloads in the cloud: compute, network, and storage. We identify their share of the total cost and walk through various ways to reduce their footprint, such as scaling streaming applications “to zero” when no events are arriving to save compute resources, or using Kafka’s follower fetching to reduce cross-AZ traffic (see the sketch below). We focus on setups where Kafka runs on cloud platforms alongside related workloads such as Flink applications or Kafka Streams.
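To illustrate the follower-fetching idea, here is a minimal sketch of the consumer side in Java. It assumes the brokers are already configured with broker.rack set to their availability zone and replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector; the bootstrap address, group id, topic name, and AZ id below are placeholders, not values from the talk.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FollowerFetchingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address and group id.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cost-aware-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Tell the brokers which availability zone this consumer runs in.
        // With rack-aware replica selection enabled on the brokers, fetches are
        // served by a replica in the same zone instead of a remote leader,
        // avoiding cross-AZ data transfer charges for reads.
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "euc1-az1"); // example AZ id

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // placeholder topic
            // ... poll and process records as usual ...
        }
    }
}

The only client-side change compared to a regular consumer is the client.rack setting; whether the savings materialize depends on the brokers having in-sync replicas in the consumer’s zone.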

This talk by Stefan Sprenger aims to give developers the tools they need to manage the associated costs while taking advantage of the benefits of cloud computing in the context of Kafka workloads. Stefan Sprenger is co-founder and CEO of DataCater GmbH, the company behind the real-time ETL platform built on Apache Kafka.
