Azure Event Hubs is a highly scalable publish-subscribe PaaS service that can ingest millions of events per second with low latency and stream them into other applications. We can consider an event hub the starting point in an event-processing pipeline; it often represents the “front door” for that pipeline. Event Hubs provides a unified streaming platform with a time-retention buffer, decoupling event producers from event consumers.
With Azure Event Hubs, you can ingest, buffer, store, and process your stream in real time to get actionable insights. Event Hubs uses partitions to let consumers process events independently. You can also capture your data in near real time into Azure Blob Storage or Azure Data Lake Storage for long-term retention or batch processing.

The following are some of the scenarios where you can use Event Hubs:
- Anomaly detection (fraud/outliers)
- Application logging
- Analytics pipelines, such as clickstreams
- Live dashboarding
- Archiving data
- Transaction processing
- User telemetry processing
- Device telemetry streaming

Event Hub Components
- Event producers
- Partitions
- Consumer groups
- Event receivers
- Throughput units or processing units
Event Producers
A producer is any entity that sends data to an event hub. Events are published via AMQP 1.0 or HTTPS.
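To make the producer-to-partition relationship concrete, here is a minimal in-memory sketch. The names (`assign_partition`, `publish`) are hypothetical and the routing happens client-side here only for illustration; the real Event Hubs service assigns partitions itself. The key idea it shows is that events sharing a partition key are hashed to the same partition, which preserves their relative order.

```python
import hashlib

NUM_PARTITIONS = 4  # every event hub has at least two partitions

# In-memory stand-ins for the partition buffers of one event hub.
partitions = {i: [] for i in range(NUM_PARTITIONS)}

def assign_partition(partition_key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a partition key to a partition index via a stable hash."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

def publish(partition_key: str, body: str) -> int:
    """Hypothetical producer: route the event by key and buffer it."""
    p = assign_partition(partition_key)
    partitions[p].append(body)
    return p

# Events from the same device always land in the same partition, in order.
first = publish("device-42", "temp=21.5")
second = publish("device-42", "temp=21.7")
assert first == second
```

Because the hash is stable, a producer that always uses the same partition key (a device ID, a user ID) gets per-key ordering without coordinating with other producers.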
Partitions
As we can see in the image above, an event hub contains multiple partitions. Event Hubs receives data and divides it into partitions, which are buffers the data is saved into. Because of these buffers, an event is not missed just because a subscriber is busy or even offline; the subscriber can always read the events from the buffer later. By default, events stay in the buffer for 24 hours before they automatically expire, but retention can be configured for up to 7 days.
These buffers are called partitions because the data is divided amongst them. Every event hub has at least two partitions, and each partition has a separate set of subscribers. Partitions also give each consuming application a separate view of the event stream, enabling those consumers to act independently.
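The retention behavior described above can be sketched in a few lines. This is a local simulation with made-up names, not the service itself: each event carries an enqueue timestamp, and a prune step drops anything older than the retention window. Note that expiry is time-based, not acknowledgement-based, so a consumer that waits too long simply misses the event.

```python
import time

RETENTION_SECONDS = 24 * 60 * 60  # the default 24-hour window described above

buffer = []  # (enqueued_at, event_body) pairs for one partition

def enqueue(body, now=None):
    """Append an event with its enqueue time."""
    buffer.append((now if now is not None else time.time(), body))

def prune(now=None):
    """Drop events older than the retention window, read or not."""
    cutoff = (now if now is not None else time.time()) - RETENTION_SECONDS
    buffer[:] = [(t, b) for t, b in buffer if t >= cutoff]

# An event enqueued 25 hours ago expires; one from 5 minutes ago survives.
enqueue("old", now=time.time() - 25 * 3600)
enqueue("fresh", now=time.time() - 300)
prune()
assert [b for _, b in buffer] == ["fresh"]
```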

Consumer Groups
The publish/subscribe mechanism of Event Hubs is enabled through consumer groups. A consumer group is a view (state or position) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets.
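The “separate view, own pace, own offsets” idea can be shown with a small simulation (the function and group names here are illustrative, not an Azure API): each consumer group keeps its own offset per partition, so reading in one group never advances another group's position.

```python
stream = ["e0", "e1", "e2", "e3"]  # one partition's event sequence

# Per-consumer-group read positions into the same stream.
offsets = {"$Default": 0, "dashboard": 0}

def read(group: str, max_events: int):
    """Return the next events for a group, advancing only that group's offset."""
    start = offsets[group]
    events = stream[start:start + max_events]
    offsets[group] = start + len(events)
    return events

assert read("$Default", 3) == ["e0", "e1", "e2"]
# The dashboard group is unaffected and still starts at the beginning.
assert read("dashboard", 2) == ["e0", "e1"]
assert offsets == {"$Default": 3, "dashboard": 2}
```

This is why a live dashboard and an archiver can consume the same event hub without interfering with each other: each runs in its own consumer group.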
Event receivers
An event receiver is any entity that reads event data from an event hub. All Event Hubs consumers connect via an AMQP 1.0 session, and the Event Hubs service delivers events through the session as they become available. Kafka consumers connect via the Kafka protocol (version 1.0 and later).
Throughput units or processing units
Pre-purchased units of capacity that control the throughput capacity of an event hub (throughput units in the standard tier, processing units in the premium tier).
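As a back-of-the-envelope sizing sketch, assuming the published standard-tier limits of roughly 1 MB/s (or 1,000 events/s) of ingress and 2 MB/s of egress per throughput unit, you can estimate how many units a workload needs by sizing to the largest requirement. The function name is hypothetical; verify current limits against the official quotas before relying on them.

```python
import math

# Assumed standard-tier limits per throughput unit (TU):
#   ingress: 1 MB/s or 1,000 events/s (whichever is hit first)
#   egress:  2 MB/s
def throughput_units(ingress_mb_s: float, ingress_events_s: float, egress_mb_s: float) -> int:
    """Estimate the TUs needed to cover the largest of the three demands."""
    return max(
        math.ceil(ingress_mb_s / 1.0),
        math.ceil(ingress_events_s / 1000.0),
        math.ceil(egress_mb_s / 2.0),
        1,  # at least one TU
    )

# 3 MB/s in, 2,500 events/s, 8 MB/s out: egress dominates at 4 TUs.
assert throughput_units(3, 2500, 8) == 4
```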
Checkpoint
Checkpointing is a process by which readers mark or commit their position within a partition event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group. This responsibility means that for each consumer group, each partition reader must keep track of its current position in the event stream, and can inform the service when it considers the data stream complete.
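A minimal sketch of per-partition checkpointing follows. The names are illustrative (this is not the Azure SDK's checkpoint store API): the reader commits its offset after processing each event, and a restarted reader resumes from the committed offset instead of reprocessing the stream from the beginning.

```python
# (consumer_group, partition_id) -> committed offset, standing in for a
# durable checkpoint store such as a blob container.
checkpoint_store = {}

processed = []

def handle(event):
    """Stand-in for the application's event handler."""
    processed.append(event)

def process_partition(group, partition_id, events):
    """Read a partition from its last checkpoint and commit as we go."""
    key = (group, partition_id)
    start = checkpoint_store.get(key, 0)  # no checkpoint: start at offset 0
    for offset in range(start, len(events)):
        handle(events[offset])
        checkpoint_store[key] = offset + 1  # commit position after each event

events = ["a", "b", "c", "d"]
process_partition("$Default", "0", events[:2])  # reader stops after two events
process_partition("$Default", "0", events)     # restarted reader resumes at offset 2
assert processed == ["a", "b", "c", "d"]       # no events reprocessed or skipped
```

Committing after every event minimizes reprocessing on failure at the cost of more writes to the store; real readers often checkpoint in batches as a trade-off.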
Advantages
One big advantage of Event Hubs is that it provides a Kafka endpoint that our existing Kafka applications can use with a small configuration change. The big difference between Kafka and Event Hubs is that Event Hubs is a pure cloud service: you don’t need to set up, configure, and manage your own Kafka and ZooKeeper clusters, or use some Kafka-as-a-Service offering not native to Azure.
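That configuration change boils down to pointing the Kafka client at the Event Hubs namespace and authenticating with the connection string. Roughly, a Kafka client's properties look like the following sketch; the angle-bracket placeholders must be replaced with your own namespace and connection string:

```properties
bootstrap.servers=<namespace>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="<event-hubs-connection-string>";
```

The literal username `$ConnectionString` tells Event Hubs to treat the password field as the full connection string; the producer and consumer code itself stays unchanged.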
It can be used for logging and telemetry, and an added benefit is that it integrates with Azure's serverless real-time analytics service, Stream Analytics, and the business analytics service, Power BI. We will look at practical use cases and demos for implementing Event Hubs in future posts.