Fabric Performance: Deep Dive into Smoothing

From the previous article we learned that Bursting gives Fabric short-term acceleration. In this article we will look at Smoothing, which prevents that acceleration from turning into chaos. In practical workloads – streaming ingestion, ad hoc queries, mixed pipelines – demand is always uneven. You already know that bursts come and go. Smoothing is what stops those spikes from immediately consuming all of your compute or triggering throttling the moment they occur. You can think of it as a buffer: Fabric lets you go above your allocated compute for short periods and then spreads that extra usage across a longer window, so that the system stays stable for every user on that capacity.

What Smoothing actually does

Fabric tracks usage in 30-second timepoints and applies Smoothing to avoid charging all burst consumption at once. Interactive workloads (queries, notebooks, UI-driven operations) typically get their extra consumption spread across a few minutes to an hour. Background workloads (scheduled ETL pipelines, refreshes, batch processing) get their compute spread across a 24-hour window.

The logic is simple: even when you temporarily exceed your capacity through bursting, Smoothing averages out the cost. You get speed immediately, but repayment happens gradually.
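To make the accounting concrete, here is a deliberately simplified model of that spreading. The real algorithm is internal to Fabric; this sketch only assumes what the text above states – usage is tracked in 30-second timepoints, and a burst's capacity-unit (CU) cost is divided evenly across a smoothing window. The function name and CU figures are illustrative.

```python
# Simplified, illustrative model of Smoothing (not Fabric's real algorithm):
# a burst's total CU cost is spread evenly across a smoothing window of
# 30-second timepoints instead of being charged all at once.

TIMEPOINT_SECONDS = 30

def smooth(bursts, window_seconds, horizon_timepoints):
    """Spread each burst's CU cost evenly over `window_seconds`.

    bursts: list of (start_timepoint, total_cu) tuples.
    Returns the smoothed CU charged at each timepoint.
    """
    window_tp = window_seconds // TIMEPOINT_SECONDS
    charged = [0.0] * horizon_timepoints
    for start, total_cu in bursts:
        per_tp = total_cu / window_tp  # even repayment per timepoint
        for tp in range(start, min(start + window_tp, horizon_timepoints)):
            charged[tp] += per_tp
    return charged

# A 600 CU spike at timepoint 0, smoothed over 5 minutes (10 timepoints):
usage = smooth([(0, 600.0)], window_seconds=300, horizon_timepoints=20)
print(usage[0])   # 60.0 CU per timepoint instead of 600 at once
print(usage[10])  # 0.0 - the cost is fully accounted for after the window
```

The total charged is unchanged (the sum is still 600 CU); only the shape of the charge differs. That is the whole trick: speed now, accounting later.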

This gives you a better experience with fewer surprises. Capacity does not flip from green to red just because one notebook ran 20% heavier than usual. Dashboards do not stall just because a batch job kicked off at the same time. Heavy ingestion will not immediately starve interactive reports.

What Smoothing does not do

  • Smoothing does not make compute cheaper; it just spreads the accounting. If you routinely burst, you will eventually pay for it in throttling or cost.
  • It does not improve raw performance. Bursting does that; Smoothing only manages the aftermath.
  • Too many bursts in a short window can pile up “Smoothing debt”, eventually leading to throttling even if everything looked fine earlier.
  • Smoothing cannot compensate for bad scheduling. Running multiple heavy workloads at the same time will still create pressure.
  • In trial or smaller capacities, the impact of Smoothing is narrower. You cannot rely on it to hide poorly planned concurrency.

In short, Smoothing is a helpful mechanism, not a substitute for capacity planning.

How to use Smoothing effectively

  1. Separate interactive and batch workloads intentionally:
    This lets background jobs benefit from 24-hour Smoothing without disrupting real-time dashboards or user-driven queries.
  2. Watch your capacity metrics, not just job durations:
    The Fabric admin metrics page will show how Smoothing is spreading the load. If smoothed usage keeps rising, you are heading for throttling even if everything “looks fast” right now.
  3. Avoid stacking heavy workloads unnecessarily:
    If two Spark jobs and a warehouse load can run at different times, schedule them that way. Smoothing can absorb mistakes, but not recurring patterns.
  4. Use Smoothing to absorb predictable spikes:
    For example, sentiment analysis ingestion that spikes with tweet volume is a good use case; predictable batch transforms or maintenance scripts are not – those should simply be scheduled better.
  5. Be cautious during demos or time sensitive workloads:
    If you are carrying smoothed debt from overnight ETL, your F64 or F128 might feel sluggish during a live session. Clearing the deck before a demo is a simple way to avoid embarrassment.
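Point 2 above is worth automating. The Fabric Capacity Metrics app does not expose an official alerting API for this, so the helper below is purely a sketch: it assumes you have already exported a series of smoothed-utilization percentages and simply flags the dangerous pattern – utilization that is both climbing and close to the limit.

```python
# Hypothetical helper (not an official Fabric API): flag a rising
# smoothed-utilization trend from readings you have sampled out of the
# Fabric Capacity Metrics app, e.g. percent smoothed utilization per hour.

def rising_trend(samples, threshold_pct=80.0):
    """True if utilization is monotonically climbing AND near the limit."""
    if len(samples) < 2:
        return False
    climbing = all(b >= a for a, b in zip(samples, samples[1:]))
    return climbing and samples[-1] >= threshold_pct

print(rising_trend([55, 62, 70, 84]))  # True: steadily climbing past 80%
print(rising_trend([90, 70, 60, 50]))  # False: high, but draining down
```

The second case is the important one: a single high reading that is falling means Smoothing is doing its job, while a lower reading that keeps climbing means you are quietly heading for throttling.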

Coming back to our Twitter sentiment demo from this Fabric performance series – streaming data into a Lakehouse, transforming it through Notebooks and visualising it live in a Power BI dashboard – Smoothing gives you the breathing room you need. Spikes in ingestion will not immediately starve the report refresh. Notebook runs will not be throttled because a query burst happened 10 minutes earlier. It keeps everything usable without constant micromanagement.

But the flip side is also true. Fabric will let you overshoot today but will quietly take it back over the next few hours. If you keep overshooting on a daily basis, Smoothing will stop helping and start blocking.
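The difference between an occasional overshoot and a daily one is just arithmetic, and a toy model makes it visible. All the numbers below are hypothetical – Fabric's actual budgets and throttling thresholds depend on your SKU and its policies – but the shape of the curve is the point: debt that is repaid within the window is harmless, while a recurring daily overshoot compounds until something blocks.

```python
# Illustrative sketch with hypothetical numbers (not Fabric's real policy):
# if daily usage exceeds what the smoothing window can repay, the unpaid
# "smoothing debt" accumulates day over day until throttling kicks in.

CAPACITY_CU_PER_DAY = 1000.0   # assumed daily CU budget for the capacity
THROTTLE_DEBT = 500.0          # assumed debt level that triggers throttling

def days_until_throttle(daily_usage_cu, max_days=30):
    debt = 0.0
    for day in range(1, max_days + 1):
        # Each day repays up to the daily budget; any excess carries over.
        debt = max(0.0, debt + daily_usage_cu - CAPACITY_CU_PER_DAY)
        if debt >= THROTTLE_DEBT:
            return day
    return None  # never throttled within the horizon

print(days_until_throttle(950.0))   # None: under budget, debt never builds
print(days_until_throttle(1100.0))  # 5: overshooting 100 CU/day blocks in 5 days
```

A capacity running at 95% never accumulates debt; one running at 110% hits the wall in under a week, even though every individual day "worked fine".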

To Conclude

Smoothing is not glamorous, but it is the mechanism that makes Fabric’s shared capacity model workable. Use it deliberately, monitor it regularly, and do not depend on it to fix structural issues in your workload design. When used well, it gives you stability without slowing you down, which is exactly what you need in real-time data architectures.