Tradeoffs in BFT: Latency vs. Robustness
Modern Byzantine Fault Tolerant (BFT) consensus protocols typically operate under the partial synchrony model, which assumes that after some unspecified point in time the network becomes stable and message delays remain bounded. While this model has proven practical for protocol design, real-world deployments rarely enjoy long periods of uninterrupted stability. Instead, systems experience periods of synchrony punctuated by short disruptions such as latency spikes, node outages, or adversarial conditions. These transient disruptions are referred to as “blips”. Under such conditions, existing consensus protocols are forced to choose between low latency in stable network conditions and robustness in the presence of faults.
- Traditional view-based BFT protocols, such as PBFT and HotStuff, are optimized for responsiveness during good intervals when the network is stable. However, their performance degrades when a blip occurs. This degradation, known as a hangover, can persist even after the network has recovered, as backlogged requests accumulate and delay subsequent transactions (see the sketch after this list).
- DAG-based BFT protocols, such as Narwhal & Tusk/Bullshark, decouple data dissemination (the DAG) from consensus (BFT) and propagate transactions asynchronously across replicas. This design enables high throughput and allows the system to continue making progress during network disruptions. However, these protocols tend to incur high latency even during good intervals due to the complexity of their asynchronous ordering mechanisms.
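To make the hangover concrete, here is a back-of-the-envelope sketch in Python. The arrival rate, commit rate, and blip duration are illustrative assumptions, not measurements from the paper; the point is only that a stalled view-based protocol accumulates a backlog that drains slowly after recovery.

```python
# Illustrative-only model of a blip and its hangover (all numbers assumed):
# clients keep submitting requests while consensus is stalled, and the
# resulting backlog drains only at the protocol's spare capacity afterwards.

ARRIVAL_RATE = 1_000   # requests/s submitted by clients (assumed)
COMMIT_RATE = 1_200    # requests/s committed when the network is stable (assumed)
BLIP_SECONDS = 10      # duration of the disruption (assumed)

# During the blip, a view-based protocol commits nothing, so a backlog builds.
backlog = ARRIVAL_RATE * BLIP_SECONDS

# After recovery, the backlog drains only at the spare capacity of the protocol.
spare_capacity = COMMIT_RATE - ARRIVAL_RATE
hangover_seconds = backlog / spare_capacity

print(f"backlog after blip: {backlog} requests")
print(f"hangover lasts ~{hangover_seconds:.0f}s after the network recovers")
# At these rates, a 10s blip causes a ~50s hangover: the disruption to
# clients lasts far longer than the blip itself.
```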
Autobahn Architecture Overview
Autobahn is architected around a clear separation of responsibilities between its two core layers: a data dissemination layer and a consensus layer. This decoupling is inspired by the design of DAG-based systems like Narwhal, but Autobahn enhances this structure to support seamlessness and lower latency. The data dissemination layer is responsible for broadcasting client transactions in a scalable, asynchronous manner. It allows each replica to maintain its own lane of transaction batches, which can be propagated and certified independently of the consensus state. These lanes grow continuously, even when the consensus process stalls, ensuring that the system remains responsive to clients at all times. On top of this, Autobahn runs a partially synchronous consensus layer based on a PBFT-style protocol. However, instead of reaching agreement on individual batches of transactions, consensus is reached on “tip cuts,” which are compact summaries of the latest state of all data lanes. This design allows Autobahn to commit arbitrarily large amounts of data in a single step, minimizing the impact of blips. Compared to HotStuff, which tightly couples data and consensus and stalls when a leader fails, and Bullshark, which incurs high commit latencies due to DAG traversal and data synchronization, Autobahn provides a smoother and faster consensus experience. It inherits the parallelism of DAGs while avoiding their latency pitfalls.
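The following minimal Python sketch illustrates this two-layer split. The names (Lane, TipCut, propose_tip_cut, commit) are hypothetical and chosen for exposition; they are not Autobahn's actual API. It shows the key property described above: lanes keep growing independently of consensus, and committing one tip cut releases all data up to each lane's referenced position in a single step.

```python
# Sketch (assumed names, not Autobahn's code) of lanes and tip cuts:
# each replica appends batches to its own lane; consensus orders compact
# tip cuts rather than individual batches.

from dataclasses import dataclass, field

@dataclass
class Lane:
    """Per-replica sequence of transaction batches; grows even if consensus stalls."""
    batches: list = field(default_factory=list)

    def append(self, batch):
        self.batches.append(batch)

    def tip(self) -> int:
        """Position of the latest batch in this lane."""
        return len(self.batches)

@dataclass
class TipCut:
    """Compact summary agreed on by consensus: one tip position per lane."""
    tips: dict  # replica id -> lane position

def propose_tip_cut(lanes: dict) -> TipCut:
    # The consensus leader summarizes the current state of all lanes.
    return TipCut(tips={rid: lane.tip() for rid, lane in lanes.items()})

def commit(tip_cut: TipCut, lanes: dict, last_committed: dict) -> list:
    # One committed tip cut releases every batch between the previously
    # committed tips and the new ones -- arbitrarily much data in one step.
    ordered = []
    for rid, new_tip in sorted(tip_cut.tips.items()):
        ordered.extend(lanes[rid].batches[last_committed.get(rid, 0):new_tip])
        last_committed[rid] = new_tip
    return ordered

# Usage: lanes keep growing during a blip; the next tip cut commits the backlog at once.
lanes = {0: Lane(), 1: Lane()}
lanes[0].append(["tx1", "tx2"])
lanes[1].append(["tx3"])
cut = propose_tip_cut(lanes)
print(commit(cut, lanes, last_committed={}))  # [['tx1', 'tx2'], ['tx3']]
```

Note how the amount of data committed per consensus decision is unbounded: after a blip, a single tip cut absorbs the entire backlog, which is what gives Autobahn its hangover-free, "seamless" behavior.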

Data Dissemination Layer: Lanes and Cars
Consensus Layer: Low-Latency Agreement
