
Cluster Mempool, Simple Problems with Chunks

Cluster Mempool[1] is a complete reworking of how the mempool organizes and sorts transactions, conceived and implemented by Suhas Daftuar and Pieter Wuille. The design aims to simplify the overall architecture, better align transaction selection with miners’ incentives, and improve the security of second-layer protocols. It was merged into Bitcoin Core in PR #33629[2] on November 25, 2025.

A mempool is a large pool of pending transactions that your node must keep track of for a number of reasons: fee estimation, transaction verification, and block creation if you’re a miner.

That is a lot of different purposes for a single piece of data your node maintains. Bitcoin Core up to version 30.0 organizes the mempool in two different ways to help with these tasks, both from the relative perspective of any given transaction: a combined feerate looking forward to the transaction and all of its descendants (descendant feerate), and a combined feerate looking back to the transaction and all of its ancestors (ancestor feerate).

These are used to determine which transactions to evict from your mempool when it is full, and which to include first when creating a new block template.
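As a rough illustration (not Bitcoin Core’s actual code, and with invented transaction names, fees, and sizes), here is how those two combined feerates can be computed for a tiny two-transaction chain:

```python
# Illustrative only: invented txids, fees (sats), sizes (vB), and parent links.
txs = {
    "A": {"fee": 500,  "size": 500, "parents": set()},
    "B": {"fee": 2000, "size": 200, "parents": {"A"}},  # B spends an output of A
}

def ancestors(name):
    """A transaction plus every unconfirmed transaction it depends on."""
    result = {name}
    for parent in txs[name]["parents"]:
        result |= ancestors(parent)
    return result

def descendants(name):
    """A transaction plus every unconfirmed transaction that depends on it."""
    result = {name}
    for other, data in txs.items():
        if name in data["parents"]:
            result |= descendants(other)
    return result

def feerate(names):
    """Combined feerate of a set of transactions, in sat/vB."""
    return sum(txs[n]["fee"] for n in names) / sum(txs[n]["size"] for n in names)

print(feerate(ancestors("B")))    # ancestor feerate of B: (500 + 2000) / (500 + 200)
print(feerate(descendants("A")))  # descendant feerate of A: the same set, seen from A
```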

How Is My Mempool Managed?

When a miner decides whether to include a transaction in their block, their node looks at that transaction plus any unconfirmed ancestors that must be confirmed first for it to be valid, and computes the feerate per byte of the whole group based on the total fees they pay. If that group fits within the block limits and is competitive with everything else on fees, it goes in the next block. This is done for every transaction.

When your node decides which transactions to evict from its mempool when it’s full, it looks at each transaction together with all of its descendants, evicting the transaction and its descendants if the rest of the mempool is already full of transactions (and their descendants) paying a higher feerate.

Look at the example graph of transactions above; the feerates are shown in parentheses (ancestor feerate, descendant feerate). A miner looking at transaction E would likely put it in the next block: a small transaction paying a very high feerate, with only a single small ancestor. However, when a node’s mempool fills up, it looks at transaction A, whose two large children pay a low descendant feerate, and may evict it along with its children, or refuse to accept it at all if it is newly seen.

These two orderings are not entirely compatible. The mempool should reliably reflect what miners will mine, and users must be able to trust that their local mempool accurately predicts what miners will confirm.
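A small hypothetical example (again with invented names and numbers) makes the conflict concrete: the package a miner most wants can sit inside the group that eviction most wants to discard.

```python
# Hypothetical graph: P is a large low-fee parent with two children,
# a small high-fee child C1 and a large low-fee child C2 (all values invented).
txs = {
    "P":  {"fee": 1000, "size": 10000},
    "C1": {"fee": 5000, "size": 200},    # spends an output of P
    "C2": {"fee": 500,  "size": 20000},  # also spends an output of P
}

# Miner's view: the package {P, C1} has an ancestor feerate of roughly 0.59 sat/vB,
# the best thing here, so a miner wants to include P and C1.
miner_view = (txs["P"]["fee"] + txs["C1"]["fee"]) / (txs["P"]["size"] + txs["C1"]["size"])

# Eviction's view: P together with all its descendants has a descendant feerate of
# roughly 0.22 sat/vB, the worst thing here, so a full mempool evicts P first,
# taking the miner-attractive C1 down with it.
eviction_view = sum(t["fee"] for t in txs.values()) / sum(t["size"] for t in txs.values())

print(miner_view, eviction_view)
```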

Managing the mempool well is important for:

  • Miner block building: finding the most profitable set of transactions for miners to include
  • User reliability: accurate and dependable fee estimates and transaction confirmation times
  • Second-layer security: reliable and predictable behavior for second-layer protocols

The current behavior of the mempool does not fully match the reality of mining incentives. That creates blind spots which can undermine second-layer security by creating uncertainty about whether a transaction will actually make it to miners, and it creates pressure to use non-public submission channels to miners, which can further complicate the first problem.

This is especially problematic when it comes to replacing unconfirmed transactions, whether simply to bump the fee so miners confirm them sooner, or as part of a second-layer protocol enforcing its state on-chain.

Existing replacement behavior is unpredictable depending on the shape and size of the transaction graph involved. Even in the simple case of fee bumping, a replacement can fail to propagate and take effect, even if mining the replacement would be better for the miner.

In the context of second-layer protocols, the current logic allows participants to get required ancestor transactions evicted from the mempool, or to make it impossible for another participant to get a required child transaction accepted into the mempool under the current rules, because of child transactions created by a malicious participant or the eviction of required ancestors.

All of these problems are the result of these inconsistent ancestor and descendant orderings and the misaligned incentives they create. A single global ordering would fix these problems, but re-sorting the entire mempool for every transaction doesn’t work.

It’s All Just a Graph

Interdependent transactions form a graph, a directed set of “paths.” If a transaction spends an output created by an earlier transaction, it is linked to that earlier transaction. If it also spends an output created by a second earlier transaction, it links the two earlier transactions together.

While unconfirmed, spending chains like this require the earlier transaction to be confirmed before the later one can be. After all, you can’t spend an output that hasn’t been created yet.

This is an important concept for understanding the mempool: it has an inherent ordering imposed by the direction of these dependencies.
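A minimal sketch of that inherent ordering, with illustrative transaction names: any valid ordering must place a transaction after every unconfirmed transaction whose output it spends.

```python
def is_valid_order(order, parents):
    """Check that every transaction appears after all of its unconfirmed parents.

    order: list of txids; parents: dict mapping txid -> set of unconfirmed parents.
    """
    seen = set()
    for tx in order:
        if not parents[tx] <= seen:
            return False  # would spend an output that hasn't been created yet
        seen.add(tx)
    return True

parents = {"A": set(), "B": {"A"}, "C": {"A", "B"}}
print(is_valid_order(["A", "B", "C"], parents))  # True
print(is_valid_order(["C", "A", "B"], parents))  # False: C comes before its parents
```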

Everything is just a graph.

Chunks Create Clusters Create Mempools

In a cluster mempool, a cluster is a group of unconfirmed transactions that are directly related, i.e. spending outputs created by others in the group, or vice versa. This becomes the basic unit of the new mempool architecture. Analyzing and ordering an entire mempool is an intractable task, but analyzing and ordering individual clusters is much more manageable.

Each cluster is divided into chunks, subsets of transactions from the cluster, which are then sorted from highest feerate per byte to lowest while respecting the dependency ordering. So, for example, let’s say that from highest to lowest the chunks in cluster (A) are: [A, D], [B, E], [C, F], [G, J], and lastly [I, H].
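One simplified way to picture how those chunks arise (a sketch that assumes a topologically valid linearization is already given, not Bitcoin Core’s actual implementation): walk the linearized cluster and merge adjacent groups until chunk feerates never increase from front to back.

```python
def chunk(linearization, fee, size):
    """Merge a linearized cluster into chunks with non-increasing feerates.

    linearization: txids in a topologically valid order (parents before children).
    fee, size: dicts mapping txid -> fee (sats) and size (vB). Illustrative sketch.
    """
    chunks = []  # each chunk is (list of txids, total fee, total size)
    for tx in linearization:
        chunks.append(([tx], fee[tx], size[tx]))
        # While the newest chunk has a higher feerate than the chunk before it,
        # merge the two (compare f2/s2 > f1/s1 without dividing).
        while len(chunks) > 1 and chunks[-1][1] * chunks[-2][2] > chunks[-2][1] * chunks[-1][2]:
            t2, f2, s2 = chunks.pop()
            t1, f1, s1 = chunks.pop()
            chunks.append((t1 + t2, f1 + f2, s1 + s2))
    return chunks

# Toy example: a low-fee parent P followed by a high-feerate child C gets merged
# into a single chunk [P, C], because C on its own would "pull P up" with it.
fee, size = {"P": 500, "C": 3000}, {"P": 400, "C": 150}
print(chunk(["P", "C"], fee, size))  # [(['P', 'C'], 3500, 550)]
```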

This allows all of these chunks and clusters to be pre-sorted, and the entire mempool to be ordered efficiently in the process.

Miners can now simply grab the highest-feerate chunks from every cluster and add them to their block template; if there is still room, they can move down to the next-highest chunks, and continue until the block is nearly full and only the last transactions that still fit need to be found. This is very close to the optimal way to create a block template given access to all available transactions.
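A greedy sketch of that process (hypothetical data structures, not the real implementation): pull whole chunks in descending feerate order across clusters, only exposing a cluster’s next chunk once its previous chunk has been taken.

```python
import heapq

def build_template(clusters, max_vsize):
    """Greedy sketch: add whole chunks in descending feerate order across clusters.

    clusters: list of chunk lists, each chunk a (txids, fee, size) tuple, already
    ordered within its cluster from highest to lowest feerate.
    """
    # Only the first remaining chunk of each cluster is eligible at any time,
    # because later chunks may depend on transactions in earlier chunks.
    heap = []
    for i, chunks in enumerate(clusters):
        if chunks:
            _, fee, size = chunks[0]
            heapq.heappush(heap, (-fee / size, i, 0))
    template, used = [], 0
    while heap:
        _, i, j = heapq.heappop(heap)
        txids, fee, size = clusters[i][j]
        if used + size > max_vsize:
            continue  # this chunk no longer fits; the real code is smarter here
        template.extend(txids)
        used += size
        if j + 1 < len(clusters[i]):
            nxt = clusters[i][j + 1]
            heapq.heappush(heap, (-nxt[1] / nxt[2], i, j + 1))
    return template, used
```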

When a node’s mempool is full, it can simply grab the lowest-feerate chunks from all clusters and start removing those until it no longer exceeds its size limit. If that’s not enough, it moves on to the next-lowest chunks, and so on, until it’s within its mempool limits. Done this way, eviction no longer has the odd cases where it fails to match mining incentives.
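And the mirror image for eviction, again as a hedged sketch with hypothetical structures: repeatedly drop the lowest-feerate trailing chunk across all clusters until the mempool is back under its limit.

```python
def evict(clusters, mempool_vsize, max_vsize):
    """Greedy sketch: drop the lowest-feerate trailing chunk across all clusters
    until the mempool is back under its size limit.

    clusters: dict of cluster id -> list of (txids, fee, size) chunks, ordered from
    highest to lowest feerate, so the last chunk can be dropped without orphaning
    anything else in its cluster.
    """
    evicted = []
    while mempool_vsize > max_vsize:
        candidates = [cid for cid, chunks in clusters.items() if chunks]
        if not candidates:
            break
        worst = min(candidates, key=lambda c: clusters[c][-1][1] / clusters[c][-1][2])
        txids, fee, size = clusters[worst].pop()
        evicted.extend(txids)
        mempool_vsize -= size
    return evicted
```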

The replacement logic is also greatly simplified. Compare cluster (A) with cluster (B), where transaction K replaces G, I, J, and H. All that needs to be met is that the new chunk [K] has a higher chunk feerate than [G, J] and [I, H], that [K] pays more in total fees than [G, J, I, H], and that K does not exceed the upper limit on how many transactions it may replace.
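As a sketch of those three checks (the function name, data shapes, and exact replacement-count limit are illustrative, not Bitcoin Core’s API):

```python
def replacement_ok(new_chunk, displaced_chunks, max_replaced=100):
    """Sketch of the chunk-based replacement checks described above.

    new_chunk: (txids, fee, size) for the replacement, e.g. (["K"], fee_K, size_K).
    displaced_chunks: the chunks it conflicts with, e.g. [(["G", "J"], ...), (["I", "H"], ...)].
    max_replaced is illustrative; the real rule caps how many transactions can be evicted.
    """
    txids, fee, size = new_chunk
    new_feerate = fee / size
    # 1. The new chunk must have a higher feerate than every chunk it displaces.
    beats_feerates = all(f / s < new_feerate for _, f, s in displaced_chunks)
    # 2. It must pay more in total fees than everything it evicts.
    pays_more = fee > sum(f for _, f, _ in displaced_chunks)
    # 3. It cannot evict more than the allowed number of transactions.
    not_too_many = sum(len(t) for t, _, _ in displaced_chunks) <= max_replaced
    return beats_feerates and pays_more and not_too_many
```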

In the chunking paradigm, all of these different uses are aligned with one another.

New Mempool

This new architecture allows the mempool’s limits to be simplified, removing the previous limits on how many unconfirmed ancestors a transaction can have in the mempool and replacing them with a limit of 64 transactions and 101 kvB per cluster.
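In sketch form, using the limits quoted above (the helper itself is illustrative, not Bitcoin Core’s actual check):

```python
# Per-cluster limits quoted above; the helper is an illustrative sketch.
MAX_CLUSTER_COUNT = 64         # transactions per cluster
MAX_CLUSTER_VSIZE = 101_000    # virtual bytes (101 kvB) per cluster

def cluster_within_limits(cluster):
    """cluster: list of (txid, vsize) pairs belonging to a single cluster."""
    return (len(cluster) <= MAX_CLUSTER_COUNT and
            sum(vsize for _, vsize in cluster) <= MAX_CLUSTER_VSIZE)
```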

This limit is necessary to keep the computational cost of pre-sorting the clusters and their chunks low enough to be practical for nodes to do on a regular basis.

This is the really important insight of the cluster mempool. By keeping chunks and clusters relatively small, you simultaneously make building a near-optimal block template cheaper, make transaction replacement (fee bumping) easier to reason about and thereby improve second-layer security, and fix the eviction logic, all at the same time.

No more expensive and slow on-the-fly computation during template creation, and no more unexpected behavior during replacement. By correcting the incentive mismatches in how the mempool organizes transactions in different situations, the mempool works better for everyone.

Cluster mempool is a project that has been years in the making, and will make a material impact in ensuring that profitable block templates are open to all miners, that second-layer protocols have sound and predictable mempool behavior on which to build, and that Bitcoin can continue to function as a decentralized financial system.

For those interested in delving deeper into the nitty gritty of how a cluster mempool is implemented and works under the hood, here are two Delving Bitcoin threads to read:

High Level Usage Overview (By Design):

How Cluster Mempool Feerate Works:

Get your copy of The Core Issue today!

Don’t miss your chance to own The Core Issue, featuring articles written by many Core developers describing the projects they themselves are working on!

This piece is a Letter from the Editor featured in Bitcoin Magazine’s latest Print issue, The Core Issue. We share it here as an early look at the ideas explored throughout the entire issue.

[1]

[2]
