Per-block non-interactive Schnorr signature aggregation

The idea was first proposed by adiabat on May 7, 2017, on the bitcoin-dev mailing list, and later strengthened by Andrew Poelstra.

More recently, it was mentioned by tevador in his Minglejingle (MJ) protocol.

I’m not going to comment on the MJ protocol itself here; rather, it’s the idea of using “per-block non-interactive Schnorr signature aggregation” that I find quite interesting.

Quoting tevador’s description here:

Two Schnorr signatures (R1, s1), (R2, s2) of two different messages m1, m2 with two different public keys P1, P2 can be partially aggregated into one signature (R1, R2, s) with s = s1 + y*s2, where y = Hs(Tagg, R1, R2, P1, P2, m1, m2) (or an equivalent random oracle output). The scheme can be easily extended for any number of signatures.

With this aggregation, half of the payload of the Schnorr signatures (R, s) in each block (i.e. the s parts) can be combined into one single s and put into the block header.
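To make the scheme concrete, here is a minimal sketch in Python over a toy multiplicative group. The small parameters, key/nonce values, and the SHA-256-based hash-to-scalar are all my own illustrative assumptions; a real chain would use an elliptic-curve group (e.g. secp256k1) and a properly tagged hash for Hs:

```python
import hashlib

# Toy Schnorr over the order-q subgroup of Z_p*; the tiny parameters
# below are hypothetical, for illustration only.
p = 2039   # safe prime, p = 2*q + 1
q = 1019   # prime order of the subgroup
g = 4      # generator of the order-q subgroup

def H(*parts):
    """Hash-to-scalar stand-in for Hs (an assumed encoding, not tevador's)."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, k, m):
    """Plain Schnorr: R = g^k, s = k + H(R, P, m)*x."""
    R = pow(g, k, p)
    s = (k + H(R, pow(g, x, p), m) * x) % q
    return R, s

def verify(P, m, R, s):
    """Check g^s == R * P^e with e = H(R, P, m)."""
    return pow(g, s, p) == (R * pow(P, H(R, P, m), p)) % p

# Two independent signatures under different keys and messages.
x1, x2 = 123, 456
P1, P2 = pow(g, x1, p), pow(g, x2, p)
R1, s1 = sign(x1, 77, "m1")
R2, s2 = sign(x2, 88, "m2")
assert verify(P1, "m1", R1, s1) and verify(P2, "m2", R2, s2)

# Half-aggregation: both R's are kept, the s-parts collapse into one.
y = H("Tagg", R1, R2, P1, P2, "m1", "m2")
s = (s1 + y * s2) % q

# Aggregate verification: g^s == R1 * P1^e1 * (R2 * P2^e2)^y
e1, e2 = H(R1, P1, "m1"), H(R2, P2, "m2")
lhs = pow(g, s, p)
rhs = (R1 * pow(P1, e1, p) * pow((R2 * pow(P2, e2, p)) % p, y, p)) % p
assert lhs == rhs
print("aggregate verifies")
```

With n signatures the block would carry (R_1, …, R_n, s), and only the single aggregated s needs to go into the header, which is where the space saving comes from.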

I’m wondering why Bitcoin doesn’t use this aggregation scheme, or maybe it’s in the plan? My guess is that it’s not easy for Bitcoin to adopt, because it would require a new block header structure, which would affect all existing Bitcoin miners and makes it nearly impossible in practice.

For the MWC NIT feature, it looks possible to adopt this aggregation scheme for the Input signatures, to get better scalability.

Comments Welcome

Your comments/inputs on this direction would be much appreciated; it’s not something to take up hastily :slight_smile:


I think this should work for the whole block. The miner can do the aggregation without any problem, and nodes can verify it. For Bitcoin that would be enough.

But we have compaction with cut-through, which means we will need to delete some of the signed inputs.
If we delete one, the aggregated signature becomes invalid or unverifiable: deleting an input means that the public key it was signed with, plus the message, will be gone.

In this case we have two choices:

  • Don’t keep the signatures beyond the 2-week horizon. But in this case an attacker with a 2+ week reorg can spend any NIT input, because there is no signature. That is not fine.
  • Keep all inputs of a block or delete all of them. For example, if a block has 5 such inputs, they can be compacted only when all 5 have been spent. But in this case the problem is the size of the blockchain: we save space in the block but lose more on the chain. This looks like an optimization problem — if we define an acceptable tradeoff between the gain per block and the loss for the chain, we can try to do some modelling. That might work.

I don’t see where tevador explains how exactly cut-through can be done. As I understand it, there is no way to re-aggregate the inputs: because of the hash, it is a one-way transformation.
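A quick sketch of why the aggregate is all-or-nothing, using the same toy Schnorr construction as above (all parameters and values are hypothetical illustrations): once the miner discards the individual s1 and s2, the leftover (R1, s) neither verifies on its own nor lets anyone recover s1, since s1 = s − y·s2 and both y and s2 depend on data that cut-through would delete.

```python
import hashlib

# Toy Schnorr over a small subgroup of Z_p* (illustration only).
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, k, m):
    R = pow(g, k, p)
    return R, (k + H(R, pow(g, x, p), m) * x) % q

def verify(P, m, R, s):
    return pow(g, s, p) == (R * pow(P, H(R, P, m), p)) % p

x1, x2 = 123, 456
P1, P2 = pow(g, x1, p), pow(g, x2, p)
R1, s1 = sign(x1, 77, "m1")
R2, s2 = sign(x2, 88, "m2")
y = H("Tagg", R1, R2, P1, P2, "m1", "m2")
s = (s1 + y * s2) % q   # the individual s1, s2 are now discarded

# "Cut through" input 2: suppose (P2, m2, R2) get deleted.
# The leftover (R1, s) would pass plain verification only if
# y*s2 == 0 mod q, which is a negligible-probability event:
assert verify(P1, "m1", R1, s) == ((y * s2) % q == 0)

# And s1 = (s - y*s2) mod q cannot be recomputed: y hashes the
# deleted R2, P2, m2, and s2 itself was discarded at aggregation.
```

So pruning a single input from an already-aggregated block seems to require keeping (or re-proving) data that aggregation was supposed to throw away, which matches the re-aggregation concern above.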


Thanks @Konstantin for your comments.

tevador’s MJ protocol keeps all the Input info during cut-through; only the spent Output info is pruned.

Some corrections here.

  1. Running nodes do not keep the Input signatures beyond the 1-week horizon. It’s the cut_through_horizon consensus: 1 week instead of 2 here.
  2. “attacker with 1+ week reorg can spend any NIT output because there is no signature” is not accurate. Not “any”: only a dishonest sender (who knows the shared Ephemeral key) can double-spend his/her NIT outputs.
    Compared to a normal double-spend attack with a reorg tens of blocks deep, this kind of attack makes no sense because it costs much more.

I’m wondering whether we should forbid any running state node from going back to State Sync, to kill the horizon attack completely.


@suem thank you for the clarification. Signature aggregation looks pretty good; it is nice that we can fit more transactions inside a block.

Yes, probably a node should never accept any new headers or blocks below the horizon. That would prevent reorgs deeper than 1 week.
In the worst-case scenario, when the reorg depth is near 1 week, the network can be split: some nodes accept the reorg blocks, some don’t. The mining pool nodes will pick one branch and it will win; as a result, some nodes will get stuck. I think that is totally acceptable, and much better than a reorg below the horizon. At least I don’t see any other side effect. In this case the horizon works as a natural checkpoint that every node computes by itself.


I guess this requires a hard fork?


NIT requires a hard fork in any case, so we can include as many features as we need in that fork. Also, the NRD kernel needs a fork to be activated as well.
