Distributed Message Pool, Traceability, Swap Marketplace

We need to come up with a design that helps improve Traceability and implements the Swap Marketplace.

The biggest challenge is proving that a player is ‘honest’.

Here is a link to the document with the proposal:

Ideas on how we can prove the stakes for the wallet while keeping it as private as possible are very welcome.


The MD document is updated for traceability. Decoy has too many problems to solve. Let’s try something simpler. The proposed traceability approaches should be applicable to both IT and NIT.

Some questions:

  1. You have Expiration for both push_message and query_message, but no timestamp in the message. How should this "expiration" : 600 be handled?
  2. In case of a bad player, what will happen if his/her node broadcasts push_message and query_message to the network at 1M msgs/s, for example?
  3. What are the system capacity and cost in the case of, for example, 10k nodes + 10k wallets (users)? I.e., the estimated total messages per second, assuming a typical query every 5 seconds for each wallet.

And regarding the traceability improvement:

  1. In " Payment workflow (Interactive Transactions)", that should be 2 transactions instead of 1. One transaction from wallet A to C (to accept decoy coins), another transaction from wallet A to B (real recipient), and wallet A merge both transactions and then post to network.
  2. In this interactive mode, I worry about the usability of this payment workflow. If a bad player (or attacker) publish a lot of online wallets (to accept decoy coins) but few of them are well maintained (means works), the former part of this payment workflow will often fail.
  3. In “Payment workflow (NIT)”, the so-called step of “Wallet A requesting online wallets that are ready to accept decoy coins” is not needed. We just need collect some Stealth Addresses (for example 10) and then wallet A can create multiple decoy coins to these Stealth Addresses. Quite simple and well usability!
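The NIT suggestion in point 3 above can be sketched as follows. This is a minimal illustration of the "collect some Stealth Addresses, then create decoy coins to them" idea; the address strings, the count of 10, and the helper name are all made up for the example.

```python
import random

# Hypothetical sketch: sample some already-known Stealth Addresses and
# use them as decoy targets, with no online-wallet discovery step.
def pick_decoy_targets(known_stealth_addresses, n_decoys=10, seed=None):
    rng = random.Random(seed)
    n = min(n_decoys, len(known_stealth_addresses))
    # rng.sample picks n distinct addresses, so no duplicate decoys
    return rng.sample(known_stealth_addresses, n)

# Illustrative usage: wallet A already knows 50 Stealth Addresses
addresses = [f"stealth_addr_{i}" for i in range(50)]
targets = pick_decoy_targets(addresses, n_decoys=10, seed=42)
print(len(targets))  # 10 decoy targets, one decoy coin each
```

The point of the sketch is that the decoy targets can be chosen entirely offline, which is why the "request online wallets" step would be unnecessary.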

Regarding to the " Traceability with Multisig", that’s too complex and I don’t think we need this complexity for achieving this traceability improvement.

Regarding to the " Traceability with Multikernel" and the question of “Will it be possible to separate the kernels with matched inputs/outputs?” If wallet A merge both transactions firstly and only post the merged transaction (with 2 or more kernels), it’s impossible to separate the kernels with matched inputs/outputs.

  1. You have Expiration for both push_message and query_message, but no timestamp in the message. How should this "expiration" : 600 be handled?

The timestamp will be added by the node. Because there is one instance of the message on a single node, the node will use the current time as the timestamp.
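The node-side timestamping described above can be sketched like this. The field names (`message_id`, `expiration_secs`) are assumptions for illustration, not the actual wire format.

```python
import time

class PooledMessage:
    """Sketch of a pooled message whose lifetime is tracked node-side."""

    def __init__(self, message_id, payload, expiration_secs):
        self.message_id = message_id
        self.payload = payload
        self.expiration_secs = expiration_secs
        # The sender supplies no timestamp; the receiving node stamps
        # the message with its own local clock on arrival.
        self.received_at = time.time()

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.received_at > self.expiration_secs

msg = PooledMessage("abc", b"payload", expiration_secs=600)
print(msg.is_expired())                            # False right after arrival
print(msg.is_expired(now=msg.received_at + 601))   # True once 600s have passed
```

Because each node stamps its own copy on arrival, clocks never need to be synchronized between sender and receiver; expiration is purely local.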

  1. In case of a bad player, what will happen if his/her node broadcasts push_message and query_message to the network at 1M msgs/s, for example?

In the case of a bad player, push_message affects only the player’s own node. These messages are not broadcast; they stay local on the node that the player controls. A ‘public’ node can be affected, but eventually we want every wallet to run its own node. That is why I think it is acceptable.
query_message is broadcast, so an attacker can DDoS with it. We need to be able to ban such nodes effectively. The request comes from peers, so the peer can be banned. It is expected that the first ban will happen at the largest depth; if it does not, it means that node is a bad player, so it will be banned at the next level. The problem is how to recognize whether a message originates from the same source. I will think about that. Currently I am considering using an output with some significant amount (like 10 MWC) as collateral (that output will no longer be private). Outputs are a limited resource, so for a DDoS the attacker would need many of them, which is practically impossible.
Actually, if everything is cacheable, we can just ban any node that makes too many requests. I think we can keep it simple. My point is that we don’t even need to know the source of the request. We can ban a peer if we are getting too many incoming requests from it. The bigger the depth, the lower the allowed request frequency. It looks like even this simple model will work.
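The simple "ban peers that request too often, with a depth-scaled allowance" model above can be sketched as follows. The window size, base limit, and the linear depth-scaling rule are illustrative assumptions, not values from the design.

```python
import time
from collections import defaultdict

class PeerRateLimiter:
    """Sketch: ban a peer that exceeds its per-window request budget."""

    BASE_LIMIT = 100  # allowed requests per window at depth 0 (assumption)

    def __init__(self, window_secs=10):
        self.window_secs = window_secs
        self.counts = defaultdict(list)  # peer_id -> recent arrival times
        self.banned = set()

    def allowed_limit(self, depth):
        # Bigger depth (request forwarded further) -> lower allowed frequency.
        return max(1, self.BASE_LIMIT // (depth + 1))

    def on_request(self, peer_id, depth, now=None):
        now = time.time() if now is None else now
        if peer_id in self.banned:
            return False
        # Keep only arrivals inside the sliding window, then record this one.
        arrivals = [t for t in self.counts[peer_id] if now - t < self.window_secs]
        arrivals.append(now)
        self.counts[peer_id] = arrivals
        if len(arrivals) > self.allowed_limit(depth):
            self.banned.add(peer_id)
            return False
        return True

limiter = PeerRateLimiter()
# A depth-9 query is only allowed 100 // 10 = 10 times per window.
results = [limiter.on_request("peer1", depth=9, now=0.0) for _ in range(11)]
print(results[-1])  # the 11th request trips the limit and bans the peer
```

Note that this only needs local bookkeeping per peer, which matches the point that we never need to identify the original source of the request.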

  1. What are the system capacity and cost in the case of, for example, 10k nodes + 10k wallets (users)? I.e., the estimated total messages per second, assuming a typical query every 5 seconds for each wallet.

We are talking about query_message, because it is broadcast (push_message is fine, it stays local). Let’s say the depth of the call is 10. In that case, probably all 10k nodes will be reachable.
There are 2 use cases: swaps and traceability.

For swaps, the keys will be very similar (the same key combinations for every coin), so the nodes will build a cache pretty quickly. The first call will be broadcast to all nodes, but subsequent ones will hit the cache.

For traceability it is the same. There is no key variation, so the requested data will be cached.

We will start having issues when there are too many use cases. With this approach, every use case adds load to the network. But I think it is fine.

Also please note, a request has a response size limit, which is why the cache size will be limited to a relatively small amount. There is no need to store data from the whole network.

About traceability: thank you for your inputs, they really help. It looks like “Traceability with Multikernel” is the winner. Decoy has too many downsides and the potential to degrade the network (if decoy outputs are never spent, that will kill network scalability). Traceability with Multikernel uses a natural MW feature and doesn’t add much complexity. I don’t see why we should do anything extra.

About the caches.
For the design it is critical to know whether the requested data will be cacheable or not. Data caches well if it is static and the amount of data is limited. So far our use cases expect static data, and the requests limit the amount of data. It looks like we can design it this way. So the question is whether our use cases have dynamic data or not.
@suem, what do you think? Do you see any dynamic data that we need to handle?

The timestamp will be added by the node. Because there is one instance of the message on a single node, the node will use the current time as the timestamp.

  • ^ Is this a case of the “dynamic” data?
  • Could you give details on how a node adds the timestamp to push_message/query_message? Can it be modified freely by any node?
  • Is there a MAC for these messages? If not, a bad player can just modify every received message and broadcast the modified ones. If yes, how?

This data is well cacheable. The cache only needs to be updated once every few minutes. Because of that, 10,000 wallets or 10 wallets will generate pretty much the same traffic. The requests are very similar.

An attacker’s node can modify any message, and I don’t think we can prevent that. The MAC really makes sense, because we don’t want an attacker to modify messages from other honest nodes (an attacker will still be able to create many ‘virtual’ nodes and flood; I will think about that, it seems some validation is still needed for the peers). The nodes will run on Tor, or at least have a Tor address. That can be used to sign the messages. The signature will also guarantee the message is not changed, so technically it is a MAC. In this case we will know that the message really came from that node. That can be useful for filtering out attacker data. An attacker node can be recognized and banned. I think it is a good idea; we can do that. I will update the design with that.
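The verify-then-filter flow above can be illustrated with a small sketch. The actual design would sign messages with the node’s Tor (ed25519) identity key; Python’s standard library has no ed25519, so this stand-in uses an HMAC with a per-node secret purely to show how tampering is detected — it is NOT the actual scheme.

```python
import hashlib
import hmac
import os

# Stand-in for the node's Tor key signature (assumption: real design
# uses ed25519 signatures, which also bind the sender's identity).
def mac_for(node_secret: bytes, message: bytes) -> bytes:
    return hmac.new(node_secret, message, hashlib.sha256).digest()

def verify(node_secret: bytes, message: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(mac_for(node_secret, message), tag)

secret = os.urandom(32)
msg = b'{"type":"push_message","payload":"..."}'
tag = mac_for(secret, msg)

print(verify(secret, msg, tag))                 # authentic message
print(verify(secret, msg + b"tampered", tag))   # modified in transit -> rejected
```

With real signatures instead of a shared secret, any relay can verify the tag without being able to forge it, which is what lets honest nodes filter and ban the source of modified messages.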

I think we can’t treat message pool data as trustworthy at all. We have the pool’s response with some data. Even if most of the data comes from an attacker, we need to be able to filter out the noise.

Every message can be verified by the wallet (for example, for a Swap the proof can be requested), so eventually the wallet learns the attacker nodes and bans their data. I think it needs to be done at the wallet level. The nodes will be responsible for the communication layer; we don’t want nodes to be complicated.

Let me update the design with that.

@suem, it looks like Beam uses the blockchain to store the data. I really don’t want to do that, because we still want data to expire pretty quickly. But we might prevent flooding with that approach. We could write some authorization token (a public key) to the chain, and it would be available for 2 weeks until the blocks are compacted. Such a token might cost some significant fees that would go to the miners. Because tokens cost money, it would be costly to attack. Every message could be associated with such a token.
The problem with that approach is that it requires a hard fork and adds complexity to the consensus. I really don’t like that.

Here is the updated version of the document:

I hope that this is the final version.

@suem, please take a look. It is much simpler now.

Here is a tracking ticket for transactions. There will be many of them. I will start posting soon.

Here is a summary of the changes:

Here are the revisited design documents.
The network layer (almost done so far). The design explains the main aspects of the functionality.

The atomic swap marketplace design document explains what will be done. I hope to start on that pretty soon. Also, please note, the implementation will be done at the qt-wallet level; mwc-wallet will provide generic transport only.

CoinJoin is a future project; it is on the roadmap only. So far no implementation has been started. But we need the design in order to understand the network transport requirements and attack mitigations.

In Discord I was wondering the following regarding mwc-wallet_coinjoin.md:

  1. I’m not sure I understand why the wallet needs to periodically spend its own outputs. Is this connected to the wallet checking whether its own outputs are traceable, mentioned further below?

Over a certain time period, because of cut-through, the number of kernels and outputs will decline. Because of that, the wallet will need to track its old outputs and periodically spend them. The time period will depend on network activity. *** suem can confirm
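The "periodically respend old outputs" idea above amounts to tracking the creation height of each wallet output and flagging the ones older than some threshold for a self-spend. A minimal sketch, assuming a made-up threshold and a simple `(output_id, created_height)` bookkeeping format:

```python
# Illustrative threshold only; the real value would depend on network
# activity, as noted above.
RESPEND_AFTER_BLOCKS = 10_000

def outputs_to_respend(outputs, current_height, threshold=RESPEND_AFTER_BLOCKS):
    """outputs: list of (output_id, created_height) tuples.

    Returns the ids of outputs old enough that the wallet should
    respend them to refresh their position in the output set."""
    return [oid for oid, created in outputs
            if current_height - created >= threshold]

wallet_outputs = [("out1", 100), ("out2", 95_000), ("out3", 99_990)]
print(outputs_to_respend(wallet_outputs, current_height=100_000))
# only out1 (age 99,900 blocks) crosses the threshold
```

The wallet would run this check periodically and build a self-spend transaction for whatever the function returns.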

  1. Can someone verify that the following is only true for T>5, for my understanding? (with example values)

As a result, if really needed, any of those participants can publish the CoinJoin transaction with a guaranteed result. No attacker will be able to interrupt them.

@suem wasn’t aware of this as it’s a draft document, so I will leave his answer out here for overview’s sake. I wrongly assumed this was a hint in the RFC of whom to ask, sorry suem. @Konstantin elaborated the following:

This is related to the attack, plus the ability to use CoinJoin for publishing a regular IT transaction. The root of the problem is output regeneration in case of an attack (for an IT transaction the receiver expects to generate the output once). But because aggregation can be finished earlier, if the last participant agrees to pay the higher fee, it is possible to publish an IT transaction with CoinJoin so that no attacker can interrupt the process. For NIT it is not a problem at all. It is just a natural property; I pointed it out. Even if we don’t use it, it is nice to have.

Also, for that example, the CoinJoin process can go forward after all 10 participants are done. There will be no risk for them. They can send to the next participants. If somebody tries to mess with the data, anyone in the chain has a ready-to-publish transaction, so it can be published. The point is that the situation for them can only get better (it can’t get worse). We might use this side effect to make the CoinJoin larger as well. In this case T will be the initial minimal size of the CoinJoin. The maximum size will be limited by the block size only.

I think I understood the implications for my question 1).

For question 2), can @Konstantin verify this means a CoinJoin transaction cannot be posted before 10 participants have finished contributing? (And hence “the mission can fail” until after T*2>10?)


To clarify that, we first need to verify the compact block structure. Let’s keep it in the design until that is done. When we start on this task, we will need to do that in any case.

A transaction can be posted with any number of participants. For our example, at least 5 participants are expected, but more is better. So we can say that the CoinJoin succeeded if the number of participants is 5 or more. In order to prevent an attacker from publishing the transaction too early, the transaction fees are distributed unevenly; the document explains that. The fee distribution is a key point. We want an attacker to pay high fees in order to do that.
“Mission fail” means that the last participant was not honest, or just had some network issues. In this case it makes sense to retry the whole process, unless the previous participant agrees to pay extra fees as the attacker would and publishes. So “failure” might not be the correct term, because participants will still get a CoinJoin and the result can only get better.
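The uneven fee distribution argument can be made concrete with a sketch. The idea: the total fee the network requires is fixed, but each joiner only contributes a share, back-loaded so later participants carry more; whoever publishes early must top up the missing remainder, so cutting the CoinJoin short is expensive. The linear weighting below is an illustrative assumption, not the actual scheme from the document.

```python
def fee_shares(total_fee, n_participants):
    """Back-loaded fee shares: participant i contributes ~i weight units."""
    weights = list(range(1, n_participants + 1))
    scale = total_fee / sum(weights)
    return [w * scale for w in weights]

def cost_to_publish_early(total_fee, n_participants, published_after):
    """Extra fee an early publisher must add to make the tx valid."""
    contributed = sum(fee_shares(total_fee, n_participants)[:published_after])
    return total_fee - contributed

# Example: total fee 1000 units, 10 expected participants.
print(round(cost_to_publish_early(1000, 10, 3)))   # publish after only 3 joiners
print(round(cost_to_publish_early(1000, 10, 10)))  # full coinjoin: nothing extra
```

With this weighting, an attacker publishing after 3 of 10 participants would have to cover roughly 89% of the total fee out of pocket, while waiting for the full set costs nothing extra — exactly the incentive described above.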
