Distributed Message Pool, Traceability, Swap Marketplace

We need to come up with a design that improves traceability and implements a Swap Marketplace.

The biggest challenge is proving that a player is ‘honest’.

Here is a link to the document with the proposal

Ideas on how we can prove the stakes for the wallet while keeping it as private as possible are very welcome.


The MD document has been updated for traceability. Decoy has too many problems to solve; let’s try something simpler. The proposed traceability approaches should be applicable to both IT and NIT.

Some questions:

  1. You have Expiration for both push_message and query_message, but there is no timestamp in the message. How do you handle this "expiration": 600?
  2. In case of a bad player, what happens if his/her node broadcasts push_message and query_message to the network at, say, 1M msgs/s?
  3. What are the system capacity and cost for, say, 10k nodes + 10k wallets (users)? I.e., the estimated total messages per second, assuming each wallet runs a typical query every 5 seconds.

And regarding the traceability improvement:

  1. In "Payment workflow (Interactive Transactions)", that should be 2 transactions instead of 1: one transaction from wallet A to C (to accept decoy coins) and another from wallet A to B (the real recipient); wallet A then merges both transactions and posts the result to the network.
  2. In this interactive mode, I worry about the usability of this payment workflow. If a bad player (or attacker) publishes a lot of online wallets (to accept decoy coins) but few of them are well maintained (i.e. actually working), the first part of this payment workflow will often fail.
  3. In "Payment workflow (NIT)", the step "Wallet A requests online wallets that are ready to accept decoy coins" is not needed. We just need to collect some Stealth Addresses (for example 10), and then wallet A can create multiple decoy coins to these Stealth Addresses. Quite simple, with good usability!

Regarding "Traceability with Multisig": that is too complex, and I don’t think we need that complexity to achieve this traceability improvement.

Regarding "Traceability with Multikernel" and the question "Will it be possible to separate the kernels with matched inputs/outputs?": if wallet A merges both transactions first and only posts the merged transaction (with 2 or more kernels), it is impossible to separate the kernels with matched inputs/outputs.

  1. You have Expiration for both push_message and query_message, but there is no timestamp in the message. How do you handle this "expiration": 600?

A timestamp will be added by the node. Because there is a single instance of the message on each node, the node will use its current time as the timestamp.
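A minimal sketch of this idea, assuming hypothetical `Message` and `MessagePool` types (these names are illustrative, not part of the actual design): the receiving node, not the sender, stamps the message with its own clock, so `"expiration": 600` means "drop the message 600 seconds after this node first saw it".

```python
import time

class Message:
    def __init__(self, payload, expiration_secs):
        self.payload = payload
        self.expiration_secs = expiration_secs
        self.received_at = None  # assigned by the node, never by the sender

class MessagePool:
    def __init__(self, clock=time.time):
        self.clock = clock       # injectable clock, handy for testing
        self.messages = []

    def push(self, msg):
        # The node stamps the message with its local time on arrival.
        msg.received_at = self.clock()
        self.messages.append(msg)

    def sweep_expired(self):
        # Keep only messages younger than their own expiration window.
        now = self.clock()
        self.messages = [m for m in self.messages
                         if now - m.received_at < m.expiration_secs]
```

Because each node uses only its own clock, no cross-node clock synchronization is needed for expiry.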

  1. In case of a bad player, what happens if his/her node broadcasts push_message and query_message to the network at, say, 1M msgs/s?

In case of a bad player, push_message affects only the player’s own node. These messages are not broadcast; they stay local on the node the player controls. A ‘public’ node can be affected, but eventually we want every wallet to run its own node. That is why I think it is acceptable.
query_message is broadcast, so an attacker can DDoS with it. We need to be able to ban such nodes effectively. The request comes from peers, so a peer can be banned. It is expected that the first ban will happen at the largest depth; if it doesn’t, that means the node is a bad player, so it will be banned at the next level. The problem is how to recognize whether a message originates from the same source. I will think about that. Currently I am thinking of using an output with some significant amount (like 10 MWC) as collateral (that output will no longer be private). Outputs are a limited resource, so a DDoS attacker would need many of them, which is infeasible.
Actually, if everything is cacheable, we can just ban the node that makes too many requests. I think we can keep it simple. My point is that we don’t even need to know the source of the request. We can ban a peer if we are getting too many incoming requests from it. The bigger the depth, the lower the allowed request frequency. It looks like even this simple model will work.
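The "ban a peer that sends too many requests, with a lower allowance at bigger depth" idea could be sketched like this. All names and thresholds (`base_rate_per_min`, the `depth + 1` scaling, 1-minute buckets) are assumptions for illustration, not part of the actual design:

```python
from collections import defaultdict

class PeerRateLimiter:
    def __init__(self, base_rate_per_min=60):
        self.base_rate = base_rate_per_min
        self.counts = defaultdict(int)   # (peer, minute window) -> count
        self.banned = set()

    def allowed_rate(self, depth):
        # Deeper broadcasts fan out more, so allow proportionally fewer
        # requests per minute from a single peer.
        return max(1, self.base_rate // (depth + 1))

    def on_request(self, peer, depth, now_min):
        """Return True if the request is accepted, False if refused."""
        if peer in self.banned:
            return False
        window = int(now_min)            # bucket requests per minute
        self.counts[(peer, window)] += 1
        if self.counts[(peer, window)] > self.allowed_rate(depth):
            self.banned.add(peer)        # over budget: ban the peer
            return False
        return True
```

Because each node only counts requests from its direct peers, no node needs to know the original source of a flood; the flood is throttled hop by hop.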

  1. What are the system capacity and cost for, say, 10k nodes + 10k wallets (users)? I.e., the estimated total messages per second, assuming each wallet runs a typical query every 5 seconds.

We are talking about query_message, because it is broadcast (push_message is fine, it is local). Let’s say the depth for the call is 10. In this case probably all 10k nodes will be reachable.
There are 2 use cases: swaps and traceability.
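A back-of-the-envelope check of the "depth 10 reaches all 10k nodes" claim, assuming a hypothetical fanout of 8 peers per node (the fanout value is my assumption, not from the design):

```python
def max_reach(fanout, depth):
    # Upper bound on nodes reachable by a broadcast:
    # fanout + fanout^2 + ... + fanout^depth new nodes across the hops.
    return sum(fanout ** i for i in range(1, depth + 1))

print(max_reach(8, 5))   # 37448: with fanout 8, even depth 5 exceeds 10k nodes
```

So depth 10 is a comfortable upper bound; in practice the real reach is smaller because peer sets overlap.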

For swaps the keys will be very similar (the same key combinations for every coin), so the nodes will build a cache pretty quickly. The first call will be broadcast to all nodes, but the next ones will hit the cache.

For traceability it is the same. There is no key variation, so the requested data will be cached.

We will start having issues when there are too many use cases. With this approach, every use case adds load to the network. But I think it is fine.

Also, please note that a request has a response size limit, which is why the cache size will be limited to a relatively small amount. There is no need to store the data of the whole network.
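A sketch of such a size-bounded, expiring response cache, under assumed parameters (`MAX_ENTRIES`, a 300-second TTL — both illustrative, not from the design):

```python
import time
from collections import OrderedDict

MAX_ENTRIES = 1000  # assumed budget, matching the response size limit idea

class ResponseCache:
    def __init__(self, ttl_secs=300, clock=time.time):
        self.ttl = ttl_secs
        self.clock = clock
        self.entries = OrderedDict()   # key -> (stored_at, response)

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        stored_at, response = item
        if self.clock() - stored_at >= self.ttl:
            del self.entries[key]      # entry expired, drop it lazily
            return None
        return response

    def put(self, key, response):
        self.entries[key] = (self.clock(), response)
        while len(self.entries) > MAX_ENTRIES:
            self.entries.popitem(last=False)   # evict the oldest entry
```

The bounded size means a node never stores the whole network’s data, and the TTL keeps the cache fresh without any invalidation protocol.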

About traceability: thank you for your input, it really helps. It looks like “Traceability with Multikernel” is the winner. Decoys have too many downsides and the potential to degrade the network (if decoy outputs are never spent, that will kill the network’s scalability). Traceability with Multikernel uses a natural MW feature and doesn’t add much complexity. I don’t see why we should do anything extra.

About the caches.
For the design it is critical to know whether the requested data will be cacheable or not. Data caches well if it is static and its amount is limited. So far our use cases are expected to have static data, and the requests limit the amount of data, so it looks like we can design it this way. So the question is whether our use cases have any dynamic data or not.
@suem, what do you think? Do you see any dynamic data that we need to handle?

A timestamp will be added by the node. Because there is a single instance of the message on each node, the node will use its current time as the timestamp.

  • Is this ^ a case of “dynamic” data?
  • Could you give details about how a node adds the timestamp to push_message/query_message? Can it be modified freely by any node?
  • Is there a MAC for these messages? If not, a bad player can just modify every received message and broadcast the modified ones. If yes, how?

This data is well cacheable. The cache needs to be updated only once every few minutes. Because of that, 10,000 wallets or 10 wallets will generate pretty much the same traffic. The requests are very similar.

An attacker’s node can modify any message, and I don’t think we can prevent that. A MAC really makes sense, I think, because we don’t want an attacker to modify messages from other honest nodes (an attacker will still be able to create many ‘virtual’ nodes and flood; I will think about that, as it seems some validation is still needed for the peers). The nodes will run on Tor, or at least have a Tor address. That key can be used to sign the messages. The signature also guarantees that the message has not been changed, so it is technically a MAC. In this case we will know that the message really comes from that node, which can be useful for filtering out attacker data. An attacker’s node can be recognized and banned. I think it is a good idea, and we can do that. I will update the design with that.
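The design talks about signing messages with the node’s Tor key (Ed25519). Ed25519 is not in the Python standard library, so this sketch substitutes HMAC-SHA256 with a per-node secret just to show the attach/verify flow; a real deployment would use public-key signatures so that verifiers never need the node’s secret. All function names here are illustrative:

```python
import hashlib
import hmac

def attach_tag(node_key: bytes, payload: bytes) -> dict:
    # The origin node tags the payload; any later modification of the
    # payload invalidates the tag.
    tag = hmac.new(node_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_tag(node_key: bytes, msg: dict) -> bool:
    # A relaying node recomputes the tag and rejects mismatches.
    expected = hmac.new(node_key, msg["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])
```

With real signatures the same flow holds, but verification uses the node’s public (Tor) identity key, which also ties each message to a bannable node identity.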

I think we can’t treat message pool data as trustworthy at all. We have the pool’s response with some data. Even if most of the data might be from an attacker, we need to be able to filter out the noise.

Every message can be verified by the wallet (for example, for a Swap the proof can be requested), so eventually the wallet learns the attacker’s nodes and bans their data. I think this needs to be done at the wallet level. The nodes will be responsible for the communication layer; we don’t want the nodes to be complicated.

Let me update the design with that.

@suem, it looks like Beam uses the blockchain to store the data. I really don’t want to do that because we still want data to expire pretty quickly. But we might prevent flooding with that approach. We could write some authorization token (a public key) to the chain, and it would be available for 2 weeks until the blocks are compacted. Such a token might cost a significant fee that goes to the miners. Because tokens cost money, it would be costly to attack. Every message could be associated with such a token.
The problem with that approach is that it requires a hard fork and adds complexity to the consensus. I really don’t like that.

Here is the updated version of the document:

I hope this is the final version.

@suem, please take a look. It is much simpler now.

Here is a tracking ticket for transactions. There will be many of them. I will start posting soon.

Here is a summary of the changes: