Proposition for changing the distance rule

Thanks for this proposition.
What exactly is a “sentry”? ;o

I edited the main post to give a link to the definition.

You can find it here: sentry.


“Imagine: we might have to test all the links of 5 steps of a 100k members WoT for each sentry and for each newcomer/renewer, each time we receive a block with identities in it.”

  • We don't know for sure that there is no algorithm able to do it quickly, for a maximum of 10 000 000 members for instance
  • It is also possible to compute this at a fixed frequency, for instance every 10 days or every 40 days, rather than for “every block”. The computation then becomes much easier.

I think this problem is similar to the Floyd-Warshall algorithm, whose complexity is O(n³).

Suppose it is O(n³) and can be solved in 30 secs for 10 000 000 members; then for 100 000 000 members it would take 30 secs × (100 000 000)³ / (10 000 000)³ = 30 secs × 1000 = 30 000 secs ≈ 8 h 20 min.
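To make the O(n³) cost concrete, here is an illustrative sketch (not the uCoin implementation) of the Floyd-Warshall triple loop on a tiny 4-member WoT; the member names and certification links are made up for the example.

```python
# Floyd-Warshall: the triple nested loop is where the O(n^3) cost comes from.
INF = float("inf")

def floyd_warshall(dist):
    """dist[i][j] = 1 if i certified j, INF otherwise; computes all shortest paths."""
    n = len(dist)
    for k in range(n):              # intermediate member
        for i in range(n):          # source member
            for j in range(n):      # destination member
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Tiny WoT: 4 members, certifications 0->1, 1->2, 2->3.
d = [[0 if i == j else INF for j in range(4)] for i in range(4)]
for i, j in [(0, 1), (1, 2), (2, 3)]:
    d[i][j] = 1
floyd_warshall(d)
print(d[0][3])  # -> 3 (member 0 reaches member 3 in 3 steps)
```

Running this for real WoT sizes is exactly the scaling question above: the triple loop touches n³ cells, so multiplying n by 10 multiplies the time by 1000.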

So without measuring the time needed for a given size, we cannot conclude anything; knowing whether there is a good solution for 1 000 000 members, for instance, may be enough.

OK, I will run tests with the Floyd-Warshall algorithm, and also with Dijkstra's algorithm.

The tests will consist of a random distribution of 16 links per member, with 1.000.000 members, then testing roughly 2% of random members (this is the worldwide birth rate) for distance.
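A sketch of that test harness, scaled down so it runs in seconds; the member count, link count and max distance are the parameters to vary (the real test would use 1.000.000 members and 16 links):

```python
# Scaled-down sketch of the test: random WoT, then distance check on a 2% sample.
import random

random.seed(42)
MEMBERS = 2000      # scaled down from 1,000,000 for the sketch
LINKS = 16          # certifications issued per member
MAX_DIST = 5

# Random WoT: each member certifies LINKS distinct random members.
wot = [random.sample(range(MEMBERS), LINKS) for _ in range(MEMBERS)]

def within_distance(start, max_dist):
    """Set of members reachable from `start` in at most max_dist steps (level BFS)."""
    seen = {start}
    frontier = [start]
    for _ in range(max_dist):
        nxt = []
        for m in frontier:
            for n in wot[m]:
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return seen

# Test roughly 2% of random members (the worldwide birth rate, as above).
sample = random.sample(range(MEMBERS), MEMBERS // 50)
coverage = [len(within_distance(m, MAX_DIST)) / MEMBERS for m in sample]
print(min(coverage))
```

This only needs a breadth-first traversal per tested member, not an all-pairs computation, which is why it stays cheap even when the member count grows.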

I will then be able to vary the number of links and also the number of members.

The problem is not the frequency of blocks, but the frequency of newcomers/renewers: each one triggers a distance test against the WoT.

Also, the cost lies in extracting the links to build a WoT matrix that can be computed in-memory.

Finally, the current rule still has a big problem, as an attack has been described here. So the current distance rule cannot be applied to the whole WoT at any time, only to a part of it.

Maybe you could use networkx to do so (or maybe I could, since I already know the library).

This library can do this kind of graph computation.

As it is implemented in Python, it's not like the ucoin nodes, but it's probably not too far from them.

Well, I planned to use C code, both for fun and because we do not exactly need the shortest path, just to know whether at least one path exists.

This is probably an easier challenge than shortest path.

In the worst case, its complexity is the same as shortest path, though.

What I said still applies. You can run a test every n days, so a newcomer has to wait for the next WoT test to be fully in; I don't see any problem with that. So you don't need to compute at a frequency you don't control.

There are rule-based solutions for that. For instance, to be counted in the links calculation, a member must have at least a recursive link within a Dmax path. With that rule this attack cannot occur.

Could you explain a bit more?

The point of this attack is to block expansion of the WoT by limitation. To be a member you need at least one path of length ≤ Dmax to at least one other member. So you cannot have members whose link paths are all < Dmax.

Another rule can be: Sum(from path=1 to path=2) of all linked people ≥ 100. So you don't need 100 people directly, but 10×10, which is much harder to control indirectly. And so you cannot have a WoT smaller than 100 (within 2 paths), and that Sybil attack with 4 people cannot occur.
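A small sketch of that rule (the data layout and function names are mine, not from the protocol): count the distinct members linked within paths of length 1 or 2, and require the sum to reach the threshold.

```python
# Sketch of the proposed rule: accept a member only if the number of distinct
# members linked within 1 or 2 steps is >= a threshold (100 in the proposal).
# Assumption: `wot` maps each member to the set of members linked to him.

def linked_within_two_steps(wot, member):
    """Distinct members reachable from `member` via paths of length 1 or 2."""
    one_step = set(wot.get(member, ()))
    two_steps = set()
    for m in one_step:
        two_steps |= set(wot.get(m, ()))
    return (one_step | two_steps) - {member}

def passes_rule(wot, member, threshold=100):
    return len(linked_within_two_steps(wot, member)) >= threshold

# Tiny example: "a" is linked to b and c, each of whom is linked to 2 others.
wot = {"a": {"b", "c"}, "b": {"d", "e"}, "c": {"f", "g"}}
print(len(linked_within_two_steps(wot, "a")))  # b, c, d, e, f, g -> 6
```

With 10 direct links each bringing 10 more, the sum reaches 100 without needing 100 direct certifications, which is the point of the rule.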

Generally, what could be useful to help other contributors think about the WoT rules would be to propose some diagrams, with definitions of the constants and time-history events in WoT(t) and WoT(t+dt), illustrating some problems and what is happening. With text only, it is not easy to “see” what is happening.

OK, so this is my rule with s=2. I think it's good. :slightly_smiling:

On the complexity and RAM use:
For a single joiner/renewer, checking whether he reaches all or x% of the sentries within distance n is linear in the size of the WoT, since each member is visited at most once.
The trick is to check all sentries in the same run.

Here is the algorithm:

list a = empty
list b = [ the new member to check ]

for i = 1 to maxdistance
    a = b
    b = empty

    while a is not empty
        take m = the first member of list a
        remove m from a

        for each certification c of m
            if c.issuer is not marked, then mark c.issuer and put it in list b
        end for
    end while
end for

The memory per member, with an average of fewer than 10 certifications per member, is around 10 × 8 bytes (64-bit pointers).
So let's say 100 bytes. With 1 GB of RAM we should therefore be able to handle 1.000.000.000 / 100 = 10 million people.
that looks ok for the beginning :slightly_smiling:
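The pseudocode above, transcribed into a runnable Python sketch; the assumption is that `certs[m]` lists the issuers of the certifications of member m (the pseudocode's `c.issuer`), and the sentry set and member names in the example are made up.

```python
# Runnable transcription of the level-by-level algorithm above.

def reached_sentries(certs, newcomer, sentries, max_distance):
    """Count the sentries reached from `newcomer` within max_distance steps,
    marking every member only once so all sentries are checked in a single run."""
    marked = {newcomer}
    b = [newcomer]                        # list b = the new member to check
    for _ in range(max_distance):         # for i = 1 to maxdistance
        a, b = b, []                      # a = b; b = empty
        for m in a:                       # while a is not empty, take m from a
            for issuer in certs.get(m, ()):   # for each certification c of m
                if issuer not in marked:      # if c.issuer is not marked...
                    marked.add(issuer)        # ...mark c.issuer
                    b.append(issuer)          # and put it in list b
    return len(marked & sentries)

# Tiny WoT: n is certified by s1, who is certified by s2.
certs = {"n": ["s1"], "s1": ["s2"]}
print(reached_sentries(certs, "n", {"s1", "s2"}, max_distance=2))  # -> 2
```

Because the `marked` set prevents revisiting, each member and each certification is touched at most once per check, which matches the linear-cost claim above.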

Later on, if needed, we could also partition the members into groups as an abstraction.

For example, one group contains all members that fulfill the WoT rules towards the members of this group with max distance d1.

Each group trusts all groups that any single member of this group trusts.

On the second level, the WoT is tested for each group exactly as the member WoT is, with max distance d2.

Because we know that each member is within distance d1 of each member of its group, and that each group is within distance d2 of each group, we know that d1 + d2 + d1 ≤ dmax guarantees the rule on a global scale.

This would only increase the count of attackers from 4 to Dmax+1, and is therefore no true solution to this attack.

This solution is better, but only increases the count of attackers to 100.
A better solution, as suggested in the attack topic, is to require reaching x% (let's say 3/4) of the sentries to become a sentry,
and to require a newcomer to reach only x% (let's say 3/4) of the sentries.

Another solution would be to count only two-way certifications as true trust relations.
Of course, both solutions could be combined.

Yes, it seems a nice combined solution because of its symmetrical and scalable properties.

There is a point to think about: proportions are not always the way to obtain a good scalable law; it often depends on the subject. I'm not sure a proportional law (x%) is the most precise one for graphs.


Yeah, I think it would solve many problems and would make the need for sentries obsolete.
The only remaining situation is what to do if the web of trust is broken.

In this case, the solution outlined above, requiring only x% of the members, could work well.
A more “strict” solution would be to count which of the broken WoT parts has more members:
the new WoT is then the biggest part.

Just to have an example to better understand the implications: in a way, the WoT could be seen as a liquid democracy with (currently) only positive votes/delegations (vote/delegation = certification), where the max distance of your delegated vote is dmax.

Currently you need the votes of 100% of the sentries to be in; therefore one sentry is enough to block all newcomers/renewers.
With 3/4 of the votes, 1/4 of the sentries could block a new member; or, the other way round, you need the votes of 3/4 of all sentries to join or renew your membership.

Both-way certifications are a wrong good idea, since they would alter the motivation for certifying someone (do I certify him because I checked his identity, or to satisfy the both-ways rule?) and change the meaning of the certifying act: some days it means A (I checked his identity), some days B (I want a both-ways path), other days A and B.

uCoin's certification meaning today is unequivocal: I trust this identity to exist and to represent a single human in the currency.

If you give two meanings to a certification, then the WoT won’t have any meaning at all.

Actually, we can work with 32-bit integers, so you can divide memory consumption by 2.

But since we also need the original WoT in order to traverse it, you can then multiply by 2.

So yes, 100 MB for 1.000.000 people, that's good :slight_smile:
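Spelling out that estimate (the per-member figures are the thread's assumptions, not measurements):

```python
# Back-of-the-envelope check of the estimate above: ~10 certifications per
# member, 4 bytes per link (32-bit integers), times 2 copies of the WoT
# (the working matrix plus the original kept for traversal).
members = 1_000_000
certs_per_member = 10
bytes_per_link = 4      # 32-bit integer instead of a 64-bit pointer
copies = 2              # working copy + original WoT

total_bytes = members * certs_per_member * bytes_per_link * copies
print(total_bytes // (1024 * 1024))  # -> 76 MiB, i.e. roughly the 100 MB above
```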