Common difficulty & proof-of-work behavior

I am trying to figure out what values we could choose for the time-related parameters:

  • avgGenTime: the average time for mining a block
  • dtDiffEval: the period of difficulty reevaluation
  • medianTimeBlocks: the number of blocks used to evaluate the median time

A few months ago, I had the idea of a super precise method for evaluating and tuning the common difficulty.

But as I tried it today, with avgGenTime = 1 minute, I got something like this:

See the green and purple lines? That’s our target range of [56’’, 64’’], which stands for 1 minute with this method.

See the orange line? That’s the actual time taken to generate a block, measured over the last dtDiffEval = 20 blocks.

So, this method seemed quite sexy. Why isn’t it working?

A study of the SHA256 function

The proof-of-work, our mechanism to make the network « wait » for the next block, uses the SHA256 hash function. A hash function, in a few words, transforms some data into a fixed-length byte array.

For example:

SHA256("I am a super string to be hashed.") = f74df019052e9e2dd88fbe8a813b638ebca3349741abbd8f75c090ecf16c748d

If our « super string » was a block, f74df019052e9e2dd88fbe8a813b638ebca3349741abbd8f75c090ecf16c748d would be the hash of our block. We could say our proof starts with one « f ».
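
For illustration, here is a minimal sketch in Python (not the actual node code) showing how such a digest is obtained with the standard hashlib library:

```python
import hashlib

# Hash an arbitrary string and display it as hexadecimal,
# like the example above: 64 hex characters, i.e. a fixed-length 32-byte array.
data = "I am a super string to be hashed."
digest = hashlib.sha256(data.encode("utf-8")).hexdigest()
print(digest)
```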

The whole goal of a proof-of-work is to find a hash for our block, by changing just one of its fields, so that the proof starts with « x zeros ». For example:

00a1c038a718c45abb95897bf36754322d67a23cb8a93ce0acdecf7abbc7e8d0

would be a valid proof if the required difficulty was 0 + 0 + a = 16 + 16 + 10 = 42

The rule about how many zeros are required is defined in the protocol, in the proof-of-work section.
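
To make this concrete, here is a minimal sketch of such a search, assuming a simplified rule where the difficulty is just a number of leading zeros (the real protocol rule is more fine-grained, as noted above); the `mine` function and its nonce field are hypothetical names for illustration:

```python
import hashlib
import itertools

def mine(block_data: str, zeros: int) -> tuple[int, str]:
    """Try successive nonces until the block hash starts with `zeros` zeros."""
    target = "0" * zeros
    for nonce in itertools.count():
        candidate = f"{block_data}|nonce={nonce}"
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, proof = mine("some block content", zeros=4)
print(nonce, proof)
```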

I was wondering how SHA256 behaves according to the difficulty level, given that, by definition, a hash function should have good randomness.

A few graphs:

With 100 tries of SHA256

With 1,000 tries

With 10,000 tries

With 100,000 tries

With 1,000,000 tries

You get the point: like many random functions, the more tests we make, the better we see its distribution.
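
The same observation can be reproduced without plotting anything: a rough Monte Carlo sketch (my own, not the code behind the graphs above) that counts, over N random inputs, how many SHA256 hashes start with a given hex digit; the observed share gets closer to the theoretical 1/16 as N grows:

```python
import hashlib
import os

def observed_share(n_tries: int, first_char: str = "0") -> float:
    """Fraction of random inputs whose SHA256 hash starts with `first_char`."""
    hits = 0
    for _ in range(n_tries):
        digest = hashlib.sha256(os.urandom(32)).hexdigest()
        if digest.startswith(first_char):
            hits += 1
    return hits / n_tries

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} tries: observed {observed_share(n):.4f} vs expected {1/16:.4f}")
```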

The impact on the median time

From these graphs I conclude: the lower the difficulty, the worse the precision of the proof-of-work’s impact on the generation time.

Hence, if we want a tight target for our block generation time, we need more computations. Given that our best possible interval with this method is 6% (see the 1.06 factor in the first link of this post), we need the standard deviation to be under 6%.

This corresponds to 10,000+ tries in our graphs.

Do you know how many tries we do on average for super_currency? Around 5,000. So we are not that bad, but still, this is not the only problem.
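
One way to read this 6% threshold (my own interpretation, assuming each hash is modelled as an independent trial that meets the difficulty with probability p): the relative standard error of the measured hit rate over N tries is sqrt((1 − p) / (N · p)), so the number of tries needed for a given tolerance can be estimated like this:

```python
def tries_needed(p: float, tolerance: float = 0.06) -> int:
    """Number of hash tries needed so that the relative standard error
    of the observed hit rate falls below `tolerance` (Bernoulli model)."""
    return int((1 - p) / (tolerance ** 2 * p)) + 1

# Example: a difficulty met with probability 1/16 (one leading zero)
# already requires on the order of a few thousand tries.
print(tries_needed(1 / 16))
```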

The dtDiffEval and medianTimeBlocks parameters

The dtDiffEval parameter is the period at which we re-evaluate the difficulty. If dtDiffEval = 10, we re-evaluate the difficulty every 10 blocks.

Let’s assume we put dtDiffEval = 1, so we evaluate the difficulty each time a block is issued. This gives us an instantaneous speed for every block issued. And between 2 blocks, we get the speed difference.

But a higher dtDiffEval value would give a better precision of the speed, because it would accumulate more data. This is even more true when the network’s CPU power increases or decreases, because it would smooth the speed variations.
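
As an illustration, here is a simplified sketch of what such a periodic re-evaluation could look like (my own simplification, not the protocol’s exact formula): every dtDiffEval blocks, compare the actual duration of that window with the target and nudge the difficulty accordingly.

```python
def reevaluate_difficulty(difficulty: int,
                          block_timestamps: list[int],
                          dt_diff_eval: int,
                          avg_gen_time: int) -> int:
    """Naive re-evaluation: raise the difficulty if the last dtDiffEval blocks
    came faster than the target window, lower it if they came slower."""
    window = block_timestamps[-(dt_diff_eval + 1):]      # dtDiffEval intervals
    actual_duration = window[-1] - window[0]
    target_duration = dt_diff_eval * avg_gen_time
    if actual_duration < target_duration:
        return difficulty + 1
    if actual_duration > target_duration:
        return max(difficulty - 1, 1)
    return difficulty
```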

Yet, this is not enough. We have medianTimeBlocks, which says over how many previous blocks the current time of the blockchain (the medianTime) is computed. From the beginning of this post we reasoned with medianTimeBlocks = 1, but what happens with higher values?

We get a time shift, an offset between real world time and blockchain time.
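
Here is a small sketch of that offset, assuming medianTime is simply the median of the last medianTimeBlocks block timestamps: with regularly spaced blocks, the blockchain time lags behind real time by roughly medianTimeBlocks / 2 blocks.

```python
from statistics import median

def median_time(block_timestamps: list[int], median_time_blocks: int):
    """Blockchain time = median of the last `median_time_blocks` timestamps."""
    return median(block_timestamps[-median_time_blocks:])

# Blocks issued exactly every 60 s: real time is 1200 s at block 20,
# but the blockchain time lags behind by roughly (medianTimeBlocks / 2) * 60 s.
timestamps = [60 * i for i in range(1, 21)]
print(median_time(timestamps, median_time_blocks=11))  # 900, i.e. 300 s behind
```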

And so, depending on this value, we should consider adapting the dtDiffEval value to give the median time the opportunity to include the time variation induced by the difficulty level variation.

This dtDiffEval should always be >> medianTimeBlocks. Probably a multiple of it, I don’t know yet.

Now this still does not tell me what to choose. :slight_smile: But I wanted to share this.

The harder the proof-of-work, the more precise the average generation time.

We still need experimentation to confirm this.
