One of the most common things I’ve been asked for recently is a list of Pros vs Cons for an upcoming network upgrade. That makes sense: not everybody has the time for an in-depth technical dive, and keeping the community informed about the logic / rationale of other community members seems a worthwhile endeavor.
This list is not exhaustive, and I’m sure others can come up with additional points for certain sections. However, I have run it past half a dozen other prominent DigiByte community members, and their feedback is included here. The intent is a fair representation of all perspectives.
I’ve also had Kurt Wuckert Jr. confirm he’s willing to do a video stream and play devil’s advocate with me regarding this list. The goal is for us to mutually go over and pick apart as many of these points as possible. I also plan a follow-up video stream with Kristy-Leigh Minehan, the “If” part of “IfDefElse” (authors of ProgPoW).
The goal here is to provide the information that a lot of non-technical people have asked for. Although some of this could be extrapolated further with more technical detail, the idea is that these tables, together with the video streams, will give us an ELI5 summary afterwards to complement them.
Instead of simply focusing on one “path” for an upgrade, I’ve split this into 3x different questions, with a Pros vs Cons for each:
- Which algo should we implement?
- Which algo should be replaced?
- How should this be implemented?
You’ll see there is a table for each question, which we’ll briefly cover along with my initial thoughts. Keep in mind this is happening prior to my stream with Kurt Wuckert Jr., an excellent representative of the BSV community whom I believe to be level-headed, reasonable, and very articulate in his discussion / viewpoints. As such, this list may be added to following our discussion; however, at this point I feel we’re mostly at a social consensus within the DigiByte community about how to proceed.
So let’s see these tables!
There are several options here. ProgPoW seems all but guaranteed at this point, with both the miners and the community at large interested in it. Kristy-Leigh Minehan has graciously been assisting with this implementation, for which we are incredibly grateful. This would then simply require the DigiByte community to test, decide on an activation block / date, decide on an algorithm to replace, and release the updated Core wallet. The “easy stuff”, as it were.
RandomX has a lot of benefits as well, though not quite as many as ProgPoW. There’s still effort required on the implementation side to get it into the codebase; however, the offer to do so and complete the coding is currently on the table.
An alternative to RandomX would be another CPU-based algorithm, such as SIGMA or VerusHash 2.1, where the primary benefit would be the possibility of DigiByte becoming the predominant hashpower on that algorithm. This hashrate dominance is naturally not guaranteed, but it is worth considering.
Staking, while popular among certain people, has a significant number of downsides, chief among these being that it is not specifically “more secure”. However, staking could be used to complement the PoW mining, perhaps as a 6th algorithm with a very minor stake, as has been suggested (19% for each of the 5x PoW algos, with 5% staking rewards). The security benefits of such an arrangement are not entirely clear, nor is any developer currently willing to implement staking. Although anecdotal, most staking proponents seem more interested in short-term price than in actual security / distribution. Couple that with the fact that certain exchanges hold around 20–25% of the circulating supply: if they were to stake, it would be very ineffective for the “home user” to do so.
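As a quick sanity check on the suggested split above (the 19% / 5% figures are from the suggestion, not a settled design), the five PoW algorithms plus the minor staking share would account for the full block reward:

```python
# Suggested (not finalized) reward split: five PoW algorithms at 19%
# of the block reward each, plus a 5% staking share.
pow_algos = 5
pow_share = 19    # percent of the block reward per PoW algorithm
stake_share = 5   # percent of the block reward for staking

total = pow_algos * pow_share + stake_share
print(total)  # 100 -- the split exactly covers the block reward
```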
Hard drive mining could be an alternative to staking: a low-environmental-impact form of PoW. As with staking, though, there is no development interest, nor has there been sufficient investigation into such an option. The same goes for an alternative CPU-focused algorithm.
In the interest of fairness, I have included all 5x existing algorithms, despite the fact that it’s universally accepted that Odocrypt will stay.
Given the hashrate of DigiByte vs the global hashrate for SHA256d, it seems more prudent that it be replaced rather than Scrypt. It’s entirely possible that one day DigiByte could hold the worldwide-dominant hashrate in Scrypt. At the very least, rental attack attempts wouldn’t get anywhere near as far as they would against SHA256d.
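To make the rental-attack argument concrete: what matters is how much hashpower exists *outside* DigiByte relative to what currently secures it. A minimal sketch of that comparison, with all figures purely hypothetical placeholders (not real hashrate data):

```python
def external_ratio(global_hashrate, dgb_hashrate):
    """How much hashpower sits outside DigiByte, relative to the
    hashpower currently securing it on that algorithm. A higher
    ratio means an attacker needs to redirect or rent a smaller
    fraction of the outside market to out-hash honest miners."""
    return (global_hashrate - dgb_hashrate) / dgb_hashrate

# Hypothetical figures for illustration only: DigiByte holds a tiny
# slice of global SHA256d, but a large slice of global Scrypt.
sha256d = external_ratio(global_hashrate=100_000, dgb_hashrate=500)
scrypt = external_ratio(global_hashrate=1_000, dgb_hashrate=400)

print(sha256d > scrypt)  # True -- SHA256d is the easier rental target
```

The exact numbers don’t matter; the point is that the ratio shrinks as DigiByte’s share of an algorithm’s global hashrate grows, which is why Scrypt fares better here than SHA256d.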
Of Skein vs Qubit, there certainly seem to be more benefits in replacing Qubit, mostly due to the broader overall hardware support for Skein. Miners on Qubit hardware could switch over to Skein, but the reverse is not true, given the wider distribution of hardware vendors producing Skein-capable miners.
However, when it comes to replacing a “dominant hashrate” vs a “non-dominant hashrate” algo, I’ve personally not been able to decide which would be more beneficial. It seems to depend on which aspect of security you want to prioritize, and there is significant merit to both.
We have three different ways that we could approach this:
- Upgrade just one algorithm for now, and leave it at that
- Upgrade one algorithm now, with another in the near future, say 6–12 months down the line
- Upgrade both algorithms now in a single “Core” upgrade, staggering the algo replacements a few hundred thousand blocks apart
The reason that (3) would stagger them a few hundred thousand blocks apart is to allow the network time to build up its hashrate on the first replaced algorithm, maintaining maximum security. This also has the added benefit of requiring only one single upgrade of the Core wallet, rather than multiple upgrades over time, for service providers & vendors such as wallets, exchanges, pools etc.
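Option (3) can be sketched as two activation heights shipped in one release. Everything here is hypothetical and purely illustrative (the heights, and which algorithm is swapped for which, are not decided anywhere in this article), not actual DigiByte consensus code:

```python
# Hypothetical activation heights for a single "Core" release that
# swaps two algorithms, staggered so hashrate can build up on the
# first replacement before the second fork activates.
FIRST_SWAP_HEIGHT = 11_000_000   # hypothetical: e.g. SHA256d -> ProgPoW
SECOND_SWAP_HEIGHT = 11_300_000  # hypothetical: a few hundred thousand blocks later

def active_algos(height):
    """Return the multi-algo set in force at a given block height."""
    algos = ["sha256d", "scrypt", "skein", "qubit", "odocrypt"]
    if height >= FIRST_SWAP_HEIGHT:
        algos[algos.index("sha256d")] = "progpow"
    if height >= SECOND_SWAP_HEIGHT:
        algos[algos.index("qubit")] = "randomx"
    return algos

print(active_algos(10_999_999))  # pre-fork: the original five algorithms
print(active_algos(11_000_000))  # first swap active
print(active_algos(11_300_000))  # both swaps active, from one wallet upgrade
```

Because both heights ship in the same binary, wallets, exchanges, and pools only ever upgrade once, which is exactly the appeal of Option (3) over Option (2).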
Doing just one upgrade (Option 1) would be the simplest from a testing / implementation perspective, and it’s almost ready to go as-is; however, it seems pretty clear that we could / should be replacing at least 2 of the algorithms.
If we were to release just one algorithm replacement now and another in 6–12 months, each release would be simple to test and implement; however, we run the risk of alienating a significant portion of the network with two rounds of mandatory upgrades, and the two separate hard-forks pose the highest security risk of the three options.
Although a hard-fork is not something DigiByte has ever shied away from when a network upgrade carries significant merit, there is always a risk with one that should not be ignored. In this case, that risk seems best mitigated by Option (1) or Option (3).
If you would like to read up on the ProgPoW historical changes, you can do so here:
ProgPoW is a proof-of-work algorithm designed to close the efficiency gap available to specialized ASICs.
And also on Medium:
IfDefElse - Medium
ProgPoW: Progress Update #1 Testnet, AMD bug, and Reducing Compute
If you would like to read up on the RandomX audits, you can do so here: