Tomorrow, November 15 at 04:40 UTC, the biggest computer battle ever waged will take place.
The Bitcoin Cash network is secured by about 5 EH/s of computing power (5,000,000,000,000,000,000 dSHA256 hashes per second). At the moment, a malicious actor, Craig Wright, controls about 75% of that power. He intends to cause a hard fork on November 15 and make BitcoinSV the leading Bitcoin Cash implementation.
His goal appears to be to destroy Bitcoin Cash. His Twitter feed (seriously, check it out) has become unhinged over the last few weeks. He appears to want to take away the permissionless aspect of Bitcoin Cash. Transactions that use 'non-allowed' op codes would become recoverable by miners (of which he is, conveniently, the majority). He also talks of recovering funds from addresses that have been inactive for a long time. I believe the end goal is to recover Satoshi's coins.
In just under 24 hours, the war will start. It is likely that hashrate will be diverted from BTC to defend against this attack. This will result in a lower hash rate for BTC, slower block times, and likely transaction congestion, at least until the next difficulty adjustment.
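To make the block-time effect concrete, here's a toy back-of-envelope model (my own illustration, not anything from the client code): until the difficulty retargets, expected block time scales inversely with the fraction of hashrate still mining at the old difficulty.

```python
# Toy model: expected block time after hashrate leaves a chain,
# before the difficulty has had a chance to adjust.

TARGET_BLOCK_TIME = 600  # seconds; the 10-minute target both chains share

def expected_block_time(remaining_fraction: float) -> float:
    """Seconds per block if only `remaining_fraction` of the hashrate
    the current difficulty was calibrated for is still mining."""
    return TARGET_BLOCK_TIME / remaining_fraction

# If, say, 20% of BTC's hashrate is diverted to fight on BCH:
print(expected_block_time(0.80) / 60)  # 12.5 -> ~12.5-minute blocks
```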
During this time, block reorgs on the Bitcoin Cash network are likely. Transactions may be undone during the attack. It is also possible that only empty blocks will be mined, preventing any transactions from occurring.
There is also speculation that 'poison blocks' will be used as part of the attack. The new SV client allows up to 128 MB blocks. However, the current software only has a throughput of ~22 MB before other limits come into play. It is speculated that Craig Wright will use malicious pre-computed blocks to 'poison' the network. These blocks would take a long time to validate on honest nodes, giving CW a head start on mining the next block.
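To see why slow validation is an advantage, here's a back-of-envelope model (entirely my own; the Poisson approximation is standard for mining, but the delay figure is made up): while honest nodes are stuck validating, the attacker mines unopposed on top of his own block.

```python
import math

BLOCK_INTERVAL = 600  # seconds; Bitcoin Cash's target block time

def head_start_advantage(attacker_share: float, validation_delay: float) -> float:
    """Probability the attacker finds the next block while honest
    nodes are still validating the poison block. Block discovery is
    approximately a Poisson process, so the attacker's chance of a
    hit within the delay window is 1 - exp(-rate * delay)."""
    attacker_rate = attacker_share / BLOCK_INTERVAL  # attacker blocks/sec
    return 1 - math.exp(-attacker_rate * validation_delay)

# 75% of the hashrate and a hypothetical 2-minute validation delay:
print(head_start_advantage(0.75, 120))  # ~0.14, for free, on every poison block
```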
This is going to be an epic battle. It's the most expensive computer attack to ever be launched and is going to be a critical moment for the future of all cryptocurrencies, not just Bitcoin Cash.
It's going to be interesting... information during the attack can be found here: https://reddit.com/r/btc and here: https://cash.coin.dance/
EDIT: Ars just picked it up.
On or around November 15, 2018, Bitcoin Cash will undergo its third hard fork. There has been quite a bit of drama over this upcoming fork. On one side we have Bitcoin ABC, and on the other we have BitcoinSV.
When Bitcoin Cash forked from the original Bitcoin on August 1, 2017, it did so with the Bitcoin ABC code. When that happened, the Bitcoin ABC implementation became the de facto reference client. In my opinion, while Bitcoin ABC has done great work on progressing the Bitcoin protocol, their communication and community involvement leave something to be desired.
Bitcoin ABC has proposed and implemented a few changes for the upcoming fork. The biggest and most controversial of these is Canonical Transaction Ordering (CTOR). In the current implementation, transactions can be included in a block in almost any order (only a topological parent-before-child constraint applies). If/when CTOR is accepted, transactions will have to appear in one specific canonical order (lexicographically by transaction ID).
CTOR offers some advantages for block propagation. There are a number of technologies (Compact Blocks, Thin Blocks, Graphene) that allow faster block propagation. Essentially, when a block is found, it needs to be relayed to the rest of the network, and as blocks get larger, the time to transmit them also increases. These protocols avoid re-sending transactions the receiving node already has, using techniques such as short transaction IDs and Bloom filters. With a fixed canonical order of transactions inside a block, the order itself no longer has to be transmitted to propagate a block. This results in a significant (>50%) reduction in the data required to transmit a block.
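A sketch of the idea (simplified by me; real CTOR in Bitcoin ABC keeps the coinbase first and sorts the rest lexicographically by txid, and the function names here are my own):

```python
import hashlib

def txid(raw_tx: bytes) -> bytes:
    """Bitcoin transaction IDs are the double SHA-256 of the raw transaction."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def canonical_order(txids: set[bytes]) -> list[bytes]:
    """Under CTOR the block order is a pure function of the txid set,
    so sender and receiver derive it independently and the order
    never has to go over the wire."""
    return sorted(txids)

# Two nodes that learned about the same transactions in different
# orders still reconstruct the identical block body:
node_a = {txid(bytes([i])) for i in (0, 1, 2, 3, 4)}
node_b = {txid(bytes([i])) for i in (3, 0, 4, 1, 2)}
assert canonical_order(node_a) == canonical_order(node_b)
```

For scale: an arbitrary ordering of n transactions carries about log2(n!) ≈ n·log2(n) bits of information, which becomes a real cost once blocks hold hundreds of thousands of transactions.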
On the other side, we have BitcoinSV. Craig Wright is BitcoinSV's Lead Scientist. He has been claiming for several years now that he is the real Satoshi. Gavin Andresen (one of the earliest Bitcoin coders) even agreed at one point that Craig Wright was Satoshi. Craig started a company called nChain that has been working on its own implementation. They are closely affiliated with CoinGeek, which operates a mining operation/pool.
There have been criticisms of both sides. Criticisms of Bitcoin ABC's CTOR include that it's too big a change, too fast: changes to consensus rules should happen very slowly and be tested very thoroughly. The impact on CPU usage is not fully understood at this time, and any bugs in the CTOR implementation could result in an unintended chain split.
On the other side, we have BitcoinSV and Craig Wright. Craig is a character. He constantly makes outrageous claims and consistently fails to provide any proof for them. The papers he produces contain mostly plagiarized content. His personality would definitely fall into the 'asshole' category. BitcoinSV did not have any publicly released code until several weeks ago -- way too late to be taken seriously, IMO.
Craig has claimed that there will be no chain split. He has stated that his BitcoinSV implementation will win, that he 'will prevent' a chain split from happening, and that he will use his hash power offensively if required.
So here we are, two weeks away from the date the fork must occur, and no one really knows what will happen. We are about to witness the first true 'Nakamoto Consensus' hash war. It's going to be exciting to watch.
My predictions: Bitcoin ABC will easily win (it will be the clear winner in less than an hour). BitcoinSV was released way too late in the game to be taken seriously. They do not even have a testnet. How can we even know that it will work as advertised?
However, will the minority chain persist? There is intentionally no replay protection as part of this fork, so a transaction submitted to one chain will be valid on the other. People have stated that they will be replaying transactions from each chain to the other. There are still ways to force your coins to split, however.
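Conceptually, replay and coin-splitting look like this (a sketch with hypothetical node objects, not a real wallet API):

```python
# Without replay protection, nothing in a signature commits to which
# chain a transaction is for, so the same signed bytes are valid on both.

def replay(signed_tx_hex: str, abc_node, sv_node) -> None:
    """Anyone who sees a tx confirm on one chain can rebroadcast it
    on the other; both rule sets will accept it."""
    abc_node.sendrawtransaction(signed_tx_hex)
    sv_node.sendrawtransaction(signed_tx_hex)

# To force a split, spend your coins together with an input that only
# exists on one chain, e.g. an output descended from a post-fork
# coinbase. The other chain has never seen that input, so the
# transaction is invalid there and cannot be replayed.
```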
My prediction on persistence is that the minority chain will persist for quite some time (months to years), much like Bitcoin Gold has. It will have very little value, but it will have some.
There's never a dull day in Bitcoinland.
DRAMA UPDATE!!!: Craig Wright says that he will blacklist any address that uses a new op code introduced by Bitcoin ABC. Very un-Satoshi-like.
All this makes the “state” of state-specific memory quite crucial, and I haven’t yet defined it. To say “it’s all those things that current memories get attached to” is true, but not very useful. I tend to conceive of “state”, in this sense, as the active world image, but this is also a bit vague. Experiments have shown that conscious awareness of something isn’t necessary for it to participate in state. It doesn’t seem useful to conceptualize it as an object, though it is the top container of active objects. Perhaps it’s pure epiphenomenon, and not a real thing at all, but in that case one needs to explain how the activity of the rest of the system could create the illusion that “state” exists, i.e. how it would produce the observed effects.
Still, state existing as phenomenon rather than as epiphenomenon seems to create numerous problems: e.g., it seems to exist in innumerable variations and to undergo partial activation. The only problem it really seems to solve is limiting the necessity for centralized communication. So I need to address that.
Possibly an answer lies in the hierarchical embedding of objects. So, for example, kitchen remains kitchen whether or not the cat is currently being fed. There’s a time-linked variation in the “current state of activation” of kitchen. In other words, objects need to allow for components that are not always active (or even present).
The result of this is that objects linked into any nested subcomponent of the current foreground (or top active object) are linked into the entire chain, with the strongest link at the lowest level. Repeated stimulation will strengthen some links over time. Links that are not strengthened will decay, unless they are already above a threshold of strength. Perhaps there can also be degrees of strengthening, so that rubber will be strongly linked with tires, but linked to carts much more weakly.
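Here is a minimal sketch of these dynamics in my own toy formulation (all the constants are arbitrary): co-activation strengthens a link, everything decays each tick, and weak links are pruned unless they have already crossed a consolidation threshold.

```python
from collections import defaultdict
from itertools import combinations

REINFORCE = 0.10     # strength gained per co-activation (arbitrary)
DECAY = 0.01         # strength lost per tick
FLOOR = 0.05         # links weaker than this are forgotten...
CONSOLIDATED = 0.80  # ...unless they once exceeded this threshold

links: defaultdict[frozenset, float] = defaultdict(float)
consolidated: set[frozenset] = set()

def tick(active_objects: set[str]) -> None:
    # Co-active objects strengthen their mutual links.
    for a, b in combinations(sorted(active_objects), 2):
        key = frozenset((a, b))
        links[key] += REINFORCE
        if links[key] >= CONSOLIDATED:
            consolidated.add(key)
    # Everything decays; weak, never-consolidated links are pruned.
    for key in list(links):
        links[key] -= DECAY
        if links[key] < FLOOR and key not in consolidated:
            del links[key]

for _ in range(50):
    tick({"rubber", "tire"})   # frequent co-activation: a strong link
tick({"rubber", "cart"})       # a single co-occurrence: a weak link
print(links[frozenset(("rubber", "tire"))])  # ~4.5, consolidated
print(links[frozenset(("rubber", "cart"))])  # 0.09, fades unless repeated
```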
The “separation” of neurons needs consideration. Clearly sensations that are physically close should be considered close, and likely have direct physical contacts at the neuron level, but other linkages are more difficult. On the other hand, these other linkages probably only happen at a higher (i.e. more abstract) level. A snarl combined with bared teeth is at a relatively high level; a white spot combining with another white spot to form a partial image that will be parsed as a tooth is at a much lower level. At what level is centralized communication needed, and on what basis should this be decided? Well, what’s the purpose of communication? One answer is to create links between objects, so perhaps only top-level objects need to link … but this seems insufficient. Actually, the proposal seems roughly equivalent to “frames”, with lots of things left hanging, and that is known not to suffice.
I think that what is needed to solve this problem is the “state-specific memory”. When a signal is already part of the “state”, it doesn’t need the centralized communication, but can simply strengthen and expand the current state. Only when a new signal is being associated with the state does it need to communicate centrally, to determine to which state it is to be added. Since this will generate lots of false or “noisy” connections, it’s important that weak connections fade over time.
The co-occurrence of objects even in description is sufficient to create the perception of a connection. Consider how this is used in the “Grandfather’s Clock” song. It is not without reason that Crowley said that the basic rule of ritual magic is “invoke often”.
These all seem to be things that are implemented via Hebb’s Law, but the mechanism is obscure. When there is a synaptic connection, the mechanism is reasonably clear, but when there’s no connection except synchronicity it’s harder to explain. It does take many repetitions, so even a weak connection would be reinforced, rather like math tables … in fact probably exactly like math tables … but that doesn’t explain the mechanism. We know that physiologically it’s connected somehow to the hippocampus, so some specialized mechanism is quite plausible. It has to be done via “passive monitoring”, i.e. via receiving signals from the active neurons … but probably only at a rather high level. And we believe that unusual wiring in this area is behind synesthesia.
So … I am assuming that when a cluster of sensations above a threshold of strength is activated, a signal is sent to a central function that receives the signals sent during a small interval of time and establishes or reinforces a connection between them. This might strengthen the perception of boundaries between different clusters of sensation. It would also seem to foster the creation of composite objects. Perhaps it even enables the invention of new composite objects from known pieces.
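A minimal sketch of this central function, assuming made-up names and constants (the window and threshold are purely illustrative):

```python
WINDOW = 0.1     # seconds; the "small interval of time"
THRESHOLD = 0.5  # activation a cluster needs before signalling centrally

class Binder:
    """Passively receives signals from strongly activated clusters and
    links everything that arrives within the same short window."""

    def __init__(self) -> None:
        self.pending: list[tuple[float, str]] = []  # (timestamp, cluster id)
        self.links: dict[frozenset, float] = {}

    def receive(self, cluster: str, activation: float, now: float) -> None:
        if activation < THRESHOLD:
            return  # sub-threshold activity never reaches the binder
        # Drop signals that fell out of the window, then bind to the rest.
        self.pending = [(t, c) for t, c in self.pending if now - t <= WINDOW]
        for _, other in self.pending:
            key = frozenset((cluster, other))
            self.links[key] = self.links.get(key, 0.0) + 1.0
        self.pending.append((now, cluster))

binder = Binder()
binder.receive("snarl", 0.9, now=0.00)
binder.receive("bared-teeth", 0.8, now=0.05)  # inside the window: linked
binder.receive("kitchen", 0.9, now=5.00)      # outside the window: not linked
print(binder.links)  # {frozenset({'snarl', 'bared-teeth'}): 1.0}
```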
The persistence of objects means not only that they continue to exist when you can’t observe them, but also, and more primitively, that while you are watching them they remain the same object. This is probably inherent in what it means to be an object, but such a concept cannot predate the concept of object.
The distinction between within and without is not easy. Even many adults haven’t really managed it, as shown in phrases such as “You made me love you” or “You made me so angry”, where internal actions are attributed to external causes, even though others would react to the same stimuli in different ways. This is probably because episodic events tend to be externally attributed, though of course denial of responsibility is another reason. But originally denial isn’t a reason, because the mere existence of a separation between “me” and “not me” isn’t yet a given, much less its bounds.