Bitcoin SV Unnecessarily Re-Invented Bitcoin Maximalism ...

HPB (High-Performance Blockchain) Whitepaper breakdown

If you'd like to read the first article I published on Reddit about HPB, please take a look here:
https://redd.it/7qt54x
 
People often skim over white papers as they simply cannot be bothered to read through them. Let’s be honest, most of them are as dull as dishwater, and even more so when full of technical blockchain-related buzzwords that most people new to cryptocurrencies simply don’t understand.
 
Well, as someone now invested in High-Performance Blockchain (HPB), I want to know and understand what the company is trying to achieve, so I’ve spent some time dissecting the white paper and actually gathering the information behind the buzzwords, to determine whether the company offers real key differentiators and unique selling points that allow the proposal to stand apart from the competition.
 
So here is my breakdown of some of the key sections from the soon-to-be-updated HPB whitepaper
 
TPS
 
Ok so TPS stands for “transactions per second” and is reasonably well recognised in the world of blockchain but often misunderstood or under-appreciated. Essentially HPB are stating in their white paper that TPS is a bottleneck for all current blockchain solutions and this bottleneck restricts development and simply will not meet future business needs.
 
So let’s just explore this for a minute. Anyone who knows Bitcoin and Ethereum and has tried to transfer coins from a wallet to an exchange (or vice-versa) may at some point have experienced slow transfer or “transaction” times. This usually happens when the network is congested, and transactions which normally take a few minutes are suddenly slowed down considerably. Let's say you are transferring some Eth to an online exchange to buy another coin, because you’ve noticed that the other coin's price is dropping and you want to catch the low price and buy in before the bounce… so you set up the transfer, increase your gas price to 50 Gwei to get things moving quicker, and then you wait for your 12 block confirmations before the Eth appears in your exchange wallet. You wait 10-15 minutes and the Eth suddenly appears, only to find the price has already bounced on the coin you wanted to buy, and it’s already up 10% on what it was 15 minutes ago! That delay has just cost you $500!
 
Delay can be extremely frustrating, and can often be expensive. Now whilst individuals tend to tolerate slight delays on occasion (for now!), it will simply be unacceptable moving forward. Imagine typing in your PIN at a cashpoint/ATM and having to wait 4-5 minutes to get your money! No way!
 
So TPS is important….in fact it’s more than important, it’s fundamental to the success of blockchain technology that TPS speeds improve, and that blockchain networks scale accordingly! So how fast are the current TPS rates of the big cryptos?
 
Here is the estimated TPS of the Top 10 cryptos. I should point out that this is the CURRENT TPS speed. Almost all of the cryptos mentioned have plans in the pipeline to scale up and improve TPS using various ingenious solutions, but as of TODAY this is the average speed.
 
  1. Bitcoin ~7 TPS
  2. Ethereum ~15 TPS
  3. Ripple ~1000 TPS
  4. Bitcoin Cash ~40 TPS
  5. Cardano ~10 TPS
  6. Litecoin ~56 TPS
  7. Stellar ~3700 TPS
  8. NEM ~4 TPS
  9. EOS ~0 TPS
  10. NEO ~1000 TPS
 
Like I say, almost all of these have plans to increase transaction speed and plans to address scalability, but these are the numbers I have researched as of this particular moment in time.
 
Let’s compare this to Visa, the global payment processor, which has an “average” daily peak of around 4,500 TPS and is capable of 56,000 TPS.
 
Some of you may say, “Well that doesn’t matter, as in a few months’ time [insert crypto I own here] will be releasing [insert scalability plan of my crypto here] which means it will be capable of running [insert claimed future TPS speed of my crypto here] so my crypto will be the best in the world!”
 
But this isn’t the whole story….. far from it. You see this doesn’t address a fundamental element of blockchain…..and that is the PHYSICAL transference of information from one node to another to allow for block validation and consensus. You know….the point where the data processed moves up and down the OSI stack and hits the physical layer on the network card and gets transported through the physical Ethernet cable or fibre that takes it off to somewhere else.
 
Also, you have to factor in the actual transaction size (measured in bytes or kilobytes) being transferred. Visa transactions vary in size from about 0.2 kilobytes to a little over 1 kilobyte. To maintain 4,500 TPS at an average of 0.5KB (512 bytes) per transaction, you need to be physically transporting roughly 2.3MB of data per second. OK, so this seems tiny! Many of us have 100Mb broadband at home and the network cards (NICs) in modern machines can run at 10Gb… so 2.3MB/s is nothing… for now!
 
If we go back to actual blocks on the blockchain, let’s first look at Bitcoin. It has a fixed 1MB block size (1,000,000 bytes) and produces a block roughly every 10 minutes, which is why it can only handle around 7 TPS and, on average, only needs to move about 1.7KB of block data per second. Still pretty small and easy to cope with… Well, if that’s the case, then why is Bitcoin so slow?
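(For anyone who wants to check the arithmetic in the last two paragraphs, here is a tiny back-of-the-envelope calculator in Python. The transaction sizes and block interval are the assumptions stated above, not measured figures.)

    # Back-of-the-envelope throughput calculator for the figures discussed above.
    # Assumptions (from the text): Visa averages ~4,500 TPS at ~512 bytes per
    # transaction; Bitcoin produces one 1,000,000-byte block roughly every 600 s.

    def bandwidth_mb_per_s(tps: float, avg_tx_bytes: float) -> float:
        """Raw payload that must move across the wire, in megabytes per second."""
        return tps * avg_tx_bytes / 1_000_000

    visa = bandwidth_mb_per_s(tps=4_500, avg_tx_bytes=512)
    print(f"Visa payload:   ~{visa:.2f} MB/s")            # ~2.30 MB/s

    block_bytes = 1_000_000        # fixed Bitcoin block size
    block_interval_s = 600         # ~10 minutes per block
    bitcoin = block_bytes / block_interval_s / 1_000_000
    print(f"Bitcoin blocks: ~{bitcoin * 1000:.2f} KB/s")   # ~1.67 KB/s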
 
Well, consider the millions of transactions being requested every day, and that only 1MB of data fits into a single block. The transactions that make it into the next block (max 1MB of data) get processed first, but all the rest have to wait and hope they make it into the block after that… or maybe the next one? Or the next one? Or the one after that?
 
Now the whole point of “decentralization” is that every node on the blockchain network is in agreement that the block is valid… reaching this consensus typically takes around 10 minutes, as the network fully “syncs” on the broadcast block. Once the entire network is in agreement, it starts to “sync” the next block. Unfortunately, if your transaction isn’t at the front of the queue, you can see how it might take a while for it to get processed. So is there a way of moving to the front of the queue, similar to the way you can get a “queue jump pass” at a theme park? Sure there is… you can pay a higher-than-average transaction fee to get prioritized… but since transaction fees are paid in the cryptocurrency itself, the greater the value of the crypto becomes (i.e. the more popular it becomes), the more those fees cost you in real terms just to make your transactions.
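(The “queue jump pass” works because miners generally sort the waiting transactions by the fee paid per byte and fill the block from the top. Here is a toy sketch of that selection logic, with made-up transaction sizes and fees.)

    # Toy model of fee-based prioritisation: fill a size-limited block with the
    # waiting transactions that pay the highest fee per byte.
    MAX_BLOCK_BYTES = 1_000_000

    # (txid, size_bytes, fee_satoshis) -- illustrative values only
    mempool = [
        ("tx_a", 250, 50_000),   # 200 sat/byte -> jumps the queue
        ("tx_b", 400, 8_000),    #  20 sat/byte
        ("tx_c", 300, 3_000),    #  10 sat/byte -> likely waits when blocks are full
    ]

    def select_block(txs, limit=MAX_BLOCK_BYTES):
        block, used = [], 0
        for txid, size, fee in sorted(txs, key=lambda t: t[2] / t[1], reverse=True):
            if used + size <= limit:
                block.append(txid)
                used += size
        return block

    # With a (deliberately tiny) limit, the low-fee transaction misses the block:
    print(select_block(mempool, limit=700))   # ['tx_a', 'tx_b']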
 
Once again using the cashpoint ATM analogy, it’s like going to withdraw your money and being presented with options on screen along the lines of: “You can have your money in around 10 minutes for $50, or you can wait 20 minutes for $20, or you can camp out on the street until tomorrow and get your money for $5.”
 
So it’s clear to see the issue…..as blockchain scales up and more people use it, the value of it rises, the cost to use it goes up, and the speed of actually using it gets slower. This is not progress, and will not be acceptable as more people and businesses use blockchain to transact with money, information, data, whatever.
 
So what can be done? …Well you could increase the block size……more data held in a block means that you have a greater chance of being in a block at the front of the queue……Well that kind of works, but then you still have to factor in how long it takes for all nodes on the blockchain network to “sync” and reach consensus.
 
The more data per block, the more data there is that needs to be fully distributed.
 
I used Visa as an example earlier as it processes very small amounts of transactional data. Essentially this average 512 bytes will hold the following information: transaction amount, transaction number, transaction date and time, transaction type (deposit, withdrawal, purchase or refund), type of account being debited or credited, card number, identity of the card acceptor (organization/store address), as well as the identity of the terminal (company name from which the machine operates). That’s pretty much all there is to a transaction. I’m sure you will agree that it’s a very small amount of data.
 
Moving forward, as more people and businesses use blockchain technology, the information transacted across blockchains will grow.
 
Let’s say (just as a very simplistic example) that a blockchain network is being used to store the details of a property deed via an Ethereum Dapp, and there is the equivalent of 32 pages of information in the deed. Well, one ASCII character stored as data represents one byte.
 
This “A” right here is one byte.
 
So if an A4 page holds, let’s say, 4,000 ASCII characters, then that’s 4,000 bytes per page, or 4,000 x 32 = 128,000 bytes of data. Now if a 1MB block can hold 1,000,000 bytes of data, then my single document alone has just consumed (128,000/1,000,000) x 100 = 12.8% of a 1MB block!
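(Here is that same calculation as a couple of lines of Python, so you can vary the page count or block size. The 4,000 characters per page is the rough assumption used above, not a standard figure.)

    # How much of a 1 MB block would a multi-page ASCII document consume?
    # Assumption from the text: ~4,000 ASCII characters (bytes) per A4 page.
    BYTES_PER_PAGE = 4_000
    BLOCK_BYTES = 1_000_000

    def block_share(pages: int) -> float:
        """Fraction of a 1 MB block used by a document of `pages` pages."""
        return pages * BYTES_PER_PAGE / BLOCK_BYTES

    print(f"32-page deed: {block_share(32):.1%} of one block")   # 12.8%
    print(f"One block holds {BLOCK_BYTES // (32 * BYTES_PER_PAGE)} such deeds")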
 
Now going further, what if 50,000 people globally decide to transfer their mortgage deeds? Alongside those are another 50,000 people transferring their Will in another Dapp, alongside 50,000 other people transferring their sale-of-business documents in another Dapp, alongside 50,000 people transferring other “lengthy” documents in yet another Dapp? All of a sudden the network comes to a complete and utter standstill! That’s not to mention all the other “big data” being thrown around from company to company, city to city, and continent to continent!
 
Ok in some respects that's not really a fair example, as I mentioned the 1mb block limit with bitcoin, and we know that bitcoin was never designed to be anything more than a currency.
 
But that’s Bitcoin. Other blockchains are hoping/expecting people to embrace and adopt their network for all of their decentralized needs, and as time goes by and more data is sent around, then many (if not all) of the suggested scalability solutions will not be able to cope…..why?
 
Because sooner or later we won’t be talking about megabytes of data….we’ll be talking about GB of data….possibly TB of data on the blockchain network! Now at this stage, addressing this level of scalability will definitely not be purely a software issue….we have to go back to hardware!
 
So…finally coming to my point about TPS…… as time goes by, in order for block chains to truly succeed, the networking HARDWARE needs to be developed to physically move the data quickly enough to be able to cope with the processing of the transactions…..and quite frankly this is something that has not been addressed…..it’s just been swept under the carpet.
 
That is, until now. High-Performance Blockchain (HPB) want to address this issue…..they want to give blockchain the opportunity to scale up to meet customer demand, which may not be there right at this moment, but is likely to be there soon.
 
According to this article from just over a year ago, more data will be produced in 2017 than in the entire history of civilization spanning 5,000 years!
https://appdevelopermagazine.com/4773/2016/12/23/more-data-will-be-created-in-2017-than-the-previous-5,000-years-of-humanity-/
That puts things into perspective when it comes to data generation and expected data transference.
 
Ok so Visa can handle 56,000 TINY transactions per second…. Will that be enough for blockchain TPS in 5 years’ time? Well, I’ll simply leave that for you to decide.
 
So what are HPB doing about this? They have been developing a specialist hardware-accelerated network card known as a TOE card (TOE stands for TCP/IP Offload Engine) which is capable of supporting MILLIONS of transactions per second. Now there are plenty of blockchains out there looking to address speed and scaling, and some of them are truly fascinating, and they will most likely address scalability in the short term….but at some point HARDWARE will still be the bottleneck, and this will still need to be addressed like the bad smell in the room that won’t go away. As far as I know (and I am honestly happy to stand corrected here) HPB are the ONLY company right now who see hardware acceleration as fundamental to blockchain scalability.
 
No doubt more companies will follow over time, but if you appreciate “first mover advantage” you will see how critical this is from a crypto investment perspective.
 
Here are some images of the HPB board
HPB board
HPB board running
Wang Xiaoming holding the HPB board
 
GVM (General Virtual Machine mechanism)
The HPB General Virtual Machine is currently being developed to allow the HPB solution to work with other blockchains, to enhance them and help them scale. Currently the GVM is being developed for the NEOVM (NEO Virtual Machine) and the EVM (Ethereum Virtual Machine), with others planned for the future.
 
Now a lot of people feel that if Ethereum were not hampered by scalability issues, then it would be THE de-facto blockchain globally (possibly outside of Asia, due to things like Chinese regulation), and that NEO is the “Ethereum of China”, developed specifically to accommodate things like Chinese regulation. So if HPB is working on a hardware solution to help both Ethereum and NEO, then in my opinion this could add serious value to both blockchains.
 
Claim of Union Pay partnership
To quote directly (verbatim) from the whitepaper:
After listening to the design concept of HPB, China's largest financial data company UnionPay has joined as a partner with HPB, with the common goal of technological practice and exploration of financial big data and high-performance blockchain platform. UnionPay Wisdom currently handles 80% of China's banking transaction data, with an annual turnover of 80 trillion yuan. HPB will join hands with China UnionPay to serve all industry partners, including large banks, insurance, retail enterprises, fintech companies and so on.
 
Why is this significant? Have a read of this webpage to get an idea of the scale of this company:
http://usa.chinadaily.com.cn/business/2017-10/10/content_33060535.htm
 
Now some people will say there’s no proof of this alliance, and trust me, I am one of the biggest sceptics you will come across… I question everything!
 
Now at this stage I have no concrete evidence to support HPB’s claim, however let me offer you my train of thought. Whilst HPB hasn’t really been marketed in the West (a good thing in my opinion!), the leader of HPB, Wang Xiaoming, is literally attending every single major Asian blockchain event to personally present his solution to major audiences. HPB also has the backing of NEO, which angel-invested in the project.
 
Take a look at this YouTube video of Da Hongfei talking about NEO and bringing up a slide at the recent “Blockchain Revolution Conference” on January 18th 2018. If you don’t want to watch the entire video (it’s obviously all about NEO), then skip forward to exactly 9m13s and take a look at the slide he brings up. You will see it shows HPB. Do you honestly think Da Hongfei, the leader of NEO, would put details of a company he felt to be untrustworthy in front of a global audience?
Blockchain Revolution 2018 video
 
Here are further pictures from numerous events at which HPB’s very own Wang Xiaoming has presented HPB….. in the blockchain world he is very well respected, having released multiple whitepapers and published several books on blockchain technology over the years. This is a “techie” with a very public profile….. this is not some guy who knows nothing about blockchain looking to scam people with a dodgy website full of lies and exaggerations!
Wang Xiao Ming presentation at Lujiazui Blockchain event
Wang Xiao Ming presenting at the BTAS2017 summit
Wang Xiao Ming Blockchain presentation
 
I won’t go into some of the other “dubious” altcoins on the market that claim to be in bed with companies like IBM, Huawei, Apple etc., but when you do some digging and find they have a registered address at a mail drop and only 3-4 Baidu links about the company on the internet, you have to question their trustworthiness.
 
So do I believe in HPB…..very much so :-)
 
Currently the HPB price sits at $6.00 on www.bibox.com and isn’t really moving. I believe this is due to a number of factors.
 
Firstly, the entire crypto market has gone bonkers this last week or so, although this apparently happens every January.
 
Secondly the coin is still on relatively obscure exchanges that most people have never heard of.
 
Thirdly, because of the current lack of exposure, the coin trades at low volume, which means (in my opinion... I can’t actually prove it) that crypto “bots” are effectively controlling the price range as it goes up to around $9.00 and then back down to $6.00, then back up to $9.00, then back down to $6.00, over and over again.
 
Finally the testnet proof of concept hasn’t been launched yet. We’ve been told that it’s Q1 this year, so it’s imminent, and as soon as it launches I think the company will get a lot more press coverage.
 
UPDATE - It has now been officially confirmed that HPB will be listed on Kucoin
The tentative date is February 5th
 
So, for the investors out there….. It’s trading at $6.00 per coin, and with a circulating supply of 28 million coins, it gives the company an mcap of $168,000,000
 
So what could the price go to? I have no idea, as unfortunately I do not have a crystal ball in my possession… however, some are referring to HPB as the EOS of China (only HPB has an actual working, hardware-focused product as opposed to plans for the future), and EOS currently has an mcap of $8.30 billion… so for HPB to match that mcap, the price of HPB would have to increase almost 50-fold to $296.40. Now that’s obviously on the optimistic side, but even still, it shows the potential. :-)
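(The market-cap arithmetic is easy to sanity-check. The prices and supply below are the snapshot figures quoted above, and will obviously have moved since.)

    # Market-cap comparison using the snapshot figures quoted in the text.
    hpb_price = 6.00
    hpb_supply = 28_000_000            # circulating supply
    hpb_mcap = hpb_price * hpb_supply
    print(f"HPB mcap: ${hpb_mcap:,.0f}")                          # $168,000,000

    eos_mcap = 8.30e9                  # quoted EOS market cap
    implied_price = eos_mcap / hpb_supply
    print(f"Price if HPB matched EOS: ${implied_price:.1f}")       # ~$296.4
    print(f"That is a ~{implied_price / hpb_price:.0f}x increase")  # ~49x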
 
I believe hardware acceleration alongside software optimization is the key to blockchain success moving forward. I guess it’s up to you to decide if you agree or disagree with me.
 
Whatever you do though….. remember, most importantly of all…… DYOR (do your own research)!
 
My wallet address, if you found this useful and would like to donate is: 0xd7FAbB675D9401931CefE9E633Ef525BfBa7a139
submitted by jpowell79 to u/jpowell79

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
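(To make the thought experiment concrete, here is a minimal sketch of the "last character decides the shard" convention described above. The Base58 alphabet is the one Bitcoin addresses actually use; the addresses themselves are purely illustrative.)

    # Toy "instant sharding": an address belongs to the shard named by its last
    # Base58 character, and this toy rule only accepts a transaction whose "from"
    # and "to" addresses land in the same shard.
    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_of(address: str) -> int:
        """Map an address to one of 58 shards by its final character."""
        return BASE58_ALPHABET.index(address[-1])

    def same_shard(send_addr: str, recv_addr: str) -> bool:
        return shard_of(send_addr) == shard_of(recv_addr)

    # Illustrative (not real) addresses ending in the same character:
    a = "1DemoSenderAddressxxxxxxxxxxQ"
    b = "1DemoReceiverAddressxxxxxxxxQ"
    print(shard_of(a), shard_of(b), same_shard(a, b))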
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
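As a toy illustration of that property, here is the double-spend search decomposed across shards and recomposed again, in a few lines of Python (the ledger contents and shard count are made up, and the hash-based assignment is just a stand-in for a rule like "last address character"):

    # Toy demonstration that (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE) for the
    # double-spend check: "has this coin already been spent somewhere in the ledger?"
    N_SHARDS = 4

    def decompose(spent_coins):
        """Split the set of already-spent coin IDs across N_SHARDS."""
        shards = [set() for _ in range(N_SHARDS)]
        for coin in spent_coins:
            shards[hash(coin) % N_SHARDS].add(coin)
        return shards

    def sub_solve(shard, coin):
        """Search one shard for a previous spend of `coin`."""
        return coin in shard

    def recompose(sub_results):
        """A coin is double-spent if ANY shard has seen it before."""
        return any(sub_results)

    def solve(spent_coins, coin):
        """The original, un-sharded search."""
        return coin in spent_coins

    spent = {"coin_17", "coin_42", "coin_99"}
    shards = decompose(spent)
    for candidate in ("coin_42", "coin_7"):
        sharded = recompose(sub_solve(s, candidate) for s in shards)
        assert sharded == solve(spent, candidate)   # sharded answer matches un-sharded answer
        print(candidate, "already spent?", sharded)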
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
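That students-into-queues example maps directly onto a few lines of code. This is a simplified, single-machine illustration of the pattern, not Google's actual implementation:

    # Single-machine illustration of the Map/Reduce pattern described above:
    # Map() buckets students by first name, Reduce() counts each bucket.
    from collections import defaultdict

    students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]

    # Map: emit (key, value) pairs and group them into per-key "queues"
    queues = defaultdict(list)
    for name in students:
        queues[name].append(1)

    # Reduce: summarise each queue (here, count it) to get name frequencies
    frequencies = {name: sum(ones) for name, ones in queues.items()}
    print(frequencies)   # {'Alice': 3, 'Bob': 2, 'Carol': 1}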
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) form a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (BOINC-based permissionless decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc

What the hell is going on in BTC FAQ: Noobs come here!

Hello, I'm seeing a lot of confusion about what Segwit is and what it actually does for the network. Hopefully this post can clear things up for some people. This is targeted at the noobs of the subreddit.
Everyone told me that Segwit would decrease the fees, wtf is going on???
So you've probably read about how when Segwit is activated we'll have an increased blocksize. This isn't entirely true. Segwit actually does away with the whole concept of a blocksize, replacing it with a new parameter, "block weight."
Bitcoin blocks will now have a "block weight" limit of 4,000,000. The reason for the switch from size to weight is the way it handles the different types of data in a transaction.
Inside a transaction, there are two types of data included. The first is what's called "witness data": the signature of a transaction. The signature proves that the transaction is completely valid. The other type is the transaction data, which includes who you're sending the funds to and how much you're sending. This is going to get slightly mathy from here on in, sorry.
We can convert bytes of data to weight units by saying every 1 byte of data is worth 4 weight units. However, this is only the case for transaction data; witness data is converted on a scale where every 1 byte of witness data is worth only 1 weight unit.
Let's give an example. Let's say the mempool (a big pool of all the transactions that are currently unconfirmed and waiting to be included in a block) has 1,000 transactions in it, each transaction being 1KB of data.
Now let's have each of these transactions be 400 bytes of witness data and 600 bytes of transaction data. If Segwit wasn't a factor here, 1,000 one-kilobyte transactions would fill up a 1MB block. There would be no room for other transactions.
Let's convert one of these transactions to weight units. The transaction data would be worth 2,400 units, and since the witness data is discounted it's only worth 400 weight units, giving a combined weight of 2,800. 1,000 of these transactions would give us a total weight of 2,800,000. With 1,200,000 units of space left, we can fit in a bunch more transactions!
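Here's the worked example above as code, so you can plug in different transaction shapes (the 600/400-byte split is just the illustrative figure from this example, not a typical real transaction):

    # Block-weight arithmetic for the example above.
    # Rule: 1 byte of ordinary transaction data = 4 weight units,
    #       1 byte of witness (signature) data  = 1 weight unit.
    MAX_BLOCK_WEIGHT = 4_000_000

    def tx_weight(tx_data_bytes: int, witness_bytes: int) -> int:
        return tx_data_bytes * 4 + witness_bytes * 1

    w = tx_weight(tx_data_bytes=600, witness_bytes=400)
    print(f"One example transaction: {w} weight units")            # 2,800

    used = 1_000 * w
    print(f"1,000 such transactions: {used:,} weight units")        # 2,800,000
    print(f"Room left in the block:  {MAX_BLOCK_WEIGHT - used:,}")  # 1,200,000
    print(f"Extra transactions that fit: {(MAX_BLOCK_WEIGHT - used) // w}")  # 428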
If any of this makes absolutely no sense, leave a comment down below. I'll try to help as many people understand as possible
Okay, so I get kinda how it works - why have fees been so high this week if it was activated? Why have they only been coming down in the past day or two?
Segwit isn't instantly available for everyone to use right away. To send a Segwit transaction, you first need to move your coins to a Segwit-compatible wallet. From that wallet you'll be able to send Segwit transactions. To fully realize the effect of Segwit, it will probably take weeks if not months for all the coins that are transacted regularly to be moved to Segwit wallets.
Another problem with the network at the moment is the huge hashpower oscillations. Many of you have probably heard about the fork that happened at the beginning of August. Currently, the other network is having problems due to something called the EDA, or emergency difficulty adjustment.
See, Bitcoin works so that if a bunch of people turn on mining hardware, after a certain amount of time it becomes harder to create a block. This keeps block creation to an average of about 10 minutes. The "other coin's" EDA system works so that if the average block creation rate falls below two per hour for twelve hours, the difficulty goes down so that the average is once again 10 minutes.
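As a rough sketch of the trigger just described (the real EDA's exact consensus parameters differ; this simply mirrors the description given in this post):

    # Toy version of the emergency-difficulty-adjustment trigger as described
    # above: if fewer than ~2 blocks per hour were found over the last 12 hours,
    # the difficulty drops. Parameters mirror this post's description, not the
    # exact consensus rule.
    def eda_triggers(block_timestamps, now, window_hours=12, min_blocks_per_hour=2):
        window_start = now - window_hours * 3600
        recent = [t for t in block_timestamps if t >= window_start]
        return len(recent) < window_hours * min_blocks_per_hour

    # Example: only 10 blocks found in the last 12 hours -> EDA fires
    now = 1_503_800_000
    timestamps = [now - i * 4_000 for i in range(10)]   # one block every ~66 min
    print(eda_triggers(timestamps, now))                 # True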
Here's where the problem comes in. Miners are taking advantage of this by mining the other chain when the difficulty is super low after an EDA, making it much more profitable. And once the difficulty adjusts back up through normal means, they switch back to the Bitcoin chain until another EDA happens.
Again, I'd love to help out as many people as possible get informed about what the tf is going on right now in the community, because for a newcomer this is probably massively overwhelming.
What's all this Segwit2x cheese I've been hearing about?
So the 2x part is the second half of a scaling agreement known as the New York Agreement. It was a compromise between dozens of Bitcoin businesses and over 80% of the hashpower. This is an unprecedented amount of support, the likes of which really hasn't been seen in Bitcoin before.
The original agreement was to activate Segwit ASAP and, roughly three months later, increase the blocksize to 2MB. With Segwit, that would be a new block weight limit of 8,000,000.
I suggest investigating the pros and cons of a 2x blockweight increase for yourself. There are a LOT of conflicting opinions, and you shouldn't be blindly believing anyone.
Leave a comment below if you have any further questions, I'll do my best to answer most.
If you appreciated this FAQ, feel free to send a tip :) 1Gi9uberSWjPnWT6UUKePUFUWWryqUxaPk
submitted by hrones to Bitcoin

