The Ethereum blockchain size has exceeded 1TB, and yes, it’s an issue _(TL;DR: It has nothing to do with storage space limits)_
The following is an exchange I had with Vitalik following the publication of the article above. Although he initiated it, the purpose of the exchange from my perspective became to extract the underlying reason he and many other Ethereum fans don’t see sharding as diminishing the integrity of the network it’s being applied to. Fortunately he cooperatively replied to the questions I asked, which were intentionally framed to break down the logic so we could ultimately arrive at this semi-expected response. I’m highlighting it because it sums up the entirety of this article pretty well.

Vitalik: “This is the way Ethereum is now, and Sharding will be no different when we switch to it, except that some things will be easier for some people.”
Me: “I don’t agree with that at all, and here’s a full-length counter-response.”
It’s going to be long, but easy. If you care about understanding this, read it.
Here is Vitalik saying Sharding is a Layer 1 scaling solution:
Note that scaling is used here without context, so let’s provide some. We can’t keep throwing the term scalability around without an agreed-upon definition of what the word even means. Even if the definition differs from person to person, whoever is making an argument should at least provide theirs.
When I talk about scaling in this article, I’m talking about one thing and one thing only: increasing functionality without sacrificing decentralization. The total set of validating nodes is one of the most direct representations of how decentralized a network is. Focusing on anything else when discussing the scaling of blockchain networks is either a result of not properly understanding this, disagreeing with it, or an act of intentionally misleading people for whatever reason one may have to do so.
It’s important to make that clear first, because understanding the differences in the following types of scaling, and how they apply to decentralized networks, requires that baseline.

The term “node” gets thrown around a lot but more often than not remains undefined to an outsider trying to follow along. I have a friend who tried telling me that Nano lets everyone run a node and have their own blockchain. People just don’t know what they’re talking about, and it’s because the right information isn’t readily available to them. Increasing the node count is a meaningless endeavor if they aren’t the right kinds of nodes. So when I say you’re being sold a scaling-in solution, it’s because the important kinds of nodes are going to go down in number as a result of the change. Not necessarily right away, but over time, and I’ll touch on why that’s important as well.
/// Read my BCash article if you’re interested in why the “Bitcoin” Twitter account blocked me. The first thing I want to do is make a simple case for why these nodes are important, then I’ll present you with the common arguments against it. I’ll respond to those arguments briefly, then go in depth into how the Bitcoin network actually works later in this article, and you’ll be able to see just from my explanation where these anti-node arguments fall short. Just a heads-up: If you came here to read about Ethereum and you don’t want to learn about how Bitcoin’s network functions, then you deserve to lose any money you’ve invested, or any time you’ll waste developing for Ethereum.
This one I’ll respond to by diving straight into how Bitcoin’s network works. You’ll see very easily at the end of the “Bitcoin Network Topology” section how it answers this. I also published that section as a standalone article because of how rampant misinformation is spread on this subject alone. (This will all circle back to Sharding and Ethereum, trust me.)
Just to make this clear for the rest of the article: when I say decentralized I mean (c) in the following diagram (but in “4D”, not 2D), which I stole from Vitalik:
Edges are just the connections from one node to another. The following diagrams are networks of 16 nodes each: the same number of nodes, but one has far fewer edges. The other has every node connected to every other node.
The difference between these?
The first one has enough edges to propagate data (we’ll get further into this later) to every node on the network in a sufficient number of “hops”, and none of the nodes are censorable because the connections (edges) are properly distributed.
The other network?

It’s the opposite of private, and it’s the opposite of secure.
In respect to edges, there is definitely an “enough” amount, as you’ll see later. I also talk about propagation in my prior article, and it’s only getting better despite this “limited” number of edges. The “not enough edges” line is mostly an argument used by scam artists (whom even Vitalik frequently calls out) who want to sound smart while they’re scamming:
He’s also comparing a Layer 1 network to a Layer 2 network here, like an idiot.
The “how many is enough to secure it” argument is probably one of the biggest signs, in my opinion, of someone not getting the bigger picture. By asking this question they’ve already agreed to the condition that there needs to be some number of nodes to be considered secure, and now they’re just trying to steer the argument toward figuring out that value. Sometimes they’re genuine, other times they’re diverting the subject.
Here are some simple questions I don’t necessarily expect you to have an answer to. Let’s say we both determine and agree that the number is 20,000 full-nodes.

My point here is, there is no way to do this. You can’t code the network to “hover” at a certain node count. I’m very inclined to believe the following, and I’ve seen no good arguments against it: any blockchain network, if the protocol is left unchanged and demand continues to grow, will decentralize or centralize over time depending on how it is built. So if decentralization is a feature you want, the protocol needs to be inherently decentralizing. This means the protocol needs to be designed in a way that ensures the validating node count will grow over time. If growth isn’t ensured, decentralization isn’t ensured, which brings me to the next section.
Hopefully the following is clear, but I’ll elaborate further:
With a set blocksize that never goes up, running a node becomes easier and easier as technology improves, so the total node count goes up over time. This is what I mean when I say “inherently decentralizing”. When Bitcoin upgraded to Segwit, the requirements to run a node that does full-validation did go up, but only marginally. It didn’t kick anyone off the network or make it harder for people running pre-Segwit nodes, but most importantly, the network remained inherently decentralizing:
When the size was changed for Segwit, it was done for reasons other than arbitrarily “adding space”. Right now the Bitcoin blocksize is regulated: the cap is set, and it’s not being changed. If a block is too large, it’s invalid. This is ideal because it ensures a static volume of data over time. Nobody votes on this; it’s not a 51% vs. 49% situation. A block that’s too big is always invalid.
The network has no way to tell if your node mined a block, so the protocol enforces privacy equality in that sense, but the blocksize cap enforces physical equality, in the sense that there is zero differentiation between validators whether they mine or not (more on this later). Their ability to process transactions doesn’t segregate them because it’s easy for everyone’s node. Removing the blocksize cap separates these nodes into tiers, where one group has the power to cut the others off with force by creating blocks that shut down the other half of the network, destroying the network in the long-run.
Changing Bitcoin’s design to allow a variable blocksize would result in:
I also want to point out that it doesn’t matter if the cap is set at 2 MB or 8 MB. At some point technology will allow for that blocksize to be viable, but that’s another debate, because we could set the blocksize to 50 terabytes now and “just wait for it to catch up”, and Bitcoin would have become centralized and ruined by that point. Where the cap should be is a different debate; my only argument right now is that there needs to be a hard limit that doesn’t change over time, where blocks above it are invalid.
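The rule I’m arguing for is trivially simple, which is the point. Here’s a sketch (the cap value is just a stand-in, and real validation obviously checks far more than size):

```python
# Toy sketch of a hard consensus cap. Nobody votes; every node runs the
# same check, and a block over the cap is invalid everywhere, forever.
MAX_BLOCK_SIZE = 1_000_000  # hypothetical fixed cap, in bytes

def is_valid_size(block_bytes: bytes) -> bool:
    """Too big means invalid, full stop. No 51% vs. 49% situation."""
    return len(block_bytes) <= MAX_BLOCK_SIZE

assert is_valid_size(b"\x00" * 1_000_000)      # exactly at the cap: valid
assert not is_valid_size(b"\x00" * 1_000_001)  # one byte over: invalid
```

Because the check is the same on every node, there is no node whose opinion on block size matters more than any other’s.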
This leads me into the next example: Ethereum and its arbitrary “gas limit”:
Without a proper cap like in the chart above, as is the case in Ethereum, the blocksize keeps growing, and technology can never improve at a fast enough rate for you to keep running your node. This is what I mean when I say “inherently centralizing”. Unbounded growth requirements determined by a small group of centralized actors are not good. Even with Sharding, this limit will increase over time. Sharding might be temporarily successful in splitting up the work, but Ethereum’s inherent direction is south:
An Ethereum block’s size is determined by the miners, who set the gas limit for that block. If you don’t understand Ethereum’s gas limit, here’s a very simple explanation that may rustle the jimmies of some technical people: In Ethereum, instead of bytes, a block is limited by how many units of gas it can have. For this example, let’s say it’s 1000 units of gas. When you want to create a transaction, or a “contract”, it costs gas to process. Let’s say your transaction costs 2 gas and mine costs 5. We can both fit into a single block, along with 993 more units of gas worth of transactions. So when a miner makes a block, they’re limited to only including 1000 units of gas worth of transactions or the network deems it invalid.
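As a sketch of the rule just described (toy numbers from the example above, not real gas accounting):

```python
# Toy version of the gas-limit example: a 1000-gas block fits a 2-gas tx,
# a 5-gas tx, and up to 993 more units of other transactions.
GAS_LIMIT = 1000

def block_is_valid(tx_gas_costs) -> bool:
    """A block is valid only if its transactions' gas sums to the limit or less."""
    return sum(tx_gas_costs) <= GAS_LIMIT

assert block_is_valid([2, 5] + [1] * 993)      # 2 + 5 + 993 = 1000: valid
assert not block_is_valid([2, 5] + [1] * 994)  # 1001 units: invalid
```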
Except they’re not limited to 1000 units…
They can make a block with 1200 units. Then they can make a block with 1500 units. The consensus rules let them increase it gradually without it being invalid. Other miners can make smaller blocks, which helps bring the average (and thus, the limit) down, but these are only other miners. If you’re operating a fully validating node under these kinds of consensus rules you have no ability to decide this metric. Miners are a tier above all other nodes on this network, and Vitalik doesn’t even deny that.
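For the technical people whose jimmies I rustled: the example above is loose. As I understand the actual consensus rule, each block’s gas limit may drift from its parent’s by just under 1/1024th, in whichever direction the producing miner chooses, with a floor of 5000. The direction is entirely the miner’s call; a sketch of what compounding nudges can do:

```python
# Sketch of Ethereum's gas-limit adjustment rule (as I understand it):
# a block's limit may differ from its parent's by less than parent // 1024,
# chosen by the miner of that block. Non-mining validators get no say.
def gas_limit_is_valid(parent_limit: int, new_limit: int) -> bool:
    return abs(new_limit - parent_limit) < parent_limit // 1024 and new_limit >= 5000

assert gas_limit_is_valid(8_000_000, 8_007_811)      # biggest single upward nudge
assert not gas_limit_is_valid(8_000_000, 8_100_000)  # too big a jump: invalid

# How fast could miners who all agree double the limit, one nudge at a time?
limit, blocks = 8_000_000, 0
while limit < 16_000_000:
    limit += limit // 1024 - 1  # maximum valid upward step each block
    blocks += 1
assert 700 < blocks < 725       # roughly 711 blocks: a few hours at ~15s blocks
```

The nudges compound, so “gradually” here means a doubling is reachable in an afternoon if the miners want it.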
Because of this differentiation in nodes by code within the Ethereum network, Ethereum is inherently centralizing. This is the fundamental difference between Bitcoin and Ethereum’s network properties as they currently exist. Ethereum’s set of fully validating nodes doesn’t have equal voting rights because their external abilities allow them to change the protocol, which affects other nodes. In Bitcoin there are no voting rights that affect your ability to run your node.
Bitcoin is more than just a chain of blocks, and I want to help you understand how Bitcoin’s blockchain network is designed first because it’s the simplest one of the bunch, and there are fundamental attributes to its simplicity that you need to understand for the rest of this article. I say blockchain network because Bitcoin also has a payment channel network (Lightning) layered on top of it that doesn’t affect the structure of the blockchain network. I won’t be discussing Bitcoin’s Lightning network in this article though, as it’s not that relevant to the points I’ll make.
Below is a rough example of the Bitcoin network scaled down to 1000 fully validating nodes (there are really about 115,000 currently). Each node here has 8 connections to other nodes, because this is the default number of connections the client makes without any changes made to it. My node is in here somewhere, and if you’re running one, it’s in there too. Coinbase’s nodes are in there, Bitmain’s nodes are in there, and if Satoshi is still around, Satoshi’s node is in there too.
Please note that this is just a diagram, and that the real network topology can (and probably does) vary from this. Some nodes have more than the default amount of connections while others may opt to connect to a limited number or stay behind just one other node. There’s no way to know what it actually looks like because it’s designed with privacy in mind (although some monitoring companies certainly try to get very close approximations) and nodes can and do routinely change who their peers are.
I started with that diagram because I want you to understand that there are no differences between these nodes, because they all fully validate. The ones on the inside are no different than the ones on the outside; they all have the same number of connections. When you start up a brand new node, it finds peers and becomes one of the hive. The longest distance in this graph from any one of these nodes to another is 6. In real life there are some deviations to this distance because peer selection isn’t a perfectly automated process that distributes everyone evenly, but generally, adding more nodes to the network doesn’t change this. There are 6 degrees of Kevin Bacon, and in 6 hops my transaction is in the hands of (almost) every node, if it’s valid.
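That 6-hop figure isn’t magic; it falls out of simple math. Each hop multiplies your reach by roughly the peer count, so the hop count grows only logarithmically with network size. An idealized sketch (ignoring overlapping peer lists):

```python
import math

# With N nodes and k peers each, best-case reach after h hops is k^h,
# so the hops needed to cover the network is about log base k of N.
def min_hops(nodes: int, peers: int) -> int:
    return math.ceil(math.log(nodes) / math.log(peers))

assert min_hops(115_000, 8) == 6    # 8^6 = 262,144 >= 115,000
assert min_hops(1_000_000, 8) == 7  # ~10x more nodes only adds one hop
```

This is why adding more nodes to the network barely changes propagation distance.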
I’m going to select “my” node from this group and drag it out, so I can demonstrate what happens when I create a transaction and announce it to the network. Below you’ll see my node all the way to the right, and then you’ll see the 8 other nodes (peers) that mine is connected to.
When I create a transaction and “send it out to the world”, it’s actually only going to these 8 peers. Since Bitcoin is designed from the ground up to make every node a fully validating node, when these 8 nodes receive my transaction they check to see if it’s valid before sending it out to their 8 peers. If my transaction is invalid it will never break the “surface” of the network. My peers will never send that bad transaction to their peers. They actually don’t even know that I created that transaction. There’s no way for them to tell, and they treat all data as equal, but if I were to keep sending invalid transactions to any of my 8 peers, they would all eventually block me. This is done by them automatically to prevent me from spamming my connection to them. No matter who you are, or how big your company is, your transaction won’t propagate if it’s invalid.
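To make the flood-and-validate behavior concrete, here’s a toy simulation on a hypothetical random topology (not the real network, whose shape nobody knows):

```python
import random
from collections import deque

# Build a toy 1000-node network where each node initiates 8 random
# bidirectional connections, like the default Bitcoin client does.
def build_network(n=1000, peers=8, seed=7):
    random.seed(seed)
    net = {i: set() for i in range(n)}
    for i in range(n):
        for j in random.sample([k for k in range(n) if k != i], peers):
            net[i].add(j)
            net[j].add(i)  # connections work in both directions
    return net

# Flood-fill with validation: peers receive everything, but only relay
# what passes their own validation check.
def propagate(net, origin, valid):
    seen, queue = {origin}, deque([origin])
    while queue:
        node = queue.popleft()
        for peer in net[node]:
            if peer not in seen:
                seen.add(peer)
                if valid:           # invalid data is never relayed onward
                    queue.append(peer)
    return len(seen)

net = build_network()
valid_reach = propagate(net, 0, valid=True)
invalid_reach = propagate(net, 0, valid=False)
assert valid_reach == 1000               # a valid tx reaches every node
assert invalid_reach == len(net[0]) + 1  # invalid: never past my direct peers
```

The invalid transaction dies at the first hop: only my direct peers ever see it, and they refuse to pass it along.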
Now let’s say you’re not running a full-node, but you’re using a light-client instead. Various light-clients exist for the desktop, and for your mobile phone. Some of them are Electrum, Armory, Bread, and Samourai Wallet. Light-clients tether to a specific node. Some can be set up to change the one they connect to over time, but they are still ultimately tethered. This is what tethering looks like:
The reason I’m showing you this will become more apparent further on in this article, but I want you to note that this is just a diagram, and it’s easy to demonstrate tethering using a node that happens to be on the rim, but there is no real rim, and tethering is tethering wherever that node happens to be within this diagram. I’ve highlighted this in yellow. The nodes being tethered to are green, and the blue dots are light-clients. All information going to or coming from the light-client goes through the node they’re tethered to. They depend on that node. They are not part of the network. They’re not nodes. Remember this, because in Ethereum their behavior is slightly different, but their effect on the network is the same: nothing.
Here’s where it gets fun, and where other people try to misrepresent how the network actually works: What if I wanted to start mining?
Mining a block is the act of creating a block. Much like a transaction you want to send, you must create the block and announce it to the network. Any node can announce a new block, there’s nothing special about that process, you just need a new block. Mining has gotten increasingly difficult, but if you want you can purchase specialized hardware and connect it to your personal node.
Remember that bit about invalid transactions? Same goes for blocks, but you need to understand something very specific about how blocks are created.
First watch this video. I skipped to the important part about hashing, using nonces (random value) and appending the chain with that new block header:
Please watch the entire thing if you have time. It’s personally my favorite video explaining how mining works. When you get to the following part in the video where the labels “Prev hash” are applied, those are the block headers:
What’s not mentioned in this video is that you can create valid block headers even if all the transactions inside the block are invalid. It takes the same amount of time to mine a block full of invalid transactions as it does to mine a block with valid transactions. The incentive to spend all that time and energy creating such a block would be to push through a transaction that rewards you with Bitcoin that isn’t yours. This is why it’s important that all nodes check not just the block headers, but the transactions as well. This is what stops miners from spending that time. Because all nodes check, no miners can cheat the system. If all nodes didn’t check, you’d have to rely on the ones that do. This would separate nodes into “types”, and the only type that would matter would be the ones that check. Ethereum does this currently, and I’ll touch on that in the next section.
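Here’s a toy illustration of why that is (a made-up hashing scheme, nothing like real Bitcoin serialization): the proof-of-work grind only ever touches the header, which commits to the transactions through a single hash, so the work costs exactly the same whether the transactions behind that hash are valid or garbage:

```python
import hashlib

# Toy miner: grind nonces until the header hash has a required prefix.
# The header only contains a commitment (tx_root) to the transactions,
# so the grinding can't tell valid transactions from invalid ones.
def mine(prev_hash: str, tx_root: str, difficulty_prefix: str = "000"):
    nonce = 0
    while True:
        header = f"{prev_hash}{tx_root}{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1

# Same expected work whether the committed transactions are valid or not:
_, good = mine("00abc", "root_of_valid_txs")
_, bad = mine("00abc", "root_of_invalid_txs")
assert good.startswith("000") and bad.startswith("000")  # both pass the PoW check
```

Only a node that actually checks the transactions behind `tx_root` can tell the two blocks apart, which is exactly the job full validation does.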
So what if you join a mining pool? You might do this because mining is too difficult for you to do alone, or if you’re a slightly larger entity you might prefer a steady income as opposed to a sporadic one. Many miners do this, and they connect their specialized hardware directly to a mining pool using an entirely different protocol (Stratum). Just like creating a transaction with your non-node cellphone, you don’t have to run a node to connect your hardware to a mining pool. You can mine without running a node, and many miners do exactly that. Here’s what that looks like below in blue. I’ve used Slush Pool for this example:
Remember, I dragged these pool-run nodes out of the diagram for demonstration purposes. Just like any other node, these pool-run nodes need peers. They need peers to receive transactions & blocks, and they need peers to announce blocks they create. Allow me to reiterate again: all nodes validate all blocks and transactions. If any of these pools announce an invalid block, their peers will know because they fully-validate, and they won’t send it out to other nodes. Just like transactions, invalid blocks do not enter the network.
Here’s another way to look at this without pulling these nodes out from the diagram. Below is a private miner who doesn’t want to be known, it has 8 random peers, and none of those peers knows that it’s a miner. Again, this is intentionally designed this way for privacy reasons. There’s no way for any node in the network to know that the block they received was created by their peer, or relayed by their peer. All they know is if it’s valid or not, and if it is they send it along, if it’s not, they don’t.
Hopefully you’re getting the picture, and I don’t believe I used any fancy math or equations to get here. I’d like to move on because I feel like this is complete coverage, but there is one final thing I’d like to address because it’s this final aspect that is used to confuse others who don’t fully understand everything I just explained. It’s so rampantly used that I need to address it.
My original comment was talking about light-clients, also called SPV clients, and how they aren’t part of the network. I demonstrated this above with the blue tethered dots. His follow-up comment tries to imply that nodes that mine are the only nodes whose rejection matters. Remember: nodes have no way of knowing which other nodes mined a block versus who relayed a block, this was designed intentionally.
Now for a final diagram so I can try to explain the logic that’s used when people say “only mining nodes matter”. Some miners connect directly to other miners, so that some of the peers in their peer list are also miners. Not all miners do this. Some of the miners that connect directly also use optional relay networks like FIBRE, built by a Bitcoin Core developer, but even this side-network isn’t exclusive to miners; anyone can join, including you or me, and it’s just there to help blocks relay across the network. Either way, people try to argue that this interconnectivity of nodes that mine (whether using something like FIBRE or not) implies they’re the only ones that matter, and it’s absurd:
In this example I left the node’s peers inside the diagram. You should get the point by now. They reject invalid blocks. That group of nodes inside the green circles are most definitely not the only set of nodes that matter in this network, and with that being said, I think I’ve covered everything you’ll need to know about Bitcoin’s blockchain network for me to move on to Ethereum’s.
This one’s going to be relatively the same, with a few key differences. The biggest takeaway out of all of this is that your fully-validating node can’t reject blocks based on their size or the gas limit. Having no throttle on this external procedure puts pressure on these fully-validating nodes to process information at a pace they may not be able to keep up with, reducing the number of nodes over time and skewing the node set toward much larger entities.
Much like Bitcoin, Ethereum currently uses a Proof of Work system for its blockchain appending process & token distribution process. Since the intended function of the Ethereum blockchain network is different, the data that is put inside a block is also different. This won’t be about the kind of data, “smart contracts”, or anything of that sort. This will just be about the volume of that data, and the network topology.
The following diagram, like the Bitcoin one, is just a visual and not the actual topology. Instead of every node having an even distribution of peers, I’ve put the number of peers per node on a curve, because it’s well known and admitted that Ethereum is having peer issues since the node count keeps dropping, and “good” peers that serve sufficient amounts of data are hard to come by these days.
That’s what a “decentralized” network looks like when the good peers are limited in number, and it becomes problematic when people trying to sync a new Ethereum node can’t, because there just aren’t enough peers seeding the data they’re asking for. You get a small group of highly connected peers serving the blockchain to all the other ones. This is very bad for a broadcast network. What’s even worse is that the gas limit (and in turn the total blocksize) keeps going up because there’s no restriction on it, putting more strain on these limited nodes and shrinking the number that exist, despite claims that “the gas limit hasn’t moved in X months”:
The gas “limit” isn’t a limit; as I mentioned earlier, miners choose it at their leisure. The important takeaway is that Ethereum nodes don’t reject blocks no matter what the gas limit is. This is one of the fundamental differences between Bitcoin and Ethereum. Nodes aren’t set up to prevent external pressures from centralizing them with data that has no regulation. Miners are refraining from increasing this limit right now out of altruism, and because Vitalik is telling them not to. Sounds decentralized, right? This is not how you want a blockchain to function. What’s going to happen when the fees get too high?
Take Vitalik’s response, and the following blocksize chart, as you will.

Ethereum has 2 options:
Remember how I demonstrated earlier that SPV clients (that only sync headers) are tethered to a specific node and not actually part of the network? In Ethereum they took that a step further and created a “sub-network” for these light-clients, where they can share block headers. If you didn’t know, it turns out most people don’t actually run fully-validating nodes in Ethereum (for various reasons), they actually run light-clients.
/// You can also refer to my prior article.
They’ve been having some issues getting enough full-nodes to supply the light-clients with the block headers they need. Light-clients can’t stay peered with each other because people are too lenient about turning them off and on, so they become even more dependent on the full-nodes that voluntarily give them that data. In Bitcoin there’s no volunteering; all full-nodes perform the same relaying functions, and it’s easy to do. All in all, I actually don’t think there’s anything wrong with having a subnet for light-clients. Anyone who wants to run one should be able to, and having a subnet of them is a good thing: at best, fewer people have to trust specific nodes for headers; at worst, the light-clients can’t adequately meet the demand they themselves create. The issue is when the developers start calling these “nodes” and the community is led to believe that they’re contributing to the network. They’re not “nodes”, and they do nothing for the network.
And the Ethereum developers do call them nodes. The following is about Sharding, which I’ll get into next, but they shouldn’t go around telling the community that the light-clients they’re running are nodes. They get node counts that keep going up, but all that’s really happening is the light-client count goes up while the full-node count slowly drops. Calling these nodes disguises the issue.
Hopefully I’ve drilled it in by now: verifying block headers does nothing for the network. So this is a more accurate model of what the network looks like:
Seeing that, what do you think now when you see this total “node” count? Are they discerning between these nodes? Did you understand the difference prior to these articles? Even if they aren’t including the light-nodes, what’s going to happen over time?
Over time, even though that total “node” count might be going up, this is what happens to a network that is inherently centralizing and doesn’t pay mind to its set of fully-validating nodes: it gets worse.
Not only does the network start dropping in validator count, the miners begin to connect directly to each other out of necessity to avoid bad uncle/orphan rates. Uncles/orphans are dropped blocks that occur because block times are too close together. As different miners make valid blocks at the same time, you end up with two valid chains. Eventually one of those blocks is built on top of and the other is orphaned.

In this diagram the purple blocks are orphans. Do you know who loses out the most when their blocks are dropped because the network selected a different branch to follow? The smaller miners, which further centralizes the network because they can’t handle the income volatility. So now you have:
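A common back-of-the-envelope model makes the pressure obvious: if blocks take time t to propagate across the network and arrive on average every T seconds, the chance a block collides with a competitor and gets orphaned/uncled is roughly 1 − e^(−t/T). The delay numbers below are illustrative assumptions, not measurements:

```python
import math

# Rough stale-block model: propagation delay t, mean block interval T.
# The probability another block appears during the propagation window
# is approximately 1 - e^(-t/T).
def stale_rate(prop_delay_s: float, block_interval_s: float) -> float:
    return 1 - math.exp(-prop_delay_s / block_interval_s)

btc = stale_rate(2, 600)  # assumed ~2s propagation, 10-minute blocks
eth = stale_rate(2, 15)   # same assumed delay, ~15-second blocks
assert btc < 0.005        # well under 1% for Bitcoin-style intervals
assert eth > 0.10         # over 10% for Ethereum-style intervals
```

With orphan risk that high, connecting directly to the other big miners (and skipping the rest of the network) stops being optional and starts being how you stay profitable.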
The responses to raising this subject are either agreed-upon concern or complete dismissal of the issue. When people dismiss it, they typically use the “non-mining” tactic we already went over. They say, “all those nodes in the middle that are shrinking, they were never doing anything anyway unless they were mining/staking.”
Is it really the least comprehensible argument you’ve heard all this month?
At its peak price, you would have hypothetically needed $45,000 to be one of those nodes. Pooling funds doesn’t change anything, the pool runs the node, not you. Fortunately PoS is coming with Sharding bundled in so we can end this section here.
As the title states, Sharding introduces scaling-in while making you believe it’s helping Ethereum scale-out. As you can imagine it has much to do with the validating node count, but with a twist. Validating responsibilities are split up among various groups, each with their own shard. The intent is to relieve the amount of work a single validating node must do so there can be more of them, but it only results in prolonging the issue, and not fixing the problem. Furthermore, there’s now a huge cost for some of these nodes, as staking is required to be one of them.
Full Video:
This is just a blockchain news website, so it’s expected to have buzzwording and zero technical information. I’m highlighting it because it’s littered with a bunch of words and terms that seem to get glanced over by the uninformed crypto-community. “Scalability” remains undefined, “processing” needs further clarification, and every single mention of the word “node” doesn’t apply to you or your light-node. All of these mentions:
Everywhere you read or hear about Sharding, the explanation appears to be saying “things will be easier on the nodes”, but the nodes that can afford $16,000 to stake don’t need things to be easier. They can already process much larger blocks. Datacenters don’t need shards, and you won’t be running one of the important nodes on a laptop. There are many kinds of nodes in this system, and it’s still unclear which ones will actually exist when the protocol is finalized. I’ll start by explaining the basic structure, and then defining the main kinds of nodes within this system so we can highlight the ones that matter and the ones that don’t.
Sharding takes a single blockchain, turns it into multiple blockchains called Collations, then puts a twist tie on top and hopes mold doesn’t grow. Joking aside, this diagram of a single collation should help you understand:
And here’s a full one that took way too long to make look nice:
Let’s break down what you’re looking at:
Within each shard, the only nodes that matter are the Executor & Collator nodes. Both require 32 ETH to run. Every light-node can “pick a shard they ‘care’ about (if they want to), sync that shard, and the block headers of the main chain.” They probably won’t need to unless they are an application or service that depends on validating that shard because their contract sits on it.
Above you’ll see multiple Collation-chains, sets of Executor/Collator nodes that do the work on those chains (32 ETH), the “Main chain” (green), and of course your light node at the top if you selected a specific shard to “validate”.
Few things you should note:
My point here was even though “full-validation” is divided into sub-jobs, the group of nodes that do those jobs is still limited. He says these nodes would be processing less than current Ethereum nodes, but again that was never the issue. The issue is that the difficulty to do so grows over time, and the amount of nodes shrinks over time because of it. It’s inherently centralizing. Vitalik even agreed that this number would shrink over time if the gas limits kept going up, and there’s nothing stopping that from happening. Right now miners are being altruistic, but what happens when mining doesn’t even exist? What happens when it’s just staking and the people doing it don’t care about other people’s blocks getting orphaned? Why would they keep the gas limit down? Remember they can manually adjust this, so why would they intentionally keep it low if they’re hyper-connected to each other and fully capable of processing that data? What happens when they start compounding their staking earnings, setting up more nodes, and gain more control of the network?
What happens when people don’t think it’s a shitcoin though? Most are going to fail, but what happens when one of them is convincingly decentralized just enough for the time being to keep people using it?
I said all of this was going to be easy. I don’t feel like it turned out that way, but I tried to keep it as simple as I could. I mention this because I’d like to close this article with a link to the Sharding FAQ, where there’s a long list of admitted issues with Sharding, how they plan to address them, and how each one introduces a new complexity with its own issues, and another solution to resolve that new issue. It’s convoluted; it took me too long just to decipher the verbiage for the node types, but it’s only fair to provide it. My issue was never with whether it “worked”, but whether it remains decentralized. It was kind of bad prior to Sharding, but I don’t think it could be any clearer that there’s only one path for this network on a long enough timescale. If you don’t mind having centralized validators, you might as well buy EOS. They skipped the whole pretending part and went straight to being centralized. They don’t even need Sharding because they just handle the blockchain data in a centralized fashion. Google can process everyone’s payments. We don’t want Google to process everyone’s payments. We don’t want the Fortune 500 or the Forbes 400 processing them either. So what did we learn?
This is *severely* uninformed. Ethereum already has a block size limit in the form of its gas limit, and this gas limit is at 8 million and has been there for the last six months.

I addressed this above. You will raise the gas limit if Sharding isn’t ready soon enough to come in and stall this issue.
Fast sync datadir growth has leveled off for the last six months and it’s not going to go much higher, if only because increasing the gaslimit much further would lead to uncle rate centralization issues. So we *already are* experiencing the worst of it and have been for half a year.

If you don’t increase the gas limit, the fees will disable Dapps and cause outrage among the community, because they have expectations and demands. I went over this above as well. Uncle rate won’t matter when you have no other solution; right now the miners are just being altruistic by listening to you. That’s an issue in and of itself.
Also, focusing on archive node size is highly fallacious because (i) you can have a much lower datadir size by either resyncing even once per year, or just running a Parity node, which prunes for you, and (ii) it includes a bunch of extraneous data (technically, all historical states, plus Patricia tree nodes) that could be recalculated from the blockchain (under 50 GB) anyway, so you’re not even “throwing away history” in any significant information-theoretic sense by doing so. And if you *are* ok with throwing away history, you could run Parity in state-only mode and the disk requirements drop to under 10 GB.

I addressed this “conflict” when I showed the data-throughput-over-time graph. The directory size follows the same exponential growth that’s occurring with the nodes’ processing requirements. The only counter-response to this is that you won’t raise the gas limit. You will.
The whole point of sharding is that the network can theoretically survive with ZERO of what you call “full nodes”. And if there are five full nodes, those five full nodes don’t have any extra special power to decide consensus; they just verify more stuff and so will find their way to the correct chain more quickly, that’s all. Consensus-forming nodes need only be shard nodes.

I addressed sharding above.
Finally, you’re using the term “BCash” wrong; it’s an implementation, not a blockchain/cryptocurrency.
That’s not Bitcoin, that’s BCash: Or, There and Back Again, a Full-Node’s tale (gzht888.com)
1: “(…) the incentive structure of the base layer is completely broken because there is no cap on Ethereum’s blocksize (…)”
This is misleading at best and false at worst. Ethereum has a block size limit due to the block gas limit enforced by the consensus protocol.
they are unlikely to vote for a blocksize increase that would break the network
I addressed the gas limit further in this article. Thanks for motivating me. The network doesn’t break because validators drop off and peers are lost; the network would still function with two datacenters. What breaks is decentralization. The well-connected nodes have no incentive to care about less-connected nodes’ validation abilities.
2: “Even if one [blocksize cap] was put in place it would have to be reasonable, and then these Dapps wouldn’t even work because they’re barely working now with no cap.”
Nobody goes to the beach anymore — it’s too crowded.
If a blockchain network is at capacity, with all blocks filled with transactions, then all the tx senders found utility in sending their txs with their particular fees.
This completely misses the point, because this is the same argument we make in Bitcoin. It’s very popular, but Bitcoin doesn’t promise low fees and usability to Dapp developers and users. When those Dapps get priced out by basic transactions from mixers (lol) using 90% of the block space to dump their hacked/stolen coins, because mixing is worth paying more for than using silly Dapps no one really uses, what marketing do you have left? That’s the point. You can argue this down all you want, but at some point you start justifying Ethereum’s existence with only the matching properties of Bitcoin without the fancy bells and whistles, and Bitcoin does Bitcoin better.
The author continues their fallacy by directly contradicting themselves by arguing for apps on Ethereum to move over to Bitcoin. If apps become useless on Ethereum due to increased tx fees, then they would be useless on Bitcoin too if crowded out by other users who pay higher tx fees.

I suggested developers develop on top of Bitcoin. I didn’t say take the same program ideas you have that may never actually work and build them on Bitcoin. Almost all Dapps are centralized to begin with and aren’t actually “dapps”. They can all be built on Lightning. You won’t have fee issues, no matter the blocksize, on a payment channel network. Again, you could argue Ethereum can do this too, but that doesn’t give its base layer promises any extra reason to exist.
There is no such thing as apps crippling Ethereum due to high load
3: “The [Bitcoin] blocksize doesn’t restrict transaction flow, it regulates the amount of broadcast-to-all data being sent over the network.”
This is false for any sane definition of “transaction flow”. An arbitrary limit on tx/s does restrict transaction flow, as more transactions cannot flow within a given time period… And if we’re including off-chain solutions such as the lightning network as an argument that L1 tx/s limits do not decrease flow, then we should include such solutions in discussions on Ethereum too. Or recognize that the cost to set up e.g. payment channels increases as the L1 fees go up…

We are including them, which is why I said it doesn’t restrict flow. The blocksize is a dam that generates power in the form of fees. The overflow spills into the Lightning Network, which has no upper limitations on transaction throughput outside the volume of Lightning nodes and payment channels, which have no limit themselves. Also, any transaction where you receive Bitcoin can be received straight to a newly opened channel. This isn’t a two-step process.
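The dam framing can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, using illustrative assumptions (roughly 1 MB blocks, a ~250-byte average transaction, a 10-minute block interval; none of these are exact protocol constants):

```python
# Rough ceiling on broadcast-to-all transaction throughput.
# All three figures below are illustrative assumptions.
BLOCK_SIZE_BYTES = 1_000_000      # ~1 MB of block space
AVG_TX_BYTES = 250                # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # ~10-minute target interval

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES
onchain_tps = txs_per_block / BLOCK_INTERVAL_SECONDS

print(txs_per_block)              # 4000
print(round(onchain_tps, 2))      # 6.67
```

Everything above that on-chain ceiling is what spills over the dam into payment channels, whose throughput isn’t bounded by this calculation.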
4: “I am saying that this information needs to stop being obscured. I’m also saying that if/when it ever is unobscured, it’ll be too late and nothing can be done about it anyway.”
This information is not obscured. You can simply run a full node and query it
Just because you haven’t found a website doing this
the argument that “it’ll be too late” when it is unobscured is at best a faulty generalization and at worst the post hoc fallacy.
It’s obscured. It’s not a matter of me “not being able to locate” sites that track this. The sites that did track it stopped tracking it.
You forgot to include the next sentence: “It’s already too late.” It was a quip, not a fallacy. Take it or leave it.

5: “Keep in mind, none of this information [block propagation times and transaction times] is available for Ethereum”
This is false. Block propagation times can easily be measured by running a few geographically distributed full nodes, connecting them to each other, and measuring when they see and relay new blocks and transactions.
too lazy to spend a few hours learning how to deploy, use and even add debugging to Ethereum clients, in order to gather such information, they can always check propagation times for nodes connected to
First, that’s the opposite of easy. Again, this isn’t about me, because I’m clearly able to discern the differences in these networks and gather the information together; I’m the one sharing it because I did so. Second, I don’t need to set up nodes around the globe to check this, and all the complaints online, plus the data to the left of this from the very website you suggested, only solidify the consensus online. When half the nodes that volunteer their data to this website have terrible latency, it’s indicative of an issue.
Third, a lazy person wouldn’t go through the effort I am, nor am I falsely stating this isn’t publicly available. Network data in general is not publicly available; it’s literally not there for the public to see, and some of it once was. You need more than common knowledge to access it.
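For what it’s worth, the measurement being argued about reduces to comparing first-seen timestamps for the same block across observation points. A minimal sketch, with hypothetical node names and millisecond timestamps:

```python
# Given first-seen timestamps (in milliseconds) for one block at
# several observation nodes, compute each node's propagation delay
# relative to the earliest sighting. All values are hypothetical.
def propagation_delays(first_seen):
    earliest = min(first_seen.values())
    return {node: t - earliest for node, t in first_seen.items()}

first_seen_ms = {"us-east": 100_000, "eu-west": 100_400, "ap-south": 101_900}
delays = propagation_delays(first_seen_ms)
print(delays)  # {'us-east': 0, 'eu-west': 400, 'ap-south': 1900}
```

The hard part isn’t this arithmetic; it’s running enough well-placed observation nodes, with synchronized clocks, to make the numbers representative, which is exactly why “just measure it yourself” is not a casual suggestion.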
6: [vague rant about using the blockchain the “right” way and hatin’ on CryptoKitties]
The author presumes there is a “right” way to use a public, permissionless blockchain. The beauty of blockchains such as Bitcoin and Ethereum is that users can use them for whatever they want as long as they can convince a miner to accept their tx.
For example, a lot of people actually _enjoy_ CryptoKitties, to the extent of bidding $140,000 worth of ETH for one cat at a recent auction.

This isn’t about what transactions miners accept. I’m saying that even though they are being accepted now, in the future any Dapps that can only function using low fees won’t be usable unless the limit is raised, or decentralization is sacrificed. You might want to start looking elsewhere for this functionality. If you don’t care about decentralization then this just doesn’t apply to you, that’s totally fine. But this is literally Ethereum’s selling point right now:
Putting money laundering aside, idiots exist. CryptoKitties is a great tool to demonstrate this. I actually like CryptoKitties because of this valuable publicly available litmus test, and I don’t hate cats:
7: “The Bitcoin network has about 115,000 nodes, of which about 12,000 are listening-nodes.”
This appears to contradict several other sources on Bitcoin node counts
If all these sources are wrong, they would probably love to know exactly how these nodes are counted.
Moreover, who has audited the scripts calculating these larger node count numbers?

All it does is count non-listening nodes as well as the listening nodes. Counting both is harder to do, so websites don’t do it. Likewise, segregating light-nodes and validating nodes in Ethereum is harder, so websites don’t do it.
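The listening/non-listening distinction is the crux here: a crawler can only enumerate nodes that accept inbound connections, so any crawler-based count is a floor, not a census. A toy sketch with hypothetical peer records:

```python
# Sketch of why crawler-based counts undercount: a crawler can only
# discover nodes that accept inbound connections ("listening").
# Non-listening nodes validate just as fully but are invisible to it.
# These peer records are hypothetical.
peers = [
    {"addr": "a", "accepts_inbound": True},
    {"addr": "b", "accepts_inbound": False},  # validating, but uncrawlable
    {"addr": "c", "accepts_inbound": False},
    {"addr": "d", "accepts_inbound": True},
]

crawler_visible = [p for p in peers if p["accepts_inbound"]]
print(len(crawler_visible), len(peers))  # 2 4 — the crawler sees half
```

Counting the non-listening majority requires inference (e.g. observing who connects outbound to your own listening nodes), which is the harder work most tracking sites skip.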
8: “That Ethereum node count? Guarantee you those are mostly Light-Nodes doing absolutely zero validation work (checking headers isn’t validation). Don’t agree with that? Prove me wrong. Show me data.”
How about the author provides some data supporting their speculative claims? “Guarantee you” implies an absolute claim, and given the above false claims and misunderstandings, the author has in my mind lost enough credibility to be taken seriously on matters of (Ethereum) protocols and networks.

I admit to presuming, but to say my credibility is lost is a bit far-fetched. My concerns are legitimate and shouldn’t be ignored. You can disagree, but you need to make a case for why you disagree, and this and my prior article laid out a pretty clear case: validating nodes are important, and Ethereum neglects them at a protocol level.
9: When your node can’t stay in sync it downgrades to a light client.
False. Even if a node is behind a number of blocks when syncing, it can still answer queries for past blocks and transactions and service other nodes that are syncing. The author would do well to examine the concurrency and state handling of clients such as Parity and go-ethereum to understand more about how nodes currently implement syncing and will work with new sharding proposals.
It’s not false, you just took it literally. All the comments online about people’s nodes falling out of sync end with the person deciding to use fast sync, usually after being compelled by someone else telling them “it’s fine”. From a zoomed-out perspective, this results in validating nodes going offline and light-nodes coming online, like the diagram I showed above.
10: “How would you even know how many fully validating nodes there are in this set up? You can’t even tell now because the only sites tracking it count the light clients in the total. How would you ever know that the full-nodes centralized to let’s say, 10 datacenters? You’ll never know. You. Will. Never. Know.”
OK, so right now we are able to know, with full certainty, that there are 115000 correctly verifying full Bitcoin nodes, but in this hypothetical future the author imagines we are unable to know how many correctly verifying full nodes there are in the Ethereum network?
Clearly there is some network engineering design magic currently present in Bitcoin that this future Ethereum network could leverage. Given that both Bitcoin and Ethereum clients are open source, I expect this magic to soon be discovered by Ethereum developers and then merged in, enabling us all to know exactly how many full nodes are present at any given time.
The reason you can be sure for Bitcoin is because all nodes validate. Every participating actor in the network validates the chain, it’s the only way you can know the next block is valid without trusting anyone else. There are no light-nodes in Bitcoin.
In Ethereum there are so many ambiguous ways nodes interact with each other that the only way to reasonably detect which nodes are fully validating would be to request random blocks from the past to see if they have that full block, but most Ethereum nodes typically don’t keep the history because Ethereum is state based. The networks are fundamentally different, which is why it’s easy to poll the network for Bitcoin, and problematic at best for Ethereum. — — —

Naturally, it requires more to run an Ethereum full node. And it can strain especially older laptops, and definitely requires an SSD. However, it does not require a beefy server by any reasonable measure. In fact, any dedicated machine with a CPU from the last 6 years, 8 GB of RAM and a modern SSD can process an Ethereum full node just fine (or several full nodes, as run on my pretty modest server). The bandwidth usage of tx and block relay is something to consider but is generally not a problem on well connected networks.

1: It’s getting more difficult with time. 2: It’s also kind of moot given the $45,000 validating nodes. 3: Bandwidth on non-$45,000 validating node networks is most certainly important, because “well-connected” is dangerous for privacy, as I’ve described above. — — —
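The probing idea mentioned above (requesting random historical blocks to see whether a peer actually keeps them) can be sketched as follows; `get_block` is a hypothetical stand-in for whatever transport you’d actually use to query a peer:

```python
import random

# Classify a peer by whether it can serve full historical blocks.
# `get_block(height)` is a hypothetical callable returning the full
# block at that height, or None if the peer doesn't have it.
def looks_like_full_node(get_block, chain_height, samples=5, seed=0):
    rng = random.Random(seed)
    heights = [rng.randrange(chain_height) for _ in range(samples)]
    return all(get_block(h) is not None for h in heights)

archive = lambda h: {"height": h}  # keeps everything
never = lambda h: None             # keeps no history at all

print(looks_like_full_node(archive, 6_000_000))  # True
print(looks_like_full_node(never, 6_000_000))    # False
```

Even this sketch only tests for history, not for validation work, which is exactly why the fully-validating set is so hard to measure from the outside.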
Miners are aware of the current block size (gas) limit and actively take part in discussions with other parts of the community around what an ideal block size limit is
Miners have historically acted both to lower and to increase the limit before.

None of this matters in a PoS centralized network. It’s very dangerous in a PoW system over the long term though. There’s no incentive to keep “other” nodes connected or in sync “when they can just sync the headers”. They might be acting altruistically now, but there’s no reason to expect this behavior in the future. It’s a dangerous proposition to start trusting in honesty among those in power as these networks start scaling up. — — —
Overall, as clients have continuously improved performance since the launch of the network, miners have gradually increased the limit towards the current value of 8M (Ethereum launched with 3.14M). Generally, if syncing issues become significant enough to affect the ETH price, miners become incentivized to lower the limit to regulate the network.

I have no reason to believe the limit will be lowered, as I’ve made clear throughout these articles. — — —
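For context on how those miner adjustments work: under the protocol rules, each block may move the gas limit by at most a small fraction of its parent’s limit (1/1024), so raising it is a block-by-block vote rather than a one-shot change. A sketch of how many blocks a doubling from 8M would take if every block voted upward at the maximum step:

```python
# Each block may raise the gas limit by at most parent_limit // 1024
# (the protocol's per-block adjustment bound), so the limit moves as
# a gradual, block-by-block vote.
def blocks_to_reach(start_limit, target_limit):
    limit, blocks = start_limit, 0
    while limit < target_limit:
        limit += limit // 1024   # maximum upward step per block
        blocks += 1
    return blocks

print(blocks_to_reach(8_000_000, 16_000_000))  # on the order of 700 blocks
```

At roughly 15-second blocks, ~700 maximum-step votes is only a few hours, which is why “miners could raise it whenever they agree to” is not a distant hypothetical.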
As others have already discussed the various sync modes supported by Ethereum clients and their varying resource requirements, another thing worth talking about as an emergency remedy — if the Ethereum network does indeed grow so fast that it becomes hard for most full nodes to keep up — are checkpoints.
Someone like StopAndDecrypt probably panics at the very mention of something as unholy and sinful as blockchain checkpoints. How can a blockchain be decentralized if clients implementing the consensus protocol agree on a checkpoint block rather than syncing from the genesis block?!

Checkpoints have their functions, but you’re presuming a bit in regards to what I think about them. Regardless, sync modes don’t matter; they’re fine for you if that’s what you want to do. My concern, again, is the validating node set, and checkpoints only address history data, not data processing requirements after getting synced.
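To make the checkpoint trade-off concrete: syncing from a checkpoint means trusting a hash and verifying only the links after it, so anything before the checkpoint is simply never examined. A toy sketch (hash-linked strings, not real block structures):

```python
import hashlib

# Toy chain: each block's id is the hash of (parent_id + payload).
def block_id(parent_id, payload):
    return hashlib.sha256((parent_id + payload).encode()).hexdigest()

def build_chain(payloads):
    ids = ["genesis"]
    for p in payloads:
        ids.append(block_id(ids[-1], p))
    return ids

# Verify the hash links from `start_index` onward only; everything
# before the checkpoint index is taken on trust.
def verify_from(chain_ids, payloads, start_index):
    for i in range(start_index, len(chain_ids) - 1):
        if block_id(chain_ids[i], payloads[i]) != chain_ids[i + 1]:
            return False
    return True

payloads = ["a", "b", "c", "d"]
chain = build_chain(payloads)
tampered = ["a", "x", "c", "d"]  # history altered before the checkpoint

print(verify_from(chain, payloads, 0))   # True: full verification from genesis
print(verify_from(chain, tampered, 0))   # False: genesis sync catches the tamper
print(verify_from(chain, tampered, 2))   # True: a checkpoint past it never looks
```

This is the whole debate in miniature: the checkpointed node is fine as long as the checkpoint hash it trusted was honest, but it verified nothing about how that hash came to be.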
In practice, a reorg in either Bitcoin or Ethereum deeper than a few hours is extremely unlikely.

I agree. — — —
Epilogue
No one actually knows how many full nodes are required for a network to be “secure”.
Until then, we cannot know if 1K, 5K, 10K or some other number is the minimum required to keep a network reasonably secure.

See above. — — —
That said, we should continue to encourage individuals and projects working on Ethereum apps — or anyone interested in contributing to the network — to run their own full node.

I hope they have upwards of $45,000 once PoS+Sharding comes. If ETH goes up, that’s even worse. DASH requires 1000 coins; it was $1,000,000 to run a masternode at one point. — — —
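The masternode figure is just the stake requirement multiplied by the coin price, which is why a rising price makes the situation strictly worse for would-be validators. Using the numbers in the text:

```python
# Dollar cost to meet a fixed coin-denominated stake requirement.
# Figures are from the text: 1000 DASH at a $1000/coin peak price.
def stake_cost_usd(required_coins, coin_price_usd):
    return required_coins * coin_price_usd

print(stake_cost_usd(1000, 1000))  # 1000000
```

The requirement is fixed in coins, not dollars, so every price increase raises the entry cost for new validators while grandfathering in existing ones.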
For those who read this far —

I did, and I don’t hate you. People tend to take my writing as hostile. It’s not.