Releases: ethersphere/bee

v2.2.0

12 Sep 13:34
06a0aca

The Bee team is elated to announce the v2.2.0 release. 🎉

The release includes major new features and important fixes which operators and users should take note of.

See the Official Release Announcement and Node Operator's Guide.

The Debug API and port has been removed and endpoints under the Debug API have been merged into the main Bee API. Configuration options related to the Debug API must be removed for the node to run as normal (see 2.2.0 upgrade guide for details).
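
As a sketch of what must be removed, the pre-2.2.0 Debug API options looked like the following in a YAML config file (consult the 2.2.0 upgrade guide for the authoritative list):

```yaml
# Remove these pre-2.2.0 Debug API options from your config file
# (or their corresponding --flags / BEE_* environment variables);
# they are no longer recognized in v2.2.0:
debug-api-enable: true
debug-api-addr: 127.0.0.1:1635
```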

Based on a community poll, we have also removed the API authorization and --restricted option.

The release contains new redistribution and staking contracts, so it's advised to upgrade as soon as possible. Before upgrading, however, the stake must be manually migrated by operators. The instructions on how to do so are available here.

Warning

The 2.2.0 upgrade includes a localstore migration which will take an extended time to complete (the exact time will vary based on your particular system specs). You can check your node’s logs for messages related to the migration in order to check on the migration progress. Turning off your node before the migration is complete could cause your node to become corrupted! ⚠️ Before every Bee client upgrade, it is best practice to ALWAYS take a full backup of your node.

Access Control Trie (ACT)

The Access Control Trie (ACT) is a major new feature that gives DAPP developers specific control over who can access encrypted data on Swarm. ACT introduces two main roles: “content publishers” and “content viewers.” Content publishers can grant or revoke access to content viewers on a fine-grained, chunk-by-chunk basis. Publishers can also establish and update group lists for managing access.

Neighborhood Hopping & Staking Changes

Previously, using the target-neighborhood option, node operators could mine overlay addresses for their new nodes to join specific neighborhoods, generally underpopulated ones, to increase winning rewards and strengthen the network. With this release, existing nodes can also target a different neighborhood by supplying a new option. If the node was previously staked, it will automatically reassociate the stake with the new overlay address.
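
As a configuration sketch, targeting a neighborhood by its binary prefix looks like the following (the prefix value is a placeholder; confirm the exact option name for existing nodes in the Node Operator's Guide):

```yaml
# Target a specific neighborhood by its binary address prefix.
# "0110101011" is a placeholder value for illustration.
target-neighborhood: "0110101011"
```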

Before, stake was non-withdrawable and set arbitrarily to 10 xBZZ. The new partial stake withdrawal feature allows operators to withdraw part of their stake when the oracle price drops below the price at the time the node was staked. The withdrawable amount can be fetched and withdrawn using the GET /stake/withdrawable and DELETE /stake/withdrawable endpoints, respectively.
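
Assuming a node whose main API listens on the default localhost:1633, checking and then withdrawing the withdrawable portion might look like this sketch:

```shell
#!/bin/sh
# Sketch: query and withdraw the withdrawable stake portion.
# Assumes the Bee API is reachable at localhost:1633 (the default port).
BEE_API="http://localhost:1633"

# Fetch the currently withdrawable amount.
curl -s "$BEE_API/stake/withdrawable"

# Withdraw it.
curl -s -X DELETE "$BEE_API/stake/withdrawable"
```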

Postage Contract Safeguards

To protect the network, we are introducing new safeguards for when the postage contract is paused. The contract is generally paused when a new contract is available and operators are required to upgrade as soon as possible. As such, nodes that pick up the pause event from the contract will automatically shut down and won't be able to start up again.

For questions, comments, and feedback, reach out on Discord.

Features

  • Added the last synced block to the status endpoint response ( #4710 ).
  • ACT feature ( #4692 ).
  • Added the ability to carry the stake to new neighborhoods and withdraw part of the stake ( #4718 #4685 #4720 ).

Bug fixes

  • Fixed a bug where the peer prune function was not counting peers correctly and changed the pruning to run periodically and not on every connection attempt ( #4759 #4774).
  • Fixed an issue where the pullsync protocol, and ultimately the reserve, ignored chunks that had previously been synced but were re-uploaded with higher stamp timestamps. ( #4717 )

Deprecation

  • Debug API and authorization deprecated (#4679 #4732).

Hardening

  • Stewardship API endpoint now checks leaf chunks as well and downloads all chunks belonging to the root reference ( #4735 ).
  • The node is shut down and unable to start up if the postage contract is paused. ( #4748 )
  • Stopped unnecessary pullsyncing with peers beyond two proximity orders below the storage radius ( #4764 ).
  • Added a per peer rate limiter to the pullsync handler ( #4799 ).
  • Improved batch error response for new batches when the amount is below the minimum allowed value ( #4729 ).

For a full PR rundown, please consult the v2.2.0 diff.

v2.1.0

28 May 10:21
de7eccc

The Bee team is excited to announce the v2.1.0 release. 🎉

In this release, localstore transactions have been refactored to bring increased stability and performance gains.

We have also detected that some nodes have experienced the corruption of their reserves. To address this, the release introduces the new bee db repair-reserve --data-dir=... command, which will scan the node’s reserve and fix any corrupted chunks. All node operators should make sure to run this command immediately following the update.

Warning

For nodes running on the same physical disk, make sure to run the command on one node at a time rather than concurrently, since running it concurrently on multiple nodes could lead to drastic slowdowns. As is the case with all db commands, the nodes must be stopped first.
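
The sequential approach can be sketched as follows (the data directories are hypothetical; each node must already be stopped):

```shell
#!/bin/sh
# Sketch: run repair-reserve one node at a time for nodes sharing a disk.
# The data directories below are hypothetical examples.
for dir in /var/lib/bee-node1 /var/lib/bee-node2; do
    bee db repair-reserve --data-dir="$dir"
done
```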

The release also includes a new redistribution contract which introduces a limit to the number of freezes per round. The specific rate of the limit is configurable by the team. At the time of the release, the default behavior will be the same as the old contract. The goal of the new db repair-reserve command and the localstore improvements is to decrease the freezing rate so it is closer to an acceptable level, in which case the freezing limit can be left untouched.

The Bee team will also coordinate the pausing of the old contract based on a predetermined block height of the Gnosis Chain.

With this release, the endpoints under the Debug API have also been included in the main API. The Debug API will be removed entirely in the next release (v2.2.0).

For questions, comments, and feedback, reach out on Discord.

Features

  • A new redistribution contract has been released that controls the freezing limit. #240

Bug fixes

  • Fixed an error when uploading the same file with pinning multiple times. (#4638)
  • Fixed a data race in the reserve sampler which may resolve inclusion proof related errors in the redistribution game. (#4665)

Hardening

  • Localstore refactoring (#4626)
    • The same leveldb transaction is now used for both indexstore and chunkstore writes.
    • The stewardship upload endpoint now requires a valid batchID in the request header.
    • When the reserve capacity is reached, only enough chunks to fall below the capacity are evicted. Previously, the evictor would remove the entire bin of chunks belonging to a batch, without regard to how much capacity is recovered during the process. With this change, the loss of chunks belonging to shallower bins than the storage radius in the neighborhood is minimized.
    • When the radius decreases, the bins which have been evicted previously are all properly reset to re-initiate syncing.
  • Improved logging when the node is out of balance for buying a batch. (#4666)

For a full PR rundown, please consult the v2.1.0 diff.

v2.0.1

25 Apr 12:06

This is a patch release that updates libp2p to the latest version, which addresses a memory leak issue.

v2.0.0

26 Mar 11:54
501f8a4

The Bee team is elated to announce the official v2.0.0 release. 🎉

In this release we introduce a brand new mechanism of data redundancy in Swarm with erasure coding, which, under the hood, makes use of Reed-Solomon erasure coding and dispersed replicas. This brings a whole new level of protection against potential data loss.

Erasure Coding

A new header Swarm-Redundancy-Level: n can be passed to upload requests to turn on erasure coding, where n is an integer in the range [0, 4]. Refer to the table below for the different levels of redundancy and chunk loss tolerance.

| Redundancy Level | Pseudonym | Chunk Retrieval Failure Tolerance |
| ---------------- | --------- | --------------------------------- |
| 0                | None      | 0%                                |
| 1                | Medium    | 1%                                |
| 2                | Strong    | 5%                                |
| 3                | Insane    | 10%                               |
| 4                | Paranoid  | 50%                               |
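
As a sketch, an upload with redundancy level 2 ("Strong") might look like the following (the batch ID is a placeholder, and port 1633 is the Bee API default):

```shell
#!/bin/sh
# Sketch: upload a file with Reed-Solomon redundancy level 2 (Strong).
# <your-batch-id> is a placeholder for a valid postage batch ID.
curl -X POST "http://localhost:1633/bzz" \
  -H "Swarm-Postage-Batch-Id: <your-batch-id>" \
  -H "Swarm-Redundancy-Level: 2" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @myfile.bin
```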

Testnet

With this milestone release, the Swarm Testnet is now officially running on the Sepolia blockchain.

Apply the configuration changes below to a fresh node to be able to connect to the Sepolia Testnet.

bootnode:
- /dnsaddr/sepolia.testnet.ethswarm.org
blockchain-rpc-endpoint: {a-sepolia-rpc-endpoint}

For questions, comments, and feedback, reach out on Discord.

Features

  • Uploads may now be equipped with erasure coding which brings a new level of data redundancy to Swarm. ( #4491 ).
  • Added a new API endpoint to obtain the content type and length of uploads by sending a HEAD request to the /bzz endpoint. ( #4588 )
  • Re-added livesyncing to chunk syncing in the puller service. ( #4554 )
  • Default testnet settings are now configured for the Sepolia blockchain. ( #4491 )
  • Added a new db command that verifies the integrity of the pinned content. ( #4565 )
  • The pinned content integrity verification can also be done using the API, namely with the new pins/check endpoint. ( #4573 )
  • Added the ability for fresh nodes to use an external neighborhood suggester through the config options for mining the overlay address into a specific neighborhood. By default, the Swarmscanner's least populated/most optimal neighborhood suggester API is used. ( #4580 )
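
The new HEAD support on the /bzz endpoint listed above can be probed as a sketch (the Swarm reference is a placeholder, and port 1633 is the Bee API default):

```shell
#!/bin/sh
# Sketch: fetch only the response headers (content type and length)
# for an upload, without downloading the payload.
curl -I "http://localhost:1633/bzz/<swarm-reference>/"
```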

Bug fixes

  • Localstore fixes
    • Fixed a bug where deleting a pin reference that has been pinned more than once was not removing the chunks from the localstore. ( #4558 )
    • Fixed a race condition in the cachestore that was causing refCnt inconsistencies. ( #4525 )
    • Fixed a bug in the cachestore that would not deference already cached chunks after a reserve eviction. ( #4567 )
    • Fixed a cache size bug that would undercount the number of chunks removed from the cache, leading to a cache leak until the next restart of the node. ( #4571 )
    • Fixed a leak in the upload store where the metadata of the individual chunks persists in the localstore long after the chunks have been successfully uploaded to the network. ( #4562 )
    • Fixed the storage radius metric being set incorrectly. ( #4518 )
    • Fixed a bug where the storage radius does not decrease even though the true reserve size is lower than the threshold. ( #4514 )
  • Fixed a vulnerability in the encryption of uploaded data. ( #4604 )

Hardening

  • Updated the btcd crypto library version. ( #4516 )
  • The ReserveSizeWithRadius field, which is the count of chunks in the reserve that fall under the responsibility of the node, has been added to the status protocol. ( #4585 )
  • Stamper changes
    • The rules for how chunks are stamped before uploading have been changed: regardless of batch type (immutable or mutable), if a chunk has been stamped before, the chunk is restamped using the old batch index and a new timestamp. ( #4556 )
    • Regardless of batch type, the reserve now overwrites chunks that have the same batch index with the higher timestamp. ( #4559 )

For a full PR rundown, please consult the v2.0.0 diff.

v1.18.2

14 Dec 14:56
759f56f

Building upon the previous release, the sync intervals are re-synced so that nodes may collect any potentially missing chunks from the network.

The initial syncing a node performs to collect missing chunks from peers, aka historical syncing, is now rate limited to lower and stabilize CPU usage.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a panic when running compact with an empty db. ( #4488 )

Features

  • Puller historical syncing is now rate limited to not exceed 500 chunks/second. ( #4504 )

Hardening

  • Puller sync intervals are reset to sync missing chunks. ( #4499 )
  • Various UX improvements. ( #4487 #4466 #4489 )

For a full PR rundown, please consult the v1.18.2 diff.

v1.18.1

07 Dec 09:47
ed24b89

This is a patch release that properly resets the batchstore so that batches can be resynced from the new postage stamp contract.

For questions, comments, and feedback, reach out on Discord.

For a full PR rundown please consult the v1.18.1 diff.

v1.18.0

06 Dec 16:13
dd14545

The main theme of this release is the delivery of the last phase of storage incentives, the fourth phase, and thus the end of the storage incentive saga. For this reason, this is a breaking release, as the handshake version has been bumped. The release also includes one bug fix and minor improvements, which can be found below.

Breaking changes

  • The handshake protocol has been bumped as there is a new redistribution contract release (#4490)

New features

  • Introduction of a command that lists all chunk hashes for a given file (#4484)
  • Swarm cache header has been added to several API endpoints (#4457, #4486)
  • Phase four of storage incentives (#4373)

Bugfixes

  • Re-upload of a file that was previously manually cancelled during upload (#4468)

v1.17.6

09 Nov 12:24
48a603c

With this release, many hardening issues were tackled. The team's focus has been mostly on improving connectivity of nodes across the network and bringing performance improvements to chunk caching operations.

Also featured is a new DB command that will perform a chunk validation of the chunkstore, similar to the optional step in the compaction command.

The retrieval protocol now has a multiplexing capability similar to pushsync's, where multiple requests are fired in parallel from a forwarder peer that can directly access the neighborhood of a chunk.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a bug where parallel uploads can cause a race condition in the uploadstore. ( #4434 )

Features

  • Added a new DB cmd that performs a validation check of the chunks in the chunkstore. ( #4435 )
  • Added multiplexing to the retrieval protocol, where a forwarding peer that can reach into the neighborhood of the chunk fires multiple attempts to different peers. ( #4405 )

Hardening

  • Added extra documentation about the logger API in the CODING.md. ( #4406 )
  • Fixed logs containing wrong token name. ( #4408 )
  • Added metrics for zero addressed chunks received by the pullsync protocol. ( #4407 )
  • Kademlia depth value is overwritten by the storage radius for full nodes. ( #4410 )
  • The salud response duration check is now stricter. ( #4417 #4426 )
  • Upgraded libp2p to the latest version v0.30.0. ( #3927 )
  • When batches expire and are removed from the batchstore, the stamp issuer data is also removed. ( #4416 #4431 #4439 )
  • Added a new log to display the amount of time the postage listener will sleep until the next blockchain sync event. ( #4444 #4426 )
  • API now returns 404 instead of 500 when no peer can be found for a chunk retrieval attempt. ( #4436 )
  • Upgraded crypto related packages. ( #4425 )
  • Added various connectivity related improvements: ( #4412 )
    • The reachability of connected peers is tested periodically instead of once at the time of the initial connection.
    • All and not a small subset of neighbor peers are broadcast to a newly connected neighbor.
    • Neighbor peers are periodically broadcast to other neighbors.
    • A peer will be re-added to the addressbook if hive detects an underlay change.

Performance

  • Added a cache eviction worker so cached chunks do not need to be removed immediately when adding new entries to an over-capacity cache. ( #4423 #4433 )
  • The POST /pins/{ref} API endpoint now stores chunks in parallel. ( #4427 )

For a full PR rundown please consult the v1.17.6 diff.

v1.17.5

16 Oct 14:25

In this small but important release, the Bee team introduces a new db compaction command to recover disk space. To prevent any data loss, operators should run the compaction on a copy of the localstore directory and, if successful, replace the original localstore with the compacted copy. The command is available as a sub-command under db as such:

bee db compact --data-dir=
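
Putting the copy-first advice above into a sketch (the paths are hypothetical, and the node must be stopped before copying):

```shell
#!/bin/sh
# Sketch: compact a COPY of the data directory, then swap it in on success.
# Paths are hypothetical; stop the node first.
DATA_DIR=/var/lib/bee
cp -r "$DATA_DIR" "${DATA_DIR}-copy"
if bee db compact --data-dir="${DATA_DIR}-copy"; then
    mv "$DATA_DIR" "${DATA_DIR}-old"
    mv "${DATA_DIR}-copy" "$DATA_DIR"
fi
```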

The pushsync and retrieval protocols now feature a fallback mechanism that tries unreachable and unhealthy peers when no reachable or healthy peers are left.

We've also added new logging guidelines for contributors in the readme.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a bug where a node can get stuck syncing the same interval if the upstream peer is unable to send the chunk data. ( #4339 )

Features

  • Added a new localstore compaction command that resizes sharky to the smallest size possible. ( #4329 )

Hardening

  • Added a new logging guideline for contributors ( #4352)
  • Improved logging of the retrieval pkg and increased the min healthy peers per bin in the salud service.
  • Varying levels of peer filtering for the pushsync and retrieval protocols ( #4388 )

For a full PR rundown please consult the v1.17.5 diff.

v1.17.4

21 Sep 13:43
038dbfb

For the past few weeks, the Bee team's focus has been on improving network health, observability, and user experience.

Node operators can now mine an overlay address in a specific neighborhood for fresh nodes by using the new --target-neighborhood option. The new Swarm Scanner neighborhoods page displays neighborhood sizes and is a great tool to use in tandem with this new feature.

Uploads are now by default deferred, as they were before the v1.17.0 release.

Additionally, the default postage stamp batch type is now immutable.

Another behavioral change is that swap-enable is now by default false and the bee start command without additional options starts the node in ultra-light mode. Full node operators must enable the option with swap-enable: true if not already enabled for their nodes to continue to operate as normal.

We have also improved logging across many different services and protocols. The pushsync and retrieval protocols now report error messages back to the origin node instead of generic "stream reset" errors. As a result, the protocol version has been bumped, making this a breaking change. It is imperative that operators update their nodes as soon as possible.

Previously, nuking a node could cause syncing problems because syncing intervals were never reset. This issue has now been tackled by having nodes detect that a peer's localstore has been nuked, which they do by comparing the peer's localstore epoch time across connections.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Pullsync intervals are now reset when the peer's localstore epoch timestamp changes. ( #4290)
  • Fixed the issue of not being able to pass a "0" parameter in the API. ( #4301)
  • Fixed a bug where the context used for cachestore operations is canceled before a chunk can be cached. ( #4307 )

Features

  • Mining an overlay address for specific neighborhoods. ( #4293 )
  • Pushsync and retrieval protocol propagate the message of the error that terminates the request back to the origin peer. ( #4321 )
  • The bare bee start command starts the node in ultra-light mode. ( #4326 )

Hardening

  • Logging improvements. ( #4295 #4296 #4297 #4302)
  • The type of the chunkstore's refCnt field was changed from uint8 to uint32, increasing its capacity. ( #4299 #4309 )
  • Default batch type is now immutable. ( #4304 )
  • The testnet bootnode address is now supplied by default in the configuration. ( #4317 )
  • Uploads are now deferred by default. ( #4318 )

For a full PR rundown please consult the v1.17.4 diff.