
Optimized chunk production for compact usage of postage buckets: A Swarm Hack Week success

During the recent Swarm Hack Week, the Solar Punk team hosted a hackathon where Mirko from Etherna developed a project aimed at addressing inefficiencies in postage batch consumption in Swarm’s data storage. Currently, storing data in Swarm requires purchasing postage batches with a depth much larger than the data strictly needs, because randomly distributed chunk addresses fill some buckets long before the batch’s nominal capacity is reached, leading to significant inefficiencies and increased costs. The project focused on optimizing this process so that the nominal space in postage batches becomes truly usable.

Steps of development

Using Bee.Net, an open-source C# library for Swarm, Mirko introduced a “compaction level” ranging from 0 to 100. This level controls how much effort is spent compacting chunks within buckets: at level 0 it has no effect on chunk placement, while at level 100 compaction is maximized. The compaction level sets a trigger limit on bucket collisions; once a bucket reaches that limit, the system starts mining a better chunk hash. To give finer control at higher compaction levels, the mapping from level to trigger limit follows a parabolic function.
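The exact formula lives in the linked repository; as a rough sketch of the idea (not Bee.Net’s actual implementation), a parabolic mapping from compaction level to collision trigger could look like the following, where bucketCapacity is an assumed parameter standing for the number of chunk slots per postage bucket:

```csharp
using System;

public static class CompactionTrigger
{
    // Illustrative sketch: map a compaction level (0-100) to the number of
    // collisions a bucket may accumulate before the uploader starts mining
    // a better chunk hash.
    public static int CollisionTriggerLimit(int compactionLevel, int bucketCapacity)
    {
        if (compactionLevel <= 0)
            return bucketCapacity;   // level 0: the bucket may fill completely, no mining
        if (compactionLevel >= 100)
            return 1;                // level 100: mine as soon as a single collision occurs

        // Parabolic interpolation between the two extremes: the limit drops
        // steeply at low levels and flattens out near 100, so high compaction
        // levels can be tuned with finer precision.
        double t = compactionLevel / 100.0;
        double limit = bucketCapacity * (1.0 - t) * (1.0 - t);
        return Math.Max(1, (int)Math.Round(limit));
    }
}
```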

Mirko added a custom byte in front of each data chunk’s payload to enable the mining of different chunk hashes, resulting in data chunks containing 4095 bytes of actual information instead of the original 4096 bytes. To interpret these optimized chunks, the reader simply drops the first byte of each data chunk. This approach ensures that the optimization can be executed solely on the client side, though it would be more efficient if handled server-side.
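As an illustration of that idea (Swarm actually addresses chunks with a BMT hash, and the real mining logic is in the linked repository), varying the prefix byte to steer a chunk toward a less crowded bucket could look roughly like this; SHA-256 stands in for the real chunk hash, and bucketFill and bucketDepth are assumed bookkeeping kept by the uploader:

```csharp
using System;
using System.Security.Cryptography;

public static class ChunkCompaction
{
    // Illustrative sketch only: try all 256 values of the prefix byte and keep
    // the chunk whose address lands in the least-loaded postage bucket.
    // bucketFill[i] counts chunks already stamped into bucket i; bucketDepth
    // is the batch's bucket depth (16 in Swarm).
    public static byte[] MineChunk(byte[] payload4095, int[] bucketFill, int bucketDepth)
    {
        byte[] best = Array.Empty<byte>();
        int bestFill = int.MaxValue;

        for (int prefix = 0; prefix < 256; prefix++)
        {
            var chunk = new byte[payload4095.Length + 1];
            chunk[0] = (byte)prefix;                      // the mined byte
            Buffer.BlockCopy(payload4095, 0, chunk, 1, payload4095.Length);

            byte[] address = SHA256.HashData(chunk);      // stand-in for the real chunk hash
            int bucket = ((address[0] << 8) | address[1]) >> (16 - bucketDepth);

            if (bucketFill[bucket] < bestFill)
            {
                bestFill = bucketFill[bucket];
                best = chunk;
            }
        }
        return best;
    }

    // The reader side simply drops the first byte to recover the 4095-byte payload.
    public static byte[] ReadPayload(byte[] chunk) => chunk[1..];
}
```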

The key advantages of this approach include making nominal space in postage batches usable, reducing postage batch costs, and not requiring additional resources for storing decryption keys. The algorithm works even if not all chunks within the postage batch are optimized, and different files can utilize different compaction settings, enhancing flexibility.

If you would like to take a closer look at the project’s code, it is available at the following link: https://github.com/Etherna/bee-net/tree/feature/BNET-99-swarm-hackathon-2024

Future work

Future work will focus on developing a deterministic method for hash production to enhance consistency, refining the trigger level formula for better performance at lower levels, and investigating how unoptimized chunks affect batches of lower depth, where the birthday paradox makes bucket collisions likely even for small amounts of data; a rough estimate of this effect is sketched below.
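To see why the birthday paradox matters here, the following back-of-the-envelope estimate (an assumption-laden sketch, taking 2^16 buckets per batch) shows how quickly randomly addressed chunks start colliding in the same bucket:

```csharp
using System;

public static class BirthdayEstimate
{
    // Rough illustration of the birthday-paradox concern: with B buckets and n
    // unoptimized (randomly addressed) chunks, the probability that at least
    // two of them land in the same bucket is roughly 1 - exp(-n(n-1) / (2B)).
    public static double CollisionProbability(long chunks, long buckets) =>
        1.0 - Math.Exp(-(double)chunks * (chunks - 1) / (2.0 * buckets));

    public static void Main()
    {
        const long buckets = 1L << 16;   // assumed number of buckets per postage batch
        foreach (var n in new long[] { 100, 300, 1000 })
            Console.WriteLine($"{n} chunks -> p(bucket collision) ≈ {CollisionProbability(n, buckets):P1}");
    }
}
```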

This Swarm Hack Week project has significantly advanced the optimization of Swarm’s storage. By implementing a compaction level and optimizing data chunks, Mirko has made Swarm’s storage more efficient and cost-effective. This collaborative innovation exemplifies the potential for future improvements in decentralized data storage. Stay tuned for more updates as we continue to enhance Swarm’s capabilities!
