Storage fee improvements (and a few random ideas along the way)

Storage fees (with the upcoming 100x increase [0]) are a concern in the developer community. In this post I would like to propose some solutions I have been thinking about on and off for 2-3 years. I believe they can make Flow a much better blockchain and provide a better user experience.

Although I am trying to target a wider audience, some parts may be a little too advanced (sorry in advance; my targeting is usually worse than a stormtrooper’s).

Now the main concerns are:

  • Existing users and their accounts will be unable to run transactions after the change.
  • Topping up storage fees (sending a small amount of FLOW) after a user action in the app (usually managed via refill methods) is prone to error.
  • Things get more complicated with Hybrid Custody and people having more than one account.

From discussions with developers, the main improvement can be achieved by removing storage fees from the account balance (so your account balance is always your available balance) and adopting a simple rule: the writer/creator pays the storage fee, and the destroyer (deleter) of the resource redeems it. I suggested similar solutions before ([1], [2]).

[Ideally, we could use the payer as the writer/destroyer. One question raised here [Q1], to brainstorm on: wallets pay transaction fees on behalf of users, so how should those be handled?]

Now if we expand this a bit, to explore an alternative, we can use a pseudo resource like StorageWrapper(owner, uuid, FlowToken.Vault), so each stored item has a kind of fee vault built in, locking the fees for the item inside.

If we take it one step further, we can only put structs in this resource and define a fee vault for every resource, so every resource will have a fee vault. Then we can play a bit with ownership (keeping the last owner, maybe) and, instead of the payer, let the last owner redeem the fees.
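To make the rule concrete, here is a toy Python model (not Cadence, and not any real Flow API) of the "writer pays on creation, destroyer redeems" idea, with each stored item carrying its own fee vault. The class names and the 1 FLOW/MB price are assumptions for illustration only.

```python
STORAGE_PRICE_PER_MB = 1.0  # assumed price, not the real network parameter

class Account:
    def __init__(self, balance):
        self.balance = balance  # always fully available; nothing is reserved

class StoredItem:
    """Each stored item locks its storage fee in a built-in fee vault."""
    def __init__(self, size_mb, writer):
        fee = size_mb * STORAGE_PRICE_PER_MB
        if writer.balance < fee:
            raise RuntimeError("writer cannot cover the storage fee")
        writer.balance -= fee  # writer/creator pays on creation...
        self.fee_vault = fee   # ...and the fee is locked with the item

    def destroy(self, destroyer):
        destroyer.balance += self.fee_vault  # destroyer redeems the locked fee
        self.fee_vault = 0.0
```

Note that the account balance is never partially reserved for storage: the fee moves out of the balance entirely and lives with the item until destruction.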

(Totally off topic, but while we are at it: maybe we could even separate ownership and storage addressing. Resources could live in their own namespace, so moving a resource would just mean updating its owner and setting a link at the current owner to that resource. This would allow things like querying a resource by uuid. I think we have a very advanced resource-based language in Cadence, and it deserves a resource-based storage model on the blockchain side too.)

[0] FLIP 66 - Revisiting Flow storage minimum account balance


Thanks @bluesign for putting all this together!

As you obviously know, I am a big supporter of this approach, and I think it would work great for many reasons:

  1. Users won’t have to worry about storage fees at all, which removes one big friction point: having to educate them about how everything works under the hood.

  2. There would be an incentive to do “garbage collection” and remove unused/unwanted resources from the blockchain, by returning the FLOW locked with them.

  3. It would make things much simpler to handle than having to refill accounts all the time, as happens with Dapper Wallet.

The thing I like the most is that it could be totally invisible to the user if the project (like Flovatar, for example) provided the storage fee within the create() method itself and took the Vault from the smart contract account automatically. It would actually make it much easier to scale storage fees even to 1000x of the current ones, because it would be taken care of by the project itself and the user wouldn’t have to worry about it at all.
And in the case of a non-profit project like FLOAT, for example, the user could still provide the storage fees easily by adding a couple of lines of Cadence to the transaction.


I think it is a promising idea, but it would be a huge change to the language that would need to get included with stable cadence, right? If so, we should turn this into a FLIP as soon as possible so we can get a real discussion going.


Thanks @flowjosh, I think we can somehow do this transparently to the Cadence user. But I think getting some eyes from flow-go on it before a FLIP would be best; I don’t know how feasible it is for them.


I approve of the pattern where the mutator pays, with the write factor of chain operations increased to reflect this.

Getting FLOW back when a resource is destroyed is nice to avoid bloat, but it is also very complex to implement.


If this were to go in with Stable Cadence, you’d just have destroy return a FlowToken.Vault resource, I’d guess.


Whatever the storage model we end up with, we have to be careful that we prevent gaming/trading storage for FLOW.

In a “writer pays for storage, destroyer gets storage back” model, if Alice knew that storage costs were going to increase, she could write a bunch of garbage on chain into some resources. Once the storage cost increased, Alice could just destroy those resources herself to get more FLOW back.

Let’s say that the resources Alice created also keep track of the FLOW that was paid to store them (or alternatively we could decide that no FLOW is returned on destruction). Then Alice could just sell those garbage items to Bob. Bob could use them as storage by modifying their contents, and this would be cheaper than Bob paying for the creation of that storage (because storage is now more expensive).

On its own, trading for storage is not that bad, but it encourages storing stuff on chain just for the sake of “reserving”/“buying” storage, which is terrible.


The method we are suggesting gives every resource something like a pseudo FlowToken.Vault inside it (essentially just a UFix64 feesPaid field). When the writer pays for the storage (let’s say via the tx fee), instead of the fee going to the fee vault, it goes to this storage vault. So you only get back what you put in (even if the storage fee changes).

Technically, of course, I can put garbage on chain, but that is no different from the current situation (unless we decide to delete accounts over time if they don’t have enough fees to cover storage).

The main objective is that whoever creates the content should pay for the storage, not the user. For the user, it is too complicated to know that one NFT uses 1 MB of space and another 1 kB.

But 1 MB or 1 kB also has no technical effect on the user. If we want to encourage people not to bloat the chain, I think the only one with that power is the developer. If I mint a million NFTs, it makes sense for me as a developer to save bytes to save on storage fees.

Another objective: every storage fee currently carries the risk of breaking a user account unless they add some FLOW; this way we can prevent that. (It also motivates us to set rational fees, instead of setting absurd ones and then increasing them 100x.)


This way, every sub-resource takes care of its own fees: if you have an array of resources inside, they already paid their fees at creation time. For the user it doesn’t matter what they put inside; the container is paid for when it is created.


Let me try to illustrate what I tried to say.

If storage is 1 FLOW per MB, I can create a resource with a byte array that is 100 MB long.

If the storage price increases to 2 FLOW per MB, I can sell this resource for 1.5 FLOW per MB to someone else, and they can use it to write meaningful data into it. They benefit because they saved 0.5 FLOW per MB, and I benefit because I earned 0.5 FLOW per MB.
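The arithmetic of this speculation scenario can be spelled out in a few lines (all prices are the illustrative numbers from the example above, not real network parameters):

```python
# Worked numbers for the storage-speculation scenario.
size_mb = 100
old_price = 1.0     # FLOW per MB when Alice writes the garbage
new_price = 2.0     # FLOW per MB after the price increase
resale_price = 1.5  # FLOW per MB that Alice charges Bob

alice_cost = size_mb * old_price              # 100 FLOW to create the garbage
alice_revenue = size_mb * resale_price        # 150 FLOW from selling to Bob
alice_profit = alice_revenue - alice_cost     # Alice pockets the difference

bob_fresh_cost = size_mb * new_price          # 200 FLOW for brand-new storage
bob_savings = bob_fresh_cost - alice_revenue  # Bob saves by buying Alice's
```

Both parties come out 50 FLOW ahead relative to the new price, which is exactly why a naive "refund at current price" model invites speculation.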

I’m not trying to comment on the price-increase FLIP, and I’m not trying to defend the current model. I’m just trying to point out that the storage model should be designed in a way where “buying” a bunch of storage (by making garbage objects) in speculation that the storage price will increase is not feasible.

I just remembered: a possible modification to the current model that was floating around at some point was that Alice could reserve X FLOW specifically for Bob’s storage. Basically, Alice would set aside 1 FLOW and thereby increase the storage capacity of Bob’s account. What do you think about that?


We charge for writes (not for space): if you paid 100 FLOW for 100 MB and then gave it to me, and I write 100 MB of other data, I pay 100 FLOW too. The resource then has 200 FLOW (in its feeVault).
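A minimal sketch of this "charge per write" variant, again as a toy Python model (the flat per-MB write price and all names are assumptions, not Flow's actual fee schedule):

```python
WRITE_PRICE_PER_MB = 1.0  # assumed flat write price for illustration

class Resource:
    """Every write locks an additional fee inside the resource's feeVault."""
    def __init__(self):
        self.fee_vault = 0.0

    def write(self, mb_written):
        fee = mb_written * WRITE_PRICE_PER_MB
        self.fee_vault += fee  # the fee accumulates with the resource
        return fee             # charged to whoever performed this write

    def destroy(self):
        redeemed, self.fee_vault = self.fee_vault, 0.0
        return redeemed        # destroyer gets back everything ever locked
```

In the example above, the first owner writes 100 MB and pays 100 FLOW; a later owner writes another 100 MB and pays 100 FLOW; the vault then holds 200 FLOW, and whoever destroys the resource redeems all of it.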

Actually, the first idea was to put it into tx fees directly (but that does not allow redeeming).

If the resource namespace is also implemented (a little off topic), then writes will be minimal (when the resource doesn’t mutate and just the owner changes, for example), and fees will be minimal too.


I see,… I think.

Are you saying that if I have a resource with a 1 kB string in it, and I paid 1 FLOW to make it, then if I overwrite the string I should pay 1 FLOW again?

Without the possibility of redeeming, that sounds ok… but with redeeming, I’m not sure how that would work.

It’s also kind of strange for objects like the FlowToken.Vault, which is of constant size, but where changing the amount in the vault will force you to pay for storing it again.

However, there should still be an incentive to clean up old objects… I have to think about this some more.


The amount of the fee charged needs to be carefully designed, for sure. In the FlowToken.Vault case, the temporary vault is usually on the stack, so in normal cases there will only be a few bytes written for the balance. I think it can be a very small amount, similar to tx fees (storage_read / storage_write) now; probably too minimal for the user to notice (as they don’t notice tx fees now). Also, redeeming it can even cost more in gas fees than it earns (in most cases).


I’d like to thank everyone for their thoughtful and constructive contributions to this topic. Incentive design for byzantine-fault-tolerant systems is generally a highly non-trivial topic.

Non-technical considerations

An important argument for Kshitij’s proposal to increase the storage price by 100x is its ease of implementation: it is simply an adjustment of a protocol parameter. There are other challenges, some of which bluesign summarized at the top. While initially uncomfortable or a headache, I am confident that together we can work out the problems reasonably quickly, with ample opportunities for hands-on community contributions.

In comparison, embedding the storage price into a resource is a major research and development task requiring broad changes deeply within the security-sensitive sections of the code base (I’ll provide more details below). Therefore, I think that the core-protocol R&D team would have to implement a large portion of the necessary changes.

To be transparent, I don’t think we have the engineering resources to even start this for the next several months. We could push out other work, like Stable Cadence, though it is hard for me to see how that would be a favourable trade-off, to be honest.
Alternatively, we could keep the current storage price until we find the engineering resources, which would leave the network with its existing vulnerability surface for storage exhaustion attacks and a lack of incentives for developers to use storage economically. Also not a good option, in my opinion.


  • Overall, I am very much in favour of working out a mature solution to storage pricing on a conceptual level. Nevertheless, I think we need to be realistic that we won’t be able to implement this within the next year or so.
  • Therefore, I would suggest we also try to answer the question of what an interim solution could look like.

Technical considerations

To illustrate the complexities of implementing a system where each resource knows its own storage price, it would be best to have Bastian’s input here. I am not sufficiently familiar with Cadence internals, so the following challenges are partially speculation (and likely incomplete):

  1. While a UFix64 is sufficient for memorizing the storage fee paid for a resource, this would still require every single resource to be updated.
  2. While a StorageWrapper(owner, uuid, FlowToken.Vault) is conceptually easy, from Cadence’s perspective the wrapper would also just be a resource as it stands right now. In other words, Cadence would need additional logic to understand that the wrapper essentially only provides system-level information, that the wrapper cannot be removed or altered by the owner and resources can’t exist without such a wrapper.
    • introducing a wrapper for each resource also has the risk of massively bloating the state, especially if we implement the wrapper naively using a vault
  3. When a developer creates an array of resources, does each array element track its own storage or do we track the storage of the array as a whole?
    • depending on the approach we take, we might undermine array inlining, potentially leading to a much less efficient storage representation and significant performance degradation
    • management logic for compound resources (arrays, wrappers, etc) is doable but probably there are a bunch of edge-cases to carefully consider depending on cadence internals.

Thought experiment:

What do you think about the following variation on bluesign’s proposal (?)

  • We introduce a storage token (like a specialized type of gas that is only used for storage), which corresponds to a fixed amount of bytes.
  • When you create a resource, the creator can either pay for the necessary storage directly in flow tokens or pay with storage tokens.
  • If somebody deletes a resource, they get the amount of storage tokens corresponding to the freed bytes.
  • Important: the Flow network only allows you to buy storage for FLOW tokens, but never refunds any FLOW tokens for storage tokens. In other words, all storage ever bought remains in circulation, either in use by stored resources or as storage allowance in the form of storage tokens. Reasoning:
    • Essentially, we tokenize the right to store stuff on the Flow blockchain. The conversion rate of bytes to/from storage tokens is an immutable constant. We already have the software in place to measure and track the storage use of resources. Therefore, it is straightforward to denominate the space taken by a resource in storage tokens without memorizing any additional information in each resource.
    • Thereby, we move the variable portion of the problem, i.e. the variable price per stored byte out of the implementation.
  • It is hard for me to imagine a scenario where speculation on storage price is completely impossible. Bluesign presented an idea above, which introduces its own complexities (happy to elaborate further). Nevertheless, by introducing a storage token, we separate the protocol from the speculative aspects. Instead of building a custom solution just for storage, the speculation aspects can be covered by existing DeFi stacks as storage rights are now tokenized.
    • On the one hand, I don’t like people buying a bunch of storage upfront to speculate on its value. On the other hand, storage that has once been allocated and is rarely touched is not a significant problem for Flow on a technical level. If priced correctly, it might even be a funding opportunity to support the network in its earlier years, because the price per megabyte tends to decrease rapidly over the years.
    • In comparison, repetitive mutations of the same resource have a significant runtime cost, because the protocol generates Merkle proofs for each touched register for the verification nodes under the hood.
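To make the storage-token thought experiment concrete, here is a toy Python sketch of its core invariants: a fixed byte-to-token rate, a one-way FLOW-to-token conversion, and token (not FLOW) refunds on deletion. All names and rates are illustrative assumptions, not a proposed implementation.

```python
BYTES_PER_STORAGE_TOKEN = 1_000_000  # immutable conversion rate (assumed value)

def buy_storage_tokens(flow_amount, flow_per_token):
    """One-way conversion: FLOW buys storage tokens at the current market
    price; the network never converts storage tokens back into FLOW."""
    return flow_amount / flow_per_token

def tokens_for_bytes(n_bytes):
    """Storage cost of a resource, denominated in storage tokens via the
    immutable byte-to-token constant."""
    return n_bytes / BYTES_PER_STORAGE_TOKEN

def redeem_on_delete(n_bytes):
    """Deleting a resource returns storage tokens for the freed bytes,
    which can be spent on future storage but never refunded as FLOW."""
    return tokens_for_bytes(n_bytes)
```

The key separation: the FLOW-per-token price can float (and be handled by existing DeFi stacks), while the bytes-per-token rate stays constant inside the protocol.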

Thank you very much @AlexTheArchitect for the detailed response and the transparency about timelines (workload, etc.). This was actually the reason I didn’t start with a FLIP but wanted to get some feedback beforehand. I totally agree this is a core-protocol R&D task.

I think @bastian and Faye can chime in on some parts regarding Cadence and atree. I will just try to clear up some misunderstandings, which are totally on me: in the first post I tried to build up to the solution step by step, to make it more accessible to people with no internal knowledge of flow-go, Cadence, or atree, and it misfired again, as I predicted :slight_smile:

  1. While a UFix64 is sufficient for memorizing the storage fee paid for a resource, this would still require every single resource to be updated.

I think this is a spork storage-reorganisation task (I don’t think it is more complicated than the migration happening for atree register inlining [], upcoming in the next spork).

introducing a wrapper for each resource also has the risk of massively bloating the state, especially if we implement the wrapper naively using a vault

This was a bit of a visualization concept (actually, it was building up to the ‘off-topic’ point). When you put something into storage, it goes into a box, and the box goes into storage. If we can develop this box system, it later helps us take the thing out of the box, put it somewhere else (like a resource namespace), and put into the box just a pointer to the resource.

An example: currently, if Alice has a resource and sends it to Bob, the whole resource is read from Alice’s storage and then the whole resource is written to Bob’s storage. This is highly inefficient: it touches a lot of registers and generates a Merkle proof for each touched register. And this is one of the most common use cases (sending an NFT, moving an NFT collection, etc.). (@rrrkren may have some comments on moving NFT collections.)

When a developer creates an array of resources, does each array element track its own storage or do we track the storage of the array as a whole?

  • depending on the approach we take, we might undermine array inlining, potentially leading to a much less efficient storage representation and significant performance degradation

Can you expand on this? How could it undermine array inlining, considering the size will stay the same? (in the UFix64 feesPaid case)

I am totally happy with the storage token idea (if it means it will be the faster solution). It will provide a better UX than the current one, and I think it will be easier to implement.

Anything where the creator pays and the user doesn’t have to worry about it satisfies the needs, I suppose.


Thanks @bluesign for the detailed response and follow-up questions. While I am happy to share my thoughts on the points you brought up, I am not sure how much value my answer will actually provide, given my limited understanding of Cadence internals. I hope Bastian can spare a bit of time to help us out with facts (as opposed to the educated speculation on Cadence internals that I can provide). In a nutshell, all I wanted to point out is the complexity of the task.


As a side note (totally off topic): having spent most of the weekend analysing current storage, I found that most of the state bloat is caused by pre-minting excessive amounts (80-90 million resources for one account, which is about 35 GB of storage).

Unfortunately, it is a pattern TopShot used before, which other developers copied. The top 20 accounts, with 75 GB of storage, account for 35% of the state (which is around 205 GB in total).


Switching from the current storage-rent model to another model, be it the common model of paying for storage as part of the transaction (as in e.g. Ethereum) or the “writer/creator pays” model discussed here, has been discussed before, most recently as part of the resource force-deletion problem.

Some notes from those previous discussions:

  • As pointed out above, other blockchains that adopted this model, like EOS, suffer(ed) from speculation
  • Storage is not free, so financial pressure to delete data is needed
  • @dete had described how the current model has its problems (e.g. a user running out of storage) but is the only workable solution. Unfortunately, the notes were not detailed enough here, as it was just a side discussion of the resource force-deletion problem, but maybe he could jump in and provide some more details

Hey everyone, sorry it has been quiet from our end. It’s a technically complex area, and finding the time to engage at the necessary technical depth to move this exploratory discussion forward is a bit challenging right now.

What you suggested is a really interesting and important avenue, which coincidentally could also solve some challenges in other areas. We really want to pursue this, though I would expect that us all together working through the details will take a decent amount of time. After all, overlooking important but subtle details might also have significant impact.

Hence, my question:
Would you be ok with the proposal in FLIP 66 - Revisiting Flow storage minimum account balance as an interim solution? It’s not an ‘either one or the other’ decision. I feel that implementing FLIP 66 would give us the necessary time to work through more advanced proposals.

Appreciate your thoughts, and thanks for your time and all your contributions :heart:


@AlexTheArchitect FLIP 66 is a must in any case. Is there any chance we increase the storage fees 100x (as FLIP 66 suggests) but also grant a fixed amount of storage per account?

As I pointed out earlier, the offenders are really just a few accounts. So if we tweak the formula a little, say each account gets 100 kB of storage space + 1 MB per 1 FLOW, we won’t even need to update the storage fees for a long time, and normal users will not be affected at all.
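The tweaked formula is simple enough to state in one line; a quick sketch with the numbers from above (the 100 kB grant and 1 MB/FLOW rate are the proposal's example values, not anything implemented):

```python
FREE_KB = 100       # fixed per-account grant (proposed example value)
KB_PER_FLOW = 1024  # 1 MB of capacity per 1 FLOW held

def storage_capacity_kb(flow_balance):
    """Capacity under the tweaked formula: fixed grant + balance-based part."""
    return FREE_KB + flow_balance * KB_PER_FLOW
```

An account holding zero FLOW would still have 100 kB of usable space, so small user accounts never break, while large pre-minting accounts still pay proportionally to their footprint.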

I don’t think people will create accounts to abuse this. Most dapps (using 10 GB+ of storage) keep everything in one account because it is convenient. It isn’t even worth the hassle of creating extra accounts and mapping what is stored where.