Reducing the default maximum script execution effort on the execution node to 9999 computation units

Scripts are a lightweight method to query chain data. Unlike transactions, scripts can only read chain data and cannot modify it. Also unlike transactions, scripts do not require any transaction fees. For more information about what scripts are and how to use them, please refer to this link.
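To illustrate, here is a minimal sketch of running a read-only script through the Flow Go SDK. The package path, the public access node endpoint, and the `pub fun main` Cadence syntax are assumptions based on the SDK and Cadence versions I'm familiar with; adjust for your own setup.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/onflow/flow-go-sdk/access/grpc"
)

func main() {
	ctx := context.Background()

	// Connect to a public mainnet access node (assumed endpoint).
	flowClient, err := grpc.NewClient("access.mainnet.nodes.onflow.org:9000")
	if err != nil {
		log.Fatal(err)
	}

	// A read-only Cadence script: it can inspect chain state but cannot
	// modify it, and no transaction fee is charged to run it.
	script := []byte(`
		pub fun main(): UInt64 {
			return getCurrentBlock().height
		}
	`)

	result, err := flowClient.ExecuteScriptAtLatestBlock(ctx, script, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block height:", result)
}
```

Running a script like this consumes execution effort on whichever node executes it, which is exactly what the limit discussed in this proposal caps.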

Scripts serve well for several use cases such as validating a transaction before submitting it, chain auditing, etc. (more examples here). However, scripts are currently executed on the execution node, and hence they compete with transactions for resources (CPU, memory, etc.) on the execution node.

Currently, the default maximum execution effort for a script is 100K, which means a user can submit a script that uses up to 100K computation units on an execution node. Compare this to the maximum execution effort for a transaction, which is currently set to 9999.

I propose that the default maximum script execution computation limit set on an execution node be reduced from the current 100K limit to 9999 (the same as the maximum for a transaction).

The following are the main reasons for this change:

  1. Fair play

    As mentioned earlier, the user does not have to pay any fees to execute a script. Hence, script execution effort must be capped at a reasonable limit to ensure all users get an equal opportunity to run scripts on the network.

  2. Reduce the burden on the execution nodes

    The execution node's raison d'être is to execute transactions. Script execution on the execution node was an interim solution until it could be offloaded to the edge nodes - access nodes and observer nodes. Script execution takes away resources on the execution nodes (memory, CPU, etc.) that could otherwise be used for transaction execution. Reducing the maximum execution effort ensures that the execution nodes can focus on transaction execution, improving overall network availability.

Impact of the change

  1. The change will impact only around 4% of the scripts

After the change, scripts that require more than 9999 in execution effort will start failing. Let's look at what percentage of scripts will be impacted.

The following table describes the computation units used by scripts on an execution node on mainnet on June 1st, looking back 90 days and averaging over a one-day time interval.

| Percentile | Computation units |
| --- | --- |
| 96th percentile | 9776 |
| 95th percentile | 9604 |
| 50th percentile (median) | 144 |
| Average (mean) | 3526 |

As can be seen, 96% of all scripts that were executed used less than 10K computation units.

Hence, the proposed change will only affect around 4% of the submitted scripts.

Currently, scripts are executed at an average rate of ~30 per second (looking back 90 days). Hence, this change only impacts ~2 scripts per second (roughly 4% of ~30).

For comparison, here are the computation units used by transactions:

| Percentile | Computation units |
| --- | --- |
| 99th percentile | 4853 |
| 95th percentile | 3860 |
| 50th percentile (median) | 207 |
| Average (mean) | 870 |

Computation units used by transactions are well below the maximum permissible limit of 9999 and also lower than those used by scripts.

The data was reported for the script_computation_used and transaction_computation_used metrics by the two execution nodes run by the foundation. Refer to the Appendix for more data.

Scripts that use more than 9999 computation units can be refactored to use fewer computation units (a sketch of one such revision follows this list). Here are some best practices for revising such scripts.

  2. Script execution on the access node

Script execution on the access node is now ready! Access nodes can now sync execution state data and optionally execute scripts locally, without forwarding the script request to an execution node. This is a major unlock, as it enables unbounded, non-rate-limited script execution on a self-hosted access node. It also allows users to run scripts that require more than 10K computation units on their self-hosted access nodes.
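To illustrate the revision mentioned under point 1 above, here is a sketch of one way a heavy script can be paged. The Cadence script, the pagedBalances helper, and the page size are hypothetical examples, not taken from the linked best practices; the idea is simply that a query which would otherwise iterate a large list of accounts in a single call can accept the list as an argument and be executed in fixed-size pages, so each call stays well under the limit.

```go
package main

import (
	"context"
	"log"

	"github.com/onflow/cadence"
	flowsdk "github.com/onflow/flow-go-sdk"
	"github.com/onflow/flow-go-sdk/access/grpc"
)

// Hypothetical script: reads the FLOW balance of each address passed in.
// Paging the address list keeps every individual call small.
const balanceScript = `
pub fun main(addresses: [Address]): {Address: UFix64} {
    let balances: {Address: UFix64} = {}
    for addr in addresses {
        balances[addr] = getAccount(addr).balance
    }
    return balances
}
`

// pagedBalances executes balanceScript in fixed-size pages so that each
// script call stays well under the per-script computation limit.
func pagedBalances(ctx context.Context, c *grpc.Client, addrs []flowsdk.Address, pageSize int) error {
	for start := 0; start < len(addrs); start += pageSize {
		end := start + pageSize
		if end > len(addrs) {
			end = len(addrs)
		}

		// Convert this page of addresses into Cadence values.
		page := make([]cadence.Value, 0, end-start)
		for _, a := range addrs[start:end] {
			page = append(page, cadence.NewAddress(a))
		}

		result, err := c.ExecuteScriptAtLatestBlock(ctx, []byte(balanceScript),
			[]cadence.Value{cadence.NewArray(page)})
		if err != nil {
			return err
		}
		_ = result // merge the page's results client-side
	}
	return nil
}

func main() {
	// Placeholder endpoint: a public access node, or a self-hosted access
	// node with local script execution enabled.
	c, err := grpc.NewClient("access.mainnet.nodes.onflow.org:9000")
	if err != nil {
		log.Fatal(err)
	}

	var addrs []flowsdk.Address // fill with the accounts to inspect
	if err := pagedBalances(context.Background(), c, addrs, 50); err != nil {
		log.Fatal(err)
	}
}
```

The same client can instead be pointed at a self-hosted access node with local script execution enabled, in which case the execution node limit proposed here does not constrain the script at all.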

Note

  1. The proposal is to change the default computation limit for scripts. However, an execution node operator can override this default and choose a different, possibly stricter, limit.

  2. This proposal should not be confused with the rate limits on the script execution API call. Those limits define the acceptable rate of requests per second and are set by access node operators.

Rollout

The change can be rolled out in stages, going from 100K (the current limit) to 50K first, then, after a few weeks, from 50K to 25K, and eventually to 9999. Such a gradual rollout will give everyone time to revise their scripts.

I would appreciate your feedback and am more than happy to answer any questions or gather more data.

Thank you,

Vishal

Appendix

The following is a heat map showing the distribution of computation used by scripts over time, confirming that only a small percentage of scripts use more than 10,000 computation units.


Link to the earlier Discord discussion on this topic


I think this is very much needed. Currently, scripts can create a very heavy load.
