What data should be in the Flow blockchain and what data should be in a standard DB for my project?


I want to know whether 100% of my project’s data should be on the blockchain, or whether it makes sense for some of it to live on the blockchain and the rest in a standard database.

Quick summary of my project:

It is very similar to Reddit Place: a 1000×1000-pixel HTML canvas where users can paint each pixel in realtime. Meaning that, if I paint pixel #123456 blue, it visually changes for everyone. Of course, I am able to paint as many times as I want, with any color I want (from a color palette tool), on that pixel. The goal is to create beautiful pieces of art and to compose with other artists’ creations.

Where is the blockchain stuff here? Well, every pixel is an NFT; you can buy and sell them.
At project launch, the canvas is 100% white and users can buy pixels at a fixed price (in FLOW). Doing so makes them owners of the pixels they bought, which allows them to paint on those pixels. In other words, you have to own a pixel to paint on it. As an owner, you can also grant specific users access to your pixels so they can paint on them.
An owner can list his pixels for sale at whatever price he wants.
A user can buy an owner’s pixels directly at the price the owner set. The user can also bid on a pixel; if he does, the owner receives a notification of the bid, which he can accept, reject or ignore.

A pixel holds several pieces of information: owner address, owner username (or anonymous), artist address, artist username (or anonymous), current color, ID, last sale price or null (if not sold yet), current sale price, and a bidding list if applicable.
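For concreteness, that per-pixel record could be sketched as a TypeScript type. All field names and the sample values below are my own guesses, not from any real Flow contract:

```typescript
// Hypothetical shape of a single pixel record (names are illustrative).
interface Bid {
  bidder: string;      // bidder's Flow address
  amount: number;      // offer in FLOW
}

interface Pixel {
  id: number;                      // 0..999_999 for a 1000x1000 canvas
  ownerAddress: string;
  ownerUsername: string | null;    // null => anonymous
  artistAddress: string;
  artistUsername: string | null;
  color: string;                   // e.g. "#0000ff"
  lastSalePrice: number | null;    // null if never sold
  currentSalePrice: number | null; // null if not listed
  bids: Bid[];
}

// Example: the blue pixel #123456 from the description above.
const pixel: Pixel = {
  id: 123456,
  ownerAddress: "0x01",
  ownerUsername: "alice",
  artistAddress: "0x01",
  artistUsername: "alice",
  color: "#0000ff",
  lastSalePrice: null,
  currentSalePrice: 5.0,
  bids: [{ bidder: "0x02", amount: 4.5 }],
};
```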

So the project requires realtime changes, with possibly hundreds or thousands of requests per second (I’m referring to the painting requests). I know this is a lot of requests; it is “easily” doable with websockets and a standard database, but I’m unsure how it would work with the Flow blockchain.

My question is: should I consider putting some data in a standard DB and adding a standard websocket system? Should the canvas matrix and pixel data live in a DB, with the blockchain part (the smart contracts) only involved when performing a buy/sell?
Maybe the data related to colors should belong in the standard DB, and the data related to ownership and price should belong in the smart contracts?

More generally, how would you approach this project in a technical manner, maybe I’m missing something?

Thanks in advance! :slight_smile:


Good morning. This is a great set of questions. I’m going to come at this from the perspective of actually going from concept to scale. Hopefully we will touch on how to get started and work our way up to how to scale this.

The first step of this sort of thing is always the contracts; the contracts are the foundation of everything. You will need to think about what ownership means (do the accounts own the pixels, or do they own the right to update the pixels?), how the data for each pixel will be stored, and where — which will in turn determine who pays for storage (1000 * 1000 * (6 bytes of color) ~= 6 MB, I think, which currently costs a minimum of 3 FLOW). Where things are stored, and in what structure, will also dictate how you can query things. This is also the time to decide how open or closed your system is: how can people buy these initial pixels from you? Is it an auction? Can other people sell them in the same way? Does listing a pixel temporarily void ownership, or is it a transfer capability that gets listed, and does all that need to be another contract? How will granting other users the ability to change the color of a pixel work?
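The storage figure in that parenthesis is easy to sanity-check with quick arithmetic (the 6-bytes-per-pixel figure is the estimate from above; the FLOW conversion is left out here, since storage pricing can change):

```typescript
// Back-of-the-envelope estimate of raw color data for the canvas.
const width = 1000;
const height = 1000;
const bytesPerPixel = 6; // e.g. "0000ff" stored as 6 ASCII bytes

const totalBytes = width * height * bytesPerPixel; // 6,000,000 bytes
const totalMB = totalBytes / 1_000_000;            // ~6 MB

console.log(`${totalMB} MB of raw color data`);
```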

Once you have the main contract in place you can start on the user interface, which may lead to more updates to the contract — stuff like helper functions that make it easier to interact with. You may also find that your initial assumptions in the contracts aren’t working the way you thought they would; that’s okay, that sort of experience is super valuable and will make future contract work much easier. I would personally start by trying to display the pixels on the screen, updating things directly in the contract as needed and making sure they are represented on the screen correctly. Can you make the interface subscribe to events emitted by the contract and update what you are seeing, decoupling the writes from the reads?

Next I would focus on the updating functionality: what does that look like? Can you update more than one pixel at a time? What does buying look like from an interface perspective? Can you buy from multiple people at once? The contracts are going to need to reflect these sorts of abilities — notice how everything keeps coming back to the contracts. These transactions should be doable directly from the browser via FCL; when they emit an event that updates the pixels, the site should pick up those events and update what we see.
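As a sketch of the batch-update rules implied by the original question (you must own a pixel, or have been granted access, to paint it) — all names are hypothetical, and the real checks would live in the Cadence contract:

```typescript
// In-memory model of pixel ownership and granted editors.
type Canvas = Map<number, { owner: string; color: string; editors: Set<string> }>;

// Apply a batch of paint requests, skipping pixels the caller
// neither owns nor has been granted access to.
function paintBatch(
  canvas: Canvas,
  caller: string,
  updates: { id: number; color: string }[],
): number[] {
  const rejected: number[] = [];
  for (const { id, color } of updates) {
    const px = canvas.get(id);
    if (px && (px.owner === caller || px.editors.has(caller))) {
      px.color = color;
    } else {
      rejected.push(id);
    }
  }
  return rejected; // ids the caller was not allowed to paint
}
```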

From this point I would start looking into scaling. Your core mechanisms are there, and we can start targeting different ways of optimizing different parts. Here are a couple of ways we can start to do that.

We could have a server that listens for pixel-update events and keeps track of what the current image looks like; we generally call this sort of thing a projection. This service can live in isolation from everything else and support a very light, specific, read-only API giving you the data in a way more tailored to your interface.
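A minimal in-memory version of such a projection might look like this. The event shape is invented for the sketch; a real service would feed it from a chain event listener:

```typescript
// A minimal projection: fold pixel-update events into the
// current image state, and serve reads from that state.
interface PixelUpdated {
  pixelId: number;
  color: string;
  blockHeight: number;
}

class CanvasProjection {
  private colors = new Map<number, string>();
  private lastHeight = 0;

  // Consume one event (e.g. delivered by a chain event listener).
  apply(ev: PixelUpdated): void {
    if (ev.blockHeight < this.lastHeight) return; // ignore stale events
    this.colors.set(ev.pixelId, ev.color);
    this.lastHeight = ev.blockHeight;
  }

  // The read-only API the interface would query.
  colorOf(pixelId: number): string {
    return this.colors.get(pixelId) ?? "#ffffff"; // canvas starts white
  }
}
```

The point of the class is exactly the read/write decoupling described above: writes are transactions on chain, and reads never touch the chain at all.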

We could take that one step further and create an actual image every time a pixel changes; this can be heavily optimized, and could mean the image becomes shareable.

Next we could take that image produced whenever a pixel changes and store it on IPFS, as well as update a value on chain with the new image’s hash. This “service” now starts looking more like an “oracle”. If you were to keep a history of these IPFS hashes, you could technically also add to your interface the ability to scrub through the history of your 1000*1000 pixel image. Make sure you are pinning those hashes though :slight_smile: Don’t want them to go away.
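The hash-history idea can be sketched as an append-only log of (block height, CID) pairs, with a lookup for the scrubber. The CID strings here are placeholders, not real IPFS hashes:

```typescript
// Append-only history of canvas snapshots, keyed by block height,
// so the UI can scrub through time.
const history: { height: number; cid: string }[] = [];

function recordSnapshot(height: number, cid: string): void {
  history.push({ height, cid }); // assumed called in height order
}

// Latest snapshot at or before a given block height, or null if
// the canvas had no snapshot yet at that point.
function snapshotAt(height: number): string | null {
  let found: string | null = null;
  for (const s of history) {
    if (s.height <= height) found = s.cid;
    else break; // history is sorted by height
  }
  return found;
}
```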

At this point it probably doesn’t make sense to use the chain for querying data, other than getting the latest IPFS hash (or the history of IPFS hashes). Instead, your interface will probably want to focus more on what it looks like to update pixels and buy pixels; maybe you could have overlays that show what you own, what you can edit and what you can buy. Each of these could be queried directly from chain (maybe, if the contracts make it convenient), or they could come from additional projections — services that provide their own API to query the data.
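Those overlays are just pure functions over the projected pixel data; something like the following (the record shape is assumed for the sketch):

```typescript
// Projected pixel record, as a projection service might expose it.
interface PixelView {
  id: number;
  owner: string;
  editors: string[];       // addresses granted paint access
  salePrice: number | null; // null => not listed
}

// Compute the three overlays for the viewing user's address.
function overlays(pixels: PixelView[], me: string) {
  return {
    owned: pixels.filter(p => p.owner === me).map(p => p.id),
    editable: pixels
      .filter(p => p.owner === me || p.editors.includes(me))
      .map(p => p.id),
    buyable: pixels
      .filter(p => p.salePrice !== null && p.owner !== me)
      .map(p => p.id),
  };
}
```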

A really important thing to remember is that at no point should you ever need to submit a transaction on your users’ behalf. They submit those transactions in the browser via FCL, or directly on the chain (and while they may get some optimistic updates); the thing that truly matters is that your backend services are consuming events and either acting on those events or updating optimized ways of querying that data.

If eventually you find yourself with lots of little projection services consuming events, saving things in databases and exposing APIs to that information, you could then add an additional layer, like GraphQL or a REST API, that consolidates your interfaces under a single service and proxies to the appropriate projection service.

Fast forward a little bit: your projection services are getting smashed. You have your GraphQL service out in front proxying requests back to your projection services, but they just can’t keep up with the number of requests. You can look at the individual projection that is having trouble, figure out a way to shard or horizontally scale it while providing the same external interface, spin it up beside the one that is currently under load, play the events into it to bring it up to parity with the current one, test it, then point the GraphQL layer at the new projection service — keeping the older one around until you are 100% confident, and eventually retiring the old one. As new bottlenecks in the projections are discovered, or you expand to new and exciting features, this same pattern works over and over again.
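That replay-to-parity step can be sketched in a few lines (event and projection shapes are simplified here): replay the same event log into the fresh instance, then verify it matches the live one before switching traffic.

```typescript
// Simplified event and projection for the replay sketch.
type Ev = { pixelId: number; color: string };

class Projection {
  state = new Map<number, string>();
  apply(ev: Ev) { this.state.set(ev.pixelId, ev.color); }
}

// Replay an event log into a target projection and return it.
function replay(log: Ev[], target: Projection): Projection {
  for (const ev of log) target.apply(ev);
  return target;
}

// Check that two projections agree before cutting traffic over.
function atParity(a: Projection, b: Projection): boolean {
  if (a.state.size !== b.state.size) return false;
  for (const [k, v] of a.state) if (b.state.get(k) !== v) return false;
  return true;
}
```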

I hope this helps in some way. If you take one thing away from it, it should be: focus on the contracts first and play with them to get comfortable. Everything starts with the contracts, because everything else is just an optimization/projection/interface/window into them.

TL;DR - focus on contracts, think of everything else as optimizations and optimize as needed to get the experience you want.


Thanks for an extremely interesting post, I’m building a similar project.

Could you give some guidance on how to do this? My understanding is that the record would need to be kept in the contract, and that it’s not possible to query the chain as a whole. Have I understood this correctly?
Would it be a case of storing a dictionary in the contract that updates with every transaction?

many thanks