I don’t know about you, but the one I trust the least is ME. I have already been bitten hundreds of times by a new field appearing on a record, or by me changing the type of a (deeply) nested field, so many times that I don’t even want to try to remember them all. It happened to me a lot when I was using Firebase, a NoSQL database where, guess what? The data is schema-less, unstructured, as wild as it can be.
Nah, I don’t trust the database, I don’t trust the wire, and above all of those things, I don’t trust myself.
Tests? Yes, they are valuable. But no test will protect you against runtime errors when the API you are contacting has changed overnight, or the value parsed from local storage has been altered, or you thought you knew your data well enough and manually input the wrong field (guess who has suffered this several times).
On top of all the things I wrote… if using decco/spice were significantly more work, we could have a discussion. But the thing is that it is even less effort: less code to write and maintain, for a better outcome. For me it is just a no-brainer.
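To give an idea of what I mean by “less code” (a minimal sketch, assuming decco’s documented behaviour where the `@decco` attribute generates `user_decode`/`user_encode` for the annotated type; the `user` record and the storage scenario are made up for illustration):

```rescript
// A minimal sketch, assuming decco's @decco attribute, which generates
// user_decode and user_encode for the annotated type.
// The `user` record and `raw` payload are invented for the example.
@decco
type user = {id: int, name: string}

let fromStorage = (raw: string): option<user> =>
  switch raw->Js.Json.parseExn->user_decode {
  | Belt.Result.Ok(user) => Some(user)
  | Belt.Result.Error(err) => {
      // Fail early, with a structured decode error, instead of getting a
      // mysterious "undefined is not a function" somewhere down the line.
      Js.log2("decode error:", err)
      None
    }
  }
```

That is roughly the whole cost: one attribute on the type, and one place where the failure case is handled explicitly.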
I have been there hundreds of times @DZakh. On top of that, the very well written article “Parse, don’t validate” has changed the way I see data (and my life as a programmer).
In what way? Off the top of my head, an SQL db has a schema, but what guarantees that the schema always matches the one your code expects? (Outside of codegen scenarios.)
Isn’t one of the “advantages” of NoSQL that it has a very flexible schema? So flexible, I think, that records don’t even have to share the same structure. A properly designed SQL database schema is the opposite of flexible; sure, you can change it, but then all the data has to conform to the new schema.
I’m not trying to convince anyone, nor to waste anyone’s time in a discussion about how to handle your data.
If you think you are safe that way, sure, go for it. Still, you will have to send the data over the wire, so if you also want to trust the wire, again, go for it.
Yes, but that schema can (and likely will) be changed without updating the code, especially if the schema is owned by a different team. So now you have to think about syncing schema migrations with code updates, about deployment order, and about failing gracefully when your client version doesn’t match the API version. And to fail gracefully, it’s a good idea to fail early, as @DZakh mentioned.
If one team is changing the db schema without the other team knowing about it, and worse, your deployment pipeline isn’t catching it, you’ve got bigger problems than validating data at the client.
I don’t know which companies you have worked at, but in my experience this kind of communication problem is more common than not, and I would not call it “a bigger problem than using a validation library”, especially because that sentence suggests very bad things IMO.
Please, put the pitchforks and torches back into the barn - where they belong…
A case can be made for both sides, I agree. But I believe there is no single right decision here (to use some serialization lib or not); it all depends on the requirements, the actual reality of the team(s) working on it, the tradeoffs the team is willing to make, and finally simply preferences.
In the end (in both approaches), you can’t show/process the data if you receive something different than expected. The difference is in how you do the error handling, and in performance.
I found the thread ReScript decoding libraries benchmark interesting with regard to the performance aspect.
We (a colleague of mine at work and I) are currently researching the code-gen approach (slowly, as a side project) because I want to get rid of all ppxes in our code base in the long run. I believe this should hit quite a sweet spot regarding dev experience and performance. (Currently we rely heavily on decco, but if we had fewer and simpler data structures, I’d be tempted to write my own simplest possible decoder.)
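To illustrate the kind of “simplest possible decoder” I have in mind (a rough sketch using only `Js.Json` and `Belt` from the standard library; the `user` type and the sample payload are invented for the example):

```rescript
// A sketch of a hand-written decoder, using only Js.Json helpers.
// The `user` type is invented for the example.
type user = {name: string, age: int}

let decodeUser = (json: Js.Json.t): option<user> =>
  switch Js.Json.decodeObject(json) {
  | Some(obj) =>
    switch (
      obj->Js.Dict.get("name")->Belt.Option.flatMap(Js.Json.decodeString),
      obj->Js.Dict.get("age")->Belt.Option.flatMap(Js.Json.decodeNumber),
    ) {
    | (Some(name), Some(age)) => Some({name, age: Belt.Float.toInt(age)})
    | _ => None // missing field or wrong type: refuse the whole record
    }
  | None => None
  }

// Usage: parse first, then decode, and deal with the None case up front.
let user = switch Js.Json.parseExn(`{"name": "ada", "age": 36}`)->decodeUser {
| Some(u) => u
| None => {name: "anonymous", age: 0}
}
```

It gets verbose quickly for larger structures, which is exactly why the code-gen route looks attractive to us.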