Using Variants with data from a JSON HTTP request

GraphQL is a great fit with Reason because we have a typed protocol. There's no need to write decoders, and if you use Query.Raw.t it's even zero-cost (when performance is an issue); otherwise it's a very small conversion, because the ReScript types map quite well.
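
For context, here's a minimal sketch of what that looks like with graphql-ppx; the query, schema fields, and module name are made up for illustration:

```rescript
// Hypothetical query; graphql-ppx generates the types (and a parse function)
// from it at build time, validated against your graphql_schema.json.
module UserQuery = %graphql(`
  query UserQuery($id: ID!) {
    user(id: $id) {
      id
      name
    }
  }
`)

// UserQuery.Raw.t mirrors the JSON response as-is (the zero-cost path),
// while UserQuery.t uses the more idiomatic ReScript representation.
```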

This really makes Reason super productive AND typesafe. The main benefit of Reason/ReScript over TypeScript, for us, is that it eliminates what used to be the source of most of our bugs: a mismatch between the data a component expects and what it actually gets (even with GraphQL, it's easy to forget to include a field in a fragment, or to forget that a field is nullable).

graphql-ppx does make the build a little slower, but mostly because it's doing work that is valuable. Thanks to native tooling it's still incredibly fast: my benchmarking shows it's an order of magnitude faster than ReScript itself in most files that contain queries. So it mainly makes builds slower because it produces code that you didn't have to write yourself (otherwise you'd have to type out your whole query yourself, and still get crashes, because typing it by hand is error-prone).

6 Likes

I've heard some people talk about this, but I've never seen an actual reproduction of it with recent versions of graphql-ppx, so it's mostly FUD. Otherwise I'd be happy to help diagnose!

If you deal with a lot of REST, ReScript is not great tbh; that's one of the weak points of the language. But it really shines when you have a typed data layer like GraphQL combined with graphql-ppx. The great thing is that GraphQL is becoming ubiquitous.

BTW, if you are on a legacy REST stack, in my opinion the best way to interface with it is to just write the data types as records, just like you would write externals (and only include the fields that you need). I think that's kind of the point @ryyppy is making. (But I haven't dealt with REST on the frontend in years - fortunate position, I know.)
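
A minimal sketch of that approach (the record and field names are made up): declare only the fields you actually read, and cast the already-parsed JSON to the record without a runtime check, just as you would trust an external binding.

```rescript
// Hypothetical payload type: only the fields we actually use are declared.
// Any extra fields in the JSON are simply ignored, like with externals.
type user = {
  id: string,
  name: string,
}

// Zero-cost cast from parsed JSON to our record; there is no runtime
// validation, so we are trusting the endpoint to match this shape.
external unsafeUserOfJson: Js.Json.t => user = "%identity"

let greet = (json: Js.Json.t) => {
  let user = unsafeUserOfJson(json)
  "Hello, " ++ user.name
}
```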

Could we please put that in a separate topic?

1 Like

I’ve been in similar situations to the one you describe (where you cannot get proper specs for what you get), although not at that scale.

Well, here's the deal with manually written types and decoders: you can write them on a per-need basis and ignore the rest. So maybe, especially for third-party APIs with massive amounts of data, manually written decoders strike a better balance. It's the same as JS library interop, actually: you write bindings for what you need, not for everything that is there.

I guess my point is: even situations where "spec your endpoints and generate the types" doesn't make sense don't mean you should go by assumptions alone, because fine-tuning things "as you find errors" might be easier if your data goes through decoders (you can log decoding errors, etc.). I mean, if you write them manually, decoders are assumptions anyway, so why not fine-tune them?
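
As a minimal sketch of that (field names are hypothetical, using only the Js.Json module from the standard library), a hand-written decoder can return a result, so failures get logged where they happen instead of crashing somewhere downstream:

```rescript
type user = {
  id: string,
  name: string,
}

// Decode one field as a string, or report which field was missing/wrong.
let stringField = (dict, key) =>
  switch dict->Js.Dict.get(key)->Belt.Option.flatMap(Js.Json.decodeString) {
  | Some(s) => Ok(s)
  | None => Error(`expected string field "${key}"`)
  }

let decodeUser = (json: Js.Json.t): result<user, string> =>
  switch Js.Json.decodeObject(json) {
  | None => Error("expected an object")
  | Some(dict) =>
    switch (stringField(dict, "id"), stringField(dict, "name")) {
    | (Ok(id), Ok(name)) => Ok({id, name})
    | (Error(e), _) | (_, Error(e)) => Error(e)
    }
  }

// The caller can log failures as they show up and fine-tune the decoder.
let useUser = json =>
  switch decodeUser(json) {
  | Ok(user) => Js.log("got user " ++ user.name)
  | Error(msg) => Js.Console.error("decoding failed: " ++ msg)
  }
```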

2 Likes

BTW, we have an app built on ReScript and graphql-ppx that has a near-zero crash rate :tada:. Crashes were really a large concern before we introduced ReScript and graphql-ppx (even with TypeScript), especially since an app in the app store needs to pass review before a bug fix can ship. So a typesafe data layer is worth something!

3 Likes

For REST there is the great atdgen. We use a little script to convert JSON Schema to .atd, and atdgen creates the types/decoders/encoders. So backend and frontend (sitting in a monorepo) always have the same "contract".

The only problem is that you have to learn a separate language (ATD), which is not even OCaml (but very similar). It would be nice to have something like this in a ReScript syntax, or even one step further: directly converting JSON Schema to .res/.resi.
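
Purely to illustrate that wish (no such generator exists as far as I know), the .res it emits for a tiny hypothetical schema might look roughly like a record plus a decode/encode pair:

```rescript
// Hypothetical output of a JSON-Schema-to-ReScript generator.
// The schema and field names are made up for illustration.
type order = {
  id: string,
  total: float,
}

let decodeOrder = (json: Js.Json.t): option<order> =>
  switch Js.Json.decodeObject(json) {
  | Some(d) =>
    switch (
      d->Js.Dict.get("id")->Belt.Option.flatMap(Js.Json.decodeString),
      d->Js.Dict.get("total")->Belt.Option.flatMap(Js.Json.decodeNumber),
    ) {
    | (Some(id), Some(total)) => Some({id, total})
    | _ => None
    }
  | None => None
  }

let encodeOrder = (o: order): Js.Json.t =>
  Js.Json.object_(
    Js.Dict.fromArray([
      ("id", Js.Json.string(o.id)),
      ("total", Js.Json.number(o.total)),
    ]),
  )
```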

8 Likes

If you have a specific case you can open a topic for sure; I'm happy to dive into performance issue(s) with specific queries. I just wanted to react to the unfounded claims about graphql-ppx that I saw in your post.

Of course, different scenarios call for different concrete solutions. Here are some factors to consider:

  • If we have a massive data structure (as was pointed out earlier), we're almost never going to decode the entire structure; we're going to walk through it and pick out only the parts we're interested in. Decoders can provide that code, instead of us writing it by hand.
  • Generated JS size may be an issue, but imho a well-designed solution (for the travel deal search scenario) is not going to do this deal search on the client side; it's going to do it on the server, so the user would never notice.
  • PPXs may slow down compilation, but I bet what slows it down even more is not using interface files. Without an interface file, the compiler has to calculate the effective interface (.resi/.rei/.mli) of every source file by recompiling the implementation (.res/.re/.ml) every time the implementation is touched. With an explicit interface file, it only has to recompile when the interface itself changes (see the sketch after this list). For best results, also turn on -opaque during development and the compiler will be even more aggressive about this [EDIT: ReScript doesn't support -opaque]. I promise you, people who are serious about saving compile time are writing interface files.
  • If PPX perf is still not acceptable, then we also have great libraries like bs-json that offer powerful, compositional functions for writing JSON encoders/decoders (there is a small example after this list as well).
  • Decoding is (thankfully) becoming more popular in the TypeScript world with libraries like io-ts, and it has been the norm for a long time in Elm with elm/json.
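
On the interface-file point above, here's a minimal sketch (module and function names are hypothetical): the .resi fixes the public surface, so editing the body of the .res doesn't force dependents to recompile as long as the .resi stays the same.

```rescript
// Deals.res (implementation): internals can change freely.
type deal = {price: float, destination: string}

let findCheapest = deals =>
  deals->Belt.Array.reduce(None, (best, d) =>
    switch best {
    | Some(b) if b.price <= d.price => best
    | _ => Some(d)
    }
  )
```

```rescript
// Deals.resi (interface): only this surface is visible to other modules,
// so dependents recompile only when this file changes.
type deal = {price: float, destination: string}

let findCheapest: array<deal> => option<deal>
```

And on bs-json, a small sketch of a compositional decoder (the payload shape is made up); Json.Decode raises DecodeError on a mismatch, so it's easy to catch and log:

```rescript
// A sketch using the bs-json library; the payload shape is hypothetical.
type deal = {
  price: float,
  destination: string,
}

let decodeDeal = (json): deal => {
  open Json.Decode
  let price = field("price", float, json)
  let destination = field("destination", string, json)
  {price, destination}
}

// bs-json raises Json.Decode.DecodeError on a mismatch, so callers can
// catch it, log the message, and fall back gracefully.
let safeDecodeDeal = json =>
  switch decodeDeal(json) {
  | deal => Some(deal)
  | exception Json.Decode.DecodeError(msg) =>
    Js.Console.error("decoding failed: " ++ msg)
    None
  }
```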

Finally, if we are scraping third-party sites then we can expect a certain amount of breakage from time to time; that doesn't mean decoders are bad, in the same way that HTML parsers are not bad just because websites change their DOM from time to time and break scrapers.

4 Likes

Thanks, I didn't know about the perf consequences of having interface files, even though it sounds rather obvious now. But as for -opaque, is it available in ReScript?

My mistake, -opaque is not available in ReScript. But as far as I know ReScript works in the same way as upstream OCaml w.r.t. looking at interface files.

EDIT: I wanted to mention one more thing: another easy way to speed up builds, especially in CI, is to check the ReScript JS outputs into the repo. ReScript will not rebuild outputs that are already present (at least last time I checked). In fact, committing ReScript outputs has been a recommendation for a long time.

I didn't know that either :slight_smile: Isn't committing ReScript outputs basically caching? Well, I guess if the output is deterministic, there's no harm in that. The only potential problem I see, off the top of my head, is where you've upgraded ReScript but haven't rebuilt your project, which is a rather strange moment to commit anyway (and a rebuild can be enforced by npm hooks).

Yeah, you could look at it as basically caching. ReScript compiler output is indeed deterministic (assuming projects lock to a specific version of bs-platform). And yes, when upgrading bs-platform, make sure to run -clean-world and then -build-world.

1 Like

I just tried checking in the .bs.js files and my CI broke. I wonder what I did wrong.

Would you mind creating a new thread? I’d be happy to discuss.

@yawaramin never mind. I reopened the GitHub pull request and everything built fine. It was probably something cached.