Is using a functor for a fetch binding overkill or just right?

Motivation: because JavaScript’s fetch can return anything, a fetch binding isn’t very reusable. My solution here seems to work, but it’s a bit nerdy. Then again, isn’t this basically what functors are for, passing types to library code, since you can’t pass types to functions themselves, or do I have that wrong?

file Fetch.res

module Make = (
  I: {
    type resultType
  },
) => {
  module Response = {
    type t<'a>
    @send external json: t<'a> => Promise.t<'a> = "json"
  }
  @val external do: string => Promise.t<Response.t<I.resultType>> = "fetch"
}

file Main.res

module Fetcher = Fetch.Make({
  type resultType = array<AppData.nestedFolder>
})


Fetcher.do("/api/data")
->Promise.then(x => {
  Fetcher.Response.json(x)
})
->Promise.thenResolve(x => {
  setData(_ => x)  // React hook thing
})

Functors don’t just take type definitions; they take the complete module you pass in.
If the only thing you use a functor for is a single type definition, I don’t see any benefit over a simple type parameter.

Either create a binding for fetch with a type parameter in its return type and explicitly annotate it when using it.

Or, maybe better depending on your use case, create a module specific to your use case with a binding that has the explicit return type you need.

It all depends on what you’re after.
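
As a minimal sketch of the second option (the module name FolderApi and the binding name fetchFolders are made up; AppData.nestedFolder is the type from your example):

file FolderApi.res

module Response = {
  type t<'a>
  @send external json: t<'a> => Promise.t<'a> = "json"
}

// The return type is pinned to exactly what this endpoint delivers.
@val
external fetchFolders: string => Promise.t<Response.t<array<AppData.nestedFolder>>> = "fetch"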

Finally, if your example was stripped down and your functor actually does more than just wrap the fetch binding, it becomes more reasonable again.

OK, I removed the functor and just changed the type to 'a, and everything works the same. Sometimes OCaml’s type system is spooky to me.

module Response = {
  type t<'a>
  @send external json: t<'a> => Promise.t<'a> = "json"
}
@val external do: string => Promise.t<Response.t<'a>> = "fetch" // <-change here
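
If inference ever picks the wrong type, or you just want it spelled out, the call site can still be annotated explicitly; a sketch, assuming the same Response and do bindings and the AppData.nestedFolder type from before:

do("/api/data")
->Promise.then((res: Response.t<array<AppData.nestedFolder>>) => Response.json(res))
->Promise.thenResolve(data => Js.log(data)) // data is array<AppData.nestedFolder>
->ignore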

More correct bindings would be:

module Response = {
  type t
  @send external json: t => Promise.t<Js.Json.t> = "json"
}
@val external do: string => Promise.t<Response.t> = "fetch"

You should be parsing the JSON response, and that is where you should put the types you expect to get.
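
For instance, a hand-written decoder over Js.Json.t could look something like this; a sketch with a hypothetical folder type, using Js.Json.decodeObject and Js.Json.decodeString:

type folder = {id: string, name: string}

// Turn an untyped Js.Json.t into a typed record, or None if the shape is wrong.
let decodeFolder = (json: Js.Json.t): option<folder> =>
  switch Js.Json.decodeObject(json) {
  | Some(dict) =>
    switch (
      dict->Js.Dict.get("id")->Belt.Option.flatMap(Js.Json.decodeString),
      dict->Js.Dict.get("name")->Belt.Option.flatMap(Js.Json.decodeString),
    ) {
    | (Some(id), Some(name)) => Some({id: id, name: name})
    | _ => None
    }
  | None => None
  }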

The json method already parses the JSON in the response:

Note that despite the method being named json(), the result is not JSON but is instead the result of taking JSON as input and parsing it to produce a JavaScript object.

Yes, sorry, the term is not correct. I mean decoding. Take a look at spice, it’s great.

Can you explain more about this, like an ELI5?

Also, can anyone explain the differences between Js.Json.parseExn, Js.Json.deserializeUnsafe, and JSON decoders like decco/spice, and what each is supposed to be used for?

Don’t you think that’s too much if you have total control of the data?

For example, if you expect this

type rec nestedFolder = {
  id: string,
  name: string,
  parentId: Js.Null.t<string>,
  folders: array<nestedFolder>, 
}

The only “decoding” you really need to do is convert that potentially null parentId to an option, which you can do with Js.Null.toOption like this

switch Js.Null.toOption(row.parentId) {
| Some(id) => Js.log2("parentId", id)
| None => Js.log2("parentId", "None")
}

+1. Js.Null.toOption is something I am relying on as well for transforming JSON data into ReScript records. Is this an inferior approach?

I can explain a little and hopefully not do too much damage. JSON over the network is just a string, and when that string is parsed into an object (in your preferred language), it’s an extra pain when that language is typed.

When you parse JSON into an object in JS using JSON.parse, since JS is dynamically typed, it happily makes an object without any typed structure (just a bag of nulls, strings, and numbers). You deal with the typing later, when your app blows up.

But with something like ReScript, TypeScript, Go, etc., the language wants a type right away, dammit. So your option is to create a type and bind it to the data coming over the network.

Looks like parseExn isn’t very impressive, it’s just JSON.parse:

file.res

let j = `{"a":1,"b":"2"}`
let p = Js.Json.parseExn(j)
Js.log(p)

file.mjs

// Generated by ReScript, PLEASE EDIT WITH CARE
var j = "{\"a\":1,\"b\":\"2\"}";
var p = JSON.parse(j);
console.log(p);

I’m a little baffled by Js.Json.deserializeUnsafe; it seems to result in the same thing but is undocumented. It looks like it has an extra step in there for validation or something.

file.res

let j = `{"a":1,"b":"2"}`
// let p = Js.Json.parseExn(j)
let p = Js.Json.deserializeUnsafe(j)
Js.log(p)

file.mjs

// Generated by ReScript, PLEASE EDIT WITH CARE
import * as Js_json from "rescript/lib/es6/js_json.js";
var j = "{\"a\":1,\"c\":\"2\"}";
var p = Js_json.deserializeUnsafe(j);
console.log(p);

And Js_json.deserializeUnsafe(j) is just

function deserializeUnsafe(s) {
  return patch(JSON.parse(s));
}

It looks like JSON decoders like decco/spice take a ReScript value and its type and encode it to JSON (a string), which can then be decoded back into a ReScript type with all the ReScript goodness preserved, like variants, etc. I’m not sure how that works, does it use reflection? I have no idea if they do validation too.

But what I’m doing is just pretending the underlying data is already the correct type, as an added bonus of using a binding (since bindings are very gullible about returned types). It’s really just a total fantasy, but because I have total control over the data I don’t see why not. The reason for the post was that I was surprised ReScript figured out what type I wanted without me having to tell it, just from using the binding itself, but I guess ReScript is gonna ReScript.

Ok I’ll stop before I make you as clueless as me on this subject.

In my opinion, there are very few situations where using something like decco or spice is not the best option. I am on my mobile phone, so I can’t give a very good answer, but I have a public REPL with decco and spice set up. I will share it with you with some code examples and what you get on the JS side.

@kswope In a more formalized way, here is my workflow for parsing JSON of a certain type w with nullable fields. I have been using this for the past 6 months with no hiccups.

Critiques of this technique are welcome, since I would like to understand why it should not be used compared to decco/spice. @danielo515, waiting for your REPL with the decco/spice setup for comparison.

I highly appreciate the presence of the Js.Nullable module that helps me work with old REST APIs and convert the HTTP responses to ReScript Records without relying on additional NPM packages.

To pile onto the case for decoding, every other fully typed language, including C#, Java, and Go, requires a decoding step (or “unmarshalling” in Go) to safely convert a JSON string into the runtime’s respective data types.

We have the luxury of not having to do this in the JavaScript runtime, a blessing in my opinion. It’s totally fine that you pretend the underlying data is correct, though, since you do own the data. It’s the same question as whether someone decides to decode data coming from a database they control.

It’s a matter of time vs. safety.

The biggest problem with this setup, compared to using Js.Json or decoding libraries, is that it is missing validation of the incoming data.
I have personally saved hours of debugging by not trusting any external data coming into my application and failing fast when I get something I don’t expect. The same goes for data I own, because there might be a typo, a breaking release, or just a bug. That way there’s no way of ending up in an invalid application state, like "true" instead of true in a bool field.
To prevent this, you either need a shared schema with the backend plus codegen of types, or you pass all the data through decoders.
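
As a tiny illustration of the fail-fast idea (a sketch, not tied to any particular library; the active field is made up):

// Fail fast on a single bool field instead of letting "true" (a string) leak in.
let decodeActive = (json: Js.Json.t): result<bool, string> =>
  switch json->Js.Json.decodeObject->Belt.Option.flatMap(d => d->Js.Dict.get("active")) {
  | Some(field) =>
    switch Js.Json.decodeBoolean(field) {
    | Some(b) => Ok(b)
    | None => Error("Expected a bool for `active`, got: " ++ Js.Json.stringify(field))
    }
  | None => Error("Missing field `active`")
  }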

Being the creator of rescript-struct, I’m a little biased about decco/spice and prefer to have decoding and data mapping in one go, not to mention other benefits. But the convenience of creating a decoder is definitely top-notch. Just by writing:

@decco.decode
type data = {foo: string, num: float, bool: bool}

You can decode any Js.Json.t to the data type with:

json->data_decode

And it’ll generate the following JS code:

function data_decode(v) {
  var dict = Js_json.classify(v);
  if (typeof dict === "number") {
    return Decco.error(undefined, "Not an object", v);
  }
  if (dict.TAG !== /* JSONObject */2) {
    return Decco.error(undefined, "Not an object", v);
  }
  var dict$1 = dict._0;
  var foo = Decco.stringFromJson(Belt_Option.getWithDefault(Js_dict.get(dict$1, "foo"), null));
  if (foo.TAG === /* Ok */0) {
    var num = Decco.floatFromJson(Belt_Option.getWithDefault(Js_dict.get(dict$1, "num"), null));
    if (num.TAG === /* Ok */0) {
      var bool = Decco.boolFromJson(Belt_Option.getWithDefault(Js_dict.get(dict$1, "bool"), null));
      if (bool.TAG === /* Ok */0) {
        return {
                TAG: /* Ok */0,
                _0: {
                  foo: foo._0,
                  num: num._0,
                  bool: bool._0
                }
              };
      }
      var e = bool._0;
      return {
              TAG: /* Error */1,
              _0: {
                path: ".bool" + e.path,
                message: e.message,
                value: e.value
              }
            };
    }
    var e$1 = num._0;
    return {
            TAG: /* Error */1,
            _0: {
              path: ".num" + e$1.path,
              message: e$1.message,
              value: e$1.value
            }
          };
  }
  var e$2 = foo._0;
  return {
          TAG: /* Error */1,
          _0: {
            path: ".foo" + e$2.path,
            message: e$2.message,
            value: e$2.value
          }
        };
}
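
Consuming the generated decoder is then a switch on the result; a minimal sketch, assuming the decco setup above (path and message are the error fields visible in the generated code):

let handle = (json: Js.Json.t) =>
  switch json->data_decode {
  | Ok(data) => Js.log2("decoded foo:", data.foo)
  | Error(e) => Js.log3("decode failed at", e.path, e.message)
  }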

Do you do this for database schemas too, as @dangdennis pointed out? Somebody might change a field name! On the other hand, this could be guarded against by simple tests, which should be there anyway.

Have you considered stripping nulls from the remote source and using optional record fields to handle options automatically? Disclaimer: I haven’t tried it yet in a real stack.

For example, up at the server in JS, all that is needed is the optional “replacer” parameter:

let str = JSON.stringify(data, (k, v) => v ?? undefined)

and in Go you just need to tag the struct fields with omitempty:

type ColorGroup struct {
    ID     int `json:",omitempty"`
    Name   string
    Colors []string
}
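
On the ReScript side, the idea is that a stripped (absent) key simply shows up as None via an optional record field; a sketch, assuming a ReScript version that supports optional record fields and reusing the nestedFolder shape from earlier:

type rec nestedFolder = {
  id: string,
  name: string,
  parentId?: string, // absent key instead of null
  folders: array<nestedFolder>,
}

let logParent = (f: nestedFolder) =>
  switch f.parentId {
  | Some(id) => Js.log2("parentId", id)
  | None => Js.log2("parentId", "None")
  }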

I wrote a whole thread about looking for a type-safe database lib in ReScript. Without codegen or schema inference, decoding from the database is necessary for type safety. This incurs a runtime cost. I’d love to see something like zapatos (TypeScript) for ReScript. Or sqlc for ReScript.

The typical “low-level” non-codegen approach is that each row in the queried results gets decoded to its respective data model (usually an object). It is usually position-based instead of keying off the actual field name. Libraries like node-postgres and others usually do this work of mapping values back to their field names.

Ultimately the codegen approach is best imo. C# and F# have LINQ. Great library.

Okay done talking to myself :joy:

I’m glad you posted an example, because now we have some common ground to work with.
There is nothing wrong with your way of doing things. At the very beginning of my ReScript journey I was in much the same boat, wanting to do everything myself. However, JSON parsing was always frustrating because of the amount of work it requires for even the smallest piece of data, and I was still getting runtime errors and unexpected outcomes, and I still didn’t want to use any ppx or annotation. This made my development of any app that involved JSON (almost all of them) slow, and eventually I just abandoned ReScript for a couple of years.

Your example only parses a very small piece of data, and it is still a lot of code, not only to write but also to maintain! Every time you add a little property you need to write new parsers and update the existing ones. Add something as simple as a nested record and you will have to expand your (already long) example from 30 lines of code to probably 50 or 60.
I’m pretty sure it took you a decent amount of time to write that code (which is nothing bad if you enjoy it, really, but it is a lot of work). Compare that with the 30 seconds it took me to write a spice parser:

@spice
type t = {
  id: int,
  name: string,
  description: option<string>
}

That’s it! That is all I need to do to get not just the same level of parsing safety as the one you wrote manually, but even better.

I actually ran your example, adding these lines:

let w = `{"id":99, "name": "bro", "description": 55 }`->s2w->w2t;
Js.log(w);

Guess what I got on the console?

{ id: 99, name: 'bro', description: 55 }

Yes, a correctly parsed invalid value.

Compare it with the error I get using the parser spice wrote for me:

let w = `{"id":99, "name": "bro", "description": 55 }`->t_decode;
Js.log(w);

Which correctly leads to:

{
  TAG: 1,
  _0: { path: '.description', message: 'Not a string', value: 55 },
  [Symbol(name)]: 'Error'
}

Path, a meaningful error message, and even the original value that is incorrect. All of this is valuable information when you are trying to understand why X failed.
Again, this is just a very small example and things already went wrong. Imagine dozens of fields and nested records. Things can get wild quite fast.
Being able to write JSON decoders and encoders this fast, almost as fast as in plain JS but with the safety and explicit errors of ReScript, is a godsend.
This is the REPL if you want to play with it (reason syntax): Reason Node.js - Replit