Pipes are used to emulate the method-call syntax of object-oriented programming. For example, myStudent.getName() in other languages like Java would be myStudent->getName in ReScript (equivalent to getName(myStudent)). This gives us the readability of OOP without the downside of dragging in a huge class system just to call a function on a piece of data.
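To illustrate (the student type and getName function here are hypothetical, just for the sketch), the pipe is pure sugar over a plain function call:

type student = {name: string}
let getName = (s: student) => s.name

let myStudent = {name: "Ada"}
let a = getName(myStudent) // plain function call
let b = myStudent->getName // pipe syntax, same call after desugaring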
Could the compiler not figure out that since offset is declared as an int, then calling ->toString on it should naturally use the Belt.Int.toString function?
In line with the quoted spirit of the language manual.
You can do open Belt.Int above offset->toString to clean up the actual code.
The compiler needs to be pointed in the right direction though - things need to be in scope for it to be able to infer properly. And that's what open does: bring a module into scope.
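For example (standard Belt, nothing hypothetical here), a file-level open makes the unqualified name resolve:

let s1 = Belt.Int.toString(42) // fully qualified, always works

open Belt.Int // brings Belt.Int's names into scope from here on
let s2 = 42->toString // now resolves to Belt.Int.toString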
Yeah, I know about open. My question is more about whether it is possible to automatically bring modules into scope, either based on the type of the variable, as mentioned, or alternatively based on naming convention:
roomStore->RoomStore.create(room)
// could turn into:
roomStore->create(room)
// since the variable is named the same as the module
My 2c: I do agree with you that code can end up looking quite cluttered, and that it can be annoying to have to open multiple things. However, I think the trade-off is good in this case. I personally think it'd be quite confusing to automatically bring things into scope; on the contrary, I think the explicitness is preferable here compared to the alternatives.
My layman's opinion is also that it'd be really hard to build something robust for that beyond the most trivial cases. Inferring module names from variable names is also not desirable imo; it'd tempt developers into naming variables according to what type they are, rather than what function they have in the code. And it'd only work for the first binding/a single binding.
Also worth noting: I'm guessing that one of the reasons the compiler can stay fast is that it doesn't do potentially expensive lookups on modules not explicitly brought into scope. Imagine a solution of "look for a module with a function called X that operates on the type Y" - the compiler would need to do that lookup for every function call that isn't explicitly annotated with a module, and it'd potentially need to walk through all defined top-level modules in the project. That most likely won't scale well.
I think the current autocomplete behavior for pipes is along the lines of whatâs desirable - a heuristic for the typename t itself.
This kind of inference is called "modular implicits", and it's currently being researched for the OCaml compiler. My understanding is that it's a lot more complicated than it looks (for example, multiple modules could have the same types and functions), and it's several years away from being added to the language. It's an interesting area of research, but I wouldn't hold my breath on ReScript adding it any time soon.
I personally don't think explicitness is a bad thing, and it probably makes the code more readable to see exactly which module is being used. If you're worried about verbosity, I usually alias modules with shorter names:
module I = Belt.Int
let offset = offset->I.toString
Based on that, the annotation doesn't actually say "this is declared as an int"; it says "infer the type yourself, and if the inferred type doesn't equal int, then throw a type error".
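A small sketch of that reading of annotations:

let offset: int = 1 + 2 // compiler infers int on its own; the annotation check passes
// let bad: int = "oops" // would fail the check: string is not compatible with int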
The compiler is actually doing very little work, as zth pointed out. Once it has all the information it needs, e.g. which modules the names are coming from, it just runs the type inference algorithm and figures everything out.
the annotation doesn't actually say "this is declared as an int", it says "infer the type yourself and if the inferred type doesn't equal int, then throw a type error".
Yes, but the compiler still knows that it was declared as an int, because it independently inferred it, checked, and didn't throw a type error. So at the time offset is later referenced, the compiler would have this knowledge. Right?
Could it do all that? Perhaps. But it would likely take massive changes to the internal design of the compiler, and I don't really see a big appetite for that. If you look at the history of the ReScript compiler, it has been a series of small incremental changes to get better and better JavaScript output, while sticking to the philosophy that explicit is better than implicit and that fast compilation is a core requirement. Can a lot of things be done to ReScript to make it more like TypeScript? Sure, and people ask about that from time to time. But ReScript has its own design philosophy, so I would try to refocus on building and shipping cool stuff with it, rather than posing a bunch of hypotheticals.
I'm not posing a hypothetical. I am trying to understand how the type inference actually works…
The explanation you gave didn't make sense to me, because the result should logically be the same whether annotations inform the compiler or are simply a point of reference against which the compiler verifies its own inference.
As mentioned before, the compiler doesn't track which functions are available in all modules that fit the expected input and output types during type inference. That would introduce a massive amount of implicit search, and a lot of ambiguity if multiple modules had functions of the given types available. Which one should the compiler pick? It avoids all that complexity and just asks the developer to explicitly pick the exact function. This also makes the code a lot more readable, because you always know where a function is coming from.
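A sketch of that ambiguity, with two hypothetical modules that both expose a toString of type int => string:

module Hex = {
  let toString = n => "0x" ++ Js.Int.toStringWithRadix(n, ~radix=16)
}
module Decimal = {
  let toString = n => Js.Int.toString(n)
}

let offset = 255
// With implicit lookup, offset->toString would be ambiguous: which toString?
// Explicit qualification says exactly which one runs:
let hex = offset->Hex.toString // "0xff"
let dec = offset->Decimal.toString // "255"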
You can check how slow the Scala compiler is. Part of that is for exactly this reason: it has to search for implicit conversions in the environment.
Well, sure. Though I don't know what the actual capabilities or limitations of the compiler are (as you may). What I meant by saying I'm not posing a hypothetical is that I'm not asking these questions just for the sake of "posing a bunch of hypotheticals", as I got the impression you implied. I am asking because I am evaluating the capability of ReScript, to see if I want to use it for "building and shipping cool stuff", which in my case means a big multi-year project that is intended to go into production. (I am curious about these things because I happen to care a great deal about the readability of code, especially as it scales to hundreds of lines.)
the compiler doesn't track which functions are available in all modules that fit the expected input and output types during type inference. That would introduce a massive amount of implicit search. And a lot of ambiguity if multiple modules had functions of the given types available. Which one should the compiler pick?
I imagine the compiler could keep an index and do a lookup in O(1) time, so it wouldn't involve a massive search during parsing… On name conflicts, it could give a compile-time error, so the programmer could include just enough code to disambiguate?
But keeping and updating an index is not exactly free, right? Have you ever over-indexed a database table and slowed down a query? Same problem.
It also massively complicates type inference and checking when you can't just assume the final type of an expression is exactly the same as its apparent type when inference begins, and instead you have to constantly do bookkeeping about what the appropriate type might be depending on what functions are being called on it.
I understand you're trying to evaluate ReScript for a real-world project, but I honestly think talking about large internal changes to the compiler is not a useful way to do it. My advice would be to try out an actual, low-risk project with it and then do an analysis of how it went. Plenty of people have done it that way, and I think it's been a useful method.