Hi there! I asked this question a long time ago on the Discord, and it was mentioned that the team wanted to write a blog post about it. As someone who is very interested in parsers and dev tooling, I’m curious why ReScript chose the approach it did for the syntax parser, as opposed to parser combinators (something like https://github.com/inhabitedtype/angstrom)?
Even before the post, there’s a little bit here: https://github.com/rescript-lang/syntax#why
I can only answer for myself:
Parser tooling such as PEGs, combinators, LALR(1), LR(1), GLR, etc. exists to make it easy to define a grammar and get the ball rolling. I don’t share Jonathan Blow’s dismissal of such tools: they are often simpler to write with, and far easier to maintain when large changes are being made to the language you are working on.
However, they often fall very short as soon as you need to do error reporting. The most important programmer is often the newcomer to a language: they don’t know the grammar or syntax rules, so they are likely to make many mistakes, and they are also likely to have a harder time understanding the error messages. That by itself should be enough reason to get more serious about parsing if the tooling doesn’t support good diagnostics.
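To make the error-reporting point concrete, here is a toy sketch (Python for brevity; this has nothing to do with the actual ReScript parser, and all names are invented). In a hand-rolled recursive-descent parser you own every failure branch, so each one can carry a message phrased for the likely mistake, rather than the generic “unexpected token” a table-driven parser tends to emit:

```python
# Toy hand-rolled parser for the grammar '(' digit ')' — illustrative only.
# Because we write the failure branches ourselves, each can point at the
# real cause instead of just the token where the parse got stuck.

def parse_paren_digit(src: str, pos: int = 0):
    """Parse '(' digit ')'; return (value, next_pos) or raise SyntaxError."""
    if pos >= len(src) or src[pos] != "(":
        raise SyntaxError(f"Expected '(' at position {pos}.")
    if pos + 1 >= len(src) or not src[pos + 1].isdigit():
        raise SyntaxError(f"Expected a digit after '(' at position {pos + 1}.")
    if pos + 2 >= len(src) or src[pos + 2] != ")":
        # A generated parser would typically report "unexpected end of input";
        # here we can point back at the offending '(' instead.
        raise SyntaxError(
            f"The '(' at position {pos} is never closed. Did you forget a ')'?")
    return int(src[pos + 1]), pos + 3

print(parse_paren_digit("(7)"))  # (7, 3)

try:
    parse_paren_digit("(7")
except SyntaxError as e:
    print(e)  # The '(' at position 0 is never closed. Did you forget a ')'?
```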
The other major reason is that in many cases you want to parse a superset of the accepted language. If you look at programming as a dialogue between a compiler and a programmer, you want early feedback in the editor of your choice. That argues for being able to parse programs containing obvious syntax errors, so you can report more problems in the code base in one pass.
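One common way to “parse a superset” is panic-mode recovery: on an error, record a diagnostic, substitute a placeholder node, resynchronize, and keep going, so a single pass reports every problem rather than just the first. A hedged Python sketch of the idea (the grammar and all names are invented for illustration):

```python
# Sketch of error recovery over a comma-separated list of integers.
# Instead of stopping at the first bad item, we record a diagnostic,
# insert an "error node" (None), and continue parsing — so one pass
# over a broken input reports every bad item, which is what an editor
# integration wants.

def parse_int_list(tokens):
    items, errors = [], []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.isdigit():
            items.append(int(tok))
            i += 1
        else:
            errors.append(f"Expected an integer, found {tok!r} at item {len(items)}.")
            items.append(None)  # placeholder "error node" keeps the tree shape
            i += 1              # consume the bad token and keep parsing
        if i < len(tokens):
            if tokens[i] == ",":
                i += 1
            else:
                errors.append(f"Expected ',' before {tokens[i]!r}.")
                # don't consume: the next loop iteration resynchronizes
    return items, errors

print(parse_int_list(["1", ",", "x", ",", "3"]))
# ([1, None, 3], ["Expected an integer, found 'x' at item 1."])
```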
Once a language starts being more set in stone, the changes tend to be smaller in nature, so you can hand-roll the lexer/parser phase without too much worry, and you can start working on the ergonomics of the parser’s errors.
As for parser combinators, remember that they always provide support for a certain language class. “Classic” combinator parsers target LL(1), and that class covers only a handful of languages. If you are parsing, e.g., Pascal, Oberon, or GraphQL, you are in luck, because those languages either fit directly into the LL(1) category or almost do. Most “modern” languages are at least LALR(1) or LR(1), so you need special support in your parser combinators for those.
Yeah, we’re gonna write up a blog post about this soon, now that the parser’s been tested in the real world. Your points about error messages and language complexity are spot on; those two constitute our major motivation. The ROI also drops very quickly the moment you start trying to extend a parser tool; you might as well roll your own. Like jblow said, a parser is just another program. Our entire community could do with more of that mentality.
Note that all of the mainstream Turing-complete languages we’ve looked into use a hand-rolled parser instead of parser generators/combinators. From our prior experience, the latter indeed fall short. Somewhat interesting to play around with and to think about, but again, see all the real-world examples.
Not to mention that our current parser has no dependencies: one little self-contained repo we can carry everywhere, nothing to fork, nothing to extend. Another general software philosophy we’ve tried to push in the community.
We also get important control over the performance and memory characteristics with a hand-rolled parser. This has proven to be critical. More on this in the upcoming blog post.