I wonder whether the ReScript compiler developers use automated compiler-testing techniques such as Csmith for C or LangFuzz for JavaScript. Since ReScript is relatively young, I suspect such fuzzing techniques would have a good chance of revealing compiler bugs. I'm currently studying software testing, so I've been curious: do the developers see a need for systematic, automated testing of the ReScript compiler?
Any comments will be appreciated.
Thanks in advance
Can you crash the parser?
That area is already tackled by static analysis (using https://github.com/rescript-association/reanalyze). No unsafe code is used, so the remaining issue is thrown exceptions. The analysis reports uses of unchecked functions that might raise exceptions. That said, the recent integration of the parser into the editor extension, which effectively uses users as “fuzz testers”, revealed one case of a crash (an unhandled exception) where the source code was explicitly silencing the static analysis.
The other area handled via static analysis is infinite loops, which were quite common before the termination analysis was turned on.
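As a language-neutral illustration of the exception analysis described above (reanalyze itself works on OCaml/ReScript code; everything below, including the set of "raising" functions, is a hypothetical toy), here is a sketch of the core idea: report calls to functions known to raise when they are not wrapped in a handler.

```python
import ast

# Toy sketch of exception analysis, loosely mirroring what reanalyze does
# for OCaml/ReScript. Hypothetical: a fixed set of "raising" functions.
RAISES = {"int", "open"}  # int("x") raises ValueError, open() raises OSError

def unchecked_raises(source: str) -> list:
    """Return line numbers of calls to raising functions outside try/except."""
    tree = ast.parse(source)
    # First pass: mark every node that lives under a try/except as "guarded".
    guarded = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Try):
            for inner in ast.walk(node):
                guarded.add(id(inner))
    # Second pass: report unguarded calls to known-raising functions.
    report = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RAISES
                and id(node) not in guarded):
            report.append(node.lineno)
    return report
```

For example, `unchecked_raises("x = int(a)\n")` flags line 1, while the same call wrapped in `try/except ValueError` is not reported. The real analysis is of course far more precise (it tracks which exceptions each function can raise, across modules), but the report/silence structure is the same.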
Can you crash the editor extension?
The editor-extension code is currently not as robust as the parser's, and there have been, and likely still are, cases where it can crash (raise exceptions) on certain inputs.
It's interesting that there was a corner case that made the parser malfunction. I have a question, as I'm a little confused: does that mean the ReScript parser crashed with an unexpected exception that reanalyze didn't report? What confuses me is that the ReScript parser is an OCaml program, while reanalyze seems to target ReScript programs.
There’s an explicit annotation [@doesNotRaise] that is used to silence the analyser. Either the problematic case was missed from the beginning, or some small change introduced it later.
In any case, when something goes wrong, there are only a few spots to inspect.
Thank you for your kind answer; that concrete example is very helpful for me. Since complex semantic restrictions (e.g. definition before use) aren't needed to construct a parser input, this task seems like a good starting point. Even though I'm not going to do it right now, as I'm working on an urgent project, I'll post here if I make some progress.
There’s no rush. Definitely interested in knowing more at some point.
Summarising the context.
Fuzzing discovered a new infinite loop in the parser, which opened up a new category of loops not previously considered: cases where the parser keeps reading Eof over and over, e.g. during parser recovery, and never terminates.
With that info in mind, it was possible to adapt the use of the analysis and discover new infinite loops.
It would be interesting to see what could be found when pointing the fuzzer towards that specific category of loops. There’s a PR that turns them into crashes, and that’s normally the thing that helps fuzzers.
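To make the bug class concrete, here is a minimal sketch of the Eof-loop scenario and of the "turn loops into crashes" trick mentioned above. All names are hypothetical (the real parser is OCaml); the point is only the shape of the bug and the guard.

```python
# Sketch of the Eof-loop bug class: during error recovery the parser skips
# tokens looking for a synchronisation point, but Eof is "sticky" (next()
# never advances past it), so a recovery loop that does not special-case
# Eof spins forever. Checks that fail loudly turn the hang into a crash,
# which is what fuzzers are good at detecting.

EOF = "Eof"

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens + [EOF]
        self.pos = 0

    def peek(self):
        return self.tokens[min(self.pos, len(self.tokens) - 1)]

    def next(self):
        # Eof is sticky: never advance past the last token.
        if self.pos < len(self.tokens) - 1:
            self.pos += 1

    def recover(self, sync_tokens):
        """Skip tokens until a synchronisation point is found."""
        while self.peek() not in sync_tokens:
            before = self.pos
            if self.peek() == EOF:
                # Without this guard the loop below never terminates.
                raise RuntimeError("parser recovery reached Eof")
            self.next()
            assert self.pos > before, "no progress during recovery"
```

With the guard, `Parser(["+", "*"]).recover({";"})` raises instead of hanging, giving the fuzzer an observable failure; `Parser(["+", ";"]).recover({";"})` terminates normally at the `;`.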
@cristianoc Thank you for your opinion! As far as I know, AFL++, an evolutionary fuzzer, guides testing by adding a mutated input to its seed pool when that input turns out to be useful, and the criterion for usefulness is (approximate) code coverage, which is generally applicable to any software. For now, I'm not sure whether AFL++ supports other (or custom) feedback strategies.
By the way, I just posted a tutorial on the work I did to find bugs in the ReScript parser with AFL++. As I mentioned before, I just followed the ordinary procedure, so the instructions shouldn't be too complicated. Please leave a comment if you find any faults or have questions.