Small suggestion: ReScript should not be a devDependency; it should be a normal dependency. It has runtime libraries (like Curry) that are used in consuming applications.
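Concretely, that's just moving it between blocks in package.json (the version below is a placeholder):

```json
{
  "dependencies": {
    "rescript": "^10.0.0"
  },
  "devDependencies": {}
}
```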
Beyond that, 23 minutes seems pretty excessive. What is it spending all that time on, mostly?
That’s a great question, and impossible for me to tell; the npm log prints the usual “blah blah blah uuid@v1 is deprecated” warnings and then just sits there, so I’m not sure where in the npm install drama it’s hanging. Is there a way to do more verbose logging to see?
Case in point: there’s a 20-minute gap between the warning and the “added 954”.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
added 954 packages, and audited 955 packages in 20m
Try disabling npm’s fancy progress bar with npm set progress=false (this tells it to print progress one line at a time), then running in verbose mode with npm ci --verbose, to force it to print more info.
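In a GitLab CI script, that combo might look something like this (the job name and stage are made up):

```yaml
install_deps:
  stage: build
  script:
    # One progress line per event instead of a redrawn bar,
    # so the CI log shows where the time is actually going.
    - npm set progress=false
    # Verbose mode prints each package and lifecycle script as it runs,
    # making a long gap attributable to a specific step.
    - npm ci --verbose
```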
Ah, there we go: it’s running ReScript’s install scripts for 20 minutes. You might be using a build-agent OS for which ReScript doesn’t provide prebuilt binaries? You can try switching to something standard, like ubuntu-latest or similar. Or, if that doesn’t help, your remaining option is to Dockerize the build and use that image for all builds going forward.
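For the runner-image route, a sketch: pin a glibc-based (Debian) Node image in gitlab-ci.yml, since musl-based Alpine images are a common reason install scripts fall back to building from source (the tag here is an assumption):

```yaml
# .gitlab-ci.yml (fragment)
# Debian-based Node image; ReScript's npm package ships prebuilt
# binaries for common glibc platforms, so install scripts stay fast.
image: node:18-bullseye
```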
I’m trying that now. I’m installing it as the last step in the image, globally. My hope is that when my code goes to install ReScript, it’ll be fast “because it’s already installed globally at the same version”… I hope?
Yeah, that makes sense, but why have two dependencies when you can have just one? I assume you’re building a Node app, and don’t need to distribute a lightweight library. So depending on rescript as a normal dep doesn’t really hurt.
Unless you are building a library, in which case ignore my suggestion.
Ok, running into the same speed problem. Do I just remove rescript from package.json now that it’s global, or something? I mean, it should already be on the container; I updated the hash. Blarg…
Like, in Docker I cd’d to /usr/bin and just ran npm i -g rescript; I saw it install globally, which is cool, but for some reason my build using the latest published Docker image hash is attempting to build ReScript again.
The idea is to prepare a Docker image with the npm ci part already done, i.e. rescript already installed, then use that image in the pipeline on each build. Doing npm ci again will then only install the diff of any changed dependencies, and a full clean build should be fairly fast.
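A minimal sketch of such a pre-baked image (the base tag and paths are assumptions, not your actual setup):

```dockerfile
# Debian-based Node image; tag is a placeholder.
FROM node:18-bullseye

WORKDIR /app

# Copy only the manifests so this layer stays cached until deps change.
COPY package.json package-lock.json ./

# Run the expensive install (including ReScript's install scripts)
# once, at image build time, instead of on every pipeline run.
RUN npm ci
```

One caveat: GitLab clones your project into its own /builds/… directory, not /app, so the pipeline’s npm ci won’t see this node_modules directly; it can, however, reuse the image’s warm npm cache, or you can copy the pre-built node_modules into the checkout as a first step.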
My challenge here is directories. Like, in Docker, where do I cd to? I believe GitLab does a git clone and then cd’s to something like /builds/your/gitlab/project/path. My base Docker image won’t know that, and the CI_BUILDS_DIR environment variable seems to be available only in the gitlab-ci.yml file.
My entire CI team uses Alpine, so I’ll see if I can pawn this off on one of the Ops crew. I’ve tried all morning to get node:bullseye to work, but Alpine commands vs. Debian are completely different, and my Google Fu isn’t strong enough to figure out the bugs (like, why is Alpine’s curl -o fine, but Debian is like “wtf is -o?”). I’ll keep you posted.
You don’t need to know where GitLab CI checks out your project. If you do, then you’re probably using absolute paths, and it’s better to replace them with relative ones.
As for curl: you can figure it out by creating a small job whose script runs curl --help, and comparing the output.
Sorry, Omicron + food poisoning had me crushed for 2 days. Still have the brain fog.
Figured it out; stupid trailing \ backslash, lelz. Docker continues to be the worst thing on the planet. However, using it + cache + artifacts: POW, 90 seconds vs. 20 minutes, BOOOYYAAAA
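For anyone following along, the cache + artifacts combo can be sketched like this in gitlab-ci.yml (the image tag, script names, and output paths are assumptions):

```yaml
build:
  image: node:18-bullseye          # placeholder tag
  cache:
    key:
      files:
        - package-lock.json        # invalidate when deps change
    paths:
      - .npm/                      # npm's download cache; npm ci wipes
                                   # node_modules, so cache this instead
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run build                # assumed build script
  artifacts:
    paths:
      - lib/                       # assumed compiler output dir
```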
ReScript in node_modules is 200 megs. That would fly on EC2, but not for a serverless deploy into Lambda, lol. Sorry, back to rescript/std.
Odd thing, too: it’s a 250-meg bundle size for my 3 Lambdas in CI, but locally it’s 80. wat.
Hmm, thinking about this a bit more, it makes sense for the Docker image that actually does the build to be large. It runs in your CI pipeline and caches all the build dependencies (the download + build of ReScript, plus all npm dependency packages).
To really cut down on the output size, it makes sense to have the above pipeline produce a single minified, tree-shaken, bundled JS file. This single file should be pretty easy to deploy anywhere.
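As a sketch, a bundler like esbuild can produce that single file in one step; the entry point and paths here are assumptions about the project layout:

```json
{
  "scripts": {
    "bundle": "esbuild src/Handler.bs.js --bundle --minify --platform=node --outfile=dist/handler.js"
  }
}
```

--platform=node leaves Node’s built-in modules unbundled, and tree shaking drops the unused parts of the runtime, so the zip only carries code that’s actually reachable.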
It’s ok to have a 200 MB Debian Docker image. If it were Alpine, it would be something like 15 MB. That size isn’t related to the JS output; it’s the system and its dependencies.
Naw, I mean the “yourcode.zip” that the Serverless Framework generates from your compiled ReScript JS files + the package.json dependencies (not devDependencies), which it then uploads to S3 so it can say “CloudFormation, deploy that zip”. Lambdas can be up to 250 megs, but they really shouldn’t be above 10 megs unless you’re doing beast work. Since APIs need to be fast, you really want ’em in the kilobyte range, but that’s hard to do with our current tooling. I don’t care about the Docker image size, lol; that works fine.
When I’m feeling better tomorrow, I’ll take a look at what’s actually getting into the zip file locally vs. on the CI server. I reckon it’s my horrible “cache vs. artifacts” skills in the gitlab-ci.yml file, or perhaps npm is installing different things remotely.