Exactly, arrays are heavily optimised by JS engines and are processed very efficiently on modern CPUs. A singly linked list is not “bad”, and it has its use cases. Just don’t use it if you need to store bulk data, like all the enemies in your game, a list of all cities, all animations in your app, etc. Linked lists are very expensive (memory-wise) for that use case.
Lists are more for intermediate computations where you don’t know ahead of time how big they’ll be - not for storing data. You should definitely use arrays here!
And I put the code through terser, separating the two variables into different files:
Top level list: 4257 bytes
Top level array: 4080 bytes
Lists compile to literal objects with hd and tl fields, which is only a few more bytes per element than an array literal. That alone doesn’t explain such a huge size difference in this specific isolated example, so something else may be going on here.
There are valid reasons for using lists, they are immutable for one, so they are a great guarantee if your list of destinations is constant and you don’t want any code modifying it at runtime.
That is correct: whitespace takes bytes, and the list version indents the generated code more deeply, so it carries more whitespace in the JS output. That is why I “put the code through terser” in my comparison.
Sorry if it wasn’t clear: terser is a JS minifier (the successor to UglifyJS) which you should always use for your production output, combined with compression like gzip. See https://try.terser.org/
An advantage of arrays over lists that hasn’t been mentioned yet is cross-wire communication: deserialisation is much faster, i.e., JSON.parse handles arrays much faster than nested objects.
Searching arrays is always faster, because the data is usually laid out contiguously in memory (and V8 is optimised for it). Lists are objects holding pointers to other objects scattered around the heap.
The speed advantage of reading arrays far outweighs the cost of creating them (which happens a lot when you treat them as immutable).
I still use lists, but only when my recursive algorithm simply can’t be expressed with an array (which isn’t often anymore).
Generally speaking, an array of 1000 items is tiny for modern computers, and I don’t think it will be a problem with either version.
Performance is always dependent on how often the code is called and how much work there is, and making blind assumptions is usually cargo culting and a waste of time. Always performance test your code with real workloads under real conditions.
For example, here is looking up 1000 ids in arrays and lists, 1000 times, on my computer running Node.js:
Pretty fast, no? I stand by what I said, “Always performance test your code with real workloads under real conditions”, with the addendum that “JITs are amazing and very hard to understand”.
Just FYI for those who want to run benchmark tests within Node.js…
I recently released a relatively complete bindings library for Benchmark.js. It’s called rescript-benchmarkjs, and you can find it here:
It does add a few helper functions around parts of the Benchmark.js API, mostly for type safety and ergonomics, but I was very careful to make sure that nothing interferes with the code under test.
I encourage people to use it for empirical performance tests like this. And obviously if you uncover any issues with the library, just let me know.
To be honest micro-benchmarks like this on a JIT don’t always make sense.
For example, I tried to use Array.getByU, thinking that uncurrying the lambda might make it faster, and here are the results I got on my machine with Node v14.15.4: