Safety of Belt methods

Recently, I had to clone a nested array before performing some dangerous mutations. In my naivety, I thought Belt’s copy would do a deep clone. When I noticed that my mutations were leaking into the state of my reducer, I checked the source and saw that copy is just an external binding to Array.prototype.slice (and thus only does a shallow clone).

So what would be the right way to improve this? Should we add a copyDeep method and simply document that copy only does a shallow copy? Or would it be better if the default copy were the deep one?


I’m not sure it’s possible to make a polymorphic/generic deepCopy work in a type-safe way. The “depth” of a nested array is part of its type signature: array<int> vs array<array<int>>, for example.

(Feel free to correct me if I’m wrong.)

But if you know the depth of your array (which you probably should), it’s simple to write a depth-specific deep-copy function:

// copy 2 levels deep
let deepCopy2 = a => Belt.Array.map(a, Belt.Array.copy)
// copy 3 levels deep
let deepCopy3 = a => Belt.Array.map(a, deepCopy2)
// copy 4 levels deep
let deepCopy4 = a => Belt.Array.map(a, deepCopy3)
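
For instance, here’s a quick check (assuming a 2-level array of ints) that mutating the copy leaves the original untouched:

let original = [[1, 2], [3, 4]]
let copied = deepCopy2(original)
// Mutating the copy's inner array does not affect `original`.
let _ = Js.Array2.push(Belt.Array.getExn(copied, 0), 5)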

I agree that a note about this would be a good addition to the documentation, though.


Yeah that’s how I solved it for my case.

I agree a Belt method would probably need to bail out of the type system (or even use %raw) to implement it, but that would be hidden from the user, since the signature would still be array<'a> => array<'a>.
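
For illustration, here is a minimal sketch (not actual Belt code) of what such a raw-backed implementation could look like. Note that it only recurses into nested arrays, so non-array object elements would still be shared:

// The type annotation hides the escape hatch from the caller.
let deepCopy: array<'a> => array<'a> = %raw(`
  function deepCopy(a) {
    return a.map(x => Array.isArray(x) ? deepCopy(x) : x);
  }
`)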

At the end of the day, array is still a mutable type. So if you have an array of arrays, you have an array of mutable values.

One way to solve this could be to introduce an immutableArray type: even if it’s just a regular JS array underneath, you’d only have immutable functions to work with it. You’d have to copy on conversion to/from regular arrays though.
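
A minimal sketch of what that could look like (the module name and API here are hypothetical): the type is abstract, so the only way in or out is a copying conversion, and the exposed operations never mutate:

module ImmutableArray: {
  type t<'a>
  let fromArray: array<'a> => t<'a> // copies on the way in
  let toArray: t<'a> => array<'a> // copies on the way out
  let get: (t<'a>, int) => option<'a>
  let map: (t<'a>, 'a => 'b) => t<'b>
} = {
  type t<'a> = array<'a>
  // Note: these copies are shallow, so deeply nested mutable
  // values would still need the treatment discussed above.
  let fromArray = Belt.Array.copy
  let toArray = Belt.Array.copy
  let get = Belt.Array.get
  let map = Belt.Array.map
}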

As for shallow copies, I think that’s the idiomatic immutable approach. It only becomes a problem when you mix mutable and immutable code (which in practice may happen a lot, of course).
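
For example (assuming purely immutable updates), replacing one row while structurally sharing the rest is safe precisely because nothing mutates the shared rows in place:

let rows = [[1, 2], [3, 4]]
// New outer array; row 0 is replaced, row 1 is shared with `rows`.
let updated = Belt.Array.mapWithIndex(rows, (i, row) =>
  i == 0 ? Belt.Array.concat(row, [5]) : row
)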


I was recently working with some GADT code and thought of the deepCopy feature from this thread. Here’s one way to write a function that deeply copies an array of any depth:

// Use Peano natural numbers to track array depth.
type rec nat<_, _> = Z: nat<'z, 'z> | S(nat<'a, 'z>): nat<array<'a>, 'z>

let rec copy:
  type a z. (array<a>, ~depth: nat<a, z>) => array<a> =
  (a, ~depth) =>
    switch depth {
    | Z => Js.Array2.copy(a)
    | S(depth) => Js.Array2.map(a, a => copy(a, ~depth))
    }

The nat type is used to track how deep the array is. For example, S(S(Z)) type-checks with array<array<'a>>. We can use the function like this:

copy([[1], [2]], ~depth=S(Z))
copy([[[1], [2]], [[3], [4]]], ~depth=S(S(Z)))

Trying to copy deeper than the actual array produces a type error:

copy([1], ~depth=S(S(Z)))
// This has type: nat<array<'a>, 'b>
//  Somewhere wanted: nat<int, 'c>
//  
//  The incompatible parts:
//    array<'a> vs int

However, it won’t stop you from copying less than the maximum depth, so shallow copies are still possible:

copy([[1], [2]], ~depth=Z) // Only copies the first level

It could be possible to enforce the exact depth, but that would require adding more types. array<'a> unifies with array<array<'a>>, so you would need to get rid of the polymorphic 'a. This probably means writing additional GADT constructors for every possible type, e.g. int, string, myCustomRecord, etc. At that point, I think we’ve hit diminishing returns on how useful the function is.
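
To sketch the idea (the constructor set here is purely illustrative), a fully indexed version needs one leaf constructor per concrete element type, which forces the exact depth but stops being generic:

// One constructor per element type; Nested tracks each array level.
type rec exact<_> =
  | Int: exact<int>
  | String: exact<string>
  | Nested(exact<'a>): exact<array<'a>>

let rec copyExact:
  type a. (array<a>, ~shape: exact<a>) => array<a> =
  (a, ~shape) =>
    switch shape {
    | Int => Js.Array2.copy(a)
    | String => Js.Array2.copy(a)
    | Nested(shape) => Js.Array2.map(a, a => copyExact(a, ~shape))
    }

// copyExact([[1]], ~shape=Nested(Int)) type-checks and copies fully.
// copyExact([[1]], ~shape=Int) is a type error: no partial depth allowed.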

In almost any practical setting, I think I would still prefer to write deepCopy2, deepCopy3, etc. rather than copy(~depth=S(Z)) or copy(~depth=S(S(Z))), but I can also imagine the GADT version being useful in some cases.


What about binding to a lenses library, like partial lenses? Not sure if it might be too hard to type, though.
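
A rough sketch of a minimal binding (assuming the partial.lenses npm package), which also hints at why typing it is hard: the abstract optic type carries no information about the element type it focuses on, so nothing ties 'a to 'b below:

type optic // left abstract; composing optics type-safely is the hard part
@module("partial.lenses") external elems: optic = "elems"
@module("partial.lenses")
external modify: (optic, 'a => 'a, 'b) => 'b = "modify"

// Copies the outer array while mapping over its elements.
let copied = modify(elems, (x: array<int>) => x, [[1], [2]])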