ZFS de-duplication will eat all of the RAM you can throw at it.
Otherwise, it is basically the SLUB memory block allocator that the Linux kernel used for a while. So yes, it can run on watchOS-level amounts of RAM.
ZFS data deduplication does not require much more RAM than the non-deduplicated case. Performance will depend heavily on IOPS when the DDT entries are not in cache, but the system will still run, just slowly, even with minuscule RAM.
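To make "the DDT in cache" concrete, here is a rough back-of-the-envelope sizing sketch. The ~320 bytes per in-core DDT entry and the 128 KiB average block size are commonly quoted ballpark assumptions, not exact figures; real pools vary with recordsize and dedup ratio.

```python
# Rough DDT sizing. The ~320 bytes per in-core entry and the
# 128 KiB average block size are ballpark assumptions, not exact
# figures from the ZFS source.

def ddt_ram_estimate(pool_bytes, avg_block=128 * 1024, entry_bytes=320):
    """Approximate RAM needed to keep the whole DDT cached."""
    n_blocks = pool_bytes // avg_block  # unique blocks, worst case
    return n_blocks * entry_bytes

# A 2 TiB pool of 128 KiB blocks -> roughly 5 GiB of DDT.
est = ddt_ram_estimate(2 * 1024**4)
print(f"{est / 1024**3:.1f} GiB")  # -> 5.0 GiB
```

The point of the thread stands either way: the pool imports and runs with far less RAM than this, it just has to go to disk for the misses.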
Kernel memory on the platforms where ZFS runs is not subject to swap, so something else happened on that system. The code itself is currently somewhat bad at freeing memory efficiently due to the use of SLAB allocation. A single long lived object in each slab will keep it from being freed. That will change later this year with the ABD work that will switch ZFS from slab-based buffers to lists of pages.
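The slab-pinning problem described above can be sketched with a toy model (hypothetical, not ZFS code): objects are carved out of fixed-size slabs, and a slab can only be returned to the OS when every object in it is free, so one long-lived object per slab pins the whole slab.

```python
# Toy model of slab fragmentation (not ZFS code): a slab can only be
# freed when every object in it is free, so one long-lived object per
# slab pins the entire slab.

OBJS_PER_SLAB = 64
OBJ_SIZE = 16 * 1024  # 16 KiB objects

def resident_bytes(live_objects):
    """Bytes held from the OS: whole slabs containing any live object."""
    pinned_slabs = {obj // OBJS_PER_SLAB for obj in live_objects}
    return len(pinned_slabs) * OBJS_PER_SLAB * OBJ_SIZE

# Allocate 64 slabs' worth of objects, then free all but one per slab:
all_objs = range(64 * OBJS_PER_SLAB)
survivors = [slab * OBJS_PER_SLAB for slab in range(64)]  # 1 of 64 live

print(resident_bytes(all_objs) == resident_bytes(survivors))  # True:
# ~1.6% of objects are live, yet 100% of slab memory is still pinned.
```

Per-page buffers (the ABD approach mentioned above) avoid this because each page can be released independently.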
If dedup is off and the max ARC size is limited, it will use little memory (e.g. 512 MB of RAM for a 2x2TB RAID1 pool). I can say that from my own experience; I tried both approaches.
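On Linux, the knob that caps the ARC like this is the `zfs_arc_max` module parameter. A sketch, assuming the usual ZFS-on-Linux sysfs path; adjust the byte value to taste:

```shell
# Cap ARC at 512 MiB at runtime (value in bytes):
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots:
echo "options zfs zfs_arc_max=536870912" > /etc/modprobe.d/zfs.conf
```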
I probably should clarify that the system can definitely run unacceptably slowly when deduplication is used and memory is not sufficient for the DDT to be cached. My point is that saying RAM is *needed* implies the software will not run at all, which is not true here.
Or when the cache is cold. It REALLY hurts to reboot while a deferred destroy on a big deduplicated snapshot is in progress. No import today for you!
Well, unless your medium has no seek penalty, which is what hurts with deduplication. Dedup on SSDs is pretty much OK, as long as your checksum performs reasonably (skein is reasonable; sha256 is not).
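A quick way to get a feel for checksum cost is to time hashes over ZFS-record-sized blocks. Python's hashlib has no Skein, so blake2b stands in here as an example of a fast modern hash; the absolute numbers are machine-dependent and illustrative only.

```python
# Rough checksum throughput over 128 KiB blocks. hashlib has no Skein,
# so blake2b is used as a stand-in fast modern hash; numbers are
# illustrative, not a ZFS benchmark.
import hashlib
import os
import time

def mb_per_s(name, data, rounds=200):
    """Approximate MB/s hashing `data` with the named hashlib algorithm."""
    hasher = getattr(hashlib, name)
    start = time.perf_counter()
    for _ in range(rounds):
        hasher(data).digest()
    elapsed = time.perf_counter() - start
    return len(data) * rounds / elapsed / 1e6

block = os.urandom(128 * 1024)  # one ZFS-sized record
for algo in ("sha256", "sha512", "blake2b"):
    print(f"{algo:8s} {mb_per_s(algo, block):8.0f} MB/s")
```

On most 64-bit CPUs without SHA extensions, sha512 and blake2b come out ahead of sha256 per byte, which is the same shape of result as the skein-vs-sha256 point above.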
DDTs that fit inside a no-seek-penalty L2 don't hurt that much either, and big DDTs on spinny-disk pools are acceptable with persistent L2ARC. It's risky, though: if the L2 device fails, especially at import, you can be left with a big, highly deduplicated pool that isn't technically broken but is fundamentally useless, if not outright harmful, to the system it's imported by (or ESPECIALLY attempting to be imported by). "No returns from zpool(1) or zfs(1) commands for you today!"
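For reference, the setup being described looks something like this (pool and device names are examples; `l2arc_rebuild_enabled` is the OpenZFS 2.0+ persistent-L2ARC knob and is on by default, shown explicitly for illustration):

```shell
# Attach a fast cache (L2ARC) device to a spinning-disk pool:
zpool add tank cache /dev/nvme0n1

# Persistent L2ARC (OpenZFS 2.0+): rebuild L2ARC headers after a
# reboot instead of starting with a cold cache.
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
```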
When eventually openzfs can pin datasets and DDTs to specific vdevs (notably ones made out of no-seek-penalty devices), heavy deduplication on big spinny disk pools should be usable and reliable.
Until then, "well, technically, even if you have only ARC and it's very small, it will work, just slowly," while correct in the normal case, unfortunately hides some of the most frustrating downsides when things go wrong.