
You'd still very much need virtual memory to isolate the WASM linear memories of different processes, unless you want to range-check every memory access. If we're dropping linear memory and using the new-age GC WASM stuff, sure.

An exploit of the runtime in such a system would of course be a disaster of the utmost proportions, and to have any chance of decent performance you'd need a very complex (read: exploitable) runtime.



I suspect the underlying assumption here is that each WASM module/program would/could likely exist in its own unikernel on the hypervisor. Which is something I guess you could do since boot and startup times could be pretty minimal. How you would share state between the two, I'm unclear on, though.

The question is: if you have full isolation and separation of the processes etc., why are you bothering with WASM at all?


> if you have full isolation and separation of the processes etc... why are you bothering with the WASM now?

WASM can help with portability.

Any sandbox layer can help with anomaly/exploit/bug detection, accelerating fixes to untrusted code, or a neighboring sandbox layer.

"Phrack: Twenty years of Escaping the Java Sandbox" (2018), https://www.exploit-db.com/papers/45517


Then we must go deeper! Put some WASM in a JVM in the WASM. In an OS. In a hypervisor.


haha, today's shiny network effect attractor is tomorrow's legacy quicksand to be abstracted, emulated or deprecated. The addition and deletion of turtles will continue.

> Put some WASM in a JVM in the WASM. In an OS. In a hypervisor.

Intel TDX comes to mind.




