
Ah, there it is, the JVM discussion in any WASM thread. I kid.

On topic though, I'll be curious to see how bad it really is. The demo video showed code that ended up looking and behaving like normal Rust code. Maybe you'll end up with some oddities like Rust enums not being supported, but w/e.

At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

But you're right, maybe this will go the way of the JVM and no one will care. Or maybe not. Does it matter?

edit: And I should add, maybe I keep forgetting where my well-supported JVM-based Python <-> Rust or Go <-> Rust bridge is. Maybe someone can remind me, because I've not seen it.



I don't doubt that the ubiquity of WASM environments will result in more software targeting the environment. My contention is with the notion that we'll all be seamlessly mixing different languages.

Far more likely is that WASM influences the evolution of various languages, resulting in homogenization of semantics. C++ is already going in this direction--preferring some proposals over others because of easier interoperability with WASM's constraints. Other languages with irreconcilably distinct semantics, such as Go's goroutines, are likely doomed to be second-class citizens as compared to Rust or C++. Languages like Python and PHP are likely to see either major refactoring of their engines, or else see the rise of alternative implementations that are more performant in WASM environments and offer more seamless data interchange.


Goroutines only require a slightly fat runtime. Haskell uses basically the same mechanism, with C interoperability, and all it takes is starting (and optionally, stopping) the runtime.


It's not having a runtime that's the problem, it's limitations in WASM's control flow constructs: https://github.com/WebAssembly/design/issues/796#issuecommen...

And WASM has this limitation as a consequence of limitations in the V8 and SpiderMonkey engines. See http://troubles.md/posts/why-do-we-need-the-relooper-algorit... and https://news.ycombinator.com/item?id=19997091. And those engines have those limitations because they're a reasonable design tradeoff in the context of executing JavaScript.

I think even if WASM gets a limited goto sufficient to resolve the most problematic performance issues, languages like Go that use more sophisticated control flow constructs (e.g. stackful coroutines) will always be at a disadvantage relative to C or Rust when it comes to WASM, since the absence of various other low-level constructs (e.g. the ability to directly handle page faults) will incur higher relative penalties. The same could be said about Rust's new async/await--WASM will compound the costs of the additional indirect branching in async-compiled functions (not to mention the greater number of function invocations, which is at least 2x even if a function never defers).
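To make the invocation-count point concrete, here's a minimal sketch in Rust: an `async fn` compiles to a state-machine type, so even a future that never awaits costs one call to construct the state machine and a second call to `poll` it. The hand-rolled no-op waker is just scaffolding for the example, not part of the point.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: enough to poll a future that never actually suspends.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Compiles to an anonymous state-machine type implementing Future.
async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    // Call 1: add() only builds the state machine; nothing runs yet.
    let mut fut = Box::pin(add(2, 3));
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Call 2: poll() actually executes the body.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("{}", v), // prints 5
        Poll::Pending => unreachable!(),
    }
}
```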

That said, I don't think anybody uses Go for the insane performance, so as long as the performance impact isn't too severe (and limited goto support will help tremendously) it shouldn't be much of an issue. OTOH, these sorts of tradeoffs are why "native" code will always have a significant place. Especially as the wall we've hit in terms of single-threaded performance begins to be felt more widely.


It's a balance. You probably won't see things like Rust's `Vec` exposed as a wasm API by itself, because at that granularity, language-specific API details are really important, and the actual code you could reuse is relatively small. But at larger scopes, the advantages of mixing languages and introducing sandboxing become more interesting in the balance.


> At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

You just need to manually insert stack growth checks before all function calls in anything that can be called from Go, because goroutines can't live on a normal stack and still be sufficiently small to support the desired Go semantics. Async has similar issues, where the semantics are incompatible, which means that many things go horribly wrong in cross-language calls.

Or you use a lowest common denominator and essentially treat the thing like an RPC, with a thick layer to paper over semantic mismatches.
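A rough illustration of that lowest-common-denominator boundary, sketched in Rust: core wasm calls can only pass numeric scalars (i32/i64/f32/f64), so anything richer crosses as a (pointer, length) pair into linear memory, with a copy on each side -- effectively a tiny RPC. The `Vec<u8>` below is a stand-in for wasm linear memory, not a real runtime API.

```rust
// "Host" side: marshal a string into shared linear memory and hand the
// callee only two scalars -- an offset and a length.
fn host_call(memory: &mut Vec<u8>, s: &str) -> (u32, u32) {
    let ptr = memory.len() as u32;
    memory.extend_from_slice(s.as_bytes());
    (ptr, s.len() as u32)
}

// "Guest" side: unmarshal by copying the bytes back out of linear memory.
fn guest_receive(memory: &[u8], ptr: u32, len: u32) -> String {
    let bytes = &memory[ptr as usize..(ptr + len) as usize];
    String::from_utf8(bytes.to_vec()).unwrap()
}

fn main() {
    let mut linear_memory = Vec::new(); // stand-in for wasm linear memory
    let (ptr, len) = host_call(&mut linear_memory, "hello");
    let received = guest_receive(&linear_memory, ptr, len);
    println!("{}", received); // prints hello
}
```

Every call that carries a string, struct, or enum pays this serialize/copy/deserialize toll, which is exactly the "thick layer" above.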

And that's before we get to things like C++ or D templates, Rust or Lisp macros, and similar. Those are lazy syntactic transformations on source code, and are meaningless across languages. Not to mention that, if unused, they produce no data to import. But, especially in modern C++, they're incredibly important.


I've often wondered for client-side apps if a simple thread-per-goroutine model would work for Go. Sure, you'd lose some of the easy scalability. But I think that's mostly useful for server software and client apps don't usually have thousands of concurrent tasks anyway. You could also lower the overhead some for calling C libraries (which client software does more often).
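A hedged sketch of that model in Rust (standing in for what a hypothetical thread-per-goroutine Go runtime would do): each task gets a full OS thread with an ordinary contiguous stack, so no stack-growth checks or segmented stacks are needed, at the cost of heavier per-task overhead.

```rust
use std::thread;

fn main() {
    // One OS thread per "task": normal C-compatible stacks, so calling into
    // foreign code needs no special bookkeeping -- but spawning thousands of
    // these is far costlier than spawning goroutines.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || i * i))
        .collect();
    let results: Vec<i32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    println!("{:?}", results); // prints [0, 1, 4, 9]
}
```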


I think Rust macros and C++ templates are not lazy wrt execution.


They are lazy wrt compilation. So, unless you use the specialization in C++ or Rust, there's no code to call.

    std::vector<Java.lang.Object> 
Simply doesn't exist for interop code to call.
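The same laziness is easy to see with a Rust declarative macro: it's a purely syntactic expansion, so code exists only at invocation sites -- there is no compiled body for another language to link against.

```rust
// A macro is a compile-time source transformation, not a function:
// if it's never invoked, nothing is emitted into the binary at all.
macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // Each invocation expands into fresh code at its call site:
    let a = square!(3); // expands to 3 * 3
    let b = square!(1.5_f64); // expands to 1.5 * 1.5 -- a distinct "instantiation"
    println!("{} {}", a, b);
}
```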


> At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

That stuff only works if you limit yourself to exclusively run in a wasm runtime. If you want to compile to any other platform you still need C glue code.


If you're distributing an executable (not a library), you can just AOT compile the generated wasm to machine code.

For example, the WasmExplorer already lets you compile C++ -> wasm -> x86 assembly: https://mbebenita.github.io/WasmExplorer/?state=%7B%22option...


I doubt you get the same performance as direct compilation. LLVM IR carries lots of annotations useful to optimizing passes, e.g. aliasing information.


The twist is, you can produce wasm from LLVM, including running the full mid-level optimizer first, which is the part of LLVM where those annotations and aliasing information are most valuable.


If this turns out to be useful then what's stopping us adding similar annotations in a custom section for WebAssembly?



