
I'm interested in understanding why you are "ajax[ing] in > 200 source files" in development mode. That makes for a pretty poor developer experience IMHO; I've found a much better workflow in setting up a single concatenated, unminified bundle for development using grunt-concat-sourcemap [1] to provide source maps that Chrome Developer Tools can browse.

This gives you the snappy page loads you'd expect with a single script element on the page, and still lets you debug in your separate source files thanks to the source maps. The only difference between JavaScript in development and production should be the minification step.

[1] https://github.com/kozy4324/grunt-concat-sourcemap
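A minimal Gruntfile sketch of that setup might look like the following (task and option names as I understand grunt-concat-sourcemap's docs; the src/ globs and output paths are placeholders):

```javascript
// Gruntfile.js -- dev bundle: one unminified file plus a source map,
// rebuilt on change. Assumes grunt-concat-sourcemap and
// grunt-contrib-watch are installed; all paths are placeholders.
module.exports = function (grunt) {
  grunt.initConfig({
    concat_sourcemap: {
      dev: {
        files: {
          // Emits build/bundle.js and build/bundle.js.map,
          // which Chrome DevTools uses to show the original files.
          'build/bundle.js': ['src/**/*.js']
        }
      }
    },
    watch: {
      scripts: {
        files: ['src/**/*.js'],
        tasks: ['concat_sourcemap']
      }
    }
  });

  grunt.loadNpmTasks('grunt-concat-sourcemap');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['concat_sourcemap', 'watch']);
};
```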



With requirejs, I run into the `mismatched anonymous define() module` issue when using concat-sourcemap. The "correct" way around it is to use r.js, but it takes about 2.5s to compile that way, which happens to be more than it takes for ajax to work its magic locally.
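For anyone who hasn't hit it: that error typically appears because anonymous define() calls rely on the loader inferring a module id from the script URL, which is impossible once many modules share one concatenated file. A toy AMD-style registry (a sketch, not requirejs itself) shows the difference:

```javascript
// Minimal sketch of why anonymous define() breaks under concatenation.
// A toy registry stands in for requirejs here.
const registry = {};
function define(name, deps, factory) {
  if (typeof name !== 'string') {
    // Anonymous define: the loader would normally infer the id from the
    // script URL -- impossible once modules are concatenated together.
    throw new Error('mismatched anonymous define() module');
  }
  registry[name] = { deps, factory };
}

// A named define survives concatenation because the id travels with the
// code. (r.js inserts these names automatically at build time, which is
// why the "correct" workaround is to run r.js.)
define('app/greeter', [], function () {
  return { greet: (who) => 'hello ' + who };
});

const greeter = registry['app/greeter'].factory();
console.log(greeter.greet('requirejs')); // prints "hello requirejs"
```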

Running SPDY locally helps a lot. Even with all those files I hit DOMReady at about 2.7s with no concatenation.


I wouldn't choose requirejs on a project where I was making the architecture decisions, for the reasons mentioned here [1], but the current project I'm on was set up similarly and loads over 350 .js and .tpl files via AJAX in development mode, pushing DOMReady to a staggering 7.5s. (This is inside of Rails 3.2 using the requirejs gem.)

Every time I've encountered requirejs in an application this has been representative of my experience with the tool, and the pain of waiting that long on every page reload simply doesn't seem worth it to me. There's also a mismatch between async-loading assets in development and sync-loading them in production, which I've seen cause bugs that show up in one environment but not the other.

A couple of questions for you: do you think the r.js optimizer taking 2.5s to compile is related to the complexity of the dependency tree in your application or just the number of files being loaded? Also, considering the previously mentioned mismatch between dev/prod and async loading, do you think it is appropriate to use something like SPDY to obviate the pain of a lengthy DOMReady event in development?

[1] - http://searls.testdouble.com/posts/2013-06-16-unrequired-lov...


Yes, I've run into issues with things running differently in an async environment (dev) and a sync environment (prod). To mitigate this I now fire events at key points in initialization and wait for those events before continuing. That has solved my problem so far.

Using r.js in development isn't the worst idea. It's worth seeing how long it takes in order to make that decision. Compiling tpls is much faster (grunt-contrib-jst) and adding that to your grunt watch & including it directly is a good way to save time. I think it takes a long time on my end due to the complexity of the dependencies. I only include exactly what each module needs so some dependencies may be as many as 6 levels deep, or more (haven't really checked).

SPDY makes a big difference for me (big enough to ignore the problem for now) and I don't mind using a self-signed cert in dev.

EDIT: I hadn't been compiling tpls using JST in dev until I wrote this post - a great side effect is, it actually shows me now where the errors in my tpls are! Previously any tpl's stack terminated at the code that ajaxed in the tpl. This is far better for debugging and brought my DOMReady time down to about 1.75s.
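For reference, the JST precompilation step mentioned above looks roughly like this in a Gruntfile (option names as I understand grunt-contrib-jst; the namespace and paths are placeholders):

```javascript
// Gruntfile excerpt -- precompile .tpl files into a single JST namespace,
// so templates are plain functions in dev instead of ajax'd-in strings
// (and template errors get real stack traces, as noted above).
module.exports = function (grunt) {
  grunt.initConfig({
    jst: {
      compile: {
        options: {
          namespace: 'JST'               // window.JST['app/foo.tpl']
        },
        files: {
          'build/templates.js': ['app/**/*.tpl']
        }
      }
    },
    watch: {
      templates: {
        files: ['app/**/*.tpl'],
        tasks: ['jst']                   // recompile on change
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jst');
  grunt.loadNpmTasks('grunt-contrib-watch');
};
```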


One solution would be to stop anonymously defining modules... or...

Instead of having >200 files that need to be compiled with r.js every time, what about compiling them to intermediary builds? Abstract your code into some bigger modules, wrap them up in a little bow with a nice interface, consider that part of the code "solved", and focus only on what is currently changing or being built. Your build process should reflect this!

I've been doing this on larger projects with RequireJS and r.js and my build times are very short... it just builds from maybe 15-20 files, where a few of those files are the result of some other r.js build.
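As a sketch of what those two r.js build profiles might look like (module names and paths are hypothetical; each `({ ... })` would live in its own profile file passed to `r.js -o`):

```javascript
// build-vendor.js -- roll a stable, "solved" subtree into one
// intermediary bundle. Run rarely, only when that subtree changes.
({
  baseUrl: 'src',
  name: 'vendor/index',        // entry point of the solved subtree
  out: 'build/vendor.js',
  optimize: 'none'             // leave minification to the prod build
})

// build-app.js -- the fast day-to-day profile: trace only the 15-20
// files that actually change, excluding the prebuilt bundle.
({
  baseUrl: 'src',
  name: 'app/main',
  exclude: ['vendor/index'],   // don't re-trace the intermediary build
  out: 'build/app.js',
  optimize: 'none'
})
```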

Now, should r.js possibly do things to better manage this sort of approach? You bet! There are many mature build environments that have a similar approach. Maybe r.js needs some sort of concept of "linking"?

Personally, I don't mind having to manage intermediary builds, because... that intermediary build is making something that I can use in OTHER projects... I haven't really come across a situation where pure business logic is dominating an application. Almost everything that a program does can be abstracted out and reused! I'm sure people out there can provide plenty of examples to the contrary, and I'd love to hear those!


I just started using Browserify with browserify-middleware and coffeeify, and it's fantastic. No build tool needed (even in production, if you set up caching).
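The wiring for that is roughly the following (a sketch based on browserify-middleware's README; route and file paths are placeholders):

```javascript
// server.js -- bundle on demand, no separate build step.
// Assumes express, browserify-middleware, and coffeeify are installed.
var express = require('express');
var browserify = require('browserify-middleware');

var app = express();

// Each request to /js/app.js bundles the client entry point on the fly;
// coffeeify handles the CoffeeScript, and the middleware caches results
// between requests.
app.get('/js/app.js', browserify('./client/app.coffee', {
  transform: ['coffeeify']
}));

app.listen(3000);
```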


Browserify is wonderful. I had trouble with browserify-middleware on nodejs, because it took so long to load -- several seconds, compared with under 1 second using the browserify "binary" tool. So I paired the browserify binary with livereload and grunt-contrib-watch, and it's now a very quick process every time I change the code.
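One way to wire the browserify binary into grunt-contrib-watch with livereload is to shell out to it on each change (grunt-shell is one option; the command and paths here are placeholders):

```javascript
// Gruntfile excerpt -- rebuild with the browserify binary on change,
// then let livereload refresh the page once the bundle is written.
module.exports = function (grunt) {
  grunt.initConfig({
    shell: {
      browserify: {
        command: 'browserify -t coffeeify client/app.coffee -o build/bundle.js'
      }
    },
    watch: {
      scripts: {
        files: ['client/**/*.coffee'],
        tasks: ['shell:browserify'],
        options: { livereload: true }  // reload after each rebuild
      }
    }
  });

  grunt.loadNpmTasks('grunt-shell');
  grunt.loadNpmTasks('grunt-contrib-watch');
};
```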

However, it doesn't at all address the issue of losing JavaScript state upon reload, which is one of the main problems this post is emphasizing.


Between browserify and requirejs I've definitely had a more positive experience with the former, but in my experience both are subject to scaling problems with even a trivial number of dependencies in play. I wish there were a really good sample project that allowed accurate comparisons between the tools at the scale they're actually used at, but for now I choose simple concatenation and a simple namespacing tool [1] to avoid lengthy watch-time issues during development.

grunt-contrib-watch is a pretty vital part of my workflow, but I don't use the livereload options because I can't afford to lose that state in the browser.

[1] - https://github.com/searls/extend.js
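The concatenation-plus-namespacing approach boils down to each file attaching itself to a nested global, so file order barely matters and no loader is needed. This standalone helper just illustrates the pattern (extend.js provides its own API; `root` stands in for `window`):

```javascript
// Sketch of a namespacing helper in the spirit of extend.js.
const root = {}; // stands in for `window` in the browser

function namespace(path, value) {
  const parts = path.split('.');
  let node = root;
  // Walk/create each segment except the last.
  for (let i = 0; i < parts.length - 1; i++) {
    node = node[parts[i]] = node[parts[i]] || {};
  }
  // Merge, so two files can contribute to the same namespace.
  const leaf = parts[parts.length - 1];
  node[leaf] = Object.assign(node[leaf] || {}, value);
  return node[leaf];
}

// Two "files", concatenated in any order:
namespace('app.models.User', { role: 'admin' });
namespace('app.models', { version: 2 });

console.log(root.app.models.User.role); // prints "admin"
console.log(root.app.models.version);   // prints 2
```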


"the issue of losing JavaScript state upon reload"

Is preserving that state really so desirable? I can imagine a lot of scenarios where it would cause unexpected behavior.

Most other platforms don't support this, do they?



