Hacker News

Chris Granger's Light Table is also trying to do it. Of course, Bret Victor's talk breathed new life into this whole thing. But everything old is new again.

People want to build software from a small kernel that scaffolds itself. They want to immediately switch between "testing the app" mode and "building the app" mode. They want something more than a text file, a tool pipeline, and an executable at the other end. On the other hand, they don't want their code to turn into an opaque binary blob that can't be diffed, can't be read by text tools, and can't be shared with people who don't feel like using your sweet "Invented on Principle" tool.

It feels like we're converging on the ideal solution, but it's happening more slowly than I think many of us predicted.



Exactly. I'd love more interactivity, but you can pry my plain-text source files from my cold dead hands. The Smalltalk environment was way too magical and too easy to mess up when I tried it.


Add me to the list of people dedicating themselves to trying something in this very area.

I don't know exactly what I'm creating yet (maybe there aren't terms for it yet), because I change things as I go and it's not done. But the current vision is a sustainable automation platform where you can add, change, or build anything (because it's open source) with whatever you're working on. So you could create yourself a "testing the app" button and a "building the app" button, and those would become available to you. Actually, I'm trying to make it so that those buttons appear automatically as you naturally do the things you'd normally do, but I haven't gotten to that part yet. (Oh, and perhaps instead of buttons you have to press, new output can simply appear on your screen right away.)

In short, my vision agrees very much with what you're saying, but I still have a lot of work to do before my project is a viable building tool, and it is happening very slowly indeed.


It's not the same. Light Table and Bret's demos go way beyond what Smalltalk ever did. It's not just about hot swapping but liveness, and the Smalltalk community never got that [1]. But that doesn't stop them from saying they've already done it, because they don't understand what they are seeing, and thinking is hard.

[1] John Maloney got it with Morphic, and even coined the term "liveness" at about the same time Tanimoto did. However, this was originally in Self and relatively independent of Self's hot-swapping capabilities. It also didn't really map back to code very well (it was very dependent on direct manipulation via Morphic's edit menu).


More than (almost) any other language/environment, Smalltalk is capable of this kind of 'liveness'. That it hasn't been implemented in the way you refer to is due more to a lack of development effort than anything else.

For the last four decades, Smalltalk has been providing a glimpse of the future. That future is still waiting to happen.


That's a vacuous statement if I've ever heard one. Nothing in Smalltalk makes achieving liveness easier than, say, Java, or more obviously a language with very encapsulated state like Erlang, and definitely not easier than various visual languages where you get liveness for free (Quartz Composer!). It's telling that Granger et al. are basing Light Table on Clojure/Lisp rather than Smalltalk (it will be interesting to see what Bracha does with Newspeak, however). Also consider various game engines (Unreal, Unity) that offer live scene scripting in whatever scripting language and C++ they support.

Smalltalk was crazy innovative, while Self (Smalltalk's only real successor) gave us the first live graphics toolkit (Morphic). But the future is still being invented, and it will be a much better experience than Smalltalk ever was.


Smalltalk is capable of evaluating statements as they're entered, and it's a relatively small step to reflect those changes immediately on compilation - thus, 'liveness', as in Light Table. I can't imagine being able to do that in Java.

If I'm missing something, can you please provide some more detail?


Liveness is an experience; hot swapping is a mechanism that gets you 5% of the way there. It's like saying "we've done 5% of the work, so the remaining 95% should be easy, right?" Actually, just figuring out what the experience should be is difficult. So, more details...

Hancock defines the term "live programming" in his thesis [1], and it's where I get my definition from (before Hancock, the term doesn't exist, though liveness was defined by Tanimoto and Maloney in the 90s). Basically, live programming is about continuous feedback, for which hot swapping might be useful (though it must be said it's neither necessary nor sufficient). But there is much more to it: you want continuous feedback about the code you are editing, not just some idea that the code will run sometime in the future in a running program. You also want to observe the behavior of this code in a way that is comprehensible, and map this behavior back to your code.

I wrote and presented a paper [2] on live programming back in 2007. Ralph Johnson (a big Smalltalker) was in my audience and had the same complaint: he only saw hot swapping and not the experience I was presenting. To him, it was mechanism, not experience. I wonder if this is a problem with Smalltalkers in general.

[1] http://llk.media.mit.edu/papers/ch-phd.pdf

[2] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


I understand the difference between 'hot swapping' and 'liveness'. However (admittedly, I may be mistaken), I believe the Smalltalk architecture already has the requisite functionality (eval and reflection, just like Lisp) to support this, although no one has actually implemented it yet. (And it ought to be more straightforward than building it in a Lisp-to-JavaScript compiler on top of a Lisp on top of the JVM!)

Some of the basis for my assertion came from this article: http://liveprogramming.github.com/liveblog/2013/01/13/a-hist...


Any Turing-complete language has the requisite functionality to support liveness, but something like Time Warp support by the OS/VM would make it easier. But honestly, at this point, even designing the experience (vs. implementing it) is hard enough, and we owe a lot to Bret Victor's talk here. Hancock's thesis sets high standards on how the feedback must be comprehensible (as opposed to some random lights flashing on and off).

I wrote a lengthy post on the history article you linked to. It's just not the live programming history that I'm familiar with; they seem to be falling into the same Smalltalk mechanism trap that I was talking about in this thread.


Similar to Self, the Io language would be a strong place to start for getting cheap liveness.


I don't quite understand this distinction between liveness and hotswapping, as all the examples of liveness that I've seen involve hotswapping code that causes graphic or audio side effects, and clearly that sort of thing has very real practical limitations.


There is one demo in Bret Victor's IoP talk where he is live programming a sorting algorithm and something non-graphical is visualized (in this case, control flow and local variable states). The hot swapping really isn't the focus at all; it's the live feedback that is important.


You might be fighting a battle that's already lost. For most people, live programming is having a running system with a REPL or equivalent attached so that you can run & update code inside that system. For example a running web server with some mechanism to add a new request handler while the server is running. Or a program that's playing programmatic music or some kind of graphical demonstration where you can add and redefine functions to change the music/graphics. This kind of "live programming" is basically the same as hot swapping, but some people also associate it explicitly with a live stage performance (e.g. music or visual).

With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it. Perhaps it does this by just running the code and showing the output, perhaps it displays the execution trace in some way, perhaps it displays a visualization of a data structure over time. In a way, live feedback on static type errors could also be considered a limited form of live programming. Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).

Even with this second notion of live programming, the question of updating running code does not go away. If you are developing a game, you may want to do live programming by running the game next to the code and having it update whenever the code changes. But a game has state, and how do you transport that state to the next version of the code? Hot swapping code by blindly mutating a function pointer in the running game is obviously not the answer. That's just a hack that works some of the time: it doesn't work when updating code while the running game is still in the middle of something, it corrupts the state when there is a bug in the code, and it doesn't work at all when the structure of the data changes. The perspective "how do I transport the state to the next version of the code" is much better than "how do I shove new code into the running system with the old state".

The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.


> You might be fighting a battle that's already lost.

REPLs and interactive programming existed long before the "live programming" experience was defined (by Hancock), and I only use the term to describe what Bret was showing off in his IoP talk as well as the experience the Light Table people seem to be striving for. I might be a bit pedantic, but there are plenty of other terms to describe the older less live experiences! Hot swapping is just some mechanism to achieve some undefined experience; "I changed my code while my program is running" is vague enough. It typically has to be coupled with some other refresh mechanism (e.g. stack unwinding) to be useful, and even then it typically doesn't do more than it advertises (func pointer f was pointing to c_0 and now points to c_1).

Now live coding...is completely different and has an independent origin from live programming. Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!

> With the kind of live programming that you mean there is some meta system that is monitoring your code and continuously giving you feedback on it.

It's coding with a water hose vs. a bow and arrow. Debugging is not a separate experience; it happens continuously while editing. If you can't provide enough continuous feedback to get rid of a separate debugging phase, then it's not really live programming.

> Maybe it's a good idea to adopt a new term for this kind of live programming? It would also help from a marketing perspective I think, to have a new thing that people can be excited about rather than a term that they associate with a boring, limited and old fashioned feature (i.e. hotswapping).

But the new term was co-opted to describe an old experience! Hancock's definition is unique (no one used this term before 2003), fairly complete, and very compatible with what Bret Victor was showing off in his IoP work. Why should we back off and invent yet another new term to describe the new experience whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!

> But a game has state, and how do you transport that state to the next version of the code?

Today this is framework specific, and all major game engines have a way of doing it, since they want to let designers script levels in real time without losing their context. It doesn't necessarily require language support, but it's not something you ever get "for free"; it's something that is baked explicitly into your framework.

> The same issue comes up with most programs, not just games. This is still an open problem as far as I know. For live programming we also need tools to manage and reset the state. When you have corrupted your state with a bug in your code, you want to be able to quickly go back to a previous non-corrupt state. Even if you change the entire programming model, you'll still have to address this state update problem in some way.

No one has yet figured out an expressive, general programming model that achieves this efficiently, but you can always "record" the input event history of your program and re-execute the entire program on a code change; i.e., there is an inefficient baseline. You still have problems with causality between program output and input; e.g., consider the user clicking a button that no longer exists or has moved! Lots of work still to do... just don't take away my term, please!
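To make that baseline concrete, here's a rough Python sketch of the record-and-replay idea; `ReplayHarness` and its event protocol are invented names for illustration, not anyone's actual system:

```python
# Sketch of the inefficient-but-general baseline: record the input event
# history, and on a code change rebuild the program and replay everything.
# Assumes the program is deterministic given its event history.

class ReplayHarness:
    def __init__(self, program_factory):
        self.program_factory = program_factory  # builds a fresh program instance
        self.history = []                       # recorded input events
        self.program = program_factory()

    def feed(self, event):
        """Record an input event, then forward it to the running program."""
        self.history.append(event)
        self.program.handle(event)

    def on_code_change(self, new_factory):
        """Re-execute the whole history against the new code."""
        self.program_factory = new_factory
        self.program = new_factory()
        for event in self.history:
            self.program.handle(event)
```

Note the causality problem mentioned above isn't solved here: a recorded event may reference UI that the new code no longer produces.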


> Whereas "live coding" is about some programmer coding "live" in front of an audience, live programming is about receiving continuous comprehensible feedback about your program edits in the context of a running program. Quite a huge difference in meaning with very different goals!

Yes that's what I mean! A tiny difference in the terms we use: live coding vs live programming. That's why it's confusing to people.

> Why should we back off and invent yet another new term to describe the new experience whose original term was hijacked to describe old experiences because people couldn't understand the new one? Crazy!

Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.

Conventional debugging is pressing a button to run your code and seeing what the result is. Instead of just displaying the result, you could display the entire execution trace (time-traveling debuggers). You could write unit tests and display which passed and which failed. You could output a visualization of some data structure in the program. For a game you could output a series of frames overlaid on each other (like Bret Victor does). Then you have type checking, sensitivity to floating point bit width for numerical code, performance profiling, etc. This is all about giving different kinds of feedback.

Continuous feedback is about getting feedback without having to press a button. Classical live programming is running the program continuously and continuously displaying its output; this is the continuous-feedback version of ordinary debugging. Automated background unit test runners are the continuous version of unit testing. In the same way you have a continuous version of the other debugging techniques.

Both continuous feedback and rich feedback are very valuable, and although they are stronger together they are separate concepts. Perhaps it would be a good idea to have separate words for them; that would certainly greatly clarify "live programming".

> but you can always "record" the input event history of your program and re-exec the entire program on a code change; i.e. there is an inefficient baseline. You still have problems with causality between program output and input; e.g. consider the user clicking a button that no longer exists or moved!

Yes, this is robust to internal data structure changes but no longer robust to UI changes. Viewing a program as a series of event stream transformers and time-varying values, as in FRP, may help a bit. At the lowest level you have a stream of mouse clicks on pixel (x,y) and keyboard events with keycode k. Then the UI toolkit transforms that stream into event streams on UI elements: a click on button "delete", text input to textfield "email address". Then that gets transformed into logical operations and data: delete_address_book_entry(...) and email_address. Then that gets transformed into the complete time-varying high-level state of the entire program (address_book_database). You can try to transport the state at each of these different levels, but in the end I think a completely automated solution is impossible. You are going to need domain-specific info on how to do schema migration in the general case. For live programming that may not be worth it, because you can just start over with a fresh state, but for things like web site databases you don't want to lose data, so you have to migrate manually.

[tangent: Currently there are a lot of ad-hoc solutions to this, e.g. never remove an attribute from your data model, and when you add new attributes make sure all code works even if that attribute is missing. Reddit even goes so far as to structure its entire database as "key, attribute, value" triples instead of using a structured schema so that the schema never needs to change, but of course this just moves the problem from the database into the code that talks to the database. A principled approach where you write an explicit function to migrate your data from schema version n to schema version n+1 would work better. That migration function takes the entire state/database with schema n as input and produces an entire new state/database with schema n+1. When the state/database is large this would take too long to do in one pass, but with laziness it can be done on demand.]
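The explicit version-n-to-n+1 migration function from the tangent could be sketched like this; the registry, decorator, and field names are all hypothetical, just to illustrate the shape of the approach:

```python
# Sketch of explicit, versioned schema migrations: one registered function
# per version step, composed in order to bring a record up to date.

MIGRATIONS = {}

def migration(from_version):
    """Register a function that migrates a record from version n to n+1."""
    def register(fn):
        MIGRATIONS[from_version] = fn
        return fn
    return register

@migration(1)
def split_name(record):
    # Hypothetical step: v1 stored a single "name" field; v2 splits it.
    first, _, last = record.pop("name").partition(" ")
    record["first_name"], record["last_name"] = first, last
    return record

def migrate(record, current_version, target_version):
    """Apply each registered step in order: n -> n+1 -> ... -> target."""
    for v in range(current_version, target_version):
        record = MIGRATIONS[v](record)
    return record
```

The same composition works whether "record" is one row or an entire lazily migrated database snapshot.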

You don't need to limit yourself to running one instance of the program. You could record multiple input sequences representing multiple testing scenarios, and display the results of running each of them, or even display each of them being continuously performed so that you can see all the steps in between. In any case as you say there is lots of work still to be done.


> Sometimes you have to cut your losses ;-) Another reason why I dislike the term "live programming" is because it confuses two separate concepts: continuous feedback and rich feedback.

Again if we go back to Hancock's thesis, it's all there! It's not just about continuous feedback; it's about feedback with respect to a steady frame, feedback that is relevant to your programming task, feedback that is comprehensible. Hancock got it right the first time; there is no classical live programming (though there were other forms of liveness before). Actually, this is something I didn't get myself in my 2007 paper.

I don't think I need to abandon my word, especially since the standard bearers are Bret's demos; people want "that", not some vaguely defined Smalltalk hot-swapping experience. The community I'm fighting over the word with is small and insignificant vs. the Bret fans :).

As for the rest of your post: explicit state migration is a big deal for deployment hot swapping (Erlang?) but ultimately a nuance during a debugging session. A "best" effort with reset as a backup is more usable.

But maybe take a look at our UIST déjà Vu paper [1]: there the input is defined as a recorded video that undergoes Kinect processing, and we are primarily interested in the intermediate output frames, not just the last one. So the primary problems are ones of visualization, while we ignore the hard problem of memoization and just replay the whole program. We even have the ability to manage multiple input streams and switch between them.

Kinect programs are good examples of extremely stateful programs with well defined inputs. One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


> Again if we go back to Hancock's thesis, it's all there!

Yes, the problem is not with the definition of the term, but with the term "live programming" itself! It is too vague and can apply to too many concepts, and hence we're seeing people use it and interpret it for many different concepts. Nobody will go read a thesis to learn what a term means. But then again, "object oriented programming" is vague as well. The notion of "steady frame" does seem oddly domain specific. In the words of that thesis: water-hosing your way toward the correct floating point cutoff value, or toward the value of a parameter in a formula that produces an aesthetically pleasing result, works great, but I'm not convinced that you can water-hose your way to a correct sorting algorithm, for example. Perhaps I have misunderstood what he meant, though.

> A "best" effort with reset as a backup is more usable.

Yeah, I agree. I think the same primitives that can be used for building good explicit state migration tools, like saving the entire state and recording input sequences or recording and replaying higher level internal program events, can also be used for building good custom live programming experiences. So they are not two entirely disjoint problems.

> But maybe take a look at our UIST déjà Vu paper [1]

That's very interesting and looks like an area where live programming can work particularly well! A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream. Of course LightTable is trying to do some of that, but while it started out in a quite exciting way they seem to be going back to being a traditional editor more and more (albeit extensible).

> One of the next problems I'm trying to solve is how to memoize between the frames of such programs to make the feedback more lively.

Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation


> A meta toolkit for building such domain specific live programming environments may be very useful if live programming is to take off in the mainstream.

That's exactly what we're trying to do with LT, see my "The Future is Specific" post [1].

> they seem to be going back to being a traditional editor more and more (albeit extensible)

This is a necessary detour as we build a foundation that actually works and allows us to really make the more interesting stuff. If we can't even deal with files, what good are we going to be at dealing with the much more complicated scenario of groups of portions of files? :)

[1]: http://www.chris-granger.com/2012/05/21/the-future-is-specif...


That's great to hear! I really hope LightTable works out.


> Yes, the problem is not with the definition of the term, but with the term "live programming" itself!

True. But I think the word has worked well until recently.

> The notion of "steady frame" does seem oddly domain specific.

Not really, but please wait for a better explanation until my next paper. One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now (the UI represented by the steady frame is probably not the GUI that is used by an end user).

> Probably you've seen that already, but have you looked at self adjusting computation? http://www.umut-acar.org/self-adjusting-computation

Their work doesn't seem to scale yet (all the examples seem to be small algorithmic functions), while I'm already writing complete programs, compilers even, with my own methods, which are based more on invalidate/recompute rather than computing exact repair functions. I'll be able to relate to this work better when they start dealing with bigger programs and state.


> True. But I think the word has worked well until recently.

I just saw this: http://www.infoq.com/presentations/Live-Programming

"Sam Aaron promotes the benefits of Live Programming using interactive editors, REPL sessions, real-time visuals and sound, live documentation and on-the-fly-compilation." :D

> One of Bret's examples in his IoP video is a correct sorting algorithm, actually, programmed with live feedback. I also mentioned a return to printf debugging on LtU before, and it's basically the direction I'm taking right now

Yea, this interpretation of 'steady frame' is fully general, I think: the ability to compare feedback on version n with feedback on version n+1 without getting lost. My interpretation was more specific because of the water hose vs. bow-and-arrow analogy: continuously twiddling knobs until you get the result you want vs. discrete aim-and-shoot. For example, picking the color of a UI widget with a continuous slider vs. entering the RGB value and reloading. Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.

> which are based more on invalidate/recompute rather than computing exact repair functions

You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute. For example, if you have a List<Changeable<T>>, each item in the list can be repaired independently; if you have Changeable<List<T>>, the whole list will be recomputed. Although you probably want to automatically find the right granularity rather than force the user to specify it?


> I just saw this: http://www.infoq.com/presentations/Live-Programming

Ya, I saw it too. I haven't watched the talk itself, but I expect it to be more of the same promotion of live coding as somehow actually being live programming (programming is like playing music! Ya...).

> Since a sorting algorithm is not a continuous quantity, aim-and-shoot is inevitable, though you can still make it a bit more "continuous" by making the aim-and-shoot cycle more rapid.

A sorting algorithm can be fudged into a continuous function. But here "continuous" means "continuous feedback", not "data with continuous values." The point is not that the code can be manipulated via a knob, but that as I edit the code (usually with discrete keystrokes and edits), I can observe the results of those edits continuously.

> You can do this in their framework; you can specify at which granularity you want to have the 'repair functions' and at which granularity you just want to recompute.

I'll have to look at this work more closely; the fact that we need custom repair functions at all bothers me (repair should just be defined simply as undo-replay). The granularity of memoization is an issue that has to be under programmer control, I think.


You don't need custom repair functions; the default is undo-replay, but in some cases custom repair functions help performance. For example, suppose you have a changeable list and a changeable multiset (a set that also keeps a count for each element). Now you do theList.toMultiSet(). If the list changes, then the multiset has to change as well. If you applied their framework all the way down, this might be reasonably efficient. But with custom repair functions it can be more efficient: if an element gets added to the list, just increment the count of that element in the multiset; if an element gets deleted, decrement its count.
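That increment/decrement repair fits in a few lines of Python; `LiveMultiset` and its change-notification methods are made-up names for illustration:

```python
# Sketch of the custom repair described above: maintain a multiset
# (element -> count) incrementally from list insertions and deletions,
# instead of recomputing theList.toMultiSet() from scratch on each change.
from collections import Counter

class LiveMultiset:
    def __init__(self, items):
        self.counts = Counter(items)  # full computation, done once

    def on_insert(self, item):
        self.counts[item] += 1        # O(1) repair instead of full recompute

    def on_delete(self, item):
        # Assumes the item is actually present in the list being mirrored.
        self.counts[item] -= 1
        if self.counts[item] == 0:
            del self.counts[item]
```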


I feel like we are turning hackernews into lambda-the-ultimate :)

I once wrote a really bad unpublished paper [1] that described the abstracting-over-space problem as a dual and complement of the abstracting-over-time problem. It turns out that for simple scalar (non-list) signal (reactive) values, the best thing to do is simply recompute. However, for non-scalar signals (lists and sets), life gets much more complicated: it makes no sense to rebuild an entire UI table whenever one row is added or removed, so we want change notifications that tell us which elements have been added and removed. However, I've since changed my mind: it is actually not bad to redo an entire table just to add or remove a row, as long as you can reuse the old row objects for persisting elements. If my UI gets too big, I can create sub-components that memoize renderings unaffected by the change (basically partial aggregation).

Now how does that relate to the theList.toMultiSet example? Well, the implementation of toMultiSet can be reduced to partially aggregated pieces very easily (many computations can, actually), which can then be recombined in much the same way as rendering my UI. Yes, the solution that decrements/increments the count on a specific insertion/deletion is going to be "better", but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.
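A minimal sketch of such a tree of partially aggregated memoizations, assuming a commutative, associative operation (a toy segment tree, not anyone's actual framework):

```python
# Sketch of "a tree of partially aggregated memoizations": cache partial
# aggregates of an associative, commutative op so that one element changing
# only invalidates the O(log n) cached nodes above it, not the whole result.

class AggTree:
    def __init__(self, values, op, identity):
        self.op = op
        self.n = len(values)
        self.tree = [identity] * (2 * self.n)
        self.tree[self.n:] = values
        # Build the cached partial aggregates bottom-up, once.
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, index, value):
        """Change one leaf; recompute only the aggregates on its root path."""
        i = index + self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def total(self):
        return self.tree[1]  # aggregate over all elements
```

The same shape covers the toMultiSet case (op = multiset union) and the UI case (op = combining rendered sub-components).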

I still need to understand their work better, but I approached my work from a direction opposite to algorithms (FRP signals, imperative constraints). I have a lot of catching up to do.

[1] http://lampwww.epfl.ch/~mcdirmid/papers/mcdirmid06turing.pdf


> but a tree of partially aggregated memoizations works more often in general; it's easier to do with minimal effort on behalf of the programmer.

Yes, that's exactly what you get if you do not implement a custom traceable data type (their terminology for a data type that supports repair), provided you write your code in such a way that the memoization is effective. Note that traceable data types do not necessarily need to be compound data structures; an integer works too. E.g., when summing a list of integers, if one of the integers in the list changes, you do not need to recompute the entire sum, or even a logarithmically sized part of it: you can just subtract the original int and add the new int back in.
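The subtract-old/add-new repair can be sketched like this; `LiveSum` is a hypothetical name, and the point is only that the repair is O(1) regardless of list length:

```python
# Sketch of repairing a cached sum in O(1) when one element changes:
# subtract the old value, add the new one, rather than re-summing the list.

class LiveSum:
    def __init__(self, values):
        self.values = list(values)
        self.total = sum(self.values)  # computed once, up front

    def update(self, index, new_value):
        """Repair the cached total for a single-element change."""
        self.total += new_value - self.values[index]
        self.values[index] = new_value
        return self.total
```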

Here is also an article that does something related but in an imperative context rather than a functional one: http://research.microsoft.com/pubs/150180/oopsla065-burckhar...


The Bret Victor presentation for those interested: http://vimeo.com/36579366

I can't help but smile while I watch.



