Why Systems Programmers Still Use C (2006) (bitc-lang.org)
107 points by willvarfar on March 18, 2012 | hide | past | favorite | 63 comments


We are starting to get alternatives, though. Besides Rust and D, there are Clay (http://claylabs.com/clay/), ATS (http://www.ats-lang.org/), and Deca (http://code.google.com/p/decac/). I'm sure there are more. I unfortunately haven't had time to play with them, and they're in various stages of development. But at least people are thinking about improving systems programming.


I was surprised that the author thinks the syntax of Haskell is no better than that of C++. Let's look at a typical example. In Haskell, the signature for sort is

sort :: Ord a => [a] -> [a]

which just says that it takes a list of comparable things and returns another list of the same type. Here's the type signature for sort in C++:

template <class RandomAccessIterator, class Compare> void sort ( RandomAccessIterator first, RandomAccessIterator last, Compare comp );

The difference in terseness and clarity is a big reason why I use Haskell when I have a choice.


Please. The difference in terseness and clarity is made up almost entirely of the lengths of the symbol names:

template<class I, class C> void sort(I a, I b, C cmp);

Now, the concepts aren't 1:1; Haskell for obvious reasons doesn't represent the idea of operating on storage directly, so you can't have an iterator and need to return a "new" list. C++ makes you write out the types you are parametrizing instead of getting it implicitly. And there are no doubt some really good arguments why Haskell is terser and clearer than C++.

But this isn't one of them. Come on.


The difference is not only due to identifier length. Haskell's sort is a function in the mathematical sense. By looking at its type, I can see the input and output at a glance and understand how to use it. It's not immediately obvious from the signature of the C++ sort procedure that the "first" and "last" arguments are being used for both input and output. When you abbreviate the identifiers for STL sort it's even less obvious what's going on.


There's nothing "immediately obvious" about that Haskell signature at all. You need to understand the [] syntax for list typing. You need to understand that the language definition has a built-in notion of "list". You need to understand the idea of functions having types themselves (a huge hurdle if you're new to functional languages). And you need to understand that weird "Ord" decorator gadget and that it means the types can be compared. Basically, you can't understand that line noise at all unless you know Haskell. Duh.

Likewise, if you're truly confused about C++ STL iterators you're just waving your own ignorance around. They're a simple concept pervasively applied in the library. No experienced programmer is going to be confused by that function declaration.

Look, very good cases can be made for functional languages. But this is just surface-level stuff that frankly isn't going to help anyone. Expressing a sort simply isn't a complicated thing in C++ or Haskell and trying to claim otherwise is just dumb.


These days Java seems to have become one of the choices for doing systems programming at a different area/level:

HBase, Hadoop, Cassandra, GWT tools, MQ, App Servers (Jetty, Tomcat, GlassFish, etc), EhCache.

Unless people categorize the above software as non-systems-programming.


Most database engines of moderate sophistication straddle the "systems" and "application" line because they start directly managing system resources at a low level. Application servers are a bit more like "applications".

Database engines highlight some of the reasons that Java is a poor systems language relative to C/C++. High-performance database engines these days are usually bottlenecked by memory I/O performance and efficiency, or storage I/O if you under-provision the machine resources for the workload. This is why you see I/O optimizations applied to in-memory databases, for example. Java may be fast at many things, but it is much slower than C/C++ for codes that are bound by memory performance and efficiency. You have to start doing very awkward things in Java to even come close to what comes naturally in languages that explicitly manage memory behavior and structure. As someone who has written a lot of database engine-y code in both C++ and Java, the difference in absolute performance for nominally equivalent code is not small, and is generally easier to achieve in C++. So for low-level performance-oriented systems, there is a pretty strong bias toward C++, particularly now that there is a lot of experience trying to implement the same systems in Java.

For performance sensitive codes, systems programming is really about carefully managing resources to optimize for characteristics of the system. C makes this very easy because it exposes all of it. To the extent that CPU architectures are increasingly bound by memory performance, I would not be surprised to see C++ supplanting Java for certain purposes.

This makes sense. Java was designed more as an application language that works well for codes that are unlikely to make heavy use of the memory system. Its popularity and generality has caused it to be widely used but that does not always make it a good choice outside of its original design case (see also: Perl).


It's mostly a semantics thing, but no: most "systems programmers" would not categorize those applications as "systems tasks". They're just apps. By convention, "systems programming" means dealing with the bottom of the abstraction stack. Java code can't make syscalls. Calling conventions for Java methods aren't specified at the level of CPU registers or instructions.

So writing a JVM is systems programming. Writing a Java-based data store is not, even though other stuff then sits on top of the store.


It is possible to do syscalls in Java.

Following your example there are also JVMs coded in Java.


It is certainly not possible to do syscalls in "Java". A system call is a special CPU instruction (e.g. INT, SYSENTER) not accessible to or defined in the Java VM specification.

What you do to effect a syscall is to call a JNI function to do the work. JNI is a C (!) API, defined in terms of the C (!) ABI for the platform.

And sure: you can generate native machine code in Java, just as you can in python or bash or even BASIC. But you can't call it.


Depends what you consider Java, as there are VM extensions coupled with compiler magic that allow it.

One such example is the Jikes VM:

http://jikesrvm.sourceforge.net/apidocs/latest/org/vmmagic/p...

Or Sun's research in writing drivers in Java http://labs.oracle.com/techrep/2006/abstract-156.html


Curious as to how you would set up a DMA transaction in any of those languages? If you're doing system programming, you'll need to do DMA.


By providing an unsafe package that allows you to do such things.

Have a look at:

Modula-3 + Spin

Oberon + Oberon System

Spec# + Singularity

Haskell + House

OCaml + Mirage


>Unless people categorize the above software as non-systems-programming.

Probably the case. Networked services represent an area where performance is at least somewhat a concern, but BitC is trying to address situations where one is writing a memory allocator, not building a server on top of a GC'd language known for having sloppy memory usage.


Interesting article, but he misses the right answer: if it isn't broken, don't fix it.

C may not be the easiest language to learn, but you wouldn't want newbies messing with systems programming anyway.

Higher level languages give you a more abstract view, but when you are doing systems programming that's not what you want, you need to be in full control. Only C gives you precise control of what the machine is doing all the time. You don't want a garbage collector to kick in unexpectedly, you don't want data structures to be allocated in mysterious ways.


You misunderstand his argument. He's not arguing that we should replace C with "higher-level languages" that don't give you precise control of the hardware; in fact, he bemoans the fact that the PL community at large seems interested only in such high-level languages. Nor is he complaining that C is hard to learn (I'm not sure where you got that impression). In fact, the paper mostly takes for granted that C needs to be replaced, and discusses the challenges in doing so, which may be why you assumed that BitC is (was, really) just another typical research-community high level programming language project.

The fact is, C is broken, and we all suffer every day because of it. From a security perspective, C is a nightmare. It is not just an unsafe language (in the PL sense of having undefined behavior), but a rampantly unsafe language. Null pointers (which C.A.R. Hoare famously called his "billion-dollar mistake"), buffer overruns, unrestricted pointer arithmetic, arbitrary casting, manual memory management; all of these are the cause of innumerable bugs in real code, often critical security vulnerabilities. C is also not the nicest of languages to code in. It is terribly verbose. There is no good way of writing generic code in it. The preprocessor is a gigantic ugly hack. Writing portable C is a pain in the ass, and building it portably is doubly so.

The worst part is, we know reasonable ways to solve most of these problems. In many cases we have known them for decades. Unfortunately, how to best integrate these solutions into a language and still keep it useful for systems programming was (and is) an open problem. I can't speak for Shapiro, but as I see it this is what BitC was aimed at doing.


C is far from "terribly verbose".


>You don't want a garbage collector to kick in unexpectedly, you don't want data structures to be allocated in mysterious ways.

All of that (and everything else attributed to C) can be accomplished without an arcane preprocessor/include system, and you can have niceties such as a saner type system, generics/macros, namespaces, etc.

C is a language stuck with design decisions that reflected the programming environments of the 60's and 70's but make absolutely no sense in a modern context, and now we are just stuck with it because of inertia.


I would agree and would elaborate that the major weaknesses of C are a combination of build system issues and an overly weak type system.

It wasn't realistic to have the whole compile process done in memory in the early 70's, and OS virtual memory wasn't realistic either; but we've long surpassed those concerns, and that means that all the thinking about the program namespace, modules, linking, pre-processing, etc. is worth re-evaluating.

As well, we can do better static checking now, without including any notion of GC; in an imperative execution model, leakage of memory, handles, processes etc. remains orthogonal to type safety. C has heavily bottlenecked program control logic from the beginning - a callstack, looping constructs, etc. - and allowing some equivalents to exist in its data structures would make user code tremendously more reliable.

These changes, often paired with some more attention to concurrency, show up in all the newer system language designs - D, Go, Rust, Clay, BitC, etc.


Anyone who tries building a better C (for example Go) seems to end up going "too far", and as well as fixing the things you mention, ends up adding garbage collection, and various other things which are not suitable for very low level systems programming.

I teach C, and I would love a "cleaner C" mode, which just got rid of lots of the bizarreness of C, several examples of which you mention. The fact that many compilers will warn with '-Wall' about code which is clearly incorrect, but due to the rules of C they cannot simply reject, is irritating.


Fact is, there are many examples of operating systems implemented in GC-enabled systems languages.

Spin, Oberon, Singularity, House are just a few of them.

The main problem is that for a systems programming language to be used as such, there must exist a successful operating system that uses it as its main language.

In Windows 8, the main systems language is C++/CX (C++ with reference counting extensions), so the time will come.


> The fact that many compilers will warn with '-Wall' about code which is clearly incorrect, but due to the rules of C they cannot simply reject, is irritating.

The rules of C are actually stricter than many people realize (add -std=c99 -pedantic-errors to gcc and you can get an idea about that -- this, however, still won't catch semantic errors like aliasing violations).

Personally, I'm using clang with -Weverything and remove warnings as necessary. On gcc, there are a lot of useful warnings which are not included in -Wall. My current warning levels look like this:

  -std=c99 -pedantic -Werror -Wall -Wextra \
  -Wmissing-prototypes -Wmissing-declarations -Wshadow -Wpointer-arith \
  -Wcast-align -Wwrite-strings -Wredundant-decls -Wcast-qual \
  -Wnested-externs -Winline -Wno-long-long -Wconversion -Wstrict-prototypes


(add -Werror - your students will never know.)


>Only C gives you precise control of what the machine is doing all the time.

Maybe in 1979, it did. Modern C compilers alter and modify code quite radically, to the point where it's quite difficult these days to go from optimized assembly back to the original C code. These days, C gives you a good illusion of control, but all too often, C programmers mistake illusion for reality.


No, C gives you control. I have a syscall (bind(), say) that takes or returns a polymorphic array via a pointer. How many "system" languages represent something like a C pointer cast or union type with clear storage semantics? Or I want to write a code generator for my fancy new language interpreter and need to call mmap() to get an executable range. How many "system" languages let me call into that memory with native syntax?

What you're describing is the difficulty of understanding what guarantees of control you actually get from your compiler (and yes, that's a much more involved issue than simply reading a language spec). Yeah, C is hard. But the fact that you don't understand how the optimizer works (or how to read the generated assembly) isn't the work of an "illusion", it's just your own inexperience.


On the other hand, again and again, bugs that stem from C's ability to alias have been exploited.

His answer was of course EROS.

Microsoft explored Singularity.

Some of his colleagues went and wrote Go as a systems language (if not a kernel-side language).


"Only C"? What about Forth?


Why fix something that is not broken? With systems programming you wouldn't want anyone/everyone to mess around with it anyway. I don't need bounds-checked arrays or garbage collection (oh the humanity!) at random, unpredictable times. C gives an unprecedented level of control over what the machine is doing; for a more abstract interface to the machine, use whatever suits your fancy...


Things I can think about:

- low level memory management

- faster execution (in most cases)

- cache line optimization

- avoiding language implementation magic


I stopped reading when I got to "The concrete syntax of Standard ML [16] and Haskell [17] are every bit as bad as C++".

The author definitely has no understanding of PL issues, or has a huge bias toward C++'s insane syntax.


BitC appears to be abandoned, but I cloned it from their hg repo and put it up on GitHub anyway. Here it is, for posterity:

https://github.com/bitc-repos/bitc

Projects like this make me wonder if they would have gotten further had they been put on a more social OSS hub, be it SourceForge in its day, or GitHub now.


The problem with systems design is that it's all so complex. Multitasking has become an excuse to run everything as some absurd daemon. Badly designed hardware has to be papered over with undocumented binary drivers. Everything is optimised for throughput benchmarks, so we get warm ups and unpredictable pauses visible to the user. And it's all full of security holes.

The plain truth of the matter is that none of these things are due to a lack of special language support. It's that the whole system is too complex, and the complexity isn't even quarantined in such a way as to be harmless. We need to be able to start over (something the author acknowledges). We can't do this if we build yet another complex system in the belief that we'll get it all right this time round.


The problem with that is "Worse is Better". And there's just so much momentum behind current technologies, that switching seems almost unimaginable to most. Can you imagine not having Unix-like systems, and not being able to use C and all the languages built around that ecosystem?

But if you're looking for a replacement for the entire software stack we use today, one that tries to shun complexity (20k LOC for everything from kernel to common GUI apps), here you go: http://www.vpri.org/pdf/tr2011004_steps11.pdf

More info (see previous STEPS reports): http://vpri.org/html/writings.php


Thanks for the link. I will check this out later.

>Can you imagine not having Unix-like systems, and not being able to use C and all the languages built around that ecosystem?

Yes! Charles Moore has been essentially living this since the 1970's. Not many have his level of courage, though.


You mean this one? http://en.wikipedia.org/wiki/Charles_H._Moore

Can you give some more detail? What does he use?


Towards the bottom of that page it mentions colorForth. That is what he uses. It is a Forth-like language that removes punctuation and replaces it with colors.


Ah, upon further reading, colorForth has its own operating system, so he probably uses that too. I had seen the forth bits, but the GP implied he was using some non-Unix OS, and I was still curious about that.


Forth is conventionally a compiler, REPL, editor and OS all in one package. If I recall correctly, the reference implementation of ColorForth happens to run as a Windows application, but it pretty much lives in its own memory image and could probably be made bootable.


Sorry I wasn't clearer about that. Much like there were Lisp machines in the 80s, Mr. Moore ran (perhaps still runs) a company that does custom Forth machines. So he uses his own hardware and his own software - turtles all the way down, so to speak.


> Yes! Charles Moore has been essentially living this since the 1970's. Not many have his level of courage though.

I'm an unbounded admirer of Charles Moore. But courageous? He's a software solipsist. His approach is certainly bold but I doubt he sees himself as courageous.


Call it bold or courageous. He's doing something few are willing to do. I'm not saying he's a hero in the same sense as a soldier or anything here, but he's worth 100x more than the drones who just keep piling mess upon mess without even being able to admit what they are doing.


But that would break the backwards compatibility with previous systems and architectures. The markets can't have that.


Also, this paper suggests that the answer to multi-tasking problems is to move the complexity out into the language. It's funny how PL researchers will always spin the situation so that it demands more PL research. If we're really moving into a many cores future as people suggest, then we should have architectures that give each application a dedicated core. No context switching. Of course, you'd need a simple system and not a billion daemons running in the background waiting to be exploited by criminals.


Jonathan Shapiro is not a traditional PL researcher. He is the architect and lead developer of EROS (www.eros-os.org) and CapROS (http://www.capros.org), and the author of a vast number of papers on secure, high-performance, reliable real-time operating systems. The confinement mechanism in EROS was proven secure by Shapiro (i.e. proven impossible for applications to leak permissions), and AFAIK it is still the only meaningful and practical security mechanism demonstrated to do so, which is a pretty significant contribution in the field of computer science.

(Although having been a voyeur of his work for over a decade now, I'm sure he would be quick to point out that much of EROS was based on formalising ideas from GNOSIS and KeyKOS, and the work of Hardy, Franz, Landau et al. over 20-30 years earlier.)

BitC evolved out of a need to prove that the implementation of the confinement mechanism (prototyped in EROS) matched the model (proven in his PhD thesis), and so the CapROS system (built in BitC) was born (well that plus some architectural changes based on lessons learned from EROS). From that perspective it's much more than just another systems programming language - it has a definitive purpose to advance the state of the art in practical and theoretical computer science. (FWIW, they never did accomplish this goal [http://www.bitc-lang.org/docs/bitc/bitc-origins.html]).


I've been looking through EROS, CapROS, Coyotos, etc. for a bit - thanks for all the references.

Is there still any forward motion here? It seems like a lot of the sites haven't been updated for a good number of years. A notable exception was BitC, whose latest news was from 2010 - still not that recent.


Not that I'm aware of, unfortunately. I haven't followed things much over the past 2-3 years, though.

I suspect there are two forces at play:

1. Building a pure capability-based operating system has minimal payoff. While personally I believe the result will be an amazingly reliable, high-performance, secure system, the fact is the operating systems we have today are apparently "good enough" that nobody is interested in funding further work in this area (AFAIK Shapiro and others did form a venture in this regard; what came of it I don't know). Keep in mind that much existing software would have to be re-engineered, and a good part of the OS utilities redesigned, since if you're going the pure capability route, files become significant pretty much only as a user-level thing.

2. At a higher level, in my opinion (as an amateur capability-based systems theorist) the benefits of distributed capability-based systems have already been realised as the shape of the evolving web, albeit on much cruder foundations than those designed as part of the literature. Cookies + URLs are pretty much capabilities, and web services (including web sites!) are effectively distributed objects. Javascript + HTML have fulfilled the dream of being able to ship and run data, code and user interfaces remotely, which is the foundation for an unplanned human + computer usable distributed ecosystem.

Javascript, while not the cleanest language, is a solid language for a capability-based system (in terms of capability rules: everything is an object, objects can only be accessed via references [capabilities], and object references can only be acquired by (a) creating the object or (b) receiving a capability), i.e. capabilities cannot be forged. If you're interested in what an even purer approach, designed by people who really know what they're talking about, would look like, then take a look at http://www.erights.org and http://www.waterken.com .

In this sense, the first true modern capability-based operating system will probably be the first true web operating system.

I think it's pretty cool, and a confirmation of the ideas in Gnosis, KeyKOS, EROS, Coyotos, CapROS, E, etc. that the natural evolution of the largest distributed system on the planet (the Internet) effectively took the form of a distributed capability-based system.


Thanks, I really appreciate the info. Sorry for my slow response, I hope you see this.

> Building a pure capability-based operating system has minimal payoff.

Given the massive security problems we see today, if capabilities are the right solution (are they?), it seems like the payoff could be massive. It seems like "mass adoption" would be hard to achieve (except in the really long term), but it seems like it would be possible to find early adopters who "really really" need good security (e.g. certain military applications, maybe?).

> At a higher level, in my opinion (as an amateur capability-based systems theorist) the benefits of distributed capability-based systems have already been realised as the shape of the evolving web, albeit on much cruder foundations than those designed as part of the literature.

This is extremely interesting. Reminds me of the idea behind the "separation kernel" which is supposed to mimic in software the security that can be achieved by connecting systems only over extremely well-defined channels (e.g., the systems are physically disconnected except for an ethernet port that is very well controlled). I can find a reference on security kernels if you're interested and not aware of them already.

Anyway, it does seem to me that there are lots of applications (e.g., embedded applications) where distributing everything over the Internet isn't really going to work (not that you're suggesting that). I'd really like to see people tackle security for this kind of system in an entirely new way. Maybe capabilities is part of that.


A better idea would be to just build a small system whose security was obvious. Computer scientists are right to mimic mathematics. They're just mimicking it too directly. Mathematics is based on construction from simple axioms and cross-checking of different theories. Computer systems should be reduced to small parts with redundant checks against human error. If you prove a kernel "correct" that just means it will be that much harder to rewrite it if it turns out not to be what you wanted.


I've read many of your comments on this thread, and I believe most of them to be unrealizable for non-trivial applications. Anyone can write a simple web-service, but how does one write a super low latency network file system in such a way that "security is obvious?"


Your accusation is useless because the PL tools don't solve this problem either. Currently the only tools we have in networks are cryptographic and social, and those are not provably secure. And the point is that what is considered "non-trivial" nowadays is actually "monstrously huge and self-serving". A computer is just a tool but people want to build a whole world in there. And after that they become addicts, leading to absurd rationalisations about how we need more to cure us.


Why downvote? HN has really gone down the tubes. Why do people who downvote harmless comments without a response get downvote privileges?


Wasn't me (I can't downvote) but I have had a similar experience in the Go thread so I share your frustration. This is the reason I stopped visiting Reddit.


I just downvoted you because you are saying something dismissive about someone who has clearly spent a long time studying the alternatives and making a balanced informed decision.

Now it may be that you have some great insight but your post doesn't seem to give me any epiphanies.


Well feel free to let software rot into pure shit under these continued rationalisations. What a great service you're doing to the users.


"If we're really moving into a many cores future as people suggest, then we should have architectures that give each application a dedicated core. No context switching."

You probably don't want that either because the cost of moving data between cores will be so high from both a performance and a power consumption POV. The days of free coherency between cores will eventually come to an end; you can already see evidence of that in multi-socket Intel and AMD machines (if you try to ignore the NUMA nature of memory, you swamp the link between sockets and perf tanks in a variety of apps).


Why would you be moving data between cores? I'm talking about an arrangement where each core is dedicated to a single program. No sharing of memory, even via the kernel. Each hardware device talks to a single core only. Multiplexing of hardware would be done in... hardware. For all intents and purposes the "core" could be out on the network.

All this throughput nonsense is benchmark lies. Websites are not, as a general rule, highly responsive. They have huge variability in response times. Who cares if it costs a bit more to run the system if it means having a nice, predictable system built for simplicity and safety from the ground up?


You need an event-driven system because the number of "tasks" rarely matches the number of physical cores available. One tried and true, fault-tolerant event-driven interface is provided by the kernel: processes (and threads). It has interfaces for binding to NUMA regions and such.

Any system you design will have to tackle a lot of the hard problems that kernels deal with. Now maybe you have a simpler system in mind, but it has to be drastically better for anyone to consider giving up binary compatibility.


You are completely misunderstanding me. I am not saying "let's all move to this new utopian system". I am suggesting that we move away from the idea of making universal systems entirely. Write software that is small enough that complete rewrites can be accomplished without trouble. The current situation is to build ourselves a problem that gets harder and harder to undo as time goes on.

You can't call something that's been around for less than 100 years "tried and true". I'm not saying we can't have "virtual CPUs" for tasks that can handle a performance hit. But it is becoming an unavoidable impediment. "Simple" doesn't mean "easy". It means predictable, small and undoable. The exact opposite to where we are heading.


I write a lot of code that runs in funny environments like the Compute Node Kernel running on Blue Gene systems, with offset-mapped memory (no TLB) and no over-subscription. It's fantastic for reproducible performance, but it's a specialized environment and people want to do things like run Python (as "glue" for some scientific applications) which needs dynamic loading. Since all IO is exported to separate IO nodes, and due to file system consistency semantics, dynamic loading is a tremendous bottleneck. This has escalated to the point where people burn a quarter million core hours to load Python (plus C extension modules) once on a large machine. To really solve the scalability problem, we needed to make dlopen() avoid touching the file system (by patching ld.so and implementing the POSIX file API with collective semantics served over the fast network).

The point here is that there are good reasons to want to use different systems together and then you end up putting in lot of effort working around the limitations of the specialized environments.


It sounds like you're basically proposing Plan 9--lots of very small, single-purpose components with one standardized way of passing data between them used across the entire system.

Also, you can ask a lot of people to give up binary compatibility and be okay, but asking people to give up source compatibility and change applications is a much, much harder pill to swallow. Specialized environments (like the syscall-shipping CNK on Blue Gene like jedbrown mentions) are usually seen as something to work around because they are impediments to productivity. A more robust system wouldn't be considered useful if everyone has to start from scratch.


I'm not really interested in what's "considered useful". Look at the state of web applications. Basically what we could already do in 1996, but using more resources and full of ads and other distractions. And yet to many people this step backwards represents state of the art computing.

"Productivity" is highly overrated. Computers are already extremely useful without a bunch of gimmicks. Most of the Web 2.0 companies are creating the desire for their own product. Facebook has turned family photo sharing into a circus show. It doesn't serve any need other than to push for more neomaniacal computerisation.

Some people will want to share family photos over the web. We can do that with very simple software. It can even be loaded on demand and sandboxed without any configuration, without a hugely complex platform. _Simple_ virtual machines capable of this kind of work aren't hard to construct. But people keep chasing idiotic benchmarks instead. And if it's too slow then change the hardware to make sandboxing easier.


How do you handle DMA and other I/O in your system?


Each hardware device is accessed by a single process at a time. Any multiplexing that needs to be fast is done in hardware. Other multiplexing can be done in user space. For example, an application can request a dedicated channel to some blocks on the hard disk (adjudicated by the OS) or just use another hard disk. Remember, we're building simple applications here, so things don't need to constantly write to the disk for various tasks. You would only be dealing with the disk if your actual data was too big to fit into memory. Cases like that need careful design, not delegation off to some other programmer who doesn't know your problem.

If you're running a server application you split it into some reasonable number of tasks and allocate cores to them as needed. Nothing else runs on the system at all. If you need more cores either upgrade your hardware or you build another identical machine and talk to it over the network. If you look at the way Moore designs, he's crafting a complete artifact to solve a specific problem, not just throwing generic parts together.



