I spent 5 years writing my own operating system (github.com/halfer53)
939 points by halfer53 on June 26, 2021 | 144 comments


In an era when systems research is dying [1], the only saving graces are Nix/Guix for improving OS management and perhaps eBPF for improving OS performance. The Winix OS is a breath of fresh air in terms of potential OS directions.

Like they have always said, timing is everything, and this may be a project like Apple's Newton: full of good ideas but born several years too early.

Winix targets a RISC architecture, and with RISC-V taking off exponentially at this very moment, a RISC-oriented OS could give the platform a real edge, much as Linux benefited from its close fit to x86 when x86-32 and x86-64 took off.

This year, when Linus was asked about Linux's best achievement compared to other OSes, he pointed to an innovative Linux-based lock-free filesystem [2]. Winix ships with an innovative POSIX-compatible in-memory filesystem (IMFS) by default. Imagine an OS whose IMFS is also natively compatible with the increasingly popular Arrow and TileDB in-memory formats. With Terabyte (TB) RAM computers becoming the norm in the near future, this could easily be the fastest OS around, with a state-of-the-art filesystem. Fuchsia is another new OS on the block, but by focusing on mobile rather than desktop it will probably be optimized for the former, unlike Winix.

[1]https://tianyin.github.io/misc/irrelevant.pdf

[2]https://www.tag1consulting.com/blog/interview-linus-torvalds...


Systems research is more relevant than ever. You should take a look over the papers published at conferences such as SOSP, OSDI, EuroSys and HotOS. You'll see quite a lot of papers from industry giants. Lots of research OSs are also gaining traction [1][2][3]. I'd say that we're moving towards specialization right now, with a specific OS architecture for your use case. OS research is far more than Linux right now.

[1] http://www.barrelfish.org/ [2] https://unikraft.org/ [3] https://mars-research.github.io/redleaf


Lol, I like your description of the project, sounds much more promising than mine


Woz, meet Steve.


Systems software research was dying 21 years ago when Rob gave that talk. (A better URL for the slides is http://doc.cat-v.org/bell_labs/utah2000/utah2000.html; the version you linked is pretty incomplete.)

Since then systems software research has been quite vital, in significant part due to Rob himself; in no particular order, relevant developments in systems software research since Rob's paper include Golang, MapReduce, HTML5 (including Web Workers, <canvas>, and WebSockets), Sawzall, Hadoop, Rust, wasm, Fuchsia (as you point out), V8, protobufs, Thrift, Docker, Xen, AWS, Azure, ZFS, btrfs, BitTorrent, Kafka, nearly all of Google's "warehouse-scale computing" stuff, memcached, OpenID, QEMU†, kvm, PyPy, SPARK, Julia and almost all the automatic differentiation stuff, Clojure, iOS, Swift, Factor, AMQP, RabbitMQ, ZeroMQ, Jupyter, the mainstream use of AJAX and Comet, QUIC and HTTP/2, Valgrind, LLVM, Kotlin, Bitcoin, Ethereum, OTR and Signal, Android, Dalvik, reproducible builds, seL4, Zig, Pony, CapnProto, Sandstorm.io, Fastly's fast-purging CDN, the Varnish cache it's based on, the fast SSDs that enabled it, TileDB as you mention (and Parquet), time-series databases like InfluxDB in general, Python 3, Racket, major new developments in ECMAScript, JSON, Z3, entity-component systems, general-purpose GPU computing, Intel ME (for better or worse, mostly worse, it's certainly relevant systems software research), Tor, Chrome, Firefox, Node.js, npm, LevelDB, record-replay for time-travel debugging, UBsan and Asan, stack canaries, epoll, io_uring, Qubes, Vulkan, CUDA, Wayland, Haskell's STM, XMPP, DTrace, the whole megillah around the shift to manycore, WPA for Wi-Fi, most of the work in making secure protocols resistant to timing and compression attacks, Git, SyncThing, ownCloud, rsync, zsync, GFS, BigTable, Cassandra, MQTT, and on and on and on. Oh yeah, and also eBPF.

Putting your filesystem on a ramdisk is a good idea but it's hardly innovative.

______

https://web.archive.org/web/20030601085257/http://fabrice.be...


> Putting your filesystem on a ramdisk is a good idea but it's hardly innovative.

I am way late to the party, but here goes: in 2007 I started working for a company in Norway that had developed their own database, from the hardware up. It included parallel processing (a few thousand processors when I joined) and everything hosted in memory. It was fast. Boot-up took a while though, since it had to load everything from disk to memory.

When I joined they were on the third iteration already, the first version went live back in 1992.

We have since retired the concept, as off-the-shelf hardware caught up in speed at lower cost; it also didn't scale very well.


It's even older than 02007! You could put your filesystem on a ramdisk in CP/M in 01979 if you bought a third-party utility, though this needed bank-switched RAM to be really useful in CP/M: https://en.wikipedia.org/wiki/Silicon_Disk_System

Dataram used to sell RAMdisks ("BULK CORE") for PDP-11s starting in 01976, but those were hardware devices, so maybe we shouldn't count them. See First also sold them. I'm not aware of earlier ramdisks, but at and prior to that time RAM would have been magnetic core, which changes the value proposition a bit: a core ramdisk is more like an SSD than anything else, because RAM contents weren't lost if power was lost.

And ramdisks were a standard feature of Atari DOS 2.5 by 01985: https://en.wikipedia.org/wiki/Atari_DOS

It was also common to use ramdisks in MS-DOS around that time. Bankswitching was part of the reason: starting in 01985 you could easily put several megs of bank-switched RAM on a LIM EMS board in even an original IBM PC, but most application programs couldn't use more than a meg at all (or more than 640K without difficulty), so a ramdisk was an easy way to get some use out of it.

And it was common to netboot SunOS in the 01980s; this usually involved making /tmp a ramdisk and NFS-mounting /, /usr, maybe /usr/share, and /var.

And the PalmPilot only had DRAM for its filesystem from 01996 until they added flash support in PalmOS 5.4 in 02004.

And most LiveCD systems copy the CD into RAM at boot time these days. And Linux usually boots from an "initramfs". I'm typing this on a Linux computer with "tmpfs" ramdisks (named after the SunOS facility, but not used for /tmp itself) mounted on /run, /dev/shm, /run/lock, /sys/fs/cgroup, and a directory in /run/user. Amazingly, there's a discussion about whether Fedora should default to swapping to a ramdisk now: https://rwmj.wordpress.com/2020/06/15/compressed-ram-disks/


The list goes on.

Genode, unikernels like MirageOS, TempleOS, Singularity OS / Sing#, compiler services like Roslyn and Kotlin, MILEPOST GCC, C++ 11+, Tensorflow / TPUs, GPT-3, all of the machine learning in compilers [1] and so much more. I truly think Deep Learning Compilers will be huge.

[1] https://github.com/zwang4/awesome-machine-learning-in-compil...


Oh, I didn't think to mention any of those, resulting in many significant omissions from my list. Some of them are things I didn't even know about! A few are kind of on the boundary: GPT-3 arguably isn't "systems software" (although we'll see, I guess) and TPUs are hardware. But certainly TPUs have big implications for systems software design if you're training ANNs.

The time since Pike's paper has been a golden age of systems software research, perhaps even more significant than the 01959-01980 period.


Do you mean OS research or systems research more broadly? That paper was written 20 years ago... I don't think systems research is dying at all. Since that was written there have been huge advances in areas such as cloud etc. I would go so far as to say the paper's claims have been proven to be nonsense (as is so often the case for 'end of history' type claims).


Systems research has changed drastically in the last 20 years, but it's still a pretty strong community. Interest in OS-specific research, on the other hand, is mostly dead, and general interest in OSes likewise (people generally have what they want in an OS by now).

PL research on the other hand, is in trouble, but that’s a different topic.


Why would you say that PL research is in trouble? Conferences like PLDI and newer ones like LANGSEC are going strong, and there are probably more out there.


What do you see going wrong in PL research?


> With Terabyte (TB) RAM computers becoming the norm in the near future

I don't know in which world you live, but in mine people still mostly buy laptops with 4, sometimes 8 GB of RAM


I think he was talking about the clusters of machines used in big data. Both in processing (Spark) and storage (SAP HANA). These use cases can need TBs of ram.


> These use cases can need TBs of ram.

These use cases can need TBs of RAM, but they don't get it in a single node but as a distributed system with tens of GB of RAM per node.


Um, er, we have a few tens of 1.5 TB memory nodes lurking around. There are plenty of use cases for that out there.


I work in a cloud environment where SAP HANA databases are provisioned. We offer VMs with multiple TiB of RAM for the larger HANAs.


> I don't know in which world you live, but in mine people still mostly buy laptops with 4, sometimes 8 GB of RAM

Not only that, but those terabytes actually belong to distributed systems running at most a few hundred megabytes of RAM in each node.


You can get a node with ~26.4TB memory on AWS: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/memory-o...


I suppose what one considers the near future is relative.


He is probably not talking about Macbooks


With multiple accounts and VMs running Linux, which in turn run a JVM and a heavy IDE, my Mac Mini is at 64GB, and it needs all of it.


I think Fuchsia is aimed at more than just mobile.

Honestly I have high hopes for it, mostly because I like the colour ^^.

"The GitHub project suggests Fuchsia can run on many platforms, from embedded systems to smartphones, tablets, and personal computers." ("Google Fuchsia - Wikipedia" https://en.m.wikipedia.org/wiki/Google_Fuchsia)

Edit: fixed spelling of the name of the colour.


Ignorant person here, why is systems (assuming specifically OS) research dying? I just assume computer science related research in general is stable. I would think OS research would be quite stable as well but maybe I’m naive.


I am seeing a tendency for systems research to be very much underappreciated (not only OS research). To me personally there is a kind of crisis in terms of methodology. People in my area (ubicomp, not the classical USENIX crowd) got fed up with too much "yet another" and moving sideways, with no research methodology to move things forward. The most frustrating thing in my area is that we are still teaching PARC's achievements on how to connect to the right printer or include people in an overlay network based on context. In reality there wasn't much advancement over boring client-server architectures, with operating systems just focusing on getting those faster. Out of frustration the community largely moved to human interaction and data analytics. In the end, IMHO, it might have been the missing business models, which in turn influence the available funding, that caused much of the missing love in the area. You need to give researchers incentives.


Ahh, that’s a fair critique.


Asian companies are still churning out new OSes all the time.

Also, big area of research right now is how to sandbox applications properly in this era of app stores.


Nobody knows what you mean when you downvote an on-topic post. You have to say something, else you only cause confusion.

Back on topic:

Systems theory seems to basically be object-oriented, meaning it sometimes has problems dealing with self-reference. This should not be a problem for most practical applications but for things like ecological dynamics the complexity is not user friendly.

Cybernetics seems to be a function-oriented approach to the same problem class. It deals with feedback loops as first-order citizens, which in complex ecosystems like desktop operating systems means it has less of a conceptual impedance mismatch.

The former is the popular discipline while the latter fell out of favor decades ago. Perhaps it is time to dust it off.


I didn’t downvote? I upvoted OP and the parent comment. It was all very insightful.


Excuse me, it wasn't directed at you. People seem to use the downvote function to say they disagree or don't understand, but that leaves the poster completely in the dark about what they think the problem is. I find it counterproductive and frustrating.


Absurd. Perplexing. Truly these people are mad.


Not an expert on OS development, but is there anything really new? (Apart from eBPF.)

Almost all the changelog entries in Linux are drivers and filesystems, and anything new to Linux seems to have come from another OS like Solaris. You have Minix in every Intel chip, or seL4 for specific uses.

With Windows 11, I was hoping Microsoft would share something about a new kernel update; or take Apple's macOS, which seems to be moving in the direction of pushing everything to userspace.

Is there anything really new that didn't get any coverage?


I think that stuff is fairly “done” now.

The real point of computing is the interaction and capabilities it gives the end user, which is where the commercial focus is.


Nowadays everything is layers on top of layers. Filesystems went from regular partitions and RAID to LVM and then to ZFS/Btrfs; OS virtualization went from KVM/QEMU to containers.

The general function of an operating system now is to provide a means for running arbitrary systems on top of the metal. It's astonishing what one can do with cloud-init and a base Linux install these days. I can mock up all the layers above and quickly restart without installing anything on the base system.

Granted, there are always special cases, but I see that "stable core" over which anything can run as the target. Just look at the M1 chip, designed to run virtualized architectures efficiently.

So, to reiterate my point, Operating systems should target virtualization/abstraction features as efficiently as possible so it doesn’t matter which OS or workload folks run, or where it’s running (cloud, IoT, desktop, laptop, server, etc).

An additional area requiring research is security: process obfuscation of some kind to prevent tampering, etc. There is still work to be done there.


There were some new kernel-level things in the last decade. Nothing earth-shattering, but interrupt coalescing was pretty nice; I remember seeing a few news items like this over the years (some even coming from MS first :)


Are you talking about kernels or OSes? There is a huge difference in terms of opportunities for further research.


we're in a renaissance of systems research. Rob was right at the time but wrong now.


Redox?


I'm curious about the hardware it runs on, and didn't immediately find answers on Google. Could you talk about what hardware is on a usual system and what device drivers need to do for screen, serial port, disk drives, etc.? Does the CPU use floating point?

The reason I ask: for the past six months I've been dipping my toes in OS development for x86 with BIOS, SVGA and ATA interfaces: https://github.com/akkartik/mu And I'm way out of my depth. I have this suspicion that x86 might be one of the more difficult environments for device drivers, so I'm curious to hear what other hardware looks like.


I agree, x86 is too complex with its legacy hardware and obscure instructions and environments. It might not be a good starting place for developing hobby projects.

The OS runs on customised RISC hardware similar to MIPS, built for educational purposes. You can read more here: https://dl.acm.org/doi/pdf/10.1145/1275462.1275470 The simulator: https://github.com/halfer53/rexsimulator

The BIOS has already been written for the serial ports, etc., so it is quite easy for me to interact with the devices from the OS.


This is really cool, to the extent that I'm surprised I've never heard of it. Are universities still using it in coursework?


Yeah, they are still using it to teach the computer systems and operating systems courses


It's also by far the best documented and most consistent hardware out there, especially when it comes to configurations. Yes, it's grotty -- but at least you can figure out where the gunk is.


How hard would it be to run it on a Raspberry Pi? (Say the Raspberry Pi 400, the one inside a keyboard case.)



I shared a co-working space with a guy who had written a distributed OS for rendering; it was quite neat - but about 1000 man years from being usable.

I felt sad for him - he'd poured his heart into it for more than 10 years (he said) and had no idea of how to turn it into anything other than a hobby project. He hadn't open sourced it, his aspiration was that "Facebook will buy it". I haven't seen him since the pandemic kicked in.


> I felt sad for him - he'd poured his heart into it for more than 10 years (he said) and had no idea of how to turn it into anything other than a hobby project.

There is nothing wrong with that. It's fun to do things just for fun.

> He hadn't open sourced it, his aspiration was that "Facebook will buy it".

It's hard to believe that's anyone's real motivation for such a project.


Often when you do things just for fun, you’re driven by the possibilities of what could be. That’s part of the fun. This is especially true of software, and there’s nothing wrong with it.

What there is something wrong with is criticizing others’ motivations because they seem outlandish to you. It’s fine and even healthy to be so ambitious. Not all of us want to sit around playing with legos assuming that’s the best a reasonable life has to offer.


Yeah, agreed that it's fun to do things for fun, but my impression was that he had given up a great deal to work on the system and unfortunately genuinely believed that he would somehow be able to monetize his efforts and that his hard work and insight would "pay off".

I hope it does, I don't think it will.


Did you ask him why this needed a new OS, as opposed to e.g. some daemons running on top of an existing OS?


Yes, and the answer was "to securely control the resources" which I think makes some sense; but he is very short on details about his approach. I asked him for a white paper - no white paper.


Hey, another Kiwi!

The about section says "A UNIX-style Operating System for the Waikato RISC Architecture Microprocessor (WRAMP)". Did you study at Waikato?

I'm curious what you think about tech in general in NZ as well. Compared to the US it feels like there's practically no community :/.


OP here. Yeah, I did study at Waikato.

NZ is a very small market compared to the rest of the world, so there are a lot fewer opportunities here in terms of the number of startups and tech firms. Salaries are also lower compared to other developed countries; any software engineer in NZ can easily get a 30-50% raise by going to Australia. A lot of talent has left NZ because of this.

There are still some exciting companies; for instance, Serko, whom I currently work for, is expanding into the international market to seize post-pandemic opportunities. But overall it's a bit stagnant compared to other OECD countries.


Given that many want to work remotely, the good things we hear about New Zealand, how difficult it is to get in, and that billionaires are reserving space there to prepare for climate-change-related risks elsewhere, I was under the impression that people would want to get to NZ and stay put. This is the sentiment among some of the people I talk to.


It's a beautiful country but because of the small population there's a lack of opportunities, especially compared to the US or even Australia. The time zone makes remote work difficult too.


If I had already established a career and wanted to settle down I'd probably like NZ more, but I feel like I'm missing out by being in NZ compared to somewhere more populous like the US or Europe.

How old are the people you're talking to? For reference, I'm 23 and just started my career.


True, agree NZ is a great place to work. In fact, I quite enjoy my lockdown-free life in NZ.


This was my thought as well.


Wow, and with Auckland house prices so high, why not move to Australia, or better yet the USA, for a better salary and lifestyle?


Another Kiwi here too. First I've ever heard of the "Waikato RISC Architecture" but am curious to learn more.

I'm not the author but my view of the NZ tech sector is that it's pretty much stagnant. I ended up taking my skills elsewhere as did most of the people I remain in contact with from my UC days. It seems successive governments have put all their eggs into the agricultural export market. That's a real shame.


If you want to learn more there's a website [1] for the WRAMP architecture which goes into a fair bit of detail. The manual specifically is what students are (or at least were when I was at Waikato) given as a resource to understand it.

The toolchain and simulator are also available on GitHub [2], though it looks like the author has their own fork from an older version. A few years ago the project was ported from a custom board to an off-the-shelf FPGA dev board, and the simulator was renamed from "rexsim"/"RexSimulator" to "wsim", hence the difference in name.

[1] https://wramp.wand.nz/

[2] https://github.com/wandwramp


Wow, I didn't know WAND had a github page. I'll get my fork merged with the main repo in that case.

The custom fork basically adds an extra button, "Quick Load": the srec file can be quite large to transfer over the serial port for loading, so the button reads the srec file directly into memory instead.


Also another kiwi: grew up in a coastal city in the North Island and moved to Auckland for uni, dropped out and started working in the industry (best decision) before eventually moving to London.

Been here 3.5 years now (although I still don't know if 2020 really counts) and the tech sector is great here. Maybe not as cutting edge/well funded/as high salaries as America, but I feel like the work/life balance is better in Europe.

NZ is great but I have a feeling I'll head back there once I'm a lot older; it's beautiful but there are very few opportunities when you're young.


That's basically what I thought, but it sucks to hear that.

Whereabouts did you go from NZ? I'm working remotely right now and want to leave NZ after travel opens up. Compared to other places in the world NZ seems terrible for a young, single guy in tech.


I went to Tokyo. Not where people typically go but it's a fun place to live. Salaries are about on par with Australia and Western Europe provided you have some experience.


I grew up in Hamilton, Waikato, until high school when my family moved to Australia.

My Dad taught math at the teachers college and my Mother worked at the University as a faculty secretary, so I was in and amongst University life.

I sometimes wonder what it would have been like if I had stayed and gone to Uni there.


Congratulations. I did the same thing in the early 90's and I still think it is the project that I learned by far the most from. I never released it because I think its time has passed but it was a fun exercise. Debugging the early stages of such development is super hard, especially if you are doing it on bare metal instead of on a VM.

Do you have a plan behind this or was it just to scratch an itch?


Thanks. The OS is roughly 90% compliant with POSIX.1, but there's still a lot of work to make it fully compliant; that would probably be my next goal. This is a hobby project, so I will probably spend 1-4 hours every other week working on it. I am currently optimising the memory usage of the file system, which is almost complete in a separate branch.


Neat. POSIX compliance is a tough nut to crack: some features that look pretty innocent on the outside can dictate huge amounts of under-the-hood architectural and conceptual work. My project revolved around a 32-bit-capable clone of QnX, which at the time had only been released in a 16-bit version. The company I did regular consulting for had a bunch of requirements that stock QnX could not fulfill, so in a fit of madness I decided to roll my own. By the time it was finished, QnX had finally released their own 32-bit OS.

Keep at it, I'm really curious how what you are making will end up.


That's funny because QNX now only supports 64-bit systems and hasn't done 16-bit systems for decades. You must be (relatively) ancient. Disclaimer: I currently work for the company that owns the QNX IP and would also qualify as (relatively) ancient.


Yes, I'm ancient :) 56 on this end. This work was done in the early 90's.


Why do you want to be compliant with POSIX?

Would it affect the speed of execution or development not to be compliant?


Request:

Make a blog post on the experience. What does the heatmap of changes look like over 5 years? I can't imagine you burnt the candle at both ends for that whole time.

What is the final line count?

What blockers did you not anticipate that killed a lot of time? Hardest problem?

I think you've got a treasure of interesting retrospectives to share :)


Yeah, good questions that I'd also love to see answers to!

My favorites for retro-like discussions (if you think it will be beneficial):

1. What did you worry about the most in the beginning that later turned out to be silly/inconsequential?

2. What did you intentionally put off in the beginning thinking it wasn't a big deal that you later regretted not spending more time on?


Thanks, that's a good point. I'll definitely do that


Email hn@ycombinator.com when it's up, and we can put it in the second-chance pool (https://news.ycombinator.com/pool, explained at https://news.ycombinator.com/item?id=26998308).


Thanks will do


Here are some quick insights until OP does his blog post.

For a high-level breakdown of the files changed:

https://public-001.gitsense.com/insights/github/repos?p=focu...

681 files were changed, with the most being C/C Header files.

The file age is interesting, as it shows 175 files were changed over a 2-year period. And you have about 279 files that were probably just imported and never touched again, or were only infrequently changed. Note that renaming files/directories can skew this number.

And if you are looking for a heatmap, you can look at

https://public-001.gitsense.com/insights/github/repos?p=comm...

The superscript value <number>v beside the file/directory indicates how many versions there are; the higher the version count, the greater the focus. In this case, the kernel, include and fs directories saw most of the activity.

In the future, I will gray out files that no longer exist on the latest tree, but for now you can look at the year beside the files/directories to get an idea of when they were last touched.

And here are the top 20 most frequently changed files

           path          | revs
   -----------------------+------
    kernel/system.c       |  151
    kernel/proc.c         |  128
    include/sys/syscall.h |  124
    kernel/exception.c    |  120
    include/kernel/proc.h |   95
    Makefile              |   90
    kernel/main.c         |   80
    user/shell.c          |   69
    fs/inode.c            |   67
    winix/sys_stdio.c     |   60
    README.md             |   53
    driver/tty.c          |   52
    kernel/sched.c        |   49
    kernel/clock.c        |   46
    winix/wini_ipc.c      |   45
    fs/fs.h               |   44
    winix/mm.c            |   44
    fs/makefs.c           |   42
    winix/sigsend.c       |   42
    init/init.c           |   41



Disclaimer: I'm the creator of the tool in the links above, and there is a bug where the menu will say a 45-day window when it is actually a 5-year window.


Thanks for sharing this, that's very insightful.


No problem. If you are going to do a blog post and need stats, you can contact me through my profile info.


Agreed. Not many devs tackle Mount Everest, so we’re interested in your journey.


Is building an OS considered Mount Everest in terms of software development?


No it's more like K2. Everest has become mainstream...


This looks like one of the most interesting and fulfilling games one can play


I just generally have tremendous respect for someone who even attempts to build an OS from scratch. You kept at it and didn't give up. Great work.


As someone who wants to get into systems programming, this is very inspiring.


Incredible!

What was the hardest part?

What did you learn that you weren’t expecting?


The hardest part was debugging concurrency and weird scheduling bugs. They're hard to debug because I can't reproduce them, which is quite frustrating. Over time, I found that making educated guesses, or running the code in my head, is a lot more efficient than stepping through every line of the code.


As someone who has written and debugged a lot of concurrent code, here's another piece of advice: log everything. Log as much as you can in the part where you think the bug is. Log every line that runs if you have to. You'll then skim through the log file looking for any unexpected patterns.

This approach works better than using a debugger, even on a single-core system, because these kinds of bugs tend to be hard to reproduce and take many iterations. You don't want it to hit a breakpoint a zillion times before it finally shows itself.

And another one, tangential to what you said. Read your code line by line and ask yourself "what would break if a context switch happens right here" for each line.


That's a very smart way of debugging concurrency. I did a lot of logging as well when I was debugging concurrency. Over time, once you get familiar with the project, you start to develop instincts for how everything fits together. That's when reading code line by line and making educated guesses becomes a viable way of debugging concurrency.

But for complex projects, reading code or relying on instinct alone may not work, as brainpower runs out of capacity. That's why logging helps a lot.


You are right. Great work!

One somewhat related theoretical observation about concurrency is in some article by Dijkstra (I don't remember the reference right now): he says that debugging using traces (essentially printf) does not work for concurrency, since it projects multidimensional data (data present at the same time in multiple processes), unnecessarily linearized, onto a single dimension (a sequence of printfs), and then tries to make sense of what is happening. It may not work even if you print timestamps.

His view was to promote theoretical proofs of correctness of concurrent code, rather than debugging, but to me at least, this is much more difficult.


Very true, and not only do print statements inherently serialize threads, they also change the timing so significantly that your bug probably disappears anyway.


I would expect blazing fast in-memory logging with thread-id and time stamps, so timings aren't affected (much).


Back on the N64, I updated the bit of code that swapped threads to write, to a ring buffer, the outgoing/incoming PCs, thread IDs and clock. Found tons of unexpected issues. In another thread you can print that or save it to disk or whatever. Or just wait till it crashes and read memory for it. Found the last crash bug with it. Meanwhile, a colleague took it, and drew color coded bars on the screen so we could see exactly what was taking the time. Those were the days. =)


If you don't mind revealing it, what were you working on on the N64?


Ironically, trying to make a name for myself.


My tip: if you can pinpoint the place where the bug occurs, trigger a SIGSEGV there and run the entire thing under Valgrind. It shows you a lot of interesting data.


Related: I am in academia, and these are literally the questions that I ask prospective PhD students: what was one hard thing you understood in your area of interest? What was unexpected? And what was the final enlightenment when things finally clicked into place?


That's right!

After spending 5 years to write the OS, if you can spare 1-2 additional days to write down your experience, it'll be extra useful.


Yeah, I'll definitely share that


...wow. Congratulations!


5 years is a long time. If I recall correctly, Linus took one year to churn out his first usable Unix clone. I wonder why this project took so much more time.

PS: This is not a criticism but genuine curiosity about why basic OS projects are so expensive given how much is already out there. Also, Linus spent the majority of his hours each day for a year on it, while this might be just a side gig for you.


Linus is a genius, I don't think I could ever reach his level

Yeah, it was a side gig for me. I did spend most of my days working on this for a stretch of 2 months last year.

In fact, I paused development of this project for 1-2 years.

But most of the time, it was just a 1-2 hour per week side project on average.


I think it's because people aren't taught enough low-level coding in college. It used to be the first thing you had to learn in Linus's time. Now you use it for one class and use Python/Go/Java the rest of the time.

You pretty much have to seek out the knowledge about operating systems yourself, in addition to taking on the challenge of learning something like C/C++ or Rust. You then have to break all your ways of thinking about the memory model after being given a GC language.

Understanding hardware is needed to debug specific CPU/IO issues that could be caused by assumptions or lack of knowledge.

Next comes learning the different algorithms needed by an OS in a new language and then implementing them. This takes a lot of time to get down correctly.

It's also a really small and niche community, which means you don't have a lot of support from peers who understand the problems you encounter, and the community isn't an easy one to feel accepted into. It can be hard to get support because you need to be very detailed in your questions or you will be ignored.


Related: Waikato RISC Architecture Microprocessor (WRAMP)

https://wramp.wand.nz/

https://github.com/wandwramp/



Can it run tcc and self-host?


Author here, this project is just for fun, nothing more than that.

The OS is designed for MIPS architecture at this stage, I don't think tcc supports MIPS at the moment.


There have definitely been forks of tcc that have added MIPS support. Depending on which one you end up with (debian is pretty far diverged from the mob branch), I'm pretty sure it should be available.


[flagged]


This is not relevant, original, or funny.


I think it’s pretty funny


It's all three


It's weird anyways and that makes life interesting.


Boo


Join the linux clan. It's the one true way.


lol :)


Nice job. It's not easy to stay dedicated and get things done.


Neat. Well done.


That's great! Thanks for sharing it.

I'll probably never have a use for it, myself, but I am always in support of passion projects.

Good luck!


I'm sorry, you're lacking the requisite expertise for this entry level developer position.

Well, I wrote my own operating system.

Sorry.


Depends on the audience

I did struggle to explain the significance of this project when I was applying for graduate roles, especially to HR people who only look for keywords in CVs.

But there are people who appreciate my work.

I got my internship at AWS because my interviewer, who had a technical background, was very impressed with my work.

Sometimes you just have to find the right audience.


Kudos to you!

Yes, good hiring managers with sharp vision will always want to hire new grads who are able to write operating systems, compilers or simulators from scratch. Having those essential CS skills is much better than having lots of buzzwords in a resume.


yep, hiring is broken.


Is there a possibility of integrating GCC compiler and adding support for ELF executables?

If yes, then how?


It's a tough ask, but I would like to understand what the steps would be.


It's certainly plausible

It's currently using wcc to compile the kernel https://github.com/wandwramp/wcc. We could swap it with gcc for sure, but it would require a bit of work, especially around the output format and backend.

Winix assumes the output file is in SREC format https://en.wikipedia.org/wiki/SREC_(file_format), so we would need to tweak gcc to support this output format and add backend support for the WRAMP architecture.

ELF is quite complicated; if I had to do this, I'd probably just copy some code from Linux.


If you're not supporting DSOs (shared libraries) then ELF is pretty straightforward. In fact, from the kernel's point of view it's still fairly straightforward even then, since all you need to do is either load the (non-DSO) program and go, or else load the (non-DSO) program named in the PT_INTERP entry and go, and leave the heavy lifting to the interpreter. The kernel itself never needs to know about DSOs or GOTs and PLTs or DWARF or any of the other fancy magic that ELF brings to the table. Just how to find and load the LOAD segments and where the entry address is, or where the PT_INTERP string is.

It's really just that easy.


Thanks for the info, good to know that it's easy to implement from the kernel's perspective.


Has anyone submitted the 'does it run Crysis' issue yet?


Awesome, but how does it even work with so few tests


How did you get started with this project?


"Visual" memory?


Oops, typo :p


Awesome work! Very inspiring! :D


Oh my. Minix1. That predates me and my gray hairs.

I at least got Minix 2 with the printed source code in AST's book. Minix 2 was so annoying on real PC hardware that I had to stick a keyboard repeat rate and delay patch whenever I used it. It was all straightforward and the lab assignments on it were easy because it was built for tinkering rather than efficiency.

My uni replaced it with FreeBSD as the teaching operating system of choice. They also got rid of Oracle DBMS# for Postgres.

# Many, many moons ago I nearly got Aaron Swartz'ed. This is because I tried browsing "proprietary" docs using a Java-based internet proxy to bypass IP ACLs from my "off-campus" location. It was clearly located in North Korea. Then, the browsing was so slow (like Tor but even worse), I figured: Why not mirror it locally instead? And, what kind of documentation "toilet-paper" police are going to sit there watching interactively or SNORT monitor what happens to "precious" corporate documentation they forced me to use?


[flagged]


And is there a language like HolyC?


[flagged]


Thanks mate :D


How do you feel about the agents of the CIA?


I'm mildly annoyed by line 30 in include/posix_include/stdio.h


It's only an errant tab ffs.


:P


Ok, I'll ask - why?

You have made it POSIX-ish, but wouldn't one goal be to create something new? I imagine that you just wanted to use a bunch of POSIX tools, but won't that limit you in some way?


Not the OP but, ummm, maybe for no other reason than that he wanted to. Or he wanted to learn something. I see this question a lot when someone shares work that they are proud of, and I want to scream, "What difference does it make?"

In the very first sentence of the README it states "hobbyist, educational". That pretty much sums up the OP's motivation.

Why can't learning and exploration be their own motivation?



