
I'm a hardcore *nix guy, but boy do I love me some C#. Up until now it's been the best language I have worked with but the worst platform, due to its lack of 'nice things' that we just expect from languages/ecosystems these days.

Where 'nice things' is defined as being open-source, having an open-source ecosystem of developer tools, etc.

This isn't so much the beginning (as good stuff has been happening for a couple of years now) but it's a huge step.

Thank you, Microsoft.



I actually have this conversation frequently with my developers (most of them are .NET), so I'm really happy to hear someone else say this as well. I've done about an even split in my career between C# and Java and feel as competent as the next guy bouncing between the two languages and delivering projects for clients, but I like writing C# the most, by far. The language is more modern; things like async/await, LINQ, and Entity Framework make developing Java web apps drudgery in comparison. Spring does a lot to help (Spring Boot is great), but there's a lot of rolling your own and helping less senior developers figure out how to even START a Java web app project.

But when it comes to automating my build process, doing continuous testing and deployment, setting up environments in containers, or just where I'd rather spend most of my time operating, I'll take *nix and Java all day over IIS/Windows Server. No thank you; the only Windows I have in my house is a VM for using Visual Studio and writing C# web apps on the occasions I need to for personal stuff.

I will be truly happy when I can write C# code and deploy to something like Tomcat. I will never again go back to Java when that day comes.


Similar situation. I deal with Unix/Windows approximately 50/50 from a devops perspective. My word, I'll also take anything Unix over Windows at the moment, regardless of the tech. To be fair, in the short term Windows is maximally productive, but if you hit an edge case or a rough spot you're SOL, especially if it involves Microsoft support or scripting. Rough edges cost me a lot of money and willpower.

An example: I opened a case when IE9 was in developer preview with partner support. When they rewrote the download manager in IE for this release they changed how download prompting worked and removed a setting from the UI (and left it in the registry). This broke ClickOnce launches entirely for over 2000 users for us. Fast forward nearly 5 years and it's still broken, the case is still open and to get everything to work for the client we have to frig a registry setting on every workstation they roll out. That sucks for us and the client, badly.

This is a typical story for us. When you deal with VSTO, MSI packaging, managing large clusters of Windows Server machines, random bugs that suddenly blow up in your face after working for years, signing code, trying to get repeatable builds out of a CLR solution and automating all of these things, forget it. PowerShell gets you 80% of the way there, but the last 20% is a tar pit of pain and impossibility.

Now, I've been in the Microsoft ecosystem for 20 years, certified and bought into it all. Perhaps I'm bitter and tired, but I can't see past all this experience and have mixed memories of C#. All I see is hung instances of Visual Studio and working out which project file is buggered, and all this is destroying the best language I ever used. I don't want to waste my life on giving them another chance.

There is literally none of the above on Unix platforms from experience. I've encountered only one bug in the last 15 years, a CIFS kernel bug, and RH fixed it within two days. I haven't had any automation friction at all.

I'm worn out and confused to be honest when I read back this post.


> There is literally none of the above on Unix platforms from experience.

Have you worked in commercial UNIXes?

I have some HP-UX, AIX and Solaris war stories.


Yes: SunOS 4, Solaris and HP-UX. Never had any problems, but then again we pretty much kept the hardware alive after vendors had deployed everything. Also Oracle on a VMS cluster, which was a joy.

To be honest, Windows (NT series) operating systems were pretty trouble-free in the NT4 era. The masses of fragmentation and numerous paradigm shifts were what broke it all for me.


THIS +1.

I feel that people need to experience both sides end-to-end, deep in the bowels, to be able to judge Java/C#, because it's more than just the language; it's everything! IDE, tools, libraries, environments, etc.


Thank you for summing it all up so well. C# as a language has so much built into itself. When we work with Java we have to take help from frameworks such as Spring, etc.


I've been using C# since the beta and I love it, but I agree that IIS is a clusterfuck, working with MSBuild is no picnic, and supporting Windows services is macabre at best. There are efforts to ease the pain with projects such as Topshelf and Katana (OWIN): http://msdn.microsoft.com/en-us/magazine/dn745865.aspx

So MS is aware that this process should be more lightweight for many situations, and appears to be heading in the direction of node.js-style hosting for times when you don't need all the IIS features.


I'm not really hardcore anything but have been using Windows as main OS for years mainly because of the existence of VS. And boy it would be awesome if the stuff I write in C# now would just run on *nix :]


Microsoft has been playing catch-up on developing those "nice things", like trying to build an open-source community: NuGet, open-sourcing the ASP.NET and EF stacks, etc. A long way to go, but they've recognized their big failing and are moving on it.

MS makes some great technology and some terrible tech... but the problem is the terrible tech gets the same blessing as the great tech. An open-source community is the best way to develop best practices and properly replace their missteps.


I'm not so sure I believe that they're trying to do a "nice thing" out of the goodness of their hearts. They've held pretty tightly to .NET because in order to deploy anything developed in .NET you needed to buy Windows licenses. Recently that hasn't played out so well. Software developers look to minimize costs and overhead that doesn't contribute to their mission (e.g. licenses and license compliance). There's just zero reason to pay for operating systems in 2014 when there are such capable free alternatives.

So if people won't pay for operating systems what will they pay for? Stuff that still costs them money, i.e. hardware infrastructure and attendant maintenance and administration. Hello cloud. Development shops are happily sending Amazon millions of dollars a month so they don't have to buy servers, set up server rooms, hire system administrators, and worry about things like air conditioning, big UPS systems and emergency power generators, etc.

Alongside Amazon, Microsoft is doing pretty well with Azure. And you can run free operating systems on Azure, and .NET is already well supported on Azure. So what will sell more Azure? Make .NET run on the free operating systems. Now all the developers who like the free OSs on their personal dev systems are suddenly able to develop stuff they can deploy on Azure. What about all the devs that like Macs? No problem, make .NET run on Mac OS X as well.

I think this is all about making Azure services more compelling and more able to compete with Amazon than it is wanting to do "nice things" for the open source community.


Your analysis is absolutely solid, but I'm not sure anyone read this and thought "Oh gee how nice of M$ to give back".

In the end I think this is overwhelmingly a net positive for both MSFT and developers.


You know, if MS had done this even a year ago, the project I'm working on would have the server side in .NET instead of Node. I'm replacing a very old .NET project that built up too much code debt to be maintained with a greenfield project using Flux/React on server and client.

I really like ASP.NET MVC; since v3 it's been a pleasure to work with, and it was the first release of anything ASP that actually mirrored how I had used ASP.NET for a long time before it. MVC meant no more hacking with asmx/ashx to work around the Web Forms event lifecycle issues. Razor is such a great view/template language I still can't believe it hasn't become more popular.

All of that said, there really is a lot of power in being able to use the same JS tooling both server and client. JS has always been a favorite language of mine, and I've been using Node tooling even in VS web projects for 3-4 years now for client resources (pre-build events with Grunt, and now Gulp).

I think this makes a ton of sense in the wake of Azure as a platform, especially if MS can do services in Docker containers (Windows or Linux) and run them on their infrastructure with relative ease. Companies are paying for the tooling (beyond Express, now Community versions), and they pay for ease of deployment/infrastructure. They don't want to pay for a lot of enterprise Windows licensing anymore... MS sees the writing on the wall.

What I think will be really telling is if, by the end of 2016, VS runs on Linux and OS X as well.


Well there was that big tada about MS/Docker, now this...

A lot of good things happening in Redmond right now.

Now if they can somehow fix this upcoming Lync-is-now-Skype-for-Business business, I'll be really impressed.


I agree with you on the Azure vs "nice things". I also think their long-term strategy doesn't rest with the .NET framework.

However, I think they know that if they completely abandoned the .NET framework, there would be a community of very upset devs that have spent their entire careers investing in Microsoft. Instead, they're doing a slow withdrawal and "giving it to the community".

I think that in the long term, they're going to be focusing more on Sharepoint, SQL Server, and Azure; less on .NET. That's just IMHO.


The power of OSS is often overstated when it comes to writing user-friendly software; witness the sorry state of *nix GUIs, where setting up a multi-monitor setup the way you could in Windows 98 is still a few years away. The reality is that best practices often conflict with the need to ship, resulting in wasted time refactoring and redesigning.


When my dad's laptop broke, he picked up my spare Ubuntu laptop and has been using it for the last two years. I know it's just one anecdote, but when it saves me a crap-ton of money and time, it seems very real to me. He could have asked me to buy Windows and Office for him... too bad for Microsoft he hasn't needed either. Game over.


What kind of things does your Dad do on that laptop? It seems to me that Linux works well for people who are very technical and people who are not. The problem is with people who want to do more than browse the web and read email but aren't technical enough to do it on Linux. For example a user who wants to record themselves playing guitar. On a mac, plug in and open Garageband. On Windows download Audacity and plug in. On Linux figure out JACK/drivers/supported IO devices. I haven't used Linux in a few years so I admit my opinion could be out of date but I've definitely noticed that the problem is for that middle group who want to do more with their computer but aren't technical enough to do it on Linux.


He's actually surprised me lately. He figured out how to install a spreadsheet app (LibreOffice, I think) and has been using it for his work spreadsheets (he's a contractor).

To be fair, he probably wouldn't have been able to set it up initially by himself. But I've had really good luck getting family members on Linux lately. I can SSH in and do updates or make changes, and it's way easier than remotely administering Windows.


While I do think this move is a valiant one, I don't believe that it will, in and of itself, build better practices and help Microsoft build better software. Open source is very hard to do right, and if you're a company that doesn't have open source in their DNA it could pose a huge challenge to building positive relationships with your developer community. If you're a big corporation like Microsoft, you have tons of people with their eyes on you at all times. Everyone can read and criticize your code.

Also, being open source means being open and transparent about release cycles and roadmaps, which takes a lot of effort and initiative. I do think Microsoft can do that if they build a solid team of technical community evangelists, but otherwise, they will be swimming against the stream.


Like Apple and Google, right?


What languages have you got experience with that you feel C# is the best language you have worked with?

And what makes it such a good language for you?


Generics, Partial Classes, awesome Reflection API, Lambda syntax, LINQ, C/C++ compatibility, async/await, concurrency primitives, etc.

I have worked with prettier languages, for instance I have picked up Haskell recently, I have also worked with faster languages, C was my forte for many years and I have worked with languages in the GSD category like Ruby and Python.

C# is appealing not because of some feature it has that other languages don't, but rather the long list of things they didn't fuck up. For instance, by building concurrency and evented programming semantics into the core, there aren't multiple competing event loops (unlike, say, Python or Ruby, which have about five each; Java has more than I can count).

It's not so much there is one single thing that makes C# really nice, it's the whole package.
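To make the "whole package" point concrete, here's a minimal sketch of the lambda/LINQ style discussed above (the names and data are made up for illustration):

```csharp
using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        var words = new[] { "async", "await", "linq", "lambda", "generics" };

        // Lambdas plus LINQ method syntax: the query is lazily
        // evaluated until ToList() actually enumerates it.
        var shortWords = words
            .Where(w => w.Length <= 5)
            .OrderBy(w => w)
            .ToList();

        Console.WriteLine(string.Join(", ", shortWords));
        // prints: async, await, linq
    }
}
```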


async/await is HUGE.

It opens an almost frictionless asynchronous pathway from your code all the way down to asynchronous capabilities of the underlying platform. Which was always the problem with leveraging async/overlapped IO: It was too complex and it turned your application logic inside-out because everything had to be performed in callbacks. async/await takes that on directly: You get everything executed in callbacks - only the callbacks are transparently created for you along with the necessary finite state machines so that your code still composes with exceptions, loops, branches working the way you expect - even across those "invisible" callbacks.

For server scalability, as well as responsiveness on small devices (keeping the number of threads low, even at one, while still serving the UI), that is really an advantage.

There are no "fakes" along the way: no extra threads being started just to wait for completion of an otherwise synchronous API. If the platform supports asynchronous IO/network/disk/database, you can leverage that without any of the strange complexities of other languages.
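A minimal sketch of what this looks like in practice (Task.Delay stands in for real async I/O such as a socket or HTTP call; the method names are invented for the example):

```csharp
using System;
using System.Threading.Tasks;

class AsyncSketch
{
    // The compiler rewrites this method into a state machine: the code
    // after the await becomes a continuation (callback), but locals
    // and try/catch still compose the way you expect.
    static async Task<int> FetchLengthAsync(string input)
    {
        await Task.Delay(10);   // stands in for real async I/O
        return input.Length;
    }

    static void Main()
    {
        try
        {
            int n = FetchLengthAsync("hello").GetAwaiter().GetResult();
            Console.WriteLine(n);   // prints: 5
        }
        catch (Exception ex)        // exceptions propagate across the "invisible" callback
        {
            Console.WriteLine(ex.Message);
        }
    }
}
```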

Worth mentioning here is that Windows has a more easily scalable and better-designed completion-oriented async model compared to the readiness-oriented model of Linux, which is mostly limited to sockets anyway.

OS X has GCD, which is also a completion-oriented model. I look forward to seeing how .NET with real global optimizing compilers from MS will do on OS X versus Linux versus Windows (on a Mac, obviously).


FYI Python has had an await equivalent for a looong time through the Twisted networking library and now asyncio. It has the same semantics, but you use the yield keyword instead of await.

Also, as of recently there is a standard-library event loop package, and as far as I know Twisted was the de facto event loop on 2.x for most types of work.


Having used Python during my time at CERN, I would never use it for anything besides scripting tasks. Maybe when PyPy reaches feature parity with mainline Python.


Could you please contrast async/await implementation with Future in Java?

From a usability standpoint I see that they are more or less similar. However, since you state that async/await exploits the asynchronous capabilities of the underlying platform, I'm really curious to know how this differs from the way Future is implemented on a JVM.


.NET's equivalent of Java's Futures is Tasks, and its equivalent of the Fork/Join framework is the TPL (Task Parallel Library).

Async/await allows developers to write code that looks sequential but is rewritten by the compiler into a sequence of coroutines built on top of the TPL, while taking care that error propagation happens correctly.
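The rewrite can be sketched like this: the manual ContinueWith chain and the awaited method below are roughly equivalent, with the compiler generating the continuation in the second case (the method names are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationSketch
{
    static Task<int> ComputeAsync() => Task.FromResult(21);

    // With async/await, the compiler generates the continuation for you.
    static async Task<int> DoubledAsync() => await ComputeAsync() * 2;

    static void Main()
    {
        // Pre-async/await style: chain a continuation onto a TPL Task by hand.
        Task<int> manual = ComputeAsync().ContinueWith(t => t.Result * 2);

        Console.WriteLine(manual.Result);                            // prints: 42
        Console.WriteLine(DoubledAsync().GetAwaiter().GetResult());  // prints: 42
    }
}
```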


FYI emcrazyone, it appears that you have been shadow-banned. I don't have any specific recommendations for your question, but I thought you should know.


Telling people they have been shadow-banned defeats the purpose of it.


In that case, it's a purpose that I don't support.


It does not serve a purpose to me. To whom does it serve a purpose to shadow-ban that user, and what purpose is it?


Tasks are pretty awesome and just about as convenient as Go-routines. Now if we could just get typed channels in .Net >:]


How about BlockingCollection<T>? If I understand it correctly, you can even have select-like semantics with BlockingCollection.TakeFromAny(...)

Edit: Sorry, I messed it up a little bit. BlockingCollection will block your current thread, preventing any other computation on it, and does not allow you to await on it.

But there seem to be alternatives: http://stackoverflow.com/questions/21225361/is-there-anythin... http://blog.stephencleary.com/2012/12/async-producer-consume...
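For what it's worth, here is a minimal producer/consumer sketch of the channel-ish usage being discussed, with the blocking caveat above in mind (the class name is made up):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChannelSketch
{
    static void Main()
    {
        // A bounded BlockingCollection behaves like a buffered channel,
        // except Take/GetConsumingEnumerable block the thread rather than
        // awaiting. TakeFromAny over several collections gives the
        // select-like semantics mentioned above.
        using (var queue = new BlockingCollection<int>(boundedCapacity: 4))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 1; i <= 3; i++) queue.Add(i);
                queue.CompleteAdding();   // like closing a channel
            });

            int sum = 0;
            foreach (int item in queue.GetConsumingEnumerable())
                sum += item;              // pulls until CompleteAdding

            producer.Wait();
            Console.WriteLine(sum);       // prints: 6
        }
    }
}
```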


I don't think it would be very hard at all to create a generic CSP system with C# (or F# for that matter). I've built similar systems using WCF.


Doesn't rx fill that role?


With Rx you can do similar things but the semantics are different. When you use an Rx Stream you will get the data pushed from a different thread. With Go-like channels you pull the data from the other thread (or the channel).


You know what I also really like about C#? The .NET runtime. I know that a lot of people dislike it because there's so much leftover stuff from previous versions, but despite this it's one of the most comprehensive runtimes I've ever worked with.

Want to take a screenshot of the desktop? Got it. Want to post keystrokes to another application? No problem. Ohh I see you want to write a tcp server with asynchronous processing? Here's some generics you can just c&p.

I know other languages have these features too, but C# makes it so damn easy to use in the stdlib it's just not funny anymore.


> Ohh I see you want to write a tcp server with asynchronous processing? Here's some generics you can just c&p.

So we've come to the point where copy-pasteability is a language feature? Tell me I can use a library or module. Tell me that the language comes with built in features to help me do this or that. But never tell me I can copy-paste generic code. Please.


That's actually one of the features of C#, if something is needed or is hot in another language, it will appear in the core libraries.

They've realised recently that this is having a negative effect on open-source C# code, as so much of the community waits for the "official" version, but it has its positives too: almost anything you want to do is in the core libraries.


This is my main grudge with .NET development, a culture of preferring The One True Way. It (among other things) leads to a lack of .NET FOSS diversity.


The biggest problem with this culture is that it means that when the One True Way is wrong, we suck it up and use it anyways.

I mean, Entity Framework is great, Web API is slick, and MVC has its moments, but there are some real train wrecks. I've used all three XML serializers in the .NET framework and they're all cringeworthy. ASP.NET Web Forms could have been something spectacular with open development instead of the hideous monstrosity it was. And MSBuild would have been laughed out of the room and regarded as some kind of bizarre eccentricity like TempleOS or Urbit instead of a serious build system.


>I've used all three XML serializers in the .NET framework and they're all cringeworthy.

XmlSerializer, DataContractSerializer, and what is the third? I agree and was about to point out the same thing though; XML serializers in .NET are a joke but because they exist in the core libraries there is not really a compelling open source alternative.

That being said, it seems like the Asp.Net Web API is actually using Json.Net instead of the built-in serializer.


XamlServices serializer. Technically it serializes Xaml, but afaik it can be prodded to produce vanilla XML.


Don't mock MSBuild.

I never slept as much or as well as when I tried to work through the documentation.


Generic code is made to be copy-pasted. Also, what's wrong with copy-pasting when it lets you write features faster?


It's a code smell that usually means there's something there to be abstracted, or that your API isn't as clean as it could be.


I'd take a bit of copy-paste over premature generalisation any day.


I'd take a premature generalisation over a false dichotomy.


So if its not a dichotomy, what are the other options? If you want to avoid copy/paste, you have to use generalisation techniques.


Yes, and generalisation doesn't mean premature generalisation. It's very possible to just have a well-designed generalisation.


Well OK. But in a lot of cases I see badly designed over complex generalisations ("premature generalisation") because of a phobia of copy/paste. A bit of copy/paste is OK guys, seriously. Wait and see how it goes for a bit and then refactor the general case in later _if_ it seems worth it.


Also it compiles fast and tools are really good.

People keep bringing up Scala; last time I tried Scala (~a year back) the compile times were pretty long and the IDE support was sluggish, and I have a decent PC. Nowhere near the C#/VS development experience.


I would add "yield" to your list: a small feature, but it allows for very elegant code and easy implementation of deferred evaluation/execution.
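A small sketch of what yield buys you, deferred evaluation included (the Fibonacci example is my own, not from the thread):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class YieldSketch
{
    // An iterator method: nothing runs until the sequence is enumerated,
    // and each MoveNext resumes right after the last yield return.
    static IEnumerable<int> Fibonacci()
    {
        int a = 0, b = 1;
        while (true)
        {
            yield return a;
            (a, b) = (b, a + b);
        }
    }

    static void Main()
    {
        // Deferred evaluation: an infinite sequence is fine as long as
        // the consumer only takes what it needs.
        var firstSix = Fibonacci().Take(6);
        Console.WriteLine(string.Join(",", firstSix)); // prints: 0,1,1,2,3,5
    }
}
```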


Dartlang will also get C#-style async/await (as of 1.8) and has very similar syntax. Additionally it has more CLI-oriented tools (e.g. pub) and Google ecosystem integration (the recent App Engine Managed VMs integration).

I'm happy .NET is going down the open-source path, but I still think we need others to compete.


I'd like to add extension methods and default parameters to your list. Being able to extend any object with new methods has made programming in C# much more pleasurable than most languages I've used.
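Both features in one small sketch (the Truncate helper is invented for illustration, not a standard library method):

```csharp
using System;

static class StringExtensions
{
    // An extension method: a static method, but callable as if it were
    // an instance method on string. The default parameter means callers
    // only specify the suffix when they want a non-standard one.
    public static string Truncate(this string s, int max, string suffix = "...")
        => s.Length <= max ? s : s.Substring(0, max) + suffix;
}

class ExtensionSketch
{
    static void Main()
    {
        Console.WriteLine("Hello, world".Truncate(5));      // prints: Hello...
        Console.WriteLine("Hi".Truncate(5));                // prints: Hi
        Console.WriteLine("Hello, world".Truncate(5, "!")); // prints: Hello!
    }
}
```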


Thanks for your extensive answer.


> multiple competing event loops for instance

Fortunately Windows has implemented message-passing event loops for you for a couple of decades. And they're pleasant to work with as long as you don't mind everything being a WPARAM/LPARAM.


Those types of event loops are for Win32 API GUI applications.

Server applications like an async socket server use IOCP, a kernel-based type of event loop that is highly performant and scalable. And by the way, the .NET CLR already uses it at its core for sockets and other types of I/O.


Are strings a separate entity from built-in types like they are in Java? If so, I find that behaviour odd and welcome C++'s "a user-defined type is no different from a built-in type" approach.


LINQ is _the_ only reason I tolerate .net


You mean C#? Let's keep those separate shall we?

LINQ makes C# much better, especially compared to Java. It's a whole different story in F#, though.


What's the story for F#?

I'm a Haskell enthusiast, but it's not so good at Windows support, or libraries. I've been thinking F# might be just the ticket.


F# has huge momentum at the moment, quickly rising in the TIOBE index. It's an ML-style functional language for the CLR. It's not as powerful or pure as Haskell in terms of type system and style, and it is lacking (or is more pragmatic) in a few places in order to be interoperable with other .NET langs. It has some really cool features, though, that are quite unique, such as:

Type providers: http://fsharp.github.io/FSharp.Data/library/JsonProvider.htm...
Code quotations: http://msdn.microsoft.com/en-us/library/dd233212.aspx

Most importantly it has become a first class citizen of the ecosystem so the tooling is already great compared to a lot of similar languages.

http://fsharp.org/


F# 4.0 pre-release was just announced/shipped:

http://blogs.msdn.com/b/fsharpteam/archive/2014/11/12/announ...


F# has Query Expressions[1] which are just Computation Expressions[2], i.e. not a language extension, a library.

If you know Haskell then it's pretty easy to get into F#. The only thing you'll miss is higher-kinded types. Although they can be hacked in[3], it's not really worth the effort a lot of the time.

[1] http://msdn.microsoft.com/en-GB/library/hh225374.aspx

[2] http://msdn.microsoft.com/en-us/library/dd233182.aspx

[3] https://code.google.com/p/fsharp-typeclasses/


Agree. Also, I think F# can't dispatch on the return type of a polymorphic expression, as I thought that was only possible with type classes?


I think it's possible by including the return-type in the argument list and using inline operators. You really have to jump through hoops though to make it work and the resulting code is damn ugly. Take a look at the source for FsControl[1]. It implements what they call 'type methods'. So there's types for Monad.Bind, Functor.Map, Applicative.Pure, etc.

I'm still trying to get my head around it; I thought I'd try to do so by implementing the core types and functions in Haskell's Pipes. So far I've managed to convince the F# compiler to take well over an hour to compile 70 lines of code!

The main argument against them has been lack of CLR support. But it seems that there's already the FsControl way, so I think some syntactic sugar would go a long way.

So yeah, until there's language support for type-classes I think it's probably not worth it.

[1] https://github.com/gmpl/FsControl


oh, this is such a great response to a question that I didn't think anyone would be able to answer - thank you!

I'll have to go study FsControl now ;)


My pleasure :)


I'm so onboard with what you're saying. I've been using mono for the last couple years, so much so that I call myself a "mono developer" and not a ".NET developer".

I worked at Microsoft for a couple years and I could already smell this brewing. The devs I respected the most were also the most unhappy with the platform tie-in. This is a desperately needed political shift for the organization, and I think it will actually boost morale as well.


This is my impression too. I'm not terribly familiar with C#, and used to avoid anything from Microsoft like the plague, but a lot of stuff I hear about C# sounds like it's a bit ahead of Java. (Really, Java should be more like Groovy, if you ask me.)

But the JVM platform is great, and I'm just not going to get locked into a MS ecosystem. But if MS opens up to the rest of the world, well, that changes things considerably.


> Java should be more like Groovy

It has been since lambdas were introduced in Java 8. Groovy's original design by its creator James Strachan was about adding closures to the dynamic typing that Beanshell had at the time. The stuff added to Groovy since then (e.g. the MOP for Grails, DSL syntax, static typing in 2.0) are things Java already does better or things (if you ask me) it shouldn't do.


I think you're in a niche avoiding Microsoft technologies. It isn't so much Microsoft opening up to the world (which is true to some extent) as embracing the smaller market shares of Linux + Mac OS. Windows market share is colossal, and C# is a big thing in Windows land, even more important than C++.


"Smaller market shares"? That depends entirely on how you limit your view. PC desktop, absolutely. But internet servers? Mobile? Those are places where Linux is huge, and Java is too. But the thing about Java is that it also works on Windows. And that's the great news here: you're not just stuck with Windows anymore if you use C#, and that makes it a lot more useful.


You prefer yourself an Oracle cage?


Exactly. IMHO both the Oracle and MS runtimes are dubious mechanisms, especially in light of the native cross-compiling that Go and co. are gradually bringing to the fore.


C# as a language is better than Java, but not as good as Scala.


I don't know why you are being down voted. Erik Meijer, a huge .NET contributor who worked at Microsoft (on C# among other things), and was behind the reactive .NET extensions (and also a big Haskell contributor) - he was saying exactly the same thing about Scala vs C# - http://www.infoq.com/presentations/covariance-contravariance...

I can also testify as someone who uses Scala on a day to day basis and used C# at work as well - I completely understand your statement, although it's very subjective.


Probably because,

A), it was an uncalled-for digression into language wars, and

B), now that we're here anyways, a lot of us think the majority of features in scala and Haskell for that matter are fundamentally misguided. My goal is not to write elegant code, but to write the least complex code with the lowest cognitive overhead. TCO, 10x the man-hours in maintenance and all that. If the choice is between having to write null-checks or having to understand category theory to read my code, I'll take the null-checks.


A) I think C# is a great language actually, and Scala's success is probably due to Java's severe limitations. I just don't think someone working on Unix should be jealous of C# because there are great languages available here.

B) Precisely, unlike Haskell, Scala doesn't force you to write "elegant code", you can even use variables. Purists will flame you for that, but sometimes it's the best way to write a small piece of code and as soon as no state leaks outside of the function, it's not that bad. That said, I believe most of the features of Scala make maintenance easier, including the absence of null-checks as it moves errors from runtime to compile time and make major modifications without regressions easier.


You can use variables in Haskell as well, no problem. It just forces you to stack your monad on IO, STM or some other mutable-value-supporting monad, of which there are several.

The only difference worth mentioning is that the type system won't allow "leaks".


> If the choice is between having to write null-checks or having to understand category theory to read my code

Me too. Luckily this is a false dichotomy.


Here's a random line from the Scala standard collections library:

def :+[B >: A, That](elem: B)(implicit bf: CanBuildFrom[Repr, B, That]): That


In Java, I've got a List.add function. In case it's not clear from the name, 'add', I can click through to the implementation and it's pretty obvious what's happening.

In Scala, I've got +, ++, +:, :+, and a bunch of other nonsensical bullshit, and when I click through to the implementation? Even less sense. Whenever I use a standard scala collection, I have no idea what it's actually doing. Additionally, everything favors allocation-happy overly-clever immutable wrappers rather than a simple ArrayList which will smoke those immutable implementations in real-world performance.

The cure is far worse than the disease, here.


For a Scala developer that signature makes sense. You want to add an element of type B to your collection with elements of type A, however the addition of an element of type B may not be supported (e.g. you may want to add a String to a BitSet and get in return a plain Set[Any]), so the above works only as long as there is a builder available that can build the new collection. The function also works on covariant collections, because it's building a new collection (i.e. it's for immutable collections), so it doesn't have the covariance gotcha of arrays.

Scala's collections have some quirks, but not as many as .NET's collections and you're actually comparing apples to oranges, because you won't find the equivalent of an "add" that returns a new collection instead of modifying the old one in .NET.

This is what happens when you pass judgement unto things you don't understand. Working with immutable data-structures is really, really awesome and Scala's API for these collections is very friendly and very type-safe - as in, if you feel the need to use `isInstanceOf` / `asInstanceOf`, then you're probably doing something wrong ;-)

And I really wish that C# would grow up a little in this regard, as modern programming languages need immutable collections as well, with a nice API to go along with it. And btw - working with Option is super awesome, no category theory needed.
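For the record, here's roughly what "no category theory needed" looks like in practice; a small sketch with a made-up lookup table:

```scala
val users = Map(1 -> "alice", 2 -> "bob")

def lookup(id: Int): Option[String] = users.get(id)

// map/getOrElse instead of null checks
val a = lookup(1).map(_.toUpperCase).getOrElse("unknown")
val b = lookup(9).map(_.toUpperCase).getOrElse("unknown")

println(a) // ALICE
println(b) // unknown
```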

That said, as I was saying in another comment, I'm really excited about this announcement, because this is mostly about the runtime, not the language. You can run things built with Scala on top of .NET right now by means of IKVM. And the JVM finally has some credible competition.


Oh btw C# does have immutable collections now. Adoption isn't too high yet AFAIK, but they're pretty awesome.

http://msdn.microsoft.com/en-us/library/dn385366(v=vs.110).a...


Interesting, thanks for the link.


>>> This is what happens when you pass judgement unto things you don't understand

I think it's kind of the point if the judgement is on the question "what is easier to understand". I think this is the sentiment many people starting to learn Scala are feeling - the barrier to entry, even if you are coming not from a blank slate but from a background of programming many years in many languages, is pretty high. It's not a judgement on "whether Scala and its collections are good/done right", which is an entirely separate question from "whether it is easier for someone to understand how C# collections or Scala collections work".

>>> And btw - working with Option is super awesome, no category theory needed.

Well, if you want to do something like making a function that works on Option from a function that works on the underlying type, you pretty soon find yourself in that general area.
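That "general area" boils down to `map`: lifting a plain function into one that works on Option. A sketch (formally this is the functor operation, but you can use it without ever learning that):

```scala
// Turn an A => B into an Option[A] => Option[B]
def lift[A, B](f: A => B): Option[A] => Option[B] =
  opt => opt.map(f)

val double = (n: Int) => n * 2
val liftedDouble = lift(double)

println(liftedDouble(Some(21))) // Some(42)
println(liftedDouble(None))     // None
```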


I agree that Option types are great, so great that I'm happy to work in a much 'weaker' language, use option types, and solve 90% of the problems that more 'theoretically robust' languages address with none of their downsides.

As far as not understanding how awesome immutable data structures are... I'd take a step back before making assumptions about what other people understand. Do you know anything about cache hierarchies and memory models on modern CPUs? Performance is important to some of us.


> Do you know anything about cache hierarchy and memory models on modern CPUs?

Yes I do. Worked 3 years on a soft real-time system with massive load, profiled the shit out of everything. There's an interesting discussion we could have about when immutable data-structures work best, when they've got problems and when it doesn't matter, especially given the extra benefits in dealing with accidental complexity. This isn't the right place though.

> Performance is important to some of us

Yes it is, but performance problems are fixed by profiling and optimizing the bottlenecks. Even in a system under massive load, in many cases it doesn't matter, and in some cases immutability increases performance by eliminating contention on reads. And seriously, most people invoking performance problems don't actually have those performance problems to begin with, hence my assumption.


Well, that's good, sorry for asking. I've had a lot of people point at the big-O notation of that linked-list append and tell me that it's faster than an ArrayList append on average. Usually this is right after telling me that "I just don't get it" with immutable collections. So I hope you can see why I'd react that way.

As I'm sure you know, when it comes to real-world situations, reducing inter-thread communication and isolating anything mutable is the key concern. That's why I find immutable wrappers that mock mutability to be a bit of a sideshow. You shouldn't have read contention with locking in the first place unless it's for a very good reason.


> You shouldn't have read contention with locking in the first place unless it's for a very good reason.

True, but you know how it is in practice :-)

For example I found that persistent data-structures work best when you've got single-producer, multiple-consumer scenarios - you mutate some state and want to signal it over asynchronous boundaries to multiple consumers. With an immutable data-structure you just signal it, worry free, and then you can keep on changing that state, completely non-blocking / wait-free and with good algorithmic complexity.

Actually non-blocking logic becomes really easy, as you can always shove an immutable value into an atomic reference (note - I'm not saying "wait free", which still takes a lot of work :))
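A sketch of that single-writer publish pattern (names invented for illustration): the writer swaps in a fresh immutable map, and any reader grabs a consistent snapshot without locks.

```scala
import java.util.concurrent.atomic.AtomicReference

// Shared state: an immutable map behind an atomic reference
val state = new AtomicReference(Map.empty[String, Int])

// Single writer: build a new immutable value and publish it with set;
// no CAS loop needed because nobody else writes
def record(key: String): Unit =
  state.set(state.get + (key -> (state.get.getOrElse(key, 0) + 1)))

record("requests")
record("requests")

// Any consumer thread reads a complete, consistent snapshot, lock-free
val snapshot: Map[String, Int] = state.get
println(snapshot("requests")) // 2
```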

So really, persistent data-structures are great in a multi-threading context, as long as you don't have multiple producers pounding on the same reference holding such an immutable value - if you do, things can get bad compared to specialized concurrent mutable data-structures, because a good concurrent data-structure is able to distribute the contention across multiple buckets instead of just one. But then again, having multiple producers pounding on the same resource is just asking for trouble and has to be avoided, because of Amdahl's law.

Also, as you've hinted at, the problem with a normal linked list is the level of indirection. And in general, persistent data-structures imply the usage of trees, which also implies indirections. More advanced persistent data-structures are much better than the linked list, and this is an active area of research, but on the whole there's still much room for improvement.

On the other hand, in my opinion when speaking about performance, the first problem one has is to actually use the available CPUs (e.g. getting CPU usage over, say, 70-80%). That is usually hard to achieve if you have a combination of CPU-bound and I/O-bound tasks and your I/O stuff is not asynchronous. Only after that can you move on to optimizing memory access patterns for cache locality and minimizing the stop-the-world freezes.

Speaking of GC, that's another topic - persistent data-structures have a tendency to generate junk that is neither short-term nor long-term, which invalidates the assumptions that current GCs are making. The JVM at least has really good GCs, but without paying for a pauseless one (like the one from Azul Systems) you can still end up in trouble if you don't pay attention - but then you fire up YourKit's profiler, find the source of those STWs, optimize, and it works out well.

All in all I encourage everybody to find a good library that implements persistent data-structures and integrate them in their toolbox.


Hey, so I agree with your analysis in general, and sorry for being opaque earlier (was at work), but you totally got what I was driving at and explained it probably better than I could have.

The one thing I disagree with: in many server applications, using the available CPUs is pretty easy. You've got thread pools handling various tasks; just crank them up. In JVM-land, a fairly heavy 512 KB stack per thread is still not much of a penalty to pay as long as you re-use the threads. Aggregate application performance then becomes a matter of completing tasks faster while creating less garbage.

So it all comes down to what you consider a 'task' and how you handle the handoffs between them. The architecture decisions at this level dwarf the improvements from using an array vs list, as you implied, but they also make the usage of immutable types somewhat irrelevant IMO. Seal off mutable code within single-task boundaries and it doesn't matter how ugly it is, as long as you're passing immutable types (just plain javabeans with final members are fine) between boundaries.

Anyways, just my opinion. Great comment.


> Actually non-blocking logic becomes really easy, as you can always shove an immutable value into an atomic reference (note - I'm not saying "wait free", which still takes a lot of work :))

Why the atomic reference here? I know that provides CAS but if we're talking about a single writer aren't you okay to just replace things anyway?


Yes, I wasn't talking about a single writer. That was a new paragraph. Sorry about not making that clear.


Ah cool, just wanted to clarify as I'm only a beginner for much of this stuff, but found your posts very interesting.

Thanks!


Nice effort, but judging from the non-sense emitted by jbooth, he won't understand the things you are explaining anyway.


> Well, that's good, sorry for asking.

Gotta test for any chinks in the armour.


> Do you know anything about cache hierarchy and memory models on modern CPUs?

Yes.

> Performance is important to some of us.

... and in those cases, you don't have to use data structures that model your problem domain poorly.

The problem isn't "immutable data structures" or "theoretically robust" languages. The problem is finding ways to express computations and their constraints in a way that can be efficiently modeled for your problem domain.


The problem with this logic is that you won't know about your performance problem until the system is scaled up to production-level tasks - unless you build a comprehensive performance model beforehand, which takes about as long as the implementation (and is thus nonsensical for most tasks) and is, IMO, really hard to do in an FP language.

So what will probably happen is that once you see your performance problems, you can pray that it's only some hot spots that you can then replace with faster code - often it's not, and instead you have 100 places each eating around 1% of your time budget, which is when you get to start over.


>>And I really wish that C# would grow up a little in this regard, as modern programming languages need immutable collections as well, with a nice API to go along with it

May I suggest the Immutable Collections library from Microsoft? http://msdn.microsoft.com/en-us/library/dn385366(v=vs.110).a...

Admittedly not part of the core library, but installing a NuGet package is pretty darn easy.

>>This is what happens when you pass judgement unto things you don't understand ;)

[edit: Guess I should've refreshed the page to see the prior response before writing this. Apologies.]


The operators on collections always have alternate named forms. I actually instinctively skip over the operators when perusing the documentation and have no trouble at all finding what I need.


Pointing to CBF is a complete straw-man.

You're rarely actually going to _see_ that signature (the docs actually simplify it for you), and in practice, it's completely meaningless to 99.9% of Scala you'll ever see.

You might as well consider it pixie-dust.

+ to add one element, ++ to add a collection; the others generally apply to cons-like structures. It takes all of a couple of minutes to let these sink in.
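Concretely, for the common operators (mnemonic: the colon goes on the collection side):

```scala
val xs = Vector(2, 3)

println(1 +: xs)             // Vector(1, 2, 3)    prepend one element
println(xs :+ 4)             // Vector(2, 3, 4)    append one element
println(xs ++ Vector(4, 5))  // Vector(2, 3, 4, 5) concatenate a collection
println(0 :: List(1, 2))     // List(0, 1, 2)      cons onto a List
```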

Use a mutable.MutableList if that's what you want. Or just use ArrayList.

All this is about as pure FUD as I've ever seen...


It isn't FUD: It's just the kind of thing that turns newbies out of Scala.

Your typical Java developer is used to just looking inside the code of whatever library they are using and finding a very straightforward implementation. Scala collections avoid a lot of boilerplate with canBuildFrom, SeqLike and such, but simple and straightforward they are not. It takes quite a while before it stops reading like Japanese.

And IMO yes, a lot of symbolic methods in collections make relatively little sense. Don't forget that List also has ::, :::, +:, :+ and :\. There are more than a few, and there are no textual versions of them for those who don't have them all memorized. They are a bit of a relic from the time Martin thought that /: was a good idea. It's fortunate that now only the scalaz people keep doing such things, because excessive use of symbolic operators hurts language adoption.

And he doesn't even get into other early confusion points, like how we have =>, <-, and ->, or how decomposing Seqs is not exactly pretty. Last week I had to help a guy that had been using Scala for 6 months to understand the 'punched in the face' operator :_*

So no, it's definitely not FUD. Are they issues that hurt my day-to-day Scala use? Not at all. Scala is my favorite language. Being able to use it instead of Java or Clojure is worth a good 15K a year to me. But that doesn't mean that I have forgotten some of the little things that made the learning curve tough at first. Thank the heavens that I managed to end up finding a Scala job where I could learn from one Bill Venners.


Maybe this is a Java developer thing.

Everybody decried Ruby's 107 methods on Array. Then Fowler came out with "fluent-interfaces", and how often do you see someone make the claims that Ruby's Array is indicative of a general badness because it has a lot of methods and you can't memorize them all in an hour as a newbie?

scala.collection is the same deal.

:\? Sure it's not a good idea. I wouldn't debate that. Who uses that? foldLeft/foldRight.

And what's the deal with trying to memorize the entire interface anyways? 8 or so years with Ruby, writing libraries with over 4 million downloads (https://rubygems.org/profiles/ssmoot), and there are definitely methods in Ruby's Array I'm unfamiliar with.

So what?

While :: and ::: look a little foreign, I don't think asking people to learn them if they want to work with Lists in a functional manner is any more difficult than learning what the "spaceship operator" does in Ruby. And it's optional. You don't have to use them. But their usage will probably be the smaller part in the grand scheme of things. Pattern matching and accomplishing functional recursion with immutable data is the bigger picture. Outside of that context (and the REPL I guess, for convenience) you're just not going to see either of those operators very often (IME).

You don't use a "splat" (aka 'punched in the face'? That's a new one to me) operator very often. It's actually one of the few semi-pattern-matchy areas of Ruby so it comes pretty naturally for me.

You don't become a pro overnight. You can trust I'd be teaching infrequently used idioms in Ruby to a developer who'd only been using it for six months. Been there, done that. ;-)

I guess where I'm coming from is this: using CBF to claim Scala is a confusing, indecipherable language. Everyone's had that argument. I don't know that anyone wants to stand up and claim it's the best. But it works. Actually using it is a non-issue, since you don't explicitly use it. And look at the docs. They're actually pretty great overall.

It just bothers me I suppose that someone interested in exploring Scala would be dissuaded by something that's only ever been a problem for 1% of 1% of Scala developers.

If you're just looking to swap in Play to replace Rails, odds are you'll never run into any of these CBF "concerns". At all. That's the very definition of "FUD" IMO.

For every CBF in Scala there's a Calendar in Java. Languages aren't perfect. CBF is probably a wart. But it's a hidden one. If you let it scare you off from Scala that's sad, because it really has about as much to do with day to day Scala development as array.c (https://github.com/ruby/ruby/blob/trunk/array.c) has to do with Ruby development.


You're conflating a lot of unrelated issues here by lashing out against things you seemingly don't understand.


How on earth does "I don't like operators" support your claim?


The claim was that scala code is very difficult to read and idiomatic scala code, as evidenced by their collections library, brings along a ton of conceptual overhead that creates much more complexity than it eliminates.

That line alone drives home the point for me, you can read the rest of scala.collection.immutable if you need more convincing.


First and foremost, I think you need to read about Blub. [0]

Also, idiomatic Scala code is perfectly readable. You have no right to complain about readability until you've dealt with spaghetti code written for Megacorp Inc, that completely ignores the fundamentals of structured programming. :)

[0] http://www.paulgraham.com/avg.html


The claim was that you need to know category theory. Your post is still there, we can still read it. It does not make much sense to pretend it says something other than what it says.


> majority of features in scala and Haskell for that matter are fundamentally misguided

I'm interested in the Haskell part. What features do you think are fundamentally misguided ?


I'm much less experienced with Haskell than with Scala, so I might be unfairly painting it with the same brush here.

But, for example, https://www.haskell.org/tutorial/io.html.

At the end of the day, this is a huge inner-framework anti-pattern over the same procedural syscalls that every other language handles procedurally. I shouldn't have to care what a monad is, and side effects? The whole point of I/O is side effects. It could be a no-op, idle process or CPU-burning busy loop if I didn't care about side effects.

From that doc:

" So, in the end, has Haskell simply re-invented the imperative wheel?

In some sense, yes. The I/O monad constitutes a small imperative sub-language inside Haskell, and thus the I/O component of a program may appear similar to ordinary imperative code. But there is one important difference: There is no special semantics that the user needs to deal with. In particular, equational reasoning in Haskell is not compromised. The imperative feel of the monadic code in a program does not detract from the functional aspect of Haskell. An experienced functional programmer should be able to minimize the imperative component of the program, only using the I/O monad for a minimal amount of top-level sequencing. The monad cleanly separates the functional and imperative program components. In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds."

So, basically, they acknowledge that their theoretical model has a huge impedance mismatch with what we write programs to do (I/O, eventually, somewhere). And that's fine, they can knock themselves out and I hope it's fulfilling for them. It's not for me.


Clearly you have no idea what you're talking about. I suggest first learning the language before spreading FUD about it.

Separation of pure and impure components greatly simplifies reasoning about the problem domain. Traditional imperative programs are often full of subtle bugs because calling a procedure can have an arbitrary effect on your system. For example, calling a function with the same arguments can return different values for arbitrary, hard-to-track reasons (such as your OS's scheduler). It's just too much detail to keep in your head.

But in Haskell pure functions are guaranteed to have same result with same input. It enables the programmer to create logically isolated blocks without messy interdependencies.

It's especially helpful for concurrent programming by freeing you from all the non-deterministic spaghetti.

Haskell has a steep learning curve, but it'll make you a better programmer in the long run.


"But in Haskell pure functions are guaranteed to have same result with same input." I'm pretty sure that same law applies to a pure function in any language, not just Haskell. #shitthatHNsays


Correct. Though many languages don't encourage purity, and as a result don't have nearly as many pure functions as Haskell code does.


In fairness, I could also say that hammers don't encourage carpentry best practices and as a result they cause many more smashed fingers than wood glue. The fact that a tool doesn't prevent counterproductive usage patterns doesn't automatically make it inappropriate for every job.


Point being, I/O doesn't square with that concept. For some reason, every time I point this out, I'm accused of not appreciating the beauty of pure functions. I love pure functions. Maybe they're not appreciating the ugly reality of I/O?


How much experience do you have composing I/O in Haskell to prove that I/O doesn't square with that concept? Others here have experience using Haskell with I/O, but you just seem to have a hypothesis without experience or examples to support it.


> Point being, I/O doesn't square with that concept.

Sure it does, and composes well under it.


I'm seeing I/O as "side effects with possible random error conditions". I really don't see how that squares with functional purity.

It's late on EST but I promise if you put effort into explaining a higher-order take on this I'll put effort into reading and understanding it tomorrow. Have a good night.


Through monads, functional purity with I/O is achieved; ergo you can compose I/O and get the other advantages of purity.


What "pure" means varies from language to language.


It really does not.


I don't think they are acknowledging any impedance mismatch there. The point of that paragraph is to emphasize that Haskell retains the usual feel of imperative programming, with the important difference that the imperative bits of your code are cleanly separated from the pure bits due to the IO type.

There's no theoretical stuff here, it's just making sure that the caller knows about callees having side effects.


>At the end of the day, this is a huge inner-framework anti-pattern over the same procedural syscalls that every other language handles procedurally.

No, you have a tiny, very simple type that allows for type safe IO. It also happens to make haskell a more powerful imperative language than most imperative languages, as IO actions are first class and can be passed around and manipulated like anything else.

>So, basically, they acknowledge that their theoretical model has a huge impedance mismatch with what we write programs to do (I/O, eventually, somewhere)

No, they acknowledge that doing IO is so important that it should be done correctly. You are going to some pretty extreme mental gymnastics to misrepresent a language you want to hate.


To be fair, understanding I/O in Haskell takes quite a bit of mental gymnastics. :)

The I/O monad modifies the Universe that contains the set of all functions that comprise your program (if I even understand the concept correctly).


> The I/O monad modifies the Universe that contains the set of all functions that comprise your program

That sounds rather complicated. My way of thinking of it is simpler than that. a -> IO b is just a function from a to b that can do some I/O. Nothing more complicated than that.


I recommend just regarding IO a as an action, or some description of an action, that returns a value of type a. And (>>=) constructs bigger actions by attaching continuations to actions.


It is (slightly) more complicated than that. Your return type is not b but IO b. Your callers will be applying functions that are defined on IO b to the result (such as bind).


The IO monad is just an action. It's a cons pair with a left and a right half. In an imperative language you could model the left half with a closure that takes no arguments, does I/O and produces a value.

The right half is usually empty, except when you use the `>>=` operator (or flatMap) on an existing action, e.g.

let c = a >>= f

That operator chains them together: the function `f` takes the value produced by the first IO action and produces another IO action. After we apply that operator, the result `c` is a cons cell where the left part is the original action `a` and the right part is the function `f`, which takes the value produced by `a` and produces the next action. It's sort of like a cons cell in regular lists, except the next value is provided by a function. A lazy cons cell, perhaps :)

The final result is a lazy chain of I/O cons cells (called "main", of course :P). It's passed to the Haskell runtime, which executes that chain as a recipe, alternating between performing I/O actions and evaluating the next function to decide what to do next.

So what does this buy us? Mostly just referential transparency. What does referential transparency buy us? Easy refactoring. We can replace any expression with its value, even stuff like `putStrLn "test"`. We can say `let writeTest = putStrLn "test"` at the top of the file then write `do writeTest; writeTest` in main.

Another neat thing is that do syntax isn't limited to just IO, but works with anything that implements `flatMap` (and `unit`, which I forgot to mention). That means we can build our own imperative DSLs that produce IO-like monads which are then interpreted by our own interpreter, and the users of those DSLs can use the same do syntax. Which is pretty awesome. Here is a simple example: https://gist.github.com/tonymorris/b5dba9d7d877051d0164 and a much more complex one http://augustss.blogspot.com/2009/02/more-basic-not-that-any... :)
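The cons-cell picture above can be sketched in a few lines; here's a toy IO type in Scala (not Haskell's real machinery, just the shape of it):

```scala
// A toy IO: a description of an action, not the action itself
final case class IO[A](run: () => A) {
  // flatMap chains this action with a function producing the next one,
  // exactly the ">>=" chaining described above
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(run()).run())
  def map[B](f: A => B): IO[B] = IO(() => f(run()))
}

var out = List.empty[String]
val writeTest = IO(() => out = out :+ "test") // a value; runs nothing yet

val program = writeTest.flatMap(_ => writeTest)
// referential transparency: nothing has happened so far
println(out.length) // 0
program.run() // the "runtime" interprets the chain
println(out)  // List(test, test)
```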


or these PureScript examples https://gist.github.com/spion/982350f4b3d3464b1870 using the Canvas monad to pre-create a scene and then render it; the DOM monad to construct DOM elements.


So how are errors handled?


You have the option of two monads, Maybe and Either. Usually errors are handled with the Either monad. An operation may either return a result or an error. Together with do syntax the Either monad gives you multiple choices in handling errors.

You can handle every single error explicitly (as in Go) using pattern matching

  eitherResultOrError = operation1 arg

  case eitherResultOrError of
    Left err -> handle err
    Right result -> handle' result
It's also possible to chain multiple operations and then check for an error at the end. If an error occurs, the next operations in the chain will not execute.

  let eitherResultOrError = do
        x <- operation1 arg
        y <- operation2 (x + 1)
        z <- operation3 x y
        return z

  case eitherResultOrError of
    Left err -> handle err
    Right result -> handle' result
Or simply fall back to a default value in case of errors, e.g. with `either (const defaultVal) id`.

For IO operations and other monadic actions, it's best to use EitherT, ErrorT or MaybeT, which are monad transformers that can add error handling to any other monad. To understand how these work, it's probably best to implement MaybeT yourself. Basically, they add another wrapper around other monads to redefine what the bind operator (`>>=`) does.
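The same short-circuiting style carries over to Scala, where Either works directly in for-comprehensions (helper functions invented for the example; Either has been right-biased since Scala 2.12):

```scala
def parseInt(s: String): Either[String, Int] =
  try Right(s.trim.toInt)
  catch { case _: NumberFormatException => Left(s"not a number: $s") }

def positive(n: Int): Either[String, Int] =
  if (n > 0) Right(n) else Left(s"not positive: $n")

// Each step runs only if the previous one returned Right;
// the first Left short-circuits the rest of the chain
val ok  = for { n <- parseInt("8080"); p <- positive(n) } yield p
val bad = for { n <- parseInt("-1"); p <- positive(n) } yield p

println(ok)  // Right(8080)
println(bad) // Left(not positive: -1)
```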


Understood. By the way, I do null checks in Scala, may the lord have mercy on my soul. I don't like the "I'm functional and smarter than you with my higher-kinded types, monads and type classes" attitude that a minority of Scala lead figures have given it. Scala is a very pragmatic, down-to-earth language that you don't HAVE to abuse; the issue is that some people abused it in some libraries (e.g. with weird operators etc...) and some people are "either all the way functional or I'd rather die". But I think that if any C# developer jumped into Scala, they would have much better things to say about it, even without using a single "purely functional" aspect of the language.


I'd not say that avoiding null checks requires any serious functional magic. It's a matter of convention: If you are ever using anything that could be null, surround it with an Option. It just makes the fields that might need to be checked explicit.

One doesn't have to learn category theory to realize that Options are great, especially when accompanied by a little bit of help from pattern matching, map, flatten and getOrElse.

The fact that many Scala lead figures that come from a Haskell background are bringing with them a holier than thou attitude doesn't mean that there aren't some functional concepts that are very useful even if your code is mostly imperative, and Option is arguably the least controversial of the lot. It's so uncontroversial it's in Java 8, although suffering from the fact that it is lacking some of the great Scala goodies.


If I'm not mistaken, C# 6 will be improving on the null-check thing as well. Though the syntax seems jarring to me right now (so did the inverted SQL-like LINQ style - yay for the fluent style).


I'd rather duck the question by supporting non-nullable types without resorting to higher-kinded types.


How can you think they are "fundamentally misguided" if you don't even know what they are? The category theory strawman is the absolute laziest, most absurd piece of FUD you could possibly resort to. It is even worse than the old "perl is bad because I don't want to read line noise" nonsense, at least that had some loose tie to reality.


Missed opportunity to explain why those are 'fundamentally misguided'.


GP didn't mention specifically which features were misguided, making a response require a disproportionate amount of effort compared to what he's put in. And he's talking complete nonsense about "category theory," which is just ridiculous so it's not like he's close to worthy of some effort-post.


In my experience the "category theory" claim is imprecise but not unfounded. Without fairly serious abstract understanding of monads, functors, and applicatives (at a bare minimum), it is not possible to write Haskell code profitably. By that I mean that without such knowledge you should have just written something else.

To make Haskell a serious advantage and not just a minor benefit (compared to, say, SML), you probably also need to understand kinds. Let's bear in mind that the vast majority of practicing programmers can't reasonably define an algebraic data type, and that it's not their fault, because the ROI for learning such things is often negative.

I do think there is a reactionary "don't-wanna-learn anything" vibe against Haskell among certain groups. But I think we should be clear that, to get the power that Haskell promises, you do need to learn many new things. Those things, while not exactly category theory in a narrow sense, are closer to category theory than they are to conventional programming knowledge.


Yes, it is completely unfounded. You do not need to know anything about category theory, or even what it is. All you need to know about Monad is its interface. If you can handle Java, you can handle Haskell. All you need to understand about kinds is "it's how many arguments a type takes". And that isn't category theory in the first place. Anyone can define an ADT. That's one of the first things I taught our PHP team; nobody had any confusion or problems with it, it was a 2-minute thing. I have a hard time imagining how you could think:

    data Bool = True | False
is hard to learn or understand or would have a "negative ROI".
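For comparison, the same kind of ADT takes a couple more lines in Scala but is just as quick to teach (example types invented here):

```scala
// An algebraic data type: a Shape is exactly one of these two cases
sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Rect(w: Double, h: Double) extends Shape

// Pattern matching over a sealed trait is checked for exhaustiveness
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

println(area(Rect(2, 3))) // 6.0
```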


As a PL enthusiast that likes to dabble in different paradigms and basically learns a programming language or two every other year by far the hardest language to wrap my head around has been Haskell.

There are a few reasons for this that get dismissed by the day-to-day practitioners.

First, there is the cognitive rewiring required to think of everything as an inert expression. In Haskell there are no actions, just descriptions of actions that the runtime manages. I'm specifically talking about I/O and its monadic implementation.

Second, learning about monads is not enough. When effects are encapsulated as monads you need to understand monad transformers to fruitfully combine effectful computations. This is by no means the best way to do things, because there are also implementations of effectful computations with row types and extensible effects, e.g. PureScript, that require a lot less cognitive overhead and are less error prone.

Third, many of the high-powered libraries in the Haskell ecosystem are so heavily reliant on categorical constructs, e.g. free (co)monads, functors, applicatives, monoids, bifunctors, Kleisli categories, etc., that getting through all that thicket to be truly productive with the libraries, instead of just copying and pasting, requires a time investment that is of dubious value to many programmers; you're better off learning about security practices on OWASP, because you are more likely to encounter a SQL injection than a Kleisli category of a monad.

I'm not saying knowing these things is not useful or won't make you a better programmer, but just to demonstrate that there is indeed a cognitive overhead that might not be worth it. I like category theory as much as the next mathematician, but programming with categorical constructs is not necessarily the optimal way to do things when all I need is a screen scraper for an XML feed.


>There are a few reasons for this that get dismissed by the day to day practitioners.

Perhaps they get dismissed because we all went through the process of learning haskell in order to become day to day practitioners, so we know these reasons are made up nonsense.


"""YOU NEED TO LEARN ABOUT MONADS JUST TO WRITE 'HELLO WORLD', LIKE WHAT IS UP WITH THAT."""


Peyton Jones and some other nice people work at Microsoft to improve Haskell, which is IMHO the right tool.


I went from scala to C# 7 years ago, and after some feature withdrawal, I began to like C# for its simplicity. It was not very elegant, I often had to settle for ugly solutions, but this was actually a step up from scala where I would obsess to find the most elegant solution for my code. In C#, the best way is more obvious even if not that good, and you settle more quickly.

Because the language is designed holistically with its tooling, the error messages are always good and they don't make dubious decisions (getting rid of semi colons) that are theoretically sound but screw up tooling.


> they don't make dubious decisions (getting rid of semi colons) that are theoretically sound but screw up tooling

Wait WUT? I'm not sure you know what you are talking about ... that just doesn't make any sense.

Scala 7 years ago is fundamentally different from today's Scala, so I'm not seeing how your experience is relevant anymore.


Taking out semicolons reduced the amount of redundant information available for error recovery in the parser, leading to poorer error messages and reducing the quality of interactive IDE feedback. I know this because I was working on the IDE when Martin made the decision. Maybe it was the right decision, but there were definitely costs!

There is a very good reason they will never eliminate semi colons from C#, the visual studio team would never let them...C# is developed in a completely different style from Scala.

I'm sure you are right: Scala today is probably a much simpler language with great IDE support... 7 years ago, it was a bunch of advanced features and building a decent IDE for it was a struggle.


Yes, that's why you would use F# in .NET if you want something like Scala. And then I heard many people say that .NET wins again.


Meh, F# is pretty good, but Scala has a better type-system. You cannot model type-classes in F# for example, and there's some ugliness in it, in true OCaml fashion - I don't like how you've got to deal with 2 types of generics, or the prevalence of "static" methods - complete hacks that go back at least to C++ (Scala has no such thing as static methods, btw). In many ways Scala is a very elegant language; too bad that many people are scared by these myths that are flying around.

That said, people are missing the forest for the trees. This isn't about the language, but rather about the runtime and the standard library. That's the true value of the JVM, and that's the true value in .NET.

There wouldn't be any problem for porting Scala to .NET. Sure, it would be difficult as .NET's reified generics are maybe too limited for Scala's type-system, but I'm sure somebody could make it work, at least partially.

And there was a Scala for .NET, but it lacked interest so it died. But hey, you can still run stuff built in Scala on top of .NET by means of IKVM ;-) It has poor performance compared to a JVM, but then again, people are willing to build stuff in Scala for Android. Also Clojure.NET is doing pretty well from what I've heard.

As a Scala developer that loves Scala and the JVM, I personally find this announcement very exciting. As finally, the JVM has true competition ;-)


You can model type classes in F#, but it's not pretty[1].

Interestingly, C# can model type classes much more straightforwardly (using implicit conversions), and this is one of the reasons I continue to use it along side F#.

[1]http://stackoverflow.com/questions/9868327/f-type-constraint...


> There wouldn't be any problem for porting Scala to .NET.

Other than the fact that .NET reified generics and Scala's type system don't play well together, which was one of the big challenges facing Scala.NET.


Scala for .NET never actually worked, at least when I tried it. I submitted a few bug reports and started learning C#.


F# is elegant but ultimately held back by its lack of higher kinded types.


Held back from what? I mean, I'd love it even more if it had higher kinded types, but it's a pretty great language as is.


It's simply less expressive.

Just as Go is fine without generics, F# is fine without HKTs.

It's a sad realization for those coming from a language that does support HKTs. Still a great language.


I can't tell if you are serious or joking :). It is definitely a trade-off to be made; the power offered by higher-kinded typing doesn't come for free.


F# is not really like Scala in the sense that it's more "extreme" in terms of FP; you can't easily fall back on more classical patterns if FP doesn't work well for your problem.


Yes you can. I haven't found a traditional Object-Oriented pattern I can't use in F# ... but honestly, I've usually just ended up molding things to fit pattern-matching and reduced the complexity of things in the process.


> F# [...] more "extreme" in terms of FP

In the sense of lacking higher-kinded types and typeclasses?


In Scala, there are just too many ways to do things. It's optimized for flexibility and easy writability. C# on the other hand may be a little more verbose, but at least everyone can read everyone else's code.


IMHO this works to Scala's detriment. I very much enjoy Python and Clojure precisely because they have one, generally accepted way of doing things. Is it a silver bullet? Of course not, but this works for me 9 times out of 10, and I find that coming back to my code later (or sharing it with colleagues) is far easier because most code will be structured in a familiar way, so it takes less time to understand the unique bits that solve the problem at hand.


Actually that's the one thing that's driven me away from Clojure.

I love the simplicity behind the language and it makes simple things actually simple - which isn't as trivial as it sounds. But once you start dealing with complex things in Clojure the language doesn't do much for you; if you can't fit your problem into its provided toolkit the code written will actually be horrible - for example, go look at the core.async [1] implementation. Just reading that code gives me a headache - I understand it's complicated stuff with buffering and all, but having type annotations on the protocols and variables used would be extremely helpful when trying to parse that code. Types help me think more abstractly when I'm reading code, as I can take them as compiler-enforced contracts and think of them in abstract terms; in dynamically typed languages the complexity and the amount of things you need to be aware of is just too overwhelming IMO.

[1] https://github.com/clojure/core.async/blob/master/src/main/c...


I think the nice thing is that you can start without types and add them later, statically or with dynamic checks. First I play around with data; once I know what I want, I can write some schema annotation.

You can activate validation for everything in development and then in production only activate validation on your API endpoints. In a future version, you will probably be able to generate core.typed stuff directly from schema.

Other than that, I think extremely high-performance code a la core.async is not the norm; there the types are not used for the programmer, but rather for the VM. I have used type hints maybe once or twice in the last couple of years.


I think you misunderstood what I was trying to say - core.async has no type annotations; you can't even figure out what the protocols are supposed to do because there is no documentation and no types - you literally have to find the places it's used and reverse engineer the protocol semantics. And the function I linked to is very hard to keep track of - a large part of that is inherent in synchronization/buffering logic being complex, but the language does nothing to make the code simple or easy to read - I'd bet that function would be easier to understand in Java than in Clojure.

I've tried core.typed; it's amazing in theory but in practice it's unusable (at least for me), and nobody annotates their code anyway. If core.typed becomes more widespread in the Clojure community I might actually give it a second chance; in the meantime I'm just going to use Kotlin for the JVM. It has transparent Java interop and reduces the Java noise, and the tooling (IDEA) is infinitely better than anything I've seen in Clojure land.


Clojure is my favorite language by far, but there's hardly "one, generally accepted way" of doing everything.


Funny, I find C# much harder to read. It's almost as if it is easy to understand code for languages you know well, and harder to understand code for languages you don't.


You find C# harder than this?

  trait SeqLike[+A, +Repr]
    extends Any
    with IterableLike[A, Repr]
    with GenSeqLike[A, Repr]
    with Parallelizable[A, ParSeq[A]] { self =>

or

  def :+[B >: A, That](elem: B)(implicit bf: CanBuildFrom[Repr, B, That]): That


If you want to make a compelling argument to anyone that actually knows Scala, you need to drop the idea you can make it with scala.collections. Find another altar. This is a complete non-issue for the day-to-day Scala developer.

Outside of some pretty extreme examples (scalaz maybe), I don't think I've actually ever seen someone else's source that implemented a custom collection.

And you're purposefully trying to confuse people by digging into what might as well be "internal" signatures.

What do the actual, official docs say about SeqLike :+ ?

  > def :+(elem: A): Seq[A]
  > [use case] A copy of this sequence with an element appended.
Oh wow. Why does that look so much simpler than what you posted? Could it be you're exaggerating for effect and disingenuously implying that Scala developers will be routinely faced with the challenge of trying to parse such signatures? Of course you are.

Can you dig down further and find the CBF signature? Yes. Do you need to for anything outside of spreading FUD? Nope. Is it a bad thing that it shows how the sausage is made underneath the covers? I don't think so.


Personally, I view the collections library as a showcase for how that language works when you're designing an API to be used for 3rd parties. If it's incomprehensible, that worries me.

If the collections API is purposely trying to confuse people, blame the Scala team, not me.


That's just BS and given the size of your axe to grind, you have to know it.

Akka? Spray? ScalaUtils? Do they take effort to learn? Yes. Are they anything like scala.collections? No.

Why didn't you pick scala.concurrent? Or scala.util? Or any number of other packages? Why didn't you use the signatures in the official docs as-presented?

The Scala team goes out of their way to present something simple and easily digestible to most programmers, even the ones unfamiliar with Type Classes.

I mean, you basically just use a Seq like an immutable version of Ruby's Array and it just does what you want 99.999% of the time.

How often do you see people propose that, because much of Ruby's Array implementation would be incomprehensible to 90% of Ruby programmers, Ruby is therefore a bad language? It's an entirely ridiculous metric.


[SerializableAttribute] [ComVisibleAttribute(false)] public class Dictionary<TKey, TValue> : IDictionary<TKey, TValue>, ICollection<KeyValuePair<TKey, TValue>>, IDictionary, ICollection, IReadOnlyDictionary<TKey, TValue>, IReadOnlyCollection<KeyValuePair<TKey, TValue>>, IEnumerable<KeyValuePair<TKey, TValue>>, IEnumerable, ISerializable, IDeserializationCallback

Both languages can be difficult to decipher. I do find that Scala's syntax is much more flexible, which can lead to more stumbling when trying to read code.


It's not as hard when you format it a little nicer (and when you have syntax colouring):

  [SerializableAttribute]
  [ComVisibleAttribute(false)]
  public class Dictionary<TKey, TValue> :
      IDictionary<TKey, TValue>,
      ICollection<KeyValuePair<TKey, TValue>>,
      IDictionary,
      ICollection,
      IReadOnlyDictionary<TKey, TValue>,
      IReadOnlyCollection<KeyValuePair<TKey, TValue>>,
      IEnumerable<KeyValuePair<TKey, TValue>>,
      IEnumerable,
      ISerializable,
      IDeserializationCallback
  { ... }

And this is of course ignoring the fact you copied that from the documentation, which shows all interfaces the class implements. IDictionary also implements ICollection & IEnumerable, so you wouldn't need to include them. So to actually implement this class it would more likely be

  [SerializableAttribute]
  [ComVisibleAttribute(false)]
  public class Dictionary<TKey, TValue> :
      IDictionary<TKey, TValue>,
      IDictionary,
      IReadOnlyDictionary<TKey, TValue>,
      ISerializable,
      IDeserializationCallback
  { ... }

maybe less


It is easier; there is only one concept to understand: generics. With Scala you must understand generics, co- and contravariance, and implicits.
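For anyone unfamiliar, a quick sketch of the variance part with made-up types: covariance (+A) lets a container of a subtype stand in for a container of the supertype, and contravariance (-A) goes the other way:

  class Animal
  class Dog extends Animal

  // covariant: Box[Dog] is a subtype of Box[Animal]
  class Box[+A](val value: A)

  // contravariant: Sink[Animal] is a subtype of Sink[Dog]
  class Sink[-A] { def put(a: A): Unit = () }

  val animals: Box[Animal] = new Box[Dog](new Dog) // compiles thanks to +A
  val dogSink: Sink[Dog]   = new Sink[Animal]      // compiles thanks to -A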


That's true.

A talented C# dev will need to know co/contravariance as well as implicits considering they also exist in C#.

In Scala, being familiar with these concepts is practically a necessity, though.


> A talented C# dev will need to know co/contravariance as well as implicits considering they also exist in C#.

True for co/contravariance. For implicits, that depends on which implicits you are talking about.

Scala has:

* implicit conversion operators (these have existed in C# for some time, under the same name)

* implicit parameters (C# doesn't have an analog to these that I can think of)

* implicit classes (C# has an analog in classes providing extension methods, though the classes themselves are distinguished by a keyword, as they are in Scala; in Scala, these are essentially syntactic sugar for creating a normal class and an implicit conversion from the extended class.)
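A sketch of the second bullet, the one with no direct C# analog. The names here are made up purely for illustration:

  // Hypothetical example: the compiler fills in an implicit parameter for you
  case class Config(retries: Int)

  // Callers don't have to pass cfg; the compiler searches the scope
  // for an implicit value of type Config
  def fetch(url: String)(implicit cfg: Config): String =
    s"GET $url with ${cfg.retries} retries"

  implicit val defaultConfig: Config = Config(3)

  val a = fetch("http://example.com")            // compiler supplies defaultConfig
  val b = fetch("http://example.com")(Config(5)) // or pass one explicitly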


> implicit parameters (C# doesn't have an analog to these that I can think of)

C# allows default values for parameters, but it's not quite the same.

Example of C# default value for parameter:

  static void Addition(int a, int b = 42)  
  {  
    Console.WriteLine(a + b);  
  }

  Addition(4); // Prints 46  
  Addition(4, 5); // Prints 9
In Scala, the default (implicit) value has more complex rules.

Like C#, implicit parameters must come after non-implicit parameters (if any).

Unlike C#, the implicit value does not need to be defined in the signature. Also unlike C#, if you provide the value for one implicit parameter, you must provide the value for all implicit parameters.


> C# allows default values for parameters, but it's not quite the same.

Right. Scala has C#-style default parameters as well, implicit parameters are a different (though very loosely related) thing.


Can you give me examples of implicits in C# ? I am unable to see a parallel.


  using System;

  public class Program
  {
    public class Person
    {
      private string _name;
      private int _age;

      public Person(string name, int age)
      {
        _name = name;
        _age = age;
      }

      public static implicit operator int(Person p)
      {
        return 42;
      }
    }

    public static void Main()
    {
      var person = new Person("Jim", 51);
      var number = 100;
      AdditionPrinter(person, number); // Prints 142
    }

    public static void AdditionPrinter(int a, int b)
    {
      Console.WriteLine(a + b);
    }
  }


Also consider extension methods, which serve some of the same purpose as implicits in Scala.


Good point.

Though implicits I feel are so much simpler than many people fear.

From a Ruby perspective: Just think of them as operating similarly to Refinements, except not completely insane because there is no global scope at compilation. You only have to worry about your own package, imports and inheritance.

Tracking down an implicit generally takes all of 10 seconds, and never more than a few minutes. Even moderately brain-bendy ones like akka.patterns.ask (where does the implicit "?" come from? "ask" is actually an implicit conversion to a class that defines it: https://github.com/akka/akka/blob/master/akka-actor/src/main...).


I'm a fan of C#, although I prefer F#; however I have to agree, I think the choice of < > for generic definitions was a real mistake, it can be so hard to parse. The side effect of this is that first-class functions (Func<>) are chronically underused by C# programmers.

Annoyingly F# brought them along for the ride too, although you rarely have to explicitly write them.


What would you have used for generic arguments? I think they work fine, and make use of Func<T> often.


I haven't tried out many alternatives, but there's something about nested <> that seems to throw me off each time.

I wrote a library of monads for C# [1], and implementing the SelectMany method is a good example of how messy it can be:

    public static RWS<R, W, S, V> SelectMany<R, W, S, T, U, V>(
        this RWS<R, W, S, T> self,
        Func<T, RWS<R, W, S, U>> bind,
        Func<T, U, V> project)
    {
    }
I quite like F#'s alternative syntax for single type generics:

    Option<int>
Can be written:

    int option
Obviously it's a personal thing, so I doubt I would be able to suggest anything that would change your mind; I just had a quick go at an alternative, and I quite like this:

    public static SelectMany R W S T U V ( this self : RWS R W S T, bind : Func T (RWS R W S U), project : Func T U V) : RWS R W S V
    {
    }

    class Thing int
    {
    }

    class Thing T : BaseThing T
    {
    }
It's moot anyway really, it is what it is. I've just found it cognitively challenging over the years.

[1] https://github.com/louthy/csharp-monad


Given proper indentation, I fail to see what is difficult to understand here. You have a couple of attributes, a generic type declaration (with easy-to-understand syntax), and an inheritance list.


That's how it goes.

A Scala dev will look at the Scala type signature and shrug.

A C# dev will look at the C# type signature and shrug.

As usual, unfamiliar things are difficult and familiar things are easy.


Yeah, and not knowing Scala myself, I can't say that the OP didn't purposely use a contrived example. That said, I still don't think a class definition which is little more than a bunch of interfaces being implemented is confusing to anyone who got through chapter 4 of "Learn C# in 21 days".


Thank you! I'm working on a team that inherited some Scala code from contractors that no longer work for us and we've found that (at least their) Scala is almost as hard to read as bad Perl. Compounding this is a severe lack of good online documentation outside of the standard library reference and simple tutorials which don't go into the depths we need to know (like how do you create a Manifest from Java to call into Scala code that needs it? Or what even is a Manifest?). At least with Perl you have decades worth of online discussions and pretty good documentation of the core language.


> Thank you! I'm working on a team that inherited some Scala code from contractors that no longer work for us and we've found that (at least their) Scala is almost as hard to read as bad Perl.

I've seen C# that is almost as hard to read as bad Perl -- also, not surprisingly, from contractors who weren't going to be maintaining the code.

I don't think this says anything about C# or Scala, I think what it says is something about the natural result of the economic incentives of people being paid to throw code over the wall before they leave for greener pastures.


Yeah, I would agree with that. Scala is, at most, probably 2-3% responsible for the problems we're having with this project (due to inexperience with it). The vast majority of the blame rests on our contractors and on our management's lax oversight of them.


I think you just need to hire an experienced Scala developer for a month or two until your team gets up to speed. It'd be like a C# team having a Ruby app dumped in their lap. You're practically preordained to hate it. ;-)

The documentation for Scala is actually pretty outstanding IMO. I had a much easier time compared to Ruby. Daniel Westheide's introduction series "The Neophyte's Guide to Scala" is the single best language intro I've ever seen, for any language. It even goes into enough of Akka to get a basic chat application going. And while Akka works nicely with Scala, its conventions are so different it might as well be a different language. In idiomatic Scala you probably don't run into mapTo or vars too often, for example, but that's going to be a pretty common sight working with Akka.

At least the documentation on PlayFramework.com is mostly current, and doesn't gloss over important detail. Unlike rubyonrails.org, where even five years ago the basics were horribly dated, inaccurate, and skipped important background.


> we've found that (at least their) Scala is almost as hard to read as bad Perl. At least with Perl you have decades worth of online discussions and pretty good documentation of the core language.

Programming languages you don't know are harder to read than programming languages you do know.

Unlike "bad perl", you have types that tell you exactly what something does.

> ... and simple tutorials which don't go into the depths we need to know (like how do you create a Manifest from Java to call into Scala code that needs it?

Calling Scala from Java is not beginner level material that winds up in a tutorial, and it's a bad idea to begin with; you're stuck speaking a pidgin Scala using Java-only constructs just to maintain Java interop.


> Programming languages you don't know are harder to read than programming languages you do know.

I will definitely agree with you there. It would help our team significantly if we had a Scala expert we could consult but there's not enough in the budget (monetary or political) to hire one. We are pretty much left to fend for ourselves with Google, Stack Overflow and a couple reference books. This project also contains a C++ component that would probably have been just as impenetrable to my teammates (who have C but no C++ experience) if I hadn't already been fairly familiar with the language.

> Unlike "bad perl", you have types that tell you exactly what something does.

Static typing doesn't always help when you actively try to subvert it. There are casts to and from Any all over this codebase. Reflection is used everywhere, even when it's not necessary. There are even places where they convert between types by serializing an instance of one type and deserializing it as an instance of another (with similar fields, but not actually related in the type system). They also loved Option[] types, which might not be so bad if they weren't also storing Some(null) into them in just enough places that you forget to check for it but it still blows up in your face at least once a month. Basically, the original authors of this code tried their best to destroy the usefulness of the Scala type system and succeeded fairly well.

> Calling Scala from Java is not beginner level material that winds up in a tutorial, and it's a bad idea to begin with; you're stuck speaking a pidgin Scala using Java-only constructs just to maintain Java interop.

By mandate of our management we are not allowed to write any new Scala code except where absolutely necessary. The contractors went behind our management's back to write it in the first place (we told them they could use Java 8, which they interpreted to mean they could use anything that would run on a Java 8 JVM). After that was found out (about 1/3 of the way through the contract and too late to rewrite everything without blowing our contractual deadline with our customer), new Scala code was banned and they had to (and we have to) use Java as much as possible after that point. This is also not helped by the fact that at the time they handed the code over it was barely half-baked and performing at less than 25% of the needed throughput, and we're left trying to finish and fix it.

Of course, when it can take three or four developers (with a combined background in Java, C#, C, C++, Python, Javascript, and an academic familiarity with OCaml-family functional languages) a half hour or more to figure out what some of the methods in this codebase even do (at both a syntactic and conceptual level), I am going to put (a small) part of the blame on Scala for allowing such impenetrable code to be written in the first place. I'm certainly not placing all or even a significant amount of the blame for this project's problems on Scala, but it is certainly not helping.

Thanks for letting me vent.


You intentionally chose code that is hard to read...I could easily do the same with C#. It isn't representative of the vast majority of the code that I use nor write.

That being said, when you understand the nuances of type annotations, that code is quite readable in the mathematical notation sense. It is concise and takes more time per byte to understand, but it tells me enough about the code that I rarely have to read the actual implementation to know exactly what it does.


It's from the standard collections library, the most-used code in any language.

It is in fact representative of the vast majority of code you use.

The fact that it wasn't immediately clear to you that a lot of your code DOES use this code is probably the strongest argument I could make.


Now I love C#, but in defense of the above poster, standard library code is often pretty gnarly stuff.

C++ is the poster child for this. No one argues that C++ is a bad language just because the standard library is full of magic.


Actually, I hold exactly that opinion about C++ :)


I find the STL great! Sure, some compiler template messages can be tricky to understand but STL doesn't detract from C++ - you don't have to use the STL (but you're probably losing out if you don't use it and roll all those containers and algorithms yourself)


Oh using the STL itself is fine, but some of the code behind it looks like a completely different language!


You have inferred more about me than you are entitled to, and even if you were right, your argument is still poor. Have you read the C# stdlib? It is in no way any better. Stdlibs are always more complex than the code that builds on them, and that is by design. The same code that I use for a one-liner has to be robust enough to apply to Play, Akka, Spark, Finagle, etc. That is the primary reason why stdlib documentation is made more accessible. I have never had to read the Scala stdlib source code, and likely only a tiny minority of Scala devs have done so.


The internals of stdlibs tend to be nasty, but externally they're designed to be clear. The problem here is that externally these interfaces are extremely messy and hard to follow, i.e. the abstraction doesn't seem to work.


That is a good distinction to make. But then, you need to consider other factors as well. How powerful is the API, how honest is the API (doesn't hide internal gimmicks), how consistent is the API across the lib, and how DRY it is (to the STDlib implementors).

Scala's STDlib might be more powerful, more honest and more consistent, and also might be striving to be DRY for the implementors.

Personally, I feel that Scala's Collections goes overboard in hitting the above goals, and the resulting signatures are too verbose. And this increases the cognitive load for the subset of people that need to bother about it (performance tuners, architects, etc). But that's not a fault of the language per se.


I inferred that you probably use standard library collections. Not really an insulting inference, it'd be insulting to assume you didn't.

Personally, I make it a habit to always read (or try to read, in this case) standard library source code that I'm depending on to be correct.


Have you read the implementation of the python collection types (list, map, etc.)?


It is not representative, probably because the IDE hides the ugliness.


The official docs do too.

To get the "ugly" signature you have to look at the actual source, or examine the method definition in the official docs.

They go out of their way to make sure to present an easy-to-understand version for actual users.

Which is why every actual Scala developer rolls their eyes at someone bringing up CBF. You have to go out of your way to try to be confused about it. And if that digging stops at finding the method signature, well, probably serves you right for being confused. ;-)

Is CBF amazing? Probably not. Is it some sort of confusing eternal battle for most Scala developers? Absolutely not.

I mean just look at the official docs, find out where he got these weird examples from, and then tell me this is a well reasoned argument.


LOL! amen brother, tell it like it is.... people seem to have NO appreciation for the idea of simplicity.


Actually, the vast number of features in C# isn't improving readability. But MS can't just throw out the bad half of the features, which makes C# practically unimprovable. Scala has a similar problem.


Why not? I just can't see why every language needs to gather cruft rather than evolve elegantly. Imho major versions of compilers should aim to NOT remain backwards compatible, so they can remove or change things in the syntax while they add things.


Imagine this. I'm using Cool-Lang version 2. Now Cool-Lang 3 comes out. It has lots of features that I want to use. If the new version is compatible with the old one, I download it and start using it. If not, I have to convert my whole codebase first. In practice, that means it may be many years before I can use version 3. Look at Python. In 3.0, they decided to clean up some stuff from 2.x. They even wrote a tool 2to3 which will automatically fix most of the incompatible stuff for you, so converting the whole codebase is almost automatic. The problem is "almost" automatic is not enough. Many people and groups have yet to convert their old code.


Py2 to Py3 is too far a jump, which is made obvious by the fork-like nature of the project. Minor changes like API deprecation are of course already taking place in most projects. You just need to strike a balance.

I rewrote at least 100k lines from C#1 to C#2 when generics came in 2.0. It was compatible in that the non-generic code still worked, but I don't really see what difference that made. If we didn't want to upgrade the code then we wouldn't have updated the compiler!

Obviously if you don't want the benefits of Py3 then you don't have to pay the upgrade cost either.


> But MS can't just throw out the bad half of features, which makes C# practically unimprovable. Scala has a similar problem.

How? Scala has been throwing out bad features in pretty much every single release for the last 5 years.

It's nothing breathtaking, it just gets done, people are happier and the language gets better.


Scala has hardly the user base of C# developers.


So how has Scala "a similar problem"?

Plus, for this question, the size of the user base doesn't matter: If a developer has to adapt his code to a new language version, he/she doesn't care whether ten or ten-thousand other people have to do the same.


It matters for the language designers, how many people they are willing to make angry by breaking their code.

Scala's smaller user base means there are a lot fewer people to get mad.


And you still haven't explained how Scala has a similar problem.


I am not the OP. Just commenting on this

> Scala has been throwing out bad features in pretty much every single release for the last 5 years.

You cannot throw out features in a language with the community size C# has, otherwise you just get another Python 3.0.


I'm not sure comparing C# to Python makes sense.

  a) Microsoft is in an excellent position to make such 
     changes, unlike Python, because Microsoft shops will 
     largely do whatever Microsoft tells them.

     Just have a look at how often they have incompatibly 
     changed the standard library used by C#.

  b) Not every language has to botch the transition as badly
     as Python designers did.


Scala keeps features for both imperative and functional programming, which is one category too many, because they solve the same problems, just differently. That's how Scala is supposed to work. It's impossible to remove either approach.


That's not even remotely related to the topic, is it?


Theoretically, everything is related to each other. Practically, no.


F# is more of a .NET analogue to Scala.


I'd say it's the .NET analogue to Scala with added elegance.


And removed flexibility that makes it awkward — see for example the lack of higher kinds.


Why would you compare C# (although they keep adding functional programming concepts to it) to Scala, instead of comparing F# (which can commingle with C# wherever you need it) with Scala?

I haven't looked into F# or Scala too much, but I bet F# comes out looking far better, because it was so deeply inspired by Haskell (but made practical).


> I bet F# comes out looking far better, because it was so deeply inspired by Haskell (but made practical)

Eh what? Scala is a lot closer to Haskell than F#/OCaml, and can express roughly the same things, while F# is a lot less expressive.


Can you explain why you think so?


Most of the selling points on scala-lang.org still apply over C# - type inference, pattern matching, case classes (C# makes these less bad than Java, but the field/property distinction can still bite you), an inheritance model that supports the good parts of multiple inheritance without the problems, covariance/contravariance, for/yield syntax that's generic and user-controlled, higher-kinded types.
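To make a couple of those selling points concrete, here's a minimal, hypothetical sketch (the `Shape` hierarchy is invented for illustration) of what case classes and pattern matching buy you — structural equality, `copy`, and exhaustive matching come for free:

```scala
// Sealed hierarchy: the compiler can check match exhaustiveness.
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

object Demo {
  // Pattern matching destructures case classes directly.
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  def main(args: Array[String]): Unit = {
    val c = Circle(1.0)
    assert(c == Circle(1.0))             // structural equality, no boilerplate
    val bigger = c.copy(radius = 2.0)    // non-destructive update
    assert(area(bigger) > area(c))
  }
}
```

The equivalent C# (pre-records) needs hand-written `Equals`, `GetHashCode`, and copy methods for each type.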

It takes a while for your code to get big enough for the last part to matter, but it's huge when it does - you can handle any kind of "context" in a generic way, e.g. async calls, error handling, audit logging, software transactional memory, or custom things for e.g. database access. F# has a number of these specific implementations but they're kind of "hard-coded" in the language rather than fully extensible. So I guess the pitch is: imagine C# async/await could be "just a library", and other contexts that you wanted to handle similarly could also be a library, and the resulting functions are first-class citizens in the type system (no awkward choices of whether a particular function is async or not, you can handle that generically).


> So I guess the pitch is: imagine C# async/await could be "just a library", and other contexts that you wanted to handle similarly could also be a library, and the resulting functions are first-class citizens in the type system (no awkward choices of whether a particular function is async or not, you can handle that generically).

That's true in F# -- async is just a library in a computation expression.


> F# has a number of these specific implementations but they're kind of "hard-coded" in the language rather than fully extensible.

That's not true. You can extend the language yourself with "computation expressions" (and that's in fact how the 'async', 'let!' et al keywords are implemented).


True, but what you can't do is abstract over them; you can't write a function that operates on "some generic computation expression" (you can't even write that type). So you can't write "sequence" or "traverseM" or more complicated/specific things you build on top of these.
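For the curious, here's a hedged sketch of what "abstracting over the context" means in Scala — the type parameter `F[_]` names "some generic computation", and `sequence` is written once for every `F` that has a `Monad` instance (a hand-rolled typeclass here; in practice you'd use a library like Cats or Scalaz):

```scala
// A minimal Monad typeclass: any F[_] with pure and flatMap qualifies.
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  def map[A, B](fa: F[A])(f: A => B): F[B] = flatMap(fa)(a => pure(f(a)))
}

object Monad {
  // One instance, placed in the companion so it's found implicitly.
  implicit val optionMonad: Monad[Option] = new Monad[Option] {
    def pure[A](a: A): Option[A] = Some(a)
    def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa.flatMap(f)
  }
}

object Generic {
  // Written once; works for Option, Future, Either, a DB effect...
  def sequence[F[_], A](xs: List[F[A]])(implicit M: Monad[F]): F[List[A]] =
    xs.foldRight(M.pure(List.empty[A])) { (fa, acc) =>
      M.flatMap(fa)(a => M.map(acc)(a :: _))
    }
}
```

This is exactly the function the comment above says you can't even write the type of in F#, because `F[_]` (a type constructor used as a parameter) is a higher-kinded type.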


There is an encoding supporting that level of abstraction (perhaps though not as straightforward as you'd prefer); namely using interfaces (which are first-class) to encode module signatures. See: https://gist.github.com/t0yv0/192353


+1 as a Scala fanatic :) It's all about what works for you though.


> C# as a language is better than Java, but not as good as Scala.

However, Scala is not a first-class citizen on the JVM, while C# is the systems language of the CLR — already a huge difference.


As someone who has written applications in C#, Java and Scala (amongst others), I'd be curious as to what makes you think Scala is a winner relative to C#?


One big thing: Immutability by default. That's possible in C#, but it's hard to be sure everything is immutable => http://stackoverflow.com/questions/5097287/how-to-create-imm...

A lot of interesting aspects of Scala are possible only because of immutability.

And a lot of smaller things:

* Typeclasses. In Scala you can implement them with implicit parameters; I have no idea how you would do it in C#.
* Case classes. Again, in C# you can implement something that kinda looks like one with readonly fields and just a getter, but it's not as good: you won't get stuff like equality and copy for free, you'll have to write the boilerplate.
* Pattern matching, in particular with case classes.
* Option. You can create your own, or use the Option from F#, but all the libs you use will be returning null all the time.
* And more...
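The typeclass point deserves a concrete illustration. A hedged sketch (the `Json` trait and instances are invented for the example): implicit parameters let you retrofit behavior onto existing types, including ones you don't own, without modifying them:

```scala
// A typeclass: "things that can be rendered as JSON".
trait Json[A] { def toJson(a: A): String }

object Json {
  // Instances live in the companion, so implicit search finds them.
  implicit val intJson: Json[Int] = new Json[Int] {
    def toJson(a: Int): String = a.toString
  }
  // Derived instance: if A has Json, List[A] does too.
  implicit def listJson[A](implicit j: Json[A]): Json[List[A]] =
    new Json[List[A]] {
      def toJson(as: List[A]): String =
        as.map(j.toJson).mkString("[", ",", "]")
    }

  def render[A](a: A)(implicit j: Json[A]): String = j.toJson(a)
}
```

`Json.render(List(1, 2, 3))` resolves the `List[Int]` instance automatically by composing the two implicits; the closest C# analogue is extension methods, which don't compose or get resolved by type this way.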

Additionally there are aspects that are more of a personal preference, like how the "map" function is called "Select" and "filter" is called "Where", and so on. All functional languages use the same names, the SQL-like naming for manipulating collections is just confusing for me.

At the end of the day, Microsoft tried to build a better Java when they designed C#. And it is indeed better than Java, especially because they moved fast with new versions while Sun/Oracle was terribly slow (why did we have to wait so long for lambdas?). But C# mostly sticks to the same concepts as Java (OOP), while Scala truly embraces both FP and OOP.


I'd happily switch from Scala to F# but I would miss higher kinds and using implicits for typeclasses.


Doesn't Scala run on the JVM, and isn't this announcement about porting the .Net runtime? And you're really drawing this comparison?


Apples and Oranges, my friend.



