Is curl|bash insecure? (sandstorm.io)
151 points by jsnell on Sept 25, 2015 | 160 comments


First off, my problem has always been not just with `curl ... | bash` but specifically with `curl http://... | bash`. Notice the http, not https scheme there.
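For comparison, the usual hardening when a script is the distribution mechanism is to fetch over HTTPS to a file, check it against a published checksum, and only then execute it. A minimal sketch, with hypothetical file names (in real use you'd first fetch both `install.sh` and `install.sh.sha256` from the vendor's HTTPS site with `curl -fsSLO`; here the "download" is simulated locally so the steps are concrete):

```shell
# Simulate the downloaded script and its published checksum (hypothetical):
printf 'echo hello from installer\n' > install.sh
sha256sum install.sh > install.sh.sha256

# Verify before executing; bail out on mismatch instead of piping to bash.
if sha256sum -c install.sh.sha256; then
    bash install.sh
else
    echo "checksum mismatch - refusing to run" >&2
    exit 1
fi
```

The point is that the script lands on disk where it can be inspected and verified, rather than flowing straight from an (possibly plain-http) socket into a root shell.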

Second, I feel like this is a huge anti-pattern. Why the heck are you writing a custom installer like this? Each platform has its own system for doing installs and upgrades. Package it for the OS you actually support, such as Debian, Red Hat, etc. It's pretty much a one-time investment and it makes your software that much easier to maintain on my system. `curl | bash` makes me think that (a) your software will not play nicely with my OS and I'll have to figure out your weird locations for where you are putting your init scripts, config files, data, etc., and (b) that upgrades will be manual and painful. In other words, `curl | bash` is for chumps (tm).


You're not wrong, exactly, but what ends up being brutal is not just packaging, it's packaging and supporting dependency management + post-install configuration.

RPMs, for example, are pretty terrible at allowing you to specify custom configuration parameters, and you can't use them at all to install something that you don't have root privileges to access.

Custom installers also allow you to have a lot more flexibility in terms of saying, "Okay, let's look to see if there's a mysql instance installed on the system, there's not, so rather than installing an OS one, we'll go ahead and install our preferred mysql version and treat it as an embedded db".

RPM-driven upgrades also end up being a lot messier, since you can't do templated or merged configuration files easily.

I'm not saying that the curl | bash strategy is good either, but forcing OS packages down people's throats isn't ideal for various situations.

EDIT: also, you can't do multiple installs of the same RPM package on a system. So even if you wanted to install 5 instances of app X on a machine, possibly for testing reasons, you can't, and you end up having to use VMs or containers or hacking something together.


The thing is that most of these types of projects have very rigid requirements for the OS they support. It's often not just Debian-like or RPM based, it's something like Ubuntu 14.04 64-bit only. If you are already doing that, you already bought into the OS. Support it properly.

RPM is not my favorite format. I much prefer Debian-based distros specifically because dpkg and apt are so robust. I have never had a problem with managing configuration files there. You either provide a sane default config, put a config in /usr/share/doc/your-package/, or provide a configuration utility. All that works just fine.

And you absolutely should use the system's packages for your dependencies. When you install your own copy of MySQL, nginx, or anything else, you assume responsibility for all upgrades to those subsystems. If next week MySQL releases a patch for remote code execution vulnerability, I want that patched right away. My OS's package manager will do that. Will your custom installer?

If you must, break up your installer into two pieces. First, use a simple Makefile-like setup (avoid autotools though), with generic targets like build, install, clean, dist-clean. Then wrap that in a .deb and .rpm. This way you can distribute system packages that cover 90% of the use cases, and a signed self-contained tarball for the 10% of people who for whatever reason cannot use your pre-built packages.
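A minimal sketch of what that generic layer might look like: a plain install script honoring the conventional DESTDIR/PREFIX knobs, so a .deb/.rpm build can stage into DESTDIR while a tarball user can point PREFIX at $HOME. All names here (the app "myapp", the script itself) are hypothetical, and DESTDIR defaults to ./stage purely so the sketch is safe to run; a real script would default it to empty:

```shell
#!/bin/sh
# install.sh - generic install step that both a tarball user and a
# distro package build can call.
set -eu

PREFIX="${PREFIX:-/usr/local}"
DESTDIR="${DESTDIR:-./stage}"   # real scripts default DESTDIR to ""

# Simulate a built artifact (hypothetical app name):
printf '#!/bin/sh\necho myapp 1.0\n' > myapp

install -d "$DESTDIR$PREFIX/bin"
install -m 0755 myapp "$DESTDIR$PREFIX/bin/myapp"
echo "installed to $DESTDIR$PREFIX/bin/myapp"
```

A package build would invoke it as `DESTDIR="$RPM_BUILD_ROOT" sh install.sh`; an unprivileged user as `DESTDIR= PREFIX=$HOME/.local sh install.sh`. Same logic, two audiences.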

As an empirical point, hundreds of thousands of pieces of mature software get by with using RPM. Why is your project suddenly such a snowflake that it cannot fit that paradigm? Perhaps the people who develop apt/rpm/homebrew/whatever have already thought of the problems you might have and provided the tools to solve them, so you don't have to write custom installers, upgrade scripts, migration scripts, dependency management scripts, etc. yourself.


> it's something like Ubuntu 14.04 64-bit only.

Not in Sandstorm's case. We depend only on the kernel, coreutils, curl, and bash. You can even have them all statically linked, with no shared libraries installed. Sandstorm self-containerizes in a chroot to avoid dependencies. This is actually a big reason we prefer an installer script: it works everywhere.

We also include an auto-updater and push updates, often multiple times per week, so Sandstorm's deps will likely be more up-to-date than your system's.


Packages are a GREAT option if you care about upgrades and repeatability, rather than relying on random scripts, particularly because they are largely standardized. Dependency checks are built in and more easily managed than each script doing its own dependency management. This also makes it easier to establish mirrors for production deployment than having to mirror EVERY possible install source (though in many cases, folks will do that).

Packages are not 100% standardized either, since they can do whatever they want in their script sections, but distribution scripts are generally held to a higher degree of review. For instance, Fedora's package review guidelines, which feed into CentOS and RHEL guidelines, are pretty thorough. I've generally seen a bit less rigor in Debian/Ubuntu packaging.

For handling "merged configurations", look up how conf.d directories work, if you are unaware - it's an easy paradigm.
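The conf.d pattern is simple enough to sketch: the app (or its launcher) reads every fragment in a directory in sorted order, so a package and a local admin can each own their own file and upgrades never clobber local changes. A toy example with hypothetical file names:

```shell
# Sketch of conf.d-style merged configuration.
mkdir -p conf.d

# A package ships one fragment; the admin drops in another.
# Neither edits a shared file.
printf 'port=8080\n' > conf.d/10-defaults.conf
printf 'port=9090\n' > conf.d/90-local.conf

# Fragments are concatenated in sorted order; for a simple key=value
# format, the last definition of a key wins.
port=$(cat conf.d/*.conf | awk -F= '$1=="port"{v=$2} END{print v}')
echo "effective port: $port"   # prints: effective port: 9090
```

The numeric prefixes (10-, 90-) are just the usual convention for controlling merge order.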

Templating of config files should be (or at least can easily be) handled through a configuration management layer, if need be. I'd argue that no package should be doing things like running sed against existing files; this is a problem to be solved at a higher level by some form of automation framework.

Which of course is itself a big case for packages: it's much easier to know that a package is not going to be interactive, or to tell it not to be, than with random web scripts on the internet.

As for separation of differing configurations, VMs are nearly omnipresent at this point, and tend to simplify management versus bare metal.


Gosh, and let's not forget that RPMs can also contain pre- and post- hook scripts, so you could be in a situation where you say, "Install this RPM and satisfy dependencies", and another RPM is downloaded and installed as root, with you never seeing the scripts that are going to be run as part of that process.

You can inspect the scripts by hand by examining the package files, but in some ways it's less transparent than downloading a script and running that.


You are downloading software from the internet. How it executes code on your machine is irrelevant; you already trust it. Or do you review every OS package you install for shenanigans on every update? This is an especially bad point to bring up in the context of Sandstorm, which runs stuff as root all the time. Do you mean to say that you trust them enough to run a bunch of stuff as root on your server, but not to run a post-install script?


No, but I do trust some software enough to run as non-root. Sadly to install this non-root software I need to install the RPM as root, and then I have no idea what it's going to do during the pre- and post- hook scripts.


I suppose. Though have you considered that if someone has non-root local access to your machine, they are just one local privilege escalation away from becoming root? And these come out frequently enough that you can pretty much count on one being found soon. In other words, IMO you should either trust or not trust a piece of code to run on your machine. The distinction between running code as root vs. non-root is more for maintainability and sanity than for actual security.


I don't know about RPM, but for a deb you can just run "dpkg-deb --control <deb file>" to extract those scripts (into a DEBIAN/ directory) and read them.


You can extract the scripts from an rpm without installing it, e.g. with `rpm -qp --scripts package.rpm`.


AppFS ( http://appfs.rkeene.org/ ) was created for this purpose. Everything is run as an unprivileged user because there is no install phase. You just run the software and the parts it needs are fetched lazily/JIT. Also because it uses PKI end-to-end, you trust the publisher of the application (as in signed RPM/DEB) not the site/mirror you are fetching it from (as in curl|bash).


Bypassing the distros lets the developers say "version 1.15 is out now" instead of "version 1.15 is out:

1) Now + however many years until the next Debian stable

2) Now + however many months until the next Ubuntu

3) Now for Arch users

4) Now + whatever for Fedora

5) Now + a different whatever for SUSE

etc

etc"

Package-the-world-and-freeze distribution schemes are a black hole of suck for development tool projects that have their own ecosystem and want to iterate at a consistent pace.

"Of course," you could say, "the devs should just run their own secondary repos instead of submitting to the official ones".

So now instead of one bash script you just have to trust not to rootkit you, the devs distribute 10 different incompatible package formats you just have to trust not to rootkit you, at only 10x the maintenance, build script, and sysadmin burden as you run all these bespoke repos.


Note that "package-the-world-and-freeze" distributions (Debian, RHEL, whoever else) have an entire infrastructure in place for creating their own line of packages from an upstream's basic-tarball releases.

When Debian determines to, e.g., support Linux 3.6 for the next five years, it's not suddenly Linus's responsibility to backport kernel patches to Linux 3.6. Likewise, it's not your responsibility to maintain a stable branch of your software so that Debian can do less work packaging it.

The other kind of model, Ubuntu PPAs and such, where the packages are expected to track the developer's releases, frequently incentivizes the developer to create those packages as part of their own build infrastructure. But there's still no expectation that they will. The culture of the "distribution" is set up to let developers just develop, and distributors be the ones to think about distribution.


> Note that "package-the-world-and-freeze" distributions (Debian, RHEL, whoever else) have an entire infrastructure in place for creating their own line of packages from an upstream's basic-tarball releases.

> When Debian determines to, e.g., support Linux 3.6 for the next five years, it's not suddenly Linus's responsibility to backport kernel patches to Linux 3.6. Likewise, it's not your responsibility to maintain a stable branch of your software so that Debian can do less work packaging it.

That's great and all, but if I'm a Ruby dev my life isn't any better just because the burden of packaging a 4-year-out-of-date version of the runtime isn't on me. The Rails community wants to move to the latest Rails version, which wants to use modern Ruby syntax, which depends on a modern Ruby version, and the community doesn't want to bifurcate and fork every single gem for half a decade because Debian is dragging its feet about the next release. And I can't just order my users to use Debian Sid in a production environment.

The fundamental problem here is that distro packaging is optimized to distribute software for an ecosystem composed of that distro's users. The problem a lot of dev tool projects run into is that they serve a cross-cutting community of many different OSes' users, all of whom want to stay in sync with the rest of the cross-cutting community, and who couldn't give two shits about some other person's OS's package repo timelines.

That's the niche that NPM, RVM, et al are serving and it's a niche that distro packages are completely unsuited to handle. And RVM et al tend to "package" as scripts because we're right back in this mess if Debian RVM users are 4 years behind OS X RVM users.


I think you're under a mistaken view of what OS distro packages exist for. The "system ruby" and other such packages aren't for end-user (or developer) consumption; they exist for the sake of other system packages, in case a piece of system software needs to run some Ruby code.

"Application software"—the kind you deploy to a system using Chef or stick in a Docker container or somesuch—is expected to use vendored libraries and a vendored platform, the way e.g. Erlang releases do. Not just in development, but in production. This is why /opt exists.

It's only once your software becomes "system software"—once it becomes something people expect to be a stable, black-box part of the OS rather than something they integrate into their solution—that you must pay attention to the versions of other "system software."

OSes demand stability from system software for one simple reason: the ability to silently auto-update "the OS" in production to take out crashers and security vulnerabilities, without materially affecting the application software running atop it.

If you want your package to successfully participate in the OS's zero-downtime auto-update magic, then your package needs to be stable and have stable deps. No new features, no breaking API changes, nothing that will break other system software that people are depending on in production.

But if you don't have that requirement—if ABI-incompatible updates to your package just affect the developers consuming your library/tool, rather than users whose OSes suddenly break—then there's no need to integrate with the OS.

Note that rvm, and rubygems or npm, and so forth, aren't "secondary OS package managers." They're developer tools, for developing application software. The end result that real operations engineers will expect out of you is a blob (maybe a private package, maybe a container, maybe just a tarball) containing the platform, the libraries, and your code, baked together. Production systems should not be running rvm, or gem/npm install, to get your software's deps installed!

---

An interesting consequence of this philosophy, though, is that there are many "leaf packages" in most distros that shouldn't be system packages at all, but should rather be floating above the OS, consumed via a secondary "application software manager" that pulls down full pieces of self-contained vendored-deps software.

OSX almost has the right idea by dividing the world between a low-level Unix system (all system software; all involving stable-version interdependency), and "application bundles" that embed their own vendored Frameworks. But they don't go far enough; there's no such thing as a "CLI application bundle" which gets probed on Spotlight discovery to provide a new bin/ directory for your $PATH, the way GUI application bundles get probed to provide new file-type associations. If there was, they could move a lot of their own "leaf nodes" out into CLI bundles. (The "Command Line Tools" package containing gcc et al. is a prime candidate for a CLI bundle. It's not system software; it's its own vendored build environment!)

Despite what I said above, it might be perfectly sensible to run something like gem/npm on a production system—but only for requesting the "leaf packages" you'll actually be using on that machine—both applications and utilities.

Such an "application package manager" could theoretically even do dependency-resolution—but only insofar as it would be doing it to optimize download time by breaking out things like "foo-1.0/vendor/[someframework]-1.1.13-[exactsha]" into its own slug so different apps' downloads could reuse it from cache. The result of running "app install [foo]" would still have to be a copy of foo-1.0 with its [someframework] vendored inside it, not a symlink or other such reference from foo to someframework. (Basically, if you get the idea, I'm talking about a cross between Homebrew Casks and Nix.)
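That content-addressed slug idea can be sketched in a few lines of shell: the download cache is keyed by the hash of the vendored dependency, but what lands inside each application is a real copy, never a symlink or shared reference. Everything here (the framework tarball, the app names, the cache layout) is hypothetical illustration, not any existing tool:

```shell
# Sketch: dedupe downloads by content hash, but vendor real copies.
mkdir -p cache app1/vendor app2/vendor

# Pretend this is "someframework-1.1.13" as fetched once:
printf 'framework code\n' > someframework.tar
key=$(sha256sum someframework.tar | cut -d' ' -f1)

# Store it in the cache under its hash (download-time dedup only)...
cp someframework.tar "cache/$key"

# ...then each app gets its own vendored copy, never a symlink, so
# apps remain self-contained and can diverge without affecting others.
cp "cache/$key" app1/vendor/someframework.tar
cp "cache/$key" app2/vendor/someframework.tar
echo "cached as $key; vendored into app1 and app2"
```

The cache saves bandwidth; the copies preserve the "vendored, self-contained release" property the comment is arguing for.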

If you twist your mind just right, container solutions like Docker are the "application package manager" I'm talking about. But they actually provide too much isolation to run a lot of important application software (games, for example), because they're built under the assumption that you're running someone else's untrusted application software, and also on the assumption that the environment above the container isn't itself a secure/ephemeral sandbox like a VM. It'd be pretty easy, though, to take the ideas from container management, and use them to create a one-true-tool for creating, deploying, and managing the lifecycle of "releases" of application software in an OS-neutral way. While rvm(1) shouldn't be installed in production, appvm(1) could very well be.


> I think you're under a mistaken view of what OS distro packages exist for. The "system ruby" and other such packages aren't for end-user (or developer) consumption; they exist for the sake of other system packages, in case a piece of system software needs to run some Ruby code.

Read the conversation thread. The original parent (and many of the respondents) is explicitly complaining about secondary package managers, arguing that distributing any software, not just "system software", via any mechanism other than the distro-specific package management system is "for chumps".

I completely agree that most application software should be distributed via some method "above" the system-specific package infrastructure. I'm attempting to explain to people who don't agree just why their Apt-Get Uber Alles desire fails in practice.


I think the point was less about a rootkit and more about maintainability. That said, a secondary repo hosted over SSL with trusted keys is a lot more secure than piping a curl'd script into bash, assuming you trust the application developer but not the infrastructure in between.


> Each platform has its own system for doing installs and upgrades.

A system which is lacking, whether by platform convention or by technological capability, when it comes to some now-common use cases: thus the rise of second-level package managers.

Homebrew, linuxbrew, rbenv/rvm, ndenv/nvm, etc. all came to the forefront because the platform's package management was not doing the job. You might pick out linuxbrew and say "ah, just use rpms/debs/etc!", but there are a lot of cases where you don't have the option to choose the distro flavor or version, which can severely limit your ability to pick up packages/versions that you need. Likewise, in most distros it's impossible to cherry-pick a single package back onto an older release; an explicit backport must be made available. Too bad if it's not available!


> Homebrew

Exists because OS X doesn't have a vendor-supplied "download and install this, including any dependencies" type package manager, and the existing attempts to fix that were terrible. Frankly, so is Homebrew.

> linuxbrew

Using beer terms is apparently cool for software packaging, so let's port the idea of Homebrew back to Linux.

> rbenv/rvm, ndenv/nvm

rvm (and, can I assume, nvm, which I guess is rvm for nodejs?) is about installing multiple versions of Ruby and switching between them. There is nothing inherent about this concept that can't be achieved with a regular package - you're just placing compiled files in certain directories.

Debian even had multiple versions of 1.x (1.8 and 1.9, from memory) that could be installed side by side. Even now the packages install binaries as e.g. `/usr/bin/ruby2.1`, so I don't buy the "traditional package managers were not doing the job" line.

They just aren't written in hipster languages and they don't force you to compile every fucking thing you want to use on every fucking machine every fucking time you install it.

> Likewise, in most distros it's impossible to cherry-pick a single package back onto to an older release; an explicit backport must be made available. Too bad if it's not available!

Debian provides a pretty easy to follow guide about making your own unofficial backports, or guidance on creating official backports, and even has a basic 'auto' setup script and a "backport from a PPA" guide.


>They just aren't written in hipster languages and they don't force you to compile every fucking thing you want to use on every fucking machine every fucking time you install it.

What? Homebrew has done binaries by default for ages now.


And, in fact, so does the FreeBSD ports system it was based on. In both cases, though, the important point is that the binary is effectively a cached compilation artifact of what your computer would (hopefully) generate on its own in the default case. This means the whole workflow of package-building is still there even for binary packages; you can ask the package manager to install a package with alternate ./configure options, at which point it'll realize it has no available cached artifact for that particular configuration and build it from source instead.

Honestly, after using Homebrew for a while, I find regular Linux package managers kind of naive, more like mail-order parts catalogues than a true system configuration tool. Having to consume Nginx from a PPA just to get ngx_lua support, for example: it should be as simple as an optional dependency on luajit which is enabled by an install-time "--with-lua" flag to the package manager, causing the package to be built differently.


Can we do without needless snipes like "hipster languages" please?


> using Beer terms is apparently cool for software packaging so lets port the idea of Homebrew back to Linux.

No, second-level package managers serve a very valuable role. They allow the coexistence of a stable base platform that packaged software (whether OS X apps or Linux packages) can target, while fully enabling the user to maintain a user-controlled, customizable set of tools. There's zero conceptual difference from what we've been doing forever with software built and installed into /usr/local, except that the burden of build and package maintenance is shared. If you've ever run into "argh, need package X (at version Y?) but $DISTRO doesn't have it!", then this is your out.

> There is nothing inherent about this concept that can't be achieved with a regular package - you're just placing compiled files in certain directories.

This is a statement made in a vacuum, removed from the today-reality of Linux package managers. Everything important about the needs of 2nd-tier packaging depends utterly on the stuff you cast aside: the tooling and conventions and workflow above the mere fact of stuff-on-filesystem. The devilish details of the package managers, and the cultures they are embedded in, are paramount. Yes, this is more about politics than technology. Distros don't want to maintain an entire suite of packages for language platforms. They pick some release they deem "stable", often with little actual sense of the needs of that language's community, and ship it. Individuals don't want to personally maintain an entire herd of packages, or have to maintain or depend on random PPAs. And just whipping up packages when you need a full dependency tree involving libraries that conflict with base distro versions... well, good luck with all that.

Example of where traditional package managers fall flat on their face: the ruby/node managers allow the current version to be easily managed (and version controlled) on a per-project basis via .ruby-version/.node-version files. Good luck ever getting anything like that into any distro.
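At its core, that per-project selection is just a directory walk, which is easy to sketch and genuinely awkward to express as a distro package. This is a simplification of what rbenv does, not its actual code:

```shell
# Sketch of rbenv-style version resolution: walk up from a directory
# until a .ruby-version file is found, else fall back to the system ruby.
find_version() {
    dir=$1
    while [ "$dir" != "/" ]; do
        if [ -f "$dir/.ruby-version" ]; then
            cat "$dir/.ruby-version"
            return 0
        fi
        dir=$(dirname "$dir")
    done
    echo "system"
}

# Demo: a project pins a version; its subdirectories inherit the pin.
mkdir -p project/sub
echo "2.2.3" > project/.ruby-version
find_version "$PWD/project/sub"   # prints 2.2.3
```

The pinned version travels with the project's source tree, per checkout, which is exactly the granularity a system-wide package database has no concept of.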

> They just aren't written in hipster languages and they don't force you to compile every fucking thing you want to use on every fucking machine every fucking time you install it.

[So. Much. Anger. Really?]

ndenv can't build from source. It uses official upstream tarballs. Homebrew likewise has a build infrastructure and produces binary "bottles", used for almost all installs. So, whatever.

>Debian provides a pretty easy to follow guide about making your own unofficial backports, or guidance on creating official backports, and even has a basic 'auto' setup script and a "backport from a PPA" guide.

Which statement is willfully ignorant of the very real use cases that these 2nd tier package managers cover, such as:

* Works cross-platform (ala rbenv/ndenv). OS X, .deb-based, RPM-based, etc. All good.

* The aforementioned automatic version selection. (rbenv/ndenv)

* Supports nearly every upstream version in a consistent manner, important for long-term project consistency/stability. (rbenv/etc)

* Can't interfere with the base platform's operation. Backports done wrong can easily break stuff. These tools' strategy of never touching the QA'ed base platform is far better. (homebrew/linuxbrew)

* VASTLY easier to contribute to than any distro or even (especially) a random PPA. Pull request and done. (all)

* Obligatory pshaw re: making unofficial backports: "brew edit $FORMULA" and done. I've dealt with package systems from virtually every distro out there. None of them are lower ceremony and better suited to task than pretty much any of the 2nd level managers in their respective elements.

On a larger point of your rant, I'll agree: Linux distros and their package managers could stand to learn a lot from the cases covered by the current crop of 2nd level managers. There are great ideas that could be "upstreamed". But all of them would need significant tooling, convention, and cultural effort to make this happen. Honestly, I can't imagine ANY Linux distro embracing these needs well in practice, and many were actively hostile when approached about same. (Cf. mailing list discussions about extended ruby and python version support before/around the time rvm was created. Don't eat within an hour beforehand.)


At the end of the article:

> Installing software not managed by the distro’s package manager can be inconvenient for system administrators, especially those who are deeply familiar with the inner workings of their distro.

> On these concerns: We hear you.

> For now, we believe that having our own install scripts allows us to iterate faster, compared to maintaining (and testing) half a dozen package formats for different distros. However, once Sandstorm reaches a stable release, we fully intend to offer more traditional packaging choices as well. We still have lots of work to do!

This sounds like a cop-out. Packaging an RPM and a DEB will take care of most distros, and the distro maintainers should do the rest.

Your build system should be creating the requisite package files.

If not, there's always the openSUSE Build Service.


I fully agree with you. Though Sandstorm in particular explained why they do it this way:

Q: Installing software not managed by the distro’s package manager can be inconvenient for system administrators, especially those who are deeply familiar with the inner workings of their distro.

A: On these concerns: We hear you.

For now, we believe that having our own install scripts allows us to iterate faster, compared to maintaining (and testing) half a dozen package formats for different distros. However, once Sandstorm reaches a stable release, we fully intend to offer more traditional packaging choices as well. We still have lots of work to do!

So, there will be packages once it's stable. Right now, they'd rather spend their time adding new features.


They already spent more time creating a custom installer than they would have to package it properly. That's my point, just do it right and you will see a big ROI in terms of more users of your software.


Translation: at some point after putting one hack on top of another at high speed, we will clean up the mess and do it properly.


Why the heck are you writing a custom installer like this?

One possible example: User specific installs. Interaction with the system package manager requires root rights, and isn't something that is a good fit for something like, say, RVM or rbenv. If I'm a nonprivileged user on someone else's general purpose system, I shouldn't have to bug the administrator to install something useful to me.


Nix solves this problem neatly.

Not saying that's answering your question, since Nix is woefully underused, but it's something to look at.


Yeah I've been using arch for my "root" and guix for my users and it's a wonderland of excellence.

If you're interested in nix / guix, installing them and using them without the whole OS is a great way to get started.


All package managers allow user-specific installs.


As a special case, sure.

http://askubuntu.com/questions/339/how-can-i-install-a-packa...

http://askubuntu.com/questions/28619/how-do-i-install-an-app...

But if you're going to extract deb files, why not just extract tarballs? Or better yet, just curl|bash? :)


There's lots of reasons to use packages. Verification, versioning, bug tracking, offline use, system-install, user-install, dependency tracking, rollback, sanity checking, custom install steps, and being tested and certified for a particular repository and software branch, among others. But, you know, if you just want to install something and not know what it is, what it does, how to fix it or remove it, much less whether it's even the software you expected, curl|bash is okay.


We're talking about extracting packages in a one-off, unsupported configuration that literally nobody does, not using them as intended, so 90% of those things you mentioned are off the table.


What is off the table, exactly? You asked why you would extract a deb instead of extracting a tarball or curl|bash, right?

Because the deb is verifiable, because the deb can be inspected, because the deb is versioned, because the deb can be bug-reported, because the deb has dependency info, because you know what the deb was built for and supports, because the deb has clear scripts which tell you how to install and uninstall it, because it can be unpacked offline, etc.

Maybe you don't care about any of this. That is fine. I'm just telling you what the Deb gives you that a tarball or curl|bash won't.


* Just about every .tgz I've ever downloaded had instructions on the downloading page to verify its signature

* I can inspect a .tgz easier than a .deb (tar -tzf)

* More often than not, the tgz version is in its filename (foobar.1.3.0-24.amd64.tgz)

* With that info, I can bug report it

* Fair 'nuff on dependencies

* I know what the tgz was built for and supports by either the download page or the filename

* Just about every tgz package I've ever downloaded included a Makefile that handled installation and uninstallation

* Offline unpacking is the only way to handle tgz's

Literally the only benefit to offline unpacking a .deb vs offline unpacking a .tgz is that the former includes a CONTROL file that lists dependencies in it.


So we both agree that RPM is superior to all these methods, and that curl|bash is effectively not even worth talking about; great. It seems the question you're posing is: why would I use a deb when I could just make a Slackware package (which is what you're using as an example of a 'tarball', which is actually just another name for a tar file, but I digress) and 'unpack' that.

As you must be aware, both Slackware and Debian have methods to install a package locally: `dpkg -x package.deb $HOME/` for Debian, and `installpkg --root=$HOME/ package.tgz` for Slackware. This is effectively an easier way to 'unpack' the packaged files without performing any other operations.

You mention verifying signatures? That's built into the deb. It isn't built into the tgz.

You mention 'inspecting' the deb is more difficult. Yet `dpkg -c package.deb` is not only fewer characters to type, it gives you more detail due to the nature of the deb's file layout.

You mention the version is in the tgz file name 'more often than not'. Doesn't really give me the warm and fuzzies compared to having a file with real metadata about the origin of the package, like deb provides.

And a random version number does not give you the info to report a bug. If it's not official, you have no idea who built it, unless the author added their e-mail to a file inside the package, which most don't. And even with the version and author, you still don't know what system the package was built for! If you're lucky the version was bumped from one distro version to the next, and with luck they never built the same version for more than one distro version!

HOW do you know what the tgz was built for? Sometimes the architecture is included, but you don't know what distro, what version, or any other platform or build-specific information. A 'download page' is not reliable metadata, nor is it included with the package, thus the package does not have the information, thus it is irrelevant to this conversation.

Slackware packages do not come with Makefiles for installation nor uninstallation. In fact, almost no precompiled tarball I've ever seen has had a Makefile for installation or uninstallation. And I've manually packaged well over 10,000 pieces of software for various distributions.

Comparing Slackware packages and debs is like comparing a bicycle to a Ford F-150. Sure, the bicycle is lightweight and easy to use. But a pickup truck is way more useful.


How would I install a package from one of the repositories as a non-root user for RPM or dpkg? I am not aware of any way to do this within either of those package managers.


You could use google for five minutes and find the answer. RedHat/Fedora have it as part of their guide on using RPM. (Hint: it involves making your own .rpmrc file)

Dpkg supports something similar but I forget when I last used it. Worst case you can just unpack the files into a local directory, add to your PATH and run it.


I asked this only after googling, and having done previous research.

The best I was able to find for RPM is https://www.redhat.com/archives/rpm-list/2001-December/msg00... , which involves creating your own RPM database in your user directory, and using --prefix. Unfortunately, --prefix requires an RPM to have been built with relocatability http://www.rpm.org/max-rpm/s1-rpm-reloc-building-relocatable... in mind, and my experience is that most RPMs I've found from RedHat, EPEL, and Fedora aren't actually relocatable. Further, many of them depend upon other root-specific things like creating new users, editing init files, chowning, etc. Of course, with this approach, you also need to create a dummy package to provide dependencies for all of the root-installed things on the system, and keep it up to date yourself.

I looked through https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentat... at your suggestion, but was unable to find the section you're talking about. Perhaps I overlooked it?

I'll also note that yum requires root unless you're willing to make substantial modifications to its source code, a process that I started but was never able to finish. This is outside of the scope of what I asked, however.

I'm less familiar with dpkg, but everything I've found so far simply suggests going around the package manager and instead unpacking the package and dealing with it manually.


Well you are right that the information isn't succinctly stated in the Fedora RPM guide or Maximum RPM, though different parts of these manuals go over the different things needed to set up a local install.

However, these are the results on the first page for a google of "installing rpms in local home directory". Some explain the rpm database, some talk about relocatability, and some talk about unpacking with cpio. There is no 100% reproducible solution. But you are taking software which was built to be installed on the system level and installing it on the user level, which never maps 100% on any operating system (as most users don't/shouldn't have admin privileges).

Installing a package locally to a user - best practices? https://unix.stackexchange.com/questions/73653/installing-a-...

Linux: Building RPMs without root access http://www.techonthenet.com/linux/build_rpm.php

yum install in user home for non-admins https://unix.stackexchange.com/questions/61283/yum-install-i...

How can I install an RPM without being root? https://superuser.com/questions/209808/how-can-i-install-an-...

rpm without root https://serverfault.com/questions/100189/rpm-without-root


In this particular case - https has always been used, and everything is in /opt/sandstorm. As far as curl|bash behavior goes, this is about the best I could ask for.


`curl | bash` makes me think that (a) your software will not play nicely with my OS and I'll have to figure out your weird locations for where you are putting your init scripts, config files, data, etc. and (b) that upgrades will be manual and painful.

Fwiw, the two programs I've used with such a script (Calibre and Youtube-dl) both update easily, and don't have problems with support.

That said, it would be nice if there was a way to update them automatically. There seems to be a way to hook commands into apt-get; maybe you can add a command to update your other programs at the same time.


youtube-dl is in PyPI, so you should be installing and updating with `pip install youtube-dl` and `pip install --upgrade youtube-dl`


youtube-dl -U # updates youtube-dl to latest version

You could add this command to cron.
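For example, a crontab entry along these lines would do it (the schedule is arbitrary, and it assumes youtube-dl is on the cron user's PATH):

```
# m h dom mon dow  command
0 3 * * 0  youtube-dl -U   # self-update every Sunday at 03:00
```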


Something I haven't seen mentioned in this article or these comments is that, as a sysadmin, I'm managing a lot of packages.

It's great if you're security-conscious enough to create the greatest install script ever, but I don't have the time to vet every install/upgrade script for every application I have to maintain. That's why we have repositories with signed packages: you trust the repository and you can then, with minimal effort, trust the packages and their updates.

...and that's another thing. A standardised mechanism to keep things up-to-date simply is a requirement of anything I'm going to allow to run on a server. I don't want to be faffing about to keep my servers secure.

Package management is a solved problem by now, I don't know why people are trying to make it more complicated.


Frequently, all the install script does is check what OS you're using, and run the relevant package-management incantations (e.g. for Ubuntu, [apt-key add + apt-add-repository +] apt-get update + apt-get install; for OSX, [brew tap +] brew update + brew install).

If package-management systems provided a one-liner "bring in this package source, get up-to-date with manifests, and install the root/default package provided by that source" command (in Ubuntu, "ppa-install" has been suggested numerous times), you could just have a list of such commands on your program's site's Installation page, and they would all fit on a single screen. As it is, though, the stanzas and prerequisites for each OS's incantation are complex enough that it's much simpler to provide the "bash magic" version.


On the install page for every PPA is a 1-3 line snippet you can copy and paste which will do add-apt-repository and apt-get. It's not hard and it sounds, from reading all the pro homebrew/linuxbrew comments around here, that this is more of a case that people want to reinvent something that's not broken, and are determined to do so without learning any of the lessons from the last 30 years of package management.

But hey, it prints out an emoji beer mug, so that's cool right?


No, there actually isn't. If you click a little green "read about installing" you'll get some bash snippets... which are split up into one-line-at-a-time embeds in lots of text and in any case use the placeholder "ppa:user/ppa-name" instead of substituting in the Right Thing.


Just so. I'm reminded of Raymond Chen's "what if everyone did this" thought experiments.


The thing I don't see in this article, which I do regard as (another) drawback to curl|bash is that you're not just trusting the software developer, you're also trusting their hosting provider.

Anyone who gets access to modify that script on their server can change the content.

Now it could be that they're hosting on a dedicated system, in a datacentre that they own, with no 3rd party access, but that's very much not the norm for most companies these days, and it's impossible for a customer to verify.

This is a problem which is solved with good package signing practices, where the developer signs the content on a system they control before distribution, so at least you can verify that what you're installing is what they released.


How is that an argument against curl|bash? If you download a source tarball from the developer's website, then you are also trusting their hosting provider. curl|bash changes nothing.


I was referring to cryptographic package signing (e.g. with PGP) so you validate the signature of the package before installing, this lets you validate that the package hasn't been modified since it was signed.

curl|bash ing a script does not provide this protection.


Why do you trust that the signing key actually belongs to the developer? If I were an evil hosting provider I would simply change the key fingerprint displayed on the download page.


You have ways of verifying the ownership of a key: PGP's web of trust or keybase.io, for example.


Also, assuming you keep the key, it prevents someone from attacking you through this channel in the future. Instead of trusting the distribution channel every time, you just have to trust it once. Not perfect, but surely an improvement. Similar to the SSH security model.

Personally I verify keys to the best of my ability on first install, and then only use that key to verify in the future.


Of course you can still check signatures if you wish. Heck, sandstorm even provides instructions on how to do it on their installation manual: https://docs.sandstorm.io/en/latest/install/#option-3-pgp-ve...


But curl|bash is a one step process, as soon as you get the script you execute it, with no opportunity to check a signature...


You do realize you don't have to use curl|bash, right? You can wget, verify with gpg and then bash it.


Of course, but the article being discussed is about curl|bash so I felt that comment was relevant :)


Of course, but then we're not talking about curl|bash anymore. It seems kind of contradictory to say that curl|bash is secure as long as one doesn't actually curl|bash.


curl|bash is just a shorthand way to say "here is a URL for an install script you can run with bash", which happens to also be conveniently copy-pasteable. You can do whatever you want with that information.


I feel like curl|bash is for the masses, who don't care. Then there are more secure options for those who do care.


I think that once we're talking about installing server software at the command line, we're already beyond most of the masses.

I'd also argue that secure defaults are most important for folks who don't care. And there are more secure options: the package managers.


Don't forget about workstations, but you're right. Though personally I install a lot of stuff on one-off VMs to try it out. I don't really care about security there because the VM is in an isolated network and will be erased an hour later anyway.


I don't think it's possible to make things secure for people who don't care, except by disallowing third-party software entirely. Deciding who you trust is fundamental.


> And there are more secure options: the package managers.

So Sandstorm's installation guide should, in your opinion, just say "wait a couple of years and maybe someone will have packaged it for your distro"? Yes, that sounds like an excellent way of getting users. And hey, the most secure way of installing software is not to!


you can host your own repos pretty easily, and still provide the bash script for oddball distros.


A source tarball can (and should!) be checked against GPG signatures et al.

With curl|bash you have nothing.


A bash script can be signed too. See article.

How many projects provide GPG signatures for their tarballs?

How often do you check signatures on tarballs?


This is why signatures should be embedded into the packaging and distribution mechanism, to reduce manual steps.

Look at Docker's deployment of Content Trust as an example of this.

You still have a trust decision on first use, but it's better protection than nothing.


> You still have a trust decision on first use, but it's better protection than nothing.

That's what you're getting by curl|bashing sandstorm today (if you skip the PGP verification step). Once installed the updater verifies signatures automatically.


> How often do you check signatures?

Every time you run apt-get. So the answer is almost every time.


The context of the comment is source tarballs. How often do you check the signatures on random tar.gz you download?


tar.gz vs. curl|bash is a false dichotomy, there is no reason for either. We have better ways to install software these days.

And yes, when I download tar.gz (e.g. when I am responsible of managing my own packages) I do check the signatures. You should, too.


Could you point me to the better way of installing software that will work on all Linux distros?


LSB-compliant RPMS, though there are a few aggressively antisocial distributions that don't support them as well as they should.


I just want to add my agreement to this. If the LSB (Linux Standard Base) was better supported these hacks would not be necessary.


AppFS :-)


Distro-appropriate packages.

Use the right fucking tool for the job.


That will not work on all Linux distros. That is writing a specific installer for each distro, which does not satisfy my requirements and as such is not the right tool for the job.


> That will not work on all Linux distros

The result of the work to create packages using the major packaging formats will produce a series of packages that combined, will arguably work better, on more distros than your home-grown installer.

> That is writing a specific installer

What makes you think creating e.g. a debian package equates to "writing a specific installer". The whole point of packages is that you don't "write an installer" you provide an existing installer (the package manager/package installer) with information about what to install where.


> The result of the work to create packages using the major packaging formats will produce a series of packages that combined, will arguably work better, on more distros than your home-grown installer.

Plunking a binary in /usr/bin and some configuration files in /etc will work just about anywhere.

> What makes you think creating e.g. a debian package equates to "writing a specific installer". The whole point of packages is that you don't "write an installer" you provide an existing installer (the package manager/package installer) with information about what to install where.

You will probably write a script to build your package, right? That's writing an installer. Or a builder for an installer, or whatever you want to call it. The point is that you now have to do work for every distro you want to support, and all users without a package manager that you support are SOL.


> Plunking a binary in /usr/bin and some configuration files in /etc will work just about anywhere.

Unless the user has existing files with the same names there. Or they edit a config file you placed in /etc/ and then they run the installer for a new version. Oh so now your "simple" installer has to handle version upgrades, and file conflicts?

If your software is literally so simple as files in /usr/bin and /etc, building packages to install those files should be easier than writing a good install script, because the package manager will handle so many use-cases for you out of the box.

> You will probably write a script to build your package, right?

No? I have a makefile which can do the actual building and installing. dh_make will generate a debian directory, and debuild will call this makefile. An RPM specfile also will have calls to make build, make install, etc.

None of this is writing an installer unless you consider a makefile having an "install" target "writing an installer".

> The point is that you now have to do work for every distro you want to support

If the installation is as simple as you claim, the packages will be simple wrappers around the results of a makefile.

If its more complex, then packages are still easier to maintain, because you don't need to worry about "what distro am i on, and do i need to check what the apache binary is called, or what packages are installed by default".

> and all users without a package manager that you support is SOL.

If I haven't covered some Distro, it's hardly difficult to give them a .tar.gz containing the tree created by the aforementioned `make install`. This is still less work than your special installer script.
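That tarball fallback can be sketched like so (the app name, file contents, and paths here are purely illustrative; a real build would stage the tree via `make install DESTDIR=...`):

```shell
# Stage an install tree the way `make install DESTDIR=$root` would,
# then tar it up for users on distros without a supported package.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/etc/myapp"
printf '#!/bin/sh\necho myapp\n' > "$root/usr/bin/myapp"
chmod +x "$root/usr/bin/myapp"
printf 'port=8080\n' > "$root/etc/myapp/myapp.conf"

# Build the archive relative to the staging root, so users can
# unpack it at / (or anywhere, for a local install).
tar -C "$root" -czf myapp.tar.gz usr etc
tar -tzf myapp.tar.gz
```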


> Unless the user has existing files with the same names there. Or they edit a config file you placed in /etc/ and then they run the installer for a new version. Oh so now your "simple" installer has to handle version upgrades, and file conflicts?

And these scenarios would make debs and rpms bail out as well.

> No? I have a makefile which can do the actual building and installing. dh_make will generate a debian directory, and debuild will call this makefile. An RPM specfile also will have calls to make build, make install, etc.

A makefile is a script. An RPM specfile is also a script.

> If I haven't covered some Distro, it's hardly difficult to give them a .tar.gz containing the tree created by the aforementioned `make install`. This is still less work than your special installer script.

People will also be much less likely to actually install your software this way.


> And these scenarios would make deb's and rpm's bail out as well.

Conflicting config files can be handled intelligently by deb and rpm based package managers.

Does your install script identify whether or not the user made any changes to the config file your previous installer script placed in /etc 9 months ago, and give them options (including viewing a diff and an interactive shell while installation is paused) about how to handle it if they did make changes, and use the new one if they just used the default? I doubt it somehow.

> A makefile is a script

How do you build the binaries that your special install script installs then? My point is that package building tools generally leverage an existing makefile that you would already have.

> People will also be much less likely to actually install your software this way.

Given that my software targets Linux servers, I actually imagine they're more likely to install from my packages (or an archive on a weird unsupported distro) than from your home-grown install script.


> Does your install script identify whether or not the user made any changes to the config file your previous installer script placed in /etc 9 months ago, and give them options (including viewing a diff and an interactive shell while installation is paused) about how to handle it if they did make changes, and use the new one if they just used the default? I doubt it somehow.

It doesn't overwrite anything because the file is intended to be modified. Problem solved.

> How do you build the binaries that your special install script installs then? My point is that package building tools generally leverage an existing makefile that you would already have.

I don't write C, I usually write python (interpreter already installed), Go (static binary) or sh (interpreter already installed). These never use makefiles, so I'd have to script a new one for this. And then there's dealing with all the different tools to generate the different kind of packages.

> Given that my software targets Linux servers, I actually imagine they're more likely to install from my packages (or an archive on a weird unsupported distro) than from your home-grown install script.

I was comparing the install script with providing a .tar.gz file with some files in it. Obviously people would prefer for packages for their obscure system, but alas, they don't exist.


One of the potential problems here is the fact that even just inspecting the "curl" result of the file can trick sysadmins into executing code they did not want executed.

Imagine the following scenario:

$ curl something.tld > installer.sh

$ cat installer.sh

"Hey, that looks safe, I'll go ahead and execute it".

$ bash installer.sh

If the install-file had terminal escape sequences, it could have been used to trick anyone into executing code they had no intention to execute.

More here, if you're interested: https://ma.ttias.be/terminal-escape-sequences-the-new-xss-fo...

Bottom line: assume everything you download from the internet is malicious in nature and inspect it with every possible tool you have available. And even then, run it in a sandboxed environment wherever possible.


use `cat -v installer.sh`

     -v      Displays non-printing characters so they are visible.  Control
             characters print as `^X' for control-X, with the exception of the
             tab and EOL characters, which are displayed normally.  The tab
             character, control-I, can be made visible via the -t option.  The
             DEL character (octal 0177) prints as `^?'.  Non-ASCII characters
             (with the high bit set) are printed as `M-' (for meta) followed
             by the character for the low 7 bits.
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man1/...
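A quick demonstration of what `-v` buys you (the escape bytes here are harmless, just illustrative):

```shell
# Write a script containing a raw ESC sequence, then view it with -v.
# Plain `cat` would hand the bytes to the terminal to interpret;
# `cat -v` prints the ESC byte visibly as ^[ instead.
printf 'echo safe \033[8mecho hidden\033[0m\n' > demo.sh
cat -v demo.sh
```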


Use "install.sh --oh-and-don't-delete-all-of-my-files=true".

If it's not the default, it's not what's going to happen.


> Bottom line: assume everything you download from the internet is malicious in nature and inspect it with every possible tool you have available. And even then, run it in a sandboxed environment wherever possible.

Ehhh. Ok that's not very actionable advice. I downloaded my entire OS from the internet.


There's a checksum for each package and for the CD. Back in the day, people cared about that.


How do you verify the checksum?


Very interesting. I'm always (ab)using cat for this, and I keep meaning to stop, but I forget. After seeing this I'll definitely stop.

Thanks!


So open it in an editor?


Yes, or view via `less` or a hexdumper.

But how many sysadmins do you know rely on `cat` for their day-to-day use? Too many, I reckon.


You should be cautious about running less on untrusted input: http://seclists.org/fulldisclosure/2014/Nov/74


`cat` abuse is rampant


Add "alias cat='cat -v'" to your login profile then.


Better python than bash or perl. At least the code shouldn't look like line noise. When's the last time you tried to read through a gnu ./configure file?


To support a "secure" curlbash for https://github.com/tillberg/gut, I used shasum to verify the sha256 hash for the script that is downloaded (before it is executed), and then the script in turn verifies the sha256 hash for the tarballs it downloads.

For example:

$ bash -c 'S="3bceab0bdc63b2dd7980161ae7d952ea821a23e693cb74961b0d41f61f557489";T="/tmp/gut.sh";set -e;wget -qO- "https://www.tillberg.us/c/$S/gut-1.0.3.sh">$T; echo "$S $T"|shasum -a256 -c-;bash $T;rm $T'

It makes for an uglier command, but it (theoretically) does a pretty good job of verifying that the thing you download is the thing you expect to download. It prevents MITM attacks and TLS verification issues, and also prevents me from putting some other code there for you to download in the future instead. It either downloads the code you saw the first time it downloaded, or it fails; nothing in-between.


This is also what Sublime Text Package Control does; it embeds the SHAsum for the package-manager package in the Python it tells you to run to retrieve it.

I think everything would be simplified if there was a program in coreutils for "pass STDIN to STDOUT only if STDIN's shasum is $1". Then you could just `curl | shaverify xxx | bash`.
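There's no such coreutils tool today, but a sketch of it as a shell function is short (`shaverify` is hypothetical; this assumes GNU `sha256sum` is available):

```shell
# Pass stdin through to stdout only if its SHA-256 matches $1;
# otherwise emit nothing on stdout and exit non-zero, so a
# downstream `bash` never sees unverified content.
shaverify() {
    tmp=$(mktemp)
    cat > "$tmp"
    actual=$(sha256sum "$tmp" | cut -d' ' -f1)
    if [ "$actual" = "$1" ]; then
        cat "$tmp"
        rm -f "$tmp"
        return 0
    else
        rm -f "$tmp"
        echo "shaverify: checksum mismatch" >&2
        return 1
    fi
}
```

With that in place, `curl -fsSL https://example.com/install.sh | shaverify <expected-sha256> | bash` (URL is a placeholder) only ever feeds bash a script whose hash matched.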

Or, alternately, we could just try to get curl and wget to implement support for (even a limited subset of) magnet URIs. The magnet schema does exactly what we want here: provides both a SHA for content verification, and an optional "xs" (exact source) parameter to make the retrieval happen from a URL rather than a DHT.


Yeah, you have to have an external sum/key verification unit in place. Using the script you download to check its own keys doesn't do anything. It wouldn't be that hard to have one script hosted for inspection and another for download that returns the correct sum without actually verifying.

Saying curl|bash is fine if you actually do curl; key check; checksum; install.sh is pretty dumb. I'm not really sure about these guys. I feel like they don't get it enough that I assume they are incompetent.


Well...

Is curl|bash insecure? YES

Is curl|bash more insecure than most of the ways people install software they just download from the Internet? NO

So it's not ideal, but most people don't do anything else that's much better, and it's convenient, so you can expect it to continue.


But they do. They use a package manager such as apt-get, that cryptographically checks everything.

Installing software outside the package manager is the exception here, not the norm.


And yet their browser probably has unpatched vulnerabilities...

Essentially, if someone wants to hack your system they will, whether it's patching your routers firmware, etc.

If Iran can't keep stuxnet out of airgapped systems, you're not going to keep any serious actors out of your systems connected to the internet.


And there is more: many people in security would agree that education is critical.

We educate users to download software from distributions rather than trusting random sources (that might be very safe or not).


>trusting random sources

A recent example to add to this, for those who think "nobody who's not a complete beginner would never do that":

https://www.reddit.com/r/netsec/comments/3lefc6/chinese_ios_...


You can programmatically add as much verification into your shell script as you like to check that it hasn't been tampered with. All the attacker has to do is replace your shell script with their own to neuter everything you did.

You will never escape the insecurity of this, because you are connecting to another server and accepting whatever output it gives you to run under a bash process. They can poison your DNS cache or use a transparent proxy to replace the content. Don't assume that just because they are using HTTPS, all attack vectors are removed.

This is lazy and insecure. Ship a proper package in a proper repo that people can programmatically add using Chef or Puppet or whatever the moment inspires.

P.S. don't create curl https://uninstall.sandstorm.io | bash


That's why the PGP-verified install comes with these instructions for verifying the installer script:

https://docs.sandstorm.io/en/latest/install/#option-3-pgp-ve...


HTTPS does remove those attack vectors. That's kind of the point.


https with a modern transport security stack that uses HPKP, HSTS and cert pinning, sure. curl doesn't use those, though.


And even when curl supports it, not every distro will ship it. Might take a decade for it to be fully supported.


I don't understand all these posts here bringing up distro repositories, pointing out their superiority. Of course it's better to install from repos and I don't think anyone is wanting to replace repo-based package management with curl|bash. But like it or not, not all software is immediately available on every possible distro. Sometimes you will want to install software from upstream for one reason or another. And for that purpose upstreams need to provide independent installation mechanisms.

curl|bash complements package managers instead of replacing them.


It's almost as if they copied my blog post from last week...

https://medium.com/@ewindisch/curl-bash-a-victimless-crime-d... (HN link: https://news.ycombinator.com/item?id=10217877)


I was not aware of your blog post when I wrote mine.

I wrote this blog post to introduce the work we did this week on PGP-verified installs and updates, which itself was motivated by Twitter threads last Friday (which you can find in my Twitter history).

If it's any consolation, when I first posted my link yesterday, it didn't get upvoted either. Apparently someone reposted it and had more success. HN is random that way.


Thank you. It is just suspicious timing, given the general lack of articles with this tone and approach. They even look similar in structure. I've had articles copied in one way or another (by others) in the past, so it wouldn't be the first time.

Thank you for responding.


Leaving the security issue aside for a moment, they are re-implementing package management. For example:

- What version do I have, how to I roll back to a version that works?

- What packages do I need for this and where's the log for what you added?

- How do I deploy this to 100 machines and be sure they all using exactly the same version.

- How to I control it so updates only happen in my at-risk window?

- How do I uninstall this?

Package management does all of this, and using curl|bash re-invents it in a bad, non-standard way which just makes it hard for sysadmins to do their jobs.


In general I do curl https > tempo then vi tempo then bash < tempo. This gives me the opportunity to check that nobody is pwning me. If the script is an unreadable mess, I do not execute it because it is an indicator of bad software.

In general the script is clear and short.

I like very much curl https | bash as a way to explain installation procedures. It is more rigorous and effective than an english explanation. Code can be tested and improved whereas explanations are always prone to negative critics.


One issue they don't address is one of culture:

curl|bash is much less secure, in general, than using your package manager (since not everyone will serve downloads over HTTPS, and won't take the steps described in the article etc.). By encouraging it, they are making this method normal, when it shouldn't become normal.


One important distinction to make is that this very much depends on what package manager you're talking about.

Linux package managers tend to use signing and other mechanisms to check content.

Software library package managers (e.g. npm, rubygems, etc) generally don't. Some of them offer signing but almost no packages are actually signed, and they don't do any curation of content.


They also gloss over the fact that packages in package managers are usually vetted by maintainers, who often have additional safety measures in place, like e.g. repeatable builds, to ensure that they are not compromised.


What I got from this article: "`curl|bash` can be secure if you add signing, file integrity checking, and trust the source." The whole time I kept thinking, these are assurances you can get from traditional mature package management.


and also from extra steps between curl and bash. Which is the whole point in the first place. If you are going right through a pipe and letting the script verify itself, then what's the point?

Are you the guy I'm looking for? YES! --> Seems trustworthy.


No the bash script doesn't verify itself. The only real verification step is using https (and writing the script so that it's not vulnerable to truncation attacks).


Of course, all content served by sandstorm.io – from software downloads to our blog – is served strictly over HTTPS (with HSTS).

What HSTS? You're telling people to use curl!


Mentioning HSTS was meant to emphasize that we're serious about "HTTPS everywhere", not specifically meant to be relevant to curl, where the protocol is specified in plain view on the command line anyway. Sorry for the confusion.


`curl|bash` is just as secure, as downloading and executing a binary or as downloading and "make"-ing a tarball or downloading an .rpm or .deb file.

And all these are absolutely ugly and definitely less secure than your distro's package manager downloading packages from the supported repository.

1. Ugly. Just take a look at this: https://fedoraproject.org/wiki/Packaging:Guidelines. This is Fedora packaging guidelines/policies. Every distribution worth its salt has one (https://www.debian.org/doc/manuals/maint-guide/, http://packaging.ubuntu.com/html/). There are quite strict constraints on what a package should look like, how it should be compiled, in which directories it can place files and so on and so forth. Custom installers (be it `curl|bash` or a .sh file or a .bin file) on the other hand are not policed by anyone except the original author. The installer can stop and ask you to read/accept a license agreement, it can open a browser, it can start background jobs. It can absolutely do `rm -rf /*` (https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commi...).

2. Less Secure. There are hundreds of software packages which are supposed to be installed with `curl|bash`. Are you sure all of them adhere to infrastructure/server security best practices? I for one trust the SysOps team @RedHat. To be clear - if the source code of the program is infected and the distro package maintainer doesn't notice it, you're in big trouble.


The one (additional) problem I see with piping the script directly to bash instead of an intermediate file is that it leaves no trace of what was actually run.

Using an intermediate file like 'curl ... > temp.sh' and then 'bash temp.sh' actually leaves a file which can be looked at before (and, if not removed by malicious code, after) the installation.

As others pointed out, we are giving away control of our computers if we install software. We are giving more control away if we don't use a signed central repository system we (and others) can check openly. We are giving more control away when we can't check that repository. We are giving more control away when we download non-signed binary installers from a specific site. We are giving more control away when we download installers from some site somewhere on the internet. We are giving more control away when the installer downloads the files itself and leaves no trace. We are giving more control away when the installer auto-updates software from some source where we don't even know what happens anymore.

In that light, the difference between using a temporary file and directly piping is little in security and little in convenience. Downloading the most recent release would be a bit better but a bit more inconvenient. Having it distributed via a signed repository would be a bit better but probably slower and cause more overhead on the developers side.

These differences are not very big and YOU can fix them by doing the additional work. The heated discussion about a tiny detail is a typical symptom of our culture. If the developers of sandstorm promote the convenient way but still provide the more secure way, good for them, so be it, let's move on to things that actually matter.

After all don't forget the fact that sandstorm is bringing the cloud back to earth into your basement. It is designed to conveniently run cloud applications locally so that you don't have to upload your data into some cloud but keep control of it locally.


> Installing software not managed by the distro’s package manager can be inconvenient for system administrators, especially those who are deeply familiar with the inner workings of their distro.

They raise this point themselves, but never address it.

I guess I'm a "system administrator" of my personal machines, because the last thing I want is software being randomly dumped wherever the author thinks appropriate for now, I guess /opt/ in this case. The post mentions running as a separate user, but if the installer expects to write to /opt/ I don't see how that is possible. Installing in ~/opt/ would be barely acceptable, but it's barking up the wrong tree and now I really have to want to use your software to read enough to understand its expectations.

You don't need packages for all systems, since that also isn't convenient for rapidly changing software (unless you're going to run a full-on repo). But please keep and support a (git clone <repo> && ./configure --prefix=<my-choice> && make && make install) option. It solves all of these problems for users who do care.

And preemptively - no, "use a VM" is never an appropriate answer to these concerns. The POSIX system model has many annoying shortcomings, but indiscriminately deprecating the whole thing creates ten times as many problems when what POSIX did provide has to be reinvented in order to connect those monolithic images.


I would like to see improvements to configure / automake. These tools already allow you to control where the program will be installed ("configure --prefix=$HOME"). They also generate an uninstaller: "make uninstall".

But they are missing dependency resolution. I should be able to say "sudo make depends" to install all missing libraries for the system I'm on, for example by running "apt-get install libexpat-dev" or whatever if I'm on Ubuntu.
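A hypothetical helper behind such a "make depends" target could look something like this (the package names are illustrative and differ per distro; printing instead of executing keeps the sketch side-effect free):

```shell
#!/bin/sh
# Hypothetical "make depends" helper: detect the host's package manager
# and print the command that would install the build dependencies.
# A real target would execute the command rather than echo it.
if command -v apt-get >/dev/null 2>&1; then
  echo "sudo apt-get install libexpat1-dev"
elif command -v dnf >/dev/null 2>&1; then
  echo "sudo dnf install expat-devel"
elif command -v pacman >/dev/null 2>&1; then
  echo "sudo pacman -S expat"
else
  echo "please install the expat development headers manually" >&2
fi
```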


One thing not addressed is that copy-pasting from the browser is insecure too:

http://thejh.net/misc/website-terminal-copy-paste

But this should be mitigated by TLS/finding a "trusted" source. A bigger issue might be that curl|bash effectively elevates trust in the developer, the CA, the hosting provider to the same level as trust in your distribution's security team.

And while using SSL, wrapping the script in a function, practising safe POSIX shell, and working across many platforms is good, is it better than using something like https://0install.net ?

I can understand not wanting to work with dozens of distributions; but if you don't automate tests across all of them, does your fantastic script really work?

[Ed: it's also much easier to secure an off-line signing key than a TLS key that must always be on-line]


The biggest issue I have with curl|bash is this:

curl|bash has no advantage over the "Download and run this installer program" plan. That is, itself, a plan that should go away.

Further, it has the disadvantage that the user gets less transparency and control. When I download and run an installer, I can do whatever I want in between: check a signature, run a malware scanner, copy it into an offline VM, copy it to multiple systems to save bandwidth, apply a (possibly binary) patch, the list goes on.

While I can do this anyway with the curl|bash idiom (by using a temp file instead of a pipe), it makes it less straightforward. Teaching users to do things this way will have the net effect of discouraging those other things.

I think Sandstorm is great, but when someone is insisting on an install method that is worse than one that needs to go away, there is a problem.


I get the thrust of the argument but I think this does gloss over some differences:

> The web sites for Firefox, Rust, Google Chrome, and many others offer an HTTPS download as the primary installation mechanism. It’s even the standard way to install most Linux distros in the first place (by downloading an iso from the web site).

Yes, but these usually also come with checksums. If I can verify these through any other means I know that what someone else got was what I got too.
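An offline sketch of that verification step, where the checksum list stands in for one published by the vendor and cross-checked through an independent channel:

```shell
# Stand-ins: release.tar is the download, SHA256SUMS is the vendor's
# published checksum list (ideally confirmed via a second channel).
printf 'release contents\n' > release.tar
sha256sum release.tar > SHA256SUMS

# Later, possibly on another machine: if this passes, you received the
# same bytes as everyone else verifying against that list.
sha256sum -c SHA256SUMS && echo "same bytes as everyone else"
```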

> Certificates on the old key expire in about a month. The auto-updater will begin using the new key as soon as it has installed an update that is aware of the new key.

I'm not sure I follow. Your key is compromised, you swap out for a new one, but this new one will only work after installing something signed with the compromised key?

Is there any way of updating the trusted keys before installing an update?


> Yes, but these usually also come with checksums

Checksums provide zero security benefit on their own. You'd need signatures (like sandstorm.io provides) to actually protect against an adversary modifying the installation media.


A checksum I can verify through a third source means I know I've got the same thing as they have. Signatures don't ensure that, unless I'm missing something.


> Yes, but these usually also come with checksums.

Checksums on the same site as the downloadable file are not that useful. If an attacker can replace the binary, they can also replace the checksum.

If the checksum is on another site, it makes it more difficult as an attacker would have to break in to both sites.


This topic definitely needs a lot more discussion but, unfortunately, the author mischaracterizes the strongest argument against `curl|bash`: trust.

> Most package managers will even execute scripts from the package at install time – as root. So in reality, although curl|bash looks scary, it’s really just laying bare the reality that applies to every popular package manager out there: anything you install can pwn you.

Security-minded individuals are well aware that binary package systems execute code with arbitrary privileges. That's why we only use packages from trusted sources. RedHat, for example, is bound by their commitment to investors. They would be completely ruined if their software channels were compromised.

Some person with a GitHub account? Not likely.


At #! we have taken the GPG approach for a while. We also use process substitution which helps with partial line issues, and we take the opportunity to warn and educate people that run our installer without at least glancing at the source first

https://hashbang.sh
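I won't claim to know hashbang's exact rationale, but one practical difference between the pipe form and the process-substitution form is what happens to stdin:

```shell
# Pipe form: the script's stdin IS the script itself, so any `read`
# inside it competes with bash for the same stream:
#   curl -fsSL https://hashbang.sh | bash
#
# Process-substitution form: bash reads the script from /dev/fd/N while
# stdin stays attached to the terminal, so interactive prompts work:
#   bash <(curl -fsSL https://hashbang.sh)

# Offline demonstration with a local stand-in for the remote script:
printf 'echo "running from $0"\n' > /tmp/demo-install.sh
bash <(cat /tmp/demo-install.sh)   # prints e.g. "running from /dev/fd/63"
```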


Ain't nobody got time to look at source code :-p. More to the point, a lot of users (most?!?) would need to skill up in order to understand it. Better that they can have confidence that you have good intentions and competency, or more likely, that they have reliable friends who support your work.

It seems a weak link here is TLS and its implementations (see many OpenSSL vulnerabilities) connecting users to some remote resource for importing your key. If you posted signatures of your signing key (terrible nomenclature, I know) from very popular people, then new users might have a good chance of having someone in their network that believes in you. Otherwise, they could send out a notice that they need some reviews of you and your code.

(Continuing my reading for now @ http://www.cs.cmu.edu/~davide/howto/validate_gpg.html )


This is a silly discussion because those who complain about curl|bash are also those smart enough to curl -o file http://... && vim file and read it before running it.


So does curl need to have (or has someone scripted) a signature- and checksum-checking mode, where it looks for standard signature files and md5/sha checksums in the download directory, exits if they don't pass, and pipes to bash if they do? That would be useful for these sorts of scenarios (to run in a sandbox, of course ;0)> ).


In the end it always seems to come down to the dependency tree.

Seems like unless you build something like Gentoo, you invariably run into overly stilted or rigid dependency trees.

Even Nix seems to have this issue, because the checksum of a lib is included in the path to the lib compiled into dependents.


There is no such thing as secure / insecure.

Security is a spectrum that guards against specific threats and actors.

Most technical security solutions fail the $5 wrench test: if you need to defend against an attacker with a $5 wrench, your solution will likely fail.


I've read most of the comments so far but I've yet to come across an instance where someone mentioned an actual security compromise they knew of because of this.


I think there are other problems with using curl|bash than purely technical ones.

Since it isn't obvious to everyone how to do it correctly, you're unwittingly creating a culture of just running anything off the Internet. You're also making it harder for yourself to do the right thing, both over time and when you're stressed. It's generally just good practice to remove sources of error, especially the low-hanging ones, to get a more robust process. Managing computers is hard enough as it is.


curl|bash isn't insecure compared to downloading an installer program over https and running it.

It's just stupid from a reproducibility point of view: the concept that I have some program P which I can run two or more times, such that it's the same object P on each run.

Suppose curl|bash fails somehow and you need to debug it. If you try curl|bash again, it could be fetching a different script.

Of course, there are ways to fix it, like a versioned URL that points to an immutable script. You have to trust that it is immutable though.

A malicious rogue employee who works for the trusted domain could temporarily replace the script with something harmful. Bad things happen to some users, who have no trace of what was executed. The rogue restores the original script and so when that is re-fetched and examined, it looks clean.

None of the affected users saved a copy; they all ran 'curl|bash' and so there is no evidence.

This is really what people don't like; they are just not able to put a finger on it and articulate it properly: the transience of the executable program. When bash terminates for any reason, the program is lost forever. What was actually run? And does that program exist any more, or did the termination of bash just throw away the world's last copy of that version of the script?

In other words, maybe what you want is rather this:

   curl | tee justincase.sh | bash
:)


That's not true. Curl doesn't use HSTS, whereas a browser does. Installer packages are signed, as well, so there's at least two ways in which downloading a proper installer from a browser is more secure.


fpm would solve their "packages are hard" problem. Additionally, an upgrade path of "curl|bash" is poorly thought out. If "curl|bash" installed a package repository, and then used the local package system to install, that would be better.


The rise of JavaScript has led to this "here, download this code and eval it!" mentality.


sudo curl -k -s https://rootme.sh | bash

It's okay because it's secured by using HTTPS.


    $ gpg --verify software.tar.gz
    gpg: Signature made Fri 25 Sep 10:20:57 EDT 2015 using RSA key ID 99BD2CF1
    gpg: Good signature from "Keith Alexander <keith.alexander@ironnetcybersecurity.com>"
I'm so glad I'm using PGP!


I took the time to read it, and it sounds more like a commercial than a technical overview.


They make the argument that "anything you install can pwn you", which is like saying, any food you eat can kill you. Well, no, any food you pick up off the street that has flies and maggots on it might kill you. That's why we verify our software before we install it, and verify our food is good to eat before we eat it.

Here's some reasons why curl|bash is stupid.

--

Unversioned, unverified software

Like was mentioned in the article, yes, signed checksums are standard ways to ensure you are running only the software the original packager intended. This means that, no matter who could possibly want to attack this software, you will always be positive you are using the correct software, forever.

The author's response to this? "Oh, but we're using HTTPS. So you know the commands we're running are from us."

Anyone could give you exploited software offline, since there is no versioned, verifiable software package.

In addition, anyone who can MITM a remote network connection (like your company, the company who sold you your laptop, the operating system creator or distributor, the browser creator or distributor, any of six hundred and fifty Certificate Authorities, and virtually any internet-connected government in the world) can exploit the software.

In addition to MITM, an attacker, or a Sandstorm employee, can at any time use the company's network itself to distribute exploited software.

HTTPS has had numerous security vulnerabilities over the years, which not only enables MITM of connections, but also attacking and subverting HTTPS servers and using them to distribute exploits.

--

Unknown commands at execution time

Every piece of package-managed software can be unpacked and examined. Even if you don't know what's in the binaries that are delivered, you can be sure what commands the package will run and what files will be dropped and where.

You can also run the installer in a sandbox and reproduce it every time. You can't reproduce curl|bash every time, because it has the ability to change every time it's run. (Unless you save the bash script the first time you download it)

--

Running arbitrary commands remotely

Using curl|bash literally asks a remote connection to execute arbitrary commands on your host inside your network. Most well-defended corporate networks should block this, but some don't.

Of course you're about to say "But we run arbitrary code remotely all the time with web browsers!", which is why we have things like NoScript, AdBlocker, etc to prevent any random attacker with a 0day from compromising unknown holes in our software, not to mention leaking our history, cookies and other sensitive data from otherwise "normal" browser functions.

--

You know why people (like me) get angry about curl|bash? It's a bad troll. It takes obviously bad ideas about security and tries to validate them by waving HTTPS around like a banner of invincibility, then coming up with excuses for why the security problems it introduces are not important.

If you want to be cavalier with your systems, that's fine. But don't try to convince people it's secure when it's not.


The answer is Yes. Moving on!


Yes, it's insecure and is based on complete trust of the author of the script and the hosting provider, but there's an easy workaround: Don't pipe to bash immediately.

Download the file, inspect the contents and then if you're satisfied with what the script will perform, then run it through bash.


The issue is the same for installing anything from source, not just because you used bash/curl; the same can be said for the Arch community's package management for AUR packages.


That's far from enough.


You may have the most awesome setup in the world, with certificate stapling and keysigning and best-practice scripting and whatever. But curl|bash is simply a bad habit to reinforce. It's a bit like saying that we don't need laws against burglary, because I would never burgle anyone, and I have a 24/7 camera on me to prove where I am at any given time.

That bad habit isn't the worst thing in the world, but it's low-hanging fruit in terms of problematic things to reduce.



