Running "n stable" removed bin, lib, share, include directories from /usr/local (github.com/visionmedia)
88 points by almost on Oct 7, 2012 | hide | past | favorite | 59 comments


He merged a pull request from someone he probably didn't know (new to Node.js, from a different country). Looked at in isolation, without the rest of the file, the diff looks OK to me too. That's probably what happened: he didn't have all of the program's relevant structure in his mind when he read the code.

https://github.com/visionmedia/n/pull/85/files#r1781158


Which is slightly scary security-wise. Across the large number of libs, programs, etc., people often pull changes half-blind if the code looks "mostly OK and does what it says".

Except it can also do a lot of other bad things, and it's too much to review. So in the end you trust the tree owner, and he blindly trusts a zillion people.

I actually have zero good solution to this, but it'll be interesting when it is used for a large attack.


> I actually have zero good solution to this, but it'll be interesting when it is used for a large attack.

I don't know what this software is or anything about it, but judging from the comments here and on GH, it appears to be serious: it deletes highly important directories and is possibly a widely used software package.

Whenever I install from source I run the installer as non-root. Any delete above my user's privileges errors out, telling me I need to be root. In this case I believe I would have seen the attempt to remove important directories as an error alert.

Even if I ultimately have to be root, running as non-root first has served me as a purely investigative step before installation.

What if everyone adopted an 'rm -rfi'-type command for any deletes? Then you'd be asked: "You are about to remove $dir, are you sure you are okay with this? Y/N"
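A minimal sketch of that idea as a shell wrapper (`confirm_rm` is a made-up name, not a standard command):

```shell
#!/bin/sh
# Hypothetical "rm -rfi"-style guard: ask before any recursive delete.
confirm_rm() {
  printf 'You are about to remove %s. Are you sure? [y/N] ' "$1" >&2
  read -r answer
  case "$answer" in
    [yY]*) rm -rf -- "$1" ;;          # confirmed: do the delete
    *)     echo "skipped: $1" ;;      # anything else: leave it alone
  esac
}
```

Whether people would keep reading the prompt after the hundredth time is another question, as the reply below points out.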

In this case, what I don't get is how it's now been three days and there hasn't been a rollback, pulling of the software, patch, notice, billboard, radio announcement, Emergency Broadcast System alert, or otherwise some way to halt this problem dead in its tracks right this very second.

It's almost like the README should say: this software is 'use at your own risk', etc., etc., etc.; see Hacker News for any potentially dangerous side effects caused by installing it.


Typically anything owned by your user is much more important to you than anything owned by root (from the perspective of it getting deleted).


rm -rfi results in a lot of noise, and it becomes a habit to simply dismiss the warnings without reading them.


Everything in my /usr/local is owned by my user. Isn't this typical in the case of a dev machine?


If it's going to be owned by your non-root account, shouldn't it be installed to ~/? It seems like a bad idea for a non-root user to have control over binaries that are in root's $PATH.


/usr/local on a normal, stock OS X install is completely unused. Homebrew's installer commandeers it by `chown`ing it to the current user and making it group-writable, so as to cut down on the sudo noise. I think issues like this, and the fact that nearly all package managers, including aptitude, yum and ports, require sudo, should be catalysts for requiring sudo for updating and installation of packages. It's that whole security vs. convenience tradeoff again.

I've actually run into a similar issue with another installer via a Rakefile that lacked uninstalling capabilities and a confusing method of determining the $PREFIX which resulted in me shredding my /usr/local/bin directory for a couple seconds before my ^C spamming stopped the process.


On a server or an environment where you don't trust your users, absolutely. On your own private dev machine that's only used by you, doing things the 'wrong' way is often perfectly acceptable and makes life easier.


As far as the headline goes, not running any of this as root is a good start.


That by itself is rarely a solution, especially these days. People store very important stuff in their home directories. It would need to be something like an isolated user account, a virtual machine, or a browser window.


Ultimately the tree owners need to be bigger and have more resources devoted to them. The rails project routinely gets major bugs fixed in under a day.


> Except it can also do a lot of other bad things, and it's too much to review. So in the end you trust the tree owner, and he blindly trusts a zillion people.

> I actually have zero good solution to this, but it'll be interesting when it is used for a large attack.

The only solution is intelligence and prudence on the part of the tree owners. Large software systems need to ultimately be in the hands of smart and wise people. Unfortunately, that wasn't the case here.



Do you actually think Ken disagrees with me?

Hint: he doesn't. He wouldn't accept a line of code starting with "rm -rf $VAR" in ~30 minutes on a Sunday morning.


Try not to take offense from things so easily.

I linked Ken's paper because it's related. His conclusion is that it doesn't matter how smart the users or maintainers are if somebody wants to install a clever bug. Smart and clever people can still choose not to accept contributions from people they don't know.


Basically they have to earn trust before they can do this. I actually have some half-solutions, ones that many projects use. One of these is code signing (digital signatures, or a simple "Signed-off-by" from an email address you believe belongs to the owner, even though the first is stronger).

This means the person may eventually do bad stuff after earning your trust. OK. But if that's ever detected, at least you can trace it back to him.


> Large software systems need to ultimately be in the hands of smart and wise people. Unfortunately, that wasn't the case here.

Wow, it must be awesome to be you.


Didn't say I was one of them. Enjoy your strawman. Personally, I prefer a bit more flavor. Though your implicit ad-hominem tastes a bit juicy.

Writing a package manager is actually really serious business. This package manager runs under user credentials and is expected to modify the filesystem. You can't sandbox around those requirements. No amount of whinging avoids that. This story is one of many examples of why you can't take it trivially.

If you're afraid to point out incompetence - and where it is most dangerous - you won't know it when you see it. And it will bite you, hard.


I upvoted your response because on one hand I do think you are correct in all of what you are saying.

But your previous post can be interpreted as implying that the people themselves are neither smart nor wise, which is a somewhat harsh attack. I re-read it with more emphasis on the "here" part of the sentence, and it sounded more as though you are saying that in this instance the people were not wise.

Pointing out incompetence is helpful, but it's also very useful to do so in a way that attempts to minimize the chance of an extremely negative interpretation.


I expected an extremely negative interpretation. My criticism is harsh because the failure here was unacceptable.

Look at the pull request they merged in. Any line added to a script which starts with "rm -rf $VARIABLE" cannot be scrutinized enough.

The first commit was created at: 2012-09-30T10:25:44-07:00.

The pull request was accepted at: 2012-09-30T10:59:08-07:00.

34 minutes to accept on a Sunday morning. I suspect that wasn't 34 minutes of review. I suspect it was closer to 34 seconds of review.

Unacceptable.


If virus scanners were common on unices, their primary purpose would be to watch for the string "rm -rf".


shit happens, and then you die. when you have 50+ people that just "+1" an issue for long enough without helping, you tend to limbo-merge


Engineers have responsibility.

Otherwise you're just throwing shit at a wall and seeing what sticks.


Sounds like 99% of software development to me, OSS or otherwise.


Obviously this was an epic fail deserving of your original derision, but I can't help but be struck by the irony in your complaint of ad-hominem after what you wrote.


His comment wasn't an ad hominem. An ad hominem would be 'this person isn't smart and wise, so his decisions are wrong'. The poster's comment was of the following implicit form:

* These sorts of packages should be run by people who are smart and wise.

* The fact that this happened suggests that this person is not smart and wise.

* It is unfortunate that he is running a project used by many people.

It's perfectly valid to criticise a person (or a person's fitness for a responsibility) based on their actions. An ad hominem is the opposite: it is criticising a person's actions or arguments based on who the person is (rather than on the actions or arguments themselves).


The change also has one of my pet hates: the -f flag to rm. It not only means force, it means ignore errors. If you're not interested in errors, why attempt to do it in the first place?
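A quick demonstration of that complaint: -f suppresses both the diagnostic and the failing exit status, so a script never even learns that the delete did nothing.

```shell
#!/bin/sh
# Compare rm and rm -f on a file that doesn't exist.
missing=/tmp/no-such-file.$$
rm "$missing" 2>/dev/null;    plain=$?    # non-zero: rm reports the failure
rm -f "$missing" 2>/dev/null; forced=$?   # zero: -f swallows it entirely
echo "plain=$plain forced=$forced"
```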


git has done an excellent job of bringing this abuse into the 2010s. If you need to delete a local copy of a git repo, which is often only useful for a couple minutes while evaluating code, you have to do something to override the read only files git creates. Mercurial doesn't have this issue.


chmod
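Spelled out, that one-word reply presumably means something like this (directory name is just an example):

```shell
#!/bin/sh
# Simulate a fresh clone: git marks its loose object files read-only.
mkdir -p repo-copy/.git/objects/3a
touch repo-copy/.git/objects/3a/blob
chmod 444 repo-copy/.git/objects/3a/blob

# Make everything writable first; then a plain rm -r (no -f needed)
# deletes the tree without prompting about write-protected files.
chmod -R u+w repo-copy
rm -r repo-copy
```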


I think it is a great flag, if used with that functionality in mind. Useful reasons very similar to why the -p flag to mkdir is useful.


To remove stuff?


Looks like the mods changed the title I gave. Just for those that don't know, "n" is a version manager for Node.JS that some use to handle multiple copies of Node.JS on their system.


Your original title was slightly linkbaity. I approve of the mod's change for clarity.


They were probably right to do it. But it would have been useful to keep the reference to Node.JS in there as not everyone knows what "n" is.


I read it as:

"Running and stable with no bin, lib, share, include directories in /usr/local".

Which didn't seem all that exciting to me, so I clicked just to see what all the excitement was with not having a /usr/local.

PS. "n" is a terrible name for a program - it's impossible to google.


I was hopelessly confused by the title. It may have made the title not linkbait, but it definitely muddled things.


As a non-Node.js user, I'm more bewildered by how easily github issues turn into reddit threads.


It isn't a node.js related problem, but more of a GitHub related problem when a specific issue is deemed to be "legendary". Example: https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issue...


It only happens occasionally, on issues that get a lot of attention (by being posted to HN, for example; sorry about that). Usually GitHub issue threads are sensible and helpful places.


That's what's so scary to me. All that productive discussion is only a step over some noteworthiness threshold away from being completely ruined. I wouldn't think of blaming HN or even reddit submitters for that, though.


Worse than reddit threads. No moderation and unstructured.


This looks more like 4chan to me.


I used to use n before I found out about nvm[1]. I used to have issues installing Node.js packages with n, but so far nvm just works.

The best advantage of nvm is that I can easily install global packages without being the root user, because it installs your Node.js files in a per-user ~/nvm/ folder (this is customizable to whatever folder you choose).

[1] https://github.com/creationix/nvm


"The best advantage of nvm is that I can easily install global packages without being the root user"

One of us is very confused (it easily could be me). I do not understand this statement at all. How is something global if it's in a user directory?


When you install a package using npm with the flag -g, it means that the package is global, i.e. available from any current working directory. The default is to only search the current directory hierarchy for a folder called "node_modules". That way every project that you work on can have its own versions of every library, instead of sharing them for the entire system. You can learn more about "global" vs "local" packages by reading the npm manual.
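A toy illustration of the "local" half of that lookup (much simplified; npm's real resolution algorithm is richer): walk upward from the current directory until a node_modules folder turns up. A "-g" install skips this walk and uses one fixed per-prefix location instead.

```shell
#!/bin/sh
# find_node_modules is a made-up helper, not part of npm.
find_node_modules() {
  dir=$1
  while :; do
    if [ -d "$dir/node_modules" ]; then
      echo "$dir/node_modules"   # found: report the nearest one
      return 0
    fi
    [ "$dir" = "/" ] && return 1 # hit the filesystem root: give up
    dir=$(dirname -- "$dir")     # step up one level and try again
  done
}
```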


https://github.com/visionmedia/n/pull/85/files#r1781158

  for d in bin lib share include; do
    rm -rf $N_PREFIX/$d

LGTM!
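For contrast, a defensive sketch of that loop (not the project's actual fix; the guard list and helper name are my own assumptions): refuse to run at all when the prefix is empty or points at a shared system directory.

```shell
#!/bin/sh
# clean_n_dirs is a hypothetical helper, not part of n itself.
clean_n_dirs() {
  case "${1:-}" in
    ""|/|/usr|/usr/local)
      # Empty or shared prefixes are never safe to recursively delete.
      echo "refusing: '${1:-}' is not a dedicated n directory" >&2
      return 1 ;;
  esac
  for d in bin lib share include; do
    # ${1:?} makes the shell abort if the argument somehow became empty.
    rm -rf -- "${1:?}/$d"
  done
}
```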


Somewhat off-topic, but allowing people to post inline images-- especially animated ones, is such an awful idea.


my bad, sorry about the limbo-merge guys, I'll read PRs closer and/or ignore them since I don't have time


Reminds me of https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issue... (install script does rm -rf /usr for ubuntu) although I guess deleting /usr/local isn't nearly as bad.


Depends on platform. It is under FreeBSD. It wipes the entire userland that isn't part of the base system. That means every piece of software installed from the ports collection.


Yeah, but honestly that's FreeBSD's fault for installing system-managed software in a poor location for such.

(speaking as a FreeBSD user myself)


Makes me wish we had a transactional file system, and the ability to force certain deletes to require confirmation even when run as root with the -rf flags, so we could roll back this type of thing.


Transactional filesystems have been implemented, at considerable expense. To be clear, I mean transactions at a high enough level to encompass deleting a subdirectory (most reasonably implemented filesystems already support tiny transactions that ensure metadata reaches the disk in a consistent state).

When I say considerable expense, I mean _very_ considerable expense. I know of no efficient implementation and only one practical implementation: TxF. TxF is not very fast either; it requires double the writes and, on top of that, is not easy to use. Microsoft is considering deprecating TxF due to the cost of continuing its maintenance at the expense of other features. [1]

I think a more reasonable model for what you want is filesystem snapshots. This is a feature that can be implemented with relatively high performance and without causing terribly large amounts of complication (needing to transact file descriptors, etc.)

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/hh80...


My setup is pretty simple and could recover from this with few problems: I use ZFS and have zfSnap configured to take a snapshot every hour, saved for five days. So I'd lose potentially the last hour's worth of changes to /usr/local, but that's unlikely to be big.


bsd? linux? solaris?


FreeBSD; I didn't trust any of the ZFS implementations for linux (I'm not sure there is one that works for your root filesystem yet?) and Solaris didn't find all of my hard drives.

If you're thinking of switching there really isn't much difference between FreeBSD and Linux (at least if you're talking about a traditional Linux like Slackware); most of the admin commands work like Linux did up until 5 years ago, and obviously the UI is just KDE or whatever you like.


I think explicitly versioned file systems might be better. Cheaper to implement than transactions and useful in more cases.


Wow, just today I commented on the Anvil post about how to delete your system using this exact bug.



