
The third paragraph might offer the logical analysis you're looking for:

> The performance difference arises (we believe) because when working from an SQLite database, the open() and close() system calls are invoked only once, whereas open() and close() are invoked once for each blob when using blobs stored in individual files. It appears that the overhead of calling open() and close() is greater than the overhead of using the database. The size reduction arises from the fact that individual files are padded out to the next multiple of the filesystem block size, whereas the blobs are packed more tightly into an SQLite database.
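The effect the SQLite docs describe is easy to poke at directly. Here is a rough, illustrative sketch (blob count and size are arbitrary; on a warm cache with few blobs the gap may be small or even inverted, but the per-blob open()/close() cost is what grows with scale):

```python
import os
import shutil
import sqlite3
import tempfile
import time

N, SIZE = 200, 4096  # arbitrary illustration values

def read_files(dirpath):
    # One open()/close() syscall pair per blob.
    out = []
    for i in range(N):
        with open(os.path.join(dirpath, f"{i}.bin"), "rb") as f:
            out.append(f.read())
    return out

def read_db(dbpath):
    # One open()/close() for the whole database file.
    con = sqlite3.connect(dbpath)
    rows = [r[0] for r in con.execute("SELECT v FROM blobs ORDER BY k")]
    con.close()
    return rows

tmp = tempfile.mkdtemp()
blobs = [os.urandom(SIZE) for _ in range(N)]

# Layout 1: one file per blob.
for i, b in enumerate(blobs):
    with open(os.path.join(tmp, f"{i}.bin"), "wb") as f:
        f.write(b)

# Layout 2: all blobs packed into one SQLite database.
dbpath = os.path.join(tmp, "blobs.db")
con = sqlite3.connect(dbpath)
con.execute("CREATE TABLE blobs (k INTEGER PRIMARY KEY, v BLOB)")
con.executemany("INSERT INTO blobs VALUES (?, ?)", enumerate(blobs))
con.commit()
con.close()

t0 = time.perf_counter(); from_files = read_files(tmp); t1 = time.perf_counter()
from_db = read_db(dbpath); t2 = time.perf_counter()
assert from_files == from_db
print(f"files: {t1 - t0:.4f}s  sqlite: {t2 - t1:.4f}s")
shutil.rmtree(tmp)
```

The padding point in the quote also falls out of this layout: 200 files of 4 KiB each consume at least a filesystem block apiece, while the single database packs the same bytes contiguously.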


It seems that a higher ranking is worse.

> The Index presents a ranking of 167 countries based on the proportion of the population that is estimated to be in modern slavery. [0]

North Korea, with 4.37% in slavery, is ranked 1.

India, with 1.4% in slavery, is ranked 4.

Philippines, with 0.4% in slavery, is ranked 33.

China, with 0.25% in slavery, is ranked 40.

USA, with 0.02% in slavery, is ranked 52.

[0] https://www.globalslaveryindex.org/findings/

EDIT: number -> ranking


How is a higher number worse if there is less slavery in countries with a higher number?


I should have phrased it as a higher ranking rather than a higher number. I assumed 'higher' was clear in terms of rank, rather than a larger number.

GP's claim makes it seem like they interpreted it as I did.

> ...countries like North Korea (rank #1) are closer to 0...

NK is not close to 0 in relative or absolute slave counts, and I don't see the utility in pointing out that 1 is closer to 0 than 52.


Yes, I replied before you edited your post, I get your meaning now. Thanks for clarifying.


What surprises me is that the Netherlands, while praised for doing the most to fight slavery, is, at #50, ranked worse than the US, which I believe has forced prison labour.

Maybe it has to do with prostitution being legal here. You'd expect licensed and regulated prostitution to reduce slavery in prostitution, but apparently it's still a serious problem. Still, 17,500 people in slavery? That's a lot. I'd like to know where they got that figure.


I meant 'higher' as in 'larger', sorry.


From your source:

"Customers who are running supported versions of the operating system (Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8.1, Windows Server 2012, Windows 10, Windows Server 2012 R2, Windows Server 2016) will have received the security update MS17-010 in March. If customers have automatic updates enabled or have installed the update, they are protected. For other customers, we encourage them to install the update as soon as possible."

If you don't have the update, you are not protected, you are vulnerable.


If you or your IT dept isn't installing updates, especially security patches, over two months after they come out, something's horribly wrong.


Posting from a throwaway for obvious reasons, but the place where I work still hasn't applied these patches after I warned their IT dept about the NSA vulns a month ago... luckily I'm at least able to apply the patch to my own system manually. If it hits us I'm pretty sure we're screwed on the order of a few thousand systems.


The reality is, this is very common.


Then what, realistically, can be done when nation-state knowledge of vulnerable systems is hoarded for cyber-warfare purposes?


Frankly, there is only one solution I can see anymore:

Laws must be passed to:

* Force the US government to report vulnerabilities to vendors

* Create a regulatory body to monitor the use of vulnerabilities in clandestine operations and ensure that mandatory reporting is upheld

I cannot see anything less working.

Get that through US and EU governments, and you'll likely have the vast majority of vulnerabilities being reported and patched.

Of course this is akin to asking the US and Russia to convert their nuclear stockpile into reactor fuel.


The Wikipedia entry is silent on non-rooftop solar installations, and I'm struggling to find good numbers. Can anyone with more experience in this sector point me in the right direction?

I would hazard a guess that non-rooftop installation is substantially safer than getting on a roof. I would also guess that nuclear would be more hazardous if reactors were installed on roofs.


I think the right conclusion is "they're both really safe". It's like people comparing things to the number of shark attacks per year. The number is so low that most things are more dangerous.


It's not even a fair comparison: one number counts deaths during construction/installation, the other counts deaths during operation.

I would be interested to see how many deaths have occurred during the construction of nuclear plants.

How many deaths occur from solar once it is built/installed?


>How many deaths occur from solar once it is built/installed?

Well, if we are talking about what would happen if energy were produced entirely with solar/wind, wouldn't you have to include things like people freezing to death during a string of cloudy yet still (i.e. windless) winter days?


> It's not bikeshedding when the bikeshed's color will actually have concrete effects on adoption.

Not taking a stance either way on the name, but that is the definition of bike-shedding (aka law of triviality). A committee won't vote for my nuclear plant because the bike shed is red. The bike shed's color has concrete effects on adoption.

EDIT: I would just like to acknowledge the irony of bike-shedding bike-shedding.


  > ...but that is the definition of bike-shedding (aka law of triviality)
  > A committee won't vote for my nuclear plant because the bike shed is red.
  > The bike shed's color has concrete effects on adoption.
Not exactly.

  > Parkinson observed that a committee whose job is to approve plans for a 
  > nuclear power plant may spend the majority of its time on relatively 
  > unimportant but easy-to-grasp issues, such as what materials to use for
  > the staff bikeshed, while neglecting the design of the power plant itself,
  > which is far more important but also far more difficult to criticize constructively.
  > -- https://en.wiktionary.org/wiki/bikeshedding
This part is key here:

  > A reactor is so vastly expensive and complicated that an average person cannot
  > understand it, so one assumes that those who work on it understand it. On the
  > other hand, everyone can visualize a cheap, simple bicycle shed, so planning 
  > one can result in endless discussions because *everyone involved wants to add a
  > touch and show personal contribution*.
  > -- https://en.wikipedia.org/wiki/Law_of_triviality
  > -- https://books.google.com/books?id=RsMNiobZojIC&pg=PA317


I need some additional hand-holding here if you don't mind, I don't see the difference.

If I were to rephrase those two excerpts:

  > Parkinson observed that a committee whose job is to approve plans for a 
  > [globally distributed relational database] may spend the majority of its time on relatively 
  > unimportant but easy-to-grasp issues, such as what [the name is],
  > while neglecting the design of the [globally distributed relational database] itself,
  > which is far more important but also far more difficult to criticize constructively.

  > A [globally distributed relational database] is so vastly expensive and complicated that an average person cannot
  > understand it, so one assumes that those who work on it understand it. On the
  > other hand, everyone can [read a name], so planning 
  > one can result in endless discussions because *everyone involved wants to add a
  > touch and show personal contribution*.
edit: formatting


Please tell me we're not having a bikeshedding discussion on the meaning of bikeshedding. :-)


We probably are. ;)

It's so meta it hurts.


Alright, if you really want to unpack the metaphor:

The bikeshed story is to illustrate overemphasis on something that is trivial. It uses the example of a bikeshed color and a committee wanting to spend a lot of time on it because a) they care a little about it, and b) they understand it well enough for hard-headed members to wade into the dispute rather than trust experts.

It's a failure mode -- by stipulation -- because the bikeshed color doesn't matter beyond minor (but real) aesthetic feelings among the committee, which are far outweighed by the cost of high-level personnel devoting time to it. Had they been aware of the general dynamic of these things, they could entirely prevent the loss by moving on; it's purely an internal matter.

The bikeshed model ceases to demonstrate a failure mode if and when the bikeshed color has impacts far beyond things under the control of the committee. For example, if the majority of the world's people had a near-religious devotion to destroying facilities that house a blue bikeshed, and that fanaticism was hard to defend against, this would be a valid reason not to make the bikeshed blue, and would warrant the committee's attention.

I summarize such situations as "that's not bikeshedding", though of course, to be more technically correct, I should say "that situation does not illustrate the avoidable failure mode in the parable of the bikeshed".

Similarly, if adoption matters for more than just that committee -- if they need to convince numerous other committees to adopt the design -- it's likewise "not bikeshedding" because the first committee doesn't have control over all the other ones; with respect to the first, it's an external matter, and they can't stem the loss just by saying "hey, this is trivial".

Now, you are correct that, at a high enough level, this could work as a bikeshedding example, if you could simultaneously get the entire world to collectively agree on the non-importance of aesthetics in technical matters, and on what counts as technical vs aesthetic. Then the world could play the role of that first committee and say "wow, this is trivial", and it's done.

But if that were actually feasible, then that should be your product (producing universal agreement on matters where you have a logical proof-of-correctness), not a database!


>Go 1.8 will use X25519 and ChaCha20-Poly1305 in its TLS stack, but it doesn't offer modern application-layer cryptography in its standard library.

What qualifies as a standard library? I think Go supports both in the "x" repository which is officially part of the Go project, but is distributed separately and with different compatibility guarantees. It even gets vendored back into the main repository.


1. Install the language.

2. Don't install anything optional.

There, that's the standard library.

Some ecosystems will never add something like NaCl/libsodium to the standard library because the main developer experience requires package management, but that's irrelevant to the very narrowly defined claim being made.


Install the language how, and from where?

What if my package manager includes libs like those under "x"? Or if it excludes certain libs? Homebrew is capable of this for many languages.

What if my language includes different libs depending on install target, like targeting embedded environments? I think Rust does this.

I think there is a large grey area being overlooked.

EDIT: This is particularly true in PHP land. If I just install the meta php package in most distributions, I will get a different set of extensions included.


> Install the language how, and from where?

Compile from src. Use the default options.


I know Python keeps some packages outside the standard library, as once something is in there, development slows down, and some packages need to be updated more often than that allows.

Did any discussion of this possibility come up? I can see cryptography requiring a quicker update cycle when bugs are found than a scheduled language release can give.


Andreessen is right about the past, but we're near the inflection point where the curve smooths out.

8K video (near IMAX quality) can be compressed into a 500 Mbps stream[0], so gigabit should have a decent lifespan. If we lay fiber to the premises, then we can increase speeds in the future without digging everything up.

[0] https://www.extremetech.com/extreme/130238-8k-uhdtv-how-do-y...


So if VR next year is 2K per eye, and is hypothetically cloud rendered, then that means gigabit would support 16-ish people? Sounds good. 2K per eye was at CES last month, and is badly needed.

But 4K per eye prototypes exist, and will almost certainly be at CES next year. And will ship, maybe next year, maybe the year after. That's 4-ish people on gigabit? Hmm. Doesn't that seems a little bit tight to be confident that "gigabit should have decent lifespan"?
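The back-of-envelope math above can be written out explicitly. This is a naive pixels-proportional model using the ~500 Mbps 8K figure and the per-eye resolutions assumed in this thread (real codec efficiency doesn't scale exactly linearly with pixel count):

```python
# Crude model: assume bitrate scales linearly with pixel count,
# anchored to the ~500 Mbps figure quoted upthread for 8K video.
PIXELS_8K = 7680 * 4320   # ~33.2 M pixels
MBPS_8K = 500.0
LINK_MBPS = 1000.0        # one gigabit link

def vr_stream_mbps(w, h):
    # Two eyes per viewer, each at w x h.
    return MBPS_8K * (2 * w * h) / PIXELS_8K

for label, (w, h) in [("2K per eye", (2048, 1080)),
                      ("4K per eye", (3840, 2160))]:
    mbps = vr_stream_mbps(w, h)
    print(f"{label}: ~{mbps:.0f} Mbps/viewer, "
          f"~{LINK_MBPS / mbps:.0f} viewers per gigabit")
```

Under these assumptions, 2K per eye comes out to roughly 67 Mbps per viewer (about 15 viewers per gigabit), and 4K per eye to roughly 250 Mbps (about 4 viewers), matching the "16-ish" and "4-ish" figures above.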


Cloud rendering is a fantasy for games on flat screens, because the latency is too high. To maintain 'presence' and not get sick in VR, the motion-to-photons latency has to be _consistently_ <20ms.

360 video for VR is certainly a bandwidth hog, but I think that could well be offset by most VR content being game-like, where, though the game might weigh in at 40 GB, you download it once and spend 40 hours in it, vs a 4K movie at the same size, which lasts 2 hours (and which you'd likely re-stream if you watched it again). In other words, widespread VR use, even with next-gen hardware, could actually lead to reduced demand for bandwidth.


> Cloud rendering is a fantasy for games on flat screens, because the latency

One approach I've seen mentioned is sending a larger-than-used field of view (and resolution), and then locally deciding, with more recent orientation data, which portion to use. Also depth segregation of the scene, and sending multiple copies of nearfield, selected by recent position.

These approaches might result in sending more data than the displayed video.

I don't necessarily disagree with your suggestion of gigabit adequacy. Though after hearing similar suggestions so many times over the decades, about everything on Moore's law curves, and then having them almost always be wrong, I'm... leery of this form of suggestion.

But one way such estimates fail, is being hit by an "oh, we didn't expect that one". So I'm brainstorming (well, merely sort of musing) about potential surprises.

Hmm, surprises... One advantage of a single video stream is that the system always knows what's needed next. The user may turn it off, but not much else. The above are perhaps examples of needing to send speculative content, which acts as a demand multiplier. The future equivalent of web page preloading.

As people acquire automated assistants, one thing they may do is speculative exploration and data gathering. When a user's eye pauses on a GitHub project, not just download everything about the project to produce the desired pithy little briefing popup, but also other pages associated with the repo authors (still alive? any replacement project?), news articles, related work, and so on. Automation of the 'GitHub project evaluation dance', which in the fine-grained node.js ecosystem is frequent. So what is now a few bytes of web link, and a rare on-demand textual mouse popover followed by slow manual surfing, might become an immediate massive demand spike, and pervasively common?

One potential of VR vs 2D screen UIs is that while screen real estate must be severely managed, else clutter, VR may permit vastly greater inclusion of speculative "some related stuff", blended as low-cognitive-overhead ambiance. Once upon a time, a web page had the bandwidth demands of a few lines of ASCII email from a dumb terminal; no longer.


But you assume only one stream will be played at a time, which is unlikely.


We currently live in the post-post, of course.


>> I guess it's either PoC||GTFO for users to update.

Sad but true.

Are there any supported systems still shipping an affected version?


Is there any person as trustworthy as Ladar Levison for a service like email or chat?

To my knowledge, he is one of the few that has gone to the mat for his users.


A good way to regain and build trust with users would have been to acknowledge his previous mistakes. Then at least you could say "he's been around the block, done it wrong and learned how to do it right". Instead, he writes:

"In August 2013, I was forced to make a difficult decision: violate the rights of the American people and my global customers or shut down. I chose Freedom."

That isn't what happened. He chose to build and sell a supposedly secure email service that was fundamentally vulnerable to government intrusion. He then decided to play chicken with the USG over a warrant no different than ones he'd complied with previously. The completely pointless escalation forced him to compromise all of his users, something the government had not been asking for. He then shut the service down.

There are a lot of ways to describe this but 'I chose Freedom' without any acknowledgment of his previous mis-steps is both misleading and shameless. I wouldn't buy supposedly secure services from him.


> The completely pointless escalation forced him to compromise all of his users

How has he compromised all of his users? The service was shut down and emails kept encrypted. Am I missing something?


He gave up the cert, there was no PFS-only configuration, plus, presumably the FBI got to do their surveillance except instead of the target's email, they could read everyone's. So no, you are not right.


I was not aware he gave up the cert in the end. I thought he just closed the website without disclosing the TLS cert. Now it looks way worse than I imagined.

Anyway, I really hope that it leads to adoption of backward-compatible and secure email protocols. Server-side encryption can't be trusted anymore anyway; we need end-to-end encryption.


The business with the cert was just the final outcome. The initial mistake was making and selling snake oil. It is possible for someone to innocently do this, out of inexperience and ignorance.

Over time, though, it's become increasingly clear Ladar Levison is just a snakeoil salesman who misled his users. He's never acknowledged he did anything wrong. Don't fall for his posturing about 'Freedom'.



Do you have a source for these claims?


From what I read, he gave up the SSL cert by printing out a hard copy in a tiny font, and when he was ordered to provide a digital copy, he shut down the service.

> At approximately 1:30 p.m. CDT on August 2, 2013, Mr. Levison gave the F.B.I. a printout of what he represented to be the encryption keys needed to operate the pen register. This printout, in what appears to be four-point type, consists of eleven pages of largely illegible characters.

And:

> On August 8th, rather than turning over the master key, Levison shut down Lavabit.

That was according to this article from the New Yorker: http://www.newyorker.com/tech/elements/how-lavabit-melted-do...


How big was that key? 11 pages at 4pt is a lot of characters. I wonder what encoding.


Hex maybe?


To be fair, you have to acknowledge that society is essentially driven by narrative. Many consider us to be in a "post-facts" era, given that you can watch almost any politician (some far worse than others) go on live television and say something that is completely untrue but tells a fantastic story. Elections are won based on being able to sell a narrative.

While I agree with you, I think a large number of people will buy into the narrative of him sticking up for freedom.


The extent to which these narratives work is proportional to how much we accept them. No one can stop him from telling lies but we can make sure people know they're lies.


Source? So many hacks in this thread. Why trust any of this?


What's your question?


If Edward Snowden started a mail service, I'd probably trust it more. If you want to talk about "going to the mat" for people, I think Snowden has made the bigger sacrifice.

Moxie and Whisper Systems would probably get my nod too. Perhaps even DJB or Bruce Schneier.


Moxie is not impressed with Lavabit, as Lavabit's entire security model relied on "we totally promise we won't look at your private key."

https://moxie.org/blog/lavabit-critique/

>Unlike the design of most secure servers, which are ciphertext in and ciphertext out, this is the inverse: plaintext in and plaintext out. The server stores your password for authentication, uses that same password for an encryption key, and promises not to look at either the incoming plaintext, the password itself, or the outgoing plaintext.

>The ciphertext, key, and password are all stored on the server using a mechanism that is solely within the server’s control and which the client has no ability to verify. There is no way to ever prove or disprove whether any encryption was ever happening at all, and whether it was or not makes little difference

Anyways, having good inventions doesn't equal having a secure product.
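The trust problem in that quote can be sketched as a toy model (this is illustrative only, not Lavabit's actual code; the class and method names are made up, and the XOR-with-SHAKE keystream stands in for a real cipher):

```python
import hashlib
import os

class Server:
    """Toy model of a 'plaintext in, plaintext out' server.

    The client hands over BOTH the password and the plaintext on every
    request, so whether store() really encrypts -- or quietly keeps a
    copy of either -- is unverifiable from outside the server.
    """
    def __init__(self):
        self.mailbox = {}

    def _keystream(self, password, salt, length):
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hashlib.shake_256(key).digest(length)

    def store(self, user, password, plaintext):
        # Nothing stops this method from logging `plaintext` or `password`.
        salt = os.urandom(16)
        ks = self._keystream(password, salt, len(plaintext))
        self.mailbox[user] = (salt, bytes(a ^ b for a, b in zip(plaintext, ks)))

    def fetch(self, user, password):
        salt, ct = self.mailbox[user]
        ks = self._keystream(password, salt, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

srv = Server()
srv.store("alice", "hunter2", b"meet at noon")
print(srv.fetch("alice", "hunter2"))
```

The point is structural: because `store()` necessarily receives the password and the plaintext together, "we encrypt at rest" is a promise the client can never verify, which is exactly the critique in the quote.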


This one is about the old Lavabit. It corresponds to the "trustful mode" of the new Lavabit.


Moxie is not impressed with anything other than Signal.


He lists two projects unrelated to Signal in the article. There is no mention of Signal.


[flagged]


Yeah, but the GP was saying that Ladar Levison must have made a trustworthy system because he's Ladar Levison. However, I was pointing out that there were serious issues with the trustworthiness of the original Lavabit.


Snowden is neither a security expert nor a cryptographer. He used Cryptocat and Lavabit, for instance; he was (like most people) unable to independently assess the quality of their security guarantees and believed their claims.


He has said that he used PGP in his emails with Poitras and Greenwald because he knew from personal experience that, properly implemented, the NSA was unable to decrypt messages protected with it.


Are you saying he didn't use Lavabit and Cryptocat?


He used PGP over Lavabit. So even though Lavabit was compromised, the content of his emails is secure.


He didn't go to the mat for his users. He built a service that he knew was vulnerable to standard legal process (or if he didn't, he was amazingly incompetent) but sold it as if it were safe from the government, duping even Edward Snowden. The government, naturally, engaged in standard legal process, and found that he possessed a key that would give the government access to everything they needed, and that he was capable of turning it over. So he was ordered to turn it over, which should have surprised no one.

He did surrender the key, although by printing out the key in 4-point font (unclear if he was buying time, or just thought contempt charges sounded fun). After the government pressed him harder, he shut down the service days later. He didn't disclose that he had surrendered the key; the public found out when the court documents, including the key itself, were unsealed.

If something can't be done securely, don't tell your users that it can be done securely. If you know you can't win, there's honor in refusing to lose without a fight. But there's no honor in first promising people that you'll win, and there's quite a bit of dishonor in asking people to pay you to win.

Lavabit v1 should never have been built. Many people were technically qualified to build something like it (it's email, which constrains the design significantly), had the resources, and chose not to. The fact that Levison built it, and that he hasn't apologized for building it, demonstrates that he's untrustworthy. This is not to say that he's a bad person; everyone makes mistakes, and I wouldn't trust myself to build a secure email service singlehandedly, because I know what mistakes I've made and what sort of personality flaws I have. It's just a statement that the required level of trust is extremely high, and Levison hasn't demonstrated it.

Lavabit v2's "Trustful" mode has all of the same flaws as Lavabit v1. He writes about his "free and open source server" and asks how you feel about "trusting our servers," when that was never the problem. If you can magically make sure that the government doesn't have access to your system, a standard unencrypted email server will do just fine. If you can't, they'll issue the exact same legal order to Lavabit v2 that they did to v1, and it'll be just as effective.


Phil Zimmermann

He went to prison over PGP.

But then the mail service he was involved in (silent circle mail) shut down at the same time as lavabit.


He never spent time in prison, but he was investigated intensely by the US gov't.


You seem to be correct. He never went to prison for PGP.

But apparently he thought up PGP while in prison for nuclear protests.


Vincent Canfield


The owner of Cock Mail / cock.li is at least completely realistic about what can be expected from an email provider. Goofy domain names aside, being completely frank about privacy and security realities is what you want.


Sadly, he seems MIA.


He has a phone number, go call him.

