torginus's comments | Hacker News

> An oft brought-up issue is that the code on crates.io and in Git don’t always match.

I don't understand why this is the case. IMO it should be a basic expectation that a given package is built from a frozen, dedicated git commit (identified by its hash) and a likewise frozen build script. The build should be deterministic, the build script run by some trusted vendor (maybe GitHub), and the end result hashed.

If there's any doubt about the integrity of a package, the above process can be repeated, and the artifacts checked against a known hash.

This would make builds fully auditable.
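
As a minimal sketch of the verification step - assuming the sha2 and hex crates, with a placeholder artifact path and digest - checking a rebuilt artifact against a published hash could look like this:

    // Cargo.toml (assumed): sha2 = "0.10", hex = "0.4"
    use sha2::{Digest, Sha256};
    use std::fs;

    /// Hash a build artifact and compare it to the digest published
    /// by the registry / trusted build vendor.
    fn verify_artifact(path: &str, expected_hex: &str) -> std::io::Result<bool> {
        let bytes = fs::read(path)?;
        let digest = Sha256::digest(&bytes);
        Ok(hex::encode(digest) == expected_hex)
    }

    fn main() -> std::io::Result<()> {
        // Both values below are placeholders, purely for illustration.
        let ok = verify_artifact(
            "target/package/mycrate-1.0.0.crate",
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
        )?;
        println!("artifact matches published hash: {ok}");
        Ok(())
    }

If the trusted vendor's build really is deterministic and pinned to one toolchain, anyone repeating it should land on the same digest.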


Build scripts often look for system libraries, generate larger artifacts, etc. It's not as black and white as you make it out to be.

WTF does this even mean - it's like saying nobody owes me asbestos-free food. There sure is demand for it, certain customers won't find a "mostly not backdoored" supply chain good enough, and they won't do business in your ecosystem if you can't give them that.

This is the classic open-source problem. Open-source maintainers feel like they don't owe anything to the people making money off their software for free; meanwhile customers want working code and are willing to pay for it - your software being free is just a nice perk.

As much as I understand the maintainers' standpoint, history has proven the customer is always right, and the only projects that have staying power are the ones that meet the quality bar.


I would say that's kind of a conspiracy-y explanation. Big companies in Munich either have their campuses on the outskirts of the city, so that people can commute and park without flooding it, or in the heart of the city, as that is seen as more prestigious.

Lots of companies have flip-flopped between the two, and that's what happened in MS's case.

Tbh I'm not saying MS didn't play dirty in general, just not necessarily in this case.


I mean the idea has merit in and of itself, but I think this should be more of an on-prem thing - just repurposing old laptops junked by IT as servers.

I mean, we literally did this at one of my previous places. We took all the old laptops that were about to be junked by IT and used them as a Selenium test farm. We saved like $100k per month on the AWS bill, at the cost of basically just electricity.

If all the machines were running Windows, the difference would've been even more drastic.

What I don't get is that we have these autoscaling technologies that allow software to be fault-tolerant to hardware failure, yet companies still insist on buying expensive server-grade HW for everything.


Been through this recently in a fairly large enterprise.

We have some in-house software which runs in k8s. Total throughput peaks at about 1 Mbit a second of control traffic - it's controlling some other devices which are on dedicated hardware. Total of 24 GB of RAM.

The software team say it needs to run across 3 different servers for resilience purposes.

The VM team want to use Nutanix as their VM platform, so they can live-migrate VMs from one host to another.

They insist on 25Gbit networking, and for resilience purposes that needs to be MLAG'd (multi-chassis link aggregation).

The network team also have to have multiple switches and routers, again for resilience.

So rather than having 3 $1000 laptops running bare metal kubes hanging off a pair of $500 1G switches eating maybe 200W, we have a $140k BOM sucking up 2kW.

When something goes wrong, all those layers of resilience will no doubt fight each other. The hardware drops, so the VM freezes as it's restored onto another host, so k8s moves the workloads, then the VM comes back and k8s gets confused (maybe? I don't know how k8s works).

It's all needlessly overspecced, costing 30 times as much as it should.

But from each individual team's perspective it makes sense. They don't want to be blamed if it doesn't work, and they don't have to find the money - it's different departments.


One of my favorite bits of hardware is a UPS. I’ve played with several over the years, from fancy server-grade rack-mount APC stuff to inexpensive edge stuff. Without exception, downtime is increased by use of a UPS. I used to plug a server with redundant PSUs into the UPS and the wall so it could ride out UPS glitches.

Even today, a UPS that turns itself back on after power goes out long enough to drain the battery and is then restored is somewhat exotic. Amusingly, even the new UniFi UPSes, which are clearly meant to be shoved in a closet somewhere, supposedly turn off and stay off when the battery drains according to forum posts. There are no official docs, of course.


Sounds like crappy UPSes. Even the cheap old used eBay Eaton UPSes I have in my homelab have a setting for "Auto restart" and the factory default setting is "enabled".

But even rackmount UPSes are more of an "edge" sort of solution. A data center UPS takes up at least a room.


I assume that datacenter UPSes are better, but I've never used one except as a consumer of its output.

But I’ve had problems with UPSes that advertise auto-restart but don’t actually ship with it enabled. And that fancy APC unit was sold by fancy Dell sales people and supported directly by real humans at APC, and it would still regularly get mad, start beeping, and turn off its output despite its battery being fully charged and the upstream power being just fine (and APC’s techs were never able to figure it out either).


> I assume that datacenter UPSes are better [...]

I don't know about specific datacenter models, but in our colocation there are humans available 24/7. So the UPS might not start after failure, but there's a human to figure it out.


Most (all?) decent datacenters also have generators on site, and the intent is that the UPS will never run out of charge. So the fully-discharged case is an error and it might be intentional to require intervention to recover.

Yeah, some people treat UPSes as "backup power" but that's not really what they're intended for. Their intended purpose is to bridge the gap during interruptions... either to an alternative power source, or to a powered-off state.

Sure, but when you stick a UPS in the closet to power your network or security cameras or whatever for a little while if there is a power interruption, you expect:

a) If the power is out too long for your UPS (or you have solar and batteries and they discharge overnight or whatever) that the system will turn back on when the power recovers, and

b) You will not have extra bonus outages just because the UPS is in a bad mood.


I completely agree with B. But alas, people love buying shitty cheap UPSes.

But A is along the lines of the misconception that I'm referring to... There should be no such thing as "the power being out too long for your UPS". A UPS isn't there to give you a little while to ignore the problem, it's there to give you time to address it. Either by switching to another source of power, or to power off the equipment.

Now, the reason every UPS that supports auto-restart has it as a configurable option is that you often don't want auto-restart, for many reasons, e.g.:

* a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

* a catastrophic failure (because the battery shouldn't be dead) could be an indication of other issues that need to be addressed before power on

* powering on the equipment may require staggering to prevent inrush current overload

The whole use case of "I'm using the UPS to run my equipment during an outage" is kind of an abuse of their purpose. It's commonly done, and I've done it myself. But it's not what they're for.

But also, if you want a UPS that auto-restarts -- they exist -- but you get what you pay for.


Some of these are IMO a bit silly:

> a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

A lot of devices are unconditionally safe to shut down. Think network equipment, signs, exit lights, and well designed computers.

> a catastrophic failure (because the battery shouldn't be dead) could be an indication of other issues that need to be addressed before power on

This is such a weird presumption. Power outages happen. Long power outages happen. Fancy management software that triggers a controlled shutdown when the SOC is low might still leave nonzero remaining load. In fact, if you have a load that uses a UPS to trigger a controlled shutdown, it’s almost definitional that a controlled shutdown is not a catastrophe and that the system should turn back on eventually.

All of your points are valid for serious datacenter gear and even for large server closets, but for small systems I think they don’t apply to most users, and I’m talking about smaller UPSes.


> > a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

> A lot of devices are unconditionally safe to shut down.

Yeah, but that doesn't mean you want to expose them to brownout conditions when your UPS is depleted. If the power is continuing to flip on and off, it's better to just leave it off if you don't have the battery to prevent even short interruptions. A good UPS can do this automatically for you. A cheap one will just stay off and let you respond to the outage.

> This is such a weird presumption.

It wasn't a presumption I was making for all users -- but an example of why some users might not want auto-restart as a feature. Of course, if you want auto-restart as a feature, you can buy a UPS that has it as a feature and turn it on.

> they don’t apply to most users, and I’m talking about smaller UPSes.

Yeah, I know the situation: Someone has a network closet on a budget with a UPS they've sized to get them a few minutes of runtime. They put a UPS on the BOM because it checks a box. So they buy a low-end UPS that either doesn't have the feature, or it doesn't work right.

The solution is just to buy the right UPS for the thing they were trying to do... and test it.


The funniest thing about huge enterprises is that they often have processes so convoluted and restrictive for everything that getting stuff done by the book is basically impossible, so people get creative within the limitations, and we often end up with the sketchiest solutions in existence.

I hope the words 'web server hosted in Excel VBA' illustrate the magnitude of horrors that can emerge in these situations.


Raspberry Pi on a network-controlled power supply to rebroadcast UDP broadcast traffic across subnets.

I saw an entire physical switch configured for bridging VLANs. It was even labeled as such. 802.1q is hard and confusing if you don't know what you're doing.

which is exactly why this being different departments makes no sense

one infra team - provides the entire platform

any other approach and you’re dicking around


Enterprise hardware comes with companies that your company can call for support when things go sideways. If they're using a rack full of 5-year-old ThinkPads, they're on their own when something breaks.

I believe they are referring to the dumpster support model. The hardware is so cheap that, if it fails, you toss it in a dumpster and buy more by the gross. Using Kubernetes to spread loads across your less reliable nodes ensures high availability. Sometimes this can be even more reliable because you are regularly testing your recovery and backup features and your hardware is more varied.

The downside is that if some piece of firmware or hardware has a vulnerability you have a larger attack surface.


There's a ton of out-of-support enterprise gear racked up in data centers. It can be done if you have a plan to handle failures.

But that's still a lot easier than managing laptops, which are unwieldy in a DC for a lot of other reasons.


We didn't have support, and we didn't need it, as the hardware was essentially EOL - it probably would've sold for like 20% of the new price. We just chucked Selenium Grid on them, locked them in the storage room, and if they died, they died. (They didn't die a lot though, which is surprising, as we had quite a few cheap, sketchy machines in there as well.)

I can deconstruct my workflow to the point where the benefits of plugging outdated hardware into the project are calculable. Info, transformations, etc. that I don't need in near real time feel like they're trending towards the price of electricity.

Since I've been looking at this situation from a resource point of view for a bit, I see obvious savings in slowing down certain accepted processes. For example, an entity that continuously updates needs to be continuously scraped, while an entity that publishes once a day only needs to be hit once a day.


Seems like they'd have to find another 5 year old Thinkpad.

> What I don't get is that we have these autoscaling technologies that allow software to be fault-tolerant to hardware failure, yet companies still insist on buying expensive server-grade HW for everything.

Simple: the cost of managing the hardware scales with its heterogeneity and unreliability. Even just dealing with the dozens of different form factors (air vent placement!) and power adapters of laptops would be a big headache.


> We saved like $100k per month on the AWS bill

Did you also compare the bill to places that are not AWS, not Azure, and not GCP?


I would agree with you about autoscaling if ECC were enabled in every consumer computer :'/

> could we make a 2 inch diameter turbine engine reliably

I mean, technically yes, but in practical terms, no - turbines run on the Brayton cycle, where the area-under-the-curve efficiency is determined by the peak pressures the engine can withstand. If you scale down the turbine proportionally, it gets structurally weaker, meaning its efficiency drops, and thrust/weight decreases.

If you then thickened its walls, you would be able to handle higher pressures, but weight would increase - thrust/weight decreases again.

So the correct answer is: if you really wanted to make a small turbine, you could certainly make one, but your design would be less optimal than a bigger one, so unless your goal is to go small, you would make one as big as you can get away with.
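
For reference, the ideal Brayton cycle's thermal efficiency depends only on the pressure ratio and rises monotonically with it, which is the link between peak pressure and efficiency:

    \eta_{\text{th}} = 1 - \left( \frac{1}{r_p} \right)^{(\gamma - 1)/\gamma}

where r_p is the compressor pressure ratio and \gamma the heat capacity ratio of the working gas.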


Considering the many folk tales of giants and dwarves featuring in all sorts of cartoons, or the toy trucks and model trains I played with as a kid, it's interesting that scaling in real life works very poorly - even beyond such simple principles as the square-cube law. Think of something like a pressure vessel with a certain wall thickness that needs to hold 100 bar - the thickness needed is the same regardless of whether you have something the size of a golf ball or a swimming pool.
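
(The square-cube law mentioned above, for reference: scale a shape by a factor L and area grows as the square while volume, and thus mass, grows as the cube:

    A \propto L^2, \qquad V \propto L^3 \qquad \Rightarrow \qquad A/V \propto 1/L

so smaller machines have proportionally more surface area, and losses that scale with surface hurt them more.)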

This is IMO why scaling down combustion engines beyond a certain point makes little sense - you don't gain proportionally in weight, since the wall thicknesses are determined by the pressures the engine has to endure, which stay the same. This is why model engines suck: they're not only less powerful than big ones, but less powerful per pound.


The whole concept of the ceasefire is absurd - it's like the joke that to combat the rise of suicides, the government made them punishable by death.

There's no enforcement mechanism, only big dog, small dog logic. What happens if one party breaks the ceasefire? The other starts shooting?


Well, we've already found out, because Israel broke it. With Gaza they've gotten used to very flexible ceasefires where you can still bomb the other side repeatedly and the ceasefire "holds". Iran has shown that this will not work with them, and they closed the strait again.

Um, yes?

My two cents is that LLMs are way stronger in areas where the reward function is well known, such as exploitation - you break the security, you succeed.

It's much harder to establish what counts as a usable, well-architected, novel piece of software, thus in that area progress isn't nearly as fast, while here you can just gradient-descent your way to world domination, provided you have enough GPUs.


Construction is always more expensive than destruction

To be pedantic: construction with an interconnected complex set of durable goals is hard. The general rule is that optimization over a constrained space is expensive.

But standing up a house of cards is pretty cheap. Examples include: shell corporations, formulaic business plans, AI slop, surface-level conversation, color by numbers, tract housing, cravenly appealing only to the base desires of people, &c. (This might be the first time I've connected the dots in this way -- and it explains my distaste for all those things.)

But "cheap" isn't necessarily insecure. Installing bollards around building entrances is relatively cheap insurance against vehicular attacks. So this is more complicated than it seems. "Fast" doesn't mean unsafe. Even "hastily created" software _could_ be (relatively) secure if it was highly constrained to provably hardened patterns. A big problem comes when attacking a cheap target builds capability for the attacker. In a way, this analogous to how viruses attack. Start with an easy target, hijack the cell machinery, multiply, repeat.

Maybe this formulation is accurate: if you create something beyond your ability to understand it, then get ready to get pwned. "Staying in one's lane" in this sense might be 'safe', at least narrowly speaking (unless an entire industry is operating in a state of delusion, which is arguably the case now).


offense has a clear reward function, but so does detection when you frame it right. "did this process try to read ~/.ssh/id_rsa?" is just as binary as "did the exploit land?" the reason defense feels harder is that people frame it as architecture review (fuzzy, subjective) instead of policy enforcement (binary, automatable). we keep trying to make AI understand intent when we should be writing rules about actions. a confused deputy from 1988 doesn't care why the request came in, it cares whether the caller is authorized. same principle applies here.
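
a minimal sketch of that framing in Rust - the Policy type and the allowlist here are hypothetical, purely for illustration, not a real enforcement layer:

    use std::path::Path;

    /// Hypothetical allowlist policy: a caller may only read paths
    /// under directories it has been explicitly granted.
    struct Policy<'a> {
        allowed_prefixes: &'a [&'a str],
    }

    impl<'a> Policy<'a> {
        /// Binary decision: authorized or not. No reasoning about intent.
        fn may_read(&self, requested: &Path) -> bool {
            self.allowed_prefixes
                .iter()
                .any(|prefix| requested.starts_with(*prefix))
        }
    }

    fn main() {
        let policy = Policy { allowed_prefixes: &["/var/app/data"] };
        // "did this process try to read ~/.ssh/id_rsa?" becomes a yes/no check
        assert!(!policy.may_read(Path::new("/home/user/.ssh/id_rsa")));
        assert!(policy.may_read(Path::new("/var/app/data/config.json")));
    }

the point being: the check is a pure function of the action, not of anyone's intent.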

Thank god, finally someone said it.

I don't know the first thing about cybersecurity, but in my experience all these sandbox-escape RCEs involve a step of hijacking the control flow.

There were attempts to prevent various flavors of this, but IMO, as long as dynamic branches exist in some form - like dlsym(), function pointers, or vtables - we will not be rid of this class of exploit entirely.

The last of those is the most concerning, as this kind of dynamic branching is the bread and butter of OOP languages - I'm not even sure you could write a nontrivial C++ program without it. Maybe Rust would be a help here? Could one practically write a large Rust program without any sort of branch to dynamic addresses? Static linking and compile-time polymorphism only?
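
To make the contrast concrete, a minimal Rust sketch (the trait and type here are hypothetical): a call through dyn Trait is an indirect branch via a vtable, while a generic call is monomorphized into a direct, statically resolved call:

    trait Handler {
        fn handle(&self, input: &str) -> String;
    }

    struct Upper;

    impl Handler for Upper {
        fn handle(&self, input: &str) -> String {
            input.to_uppercase()
        }
    }

    // Dynamic dispatch: the call goes through a vtable pointer at
    // runtime, i.e. a branch to a dynamic address.
    fn run_dyn(h: &dyn Handler, input: &str) -> String {
        h.handle(input)
    }

    // Static dispatch: monomorphized per concrete type at compile
    // time, so the call target is a fixed address in the binary.
    fn run_static<H: Handler>(h: &H, input: &str) -> String {
        h.handle(input)
    }

    fn main() {
        let h = Upper;
        println!("{}", run_dyn(&h, "hi"));    // indirect call via vtable
        println!("{}", run_static(&h, "hi")); // direct, statically resolved call
    }

The generic version costs binary size (one copy per concrete type) but leaves no dynamic call target to hijack; the trait-object version keeps one copy but adds the runtime indirection.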


Everybody has been saying this for the last 15 years.

We're going to have to put all the bad code into a Wasm sandbox.
