Hacker News | zephen's comments

The problem is that the same word is used for different things.

The comment you are responding to was correct in what "property" means in some settings.

The article itself says:

> A property is a universally quantified computation that must hold for all possible inputs.

But, as you say,

> but as those terms were adopted into less-than-academic contexts, the meanings have diluted.

And, in fact, this meaning has been diluted, and is simply wrong from the perspective of what the term originally meant in math.

You are right that a CPU register is a property of the CPU. But the mathematical term for what the article is discussing is invariant, not property.

Feel free to call invariants properties; idgaf. But don't shit all over somebody by claiming to have the intellectual high ground, because there's always a higher ground. And... you're not standing on it.


My point was not that there exists some supreme truth about what words mean and that either you use words "correctly" or you're an idiot.

Yes, words have different meanings in different settings, but that's not the dilution I was referring to. It's absolutely fine that a word can be used differently in different places.

The "problem", such as it is, is that there are people who use terms from programming languages research to discuss programming languages and they use these terms inaccurately for their context, leading to a dilution in common understanding. For example, there is a definitive difference between a "function" and a "method", and so it is inaccurate to refer to functions generally as "methods". However, I see people gripe about interactions where these things are treated separately, and that is what I am addressing.

The parent comment to mine tried to offer some examples of such terms within the context of programming languages, so my corrections were constrained to that context. But your correction of my point is, I think, incorrect, because the meaning you are trying to use against me is one from a different context than the one we're all talking about.

There's no intellectual high ground here; my point was not to elevate myself above the parent comment. My point was to explain to them that they were, from the point of view of people like the author of the post (I assume), simply incorrect. There's nothing wrong with being wrong from time to time.


Too many MBAs, not enough concrete.

> The economy is not zero sum.

This is true.

But it's not always positive sum, either.

> Megacorporations making profit is not some evil that needs to be stopped.

Externalities are a thing. It's not about the profit per se, but about how (a) the making of that profit might negatively impact others, and (b) the deployment of that profit in pursuit of rent-seeking and other antisocial behavior in order to ensure its continued existence might also negatively impact others.


Externalities are a thing, but this isn’t exactly dumping toxic waste into a river.

I disagree with that. From what I've read, data centers are going to have some real-world negative effects on human populations.

No, it's more just drying the river up entirely.

https://www.texastribune.org/2025/09/25/texas-data-center-wa...


That's a bit different.

- The ROM was used to build the emulator (which didn't include the ROM but was only able to use it like any other hardware)

- Then the ROM was used to derive a specification and do A/B testing on (similar to Phoenix BIOS), and a different team coded a replacement ROM

There is no cleanroom inside an LLM.


despicably detrimental

> The RVA23 profile was ratified a few months ago

If you're like me, you're suffering the typical time dilation that comes with getting old.

For everybody else, this was 18 months ago.


> why involve Git at all then?

I made a similar point 3 weeks ago. It wasn't very well received.

https://news.ycombinator.com/item?id=47411693

You don't actually need source control to be able to roll back to any particular version that was in use. A series of tarballs will let you do that.
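A minimal sketch of that tarball workflow (the function names and the `tree/` layout are my own invention for illustration, not anyone's actual tooling):

```python
import pathlib, tarfile, tempfile, time

def snapshot(src_dir, dest_dir):
    """Archive src_dir into a timestamped tarball under dest_dir; return its path."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    tarball = dest / f"{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(src_dir, arcname="tree")
    return tarball

def rollback(tarball, target_dir):
    """Restore a previously taken snapshot into target_dir (under a tree/ prefix)."""
    with tarfile.open(tarball, "r:gz") as tar:
        tar.extractall(target_dir)

# Round trip: snapshot v1, overwrite it, roll back, and recover v1.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, "src")
    src.mkdir()
    (src / "a.txt").write_text("v1")
    ball = snapshot(src, pathlib.Path(tmp, "snaps"))
    (src / "a.txt").write_text("v2")
    out = pathlib.Path(tmp, "restore")
    rollback(ball, out)
    restored = (out / "tree" / "a.txt").read_text()
```

Rollback works, but notice what's missing: there's no record of *why* v2 differs from v1, which is the point of the next paragraph.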

The entire purpose of source control is to let you reason about change sets to help you make decisions about the direction that development (including bug fixes) will take.

If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners, or are they still using it because they don't want to admit to themselves that they've completely lost control?


> are they still using it because they don't want to admit to themselves that they've completely lost control?

I think this is the case, or at least close.

I think a lot of people are still convincing themselves that they are the ones "writing" it because they're the ones putting their names on the pull request.

It reminds me of a lot of early Java, where it would make you feel like you were being very productive because everything that would take you eight lines in any other language would take thirty lines across three files to do in Java. Even though you didn't really "do" anything (and indeed Netbeans or IntelliJ or Eclipse was likely generating a lot of that bootstrapping code anyway), people would act like they were doing a lot of work because of a high number of lines of code.

Java is considerably less terrible now, to a point where I actually sort of begrudgingly like writing it, but early Java (IMO before Java 21 and especially before 11) was very bad about unnecessary verbosity.


> If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners,

does it have to be free to be useful? the CD part is even more important than before, and if they still use git as their input, and everyone including the LLM is already familiar with git, what's the need to get rid of it?

there's value in git as a tool everyone knows the basics of, and as a common interface of communicating code to different systems.

passing tarballs around requires defining a bunch of new interfaces for those tarballs, which adds a cost to every integration that you'd otherwise get practically for free if you used git.


A series of tarballs is really unwieldy for that though. Even if you don't want to use git, and even if the LLM is doing everything, having discrete pieces like "added GitHub oauth to login" and "added profile picture to account page" as different commits is still valuable for when you have to ask the LLM "hey about the profile picture on the account page".

A series of tarballs is version control.

Git gives you the series of past snapshots if that's all you want it for, and it's infrastructure you don't need to re-invent.


Your example is only for dumping memory.

> this is a weak argument for what computers should do; if LE is more efficient for machines then let them use it

Computers really don't care. Literally. Same number of gates either way. But for everything besides dumping it makes sense that the least significant byte and the least significant bit are numbered starting from zero. It makes intuitive mathematical sense.
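The "numbered starting from zero" intuition can be made concrete with a quick sketch (plain Python, just for illustration): in little endian, byte i simply carries weight 2**(8*i), so reassembling a value from its bytes is a straightforward positional sum.

```python
# Little-endian bytes of 0x01020304: byte i carries weight 2**(8*i).
data = bytes([0x04, 0x03, 0x02, 0x01])
value = sum(b << (8 * i) for i, b in enumerate(data))
assert value == 0x01020304
assert value == int.from_bytes(data, "little")
```

With big endian the weight of byte i depends on the total length of the value, which is where the awkwardness discussed below comes from.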


> Same number of gates either way

Definitely not, which is why many 8-bit CPUs are LE. Carries propagate upwards, and incrementers are cheaper than a length-dependent subtraction.


So, to be clear, I was writing about when you design a computer. It truly is the same number of gates either way. I have written my fair share of verilog. At one level, it's just a convention.

For the use of a computer, yes, if you are doing multi-word arithmetic, it can matter.

OTOH, to be perfectly fair and balanced, multi-word comparisons work better in big-endian.
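That claim is easy to check with a sketch (Python's `struct` standing in for machine words): big-endian byte strings sort the same way the numbers they encode do, so a plain byte-wise comparison suffices, while little-endian strings generally don't.

```python
import struct

a, b = 0x01020304, 0x0000FFFF

# Pack each value both ways: ">I" is big-endian, "<I" is little-endian.
be_a, be_b = struct.pack(">I", a), struct.pack(">I", b)
le_a, le_b = struct.pack("<I", a), struct.pack("<I", b)

# Big-endian: lexicographic (byte-by-byte) order matches numeric order.
assert (be_a > be_b) == (a > b)

# Little-endian: the low byte comes first, so byte-wise order can disagree.
assert (le_a > le_b) != (a > b)
```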


Not only dumping, but yes, I agree it only matters when humans are in the loop. My most annoying encounters with endianness were when writing and debugging assembly, and I assure you dumping memory was not the only pain point.

I've done plenty of assembly language. It was the bulk of my career for over 20 years, and little endian was just fine, and big endian was not.

I can easily imagine someone getting used to LE, but how is BE not fine as a human writing asm?

If you're mapping datatypes, or dealing with bit arrays.

The root of the problem, which comes up in far more scenarios than you might think, is that in big endian, the location corresponding to 2**n within an integer maps to byte X - n/8 - 1, where X is the number of bytes in the mapped-to data structure. If it's true big-endian, like some IBM processors, the bit maps to bit number 7 - (n%8); on most processors, which are mixed-endian, such as the M68K, it's merely n%8.

With little endian, the byte location is n/8 and the bit location is n%8.

A trite example of when this occurs is that you have a description of bit numbers within 4-byte hardware registers and you want to develop an integer mask for those.
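A minimal sketch of those mappings (the function names are mine, purely for illustration):

```python
def le_byte(n):            # little endian: byte holding bit 2**n (n = 0 is the LSB)
    return n // 8

def be_byte(n, X):         # big endian: same bit, counted from the far end of X bytes
    return X - n // 8 - 1

def mixed_endian_bit(n):   # bit number within the byte, M68K-style
    return n % 8

def true_be_bit(n):        # bit number within the byte, IBM-style "true big-endian"
    return 7 - n % 8

# Bit 9 of a 4-byte hardware register (mask 1 << 9 == 0x00000200):
assert le_byte(9) == 1 and mixed_endian_bit(9) == 1
assert be_byte(9, 4) == 2 and true_be_bit(9) == 6
```

The little-endian pair is the one that needs no knowledge of the structure's total size, which is the point being made here.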


> Computers really don't care. Literally. Same number of gates either way.

Eh. That depends; the computer architectures used to be way weirder than what we have today. IBM 1401 used variable-length BCDs (written in big-endian); its version of BCDIC literally used numbers from 1 to 9 as digits "1" to "9" (number 0 was blank/space, and number 10 would print as "0"). So its ADD etc. instructions took pointers to the last digits of numbers added, and worked backwards; in fact, pretty much all of indexing on that machine moved backwards: MOV also worked from higher addresses down to lower ones, and so on.


My comment was in response to the parent's

> FWIW, this is a weak argument for what computers should do; if LE is more efficient for machines then let them use it

I should have fleshed it out more fully, but basically, it was about how when you design an ALU, it's literally the same number of gates whether you swap the pins when you connect it to the rest of the system or not.

Using the computer is, of course, a different story that depends a lot on design decisions made when implementing it, and depending on your usage, endianness can matter more.


> BE is intuitive for humans who write digits with the highest power on the left.

But only because when they dump memory, they start with the lowest address, lol.

Why don't these people reverse number lines and Cartesian coordinate systems while they're at it?


A lot of graphics APIs do actually reverse the y-coordinate for historical reasons.

Right. I've done plenty of postscript/PDF.

But 99% of the time the x-coordinate and the number increment from left to right.


From personal experience, especially don't point out that you foresaw the problem and warned against the path in a putative "lessons learned" meeting, lest ye be admonished that the true meaning of "disagree and commit" includes "and forget this conversation ever happened," even though the singular point you were trying to make at the "lessons learned" meeting was about how paying attention to concerns might actually be useful in future projects.
