Hacker News | new | past | comments | ask | show | jobs | submit | mdasen's comments

> Bad actors can strip sources out

I think the issue is that it's not just bad actors. It's every social platform that strips out metadata. If I post an image on Instagram, Facebook, or anywhere else, they're going to strip the metadata for my privacy. Sometimes the EXIF data has geo coordinates. Other times it's less private data like the file name, file create/access/modification times, and the kind of device it was taken on (like an iPhone 16 Pro Max).

Usually, they strip out everything, and that's likely to include C2PA unless they start whitelisting it to be kept, or even using it to flag images on their site as AI-generated.

But for now, it's not just bad actors stripping out metadata. It's most sites that images are posted on.
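As a rough illustration of how easy that stripping is: EXIF travels in a JPEG's APP1 segment, and C2PA manifests are embedded in APP11 (JUMBF), so a platform's pipeline just has to drop those segments when it re-encodes an upload. A simplified stdlib-only sketch (real pipelines decode and re-encode the whole image, and this ignores edge cases like 0xFF fill bytes and other containers):

```python
def strip_metadata_segments(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) and APP11 (JUMBF, where C2PA lives) from a JPEG."""
    out = bytearray(jpeg[:2])  # keep the SOI marker (FF D8)
    i = 2
    while i < len(jpeg) - 1 and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, copy the rest
            out += jpeg[i:]
            break
        # Each segment's 2-byte length field counts itself plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker not in (0xE1, 0xEB):  # drop APP1 (FF E1) and APP11 (FF EB)
            out += segment
        i += 2 + length
    return bytes(out)
```

Run against a synthetic JPEG byte stream, the EXIF payload disappears while the image segments survive intact, which is exactly what happens to provenance data on upload today.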


There’s actually a part of the NY state budget right now (TEDE part X, for my law nerds) that’d require social media companies to preserve non-PII provenance metadata and surface it to the user, if the uploaded image has it.

LinkedIn already does this (see https://www.linkedin.com/help/linkedin/answer/a6282984), and X's "Made with AI" feature preserves the metadata but doesn't fully surface it (https://www.theverge.com/ai-artificial-intelligence/882974/x...)


You're implying social platforms aren't bad actors ;)

In seriousness, social platforms attributing images properly is a whole frontier we haven't even begun to explore, but we need to get there.


In the early days, Netflix benefited from other media companies not recognizing streaming for what it was: their replacement. They licensed content to Netflix cheaply without thinking about how it would impact DVD sales or cable tv subscriptions.

It's kinda like how IBM didn't see the value in software and that let Microsoft become Microsoft.


If this does bypass their own (and others') anti-AI crawl measures, it'd basically mean that the only people who can't crawl are those without money.

We're creating an internet that reinforces those who already have power and shuts everyone else out. As crawling becomes difficult and expensive, only those with previously collected datasets get to play. I certainly understand individual sites wanting to limit access, but it seems unlikely that they're actually limiting the big players' access - and they may even be helping them, since smaller players won't be able to compete as well.


Common Crawl has free egress


As someone clumsy, I'm so grateful that my MacBook Air can take a beating. It has one slight dent of about 1mm in the 4 years I've had it and I definitely drop it or knock it off a desk or something a few times a year.

I'll take the extra weight of aluminum (0.3lb, 130g). Yes, someone might say the ThinkPad X1 Carbon is 14", but the 13" MacBook Air actually has a 13.6" screen.

If I were in the market for a PC laptop, I'd definitely take a look at the ThinkPad X1 Carbon, but I'm also not worried about the weight of my MacBook Air. The Intel X1 Carbons are on sale right now since Panther Lake, a big upgrade, is coming soon, but even on clearance they aren't cheap. An X1 Carbon with 32GB RAM and 1TB storage (Ultra 7 268V, the cheapest configuration thanks to the sale) costs $1,679, while a similar MacBook Air costs $1,699 - and the M5 has 48% better single-core performance and 56% better multi-core performance (Geekbench). A 16GB/512GB (Ultra 5 225U) X1 Carbon is $1,538 compared to $1,099 for a MacBook Air - and there the M5 has a 74% single- and multi-core advantage.

Panther Lake might narrow the performance gap, but early indicators suggest it won't. Even against the top-of-the-line Ultra X9 388H, the M5 has a 36% single-core advantage, while the Ultra X9 388H is 3% faster multi-core. And I'm not sure the higher-wattage "H" processors even work for something like an X1 Carbon.

Against the highest non-H Panther Lake processor (Ultra 7 365), the M5 is 51% better single-core and 58% better multi-core. Maybe we'll see better, but it looks like Intel isn't closing the gap in 2026.


For fascism, it's not always about getting something you think is a lot. It's about a power relationship. Trump has demonstrated that Nvidia will bow to his will.

It's also potentially an implementation of the foot-in-the-door technique (https://www.simplypsychology.org/compliance.html), a common manipulative strategy where you get someone to do a small favor for you, which makes them much more likely to do a large favor for you later.


No. When you go into a Costco, Costco is a retailer who bought merchandise to sell to you. When you go to Amazon, a large amount of the products are being sold by third party vendors while Amazon is taking a large cut.


That doesn't seem to be the case across Europe based on current sales.

Looking at market share in the EU+EFTA+UK from 2025 to 2026:

VW Group went from 26.8% to 26.7%. Stellantis went from 15.5% to 17.1%. Renault Group went from 9.8% to 8.7%. Hyundai Group 8.4% to 7.6%. BMW Group 7.0% to 6.9%. Toyota Group 8.0% to 7.2%. SAIC Motor was flat at 2.0%. BYD 0.7% to 1.9%. Tesla 1.0% to 0.8%.

So it doesn't really seem like BYD is eating into the sales of European manufacturers yet. VW + Stellantis + Renault + BMW + Mercedes + Volvo + Jaguar Land Rover was 66.9% in 2025 and it's 67.1% in 2026, an increase of 0.2 percentage points (looking at just VW + Stellantis + Renault, it was an increase of 0.4pp).

We'll see what happens going forward, but Chinese cars aren't killing it yet. SAIC Motor is flat. BYD is doing very well, but it's a lot easier to grow when you're small. I think that Chinese cars will present challenges, but I'm less sure that it's over for European automakers. Right now, European automakers are marginally increasing their market share (probably more noise than anything, but not evidence of decline).

I think BYD is a strong company and I think they'll continue to gain market share, but will others? SAIC has seen modest European growth since 2024, but nothing really threatening; they're sitting at 2% market share, and their modest growth seems to be turning into no growth. Chery is really small. Geely is ultra small without Volvo.

So it feels like it's really the BYD story. BYD is the company actually making inroads and growing at a significant rate. And I don't think that a single company can destroy the European auto industry. It's possible BYD could become 10-20% of the European market, and that would be a major win for them and make a significant dent in competitors. But do you see them becoming more? Are there other companies that seem promising?


> And I don't think that a single company can destroy the European auto industry.

I’m still surprised auto hasn’t turned into a duo-tri-opoly.

It took a while, but ~60% of EU cell phones are an Apple or a Samsung.

If anything, the Chinese entrants are reversing some effects of automotive consolidation.

I guess marketing still convinces people that tons of vehicle choice is necessary.


In the UK, Chinese cars are hitting 10% of total cars sold, not just imports.


I think the big thing holding Blazor back is that C# doesn't work well with WASM. It was built at a time when JIT-optimized languages with a larger runtime were in vogue. That's fine in a lot of cases, but it means C# isn't well suited for shipping a small amount of code over the wire to browsers. A Blazor payload is going to end up being over 4MB, and if you use ahead-of-time compilation, that can balloon to 3x more. The fact that C# offers interior pointers makes it incompatible with the current WASM GC implementation.

Blazor performance is around 3x slower than React, it'll use 15-20x more RAM, and it's 20x larger over the wire. I think if Blazor could match React performance, it'd be quite popular. As it stands, it's hard to seriously consider it for something where users have other options.

Microsoft has been working to make C#/.NET better for AOT compilation, but it's tough. Java has been going through this too. I don't really know what state it's in, but (for example) many libraries do runtime code generation. That's fine when a JIT compiler is running the program - any code generated at runtime can be run and optimized like the rest of the code - but it doesn't work when everything has to be compiled ahead of time.
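To make the JIT-vs-AOT point concrete, here's the pattern in miniature, with Python's exec standing in for the analogous .NET facilities (Reflection.Emit, expression-tree compilation). The generated source doesn't exist until runtime, so an ahead-of-time compiler can never have seen it; only a runtime that still carries a compiler can execute it:

```python
# A library synthesizing code at runtime. A JIT-backed runtime compiles
# and optimizes this like any other code; an AOT-compiled binary has no
# compiler left to hand the new source to.
source = "def make_adder(n):\n    return lambda x: x + n\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)

add_five = namespace["make_adder"](5)
print(add_five(2))  # → 7
```

The .NET mechanics differ, but the constraint is the same: AOT toolchains have to either ban this pattern or fall back to interpretation.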

People do underappreciate the JS/TS ecosystem, but I think there are other reasons holding back stuff running on WASM. With Blazor, performance, memory usage, and payload size are big issues. With Flutter and Compose Multiplatform, neither gives you a normal HTML page; they just render onto a canvas. With Rust, projects like Dioxus are small and relatively new. And before WASM GC and the shared heap, there was always more overhead for anything doing DOM stuff. WASM GC is also pretty new - it's only been a little over a year since all the major browsers supported it. We're really in the infancy of other languages in the browser.


I definitely agree with the first point - it's not meant to be the best.

On the second part, I think the big thing was that they needed something that would interop with Objective-C well and that's not something that any language was going to do if Apple didn't make it. Swift gave Apple something that software engineers would like a ton more than Objective-C.

I think it's also important to remember that in 2010/2014 (when Swift was started and when it was released), the ecosystem was a lot different. Oracle v. Google was still going on and wasn't finished until 2021, so Java really wasn't on the table. Kotlin hit 1.0 in 2016 and really wasn't at a stage to be used when Apple was creating Swift. Rust was still undergoing massive changes.

And a big part of it was simply that they wanted something that would be an easy transition from Objective-C without requiring a lot of bridging or wrappers. Swift accomplished that, but it also meant that a lot of decisions around Swift were made to accommodate Apple, not things that might be generally useful to the larger community.

All languages have this to an extent. For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory. This decision probably isn't what would be the best for most of the non-Google community, but it's also something that could be reconsidered in the future since it's an implementation detail rather than a language detail.

I'm not sure any non-Apple language would have bent over backwards to accommodate Objective-C. But also, what would Apple have chosen circa 2010, when work on Swift started? Go was (and to an extent still is) "we only do things these three Googlers think are a good idea", it was basically brand-new at the time, and even today Go doesn't really have a UI framework. Kotlin hadn't been released when work started on Swift. C# was still closed source. Rust hadn't appeared yet and was still undergoing big changes through Swift's release. Python and other dynamic languages weren't going to fit the bill. There really wasn't anything that existed then which could have been used instead of Swift. Maybe D could have been used.

But also, is Swift bad? I think that some of the type inference stuff that makes compiles slow is genuinely a bad choice and I think the language could have used a little more editing, but it's pretty good. What's better that doesn't come with a garbage collector? I think Rust's borrow checker would have pissed off way too many people. I think Apple needed a language without a garbage collector for their desktop OS and it's also meant better battery life and lower RAM usage on mobile.

If you're looking for a language that doesn't have a garbage collector, what's better? Heck, what's even available? Zig is nice, but you're kinda doing manual memory management. I like Rust, but it's a much steeper learning curve than most languages. There's Nim, but its ARC-style system came 5+ years after Swift's introduction.

So even today and even without Objective-C, it's hard to see a language that would fit what Apple wants: a safe, non-GC language that doesn't require Rust-style stuff.


I think that their culture of trying to invent their own standards is generally bad, but it is even worse when it is a programming language. I believe they are painting themselves into a corner.


>For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory.

Do you have a source for this?

C# has a copying GC, and easy interop with C has always been one of its strengths. From the perspective of the user, all you need to do is to "pin" a pointer to a GC-allocated object before you access it from C so that the collector avoids moving it.

I always thought it had more to do with keeping the implementation simple during the early stages of development, with the possibility of making it a copying GC some time in the future (mentioned somewhere in the stdlib's sources, I think), but that never came to fruition because Go's non-copying GC was fast enough and a lot of code has since been written with the assumption that memory never moves. Adding a copying GC today would probably break a lot of existing code.


To add to this, whatever was to become Obj-C's successor needed to be just as or more well-suited for UI programming with AppKit/UIKit as Obj-C was. That alone narrows the list of candidates a lot.


Is there a congruent DGGS that you would recommend?


None that are well-documented publicly. There are a multitude of DGGS, often obscure, and they are often designed to satisfy specific applications. Most don’t have a public specification but they are easy to design.

If the objective is to overfit for high-performance scalable analytics, including congruency, the most capable DGGS designs are constructed by embedding a 2-spheroid in a synthetic Euclidean 3-space. The metric for the synthetic 3-space is usually defined to be both binary and a whole multiple of meters. The main objection is that it is not an "equal area" DGGS, so it's not good for a pretty graphic, but it can be trivially projected into one as needed, so that doesn't matter much. The main knobs you might care about are the spatial resolution and how far the 3-space extends; e.g., it is common to include low-earth orbit in the addressable space.
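A toy sketch of the embedding described above may help: map a point into a synthetic Euclidean 3-space and snap it to a cubic grid whose edge length is a power-of-two number of meters. All names here are my own illustration (not any published standard), and it uses a spherical approximation rather than a proper spheroid; the point is only to show why such a grid is congruent - every cell with edge 2^k tiles exactly into eight cells with edge 2^(k-1):

```python
import math

def dggs_cell(lat_deg, lon_deg, alt_m=0.0, resolution_m=2**10):
    """Snap a geodetic point to a cubic cell in a synthetic Euclidean 3-space.

    resolution_m is the cell edge in meters and should be a power of two,
    giving the binary, meter-aligned metric described above.
    """
    R = 6_371_000.0  # spherical approximation; a real design uses the spheroid
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R + alt_m    # altitude extends the addressable space (e.g. to LEO)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    # Flooring onto the binary grid is what makes refinement congruent.
    return tuple(math.floor(c / resolution_m) for c in (x, y, z))
```

Congruency falls out of the arithmetic: halving resolution_m splits each cell into exactly eight children, so aggregation across resolutions is just integer division, which is the property that makes this style of DGGS fast for analytics.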

I was working with a few countries on standardizing one such design but we never got it over the line. There is quite a bit of literature on this, but few people read it and most of it is focused on visualization rather than analytic applications.


Pointers to the literature please. I don't work in this space but love geometry.


He always comments like this, never giving a concrete answer. Just look at his history; he's been doing this for years. Probably as advertisement for himself, or just to feel/appear superior.

