Hacker News | zokier's comments

Microsoft had around 5k people in R&D in 1995, and that covered the full product range: Win95, NT, Office, Visual C++, SQL Server, and all the other stuff.

> we have technology to identify a car license plate from space

We don't. State-of-the-art imaging satellites are in the ballpark of 20 cm/px, while license plate characters are only a few centimeters across.

Here is what Antarctica looks like from a satellite: https://space-solutions.airbus.com/resources/satellite-image...


Shouldn't raymarched SDFs be perfect for this? Simple primitive-based geometry (spheres, cylinders) with no textures. I'm just wondering because the rendering from speck/modernspeck looks kinda splotchy, and that should be avoidable?

It should be avoidable even with the current approach based on impostors. The splotchy look seems related to how the AO is computed.

That's why we got ZGC and Shenandoah, and their generational variants, which have very low pause times (on the order of 1 ms).
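For reference, these collectors are opt-in JVM flags (a sketch; exact flags depend on JDK version):

```
# ZGC; the generational variant needs its own flag on JDK 21-22
# and is the default ZGC mode from JDK 23 onward
java -XX:+UseZGC -XX:+ZGenerational -jar app.jar

# Shenandoah
java -XX:+UseShenandoahGC -jar app.jar
```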

You can see cabling with lacing in many images, for example:

Curiosity, exterior: https://mars.nasa.gov/raw_images/15126/?site=msl

Perseverance, internal: https://www.jpl.nasa.gov/images/pia23312-in-the-belly-of-the...


Interesting. These however appear to be tie-downs of (PU-cast?) cable assemblies rather than running lacing. See e.g. p. 35 onward in https://standards.nasa.gov/sites/default/files/standards/NAS...

Weird that UE also implements this as a purely post-processing filter. Surely there is a more efficient way to render directly with a Panini projection, or at least something closer to it? Could you do it in a vertex shader or something?

You can kinda do it in the vertex shader, but the geometry would have to be very finely tessellated for the curve to look right since each individual triangle would still have straight edges. Alternatively you could raytrace the camera instead, which makes it trivial to use any projection, but that's a non-trivial departure from how most engines work. Post-process is the least intrusive way to do it.

I'm not sure what HYPER DEMON does, it's built on a custom engine so they could really specialize into the crazy FOV if they wanted to.


Can also be done in fragment shader with up to six 90 degree cameras. For a fast-paced game doing it in vertex shader is probably fine. I’m not sure what HyperDemon does.

I was also thinking that maybe you could render the center part of the image at higher res than the outer edges, so that when you apply the projection filter it's less of a problem.

If you are serious about this proposal, one way to move forward is to make a tool that converts kdbx <-> sqlite. If you can't round-trip that conversion perfectly, the idea is dead on arrival.

Per the article:

> The migration process would also be frictionless for users, it is a simple data map between probably the two easiest formats of all time.

I cannot imagine how you could mess this up. The developers already implement numerous export formats. The migration is the easiest part. The actual implementation of a new data format into the codebase and all the new security and robustness testing is the difficult part.


> I agree that seeing types is helpful, though typing them is also not necessary. Perhaps the solution is an IDE that shows you all the types inferred by the compiler

see "The Editor as Type Viewer" section in the docs: https://loonlang.com/concepts/invisible-types


Wow, we were actually thinking about the same thing:

> You get the benefit of seeing types everywhere without the cost of maintaining them yourself.


I do wonder what results you could get with a good capture setup, a good macro lens, and a high-resolution DSLR, combined of course with state-of-the-art software. By the specs, something like a Canon R5 II + 100mm 1.4x macro should get down to almost 3 µm per pixel; intuitively that should also yield very high-detail 3D models. Managing depth of field might be a problem though.

I'd imagine at some point the rig tolerances/vibrations/newly settled dust specks from snapshot to snapshot would completely negate any benefits you'd get from that level of detail. The processing power to handle that resolution would be a huge (but potentially interesting...) problem as well.

Canon pixels are still pretty big. You could use an astronomy camera and some lens adapters to get better sampling.

For astronomy, bigger pixels are also better.

It depends on sampling and sky conditions. My only point was that some astronomy cameras have smaller pixels (like 1.45um).

counted_by for struct fields is actually the part that AFAIK works today: https://embeddedor.com/blog/2024/06/18/how-to-use-the-new-co...

That's amazing. Thanks for that reference. If it's good enough for the kernel, then it's good enough for me to start using in my own projects.

It's really cool that the kernel is using this. The compiler must be generating simple bounds checking code with traps instead of crazy stuff involving magical C standard library functions. Perfect for freestanding nostdlib projects.

