
If it's incredibly fast at a 2022 state of the art level of accuracy, then surely it's only a matter of time until it's incredibly fast at a 2026 level of accuracy.


Yeah, this is mind-blowing speed. Imagine this with Opus 4.6 or GPT-5.2. Probably coming soon.


I'd be happy if they can run GLM 5 like that. It's amazing at coding.


Why do you assume this?

I can produce total gibberish even faster; that doesn't mean I'd produce Einstein-level thought if I slowed down.


Better models already exist; this is just proving you can dramatically increase inference speed and reduce inference cost.

It isn't about model capability; it's about inference hardware. Same smarts, faster.


Not what he said.



