
> Bad argument. My sense impressions consist of far more than just text input.

Multimodal models are being trained right now on text, images, video, and audio. Eventually you can add data from pressure, heat, and acceleration sensors, and from motor feedback (a sense of touch). We can go further and add "senses" humans lack: data from RADAR/LIDAR, magnetometers, multispectral cameras, and radiation sensors, if desired.
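To make the idea concrete, here is a minimal sketch (not any real model's architecture) of how extra modalities slot into the same pipeline: each modality gets its own encoder producing a fixed-size embedding, and the embeddings are fused into one representation. The `encode` and `fuse` functions are hypothetical placeholders for learned components.

```python
# Hypothetical sketch: per-modality encoders plus concatenation fusion.
# In a real multimodal model, encode() would be a learned network
# (e.g. a vision or audio encoder); here it just pads/truncates.

def encode(raw: list[float], dim: int = 4) -> list[float]:
    """Stand-in for a learned per-modality encoder: fix the length to `dim`."""
    return raw[:dim] + [0.0] * max(0, dim - len(raw))

def fuse(modalities: dict[str, list[float]]) -> list[float]:
    """Fuse modality embeddings by concatenation (one common strategy)."""
    fused: list[float] = []
    for name in sorted(modalities):  # fixed order so the layout is stable
        fused.extend(encode(modalities[name]))
    return fused

inputs = {
    "text": [0.1, 0.2],
    "audio": [0.3],
    "lidar": [0.5, 0.6, 0.7, 0.8, 0.9],  # a "sense" humans don't have
}
rep = fuse(inputs)  # adding a new sensor is just adding a new dict entry
```

The point of the sketch is that nothing privileges the human senses: a new sensor stream is just one more encoder feeding the shared representation.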

AI will come to know our world very well.


