
I'm curious about the applications and implications of using generative models for comparative analysis, where incorrect results or even a slight error in a map can lead to incorrect conclusions and impact policy. This observation isn't specific to the Satlas project; medical image analysis is out there too (though maybe the FDA can drive some regulation there). The broader question: how should we think about generative modeling for applications that are more than entertainment and cannot be corrected/verified by a person (the way the user can in the case of ChatGPT)?


I fully agree that errors in extracted data can lead to incorrect decisions/policies. Even for applications where accuracy is paramount, though, I think error-prone models still have their uses:

- For applications that only need summary statistics over certain geographies, manually analyzing small samples of the data can yield correction factors and error estimates (see the sketch after this list).

- The data could also be combined with manual verification to improve existing higher-precision but lower-recall datasets (e.g. OpenStreetMap where features are more likely to be correct but also have less overall coverage).
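
To sketch the first point (a rough illustration, not anything from the Satlas pipeline; the counts, the 200-item sample, and the function name here are all made up): manually verify a small random sample of the model's detections, use the sampled precision as a correction factor on the raw count, and bootstrap the sample for an error estimate.

    import random

    # Hypothetical inputs: raw model-detected feature count for a region, plus
    # a small manually verified sample of detections (True = detection correct).
    model_count = 12_400
    sample = [random.random() < 0.9 for _ in range(200)]  # stand-in for human labels

    def corrected_count(count, labels, n_boot=1000):
        """Scale the raw count by sampled precision; bootstrap a 95% interval."""
        point = count * sum(labels) / len(labels)
        boots = sorted(
            count * sum(random.choice(labels) for _ in labels) / len(labels)
            for _ in range(n_boot)
        )
        return point, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

    estimate, (lo, hi) = corrected_count(model_count, sample)
    print(f"corrected estimate: {estimate:.0f} (95% CI {lo:.0f}-{hi:.0f})")

This only corrects for false positives (precision); a recall correction estimated from a sample of ground-truth features would be the analogous step for the missed-feature side.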



