
So if I'm understanding correctly, this is using AI to fancily upscale Sentinel-2 data, essentially guessing what it's seeing, and then suggesting the output of that should be used for making new products/decisions/models. Sounds a bit like CSI "Zoom, Enhance" stuff...


The super-res is surprisingly usable for making sense of land-use changes. With OpenStreetMap editing, one common challenge is that among the usable (license-wise) imagery, the high-def sources are old and the new Sentinel imagery is low-def. A lot of switching, squinting, and guessing is required to understand what's going on, even when most of the work is as basic as trying to spot an old road in the blurry new image. This super-res seems to do that well enough. It doesn't have enough information to guess the exact shape of buildings, and that's okay.

They also do some object recognition, which is useful if you're an electric-infrastructure nut. It spotted some solar fields in Shanghai that I'd never heard of before -- a look at the same coordinates (30.753, 121.392) on Google sure shows the expected blue.


The models we use to extract the geospatial data (like solar farm and offshore platform positions) from Sentinel-2 imagery are currently separate from the Sentinel-2 upscaling model, which is a more exploratory project.

We report the accuracy of the data at [1]; the Satlas project is quite new and we're aiming to improve accuracy as well as add more categories over time.

We expect the geospatial data will be useful for certain applications, but I agree that the upscaled super-resolution output has more limited uses, especially outside the US in its current state, since it is trained on NAIP imagery that is only available in the continental US. We're exploring methods to quantify and improve the accuracy of the upscaled imagery.
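
One standard starting point for that kind of quantification is a pixel-level metric like PSNR, computed against held-out high-res reference imagery (e.g. NAIP tiles withheld from training). A minimal sketch; the arrays and noise level here are illustrative, not our actual evaluation pipeline:

    import numpy as np

    def psnr(pred, target, max_val=1.0):
        # Peak signal-to-noise ratio between two images in [0, max_val].
        mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

    # Made-up example: a super-resolved tile vs. a held-out reference tile.
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256, 3))
    upscaled = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
    print(f"PSNR: {psnr(upscaled, reference):.1f} dB")

Pixel metrics alone tend to reward blurry outputs, so they're usually paired with perceptual metrics or downstream-task accuracy.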

Note that the model weights, training data, and generated geospatial data can all be downloaded at [2].

[1] https://github.com/allenai/satlas/blob/main/DataValidationRe...

[2] https://github.com/allenai/satlas


Does Satlas currently use any channels other than Sentinel's visible RGB? I imagine the near-IR bands could be very useful for plant-related tasks and (at a stretch) could potentially help with object discrimination by adding an extra band.
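
For context, the classic plant-related use of near-IR is a vegetation index like NDVI, which contrasts B08 (near-IR) against B04 (red). A minimal sketch with made-up reflectance values:

    import numpy as np

    # Made-up 2x2 reflectance patches for Sentinel-2 B04 (red) and
    # B08 (near-IR), scaled to [0, 1].
    red = np.array([[0.10, 0.12], [0.30, 0.28]])
    nir = np.array([[0.45, 0.50], [0.32, 0.30]])

    # NDVI = (NIR - red) / (NIR + red); healthy vegetation trends
    # toward +1, bare soil toward 0, water below 0.
    ndvi = (nir - red) / (nir + red + 1e-9)
    print(ndvi)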


The marine infrastructure (offshore platform and offshore wind turbine) and super-resolution models only use the RGB bands (B04, B03, B02), while the solar farm, onshore wind turbine, and tree cover models use 9 Sentinel-2 bands (adding B05, B06, B07, B08, B11, and B12). With enough high-quality labels, the extra bands do provide slightly improved performance (a 1-2% gain in our accuracy metric, e.g. from 89% to 91%), but we don't have a detailed comparison or analysis at this time.
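
As a rough sketch of what that input difference looks like (band order, tile size, and the resampling note are assumptions about a generic Sentinel-2 pipeline, not necessarily what our code does):

    import numpy as np

    H, W = 512, 512  # tile size on the 10 m grid (made up)

    # Hypothetical per-band reflectance arrays. B02-B04 and B08 are
    # native 10 m; B05-B07, B11, and B12 are native 20 m and would be
    # resampled to the 10 m grid before stacking.
    names = ["B02", "B03", "B04", "B05", "B06", "B07", "B08", "B11", "B12"]
    bands = {n: np.zeros((H, W), dtype=np.float32) for n in names}

    # Channel order is just a convention the model is trained with.
    rgb_input = np.stack([bands[b] for b in ("B04", "B03", "B02")])  # (3, H, W)
    nine_band_input = np.stack([bands[n] for n in names])            # (9, H, W)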

Also, all of the models take three or four images of the same location (captured within a few months) as input, with max temporal pooling applied at intermediate layers so the model can synthesize information across the images. This helps a lot, especially when one image has a section obscured by clouds (the model can use the other images instead), and possibly also when different images provide different information (e.g. shadows pointing in different directions due to slightly different times of day).
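
A minimal sketch of the max temporal pooling idea in PyTorch; the layer sizes are invented, and unlike this sketch (which pools once), our models pool at several intermediate layers:

    import torch
    import torch.nn as nn

    class TemporalMaxPoolNet(nn.Module):
        # A shared encoder runs on each timestamp independently, then
        # feature maps are max-pooled across the time axis.
        def __init__(self, in_bands=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv2d(64, 1, 1)  # e.g. per-pixel logits

        def forward(self, x):
            # x: (batch, time, bands, H, W), time = 3 or 4 images
            b, t, c, h, w = x.shape
            feats = self.encoder(x.reshape(b * t, c, h, w))
            feats = feats.reshape(b, t, -1, h, w)
            # Max over time: a cloud-free view of a pixel wins out
            # over a cloudy one.
            return self.head(feats.max(dim=1).values)

    net = TemporalMaxPoolNet(in_bands=3)
    out = net(torch.rand(2, 4, 3, 64, 64))  # 2 samples, 4 timestamps
    print(out.shape)  # torch.Size([2, 1, 64, 64])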


Do you by chance have comparisons of what the terrain actually looks like without the AI upscale? It would be interesting to see how much it gets right.


We plan to eventually add some real paid high-res imagery to the map just as a comparison, but for now you would need to look at the map at https://satlas.allen.ai/map (select Super Resolution) and compare it to a source of aerial imagery like Google Maps or Bing Maps at the same spot.


Sounds good. It's been a little while since I've touched anything GIS-related, but it was fun, if stressful, for me as a junior developer at the time. I'm definitely curious how insanely accurate AI upscaling will become with stuff like this, at least in terms of getting a good amount of the terrain correct.


So you're currently providing a proposition of what might be accurate, but not accuracy itself.


If machines can hallucinate in text form, they surely can hallucinate in maps.



