
I've seen this idea used in machine learning as well.

There was a paper [1] whose goal was binary classification of the center pixel in a region of interest. Interestingly, their results improved when they applied a foveal blur to the region surrounding the pixel being classified.

[1]: http://people.idsia.ch/~juergen/nips2012.pdf
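The gist of foveal blurring can be sketched roughly like this: keep the fixation point sharp and blur more aggressively with distance from it. This is just an illustrative toy (simple box blurs per ring; the paper's actual filtering scheme almost certainly differs, and `foveal_blur` and its parameters are made up):

```python
import numpy as np

def foveal_blur(img, cx, cy, n_rings=4):
    """Blur an image more strongly with distance from a fixation
    point (cy, cx), leaving the centre pixel itself untouched.
    Uses a per-ring box blur for simplicity."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    max_d = dist.max() + 1e-9
    out = img.astype(float).copy()
    for ring in range(1, n_rings):
        # Box blur with a kernel that grows with the ring index.
        k = 2 * ring + 1
        pad = k // 2
        padded = np.pad(img.astype(float), pad, mode="edge")
        blurred = np.zeros_like(out)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + h, dx:dx + w]
        blurred /= k * k
        # Outer rings get overwritten by progressively heavier blurs.
        out[dist >= ring / n_rings * max_d] = blurred[dist >= ring / n_rings * max_d]
    return out
```

Since the inhibition masks shrink with each ring, pixels far from the fixation end up with the heaviest blur while the fixation point keeps its original value.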



Interesting! Thanks for sharing!

There are algorithms that mimic the fixation paths the eyes follow when presented with a novel image, very much related to modeling dopaminergic systems. They seem to find task-relevant, information-dense areas first, and then slowly spread out to less information-dense areas. I wonder if there'd be any benefit to running these algorithms on images, basically turning them into a video, and then running classifiers on this video (with or without foveal blurring).
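I don't know the exact model, but the classic winner-take-all plus inhibition-of-return scheme for generating fixation paths from a saliency map can be sketched like this (function name and parameters are invented for illustration):

```python
import numpy as np

def fixation_sequence(saliency, n_fix=5, inhibit_radius=2):
    """Generate a fixation path over a saliency map: repeatedly
    pick the most salient location, then suppress a disc around it
    (inhibition of return) so the next fixation moves elsewhere."""
    sal = saliency.astype(float).copy()
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    path = []
    for _ in range(n_fix):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        path.append((int(y), int(x)))
        # Inhibition of return: knock out the winner's neighbourhood.
        sal[(ys - y) ** 2 + (xs - x) ** 2 <= inhibit_radius ** 2] = -np.inf
    return path
```

Cropping the image around each fixation in turn would give you exactly the "image as video" sequence described above.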


Do you have any more information about these algorithms? Or perhaps links to a paper or two? It gives me a couple of ideas and sounds very interesting! And I do wonder the same thing. Feeding an image to a neural network not as a single input, but as a series of inputs separated over time, 'reusing' the same neurons for different portions of the image, might allow interesting feedback to develop.
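Something like this, maybe: a toy recurrent unit that applies the same weights to each glimpse in sequence, so earlier fixations feed back into the processing of later ones through the hidden state (all names, sizes, and the glimpse scheme here are invented for the sketch):

```python
import numpy as np

def glimpse_rnn(image, fixations, size=3, hidden=8, seed=0):
    """Feed an image to a recurrent net as a sequence of glimpses:
    the same weights process every crop, so the hidden state carries
    feedback from earlier fixations into later ones."""
    rng = np.random.default_rng(seed)
    d = size * size
    Wx = rng.standard_normal((hidden, d)) * 0.1   # input weights, shared across steps
    Wh = rng.standard_normal((hidden, hidden)) * 0.1  # recurrent weights
    h = np.zeros(hidden)
    padded = np.pad(image.astype(float), size // 2, mode="edge")
    for (y, x) in fixations:
        crop = padded[y:y + size, x:x + size].reshape(-1)
        h = np.tanh(Wx @ crop + Wh @ h)  # same neurons reused each step
    return h  # final state summarises the whole fixation sequence
```

The final state could then be fed to a classifier, and because the weights are shared across time steps, the network sees the image as a little movie of fixations rather than one flat input.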




@otakucode The thread has gone too deep, so I can't reply to your comment directly.

I remember seeing a video of a robot that would attend to different parts of a scene based on the "saliency", in the sense of novelty, of its features. I can't find the specific video, but I think the model running the robot is related to:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3930917/

The paper is quite dense, as it describes biologically plausible models of dopaminergic systems, which makes the model quite complex as well, but it's interesting because this system is considered quite 'low-level', no cortex involved.

I'll add more if I can find anything.



