NNs are potentially very powerful arbitrary function approximators, but you have very limited control (or, arguably, insight) into the precise nature of the solutions their optimization arrives at. Because of that, they've been especially well suited to problems in vision and NLP where we have basic intuition about the phenomenology but can't practically manage a formal description of that intuition (and enumerating that description is probably not of great intellectual interest): what, in pixel space, makes a cat a cat or a dog a dog? What, in patterns of natural words, indicates sarcasm or positive/negative sentiment?
They also get tons of use in results-oriented modeling of lots of other statistics questions on structured data (home prices, resource allocation, voter turnout, etc.), but in this luddite's opinion, these sorts of applications tend to be pretty fraught when they trade a deeper understanding of the data phenomenology for the convenience of the model-training paradigm.
I've never understood how to add "layers" when training a neural network. For example, I could train a model on historical home prices by area + date/time, but I'd also like to add things like the unemployment rate, the trailing 12-month inflation rate, the 30-year mortgage rate at that time, etc.
The things you mention after "For example" are called features, and they are treated as dimensions of the input data. Layers are a separate concept and are not strictly related to the number of dimensions.
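To make the distinction concrete, here is a minimal sketch using scikit-learn and made-up synthetic data (the column names and the fake price formula are purely illustrative, not anything from the thread): each extra input like the unemployment rate becomes one more column of X, i.e. one more input dimension, while the layers are a separate knob set via hidden_layer_sizes.

```python
# Sketch only: extra inputs are added as *columns* of X, not as layers.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "area_sqft":        rng.uniform(600, 3500, n),
    "month_index":      rng.integers(0, 120, n),   # date/time encoded numerically
    "unemployment_pct": rng.uniform(3, 10, n),
    "inflation_12m":    rng.uniform(0, 8, n),
    "mortgage_30y_pct": rng.uniform(3, 8, n),
})
# Fake target so the example runs end to end (not real housing data).
df["price"] = 100 * df["area_sqft"] - 5000 * df["mortgage_30y_pct"] + rng.normal(0, 1e4, n)

X = df.drop(columns="price")   # each feature = one more input dimension
y = df["price"]

# Layers live here, in hidden_layer_sizes, independent of how many columns X has.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X.iloc[:3]))
```

Adding another feature means adding another column to the DataFrame before fitting; nothing about the layer structure has to change.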
Is there a typical limit to the number of features (ones that have almost nothing to do with each other / very little correlation) you can add to a basic "hack it together real quick from Google/copy-paste/tutorials" Python neural network before the results are pretty bleh?