
When I did my undergraduate degree in physics, I think one of the best things I learned early on was estimation skills. I was used to doing things precisely and finding the tricks. Our professors made jokes about things just needing to be right "within an order of magnitude", and it took me two years to internalize that.

When you deal with the real world there is always a lot of error and uncertainty in measurement. Being within 10% of the right answer is generally sufficient, and quickly getting that answer beats getting the 99.99% accurate answer if it takes you one-tenth the time.



This is something I find tremendously useful in programming, but at the same time find a lot of other developers amazed when it's used.

I don't care if the dataset in memory is 553MB or 632MB - what I really need to know is whether it's "a few tens of MB", "a few hundreds of MB", or "a few thousand MB".

I don't care if the API server can service 7321 simultaneous requests or 6578 - I just need to know if it's "a few hundred", "a few thousand", or "a few tens of thousands".
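One way to put that habit into code (a quick sketch, not anything from the thread - the function name and output format are my own) is to round a measurement to its order of magnitude before comparing:

```python
import math

def magnitude_bucket(value, unit=""):
    """Collapse a positive measurement to its order of magnitude,
    e.g. 553 MB and 632 MB both become "~10^2 MB"."""
    exponent = math.floor(math.log10(value))
    return f"~10^{exponent} {unit}".strip()

print(magnitude_bucket(553, "MB"))        # same bucket as 632 MB
print(magnitude_bucket(7321, "requests")) # same bucket as 6578 requests
```

Two numbers landing in the same bucket is exactly the "I don't care which" situation above; different buckets is the signal worth investigating.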

You can solve an enormous number of engineering and architecture problems with a reliable order-of-magnitude estimate - at the very least you can quickly exclude solutions that are vastly under (or over) provisioned for the problem you're trying to solve.

A good order-of-magnitude estimate is also a great error check for a more detailed calculation. If my quick estimate says "5000-ish, plus or minus 50%", and your calculation says "24,152", one of us has got something wrong.
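That sanity check is trivial to mechanize. A minimal sketch (my own helper, not from the thread) comparing a detailed result against an estimate with a stated tolerance:

```python
def estimates_agree(estimate, tolerance, measured):
    """True if a detailed result falls within estimate * (1 +/- tolerance)."""
    return estimate * (1 - tolerance) <= measured <= estimate * (1 + tolerance)

# "5000-ish plus or minus 50%" vs. a calculated 24,152:
print(estimates_agree(5000, 0.50, 24152))  # someone made a mistake
print(estimates_agree(5000, 0.50, 4300))   # consistent, carry on
```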


I remember this from my first university physics class. We would derive an equation of motion for a cannonball, to find the optimal angle to shoot a cannon for maximum range. Everybody knew the answer of course, but we'd always just used the formula. This time we'd start with the obvious integration: motion plus attraction between two point masses, integrate over the flight time, and find the point where the trajectory crosses the ground plane.

And then the teacher just took the range from the integration, and the formula, multiplied the two and put a ~= sign between them. I believe I actually stood up and said you can't do that and we had the first of many discussions about exactness.
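That ~= is easy to see numerically. Below is a sketch (my own, under the usual flat-ground, no-drag assumptions) comparing the textbook range formula R = v^2 * sin(2*theta) / g against a step-by-step integration of the motion, at the well-known optimum of 45 degrees:

```python
import math

def range_closed_form(v0, theta, g=9.81):
    """Flat-ground projectile range: R = v0^2 * sin(2*theta) / g."""
    return v0**2 * math.sin(2 * theta) / g

def range_by_integration(v0, theta, g=9.81, dt=1e-4):
    """Step the equations of motion until the projectile returns to y = 0."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        vy -= g * dt
        x += vx * dt
        y += vy * dt
        if y <= 0:
            return x

v0 = 100.0
theta = math.radians(45)
print(range_closed_form(v0, theta))    # about 1019.4 m
print(range_by_integration(v0, theta)) # close, but not identical
```

The two numbers agree to well under a percent, which is the whole point: for deciding whether the cannonball lands in the next field or the next county, ~= is equality.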

That was scary.

That was my first run-in with what I considered the central article of my then faith: that you can derive the structure of the physical world from first principles. Throwing away terms in an equation in order to arrive at correct physics laws - I don't know, I considered it sacrilege or something. Of course I've since learned that deriving all of physics from its own basic laws doesn't work, and the way we fix that is by deleting "inconvenient" terms in the equations when required. Deriving physics from a few mathematical laws is completely impossible. You can't even correctly derive the (mathematical) fields used in physics, so the very numbers one uses to do physics aren't actually valid mathematical numbers.

So the relation between physics and mathematics is not that one is based on the other, because that was tried, didn't work out, and people have almost completely given up. It was replaced by a marriage of convenience ("this works! Sure, it won't validate mathematically, but the numbers look really similar"), ignoring at least a dozen elephants standing in the way and acting like they don't exist.


You may enjoy Feynman's excellent talk "The Relation of Mathematics and Physics": http://www.youtube.com/watch?v=kd0xTfdt6qw#t=1m05s



