
A better search was inevitable. Google achieved it first and snowballed into a monopoly. The same happened with Facebook: many social networks went extinct simply because Facebook was spreading faster.

These companies appropriated the natural course of evolution of information exchange between humans.



I don't understand how mp3.com declined. It seemed to be going very strong and then withered. Does anyone know?


"A better search was inevitable."

I see it this way too.

As usual, most of this confused brain dump by yet another technology-challenged journalist could be reduced, per the "engineering mindset", to a few lines: Facebook is a website protected by a password. There is a backend database. It contains photos, among other things. (Including personal data no web user would have shared with some random website in the 1990's.)

How to explain a website's popularity? Not easily. Do not be fooled by ex post facto "explanations" from those pontificating about already-popular websites. If we knew the reasons before the fact, we would not be having these discussions about the perplexity of network effects.

Does every web user really want to visit the same website, all day, every day? Do they set out to do that? ("Where do you want to go today?") If they do, then why even have a "web" of different sites? Why not stay on the same site and just visit its many pages (e.g. "profiles")?

For example, as a technical matter, do all web users need to log in to the same college drop out's website in order to share photos, or send messages to each other? The engineering mindset says no. The engineering mindset says there are many ways to accomplish this using a variety of methods. The most popular method may not be the best method, from an engineering perspective.

According to the journalist, the engineering mindset yearns for a mathematical formula that proves why and how things become (or stay) popular. But there is none.

The Google employee states that "web search" was cumbersome and slow back in the early 1990's. True.

Today, thanks to networking and hardware advances it is much faster.

But today's "search" is also manipulated to an extent not seen in the early 1990's. And increasingly, the web's "different" sites are (not obviously) owned by the same company, perhaps the same one providing the "search". Users are in some cases literally searching among a selection of websites that are all part of the same entity, though it does not appear to them that way.

Indeed, in some aspects we have come a long way from the web of the 1990's.

How to explain a website's continued popularity? Manipulating existing users and acquiring all potential competition. The list of methods is too long for an HN comment.

Needless to say, copying all the web's data and allowing access only by slow, small scale querying (with each query being recorded and used for advertising purposes) is not clearly an advance for users. It is just a tradeoff.

No technical barriers exist to opening up the web's data in bulk to every web user, and that would certainly be an advance for users.


> Does every web user really want to visit the same website, all day, every day?

Actually, in the 90's the theory was that people did want this. Such sites were called portals, and various companies competed to build the ultimate portal that would have your news, stock quotes, weather, and whatever else, all on your browser's start page.

> we have come a long way from the web of the 1990's

In some ways yes, but in others we've come full circle. Facebook is AOL.


> But today's "search" is also manipulated to an extent not seen in the early 1990's

Exactly. Results are so spammy for many keywords that more and more curated resources are appearing, i.e. Link Directories 2.0.

For example, if I need some software, I'll check Hacker News, Product Hunt, and AlternativeTo, but not Google's results.



