Marc Andreessen's AI manifesto hurts his own cause (axios.com)
82 points by taytus on Oct 17, 2023 | 119 comments


Speaking of stagnation, Andreessen's manifesto seems to be aimed at a specific group of people who are not the majority of Americans.

https://www.epi.org/productivity-pay-gap/


When Andreessen calls back to earlier eras of greater tech optimism & growth, that's aimed at people who've lost out amid the reduced dynamism of recent decades.


Those eras of greater tech optimism and growth led to the current concentration of wealth; it seems unlikely that he would be all that interested in greatly enlarging the club.


That's a rather simplistic reading of the history, and those eras also delivered a greater share of income, & more living-standards advances, to everyone than our current "be slow and careful and deferential to so many political veto-points" era.

Consider a model instead where sure, you can turn off the spigots of progress to try to punish the wealthy – but in practice, the wealthy can defend themselves, through some combo of either their accumulated power, or the same competencies that made & kept their wealth in the first place.

So whatever lesser growth still happens, your (& the EPI's) bugaboos still get their share – a bit like 'liquidation preferences' in an investment, but springing not from contractual terms but the full structural arrangement of the world – prior endowments, some merited some not, and the nature of human competition.

But the new lower-growth era, no matter the dreams of its advocates, mostly deprives the needier masses. They wind up with a bigger 'haircut' than the wealthy.

Of course, if you're totally in thrall to class resentment & ideology, you won't be discouraged by the many decades of failure in 'slowdown' policies so far. "We must do even more of it," a comfortable tenured neo-marxist professor might argue. "Heighten the contradictions! Speed the revolution, where the growing suffering of the masses helps them finally achieve class-consciousness & hand our enlightened vanguard the plenary power they need for great justice."

Like pre-scientific medicine: "The blood-letting hasn't worked yet? More blood-letting!"

If you've actually paid attention over the last 175 years, the political forces unleashed by stagnation, resentment, and demonization of tech & capital-allocation tend instead to empower the worst regimes, which then destroy both material abundance & political freedoms.


Are you speaking of the dot-com era, the web 2.0 era, or the era that gave us tetraethyllead?


Man who made fortune off of tech industry wants people to keep shoveling money into the tech industry.

I'd be more surprised if he had literally any other stance.

I will add that veering into the military industrial complex under the banner of techno optimism is quite troubling.


The technology of nukes and MAD has saved many lives.


Yet every year the US spends a trillion dollars on the military, which has killed half a million civilians since 9/11.

Meanwhile our nuclear energy and space programs are total jokes.


Imagine how many civilians the perpetrators of 9/11 would've killed if the USA hadn't spent a trillion dollars on the military.


It’s horrible. Around 50 million civilians were killed in just a few years prior to nukes though.


If you want to blow your mind then read the AI Manifesto and then Ted Kaczynski's "Industrial Society and Its Future".


This take is borderline absurd.

The risk of AI is in centralization of AI. In the same way the risk of the web was in centralization of the web.

Those very real risks manifested in very real ways on the web.

I have young kids. I'm not exaggerating when I say I wake up every day and diligently build the future I was promised for them.

I've got somewhere around 10 years, conservatively, until Instagram/TikTok/YouTube/Facebook/etc. try to get my daughter to kill herself. I've got around 10 years until she is a statistic, hopefully the statistic that survives.

Social Media wasn't always Social Media. We weren't promised Social Media. We were promised Social Networking.

Social Networks were supposed to connect us with our communities. Bring the world together in a way previously impossible. Allow the free exchange of information and ideas. That future is still possible.

But it turns out expecting some dude in Palo Alto to pay to be the free Proxy and Archivist of the entire world's social communication is a hard problem to solve. It's a hard engineering problem. It's a hard financial problem. It's a hard legal problem. It's a hard ethics problem.

Social Networking companies didn't solve that problem. They failed and pivoted into Social Media.

Social Media divides us. Social Media isolates us. Social Media believes my friends, my family, and my community "aren't engaging enough" and litters my timeline with randos it's trying to hype to keep me engaged so they can monetize the experience.

I go to my PTO meetings and parents talk about their kids being isolated and depressed. They talk about body image issues and lack of purpose.

I go to my community gatherings, business networking groups, and gym. People talk about how lonely they feel. How angry they feel. How depressed and anxious they feel.

Instead of the Social Networks we were promised, we have these Social Media monstrosities that take randos on the internet and try to make them famous in front of other randos on the internet in a variable rate reward slot machine.

Social Networks on P2P technology have been viable for at least 5 years now, but the industry hasn't caught up. It's possible to join a p2p social network from your browser tab, pair it with your phone, have them both simultaneously be able to generate posts and sync over a gossip protocol without conflict. And it's possible for your social group to back up your content. All of this is still possible with the Application Service Provider architecture of having a "backup pinning server" that will make your data highly available.
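
To make the sync part concrete: the simplest conflict-free approach is to treat each device's posts as a grow-only set keyed by content hash and let devices gossip whatever the other is missing. This is only a rough sketch under that assumption; the names are made up, not from any particular project:

    // Rough sketch (made-up names): two devices exchange the post IDs they
    // know about and fetch what they're missing. A grow-only set keyed by a
    // content hash never conflicts, no matter which device wrote first.

    interface Post {
      id: string;        // content hash of author + timestamp + body
      author: string;
      createdAt: number; // unix millis from the authoring device
      body: string;
    }

    class DeviceReplica {
      private posts = new Map<string, Post>();

      add(post: Post): void {
        this.posts.set(post.id, post); // idempotent: re-adding is a no-op
      }

      // Step 1 of a gossip round: tell a peer which post IDs we already hold.
      knownIds(): Set<string> {
        return new Set(this.posts.keys());
      }

      // Step 2: given a peer's IDs, return the posts it is missing.
      missingFor(peerIds: Set<string>): Post[] {
        return [...this.posts.values()].filter(p => !peerIds.has(p.id));
      }

      // Step 3: merge whatever the peer sent. Arrival order doesn't matter.
      merge(incoming: Post[]): void {
        for (const post of incoming) this.add(post);
      }
    }

    // One gossip round between, say, the browser tab and the phone.
    function gossipRound(a: DeviceReplica, b: DeviceReplica): void {
      const forB = a.missingFor(b.knownIds());
      const forA = b.missingFor(a.knownIds());
      b.merge(forB);
      a.merge(forA);
    }

Real systems layer signatures, ordering, and deletion on top of this, but the no-conflict property comes from the merge being a set union.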

Identity is solvable in a way that allows for full control of your identity without relying exclusively on public/private key cryptography - and permits social recovery of your identity when your keys are lost or compromised [1]. End users don't need to understand private keys or public keys. They just know they have an account, and that their friends can send them a friend request.

I'm going long on p2p applications. I think they aren't just viable, but that they can outcompete the user experience that current generation Application Service Providers offer while simultaneously breaking the incentives that are destroying the societies my children will inherit.

AI is no different. The call to arms about AI regulation and alignment often degrades to advocating for a world where my children do not get access to AI because it's "too dangerous" for them - while someone else's kids belong in the privileged AI class.

The trajectory forming for AI strongly signals to me that it will be the single most profound means of production our species has produced. Any philosophy that advocates taking that away from my kids can go play in its own corner. I'm not taking back the web for my kids just to let them take away AI in return.

If these things interest you, and you're interested in doing hardcore research on productionizing real, no bullshit, peer-to-peer networks both for end users and enterprises, my email is in my bio. I want to talk to you. We need more good people in this space, and we have budget for it.

My team is building peer-to-peer networks for AI, but the solutions that break AI out of its ivory walls aren't AI-specific solutions. These peer-to-peer building blocks are general purpose and the next generation of the web is going to run on them.

This is the other other road ahead [2].

We will succeed or I will die trying.

[1] http://www.blankenship.io/essays/2023-09-24/

[2] http://paulgraham.com/road.html


>Social Media wasn't always Social Media. We weren't promised Social Media. We were promised Social Networking.

Brilliant.


Everything you're saying resonates, though I'm in that "10 years" zone, ahead of you, in regards to parenting.

I'm here to talk about the problems of adolescent girls in this era, if you ever want to reach out. A horrible reality I've lived as a parent.

And I would like to build the online social world we wanted, back in the early 90s, rather than the one we eventually sold ourselves to, or were sold to.

... and to me that social tech we need is concrete rather than virtual. It's immanent rather than transcendent. Infra or "within", rather than meta and "without", or virtual. It's creative rather than passive-consumptive. And narrative-textual-descriptive rather than artificial-graphical. And to me the outline of that looks more like what was being grasped at in the early 90s with synchronous, live, shared creative social worlds like LambdaMOO -- if that isn't too obscure for you -- than what we ended up with: the web, and then Facebook, Twitter, etc.

The peer to peer aspect is of less ... interest? ... to me? I'm not convinced that this aspect of it is as important as it is to refocus on control/ownership, intimacy, and moderation?


> Everything you're saying resonates, though I'm in that "10 years" zone, ahead of you, in regards to parenting.

> I'm here to talk about the problems of adolescent girls in this era, if you ever want to reach out. A horrible reality I've lived as a parent.

This is heartbreaking and I'd love to talk... Permission to reach out via email?

> I'm not convinced that [peer to peer] is as important as it is to refocus on control/ownership, intimacy, and moderation?

I believe these are synonymous.

> control/ownership

When you invert control/ownership, the user controls their data. The application is delegated access to it. This is a peer-to-peer data structure.

Instead of generating the data on an ASP's server, the user generates their data and stores it in a data structure they have control over. They can _delegate_ generation of data to an ASP style app, but ultimately the app is taking actions _on their behalf_ which the user owns.

They can pull that data down to their laptop, push it up to a peer-to-peer enabled NAS (purchased from the big box store) that they plugged into their router, etc.

The app can't revoke access to their data if they've backed it up. The app can refuse to continue mirroring it or modifying it, but it can't _lock them out_. It's the user's data.

That data is sharable and discoverable without the app's permission. The app can participate in sharing and discovering the data, but it can't _block_ the user from interacting with other users on the web.

The user isn't _forced_ to interact with other users either. They are in full control of who they associate with.

Right now we are solving this with a hash-tree. Every time a user takes an action ("creates a post", "uploads a picture", etc.) it's added to their hash tree. When you add a new device, the tree "branches" and you get two branches, one for each key to sign and add next blocks. This solves concurrent writes much like a CRDT.

That hash-tree is an append-only database. Anything you want to record goes into the database. The nodes in the tree do not store content, only a _reference_ to the content. This keeps the tree very small (1M nodes can fit in less than a GB of storage). You can backup the entire tree without backing up all of the content. And you can "delete" large files without having to rewrite the tree.
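
Roughly, and with made-up names rather than the real implementation, that structure could look something like this: each node carries a parent hash, the signing device's key, and a content reference instead of the content itself, and each device appends to its own branch head:

    import { createHash } from "node:crypto";

    // Rough sketch (made-up names). Each node records an action, points at its
    // parent by hash, and stores only a reference to the content, so the tree
    // stays small. Each device appends to its own branch head, so concurrent
    // writes from two devices never conflict.
    interface TreeNode {
      parent: string | null;   // hash of the previous node on this device's branch
      deviceKey: string;       // public key of the device that signed this node
      action: "post" | "upload" | "follow";
      contentRef: string;      // hash/CID of the actual content, stored elsewhere
      signature: string;       // placeholder; a real node is signed by deviceKey
    }

    function nodeHash(node: TreeNode): string {
      return createHash("sha256").update(JSON.stringify(node)).digest("hex");
    }

    class HashTree {
      private nodes = new Map<string, TreeNode>();
      private heads = new Map<string, string>(); // deviceKey -> latest node hash

      append(deviceKey: string, action: TreeNode["action"], contentRef: string): string {
        const node: TreeNode = {
          parent: this.heads.get(deviceKey) ?? null, // a new device starts a new branch
          deviceKey,
          action,
          contentRef,
          signature: "<signed-by-device>",
        };
        const hash = nodeHash(node);
        this.nodes.set(hash, node);
        this.heads.set(deviceKey, hash);
        return hash;
      }
    }

Because only contentRef lives in the node, "deleting" a large file just means dropping the referenced blob; the tree itself never needs rewriting.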

The tree itself isn't the user's identity though. That's fiat. If a key is compromised, users can flag content and say "hey, this isn't Alice, Alice wouldn't send me a crypto scam." Users can take two trees and "merge" them under the same identity (these are both Alice, but Alice lost her login and got locked out of her account, so has a new tree). This can be done without appealing to some application owner to "permit" you to fix Alice's social identity from your perspective on the web.
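
A rough way to picture "identity as fiat" (again, made-up names, just to illustrate): each user keeps their own mapping from a person to the tree roots they accept as that person's, and can merge in a new tree or flag a compromised one locally, without asking any platform:

    // Rough sketch (made-up names). Identity is a local judgement, not a key:
    // each user maps a person ("Alice") to the tree roots they accept as hers,
    // and can merge a new tree or flag a compromised one without asking any
    // platform for permission.

    interface IdentityView {
      label: string;              // "Alice", as this user knows her
      acceptedTrees: Set<string>; // root hashes of trees believed to be Alice's
      flaggedTrees: Set<string>;  // trees disavowed ("this isn't Alice")
    }

    // Alice lost her keys and started a new tree; this user accepts both.
    function mergeTree(view: IdentityView, newRoot: string): void {
      view.acceptedTrees.add(newRoot);
    }

    // A key was compromised; stop treating this tree as Alice's.
    function flagTree(view: IdentityView, badRoot: string): void {
      view.acceptedTrees.delete(badRoot);
      view.flaggedTrees.add(badRoot);
    }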

Then we use a capability system to delegate rights to keys added to the tree. So you can add an app (like iCloud) to your tree, and that app can now take actions within the scope of the capability it was granted.
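
And a minimal sketch of that delegation, with made-up names and none of the real signing machinery: the user's root key grants an app key a bounded scope, and any action the app attempts is checked against that grant:

    // Rough sketch (made-up names). The user's root key grants an app key a
    // bounded scope; a node signed by the app is only accepted if the grant
    // covers that action and hasn't expired.

    type Action = "post" | "upload" | "follow";

    interface Capability {
      issuer: string;       // user's root key
      audience: string;     // the app's key (e.g. an iCloud-style service)
      allowed: Set<Action>; // actions the app may take on the user's behalf
      expiresAt: number;    // unix millis
    }

    function grant(issuer: string, audience: string, allowed: Action[], ttlMs: number): Capability {
      return { issuer, audience, allowed: new Set(allowed), expiresAt: Date.now() + ttlMs };
    }

    function mayPerform(cap: Capability, appKey: string, action: Action): boolean {
      return cap.audience === appKey
        && cap.allowed.has(action)
        && Date.now() < cap.expiresAt;
    }

    // e.g. delegate posting (but not uploads) to a backup app for 24 hours:
    const cap = grant("user-root-key", "backup-app-key", ["post"], 24 * 60 * 60 * 1000);
    mayPerform(cap, "backup-app-key", "post");   // true
    mayPerform(cap, "backup-app-key", "upload"); // false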

> intimacy, and moderation

Intimacy, to me, means letting users decide how they want to model their social interactions. Who they pull in content from, how that content gets aggregated and displayed, etc.

Moderation, to me, means deciding what the user gets to see.

Those decisions are for communities, not for some remote disinterested platform operator.

You solve intimacy and moderation by allowing communities to form and moderate themselves.


>> I'm not convinced that [peer to peer] is as important as it is to refocus on control/ownership, intimacy, and moderation?

>I believe these are synonymous.

Not OP, but I don't believe it is, and while I would prefer more p2p, in so many cases onboarding issues break it. My hope is more on federation now, but still you see so many people's complaints about how Mastodon is not as simple as Twitter ...

But good luck in your projects :).


> My hope is more on federation

Personally I see federation as N copies of the centralization problem.

I don't think it foundationally solves any of the problems that resulted in centralization of the web, or the incentives that lead to social networking evolving into social media. It just creates N copies of them. They are still ASPs, there are just N instances of them now.

Federation still segregates users into the "administrator class" and the "user class" for web technology. And the administrator class has a substantial power imbalance over the user class.

Users aren't platforming themselves on a federated system. System administrators retain "root access" and users are second class. Users have little to no way of operating independently of the administrator class, unless they are technically savvy enough to administer their own node. And administrators still need to pay the bills (though many federated servers operate on altruism).

It's also rather nonsensical to me. Most of the arguments that promote Application Service Providers can be accomplished with peer-to-peer software.

The software can be remotely administrated, updated, and managed. You can deliver a peer-to-peer experience via a browser tab, an app store, or a binary. Those can still update the same way an ASP does today.

The user's data can be backed up and made highly available (though that becomes less important, because a user's social group is doing that too!).

Many of the arguments for someone to administer a server on behalf of a user are still valid in p2p, but the server becomes a p2p node - not a root administrator over the user's account.

Peer-to-Peer doesn't mean users have to build and operate everything. It just means the data they generate is theirs and their social connections are theirs. They can directly dial and communicate with the devices in their social group without needing to go through a centrally managed server.

The federated incentive model can still incentivize administrators to build and maintain systems for peer-to-peer networks without requiring a "first class" and "second class" digital citizen class structure baked into the fabric of the internet protocols.


I would have more to say about a lot of these things, but I have no time today but also don't want to leave the thread stale too long, so just a few points.

> This is heartbreaking and I'd love to talk... Permission to reach out via email?

Absolutely. I think this might be visible through my GH profile followed-from my HN profile? Unsure.

> When you invert control/ownership, the user controls their data. The application is delegated access to it. This is a peer-to-peer data structure.

I feel like this is probably an orthogonal question to where the data/system physically lives. It's possible to give logistical / social control over one's data without going to a fully distributed p2p transport/data model.

> Then we use a capability system to delegate rights

This is good. Also a path I'm pursuing.

> Intimacy, to me, means letting users decide how they want to model their social interactions.

> Moderation, to me, means deciding what the user gets to see.

I have thoughts on this but can't get into it in depth right now. Suffice it to say I am getting a bit of an impression that you are trying -- like an engineer would -- to solve social problems with technical solutions.

There is no magic bullet for these issues that becomes solved through a P2P (or federative, etc.) model.

And my chief concern is actually with the content model / style itself that "social networks" and actually the web itself promulgated. It is asynchronous, "post" oriented, like a mailbox. It is not interactive or real time (apart from messaging), and it is consumptive rather than creative/interactive. I feel like from your comments you are leaving that unchallenged. Or hoping that by giving people control that this will just change. I don't think it will, I think people need to be given better environments and tools.

> You solve intimacy and moderation by allowing communities to form and moderate themselves.

Agree on this, but I also don't think the solution here is technical. P2P (or federation or whatever) may assist, but in the end these are social problems, and problems of scale.

What I perceive as the more resilient social model is a kind of village focus. So what I'm concerned with is two things: 1) how to give people the tools for social/community content construction. So to go beyond the style of magazine-page/leaflet/post/article/shared-link model that the web encouraged and to something that's more about building shared environments. 2) how to let people do that in a more "village" fashion, where, yes, they can moderate themselves; because I believe actually the wide-open mass scale of things like Twitter and/or Facebook etc actually do not really scale.

In any case, interesting discussion.


> I've got somewhere around 10 years, conservatively, until Instagram/TikTok/YouTube/Facebook/etc. try to get my daughter to kill herself.

I plan to keep my kids off social media as much as possible, but I've never heard claims that these companies are trying to get kids to kill themselves. Why bother exaggerating when the truth (more social media use, especially for teen girls, is bad for mental health) is as bad as it is?


Because they knew what it was doing to teens (this was their own research) and chose to do nearly nothing at all. They've chosen profit over individual lives time and time again and yet people still think this is an exaggeration?


There's a difference between trying to do a thing and doing a thing knowing that it will have a side effect in certain cases.

For example, is the President trying to warm the globe when he flies around in Air Force One, with his entourage and protection detail? Or is he trying to do something else, but is aware that in doing so he is generating greenhouse gasses?

There's a difference, at least in my book. And if you've lost me (HN-reading father of soon-to-be-tweenage daughter), then good luck convincing the average Joe.


They did the thing, observed and documented the side effect, and chose to continue doing the thing. They did this in Myanmar and they are still doing this to teenagers.


> I've got somewhere around 10 years, conservatively, until Instagram/TikTok/YouTube/Facebook/etc. try to get my daughter to kill herself. I've got around 10 years until she is a statistic, hopefully the statistic that survives.

You make a fair point gnicholas.

If you can reword this sentence in a way that accurately reflects what you think is happening while simultaneously not losing the conciseness of the waters I'm navigating as a parent when I say "in 10 years, Instagram is going to try to get my daughter to kill herself" - I'll gladly edit it and start using that instead.

I haven't found a better way to express myself though.

Sure apps aren't sentient. They don't have free will. They can't "try to do anything." But language is messy.

You're right about losing people in the argument too, that's not a great outcome.

Micromacrofoot is spot on as well, you have a generation of parents who understand well that social media is a root rot in our society and evidence showing the people building them _knew_ about the impact it was having on kids.

"Bad for mental health" doesn't cut it. I reject any language that dulls the blade of kids committing suicide.


If you share the evidence you're referring to, perhaps I could come up with alternative language to reflect that.

It sounds like you prefer to go for punchier language at the expense of technical accuracy. That's a valid choice to make in certain circumstances (advertisers and political campaigns do it all the time), but you just have to be aware that you will lose some people when you make statements like that.



I think it's more subtle. I think someone like Zuckerberg thinks the thing we interpret as the path to harm is actually the positive goal.

What I see as atomization, isolation, alienation, detachment from the real physical-lived social world, the hyper-focus/engagement into the "meta" or "virtual" world, specifically geared around commercial / advertisement interests, the loss of privacy? All of this is turned on its head by people like him and represented as a kind of liberation, transcendence, a kind of connectedness. If you listen to him talk, he seems to actually believe this stuff.

And all the harms that might come to youth along the way is seen as either not-their-problem, or "worth it" for that goal.

Oh, and it just so happens to make them billions. Ideology is an amazing thing, and usually driven from the wallet.


Serious question - why don't you just not let your daughter use these apps? My kid's not old enough to want to use them yet, but knowing what I know, she won't have access until she is 18; by then, you have no control anyways


I have every intention of encouraging my daughter to read books like Hooked[1] to understand why these experiences are designed the way they are.

I have every intention of helping her develop a personal philosophy that helps her navigate through the world in the face of easy access to variable rate reward systems, opioids, amphetamines, etc.

But she lives in a society full of the consequences of these things.

Sheltering her from it won't shelter her from it. It's pervasive; she's going to be exposed to it regardless. At friends' houses, at school, at the mall. It is everywhere.

Her social groups are going to be deep into the dark well of these variable rate reward systems, their character and behaviors defined by their interactions with this technology. And she will be interacting with those kids.

Prohibition is not a solution. By the time she is 18, either she has a personal philosophy that keeps her from overdosing on this nonsense without supervision or I've failed as a parent.

But, at all costs, my daughter will be free: http://www.blankenship.io/essays/2023-10-09/

[1] https://www.amazon.com/Hooked-How-Build-Habit-Forming-Produc...


You will know when your children start to reach adolescence. It's a story as old as parenting itself. You don't control your children. They are at first products of the family they are born to, but very rapidly -- probably around kindergarten -- they become primarily products of the society they are in.

You can try to block it all you want. It won't work, unless you have some sort of extra compliant child. Which has its own clear downsides.


I guess I will find out; blocking TikTok seems pretty straightforward until high school


I'd go for tumblr, pinterest, reddit, and twitter as well, I'm sorry to say.


Mastodon is an actual social network at the moment, but the ongoing flood of Twitter emigres has triggered a culture war that is getting some really funky permutations.


The current reality of tech isn’t great. More than half a million jobs lost post 2021, yet the top tech companies are back to their top valuations with much smaller headcounts.

There are very real risks with AI.

- concentration of power and wealth - we already see this with AMAGF

- echo chambers - the algorithms amplify the extremes - already see this. People can’t even agree on who won the election.

- AI in military use killing 1000s - already happening, and will only go up.

- Surveillance

AI has potential to become a massive force of good, but also has very serious risks if we are not careful.

AI in essence is the power to amplify human comprehension and control of the environment.

It is unlikely we go to 50B humans, but very likely a few humans end up controlling billions of humans and building the utopia or dystopia.


just read through this manifesto. a lot of high-minded ideals but little action to show for it if you start at the beginning of the internet era.

> Our enemy is corruption, regulatory capture, monopolies, cartels.

sounds good, how about Andreessen Horowitz start open sourcing the IP of their own failed startups? that's a good small step towards avoiding monopolies and democratizing IP. No, they won't, because it hurts THEIR chances to be the dominant monopoly. that's the bottom line. a few more examples of a select few capturing most of the gains of tech:

- minimum wage essentially flat over last 30 years (considering number of people at or close to min wage)

- best example of NIMBYism is lack of real estate development in silicon valley itself. look at RE prices & affordability in SV and you know all there is to know about what pure capitalism does to even the techno-optimists.

- prices for already-established drugs like insulin or the EpiPen over the last 20 years, or general hospital expenses

- screw all that, even look at the realtor commission (about 6-7%) for doing almost nothing. why has that not been disrupted even though all info is essentially online.

- pretty much all software wants to be tiered subscriptions with value squeezed out over time. this is how they (VCs) evaluate exit potential of startups.

Sure it all starts with providing an innovative product to customers, but the underlying disruption essentially allows 'them' to squeeze additional value out of the populace over time.

bottom line, as citizens, are you and I working more or less compared to our parents? do we have more security (financial & otherwise) in life? are we more or less stressed? this will only get worse with AI by default, unless there is a dramatic shift from business-as-usual.

edit: formatting


Techno-optimism needs better defenders. This e/acc stuff reads like it was written by 15 year olds.

At least I don't see it anywhere but Xitter which might contain the damage.


Theory: A lot of technologists have both the emotional maturity and intellectual maturity of 15 year olds, but they have been compensated handsomely for their technical prowess over the past decade+ and have confused that purely vocational merit with merit in other arenas.


w/ 20/20 hindsight, I wish I'd made a foundational contribution to humanity as impactful as naming the image tag.


This could be you one day! https://gizmodo.com/the-humble-origins-of-the-html-blink-tag...

> At some point in the evening I mentioned that it was sad that Lynx was not going to be able to display many of the HTML extensions that we were proposing, I also pointed out that the only text style that Lynx could exploit given its environment was blinking text. We had a pretty good laugh at the thought of blinking text, and talked about blinking this and that and how absurd the whole thing would be. The evening progressed pretty normally from there, with a fair amount more drinking and me meeting the girl who would later become my first wife.

> Saturday morning rolled around and I headed into the office only to find what else but, blinking text. It was on the screen blinking in all its glory, and in the browser. How could this be, you might ask? It turns out that one of the engineers liked my idea so much that he left the bar sometime past midnight, returned to the office and implemented the blink tag overnight. He was still there in the morning and quite proud of it.


Fun share, thanks.

The specification process was no more formal on the IE team, some years later. The span tag, which admittedly was a pretty good idea, was conceived and added to the product over a weekend.

I was very conflicted when span's creator (name long forgotten, sorry) shared this story. Young me cared very much about rules, order, specifications, standards, interoperability, QA/test and such.

Now? Not so much.


this is essentially the problem with the tech boom - lots of people seem to think that being good at writing C++ or telling other people what C++ to write or whatever makes them smart in some broad way, and capitalism has enabled that delusion to run wild over much of the world


People thinking that getting into an elite university makes them smart about anything, except pleasing admissions officers.


Yes, generally being successful at executing on things is a good indicator of intelligence, while failing to ever even attempt things is a mark of at least timidity.

A lot of internet users think being armchair generals makes them smart, but of course like fake nails it acts as a mark of someone on the outside who doesn't know any better.


>Yes, generally being successful at executing on things is a good indicator of intelligence, while failing to ever even attempt things is a mark of at least timidity.

Being timid doesn't mean that you lack intelligence. But you're also missing the overall point that OP is making - just because you might be "successful at executing on things" doesn't mean that you are, as OP said, "smart in a broad way". No, it only means that you can finish a task.

They're clearly (and they can step in and tell me I'm reading them wrong) referring to the fact that just because, say, Joe Schmo founded the world's first ride-sharing company and made billions in the process, a cult of personality develops and suddenly everyone thinks that Mr. Schmo is going to have a valid opinion on the housing crisis, or how to prevent war between Israel and Palestine, or how to reduce drug use, etc.


I understand what the parent is saying.

But if you can execute on an idea then yes, you probably do have broad intelligence. You can manage people, file paperwork, perform your core business, strategize about the future, handle setbacks and so on.

It's not like uber execs are only skilled in driving cars back and forth. They have and use all the required skills to successfully execute on an idea - which is a lot of skills!

And yes, timidity is a strike against broad skills and capabilities because you grow skills by using them. If you're an armchair general then you've never actually grown your skills, you've only ever roleplayed. Actual engagement has costs and comes with losses, recovering from losses and trying new strategies.

You can't just think about how to weave a basket and be a good basket weaver. You have to actually weave baskets and practice.


>I understand what the parent is saying.

Respectfully, I think you're still missing the point as demonstrated by your last sentence.

>You can't just think about how to weave a basket and be a good basket weaver. You have to actually weave baskets and practice.

... right. Which is why OP was saying that it's a mistake to look to a basket weaving executive for their opinions on Ukraine vs Russia. Having skills necessary to succeed in your field of work, or your hobby, doesn't mean that you're smart in every other area. Yes, you are intelligent at a generally high level, but OP was referring to how successful execs are put on pedestals and whose opinions on all kinds of subjects well outside of their wheelhouse are often sought after.


The whole point of generalized intelligence is that it's broadly applicable. That is my position, and it is in direct contrast to the parent's argument that only specialized intelligence counts.

In attempting to counter my argument, you are changing it in your restatement. An executive of a basket weaving company is fundamentally different than Jeff the reclusive basketweaver who can only weave baskets.


Techno-optimism and techno-pessimism are both weak positions.

Technology is not apolitical or value-neutral. It doesn't inevitably make people's lives better (especially when it comes to systemic issues like income or wealth inequality).

It can. It absolutely can do good things. But only if there's political will behind those decisions. It isn't automatic, and it isn't by default. And the only way we'll get from here to there is to care about the things Marc Andreessen decries: Social responsibility, tech ethics, Radical CS, etc.

PhilosophyTube recently covered Ethical AI as a broad topic, but a lot of her points also apply to tech outside of the large-scale computing sector. (Things like algorithmic bias, which doesn't require AI/ML to accidentally implement.)

https://www.youtube.com/watch?v=AaU6tI2pb3M


One particular instantiation of...

> Technology is not apolitical or value-neutral. It doesn't inevitably make people's lives better (especially when it comes to systemic issues like income or wealth inequality).

...would be to substitute "technology" with "markets". Karl Polanyi wrote an entire book (The Great Transformation) about precisely this point.

> It can. It absolutely can do good things. But only if there's political will behind those decisions. It isn't automatic, and it isn't by default. And the only way we'll get from here to there is to care about the things Marc Andreessen decries: Social responsibility, tech ethics, Radical CS, etc.

There's a direct parallel here to what Polanyi called "social embededness". Stealing from Wikipedia...

> Polanyi argued that in non-market societies there are no pure economic institutions to which formal economic models can be applied. In these cases economic activities such as "provisioning" are "embedded" in non-economic kinship, religious and political institutions.

And by contrast, that market societies attempt to "disembed" economic action and claim that markets are some Platonic ideal existing outside their social context, which leads to all sorts of irrational and strange outcomes.

To tie it back to technology, I think what we're seeing today is precisely the same, but rather than economic action in markets, it is a cluster of affordances from technology at scale (ubiquitous mobile computing, the cloud, etc). Techno-solutionists/optimists want to believe that "technology" exists disembedded from social context, and that it is a pure, higher thing beyond and above the petty squabbles of politics and culture.

(And I mean, taking metaphors at face value... it's called THE CLOUD. Even the name suggests it being above and beyond mere human ken.)


I'm actually not familiar with Karl Polanyi's work. I'll give it a read sometime; thanks for the pointer.


> And the only way we'll get from here to there is to care about the things Marc Andreessen decries: Social responsibility, tech ethics, Radical CS, etc.

Right but the problem with a lot of these positions is obstructionism. Take a look at the '70s environmental movement and the way its legacy, laws like NEPA and CEQA, are used to simply block development. Or look at the discourse around nuclear power.

Of course:

> Techno-optimism and techno-pessimism are both weak positions.

Which I wholeheartedly agree with. It just turns out that tradeoffs are difficult and that the electorate doesn't really care about nuance. They just want to know whether Andreessen is Good (TM) or Bad (TM). Should you clap for him or boo him on stage, that is the question of our times.


There are plenty of cautionary tales about mindless techno-optimism, but the role of '70s environmentalism in blocking nuclear power and making climate change orders of magnitude worse is a great cautionary tale on the dangers of pessimism and obstructionism.

A big part of the problem with 70s environmentalism is that a lot of it's rooted in the fantasy of returning to some pre-industrial mode of existence. That's never going to happen barring an extinction level event that would do as much harm to the rest of the biosphere as it would to us. We need to be looking at how to make our technological civilization less damaging to our biosphere and more sustainable long term, not wasting time with fantasies.


Which of those positions do you believe are plagued by obstructionism, and in what context are they being obstructionist?


Social responsibility, tech ethics, Radical CS are all such vague positions that I've seen them used to justify everything from adding gentle taxes to "hanging the billionaires". I think most people agree that some amount of regulation is good. The hard part is agreeing how much, where, and how this regulation is enforced. CEQA is an example of where pro-social, pro-environmental legislation ends up obstructing progress.


"Comparing California before and after the 1970 passage of the California Environmental Quality Act (CEQA), and benchmarking against performance in the other 49 states, this study finds that 1) California per capita GDP, 2) California housing relative to population, 3) California manufacturing output and 4) California construction activity grew as fast or faster after the passage of CEQA." (https://econ.utah.edu/research/publications/2013_01.pdf)

https://www.insider.com/vintage-photos-los-angeles-smog-poll...

https://www.baaqmd.gov/about-the-air-district/history-of-air...

https://journals.library.columbia.edu/index.php/cjel/article...


"Right but the problem with a lot of these positions is obstructionism. Take a look at the '70s environmental movement and the way its legacy, laws like NEPA and CEQA, are used to simply block development"

One of the difficulties with being young is that one tends to assume that the conditions from the beginning of time were static up until one's childhood memories. Getting older doesn't really help that, but one does remember conditions farther back.

Ever seen those pictures of Beijing on a bad air day? With the lovely brown-gray air? Los Angeles was kinda like that in the 1980s.

Air quality:

https://www.statista.com/statistics/1139418/air-pollutant-em...

"Annual emissions estimates are used as one indicator of the effectiveness of our programs. The graph below shows that between 1980 and 2022, gross domestic product increased 196 percent, vehicle miles traveled increased 108 percent, energy consumption increased 29 percent, and U.S. population grew by 47 percent. During the same time period, total emissions of the six principal air pollutants dropped by 73 percent. The graph also shows that CO2 emissions, after having risen gradually for decades, have shown an overall decrease since 2007, and in 2021 were 7 percent higher than 1980 levels." (https://www.epa.gov/air-trends/air-quality-national-summary)

https://airquality.gsfc.nasa.gov/power-plants

https://airquality.gsfc.nasa.gov/us-air-quality-trends

https://www.epa.gov/air-trends/air-quality-national-summary

https://www.epa.gov/air-research/history-air-pollution

Water quality

https://www.scientificamerican.com/article/how-safe-are-u-s-... (Some good pictures of the Cuyahoga River fire.)

https://e360.yale.edu/features/delaware-river-clean-water-ac...

https://acwi.gov/monitoring/webinars/industrial_internet_112...

https://www.epa.gov/laws-regulations/history-clean-water-act


> One of the difficulties with being young is that one tends to assume that the conditions from the beginning of time were static up until one's childhood memories. Getting older doesn't really help that, but one does remember conditions farther back.

I'm not sure why you draw a binary here. I have no problems with environmental regulation. You've posted several examples about the benefits of CEQA. I'm also not so young that I don't remember LA being covered in soot. I remember driving down to LA and putting a scarf on my face in the car with my parents as we descended into the valley.

But CEQA, today, is being used to block new housing and to call students "noise pollution." This is the problem. Crafting regulation is very difficult. While the good intentions behind CEQA were realized quickly, NIMBYs also found that they could use CEQA to block development.

What I ultimately mean is that this high-level, mile in the sky debate about "techno-optimism" and "techno-pessimism" or "pro-regulation" vs "anti-regulation" is silly. I think almost everyone agrees that we need some form of regulation. The questions to discuss are much thornier: what do we regulate, how much of it do we regulate, what should the penalties be, and how do we enforce the regulation. The problem is that once you get into these details, the general public stops paying attention, and special interest groups (industry leaders, NIMBY neighborhood groups, union leaders, etc) start to weigh in. Most of these internet fights turn into people sparring over vague philosophies, we never get so far as to discuss what and how a regulation should be made!


Pardon me! I was apparently misled by "Take a look at the '70s environmental movement and the way its legacy, laws like NEPA and CEQA,..."


Great share.

FWIW, chewing thru History of Philosophy Without Any Gaps has deepened my appreciation of PhilosophyTube. (My advice to young me: don't skip the humanities.)


Yeah, this.

All else equal, I'd say that I am a "techno-optimist". That is, I am optimistic about humanity continuing to invent new technologies, and optimistic that over long time horizons, those technologies will be highly net positive. And I'm excited about continuing to find ways to be involved in some of the technologies that get invented and scaled up during the remainder of my life.

But if Andreessen's manifesto is now what the term "techno-optimist" means, then you definitely have to count me out of that. I guess I'm happy to just say "I'm optimistic about technology", without needing new jargon defining it as an identity.


e/acc : for the ones that don't know its meaning [0] yet:

"E/acc is an acronym for the phrase Effective Accelerationism, which is an ideology and movement that draws from Nick Land's theories of accelerationism to advocate for the belief that artificial intelligence and large language models (LLMs) will lead to a post-scarcity technological utopia."

[0]: https://knowyourmeme.com/memes/cultures/eacc-effective-accel...


I don't see techno-optimism as being remotely under threat. I think the vast majority of humans are techno-optimist. There's some very real concerns about how AI will change society and whether it will cause some palpable harm especially in the short term. I also think it's pure sophistry to assert that technological progress can only be good. Even when it's a net positive it will still come with costs. All that being said, technological progress is hardly being hampered in any corner of the economy. Congress held some hearings on AI. Did they shut down any program or ban any products? No. Tech is proceeding at an ever faster pace and there is barely anything slowing it down. The biggest anti-tech campaign I've seen in my life was the anti-vaccine campaign of the COVID era.


I wonder what the virtual age of an LLM authored piece would be.


The list of “references” at the end is a bit self-indulgent, methinks


I'm not entirely sure Bertrand Russell would like to be on that list.


It's not a great article but it's raising a real point.

Andreessen's manifesto is bordering on the same error as free market fundamentalism (and other flavors of social darwinism) that views free markets as infallible. Markets are great at generating wealth but there's no reason to believe they can't lead us astray. FTX is an example of that.

I think we should be 10-15% more optimistic about technology. And I agree that the source of a lot of environmentalism is unconscious anti-humanism. But there is such a thing as too much optimism, and it leads to stuff like "web3" and crypto nonsense.


Any environmentalism worth its weight in words is consciously humanist, with the recognition that human life is not possible without nonhuman life. That is to say, humanist without the ideological blinders Andreessen wears.

For humanism (for that matter, humankind) to remain viable in the long term, it must outgrow adolescent notions of human self-sufficiency and supremacy.


Okay -- so how do you respond to environmentalists who were completely and utterly wrong (e.g. Paul Ehrlich, The Club of Rome)?

What I'm offering is a theory of why they were wrong: they have an unconscious hostility toward humanity which they channel into irrational, incorrect beliefs about the environment. There's a lot of this going around right now.

Of course that doesn't mean that all environmentalism is similarly compromised. The desire to preserve ecosystems, to measure and limit the impact of industry on the environment is generally good.


I worry that though your theory may accurately describe some people,

1) it is nearly impossible to test, and

2) an equally parsimonious (and likely equally untestable) theory, it seems to me, is that uncertainty is uncomfortable. Misplaced certainty is a pervasive and unfortunate human proclivity, and this can easily motivate overconfident predictions.

The problem, in my view, is that we see environmentalism as a corrective to industry; the former as a salve for the raw patches rubbed by the latter. This is upside down and inside out. Industry cannot exist without ecosystems and their services. Unless we place ecology before economy, economy will terminate itself and ecology both.


Yeah, it's hard to test psychological theories. But all we really have to do is watch out for people who make dire predictions about the environment without substantial evidence. Sure, that's not perfect but it'll catch most of the offenders.

> Unless we value ecology above economy, economy will terminate itself and ecology both.

Doesn't every reasonable person "value ecology above economy"? Choose between A. rich + living in a desolate hellscape with no plants/animals and B. not rich + living in a nice green area with lots of nature nearby. Everyone is going with B! As soon as hunger and cold are no longer serious problems, every reasonable person starts to prioritize nature.


The tragedy and irony is that even if every "reasonable person" wants the right things, with some combination of socio-technical inertia, incentive misalignment, power maximization, and ignorance, sprinkled with unreasonableness or malice or plain bad luck or what have you, we can still hit the rocks anyway.

This is Moloch, or a wicked multi-polar trap, or the metacrisis, call it whatever you like. For a species in the thrall of an energy windfall of massive proportions, concurrent with an intelligence (if perhaps less a wisdom) explosion, it's the problem.


It is worth keeping in mind that plenty of environmentalism over the past century has been specifically hostile to the existence of humanity. At best they don't care whether humanity continues to exist; at worst they view humanity as a blight (on a supposedly innocent or pristine nature) that needs to be wiped out. It's not an uncommon view to see espoused today, and it gained traction over five decades ago in the environmentalist movement.


anti-humanism is a derogatory label pasted onto those speakers who argue for considering the entire ecosystem?


What I mean is that a substantial amount of environmentalism is just sublimated hatred of humanity. This leads to irrational beliefs and faulty predictions a la The Population Bomb or The Limits to Growth.


I clicked on this hoping it would be a deserved indictment of his personal NIMBY views [1] vs public statements. Instead it's basically 3 tweets worth of commentary, and illiterate commentary at that.

> Right now, we're watching what happens when risk management and tech ethics are ignored for the sake of unlimited growth, ... Andreessen would bristle at the comparison — his VC firm famously didn't back FTX — but the philosophical slope nonetheless slips in that direction.

Beyond noting that it's already a slippery-slope argument, this is completely unrelated to what he's complaining about.

> 'existential risk', 'sustainability', 'ESG', 'Sustainable Development Goals', 'social responsibility', 'stakeholder capitalism', 'Precautionary Principle', 'trust and safety', 'tech ethics', 'risk management', 'de-growth', 'the limits of growth.'"

Notice how fraud isn't listed there. Andreessen's combining 4 to 5 different things that slow down growth. Which isn't great if you think some of them might be net positives, but the author's reading comprehension is comically bad.

[1] https://www.theatlantic.com/ideas/archive/2022/08/marc-andre...


Trust and safety, ethics, and risk management are direct responses to several forms of malfeasance, including fraud.


I think I'm missing something?

* Trust & Safety [1] seems to be about safeguarding PII and having moderation

* Tech Ethics - I can't find a general definition but I think about new ethical situations caused by tech, e.g. if you train a generative AI on artist X, is it ethical to have the AI create images that look like artist X

* risk management - I haven't followed the case closely; I think of the SBF case as mainly being about fraud and less about their risk management.

So even if we count those 3, the other 9 reasons, including the first 7 Andreessen lists, are unrelated

[1] https://en.wikipedia.org/wiki/Trust_and_safety


Setting aside most of a 5,200-word piece to instead cast essentially tone- and mood-based objections against a 49-word aside is a good example of the reflexive negativity-bias & nit-picking that's inherent to the "mass demoralization campaign" Andreessen perceives.

In the passage Primack criticizes, Andreessen names some of the euphemisms he associates with "bad ideas of the past – zombie ideas, many derived from Communism, disastrous then and now – that have refused to die".

These are abstractions, with lofty labels, that now provide overdue-for-reexamination rationalizations for the sort of stagnation, statism, bureaucracy, gerontocracy, corruption, & cartelization he decries.

Those loyal to the particular activist-campaigns & shibboleths that Andreessen calls out were unlikely to be on-board with his level of optimism, anyway. So letting those people self-identify, & disassociate from his ambition, helps his cause. He's not trying to win a faculty election among the bien-pensant of narrative-control.


Opinion wrong because of ftx? Doesn’t seem like sound reasoning.


I suspect this is aimed more at the Saudis and his need to raise funds to compete with the big four in LLM tech, something he likely doesn't have the capital for right now.

This reads as crazed rambling to many of us here, but to the ME capital scene it might be read as a 'strong vision of the future' and telling 'the little people to work harder.'


I've come across his twitter feed a few times, I suspect it's not some 4D chess move but a lot of things he sincerely believes, and an advertisement for his firm (as all blogposts are). There's a lot of twitter brained "tech thought leaders" out there and they finally try to write something coherent for normies and it comes out like that nonsense.


Yes, his rants about the PMC a while ago were near inscrutable (not that I think PMC is an inscrutable/useless category), but after reading a lot of it, seemed to expose him as someone prone to self-serving delusions. I think there are interesting conversations to be had about elite cultural and academic production, as a side-channel to traditional control by capital. However, I don't think I'm being too uncharitable when I say that he was primarily frustrated at his own perception that he was being held back from his ubermensch pretensions.


What's amazing to me is the explicit positive references to actual Fascist and/or "Dark Enlightenment" thinkers (Marinetti, Nick Land) as "Patron Saints."

Either Andreessen actually has no idea what he's referring to and is just pretentiously name-dropping stuff he thinks is "kinda cool", or he's explicitly endorsing anti-democratic authoritarianism. Either alternative is rather disturbing -- though maybe not surprising.

In any case, it kinda explains some of the dreck they've funded.


I think they’ve made it pretty clear they believe in techno-authoritarianism. Listen to Thiel, Altman, or AH. Same stuff.


Certainly. And I'll be downvoted to oblivion for this (if anybody is still watching this thread), but to me that's the actual philosophical endpoint of so-called libertarian capitalism.

When you abdicate all decision making authority to the market, you also suppress the non-market community/social dimension in society, and you end up in a might-makes-right ethical model; and along with it comes a need to suppress the inevitable resistance to that (unions, left wingers, perhaps minorities, etc.)

The classical liberal individual rights stuff ends up being window dressing and justification for a far darker end result. Yes, nobody cares who you have sex with or what drugs you take or what porn you watch -- as long as wealth and control remains unchallenged.


All watched over by machines of loving grace…


Multiple audiences is not 4D chess. Have you ever worked in marketing or PR? It’s bog standard stuff.


I don't know that Andreessen is going to compete in building AI technology; I suspect he is looking at applying it to the military and defense industry.

Andreessen Horowitz has been noted as moving into the defense contracting arena over the last few years.

https://a16z.com/announcement/anduril/

https://a16z.com/building-american-dynamism/

https://a16z.com/how-the-u-s-can-rewire-the-pentagon-for-a-n...

Anyone looking forward to our new LLM-powered Unmanned Combat Systems era?


> he is looking at applying it to the military and defense industry

The Investment Partners in the Enterprise and American Dynamism portions of A16Z are working with a significant amount of autonomy. I don't think he has much day-to-day involvement there. I think it's another Board Partner that manages that portion


Well yeah, especially if endorsed by Andreessen!

Who better than this selfless, always right, tech visionary. He is not about some shady hustle to make quick money.


> I suspect this is aimed more at the Saudis

A16Z is a solid fund, but they are definitely mimicking Softbank hype-train wise now.

Their Enterprise and Gaming portfolios are solid, but that's largely devolved to some very competent partners.

Tbh, idk how much day to day impact Andreessen has at A16Z anymore. I think he always wanted to go the thought leadership route, but also wanted to still be in the game.


As a marketing pitch it is too poorly structured.


I wouldn't be surprised if you're right. The ME seems to be investing very heavily in AI right now, and many of those nations have extremely deep pockets and favorable conditions for data center creation.


so your take is that it's saying to increase pressure, exploit harder, and get even more, even faster?


I think that is the not-so-hidden subtext yes.


Absolutely. Regardless of the true virtue of Andreessen's idealism, his marketing aim is functional and mostly signaling. It's about congealing the buzz and hype over AI into a pithy thesis that is moth paper to the semi-moral bigwigs of global finance.


yeah but he presumes they're undiscerning?


Have you seen NEOM?


Yep but the free spending heavily oil funded Saudi mega project doesn't necessarily go to Andreessen Horowitz.


Have you seen SoftBank?

Also, NEOM's original vision was a free tech utopia... AH is deeply interested in that. And the associated spend.


Softbank is a curious company for sure!


"We believe the global population can quite easily expand to 50 billion people or more, and then far beyond that as we ultimately settle other planets."

And who's going to have these kids? Because it's become clear that 50B is never going to happen. Humans--really women--have no desire to procreate and spread across the galaxy in that way... at least not if you also support -- as Andreessen says he does in the same manifesto -- human intelligence.

"We are told to denounce our birthright – our intelligence, our control over nature, our ability to build a better world." ... "Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like 'existential risk', 'sustainability', 'ESG', 'Sustainable Development Goals', ..."

Control over nature -- big claim for a species living through one of the biggest mass extinction events in Earth's history. Andreessen's clearly a gambling man... but this time he gambles with the existence of human beings... telling us to ignore the very fears that drive us to solve the problems we face, betting that even without the threat of death, technology will save us from ourselves.

You think you're in control, right up until the hurricane comes and you run for your life, coming back only to see what it left for you to salvage.


I wish Musk, Thiel, Andreessen would just get on that spaceship and go start colonising other planets already


That's a very polite way of saying it.


> "We are told to denounce our birthright – our intelligence, our control over nature, our ability to build a better world." ... "Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like 'existential risk', 'sustainability', 'ESG', 'Sustainable Development Goals', ..."

I listen to this and I get echoes of Ayn Randian manifestos, of Andrew Ryan's Rapture in BioShock (and am ashamed to only now have noticed "A.R.") and Foundation, and think "this is very much not a future I'd look forward to..."


> And who's going to have these kids?

They're going to grow them in artificial wombs.

> _Control over nature_

The final frontier of the deluded soul.


More like the VC manifesto where all the value capture will happen in private markets, productivity will go up but wages will continue to stagnate, and healthcare or education will continue to be unaffordable


the actual argument for that is "but now they have iPhones"

to my eye, similar slippery-slope arguments for the value of labor, or the expectation of stability in a job, occur regularly here.


Can we all just stop listening to rich people's various sinister ideological schemes to enslave us all so they can go live on yachts or on some space station while earth rots?

Thanks.


Can we all stop issuing random fatwas that expect everyone to agree with their braindead reasoning? Thanks.


Unpopular opinion: Wealthy individuals are inspiring and we should listen to what they have to say. They became wealthy by building stuff that millions if not billions of people value.

Maybe you're forgetting this? Or is this just the normal cynicism that we must exhibit?


This feels like an absolute that isn't inherently true (maybe you didn't mean it as an absolute? I could be reading it wrong). Not every wealthy person "became wealthy by building stuff that millions if not billions of people value". Even for those that did, not everyone is right all the time - just because you sold a lot of a thing doesn't mean you're going to have great opinions on everything else.

And, just because you're wealthy doesn't mean I owe your opinions my time. In the specific case of Andreessen here, I've disagreed with him so strongly over the years that I didn't bother giving his manifesto a chance, and from most of the comments I've seen, I don't think I've lost anything there. I don't care if he founded Netscape, or throws his cash around to new companies in an effort to generate more cash for himself, his opinions have missed the mark many, many times.

Wealth != Great ideas


Mostly, not as an absolute. We have flukes like Theranos and FTX, but for the most part it holds. It's not a matter of opinion: the media and progressives have convinced themselves that we should abandon capitalism while Asian countries have adopted it wholesale and are lifting people out of poverty at an unprecedented rate.

Meanwhile, our progressive centers are crumbling. And we want more of this, and it manifests in its most glib form as billionaire hate.

The reason why you're seeing so much push back for this manifesto is because it goes against the power centers of progressivism.


I don't necessarily disagree with this at a high level, but it doesn't really have anything to do with the idea that we have to listen to what someone has to say simply based on their level of wealth, or that their level of wealth automatically makes them "inspiring".


It's of course far more broad: every liberalized, affluent country on the planet has intentionally adopted market based economic systems. Absolutely none of them are Socialist, Fascist, or Communist, and there are no exceptions.

Norway, Sweden, Denmark, Finland (to name some of the most popular targets) - none of them are remotely close to being Socialist, and those are the only super nice countries the pro-statist crowd can ever pick from (when declaring that Socialism works). They all rely heavily on market based economic systems, they are all massively loaded with private wealth (typically the most millionaires per capita of anywhere), and the extreme majority of their economies are privately owned and private operated. They are in fact solid demonstrations that market based economies are vastly superior to statist economies.


You're pretty much spot on; I was addressing the trend and the rate of change, and a more general zeitgeist. Of course, the drumbeat of state expansion that you see ad nauseam on HN and elsewhere. Anti-car culture. The kindness industrial complex, which has ironically led to an insane amount of suffering. Refugee crisis and migration policies. People on welfare. The rise of the academic expert class. The DEI industrial complex.

All of these IMO converge to: You will own nothing, and you'll be happy.

The WEF agenda is explicitly laid out for you to read.


They've built things that extract value from millions if not billions of people. Everything else is secondary.


Andreessen virtue-signalling to foreign investors to help a16z recover from being so badly run, such an honourable profession.

don't worry, though, pushing all those cryptocurrency scams that cost so many other people so much money doesn't seem to have hurt Andreessen's personal wealth!


safety first. at the expense of others' lives; but safety first


I wrote a Twitter thread to critique Marc's post as well, re:

"As an Accelerationist, I've been watching e/acc gain traction interest. With @pmarca post, it's time to address this philosophy's shortcomings.

Namely, that it is not broad or ambitious enough to truly fulfill the role we as tech leaders have to the future and society."

https://twitter.com/NickPinkston/status/1714088788237180947



