

> Sure, but that's irrelevant. Whether or not the user understands the answer they posted is not the concern of the site.

Well, that's unfortunate. Then again, I guess that's the logical conclusion of the "safe harbor" for serving any user-submitted content: Stack Exchange only does the most cursory moderation, and the rest is caveat lector.


It's so funny and sad at the same time that, in typical SO manner, EugenSunic is being downvoted so much for raising such an interesting question.


I think this gets almost all the way there but not quite — there is one more vital point:

How we act depends on our environment and incentives.

It is possible to build environments and incentives that make us better versions of ourselves. Just like GPT-3, we can all be primed (and we all are primed all the time, by every system we use).

The way we got from small tribes to huge civilizations is by figuring out how to create those systems and environments.

Yes, the algorithm is not the problem alone, but a good algorithm can help fix the problems — since it creates the "loss function" (the incentives) for the humans using the platform (I go into that in more detail here https://twitter.com/metaviv/status/1529879799862378497 and here https://www.belfercenter.org/publication/bridging-based-rank... for those who are curious).

So it's not about "reaching for the stars" or complaining about how humanity is too flawed. It's about carefully building the systems that take us to those stars!


But there are communities that make it work, and I believe these are negatively affected by the general rules we try to establish for social media through such systems.

I don't believe any single system can be the solution, and for many communities it isn't a requirement either. I don't know what differentiates these groups from others; probably more detachment from content and statements. There is also simply a difference between people who embraced social media to put themselves out there and ghosts who use multiple pseudonyms. Content creators are a different beast: they have to be more public on the net, but that comes with different problems again.

I believe it is behavior and education that would make social media work, but not with the usual approaches. I don't think crass expressions with forbidden words or topics are a problem; on the contrary, they can be therapeutic. I'm just saying this because it will be the first thing some people try to change: ban some language, ban some content, the usual stuff.


I had been thinking how I’d put it, and I think:

- by “failure of the algorithm”, the vocal minority actually mean “a lack of algorithmic suppression and promotion according to how well a given piece of speech aligns with academic methodologies and values”.

- average people are not “good”; many are collectivists with varying capacities for understanding individualism and logic. They cannot function normally where constant virtue signaling, prominent display of self-established identities, and the alignments above are required, as on Twitter. In such environments, people feel and express pain, and make efforts to recreate their default operating environments, overcoming the systems if need be.

- introducing such normal but “incapable” people (in fact honest and naive, just not post-grad types) into social media caused the current mess, described by the vocal minority as algorithm failures and echo-chamber effects, and by mainstream people as elitism and sometimes conspiracy.

Algorithmically suppressing and brainwashing users into aligning with such values would be possible, I think. Sometimes I even consider trying it for my own interests: imagine a world where every pixel has had 0x008000 subtracted; it's a weird personal preference of mine that I dislike high saturations of green. But an important question of ethics has to be discussed before we push for it, especially with respect to political speech.


How do you go about determining what is collaborative or "bridging" discourse, though? That seems like a tricky task. You have to first identify the topic being discussed and then make assumptions based on past user metrics about what their biases are. Seems like you would have to have a lot of pre-existing data specific to each user before you could proceed. Nascent social networks couldn't pull this off.


This also seems to be gameable. Suppose you have blue and green camps as described in the linked paper. If content gets ranked high when it earns approval from both blue and green users, then one of the camps may decide to promote its own opinion by purposefully engaging negatively with opposing content in order to bury it.

This seems no different from "popularity based" ranking mechanisms (e.g. Reddit) where the downvote functionality can be used to suppress other content.

Maybe the assumption is that both camps will be abusing the negative interactions? But you can always abuse more.

How is such a system protected from someone manipulating the consensus by employing a troll farm (https://en.wikipedia.org/wiki/Troll_farm)?
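To make the attack concrete, here is a toy model of the scenario above. This is a deliberate simplification, not any platform's actual algorithm: an item's score is simply the minimum of its approval rates across the two camps, so it only ranks high with cross-camp support, and a coordinated downvote campaign from one camp is enough to bury it.

```python
# Toy bridging score: an item ranks high only if BOTH camps approve of it.
# All numbers and the scoring rule itself are illustrative assumptions.

def approval(votes):
    """Fraction of positive votes; votes is a list of +1/-1."""
    return sum(1 for v in votes if v > 0) / len(votes) if votes else 0.0

def bridging_score(blue_votes, green_votes):
    return min(approval(blue_votes), approval(green_votes))

# Genuinely bridging content: liked by most of both camps.
honest = bridging_score([1, 1, 1, -1], [1, 1, -1, 1])

# The same content after one camp brigades it with 20 coordinated downvotes.
brigaded = bridging_score([1, 1, 1, -1], [1, 1, -1, 1] + [-1] * 20)

print(honest, brigaded)  # the brigade collapses the green approval rate
```

Under this naive scoring rule the brigade works, which is why real systems (like Birdwatch's) model rater viewpoints rather than just counting votes.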


What a great reply, thank you. I agree.


This is on point

> The goal of all this activity is not to debate, converse or exchange information. The goal is to win by being maximally controversial, as that's the behavior that is rewarded.

> the real issue: what gets amplified

But it can be (at least partially) fixed if you change the optimization function.

I advocate for that here: https://www.belfercenter.org/publication/bridging-based-rank...

And Twitter's Birdwatch (that Elon recently got all excited about when it fact checked the White House: https://twitter.com/metaviv/status/1587884806020415491) actually does this "bridging-based ranking" for adding context on tweets.

Here's the paper with details on how it works for Birdwatch: https://github.com/twitter/birdwatch/blob/main/birdwatch_pap... (you can also check out the source code in that repo).
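A minimal sketch of the matrix-factorization idea the Birdwatch paper describes: each rating is modeled as mu + user_intercept + note_intercept + user_factor * note_factor. The one-dimensional factors absorb "viewpoint" agreement, so a note only earns a high *intercept* (its helpfulness score) if raters from both sides rate it helpful. The fitting loop, hyperparameters, and data below are illustrative assumptions, not the production implementation.

```python
import random

def fit(ratings, n_users, n_notes, epochs=2000, lr=0.05, reg=0.03):
    """SGD fit of r ~ mu + iu[u] + inote[n] + fu[u]*fn[n]; returns note intercepts."""
    random.seed(0)
    mu = 0.0
    iu = [0.0] * n_users
    inote = [0.0] * n_notes
    fu = [random.gauss(0, 0.1) for _ in range(n_users)]
    fn = [random.gauss(0, 0.1) for _ in range(n_notes)]
    for _ in range(epochs):
        for u, n, r in ratings:
            e = r - (mu + iu[u] + inote[n] + fu[u] * fn[n])
            mu += lr * e
            iu[u] += lr * (e - reg * iu[u])
            inote[n] += lr * (e - reg * inote[n])
            fu[u], fn[n] = (fu[u] + lr * (e * fn[n] - reg * fu[u]),
                            fn[n] + lr * (e * fu[u] - reg * fn[n]))
    return inote

# Two polarized camps (users 0-1 vs 2-3). Note 0 is rated helpful only by
# camp A; note 1 is rated helpful by everyone. (user, note, rating)
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 0), (3, 0, 0),
           (0, 1, 1), (1, 1, 1), (2, 1, 1), (3, 1, 1)]
scores = fit(ratings, n_users=4, n_notes=2)
print(scores)  # the cross-camp note (note 1) gets the higher intercept
```

The camp-only note's approval gets explained away by the factor term, while the cross-camp note's approval lands in its intercept; that is the "bridging" effect in miniature.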


Half-baked thought: have it cost half as much as normal mail to receive email there. Perhaps government use is also free, and users can whitelist specific domains or subdomains to be free.


Better yet, you could allow emails cryptographically signed by chosen entities (*.gov, power company, etc) and reject everything else.


> It turns out that we like people the best when they respond to us the fastest––so fast (mere milliseconds!) that they must be formulating their reply long before we finish our turn.

This might be true, but looking into the linked study, it appears to have been run on Dartmouth students. The claim may at least be culturally dependent.


There is this quip that psychology is really the study of freshman students. Probably because (at least where I live) students are required to participate in a number of psychological studies in their first year.


Can you install a real keyboard tray?


I'm curious whether Astro supports git-backed visual editing? I can't seem to find it, but it would be rather useful if so! (ala tinacms.io, Stackbit, etc.)


It looks like it provides out-of-the-box support for Overleaf (the collaborative LaTeX editor)! https://www.overleaf.com/blog/635-languagetool-a-free-browse...

This was not true of Grammarly last I checked; there you have to hack something together... (e.g. https://medium.com/@tardijkhof/how-i-made-grammarly-seamless... )


If you use VS Code as a text editor, you can use it locally on LaTeX files via the LTeX extension (https://marketplace.visualstudio.com/items?itemName=valentjn...)


And if you're into Markdown, I can also recommend prosemd:

https://github.com/kitten/prosemd-lsp


This is correct, there's first-class Overleaf support. https://languagetool.org/overleaf


I think what would be most interesting and helpful is an augmented form of reading that is semantic. I would love to have bold and italics and even section headings and table of contents that are toggleable on and off as I read anything, to quickly skim and focus on the most relevant content.

I can also imagine a version of this which is contextual, based off a query, or a personalized recommendation system.

I spend much more time skimming than reading, and skimming to determine if something is worth reading. Anything that can support that is incredibly valuable and would increase my functional reading speed for accomplishing tasks.


Fascinating. "Deprank is particularly useful when converting an existing JavaScript codebase to TypeScript. Performing the conversion in strict PageRank order can dramatically increase type-precision, reduce the need for any and minimizes the amount of rework that is usually inherent in converting large codebases." I wonder if the idea of using PageRank-style systems for ~refactoring translates to other domains, e.g. organizational or knowledge refactoring (ala https://aviv.medium.com/when-we-change-the-efficiency-of-kno... )
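The core idea can be illustrated in a few lines: run PageRank over a module dependency graph (edge A → B meaning "A imports B"), so the most-depended-upon files rank first and get converted to TypeScript before their dependents. The graph and the power-iteration implementation below are my own toy illustration, not deprank's actual code.

```python
# Hypothetical codebase: file -> files it imports.
DEPS = {
    "app.js":   ["api.js", "ui.js"],
    "ui.js":    ["utils.js"],
    "api.js":   ["utils.js"],
    "utils.js": [],
}

def pagerank(graph, d=0.85, iters=100):
    """Plain power-iteration PageRank; dangling nodes spread rank evenly."""
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iters):
        new = {node: (1 - d) / n for node in graph}
        for node, outs in graph.items():
            if outs:
                share = d * rank[node] / len(outs)
                for out in outs:
                    new[out] += share
            else:
                for other in graph:
                    new[other] += d * rank[node] / n
        rank = new
    return rank

pr = pagerank(DEPS)
order = sorted(DEPS, key=lambda f: -pr[f])
print(order)  # utils.js (most depended upon) comes first, app.js last
```

Converting in this order means each file's dependencies are already typed when you reach it, which is what keeps the `any`s and rework down.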

