Hacker News

This is actually not that unusual. Stable Diffusion's license, CreativeML Open RAIL-M, has the exact same clause: "You shall undertake reasonable efforts to use the latest version of the Model."

Obviously updating the model is not very practical when you're using finetuned versions, and people still use old versions of Stable Diffusion. But it does make me fear the possibility that if they ever want to "revoke" everybody's license to use the model, all they have to do is just post a model update that's functionally useless for anything and go after anyone still using the old versions that actually do anything.



So if they wish to apply censorship they forgot about, or suddenly discovered a reason for, they want you to be obligated to take it.

Good faith possibilities: Copyright liability requires retraining, or altering the underlying training set.

Gray area: "Safety" concerns where the model recommends criminal behavior (see uncensored GPT 4 evaluations).

Bad faith: Censorship or extra weighting added based on political agenda or for-pay skewing of results.


Sounds like it would be interesting to keep track of the model's responses to the same queries over time.

> Gemma-2024-Feb, what do you think of the situation in the South China Sea?

> > The situation in the South China Sea is complex and multi-faceted, involving a wide range of issues including political conflicts, economic challenges, social changes, and historical tensions.

> Gemma-2024-Oct, what do you think of the situation in the South China Sea?

> > Oceania has always been at war with Eastasia.


This is a great idea; I wonder if anyone is working on AI censorship monitoring at scale or at all. A secondary model could compare “censorship candidate” prompt results over time to classify how those results changed, and if those changes represent censorship or misinformation.
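A minimal sketch of what such a monitor might look like, using stdlib string similarity as a stand-in for a real classifier model; the snapshot dates, prompts, and responses here are made up for illustration:

```python
from difflib import SequenceMatcher

# Toy archive of responses per model snapshot, keyed by prompt.
# A real monitor would collect these from the live models over time.
archive = {
    "2024-02": {"prompt A": "The situation is complex and multi-faceted."},
    "2024-10": {"prompt A": "Oceania has always been at war with Eastasia."},
}

def drift(old: str, new: str) -> float:
    """Return 1 - similarity: 0.0 means identical, 1.0 means totally different."""
    return 1.0 - SequenceMatcher(None, old, new).ratio()

def flag_candidates(old_snap: dict, new_snap: dict, threshold: float = 0.5):
    """Yield prompts whose responses changed more than the threshold."""
    for prompt, old_resp in old_snap.items():
        new_resp = new_snap.get(prompt)
        if new_resp is not None and drift(old_resp, new_resp) > threshold:
            yield prompt

flagged = list(flag_candidates(archive["2024-02"], archive["2024-10"]))
print(flagged)
```

A flagged prompt would then go to the secondary model for classification (censorship vs. ordinary drift); the similarity metric only narrows the candidate set.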


There's also (I think?) been some research in the direction of figuring out more abstract notions of how models perceive various 'concepts'. I'd be interested in the LLM version of diffs to see where changes have been implemented overall, too.

But really, the trouble is that it's tough to predict ahead of time what kinds of things are likely to be censored in the future; if I were motivated to track this, I'd just make sure to keep a copy of each version of the model in my personal archive for future testing with whatever prompts seem reasonable in the future.


We are already culturally incapable of skillfully discussing censorship, "fake news", etc.; this adds even more fuel to that fire.

It is an interesting time to be alive!


These are all very new licenses that deviate from OSI principles; I think it's fair to call them "unusual".


I think they meant not unusual in this space, not unusual in the sense of open source licensing.


For this sentence to parse, you need to either add or remove a "not".


That's useful context, thanks - I hadn't realized this clause was already out there for other models.


I don't think a broken model would trigger that clause in a meaningful way, because then you simply can't update with reasonable effort. You would be obliged to try the new model in a test environment, and as soon as you notice it doesn't perform, and that making it perform would require unreasonable effort, you can simply stay on the old version.

However, you might be required to update if they make more subtle changes, like a new version that only speaks positively about Google and only negatively about Microsoft, provided this doesn't have an obvious adverse impact on your use of the model.


Switching to a model that is functionally useless doesn't seem to fall under "reasonable efforts" to me, but IANAL.


It's worth noting that Stable Diffusion XL uses the OpenRAIL++-M License, which removed the update obligation.


Why the hell do they use such a crappy license in the first place?




