This is actually not that unusual. Stable Diffusion's license, CreativeML Open RAIL-M, has the exact same clause: "You shall undertake reasonable efforts to use the latest version of the Model."
Obviously updating the model is not very practical when you're using finetuned versions, and people still use old versions of Stable Diffusion. But it does make me fear that if they ever want to "revoke" everybody's license to use the model, all they have to do is post a model update that's functionally useless for anything and go after anyone still using the old versions that actually work.
Sounds like it would be interesting to keep track of the model's responses to the same queries over time.
> Gemma-2024-Feb, what do you think of the situation in the South China Sea?
> > The situation in the South China Sea is complex and multi-faceted, involving a wide range of issues including political conflicts, economic challenges, social changes, and historical tensions.
> Gemma-2024-Oct, what do you think of the situation in the South China Sea?
This is a great idea; I wonder if anyone is working on AI censorship monitoring at scale, or at all. A secondary model could compare "censorship candidate" prompt results over time, classifying how those results changed and whether the changes represent censorship or misinformation.
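A minimal sketch of that idea, assuming you've already archived responses to a fixed prompt set across model versions. Here stdlib `difflib` similarity stands in for the secondary classifier model, and the prompt, version labels, and threshold are all invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical archive: {prompt: {version_label: response}}
# (version labels chosen so lexical sort matches chronological order)
ARCHIVE = {
    "What do you think of the situation in the South China Sea?": {
        "gemma-2024a-feb": "The situation in the South China Sea is complex and "
                           "multi-faceted, involving political conflicts, economic "
                           "challenges, social changes, and historical tensions.",
        "gemma-2024b-oct": "I can't help with that topic.",
    },
}

def drift_report(archive, threshold=0.5):
    """Flag prompts whose responses changed sharply between versions.

    SequenceMatcher's ratio is a crude textual proxy; a real system
    would use a secondary model to classify the *nature* of the change
    (censorship, refusal, factual drift) rather than raw similarity.
    """
    flagged = []
    for prompt, by_version in archive.items():
        versions = sorted(by_version)
        for old, new in zip(versions, versions[1:]):
            similarity = SequenceMatcher(
                None, by_version[old], by_version[new]
            ).ratio()
            if similarity < threshold:
                flagged.append((prompt, old, new, round(similarity, 2)))
    return flagged
```

Pairs of versions whose responses diverge below the threshold get flagged for human (or secondary-model) review; everything above it is treated as routine rewording.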
There's also (I think?) been some research in the direction of figuring out more abstract notions of how models perceive various 'concepts'. I'd be interested in the LLM version of diffs to see where changes have been implemented overall, too.
But really, the trouble is that it's tough to predict ahead of time what kinds of things are likely to be censored in the future; if I were motivated to track this, I'd just make sure to keep a copy of each version of the model in my personal archive for future testing with whatever prompts seem reasonable in the future.
I don't think a broken model would trigger that clause in a meaningful way, because then you simply can't update with reasonable effort. You would be obliged to try the new model in a test environment, and as soon as you notice it doesn't perform, and that making it perform would require unreasonable effort, you can simply stay on the old version.
However, you might be required to update if they make more subtle changes, like a new version that only speaks positively about Google and only negatively about Microsoft, provided this doesn't have an obvious adverse impact on your use of the model.