Tavis didn't write the bug, he just found it (through a lot of hard work). This was free security research given to Microsoft. He gave them a very reasonable amount of time before disclosing the bug (if the disclosure window were 180 days and MS missed it, people would be complaining just the same as at 90 days).
There's no reason why someone else couldn't discover this bug and exploit it. I would rather know that I am vulnerable than be ignorant and assume my software was safe when it in fact was not.
Thanks taviso for all the great security work you do. (Also 2004 me would like to thank you for your cool fvwm configs).
> (Also 2004 me would like to thank you for your cool fvwm configs).
So much time wasted^W invested in those FVWM configs... I could never get it working quite as magically as Tavis Ormandy or Thomas Adam made it look.
"Personally I think it's a bit harsh," Wright says. "Every fix is different and they should allow for some flexibility in their deadline."
As far as I can tell, (1) this is a denial-of-service bug, not a privilege escalation or remote root exploit, i.e. a low-severity bug; (2) it is unlikely any code depends on the infinite loop triggering, meaning this fix doesn't call for great architectural upheaval; and (3) the deadline was already extended to 91 days, rather than the usual 90.
If you'd give out deadline extensions for this bug you'd give them out for almost any bug, so you may as well not have deadlines at all.
It's just game theory. You have to have follow-through, or people will learn that they can ignore you.
Although the ethics are very different, think of a loan shark. What do they get out of breaking the kneecaps of a person who failed to pay their debt? It doesn't bring back the lost money.
But it does show you are committed to following through with consequences. And that changes the behavior of people who might face those consequences.
For anyone wondering about the one (1) extra day: Microsoft has a fixed patching schedule. The patches were released on the June 11 Patch Tuesday. They needed the extra day to check whether the fix was included in the June 11 patches.
Does anyone else think that Microsoft's policy here is ridiculous?
"We know your stuff is broken to the point of being insecure and a risk to your business because we screwed up when making it. We know how to fix it. We've done the work to fix it. No, we won't actually fix it until a few days from now."
You appear to be arguing against monthly "patch Tuesday". The reason that exists is for their enterprise customers. They used to release patches as they were ready, but it got to the point where there were patches every day, and corporate IT became a huge mess because every computer had different patches. Also, IT could never certify if a particular patch would break their internal apps.
By having patch Tuesday, it lets the IT department accept the patch only on their test machines, certify their internal apps, and then push out the patch to their other hosts in a controlled fashion, while auditing that everyone actually got the patch.
So while it's annoying, it was necessary to support enterprise IT.
That seems like a policy decision that can and should be made by the company, not Microsoft: Microsoft should feel free to release patches on any day they want, and if a common accepted practice is for IT departments to have a staged rollout of cleared patches, that sounds awesome. If I want "patch Tuesday" or even "patch Wednesday", it seems trivial for me to just do that myself.
It was made at the request of the companies so there could be a coordinated day. Once the patches are out they can be exploited, so having a coordinated day helps to ensure everyone can get patched in time.
Again, Microsoft only changed because their customers asked, basically demanded, that they do it that way. It wasn't really MS's decision, it was them responding to their biggest customers.
Microsoft doesn’t want to — they want to bundle as much as possible into patch releases these days, and push as many things into new builds as possible.
No. If you sold cars and realized that there were some buttons you could push on the AC unit that would cause it to catch fire, you wouldn’t remotely shut off my car to perform the repairs while I was driving down the highway.
The customers who “need” patches have a business to run, and forcing their computer to reboot in the middle of the workday for some service that may not be exposed at all on their network would be a good reason to avoid Microsoft products for said customer.
If Windows Server has been developed in such a way that a patch would just randomly reboot it in the middle of the day, or anything could reboot it without the ops team knowing, then this is an OS problem (and a reason why Windows Server is unfit to be used for servers) rather than a patch problem.
A patch should be released as soon as possible, and the OS should make it possible for the ops team to install it at their own convenience; then there is never a reason to withhold a patch. Sounds like a fundamental Windows issue and not a business-practice issue.
I fundamentally disagree with your point that 'a patch should be released as soon as possible'.
A patch releases the fix, but by necessity also puts a bright target on the vulnerability it fixes by providing the information on potential exploitable vulnerabilities in the system being patched. From that moment on it is a race between reverse engineering exploit writers and system maintainers to get their work done first.
Having coordination and predictable planning where possible allows companies to include security maintenance into the workload, rather than to have to constantly scramble and react to unforeseen and unpredictable wildfires.
It is not about 'convenience', it is a component of a mature security process.
> If you sold cars and realized that there were some buttons you could push on the AC unit that would cause it to catch fire, you wouldn’t remotely shut off my car to perform the repairs while I was driving down the highway.
Okay, so this instance was likely human error, but the manufacturer should make it very difficult for you to get yourself into this situation. I'm counting the days until one of the many new EVs on the road is force-updated (e.g., for something like a recall due to malfunctioning brakes), causing the car to become unavailable when the owner needs it.
Are there actually businesses that just use the normal Windows updater? (Ignoring smaller businesses without IT departments for a second.) I assumed the forced patching at boot/reboot was a consumer-version thing? An unforeseen update from Microsoft can just shut down your business?
Whilst servers are in a different category, multiple Windows Updates have changed the way updates work. Look at the Dual Scan situation[0]. People who had central management applied one update and suddenly found desktops also accepting updates from the Internet.
Then you've got the fact that "Professional Edition" was once a perfectly fine solution for businesses, but suddenly the ability to properly control updates like you suggest required Enterprise Edition. These aren't the only issues.
There's always someone who points out that if you have basically unlimited free time you can stay on top of all of it, but at the end of the day a lot of businesses still find surprise updates happening. I just got a sales call for a third party business product with the tagline "Disable Updates automatically applying (Yes, REALLY!)" as a listed feature.
Finally you can top it all off with the BYOD trend, where people often expect to run their own machines without management software.
The same line of reasoning applies to IT departments as well, though. If you force corporate IT departments to spend all their time installing your daily updates, the cost of ownership of your product for the company goes way up and the department heads will start looking for cheaper alternatives. Exceptions could surely be made for critical vulnerabilities, especially those being exploited in the wild, but this is a low-severity DoS. As another commenter said, if you made this an off-schedule hotfix you would have to do it for basically every bug.
To anyone who still thinks a 90-days deadline (with up to 14 additional days for patch release alignment) isn't fair enough, I invite you to look at the timeline for this report:
This is remote code exec on any device. Yet without hard deadlines, vendors stall, lie, etc. This isn't the first example of this; there have been many throughout the past. Project Zero's policy is actually very well thought out, and state of the art IMHO.
Quite frankly, a multi billion dollar software company like Microsoft should be ashamed of themselves for not fixing a 0-day in 90 days. Even with all the non-coding involved and rolling it out properly they should probably be ashamed if they can't do it in a week.
I think it is fair and necessary as an incentive to immediately start working on it. It is also right to publish after 90 days. I just think it is not fair to blame or make fun of companies for failing to deliver within 90 days, which happens here regularly (see: upvotes for the article).
When developing a low-risk application with a fancy DevOps infrastructure, everyone expects bug fixes delivered in hours, or at most within the two-week sprint.
90 days is not much time when patching operating systems or mission-critical software. Windows is used in literally all regulated environments, from aircraft to medical devices. The amount of necessary paperwork and testing to reduce risk is beyond what anyone outside that business can imagine.
> I just think it is not fair to blame or make fun of companies for failing to deliver within 90 days, which happens here regularly (see: upvotes for the article).
Are people upvoting just to "make fun" of Microsoft? I assumed it was for visibility or to share an interesting insight into the inner workings of the security industry.
It's not a zero day, it's a 91-day. Some bugs can be complex to root-cause, fix, and verify. But 3 months for what appears to be a validation failure seems more than sufficient.
Obviously we'll need to wait until the actual fix comes out to see if the fix was more substantial.
I've been around infosec for 21 years and this is still a 0day. That term has no strong definition, certainly not one that would allow precise interpretation as above, but in this case even a vague sense of what it means covers the situation easily: _users_ have had no time to patch.
Can confirm. "Zero day" means you've had zero days to patch. The term has been used this way since, IIRC, the late 1990s. See Phrack 53 for an example:
Words are used to communicate, and language is fluid and changes over time. Clearly, zero-day is being used and understood by many to mean simply "unpatched", and so that is a reasonable definition. If ever you're arguing that a significant proportion of people are using language incorrectly, you're probably on the wrong side of history.
Don’t you just love it when people pull out their dusty tomes to prove to you that you’re wrong? It’s so pedantic yet also incredibly ignorant of how dynamic languages are.
I got yelled at once for using the word "cheap" to mean "inexpensive" and wish you had been there with me.
I think a "zero day" threat model and terminology is from the point of view of the blue team type of people running the system, not the vendor.
From someone running a system, it doesn't matter if the vendor had 0 prior knowledge of the vuln or if they had made 25%, 50% or even 99% progress towards a patch. The point is there is still no patch available for the vulnerability and your only defense strategies are the same as if the vendor hadn't known at all, so it's still a 0day.
Zero day is almost always used in the context of "the bug was unknown and first seen during an attack".
The alternative definition (that zero day means purely the day of publicizing) would mean that if you had two bugs in a product, notified the vendor of one, and then three months later published both, they would both be zero days and should be treated as such.
A 0day means publicizing a bug without the vendor having had the chance to ship a fix.
Very simply: if a virus comes out attacking a known but unfixed bug in MS software no one would call it a zero day. Every article would say it was a bug that Microsoft knew about but hadn’t fixed.
Some bugs are just really hard to find, diagnose, verify, fix and then verify the fix.
At a previous employer, I was responsible for a very long running service that usually had an uptime of about 9-10 months. At one point we noticed we had a memory leak that was directly related to the number of requests the service processed. The kicker: it leaked less than a single byte per request in an x64 C++ service. It took us about a year to find the cause. Turned out to be a bug in an in house logging lib that failed to create a rolled log file in case of a filesystem full or disk usage quota exceeded condition, which we rarely ever encountered in development because we wiped logs every week in dev, but on a more relaxed schedule in prod.
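A deliberately simplified sketch of that failure mode (in Python rather than C++, with hypothetical names, and leaking whole lines rather than fractions of a byte) shows how a failed rollover can quietly turn into a request-proportional memory leak:

```python
class RollingLogger:
    """Hypothetical sketch of the bug pattern: entries queued while
    the log file is unavailable are never dropped or flushed."""

    def __init__(self, path):
        self.path = path
        self.handle = None
        self.pending = []  # grows forever if rollover keeps failing

    def _roll(self):
        try:
            # Fails with OSError when the disk is full or a usage
            # quota is exceeded -- the condition the in-house lib
            # never handled (and dev never hit, since dev logs were
            # wiped weekly).
            self.handle = open(self.path, "a")
        except OSError:
            self.handle = None

    def log(self, line):
        if self.handle is None:
            self._roll()
        if self.handle is not None:
            self.handle.write(line + "\n")
        else:
            # Bug: buffered "for later", but later never comes.
            self.pending.append(line)

# With an unwritable path, every request leaks a little memory:
logger = RollingLogger("/nonexistent-dir/app.log")
for i in range(1000):
    logger.log("request %d" % i)
```

Under steady traffic a leak like this is invisible in any single heap snapshot, which is exactly why it can take months to pin down.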
Agreed; like I said, it will be interesting to see what the final patch set will be.
Diagnosing some bugs can take a huge amount of time, work, and luck (I had a bug one time where the repro steps required a specific google reader account and a stapler resting on the space bar for an hour or so).
But for a bug that has a trivial test case, especially in the context of parsing/validating bugs generally, isolating the bug should not be hugely challenging (a few days).
Much more likely in my experience is that the nature of the specific flaw indicates that there is a pattern in your code base that is potentially unsafe - at that point you want to fix all instances of the pattern, because when you release the patch you’re documenting the flawed pattern.
Seems to be a DoS against the application using CryptoAPI. certutil <cer> seems to hang, but can be killed without negative effects on the system. Any timeouts on CryptoAPI operations will prevent this. According to the report: severity low.
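One way a calling application could defend itself along those lines (a hedged sketch; `check_cert` and its `tool` parameter are hypothetical helpers, not part of any Microsoft API) is to run the parser in a child process and kill it if it hangs:

```python
import subprocess

def check_cert(path, timeout=10, tool="certutil"):
    """Run an external certificate parser with a hard timeout.

    If the parser spins forever (as with this malformed-certificate
    bug), the child process is killed and we report failure instead
    of hanging the calling application.
    """
    try:
        proc = subprocess.run([tool, path], capture_output=True,
                              timeout=timeout)
        return proc.returncode == 0   # True: parsed cleanly
    except subprocess.TimeoutExpired:
        return None                   # hung: treat as "could not validate"
```

Substituting `sleep` for `tool` makes the behavior easy to demonstrate: `check_cert("5", timeout=0.3, tool="sleep")` returns `None` after killing the stuck child.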
I really don't think Forbes is a good venue for publishing a digest like this - the audience there is very broad and doesn't have the expertise to evaluate what's been presented.
If you see a bridge or a building with cracks and signs of structural weakness, be sure not to tell anyone, you might start a panic. Instead directly contact the engineering firm and give them at least 90 days to rectify the issue before telling the public.
If you experience a defect in your automobile that causes your steering to cut out intermittently do not alert other users of the same make and model. Instead contact the manufacturer and give them 90 days to fix the issue internally and mail you a new part.
If a drug you are taking causes a serious reaction quietly contact the maker of the drug directly...
See how incredibly stupid "responsible disclosure" sounds in other industries?
"As already mentioned, Project Zero has a 90 day disclosure deadline and this was applied to this vulnerability. It was first reported by Ormandy on March 13, then on March 26 Microsoft confirmed it would issue a security bulletin and fix for this in the June 11 Patch Tuesday run."
How is that a zero day? Isn't it a 91-day? What is the meaning of zero day in this context if there were actually 91 days between reporting to Microsoft and public release?
Lately OpenBSD has been consistently pushing out new security errata, with as quick as a ~3-day turnaround from finding/reporting to released fix, even for difficult issues like Intel MDS.
Because if OpenBSD pushes out a broken patch, nobody will care; that's business as usual in the free software world - shit breaks, WITHOUT ANY WARRANTY and all that. On the other hand, if Microsoft does that, customers paying millions of dollars will get pissed.
That said, 100+ days to push out a patch is indeed ridiculous.
Except OpenBSD isn't shipping broken patches... so I'm struggling to see your point.
There's a pretty substantial difference between 3 days and 90 days (or 100). And one could argue that any amount of days after the embargo ends, is plenty opportunity for their paying customers to remain vulnerable without having provided any fixes, regardless of whether it is broken or not.
I seriously doubt that OpenBSD can ensure that their patches don’t break their users in 3 days. Additionally, if their patches do break their users, OpenBSD can, unlike Microsoft, claim that it’s working as intended, and if you don’t like it, tough shit. Microsoft doesn’t really have this as an option.
You are extremely generous in assuming Big Corp is using the whole 90 days for fixing and validation. It might have been sitting in a backlog for 90% of that time.
Indeed. That doesn't explain how OpenBSD can succeed in 3 days, with an errata team of typically 4-6 people, at what Microsoft, a company with well over a hundred thousand employees, is failing to do in 90 days.
That... Sounds like their problem. Like, I get that they have way more api surface, legacy code to support, etc... But they have a budget and manpower to match. If they can't test changes, that's their fault.
Microsoft fired their testers in 2018 [1]. Since then we've seen a surge of pretty serious bugs, some of which had to be pulled [2]. Not saying this is related, as this is a vulnerability and not a bug in an update, but 100 days to release a patch is telling.
>"The number of apps in its Windows Store had dwindled to 13 percent of the 1.1 million offered in 2014, the company said, and it needed less bug testing from Lionbridge, according to a January 2017 memo obtained via a FOIA request."
From memory they used to restrict all warranty claims to USD$5, but it appears it's been increased to USD$50 (or the sales price). I would wager it's probably because offering a maximum relief of less than 5% the cost of the product was found to be illegal somewhere.
Not all bugs are equal, especially when it comes to security bugs. They may range from "oops, I forgot to check an array bounds" versus "the entire implementation is fucked due to a bad architecture and we have to rewrite from scratch to fix this".
I'm stating this with no judgment on either OpenBSD or MS. I've used OpenBSD in the past, and I've a lot of respect for them. I also work in a heavy Windows shop, and I appreciate MS's desire to fix issues while maintaining backwards compatibility. Doing so for MS often involves horrible hacks when 3rd party software relies upon either undocumented internal APIs or broken APIs. A prime example is the current battle with Win10 and anti-cheat software causing green screens.
It is curious that Google is perfectly fine keeping an Intel embargo for a long while when it affects them, but is very strict about disclosure when the exploit affects others.
Why do you assume Google wouldn't be affected by a Windows bug? Have you forgotten about the compromise of their corp networks by a Chinese APT in 2008/9 which leveraged Windows as the attack surface? The reason for disclosures like this to expedite bug fixes is because they have skin in the game.
There is a rather large difference between Google's own OSes, software, and services, and the stuff Google uses. Google has the resources to mitigate problems with what they use from others much faster than most customers of their competitors.
Notices, pressure, or teeth should be effective and reduce harm...
1. Notify the manufacturer with details; start the 90-day clock.
2. At 90 days, notify the public of the discovery and notice date, with NO public details. Notify reputable security vendors of the details to prep defenses for the unpatched bug.
3. At 180 days, release limited details publicly, along with the dates of the notices to the manufacturer and security vendors.
THIS will build public pressure on the whole ecosystem and limit impact.
Writing code, and especially fixing bugs in complicated code, is not as deterministic as most people imagine. Any developer knows to expect the unexpected, but few people seriously wonder what's going on when we agree to give them estimates.
So getting uptight and blaming engineers for delays is just as arbitrary as having a 90-day deadline.
June 12, 2019: a huge global cybersecurity alert, which proves Microsoft patches won't fix the 2006-2019 front- and backdoor vulnerabilities created by the Five, Nine, and Fourteen Eyes spying alliances that Google belongs to.
I really can't wait until Microsoft does this to Google and Google sues them into the sunset. Something tells me big G wouldn't like a taste of its own medicine in this department.
>> but he seems to not give a shit about the impact of releasing a zero day, that perhaps only he knows about, on businesses trying to earn a crust. Not very responsible
You are ignoring the impact of constantly extending deadlines for companies that don't take security seriously within 90 days of notification. At some point, there must be consequences as a negative feedback signal to show that you mean business and won't just constantly push these back until you fix your negligence.
The Project Zero guidelines must have teeth. And they do. And it causes acute pain, and they know it. It is to spur companies on to do the right thing.
Without open disclosure having come first, free of time-barred restrictions, we never would have settled on "responsible disclosure" with embargoes and such. Companies and organizations need to know they will be held accountable.
I'd be up for "gradual disclosure" after 90 days. Something that gives some kinda hint as to the storm ahead but doesn't hand out a pre-built bunch of stuff the script-kiddies can fuck your business over with.
We were pretty fucking good at security on our platform (Windows and Linux), but if you've no idea something like this is coming down the pipe, it's hard to mitigate once TO and pals decide to release working exploits.
I think you must be talking about CVE-2010-0232. It wasn't 90 days, it was more like 180. This was at a time when Microsoft refused to release kernel patches outside of service packs. I begged Microsoft at multiple in-person meetings at Redmond to reconsider and patch; they simply refused and said there would be repercussions if I disobeyed.
After four months of negotiations, I told them I was going to publish it whether a patch was available or not. This didn't have the effect I had hoped; they started threatening me instead. They called me and told me my career would be destroyed. In one particularly memorable call they told me that their PR and legal department only had two settings, "off and destroy", and (in a rather menacing tone) that they would "air my dirty laundry in public". I still don't know what that means.
I was shaken, but told them I'm still going ahead. They responded by calling everyone they knew at my employer demanding I was terminated.
There was a trivial mitigation, just disabling a very rarely used feature (vdm support for 16 bit applications). I made detailed documentation explaining how to enable the mitigation for every supported platform, and even made tutorial videos for Administrators on how to apply and deploy group policy settings.
I sent these detailed instructions to all the usual places that advisories are published. I included a test case so you could verify if the bug affected you and verify the mitigation was correctly deployed. As you can imagine, Microsoft were furious.
I know it's little comfort, but through some hard fought battles over the last decade we have reached the point that Microsoft can reluctantly patch critical kernel security bugs if given around three months notice. They still pull some dirty tricks to this day, you wouldn't believe some of the stories I could tell you, but those are war stories for sharing over beers :)
It sounds like your attackers compromised you with an outdated wordpress installation, then gained privileges with this vulnerability. I'm not sure I agree the blame here lies solely with me, but regardless, I would recommend subscribing to the announce lists for the software you're deploying. You could also monitor the major security lists for advisories related to the software you use. It's high volume and varies in quality, but you can usually identify the advisories that apply to you easily.
That’s mental and very damning. Most defences of MS in this thread lean on goodwill, which is completely lost here. After this behaviour, I’m surprised you even gave them one more day. Acting like they’re in a goddamn Scorsese movie.. shame on those kids. They weren’t raised right.
They should have lost the 90 day privilege to begin with.
1) You should have named names. A jerky company is merely one composed of jerky people and it is those people who should be shamed.
2) Developers need to be unafraid to stick up for their principles and to prioritize their career more than their job, because idiot managers exist. The job might fire you; your love of technology will not. Glad you seemed to stick to your guns; you will be vindicated.
> there would be repercussions if I disobeyed.
That's absolutely incredible. Not that I don't believe you, more that I find it incredibly stupid of them to think that you could be intimidated like that.
This is a nice little bit of backstory for those that wish to peddle the tale that the Microsoft of today is nothing like the one from before.
Microsoft is one of many companies that did things like that in the past (and some still try). This abusive behavior in the past is why many groups have strict release policies after some amount of time, and even why some people will drop anonymous zero days. You cannot trust these companies to do the right thing. Time and time again they have gone as far as attempting to make security research illegal.
Yeah, like Project Zero forcing Microsoft's hand to be more responsible, for example. Tavis doing this now is what paved the way for the next Tavis who comes along, who will now be able to get issues fixed without the threat of their lives being ruined by a megacorp.
>> Something that gives some kinda hint as to the storm ahead but doesn't hand out a pre-built bunch of stuff the script-kiddies can fuck your business over with.
That's what the author already gave on day zero. Why does there need to be more extensions, gradients, and timelines for a trillion dollar market cap company's core product?
> Something that gives some kinda hint as to the storm ahead but doesn't hand out a pre-built bunch of stuff the script-kiddies can fuck your business over with.
I am not an expert, but an argument I've heard before is that this is worse, because a "hint" that describes the impact in even broad terms has a good chance of giving attackers enough of a clue to figure out what the issue is. Meanwhile it will not contain a patch or detailed mitigation steps (because that would be too specific), so even fully informed administrators will be left helpless until whenever "full disclosure" happens.
It sounds like your beef should be with Microsoft for failing to patch a severe bug after 3 months.
> Sure MS should have moved faster to plug the hole, but you know, Windows is a helluva legacy code base,
Microsoft is not some 10 person startup. They have ample resources to fix these problems if they choose to prioritize them appropriately. The decision not to is on them. I guarantee that if the deadline was 180 days, they'd still miss the deadline on some of these issues and complain about not having enough time.
No, they are more like a 200-car freight train than a sportscar that can turn on a dime. I suspect just getting the problem into the backlog could chew up weeks.
Then they need to fix their process to be more agile in response to urgent problems.
Not every hacker is going to be nice and give 90 days notice. Sometimes the first notice you'll get is when a vulnerability is already being exploited in the wild. And if it takes you weeks just to begin responding to that, then that's your problem.
I feel sympathy for your situation, but if you want to read an alternative opinion that is just as harsh as your opinion of Tavis then consider this (which I don't necessarily endorse):
No one owes your company the ability to make a profit, and if you can't make money without removing the free speech rights of security researchers (who are improving the state of the industry) then perhaps you need to change your business model (or your OS).
Sorry for all the pain you went through. Without discounting it, I do think that it's not about reveling in disruption. The uniform signal sent to software companies (there is no software company too big for us to bend our rules) leads to better global outcomes.
Project Zero just told everyone how to exploit a bug that won't be fixed for a month.
Even if that is (arguably) better than "having no teeth", that doesn't make it a good idea. Perhaps they can find better teeth, a response that doesn't involve helping bad actors when vendors fail to patch quickly.
Blackhats knowing is what makes companies patch the vulnerabilities. Not enough people care about there being an undisclosed vulnerability for the company to expend resources.
Fines for companies that don't patch vulns within 90 days? This would encourage shifting left with security to make it easier to fix flaws so they don't overrun product deadlines.
Yes, this is not a simple description of a vulnerability; this is an almost ready-to-use exploit.
Ethics aside, what does it mean for Microsoft and ProjectZero from the legal standpoint? Does publishing of an active exploit make either ProjectZero or Microsoft criminally or civilly liable in some way?
If you merely say that you observed a crash in someone else’s software when a certain argument is passed, and that makes you liable for their bug, we are all in big trouble.
Well, there is a difference. First, that certain argument (malformed certificate) was not randomly encountered, it was specifically constructed to trigger the vulnerability (Was reverse-engineering involved? I don't know). Second, this bug report not only discloses the fact that the vulnerability exists, but also provides a working example for any script-kiddie to use as an exploit. Third, the bug was not privately disclosed to software vendor, but was released to the public.
From https://security.stackexchange.com/questions/22973/if-i-find... it seems that would be criminal in the UK or Germany; no idea what would happen in the US. On one hand you have the First Amendment; on the other hand, there is the EULA.