AWS didn't acquire a nuclear reactor. They bought a data center park that is sited next to an existing nuclear power station. It's interesting news, but less crazy than the headline made it sound.
They have some pretty hefty minimum use and right to use options tied to the nuclear reactor though. And it says the intention is to consume 960MW, which a very quick search says is roughly equivalent to the entire output of a typical nuclear reactor.
From what I hear, it's normal for ML machines to have 16 GPUs; if each consumes 250W, that's 4kW per machine, so you'd need 250,000 machines for 1GW (40MW would be 10,000 machines, so 240,000 for 960MW).
240k machines is a big number. I myself have seen a DC for 40k machines, but they're talking about a campus, so it's reasonable to assume there would be not one DC but several DC buildings.
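The same back-of-envelope in Python (the 16 GPUs at 250W each are my assumptions, not figures from the article):

    # rough machine-count estimate from assumed per-GPU power draw
    gpus_per_machine = 16
    watts_per_gpu = 250                                   # assumed; real accelerators draw more
    machine_watts = gpus_per_machine * watts_per_gpu      # 4,000 W = 4 kW per machine

    campus_watts = 960e6                                  # 960 MW from the article
    print(f"{campus_watts / machine_watts:,.0f} machines")   # 240,000 machines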
The 80GB version runs 300W on PCIe and 400W on SXM4. (Which is my third reason to hate the current wave of "AI" bullshit, the first two being industry-scale copyright violation and labor that's barely better than slavery.)
Do you think there is always a positive relationship between production/consumption and quality of life? It seems to me this statement confuses what we should be ultimately optimizing for.
Since Hubble showed that space is expanding, there will always be more room for entropy to increase. We will not have a heat death; rather, we are heading towards isolation and solitary confinement. I wonder if ET is behind this.
That's gotta save a bundle on transmission and distribution charges. I wonder if industrial users get better deals on that, because my utility bills are mostly not for the gas/electricity generation.
Amazon's single solar plant for their new Canada west region is 5% of all solar installed capacity in the province. It's about .5% of total power, which is big enough to influence the quasi-market-based pricing. Pretty crazy...
I give less credit to systems that just "agree to buy the output from a distant generator via the grid" (my own quote). Can I say I support coal electricity generation by saying 100% of my electricity comes from coal when it's really some mix that I can't control?
If your contract with your energy company says they will buy enough wind/solar/whatever to cover your consumption, you can say you support that. But when it comes to how much, I think basically you can take “credit” for pushing the energy mix a little in the direction of your choice, but you can’t say your energy usage is all fossils or fossil free or whatever and feel all smug about it.
Consistent power demand is a much more cost-effective way for a nuclear reactor to run - I imagine a datacenter is a good client because it's basically optimized to sit at a consistent level of power consumption.
Depends on what the data center does, and what time period we're considering. 1990s data centers probably didn't have very dynamic power consumption, as 1990s computers generally didn't.
A modern datacenter serving some consumer facing service likely sees a big difference in cpu usage and thus power usage between peak and off peak.
Otoh, if it's doing non-interactive data processing, it could be near 100% capacity consistently.
Personally, I'm guessing AWS has a noticeable peak and trough on power. But if it makes sense, they do have the spot market, which could be used to attract or repel load in order to hit a consumption target.
I still remember hearing about, and myself promoting, the concept of "idle computing capacity" in the 1990s (because of the idea that computers would consume the same resources whether or not they were actively running jobs).
For a while I was in charge of the EFF Cooperative Computing Awards. When they were announced, the announcement said:
> However, the computer industry produces millions of new computers each year, which sit idle much of the time, running screen savers or waiting for the user to do something. EFF is encouraging people to pool their computing power over the Internet, to work together to share this massive resource.
That was in the 1990s. :-)
It's funny to think of just how inaccurate this became for subsequent CPU and chipset architecture, I guess because of convergent trends in customers wanting to save power for ecological, financial, and mobile device battery life reasons.
> how inaccurate this became for subsequent CPU and chipset architecture
You are right. My previous computer was an AMD Athlon, and even it nearly halved its consumption when idle.
Modern machines are extremely power-saving, because desktop CPUs inherited many features from notebook platforms (for example SpeedStep frequency scaling), and I see more than a 6x difference in whole-machine consumption, even though I'm too conservative to use sleep/idle modes myself, because some programs don't behave well in that environment.
> Personally, I'm guessing AWS has a noticeable peak and trough on power. But if it makes sense, they do have the spot market, which could be used to attract or repel load in order to hit a consumption target.
Sure. Even for the on-demand stuff they have a vested interest in reducing power consumption as much as they can.
They charge the customers a flat rate per hour, so every joule they can cut out is money in their pocket.
It would be different if they were charging the customers by the kWh or something.
Agreed. Running less time-critical Spot instances for batch processing is on the increase, especially in the banking sector. We have retired whole banks of on-prem servers that sat idle 80% of the month and only got used at end of month or end of year. The TCO analysis was the clincher.
You are totally right. I worked as an admin on a few projects with dedicated servers, and ALL of them had two classic peaks (~10:00 and 16:00..19:00) and near-zero consumption at night (00:00..06:00), all of which tracks civilian power consumption patterns very closely :) .
At one time, before CDNs were invented, very large sites distributed traffic between a few time zones around the world and so spread the load across the whole 24 hours. But when CDNs appeared, local peaks returned.
And unfortunately, not many kinds of work can run in a classic DC environment to fill those troughs, nearly only ML tasks. For example, classic supercomputing tasks need a faster network than typical DCs have (the latency is too high for things like MPI from C++).
Sure, any individual server: but amortized over thousands? You'll develop an average just from traffic, and it'll be a lot higher than the bottom end - since otherwise why have the datacenter? Why not use spare capacity in your other datacenters?
The trick is that you sign a contract with the utility to guarantee a particular type of usage - and utilities love people who burn watt-hours rather than volt-amps (i.e. a power factor near 1), since they're a lot easier to service, and people who promise to use power fairly consistently.
The utility says "we'll give you a lower overall rate, provided you promise to consume at least this much electricity" - in practice of course, you could consume less but you'll pay for it no matter what.
But compute is great for this model: it's fast to move applications onto and off servers, so hitting that minimum is easy.
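A minimal sketch of how that kind of take-or-pay minimum shakes out (the rate and the minimum below are made-up numbers, not anything from a real contract):

    # take-or-pay style billing: you pay for the contracted minimum even in a quiet month
    contracted_rate = 0.05        # $/kWh, discounted rate (assumed)
    minimum_kwh = 500_000         # monthly minimum you promised to take (assumed)

    def monthly_bill(used_kwh: float) -> float:
        # billed on whichever is larger: actual usage or the contracted floor
        return contracted_rate * max(used_kwh, minimum_kwh)

    print(monthly_bill(600_000))  # 30000.0 - busy month, above the floor
    print(monthly_bill(300_000))  # 25000.0 - quiet month, still billed for 500,000 kWh

So as long as the datacenter can keep itself busy above the floor, the discount is pure upside.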
> You'll develop an average just from traffic, and it'll be a lot higher than the bottom end - since otherwise why have the datacenter? Why not use spare capacity in your other datacenters?
Because when the sun is up over the Pacific ocean, everyone is asleep, and all of your datacenters are quiet.
Or routing latency-insensitive traffic there. Do I care where my backups are stored? No. (Well, I kinda do for geological/weather reasons, but put them on Mars for all I care.)
I looked it up and it seems like that’s what they’re selling, yeah! Though AWS advertising really doesn’t want to say how they’re so cheap on their front page!
Their marginal cost for you using otherwise-unutilized capacity is near zero; probably just power, really. So if it’s a choice between getting $0 and $0.10 for a $1/hr instance, that’s an easy choice.
You just have to be sure to make the spot stuff juuust unreliable enough that people can’t run their full workloads on it all the time.
If you've got the engineering in place for distributed fault tolerance, you can run a lot of your workload on pre-emptable spot instances. There's an entire "as a service" product space that does this, abstracting away the challenging parts and arbitraging the difference.
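The core pattern is just checkpoint-and-resume; a toy sketch (no real cloud API here, `run_chunk` and the checkpoint callables are placeholders you'd back with object storage or a queue):

    def process_with_checkpoints(chunk_ids, load_checkpoint, save_checkpoint, run_chunk):
        """Resume-safe loop: if a spot worker is preempted, its successor picks up where it left off."""
        done = load_checkpoint()          # set of chunk ids already finished
        for cid in chunk_ids:
            if cid in done:
                continue                  # skip work a previous (preempted) worker completed
            run_chunk(cid)
            done.add(cid)
            save_checkpoint(done)         # persist progress somewhere durable, outside the instance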
DC power demand is anything but constant. This is getting worse with ML clusters. There's an entire cottage industry focused on optimizing power utilization. There are huge incentives for DC operators who can guarantee consistent high power utilization.
It has no impact, because nuclear reactors are baseload generation, i.e. they are essentially never turned off. The only real difference is transmission line losses, and those aren't very significant.
I mean, I take the comment about supervillains as mostly a joke, so I don’t really feel a need to strictly interrogate it for factual accuracy.
But also I do think it fits. Looking at super villains, they’re usually just powerful people with slightly unusual property, who also happen to be evil.
Lex Luthor’s company Lexcorp was an aerospace company. In the first Superman movie, Luthor was a pretty weird and relatively unremarkable guy. In the James Bond movie Tomorrow Never Dies the super villain was just a guy with a global media company who created calamity and murder so that he could report on it first.
Nuclear power and radiation have often been themes for comic book heroes and villains. I think it’s reasonable to read the (misleading) headline and think “oh great now Bezos owns a nuclear power plant, he’s going to become a super villain” as the sort of non-serious comment prevalent on the internet. People say the same about Musk all the time.
Please explain your reasoning about the word campus. You seem to think it's obvious but I'm not seeing it.
A campus is a group of buildings and land that are part of one institution. I've been in multiple campuses that have power plants (and where the primary purpose of the campus is not generating power).
The idea of a datacenter campus that makes its own power is a reasonable one.
They didn’t say they bought a power plant. They bought a campus next to the power plant. That implies that it’s not generating power. How you could be confused I don’t understand. I get that headlines are often misleading or sensationalised. But this one isn’t.
> They didn’t say they bought a power plant. They bought a campus next to the power plant. That implies that it’s not generating power. How you could be confused I don’t understand. I get that headlines are often misleading or sensationalised. But this one isn’t.
The title doesn't say "next to". It says "960MW nuclear data center campus".
What makes it "nuclear"? Well in this case it's because it's next to a nuclear power plant, but it would be just as valid to use that wording for a campus that contains a nuclear power plant. In fact I think the latter is more straightforward.
And funnily enough if you look at the picture, it looks like it's one big campus too!
How crazy does the headline make it sound? They're a book store that got into the online cloud business. They got into the chip design business. I don't know the industries to know whether it would make sense for them to buy power stations, but it doesn't sound outright crazy like Nike buying the Hoover Dam would.
There is going to come a point where all network traffic will pass through at least one AWS data center. Amazon will be the literal gatekeeper of network traffic. This is fucking scary to me.
Check out this recent episode of the Volts podcast with guest Jigar Shah, head of the DOE's Loan Programs Office, where David Roberts and Jigar go into quite a bit of detail about the energy needs of data centers over the coming decades, and why nuclear is going to be a necessary component of any CO2-free energy solution.
Of course it's a point of interest: there are hundreds of cars in that parking lot. They must have gone out of their way to suppress the map label; that's a reflection of their internal corporate culture, which is one of paranoia.
Note that the corn ethanol manufacturing plant next door was allowed to be labelled, as was the coal power plant.
Interesting if this becomes an AWS "region", because I would have thought that independent power supplies for the different AZs are part of the promise of what an AWS region/AZ is about.
But if the data center(s) are all getting power from one source, that's a SPOF.
They say (you can kind of prove it, but not really, because they don't officially list locations) that AZs are supposed to be on different flood plains and stuff, so that the chances of an entire-region failure due to a natural disaster are minimized. I think that means the other AZs probably wouldn't be really close to the power station like this one.
Does AWS need to build anything on the site as part of this deal, or are they simply leveraging the power added to the grid in their existing us-east-1 footprint? The distance from Ashburn is too far for this to be an extension of a region.
I think nuclear energy should play a big part in decarbonizing our energy infrastructure, but this is doing nothing of the sort. This is ostensibly 1GW of new projected demand without any new generation being built for it. Meaning, at best this is creating new demand that will have to be sourced elsewhere - this is a nuclear station that has been operating for 40 years. It's good for it to stay online, but they would not have a hard time selling that electricity. This really just seems like a crabs-in-a-bucket capitalist move - laying claim to existing resources before they become scarce and expensive.
That's not really "at best" - at best, this represents a mix of new demand plus some compute being moved from customers' on-prem / small and inefficient datacenters. I'm not sure how much of AWS's growth is "new compute" vs "moved compute", but for the part that's moved, it's not necessarily a bad thing for it to move to a datacenter with a low PUE (power usage effectiveness - overhead such as cooling/backup power/power supply inefficiency) that uses green power.
(AWS/Google/Microsoft's datacenters are a _lot_ more efficient on a per-compute basis than the typical small corporate datacenter; like 30+% lower power waste because of inefficient AC and power distribution design.)
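To put numbers on it, PUE is just total facility power over IT load; lower is better (the PUE values below are typical illustrative figures, not Amazon's actual numbers):

    # PUE = total facility power / IT equipment power (illustrative values)
    def facility_power_kw(it_load_kw: float, pue: float) -> float:
        return it_load_kw * pue       # total draw including cooling, UPS losses, distribution

    it_load_kw = 1_000
    print(facility_power_kw(it_load_kw, pue=1.8))  # ~1800 kW: typical small corporate DC
    print(facility_power_kw(it_load_kw, pue=1.2))  # ~1200 kW: well-run hyperscale DC

Same compute, roughly a third less power drawn from the grid.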
Agreed. I had to double take because I spent a lot of time worrying about nuclear data, which is all the probabilities of various nuclear reactions in certain situations. Like, the probability that a neutron moving at 2200 m/s will bounce off a Uranium-238 atom vs. the probability that it will be captured and emit a gamma ray is nuclear data.
They thought they were going to mine cryptocurrency here during off-peak hours, if anybody is wondering why an energy company casually owns a huge datacenter.
edit: guilty of 1) not reading article first, and 2) being overly pessimistic about the cryptocurrency climate.
Musk stated that he predicts 2023 going into 2024 will see a shortage of transformers, and that based on the rate of improvement of AI there will be a shortage of electricity in 2025. Maybe these are moves by tech companies trying to preempt what might be coming?
He said it, but in reverse order: first shortage of electricity, then of transformers - needed by new power plants built to solve the electricity shortage.
You can get an estimate of how much new electricity will be needed by the GPUs already ordered but not delivered yet.
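Something like this, though every input below is a placeholder assumption rather than an actual order figure:

    # crude demand estimate from a GPU order book (placeholder numbers)
    gpus_on_order = 1_000_000      # assumed; nobody publishes the real backlog
    watts_per_gpu = 700            # an H100 SXM is rated around 700 W
    pue = 1.2                      # facility overhead: cooling, power delivery (assumed)

    demand_gw = gpus_on_order * watts_per_gpu * pue / 1e9
    print(f"~{demand_gw:.2f} GW of new continuous demand")   # ~0.84 GW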
Not enough capacity and not enough plans to bring more online. It is difficult for data center customers to expand in Northern Virginia because Dominion is struggling to deliver enough electricity given the limitations of the current infrastructure.
This isn't doing that though. It would be if Amazon commissioned a new plant for their own needs but it seems like they will be sucking power from the existing grid.
Presumably there's a reason this complex was built right next to the nuclear plant rather than built somewhere else and plugged into the grid. It gives them guaranteed demand which helps them operate more efficiently, it avoids transmission losses, it avoids putting load on the grid since it's not going via the grid...
Talen Energy Corporation just wanted to make a zero-carbon data center attached to their power plant. A cool side project for them. Amazon made them a deal they couldn't refuse.
Shortage of transformers is already a thing and has been for years. That's not even a prediction. The lead time for large transformers is years and there is no inventory.
This is anecdata, but my electrician friends who do a lot of commercial projects needing transformers say that lead times are atrocious and prices are terrible, which is already leading to budget overruns and project delays. And people already see the strategic issues with sourcing them from China while domestic manufacturing dwindles.
It would make sense for tech companies that want cheap power for their data centers to buy facilities close to power plants, but savvy investors trying to profit off the market discrepancies would be investing in the forges and factories that can make transformers domestically, and lobbying their congresspeople for incentives from the DoE and other pork-barrel spending to make building transformers domestically cheaper and faster. I think that will happen before we see the existing shortage lead to brownouts, because energy demand is constantly growing and both parties have incentives to grow domestic manufacturing.
Do not underestimate the magnitude of this news. It reveals there is basically negligible opposition to existing nuclear, and limited or no opposition to new nuclear other than for cost reasons.