
Tuesday 8 March 2016

Desert Datacentre - Thinking outside of the Box


Desert Datacentre

Back in 2010 my team and I kicked off an ambitious project to design and build a datacentre on the east coast of Qatar.   Initially the project was well received, but the politics of coordinating many parties became a big issue. After 18 months, the project was abandoned.  In that time we had come up with a design, purchased a shipping container for the proof of concept, and obtained the relevant permissions to run the experiment at a coastal location on land donated by a local Qatari who was an invaluable supporter of the project. The progress we made remains relevant to datacentre design.
Building a datacentre in the desert meant ignoring most of the rules of modern datacentre design:

Rule 1: Use a cool location so that “free air” can be used, instead of needing to cool the air.  With temperatures reaching up to 55 degrees centigrade, free-air cooling was never going to be an option in Qatar.
Rule 2: Ensure that airborne contaminants are kept away from the servers.  Not easy in Qatar, where sandstorms are not uncommon and sand gets literally everywhere.
Rule 3: Be close to power sources. You may have noticed that many mainstream datacentres reside next to power stations. Ours was going to be at a coastal location away from the largely gas-fired power stations of Qatar.
Rule 4: Locate your datacentre where there is a clearly defined market to sell capacity.  Whilst there was an existing datacentre business within Qatar, it certainly wasn’t well established, and there was considerable uncertainty regarding future demand in the country.
Rule 5: Position your datacentre close to excellent sources of connectivity. This was one aspect that we got spot on, as our site was very close to a point of presence of the main submarine cable connection entering Qatar from beneath the Persian Gulf.


Planned datacentre location

As traditional air cooling wasn’t an option, we looked at oil-based cooling. This involved the use of pharmaceutical-grade oil (more commonly referred to as “baby oil”) with a large heat exchanger about 200 metres off the coast, 10 metres below sea level.  If you aren’t familiar with oil-based cooling, check out Green Revolution Cooling, based in Austin, Texas, USA. They have built an entire datacentre based upon servers submerged in oil, with an overhead cooling system that consumes just 2 percent of the energy delivered to the system.  This is very impressive, since the majority of commercial datacentres outside of the showcases of Google and Facebook use closer to 50 percent of the energy delivered to the system for cooling.  For details on the average energy efficiency of datacentres, it is worth looking at the periodic reports from the Uptime Institute, such as their 2014 report.
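To put those cooling-overhead figures in context, here is a rough back-of-the-envelope comparison of my own (not a calculation from either vendor). It simply converts a cooling share of total facility energy into an approximate Power Usage Effectiveness (PUE), ignoring other overheads such as power distribution losses.

```python
# Rough comparison of cooling overheads (illustrative only, my own figures).
# Assumes "cooling consumes X percent of the energy delivered to the system"
# means X percent of total facility energy goes to cooling and the rest to IT.

def pue_from_cooling_fraction(cooling_fraction: float) -> float:
    """PUE = total facility energy / IT energy, ignoring non-cooling overheads."""
    return 1.0 / (1.0 - cooling_fraction)

for label, cooling in [("oil-immersion (Green Revolution Cooling figure)", 0.02),
                       ("typical air-cooled commercial facility", 0.50)]:
    print(f"{label}: PUE ~ {pue_from_cooling_fraction(cooling):.2f}")

# Prints roughly:
#   oil-immersion (Green Revolution Cooling figure): PUE ~ 1.02
#   typical air-cooled commercial facility: PUE ~ 2.00
```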

In Qatar much of the work was carried out by a very dedicated fluid dynamics team, who mapped the heat flow from our test server infrastructure and modelled it in software, allowing us to identify potential hot spots and design them out.  In theory, if our sealed, oil-cooled shipping container actually worked, we would have been able to run largely from solar power during the day and switch to grid power overnight.  There was significant interest in using technologies to store excess solar energy in batteries and discharge these overnight, but this was really a project for another team. I decided we had enough to contend with in ensuring that servers immersed in oil could be remotely managed and that the fluid dynamics of heat exchange with sea water would work.
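For a sense of the scale involved in that heat-exchange question, here is a minimal steady-state sizing sketch of an oil-to-seawater loop. Every figure in it (the 250 kW container load, the allowable temperature rises, the specific heat values) is an illustrative assumption of mine, not a number from the original design, and it ignores pumping power and heat-exchanger effectiveness.

```python
# Minimal steady-state sizing sketch for an oil-to-seawater cooling loop.
# All figures are illustrative assumptions, not values from the Qatar project.

IT_LOAD_KW = 250.0        # assumed heat load of one containerised installation, kW
CP_SEAWATER = 4.0         # approximate specific heat of seawater, kJ/(kg*K)
SEAWATER_DELTA_T = 5.0    # assumed allowable seawater temperature rise, K
CP_MINERAL_OIL = 1.9      # approximate specific heat of mineral oil, kJ/(kg*K)
OIL_DELTA_T = 10.0        # assumed oil temperature drop across the exchanger, K

# Energy balance Q = m_dot * c_p * delta_T, so m_dot = Q / (c_p * delta_T).
seawater_flow_kg_s = IT_LOAD_KW / (CP_SEAWATER * SEAWATER_DELTA_T)
oil_flow_kg_s = IT_LOAD_KW / (CP_MINERAL_OIL * OIL_DELTA_T)

print(f"Seawater flow required: {seawater_flow_kg_s:.1f} kg/s")  # ~12.5 kg/s
print(f"Oil flow required:      {oil_flow_kg_s:.1f} kg/s")       # ~13.2 kg/s
```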
Unfortunately we had to abandon the project before it was completed, but I still feel that if we had been allowed to continue, we would have been successful.  I must say that many people considered the idea crazy back in 2010, and I was even described in the local press as an eccentric, “mad scientist” Englishman.

In recent weeks you may have heard of Microsoft’s Project Natick, an experiment to run a submerged datacentre 0.6 miles off the coast of California using seawater cooling.  Whilst this is purely a research project at present, several industry analysts have suggested to me that if Microsoft can pull this off, the dynamics of datacentres could change overnight.  I have always been opposed to large monolithic datacentres with tens of thousands of servers and umpteen levels of resilience.  My preference has always been for small installations with a few hundred servers that replicate to similar configurations and are able to fail over in the event of an issue at one location.  Let’s see how Microsoft gets on with their current project. Perhaps I will receive a call in the near future to complete building an oil-cooled datacentre in the desert.

Simon


