Direct Liquid Cooling: The Future for Data Centers?
By Ben Hatton

[Image: An application of direct liquid cooling in a data center. Photo courtesy of Green Revolution Cooling.]

For data centers, cooling is everything: not only does it keep colocated equipment running efficiently, it also accounts for a large share of a data center’s power usage. Most data centers, including Data Cave, rely on cooling systems that use air to carry heat away from equipment while simultaneously supplying chilled air. However, direct liquid cooling, a method first developed back in the 1960s, is now seeing something of a resurgence. Rather than using air, this method involves physically submerging servers in a dielectric fluid (not regular water) to keep them cool. While this method of cooling has been anything but mainstream in the industry, there appears to be new interest in it, according to a July report by 451 Research.
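
To put the cooling overhead in concrete terms, the industry-standard PUE (Power Usage Effectiveness) metric divides total facility power by IT power; everything above 1.0 is overhead, and cooling is usually the biggest piece of it. Here is a quick sketch with purely hypothetical load figures:

```python
# Illustrative PUE calculation. PUE = total facility power / IT power;
# the gap between the two is mostly cooling overhead.
# All figures below are hypothetical.

it_load_kw = 1000.0        # servers, storage, and network gear
cooling_kw = 450.0         # cooling plant
other_overhead_kw = 150.0  # lighting, UPS losses, etc.

total_facility_kw = it_load_kw + cooling_kw + other_overhead_kw
pue = total_facility_kw / it_load_kw

print(f"PUE: {pue:.2f}")                                       # PUE: 1.60
print(f"Cooling share: {cooling_kw / total_facility_kw:.0%}")  # 28%
```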

The report indicates that there has been some growth in the number of liquid cooling providers, as well as new interest in liquid cooling by larger companies that require very high density computing in their data centers.

Why liquid cooling hasn’t become mainstream

While direct liquid cooling saw a fair amount of practical application cooling mainframes during the 1960s and ’70s, it eventually lost out to cheaper but comparably effective air cooling methods in the 1980s. It was during this period that the modern data center as we know it began to take shape, and air cooling quickly became the norm within the industry.

There are a number of obstacles that currently prevent most data center operators from considering liquid cooling as a viable option, and here are a few of them:

  1. Little to no vertical scaling: With this method, servers are submerged in large vats of fluid that are accessible only from above. This limits the ability to scale servers vertically (or stack them), since there is far less vertical space to work with. A data center already equipped with tall cabinets full of equipment would need a great deal of reconfiguration to move to a liquid cooling system.
  2. Not all server types are supported: Since the equipment is fully submerged in liquid, not all types of hard disk drives can be cooled this way. Drives must be designed not only to tolerate the fluid, but to remain fully operational while submerged. While this type of server equipment exists, any incompatible servers would have to be upgraded first.
  3. Costs are too high for most data centers to justify: Beyond the cost of updating the server equipment itself, implementing a liquid cooling system carries numerous other expenses: the physical equipment (the large vats and supplies of dielectric fluid), maintenance of that equipment, and modifications to the existing infrastructure and data center layout to accommodate it. In all likelihood, it would be more cost effective to start from scratch with liquid cooling than to retrofit it into an existing data center.
  4. Extra precautionary measures are required: Data centers have always been environments where any type of liquid is strictly forbidden, so deliberately introducing one would understandably give any data center operator pause. It would also require additional measures and physical changes within the data center to keep other pieces of critical equipment from coming into contact with the cooling fluid.

Ultimately, it is the steep costs and long-term infrastructure changes that are the major barriers keeping most data centers from adopting liquid cooling anytime soon.

What it would take for it to become mainstream

While some larger data centers with very high capacity computing needs have shown renewed interest in liquid cooling, I think it will take considerable time before it gains traction throughout the industry. Here are a couple of things that could help it take off, though:

  1. Widespread adoption of high density computing: There has been a growing shift in the industry toward high density computing, a scenario that would be well served by liquid cooling (see the sketch after this list). However, this shift hasn’t happened overnight, and until high density data centers truly become the norm in the industry, liquid cooling won’t see a great deal of growth.
  2. Adoption by the industry trend setters: Many of the key trends and directions in the data center industry are set first by some of the world’s largest tech companies (e.g. Google, Facebook, Amazon). The computing and capacity requirements of these companies are far above average, and they are often the pioneers in developing and working with new types of data center technologies to meet those needs. If many of these larger companies begin adopting liquid cooling as part of their data center operations, the rest of the industry would definitely take notice and likely follow suit over time.
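
For a rough sense of why density matters here, compare how many racks a given IT load requires at typical air-cooled versus immersion-cooled densities. The per-rack figures below are loose assumptions for illustration, not vendor specifications:

```python
import math

# Toy comparison: racks needed to house a 2 MW IT load at different
# per-rack power densities. The density figures are loose assumptions:
# air cooling is commonly cited around 10-15 kW per rack, while
# immersion systems advertise several times that.

it_load_kw = 2000.0           # total IT load to house
air_kw_per_rack = 12.0        # assumed air-cooled density
immersion_kw_per_rack = 50.0  # assumed immersion-cooled density

air_racks = math.ceil(it_load_kw / air_kw_per_rack)
immersion_racks = math.ceil(it_load_kw / immersion_kw_per_rack)

print(f"Air-cooled racks needed: {air_racks}")        # 167
print(f"Immersion racks needed:  {immersion_racks}")  # 40
```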

Liquid cooling for data centers definitely has the potential to become a legitimate standard for cooling within our industry. It’s not there yet, but as the industry continues to evolve, there is a good chance we’ll be hearing more about it in the future.


Winter Disaster Recovery: Are you ready?
By Ben Hatton

[Image: One look at the epic snowfall that the Buffalo area recently received. Image courtesy of Flickr.]

A few weeks ago the Buffalo, New York area was absolutely slammed with record-breaking snow (6 feet of it in many places). While we have been far more fortunate in Indiana, we, along with most of the rest of the country, have seen record low temperatures for this time of year. The calendar may say it’s still autumn, but for all intents and purposes, it’s winter now.

If there is a bright side to this, it’s the reminder that winter weather brings some unique challenges to your business. These typically include things like:

  • Employees not being able to make it into work, due to the snow and ice.
  • Power outages (luckily few were reported in New York).

While these are certainly challenges, they are challenges that can be met with the right disaster recovery plan. With a plan that includes server colocation at a data center like Data Cave, the effects of winter weather on your operations can be mitigated. Here’s why:

Well defended against power outages

As we’ve written about before, Data Cave has a high level of redundancy when it comes to our electrical infrastructure. With multiple utility feeds entering the building, as well as UPS flywheels and two 2MW Cummins backup generators per quadrant, we are very well protected from power outages year round. For more specifics, check out our Electrical Infrastructure page.

This means that even if the lights go out at your office, your applications and data will still be accessible.
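
As a back-of-the-envelope illustration of why layered power redundancy matters, two independent power sources only fail together very rarely. A toy calculation, assuming purely hypothetical availability figures and independent failures:

```python
# Toy illustration of why redundant power sources matter: the load only
# drops if every source is down at once. Availability figures below are
# purely hypothetical, and the sources are assumed to fail independently.

utility_availability = 0.99    # assumed uptime of the utility feed
generator_availability = 0.99  # assumed uptime of the backup generators

p_both_down = (1 - utility_availability) * (1 - generator_availability)
combined_availability = 1 - p_both_down

print(f"Combined availability: {combined_availability:.4%}")  # 99.9900%

hours_per_year = 24 * 365
print(f"Expected outage: {p_both_down * hours_per_year:.2f} hours/year")  # 0.88
```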

Icy and snowy roads aren’t as big of a deal

I’m not recommending that any of you attempt to drive through 6′ snow drifts (unless you happen to drive one of these), but if you or your employees can’t make it into work, it’s likely that at least some of your staff can work from home instead. This becomes much easier to do when your data is maintained at a location that is protected from power outages, where it can still be accessed remotely during a winter storm.

Start planning!

If last year’s extended winter was any indication of things to come, then last month’s storm has all but confirmed it. That means right now is the perfect time to think about your company’s disaster recovery plan and how you would respond after a winter storm. Contact us to start the conversation today!


Slack: An app that takes collaboration to the next level
By Ben Hatton

Effective communication within a company is vital to its long-term success, and we’re no exception at Data Cave. With multiple internal projects going on at any given time, an expansive infrastructure to monitor and maintain, and staff who aren’t always in the same place within the facility, the ability to communicate quickly within the Data Cave team is a very big deal for us and for maintaining our redundancy levels.

Earlier this year we began using Slack for much of our internal communications and project notifications, and it has become a very effective tool for us. Our objective was to find a communication tool that was easy to use, offered a good range of functionality, was mobile-friendly, and was stable (finding one tool that meets all of those requirements is actually pretty tough!). All in all, we have been very happy with it, and if you have been looking to enhance your organization’s internal communications, keep reading to hear our thoughts!

About Slack

At its core, Slack is an instant messaging tool that can be accessed on computers as well as mobile devices. It allows for both private and group messaging, as well as team-wide chats that can be broken down by topic into channels.

[Image: The Slack user interface.]

Benefits we’ve seen

Over the past several months, Slack has enhanced our team’s internal communications in a number of different ways:

1) More visibility into projects: Like most companies, our internal communications include a little bit of everything (emails, IMs, hand-written notes, etc.), but by shifting a decent portion of these over to Slack, we’ve found it much easier for everyone to stay in the loop on all of the data center projects we have going on. The support for multiple chat channels has played a big part in this, as has the ability to easily upload attachments. Overall, Slack has brought a much higher level of visibility and organization to our internal projects, which has helped them move forward far more efficiently than in the past.

2) Very mobile-friendly: Data Cave is a big facility, with lots of moving parts and things that go on behind the scenes. This effectively guarantees that none of us are going to be glued to our computers all day. This is where Slack’s mobile app comes in; it’s very intuitive and easy to use, offering all of the same major functionality as the web application. Through this we can stay in the loop with what is going on in each channel, regardless of where we are.

3) Third-party integration: Slack is an open platform that integrates with a wide range of third-party applications. We are able to tie in some of the outside applications and services we use for various activities at Data Cave, viewing notifications and even performing some actions directly within Slack. To that end, Slack has proven very effective at letting us view information (and take action when needed) from multiple outside services, all within one central application. There are a lot of chat and instant messaging services out there, but very few have taken this approach of deep service integration.
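
As a concrete illustration of that integration surface, here is a minimal sketch of pushing a notification into a Slack channel through one of its incoming webhooks. The webhook URL below is a placeholder (a real one is generated from Slack’s integration settings), and the message text is hypothetical:

```python
import json
import urllib.request

# Minimal sketch of posting a message into a Slack channel via an
# incoming webhook. The URL is a placeholder; a real one is generated
# from Slack's integration settings.
WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def notify_slack(text: str) -> None:
    """Send a plain-text message to the channel tied to the webhook."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# Hypothetical use, e.g. from a monitoring script:
notify_slack("Generator test complete: all quadrants nominal.")
```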

Final thoughts

Overall, Slack has had a tremendous impact on how we communicate at Data Cave. Not only has it allowed for better visibility and faster communication on our internal projects, but more importantly, it has improved how efficiently those projects get completed. The ability to work efficiently as a team is why we are able to grow and stay on top of today’s data center trends, and Slack is one of the tools that helps us do just that. I would highly recommend Slack for any organization looking to enhance its internal communications!


Understanding the Impact of Data Center Downtime
By Ben Hatton

[Image: The Drive to Thrive report looks extensively at the impact that downtime has had on federal data centers around the country. Image courtesy of MeriTalk.]

The top priority for any data center operator is to ensure that downtime never occurs, and in the (hopefully rare) event that it does, to restore availability as quickly as possible. A big part of this is the data center’s ability to empathize with the people who rely on it, so that it truly understands the specific impact downtime has on them. Downtime affects more than just the data center’s bottom line; more importantly, it affects the operations of the customers who have equipment in the data center, and of everyone else who relies on those services being available. A solid understanding of downtime’s impact will benefit any data center operator.

The ‘Drive to Thrive’ data center report

This is why I found some of the stats from the recent ‘Drive to Thrive’ report from MeriTalk on federal data centers so alarming. The report is based on a survey of 300 federal IT workers, and it focuses extensively on the level of data center downtime they typically see. Not only did the survey find that downtime is a regular occurrence for the majority of federal IT workers, but federal data center operators appear to have a skewed perception of just how damaging a downtime event can be. You’ll see what I mean shortly. Here are a few of the key findings from the survey:

  1. Over the course of one month, 70% of federal agencies experienced downtime events lasting 30 minutes or more (see the quick conversion after this list).
  2. These events affected the ability to work for 90% of federal employees.
  3. When asked to assign their data centers a letter grade for how well they perform, 36% of the employees gave a C, saying their data centers manage downtime poorly and that it occurs often.
  4. The most telling stat: of everyone surveyed, only 29% believe that their data center personnel fully understand the impact that downtime has on their ability to work.
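
To put that first stat in perspective, here is a quick conversion of a single 30-minute monthly outage into an availability figure (a rough illustration, assuming a 30-day month):

```python
# A single 30-minute outage in a month, expressed as availability.

outage_minutes = 30
minutes_per_month = 30 * 24 * 60  # 43,200 in a 30-day month

availability = 1 - outage_minutes / minutes_per_month
print(f"Monthly availability: {availability:.4%}")  # 99.9306%

# For contrast, "four nines" (99.99%) allows only ~4.3 minutes per month:
print(f"99.99% allows {0.0001 * minutes_per_month:.1f} min/month of downtime")
```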

These findings certainly show that federal data centers have a problem with preventing downtime (the vast majority of it was caused by server outages or connectivity failures). However, I believe a major underlying cause is a disconnect in how data center operators view the impact of downtime. If you run a data center, the mindset that a small amount of downtime won’t have major repercussions, or any other skewed mindset about downtime in general, will mean trouble. As data center operators, we need to assume that any amount of downtime will have a disastrous impact on the people who rely on us (after all, the numbers speak for themselves). Any attitude less than this is a compromise we can’t, and shouldn’t, make.

Downtime is bad, period.

We realize that our level of uptime at Data Cave impacts much more than our own business and reputation; more importantly, it impacts the success and livelihood of each of our clients, as well as everyone who counts on their services being available. This empathy for our clients’ success has been a driving factor in our efforts to prevent downtime in all its forms. While the government’s data center operators appear to have a skewed understanding of the effects of downtime, we feel that the serious way we view it has put us at an advantage, setting a solid foundation for every decision we have made at our data center.

If you would like to learn about some of the other things that set Data Cave apart from other data centers, contact us today!


Data Breaches: Prevention and Response go hand in hand
By Ben Hatton

October is Cybersecurity Awareness Month. Cybersecurity doesn’t just impact us as a data center; it impacts all individuals and companies across the board (check out our post from last year on the subject). It is a challenge that is continually evolving, and it was the focus of last month’s TechPoint panel discussion. Bringing together experts in consulting, healthcare, and cybersecurity research, the discussion centered on the changing security landscape, what some of today’s key risks look like, and how businesses and individuals can (and must) adapt to those risks.

[Image: TechPoint panel discussion, “Cybersecurity: The New Normal”]

I’m going to look at some of the key points that were discussed by the panel, and how due diligence against data breaches is essential for both companies as well as individuals.

If you haven’t already created a data breach prevention plan, please do so. Right now.

The discussion opened with a few alarming statistics from a recent Ponemon study on data breaches (the report can be accessed here):

  • Over the past year, 43% of companies have experienced a data breach, and 60% of those have experienced multiple breaches in the last 2 years.
  • 67% of business leaders lack a full understanding of how to effectively respond to a breach when one occurs.
  • 62% expressed no confidence in their ability to respond to a breach, and of the respondents who do have a response plan in place, 30% said it is flat-out useless when responding to a breach.

If these stats make it seem like data breaches are ultimately inevitable for any company, regardless of how much it prepares, that’s because they are. The general consensus among the panelists was that if your company hasn’t experienced a breach yet, it will. This is largely due to the growing sophistication of the hackers responsible for breaches, who have evolved from teenagers with an illegal hobby into full-scale operations funded by criminal organizations or, in some cases, governments. Since the threat itself has evolved, the ways we prepare and respond need to evolve as well.

A breach response plan is just as important as a prevention plan.

Given the increasingly high probability that any company will experience a data breach, the panelists stressed the importance of having an effective breach response plan in addition to the measures you take to prevent breaches from occurring. Because breaches are so prevalent, how a company responds to one can sometimes matter even more than how it works to prevent them in the first place. To make matters worse, a breach response plan is something many companies overlook or neglect altogether.

“Human beings are not perfect computers.”

While companies can have solid security policies and practices in place, those won’t amount to much if employees aren’t educated about them or don’t abide by them. Another key point made during the discussion was that “human beings are not perfect computers,” and that as employees and consumers, we all need to become better educated about the specific security risks we face on a daily basis. This requires effort both from employees themselves and from the organizations they work for.

This is somewhat of a paradigm shift from the “traditional” line of thinking, where the burden of security protection is almost always placed solely on companies rather than on consumers (I touched on this same thing in this Data Privacy post from August). As I wrote then, and as the panelists discussed at this event, consumers and responsible employees alike need to become educated about both their personal data security and the security of their organizations. In reality, companies can only shoulder so much of the burden of keeping data secure from breaches; it also requires education and vigilance on the part of the employee.

The threat is continually evolving, so we need to continually adapt.

My final and biggest takeaway from the discussion ties together all of the previous points: not only will the threat of data breaches always exist, it will continually evolve along with the technology and with those who profit from compromising a company’s data. Furthermore, as consumers we want to be more connected and to interact digitally more than ever before. These two factors point to a huge need for both companies and individuals to continually learn and adapt to these trends, or risk the consequences. And, as the panel closed with, therein lies the real challenge.
