Addressing the ‘Comatose Server’ Problem in Data Centers
By Ben Hatton

You can view the NRDC Data Center Efficiency Assessment report in its entirety at this link. Image courtesy of NRDC.

A report from the Natural Resources Defense Council (NRDC) came out this past year that has been making the rounds in numerous IT blogs, concerning the high power usage of the world’s data centers. One article about the report goes so far as to call data centers’ energy consumption wasteful, citing several contributing factors. While it’s no secret that data centers consume a lot of power, one major factor highlighted in the report is the existence of “comatose servers” that needlessly waste power within data centers.

A “comatose server” is any server that runs and consumes power, yet no longer serves a functional purpose for the organization. These servers are often the remnants of network projects: new servers are added, or services are migrated from one server to another, and the old servers never actually get decommissioned or removed from the infrastructure, frequently because they are simply overlooked. The result is a piece of equipment that draws electricity needlessly. The category also includes servers that sit idle most of the time; even when idle, they still consume power. According to the report, an estimated 20-30% of servers in data centers today fall into this “comatose” category.
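One way to surface candidates for that “comatose” label is to look at utilization over a monitoring window. Here is a minimal, hypothetical sketch of the idea in Python; the server names, sample data, and 5% threshold are all invented for illustration, not taken from the report:

```python
# Hypothetical sketch: flag servers whose average CPU utilization over a
# monitoring window stays below a threshold, making them candidates for
# a "comatose" review. Names, samples, and threshold are illustrative only.

def find_idle_candidates(cpu_samples, threshold_pct=5.0):
    """Return server names whose mean CPU utilization is below threshold_pct."""
    candidates = []
    for server, samples in cpu_samples.items():
        if samples and sum(samples) / len(samples) < threshold_pct:
            candidates.append(server)
    return sorted(candidates)

# Example: utilization percentages sampled over a day.
samples = {
    "web-01":    [40.2, 55.1, 38.7, 60.3],  # busy production box
    "legacy-db": [0.4, 0.2, 0.3, 0.5],      # likely comatose
    "old-app":   [1.1, 0.9, 1.3, 0.8],      # likely comatose
}
print(find_idle_candidates(samples))  # ['legacy-db', 'old-app']
```

Low CPU alone doesn’t prove a server is comatose (backup targets sit idle by design), so a sketch like this only builds a review list; a human still makes the decommissioning call.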

It pays to lose these servers

The report emphasizes the importance of reducing a data center’s power consumption by decommissioning these comatose servers, and the significant cost savings that can follow. It cites a great example from AOL (yep, they’re still around), which recently decommissioned over 9,000 comatose servers. This led to an estimated total savings of close to $5 million, a big chunk of which came from reduced power and cooling costs. It is clearly worthwhile to make the effort to cut back on any power-sucking comatose servers that may exist in a data center.

Some steps to get things started

While the existence of comatose servers can present a daunting challenge, there are ways that the number of them can be reduced, and their impact on power use lessened. Here are just a few of them:

  1. Consolidate whenever possible: Consolidating your organization’s applications and data onto fewer production servers has many benefits. A key one is lower power usage by the data center, both for powering the equipment itself and for cooling it. For companies that colocate in a data center, these savings are often passed on in the form of lower power costs. Consolidation projects are involved and require careful planning, but they can yield tremendous benefits for an organization, with lower power usage being just one of them.

  2. Document everything: A little bit of documentation can go a long way when it comes to identifying servers that are good candidates for decommissioning. With proper documentation you can determine which of your servers host an application that is no longer used in production, or contain data or files that already reside elsewhere. This knowledge helps you pinpoint any servers that may be comatose.

  3. Consider switching to SSDs where possible: Solid state drives consume much less power than traditional hard disk drives because they have no internal moving parts. This means less power consumed by servers with SSDs, as well as less power needed for cooling them, since SSDs don’t generate as much heat. Another nice benefit is that an SSD draws meaningful power only when it is actually in use; an idle drive consumes virtually none. And while these drives have traditionally been more expensive than hard disk drives, the price gap has been narrowing over time.
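To make step 2 concrete, even a simple spreadsheet-style inventory can be queried for decommissioning candidates. This is a hypothetical sketch; the column names (`application`, `owner`) and hostnames are invented, not a prescribed schema:

```python
# Hypothetical sketch of the "document everything" step: given a simple
# CSV server inventory, list servers with no application or no owner
# recorded -- candidates worth investigating for decommissioning.
# Field names and hostnames are illustrative only.
import csv
import io

inventory_csv = """hostname,application,owner,last_login
web-01,crm,alice,2014-12-01
db-07,,bob,2013-02-15
app-legacy,billing,,2012-06-30
"""

def decommission_candidates(csv_text):
    """Return hostnames missing an application or an owner."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["hostname"] for row in reader
            if not row["application"] or not row["owner"]]

print(decommission_candidates(inventory_csv))  # ['db-07', 'app-legacy']
```

The point isn’t the tooling; it’s that once the inventory exists, finding orphaned servers becomes a five-line query instead of a guessing game.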

Good for the environment and the bottom line

Making the effort to reduce the number of comatose servers in a data center can have huge benefits for the data center’s operators and tenants, both in how much power the data center uses and in how much cooling is needed. Not only are changes like these beneficial to everyone’s bottom line, they are better for the environment as well, and that is an especially big deal in our industry!


Data Cave celebrates 5 years in business!
By Ben Hatton

Happy New Year! This is a very special day, not just because it’s the start of a new year, but because today also marks Data Cave’s 5th anniversary of opening for business! We opened our doors on January 1, 2010, excited to bring our unique high-level data center services to the Columbus area.

The past 5 years have been good to us: we have started relationships with many great clients throughout the Midwest and brought together an amazing team. We have also introduced several new services, such as our expansive Business Continuity workspace and our Data Backup service. We have grown tremendously over the last 5 years, and we are primed to grow even more in 2015!

As we look ahead to what 2015 will bring, we want to thank all of our clients, our Data Cave team, and the Columbus community for helping to make our first 5 years in business memorable ones. We are all excited for many more to come!

 


Direct Liquid Cooling: The Future for Data Centers?
By Ben Hatton

An application of direct liquid cooling in a data center. Photo courtesy of Green Revolution Cooling.

For data centers, cooling is everything; not only does it keep colocated equipment running efficiently, it also accounts for a large share of a data center’s power usage. Most data centers, including Data Cave, rely on cooling systems that use air to carry heat away from equipment while simultaneously supplying chilled air. However, a cooling method developed back in the 1960s is now seeing something of a resurgence: direct liquid cooling. Rather than using air, this method involves physically submerging servers in a dielectric fluid (NOT regular water) to keep them cool. While this method has been anything but mainstream in the industry, there appears to be new interest in it, according to a July report by 451 Research.

The report indicates that there has been some growth in the number of liquid cooling providers, as well as new interest in liquid cooling by larger companies that require very high density computing in their data centers.

Why liquid cooling hasn’t become mainstream

While direct liquid cooling saw a fair amount of practical application cooling mainframes in the 1960s and ’70s, it eventually lost out to cheaper but comparably efficient air cooling methods in the 1980s. It was during this period that the modern data center as we know it began to take shape, and air cooling quickly became the industry norm.

There are a number of obstacles that currently prevent most data center operators from considering liquid cooling as a viable option, and here are a few of them:

  1. Little to no vertical scaling: With this method, servers are submerged in large vats of fluid that are only accessible from above. This inhibits the ability to scale servers vertically (or stack them), since there is far less vertical space to work with. If a data center is already equipped with tall cabinets full of equipment, moving to a liquid cooling system would require a great deal of reconfiguration.
  2. Not all server types are supported: Since liquid cooling involves fully submerging the physical equipment, not all hard disk drives can be cooled this way. Drives must be designed not only to tolerate the fluid, but to remain fully operational while submerged. While such equipment exists, any incompatible servers would have to be upgraded as well.
  3. Costs are too high for most data centers to justify: In addition to the cost of updating the server equipment itself, the other costs of implementing a liquid cooling system are quite high. These include the physical equipment needed (the large vats as well as supplies of dielectric fluid), maintaining that equipment, and modifying the existing infrastructure and data center layout to accommodate it. In all likelihood, it would be more cost effective to start from scratch with liquid cooling than to retrofit it into an existing data center.
  4. Extra precautionary measures are required: Data centers have always been environments where any type of liquid is strictly forbidden, so the idea of deliberately introducing a liquid would raise a few eyebrows and cause some fear on the data center operator’s part. It would also necessitate additional measures and physical changes within the data center to prevent other pieces of critical equipment from coming into contact with the cooling fluid.

Ultimately, it is the steep costs and long-term infrastructure changes that are the major barriers keeping most data centers from adopting liquid cooling anytime soon.

What it would take for it to become mainstream

While there has been renewed interest in liquid cooling from some larger data centers with very high capacity computing needs, I think it will take considerable time before it gains traction throughout the industry. Here are a couple of things that could help it take off, though:

  1. Widespread adoption of high density computing: There has been a growing shift in the industry towards high density computing, a scenario that would be well served by liquid cooling. However, this isn’t a shift that has happened overnight, and until high density data centers start to really become the norm in the industry, liquid cooling won’t see a great deal of growth.
  2. Adoption by the industry trend setters: Many of the key trends and directions in the data center industry are set first by the world’s largest tech companies (e.g. Google, Facebook, Amazon). The computing and capacity requirements of these companies are far above average, and they are often the pioneers in developing and working with new data center technologies to meet those needs. If these larger companies begin adopting liquid cooling in their data center operations, the rest of the industry would take notice and likely follow suit over time.

Liquid cooling for data centers definitely has the potential to become a legitimate standard for cooling within our industry. It’s not there yet, but as the industry continues to evolve, there is a good chance we’ll be hearing more about it in the future.


Winter Disaster Recovery: Are you ready?
By Ben Hatton

One look at the epic snowfall that the Buffalo area recently received. Image courtesy of Flickr.

A few weeks ago the Buffalo, New York area was absolutely slammed with record-breaking snow (6 feet of it in many places). While we have been far more fortunate in Indiana, we, along with most of the rest of the country, have seen record low temperatures for this time of year. The calendar may say it’s still autumn, but for all intents and purposes, it’s winter now.

If there is a bright side to this, it’s the reminder that winter weather brings some unique challenges to your business. These typically include things like:

  • Employees not being able to make it into work, due to the snow and ice.
  • Power outages (luckily few were reported in New York).

While these are certainly challenges, they are challenges that can be met with the right disaster recovery plan. With a plan that includes server colocation at a data center like Data Cave, the effects of winter weather on your operations can be mitigated. Here’s why:

Highly defended from power outages

As we’ve written about before, Data Cave has a high level of redundancy when it comes to our electrical infrastructure. With multiple utility feeds entering the building, as well as UPS flywheels and two 2MW Cummins backup generators per quadrant, we are very well protected from power outages year round. For more specifics, check out our Electrical Infrastructure page.

This means that even if the lights go out at your office, your applications and data will still be accessible.

Icy and snowy roads aren’t as big of a deal

I’m not recommending that any of you attempt to drive through 6′ snow drifts (unless you happen to drive one of these), but if you or your employees can’t make it into work, it’s likely that at least some of your staff can work from home instead. This becomes much easier if your data is maintained at a location that is safe from power outages, where it can still be accessed remotely during a winter storm.

Start planning!

If last year’s extended winter was any indication of things to come, then last month’s storm has all but confirmed it. That means right now is the perfect time to think about your company’s disaster recovery plan and how you would respond after a winter storm. Contact us to start the conversation today!

 


Slack: An app that takes collaboration to the next level
By Ben Hatton

Effective communication within a company is vital to its long-term success, and we’re no exception at Data Cave. With multiple internal projects going on at any given time, an expansive infrastructure to monitor and maintain, and staff who aren’t always in the same place within the facility, the ability to communicate quickly within the Data Cave team is a very big deal for us.

Earlier this year we began using Slack for much of our internal communications and notifications on different projects, and it has become a very effective tool for us. Our objective was to find a communication tool that was easy to use, provided a good range of functionality, was mobile-friendly, and was stable (finding one tool that meets all of those requirements is actually pretty tough!). All in all, we have been very happy with it, and if you have been looking to enhance your organization’s internal communications, keep reading for our thoughts!

About Slack

At its core, Slack is an instant messaging communication tool that can be accessed on computers as well as mobile devices. It allows for both private and group messaging, as well as team-wide chats that can be broken down by topic.

The Slack user interface.

Benefits we’ve seen

Over the past several months, our team’s internal communications have been greatly enhanced by Slack, in a number of different ways:

1) More visibility into projects: Like most companies, our internal communications include a little bit of everything (emails, IMs, hand-written notes, etc.), but by shifting a sizable portion of them over to Slack, we’ve found it much easier for everyone to stay in the loop on all of the data center projects we have going on. The support for multiple chat channels has played a big part in this, as has the ability to easily upload attachments. Overall it has brought a much higher level of visibility and organization to our internal projects, which has helped them move forward much more efficiently than in the past.

2) Very mobile-friendly: Data Cave is a big facility, with lots of moving parts and things that go on behind the scenes. This effectively guarantees that none of us are going to be glued to our computers all day. This is where Slack’s mobile app comes in; it’s very intuitive and easy to use, offering all of the same major functionality as the web application. Through this we can stay in the loop with what is going on in each channel, regardless of where we are.

3) Third-party integration: Slack is an open platform that integrates with a wide range of third-party applications. We are able to tie in some of the outside applications and services we use for various activities at Data Cave, and view notifications as well as perform some actions directly within Slack. Slack has proven very effective at letting us view information (and take action when needed) from multiple outside services, all within one central application. There are a lot of chat and instant messaging services out there, but very few have taken this approach of deep service integration.
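As a flavor of what that integration looks like, Slack’s incoming webhooks accept a simple JSON payload posted over HTTPS. Below is a minimal, hypothetical sketch; the webhook URL, channel name, and alert text are placeholders, not our actual configuration:

```python
# Hypothetical sketch: posting a notification into a Slack channel via an
# incoming webhook. Slack's incoming-webhook API accepts a JSON body with
# a "text" field (plus optional fields such as "channel"). The URL,
# channel, and message below are placeholders for illustration.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def build_alert(message, channel="#datacenter-alerts"):
    """Build the JSON payload an incoming webhook expects."""
    return json.dumps({"text": message, "channel": channel})

def post_alert(message):
    """Send the payload to the webhook (fires the Slack notification)."""
    payload = build_alert(message).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL, data=payload,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

print(build_alert("UPS switched to battery in quadrant 2"))
```

Because the entry point is just an HTTP POST, almost any monitoring script or service can be wired in with a few lines, which is a big part of why this style of integration works so well.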

Final thoughts

Overall, Slack has had a tremendous impact on how we communicate at Data Cave. Not only has it allowed for better visibility and faster communication on our internal projects, but more importantly it has improved how efficiently these projects get completed. The ability for us to work efficiently as a team is why we are able to grow and stay on top of today’s data center trends, and Slack is one of the tools that helps us to do just that. I would highly recommend Slack for any organization that is looking to enhance its internal communications, and to see the benefits that come out of it!

 

