Big data has put strain on these facilities to support the increasing IT load requirements. This pressure manifests in two key ways:
- The sheer quantity of data is spurring growth in high-density facilities and heightening the need for impeccable data center management.
- Business operations and data center infrastructure management (DCIM) must be better aligned to get the most out of big data.
In other words, data center managers must improve their own infrastructure-based analytics capabilities, while also having a way to bridge the existing divide between the data center and the business. As challenging as those dual objectives may seem, they’re perfectly in line with what best-in-class DCIM solutions have been working toward for years now. To better understand this, read on for an explanation of how DCIM and big data go hand in hand.
Real-time facility analysis
To adjust infrastructure and data center operations to support big data storage and analysis, facility managers can benefit from real-time analysis of metrics that include:
- CPU usage
- Rack temperature, humidity and airflow
- Large-scale power usage
- Granular power usage at the device level
The ability to aggregate all of this data into a central dashboard, and then analyze it, takes a page directly out of big data’s book. The idea is that facility metrics have now been digitized into a data format, so the next move is to centralize them and make sense of them in real time. This is why DCIM was created in the first place, and ultimately, why its use is so critical to big data strategies. Up-to-the-second analysis of data center metrics doesn’t just ensure the facility’s health in the moment – it also enables precise and accurate data center capacity planning.
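To make the aggregation idea concrete, here is a minimal sketch of rolling per-rack readings up into facility-wide dashboard figures. The `RackReading` structure, field names, and the 27 °C threshold are illustrative assumptions, not the schema of any particular DCIM product.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical reading structure; a real DCIM tool exposes far richer telemetry.
@dataclass
class RackReading:
    rack_id: str
    cpu_pct: float       # CPU usage
    temp_c: float        # rack temperature
    humidity_pct: float  # rack humidity
    power_w: float       # granular, device-level power draw

def dashboard_summary(readings, temp_limit_c=27.0):
    """Aggregate per-rack readings into central-dashboard figures."""
    return {
        "avg_cpu_pct": mean(r.cpu_pct for r in readings),
        "total_power_w": sum(r.power_w for r in readings),
        # Racks running hotter than the assumed limit get flagged for action.
        "hot_racks": [r.rack_id for r in readings if r.temp_c > temp_limit_c],
    }

readings = [
    RackReading("A1", 62.0, 24.5, 45.0, 4200.0),
    RackReading("A2", 88.0, 28.1, 44.0, 5100.0),
]
summary = dashboard_summary(readings)
```

Run every second against live sensor feeds, a summary like this is what turns raw facility telemetry into the up-to-the-second view described above.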
For instance, DCIM analytics can predict how a CPU increase in a certain set of racks will affect temperature, both in that particular rack and row, and also facility-wide. Taken a step further, DCIM can even be used to understand relationships between multiple facilities, so as to distribute load more equitably between data centers in different geographies.
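As a toy illustration of that kind of prediction, the sketch below fits a straight line to historical (CPU load, inlet temperature) pairs and extrapolates the temperature at a higher load. The numbers and the linear model are illustrative assumptions; real DCIM analytics use far more sophisticated thermal models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Illustrative history for one row of racks: average CPU % vs. inlet temp (degrees C).
cpu_hist = [40.0, 50.0, 60.0, 70.0]
temp_hist = [22.0, 23.0, 24.0, 25.0]

slope, intercept = fit_line(cpu_hist, temp_hist)

# Projected inlet temperature if CPU load rises to 85%.
projected_temp = slope * 85.0 + intercept
```

The same pattern generalizes: fit the relationship between a driver metric and a constrained resource, then ask "what happens if the driver grows?" – which is the essence of capacity planning.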
The fact is, big data strategies are entirely dependent upon data center uptime. A failure in the data center will precipitate a failure in your big data-related business operations.
DCIM reporting: Bridging the data center and the business
That brings us to the issue of silos. For a big data strategy to be successful, data centers and businesses can no longer be looked at as separate entities. The health and capabilities of one will ultimately affect the other. After all, both the data and the analytics engine live in the data center. Thanks to cloud computing, that’s all too easy to forget when interfacing with them from behind a desk hundreds of miles away. It’s completely out of sight and out of mind – that is, until something goes wrong.
“Data center and business operations are more intertwined than ever.”
This is where the need to bridge the divide between the data center and the business becomes clear. Perhaps Yevgeniy Sverdlik, editor-in-chief at Data Center Knowledge, put it best:
“As companies deploy more sophisticated analytics systems to help with operations (and beyond), DCIM can provide a set of operational data about infrastructure that can be very useful to those systems,” Sverdlik wrote.
Actually contextualizing that “operational data about infrastructure” requires detailed reporting that can be shared with the right business managers. Again, this reporting capability is something that best-in-class DCIM solutions should feature. With these reports, the data center’s role in a big data strategy becomes much clearer, as does the extent to which one impacts the other. The result is the ability to create a more unified and agile big data strategy.
It really boils down to this idea that data center and business operations are more intertwined than ever. To understand the intricacies of their relationship through DCIM is to create new ROI opportunities, whether that’s through big data, or through another initiative with the potential to boost the bottom line.