The American Society of Heating, Refrigerating and Air-Conditioning Engineers recommends that air temperatures in IT environments stay between 64.4 and 80.6 degrees Fahrenheit (18 to 27 degrees Celsius). Those numbers have crept up over the years, with data center employees and service technicians increasingly eschewing jackets for short sleeves and CIOs welcoming the effect on their electric bills.
Still, the safe operating temperatures for IT equipment are a far cry from what’s common in many telecommunications deployments. Traditional telecom equipment must function in environments prone to extremes, where temperatures in excess of 100°F or far below freezing are not uncommon. Telco equipment is typically rated to withstand temperatures of 131°F (55°C) or higher.
Telecom environments also lack the intense heat-generating servers at the heart of the data center, so cooling is focused more on protection from outside heat sources than on rejecting heat from the equipment. Shelters and enclosures are the tools of the trade, not the precision cooling systems used in the data center. With the advent of 5G wireless communications, however, telcos’ thermal profile is changing, and the toolbox is expanding.
Global mobile data traffic is expected to increase fourfold by 2025, with network energy consumption trending up by 150 to 170 percent, all due to the widespread implementation of 5G. 451 Research calls 5G “the most impactful and difficult network upgrade ever faced by the telecom industry,” with good reason. 5G isn’t the latest refinement of the traditional cellular network; it’s something entirely new.
5G applications require low-latency computing, which means IT systems are being introduced into the telecom space to be closer to the consumer. Suddenly, the sensitive electronics in those IT servers, designed to operate at no more than 80.6°F, are being deployed en masse to new and existing sites across the telecom network. That includes exchange sites at the core and traditional access spaces and cell sites, where thermal management was often an afterthought.
The transformation of those exchange sites from what used to be called central offices to what now can be characterized more accurately as edge data centers is well underway. The effect on the thermal profile is profound. These facilities now house racks of servers and associated IT equipment, all of it producing hot air that must be managed. But even that oversimplifies the emerging architectures in these exchange sites. In most cases, the equipment footprint is shrinking – those racks typically take up less space than all the switching equipment housed in an old central office – and the unused space factors into the cooling strategy almost as much as the occupied space does.
In many cases, exchange sites have enough cooling capacity in terms of BTUs in their basic HVAC systems, but that cool air is being blown into a large, mostly empty space and not reaching the IT equipment it needs to cool. Operators could blow more, colder air at the problem, but that’s massively inefficient and, when repeated across hundreds or even thousands of sites in a network, it makes energy costs (and carbon emissions) unsustainable.
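The mismatch described above can be made concrete with the standard HVAC sensible-heat relationship for air at sea level, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). The sketch below is illustrative only; the function name and the site numbers are assumptions, not figures from the article.

```python
# Illustrative sketch: how airflow requirements explode when cool air
# bypasses the IT equipment (effective delta-T collapses).
# Sensible-heat formula for standard air: Q = 1.08 * CFM * delta_T.

def required_cfm(heat_load_btu_hr: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove a sensible heat load at a given
    supply-to-return temperature difference."""
    return heat_load_btu_hr / (1.08 * delta_t_f)

# Hypothetical site: four 5 kW racks = 20 kW of IT load.
# 1 kW of electrical load produces ~3,412 BTU/hr of heat.
heat_load = 20 * 3412  # 68,240 BTU/hr

# With a healthy 20 deg F delta-T (air actually reaches the racks),
# the load needs ~3,160 CFM. If bypass air through the empty room cuts
# the effective delta-T to 5 deg F, the same load demands ~12,640 CFM,
# roughly four times the fan work.
print(round(required_cfm(heat_load, 20)))  # 3159
print(round(required_cfm(heat_load, 5)))   # 12637
```

This is why containment and in-row approaches pay off: they restore the delta-T rather than adding raw capacity.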
Instead, data center cooling solutions are making their way into exchange sites. These can be in-row cooling solutions, rear-door cooling systems or fully integrated systems that can use contained hot or cold aisles to maximize cooling efficiency. Integrated systems are a popular choice, enabling not just efficient cooling, but effective use of space and easy, modular capacity increases. They provide other benefits as well, such as integrated fire suppression.
Because these facilities are larger than currently needed for these IT systems, rack densities typically are relatively low. For that reason, the high-density cooling solutions becoming more prevalent in the data center – including liquid cooling, which is designed for racks at 15 kilowatts and above – are not yet a significant factor in today’s exchange sites. As network computing demands increase and more equipment is packed into these spaces, that is likely to change. Already, expectations for cooling efficiency are moving past industry norms of 95 to 96 percent and into the 97 to 98 percent range, and nothing is more efficient than liquid cooling.
5G is pushing IT equipment into the access space as well, although these sites typically rely on a single server to handle the necessary computing. That puts a premium on small enclosures and cabinets that typically have built-in cooling capabilities. In mild environments, with clean outside air, those cabinets may use that outside air for cooling. Elsewhere, the cabinets and cooling systems must be more robust, producing cool, dry, clean air for the server intake.
As 5G applications become more common and more sophisticated, the criticality of these micro-edge computing sites will increase. Thermal management will become increasingly important to ensuring the availability of these sites, as will remote management of those cooling systems. With 5G driving an inevitable spike in energy consumption, operators will seek out efficiency and cost savings wherever possible. Advanced thermal controls that turn cooling on or off depending on inlet temperatures offer significant savings opportunities when scaled up for the thousands of access sites in a typical network.
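The inlet-temperature control logic mentioned above can be sketched as a simple hysteresis (deadband) controller: cooling switches on only when the inlet crosses an upper threshold and off again below a lower one, so it never short-cycles. The class name and thresholds here are illustrative assumptions; a real controller would follow vendor setpoints and ASHRAE guidance.

```python
# Minimal sketch of inlet-temperature-based cooling control with a
# deadband. Thresholds are hypothetical examples.

class InletTempController:
    def __init__(self, on_above_f: float = 80.0, off_below_f: float = 72.0):
        self.on_above_f = on_above_f    # start cooling at/above this temp
        self.off_below_f = off_below_f  # stop cooling at/below this temp
        self.cooling_on = False

    def update(self, inlet_temp_f: float) -> bool:
        """Return whether cooling should run for this inlet reading."""
        if inlet_temp_f >= self.on_above_f:
            self.cooling_on = True
        elif inlet_temp_f <= self.off_below_f:
            self.cooling_on = False
        # Between the thresholds, hold the previous state (hysteresis),
        # which prevents rapid on/off cycling of the cooling unit.
        return self.cooling_on

ctrl = InletTempController()
readings = [70, 76, 81, 78, 73, 71]
print([ctrl.update(t) for t in readings])
# [False, False, True, True, True, False]
```

Scaled across thousands of access sites with remote monitoring, logic like this is where the efficiency savings the article describes would come from.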
5G requires an influx of computing equipment across the network – equipment that both produces heat and that is far more sensitive to heat than traditional telecom gear. Operators are responding with new approaches to thermal management at their sites, including data center-like cooling strategies in their exchange sites and more sophisticated cooling systems and management in the access space.
The urgency here is twofold. First, a failure to adequately cool these systems will result in network outages, and second, failure to do it efficiently will add to already skyrocketing electric bills.
David Michlovic is Americas offering director at Vertiv and has been with the organization more than 15 years. In this role, he supports Vertiv’s telecommunications business and its DC power portfolio. At Vertiv, formerly Emerson Network Power, Michlovic has filled several roles with increasing responsibilities in product design and engineering, followed by product management ownership for a variety of product lines. His responsibilities cover the DC power and outside plant portfolio for Vertiv Americas. Michlovic received a bachelor’s degree in mechanical engineering from Ohio University and an MBA from Baldwin Wallace University.