The data center as we have come to know it has changed. With bandwidth demands driven by trends such as wearable technology and big data, we are seeing a shift in how organizations view, plan and build their data centers. Many organizations are moving new data center capacity into leased co-location facilities and the public cloud. When organizations do build their own data centers, the facilities need to be more efficient and achieve higher density.

With changes taking place in how technologies are used and valued within the enterprise, there are many shifts I believe will happen in the near future. Here is a summary of the key trends influencing data centers in 2016:

Shift from storage to compute and on-demand access

Previous generations of data centers focused primarily on storing information and on disaster recovery: geographic diversity was required for backup, and data was retrieved only periodically. Now the focus has shifted to analyzing and processing data for on-demand access. The rise of mobility and wearable technology creates latency requirements that have never been seen before.

Consumers and business users alike expect on-demand access to data from the cloud, with the same user experience as when the data resides on the device. The result is data centers that are far more distributed, and for most businesses the most efficient way to deliver this is cloud computing.

Where the growth is happening

As stated earlier, data centers will need to be more efficient and achieve higher density. From a service provider and co-location perspective, there will be significant growth in distributed computing. The largest wave of growth will be in point of presence (PoP) data centers, which support content delivery networks for service providers and enable network virtualization and software-defined networking. Combined growth in PoP and co-location facilities will increase the need for interconnection, or peering, between service providers.

Bringing compute power to the edge

A big expansion in the coming year will be the idea of moving computing power to the edge of the network. Service providers want to push as many computing resources to the edge as possible, reducing latency by cutting the number of “hops” data must take to reach the end user.
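
To see why hop count matters, here is a back-of-envelope latency model. It is purely illustrative: the fiber propagation figure (roughly 5 microseconds per kilometer of glass) is standard physics, but the per-hop delay and the distances are assumptions, not measurements.

```python
FIBER_DELAY_US_PER_KM = 5.0   # ~200,000 km/s light propagation in fiber
PER_HOP_DELAY_US = 200.0      # assumed per-router processing/queuing delay

def round_trip_latency_ms(distance_km: float, hops: int) -> float:
    """Estimate round-trip time for a simple request/response pair."""
    one_way_us = distance_km * FIBER_DELAY_US_PER_KM + hops * PER_HOP_DELAY_US
    return 2 * one_way_us / 1000.0  # microseconds -> milliseconds

# Distant centralized data center vs. an edge PoP near the user:
print(round_trip_latency_ms(distance_km=2000, hops=12))  # ~24.8 ms
print(round_trip_latency_ms(distance_km=50, hops=3))     # ~1.7 ms
```

Even with rough numbers, serving a request from a nearby edge PoP instead of a distant centralized facility cuts round-trip time by an order of magnitude.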

The emphasis is shifting from simply storing data to running algorithms that manipulate and analyze it, and as we use data this way we need to reduce latency. Ten years ago, in the era of desktop programs, we would pull a program up on our laptops, wait for it to load, and work in it for long stretches a few times a day. Now we have shifted to an app-driven world where we look at data hundreds of times a day in short bursts. Users are starting to expect data to be predictive, with information served up instantly from the cloud.

To give you an example, look at how social networks launched in the early 2000s. One factor that limited growth during the first few years was the need to keep adding servers. Today, a new social network can have instant access to nearly unlimited compute resources on every continent through cloud services. This provides instant scalability, especially for start-ups and tech companies, so naturally small and medium-sized businesses are moving in the same direction.
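
As a rough sketch of that elasticity, the snippet below implements a simple proportional scale-out rule of the kind cloud autoscalers apply automatically; the function, the 60 percent utilization target and the traffic numbers are all hypothetical.

```python
import math

def desired_instances(current: int, avg_cpu_pct: float,
                      target_pct: float = 60.0) -> int:
    """Size the fleet proportionally toward a target CPU utilization."""
    return max(1, math.ceil(current * avg_cpu_pct / target_pct))

# A traffic spike: 10 instances at 90% average CPU -> scale out to 15.
print(desired_instances(current=10, avg_cpu_pct=90.0))  # 15
# Traffic falls off overnight: 15 instances at 20% CPU -> scale in to 5.
print(desired_instances(current=15, avg_cpu_pct=20.0))  # 5
```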

One aspect playing into the growth of computing power at the edge of the network is the modular data center. We are seeing hyperscale operators and service providers deploy modular units at the base of cell sites to bring compute as close to the consumer’s point of use as possible: an appropriately sized data center in a geographically strategic location that cuts down latency.

How DCIM and ITSM play

As you build out these data centers, it becomes a game of how efficient you can make the facilities. You can’t afford to have an inefficient data center: you need to know exactly where everything is, and how it is being used and powered. Any inefficiency in the data center can be costly, and data center infrastructure management (DCIM) will be paramount in keeping these facilities running smoothly.
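
One concrete number a DCIM platform tracks is power usage effectiveness (PUE): total facility power divided by the power delivered to the IT equipment, where 1.0 would be a perfectly efficient facility. The sketch below shows the arithmetic with made-up meter readings.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1,200 kW at the utility feed, 750 kW at the IT
# load; everything above 1.0 is cooling, power conversion and lighting.
print(f"PUE: {pue(1200, 750):.2f}")  # PUE: 1.60
```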

The hype around hyperscale

The scale of demand from consumers shopping online has caused several high-profile outages. These events will drive organizations to move some of their operations to the cloud in hyperscale data centers, giving them the ability to flex into cloud capacity when their own infrastructure becomes stressed. Some phenomena happening within the hyperscale arena include:

  • Uninterrupted, low-latency streaming of music, video and information is spurring the growth of hyperscale. Users want streamed information without delay.
  • More organizations are moving compute services to the edge of the network, where data can be replicated in multiple places to serve static content without added latency.

This past year we heard the trumpeting of wearable technology, and we watched a few major technology companies take great strides toward abandoning their own data centers and moving completely into the cloud. It will be exciting to see how the data center grows and changes over the next few years, starting in 2016. I foresee more businesses expanding via cloud and co-location, while computing power simultaneously spreads throughout the network and around the world.

John Schmidt is the leader of CommScope’s global data center solutions group.