Top Software-Defined Data Center (SDDC) Trends

Hardware has always defined the data center. You looked into a large room and saw aisle after aisle of servers, storage, and networking equipment, with huge cooling systems and power-management hardware, such as switches and batteries, along the walls.

For resiliency and disaster recovery (DR), the solution was simple: build a mirror image of the data center in another location by buying a second set of the same equipment and installing it there. Of course, there was also a lot of software. But hardware defined the existence of the data center.

But that could change as the software-defined movement gains momentum. The basic idea is to decouple the software from the underlying hardware. Instead of one vendor building a storage area network (SAN) array with proprietary software that runs only on that system, or another vendor building a switch with secret software inside, the software can run on any hardware. There are now so many software-defined elements that people talk about entire software-defined data centers (SDDCs).

Key trends in the software-defined data center market include:

Software Defined Flash

We had software-defined storage (SDS), software-defined compute, and software-defined networking (SDN). And now we have software-defined flash.

To operate efficiently at scale, cloud storage providers and large data centers need more from flash than devices built around legacy hard disk drive (HDD) protocols can deliver. The Linux Foundation's Software-Enabled Flash Community Project has therefore developed a software-defined flash API. Developers can use it to customize flash storage for specific data center, application, and workload requirements.
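To make the idea concrete, here is a hypothetical sketch of what programming against such an API could look like. All of the names here (FlashDomain, allocate, the policy fields) are invented for illustration; they are not the actual Software-Enabled Flash API.

```go
// Hypothetical sketch of a software-defined flash API. None of these
// names come from the real Software-Enabled Flash project; they only
// illustrate the idea of workload-specific placement policies that
// HDD-era block protocols cannot express.
package main

import (
	"fmt"
	"log"
)

// FlashDomain models an isolated slice of a flash device with its own
// placement and garbage-collection policy (an invented abstraction).
type FlashDomain struct {
	Name      string
	Dies      int    // physical dies reserved for this domain
	Isolation string // e.g., "strict" to avoid noisy neighbors
	GCPolicy  string // host-controlled vs. device-controlled GC
}

// allocate stands in for the driver call that would reserve the hardware.
func allocate(d FlashDomain) error {
	if d.Dies <= 0 {
		return fmt.Errorf("domain %q: need at least one die", d.Name)
	}
	return nil
}

func main() {
	// A latency-sensitive workload gets dedicated dies and host-driven
	// garbage collection, tuned to the application rather than the drive.
	domain := FlashDomain{
		Name:      "low-latency-logs",
		Dies:      4,
		Isolation: "strict",
		GCPolicy:  "host-scheduled",
	}
	if err := allocate(domain); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allocated domain %q on %d dies\n", domain.Name, domain.Dies)
}
```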

Kioxia, for example, has introduced the software-defined technology along with sample hardware based on PCIe and NVMe. The technology decouples flash storage from legacy hard drive protocols, allowing flash to realize its full potential as a storage medium.

“Software-defined flash technology fundamentally redefines the relationship between host and solid-state storage,” said Eric Ries, SVP, Memory Storage Strategy Division, Kioxia America.

Orchestration

The magic begins to happen once you decouple physical servers from the software they host, storage arrays from the many types of software they can run, and network software from the switches, routers, and other underlying network equipment.

But complexity emerges, too. What’s needed is a way to orchestrate the many elements, so that the “symphony” of the data center plays in the same key, keeps time, and follows the conductor’s lead.

“With the increasing complexity and scale of data centers, the industry must move beyond automating the configuration of infrastructure and workloads to a new paradigm based on orchestration,” said Rick Taylor, CTO, Ori.

“We need to think about the desired state of services and leverage intelligent software to plan and deploy instances and their connectivity.”
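Taylor’s desired-state paradigm is easiest to see in code. Below is a minimal sketch of the declarative reconciliation loop that orchestrators are built around: you declare what should exist, and a control loop converges reality toward it. The types and service names are invented for illustration, not any specific product’s API.

```go
// Minimal sketch of desired-state orchestration: compare what was
// declared against what is observed, and emit the actions that close
// the gap. Names are illustrative only.
package main

import "fmt"

// DesiredState declares the service, not the steps to build it.
type DesiredState struct {
	Service  string
	Replicas int
}

// Observed is what the infrastructure actually reports.
type Observed struct {
	Replicas int
}

// reconcile computes the actions needed to reach the desired state.
func reconcile(want DesiredState, got Observed) []string {
	var actions []string
	for i := got.Replicas; i < want.Replicas; i++ {
		actions = append(actions, fmt.Sprintf("start %s instance %d", want.Service, i))
	}
	for i := got.Replicas; i > want.Replicas; i-- {
		actions = append(actions, fmt.Sprintf("stop %s instance %d", want.Service, i-1))
	}
	return actions
}

func main() {
	want := DesiredState{Service: "api-gateway", Replicas: 5}
	got := Observed{Replicas: 3}
	for _, a := range reconcile(want, got) {
		fmt.Println(a) // an orchestrator would execute these, then re-observe
	}
}
```

An orchestrator runs this comparison continuously, re-observing after each action, so the infrastructure converges on the declared state instead of following a fixed configuration script.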

Already there?

Many believe that the software-defined data center (SDDC) is still gradually emerging.

But Ugur Tigli, CTO at MinIO, believes that we are already there, thanks to containerization and especially to Kubernetes.

“The modern data center is already software-defined, and the colossal success of Kubernetes only guarantees that it will remain so,” Tigli said.

“With software-defined infrastructure, you have the ability to dynamically provision, operate, and maintain applications and services. Once the infrastructure is virtualized and software-defined, automation becomes a force multiplier and the only way to achieve elasticity and scalability.”
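The elasticity Tigli describes becomes simple arithmetic once infrastructure is software-defined. The sketch below uses the proportional scaling rule popularized by Kubernetes’ Horizontal Pod Autoscaler, replicas = ceil(current × observed load ÷ target load); the specific workloads and thresholds are illustrative, not from any particular product.

```go
// Sketch of elastic scaling as an automated decision rather than a
// hardware purchase. The proportional rule below mirrors the one used
// by Kubernetes' Horizontal Pod Autoscaler; values are illustrative.
package main

import (
	"fmt"
	"math"
)

// desiredReplicas scales the replica count in proportion to how far
// observed load is from the target: ceil(current * observed / target).
func desiredReplicas(current int, observed, target float64) int {
	n := int(math.Ceil(float64(current) * observed / target))
	if n < 1 {
		n = 1 // never scale to zero in this sketch
	}
	return n
}

func main() {
	// 4 replicas running at 90% CPU against a 60% target -> scale to 6.
	fmt.Println(desiredReplicas(4, 0.90, 0.60))
	// Load drops to 20% -> scale back down to 2.
	fmt.Println(desiredReplicas(6, 0.20, 0.60))
}
```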

Appliance addiction

Appliances have emerged over the past two decades to support a multitude of data center functions.

They are used for deduplication, compression, backup, and a host of other purposes. There are even massive appliances, like Oracle’s, that put all the compute, networking, and storage hardware in one box with Oracle software and databases, all tuned and optimized as the environment for that application or database.

But there is a problem. These appliances tend to go against the software-defined paradigm. They usually have proprietary software inside. Yet data centers, and IT in general, are riddled with them because they’ve worked so well.

“Legacy infrastructure providers face a major challenge: you can’t containerize an appliance,” Tigli said.

“Every device maker is frantically trying to separate their software from their hardware because the cloud-native data center is an extinction event for them.”

You’ll still need processors, networks, and drives, Tigli said, but everything else is software, and that software should run on anything.

Look at the cloud today: processor options include Intel, AMD, Nvidia, TPUs, and Graviton, to name a few. Even private clouds show considerable diversity, with commodity hardware from Supermicro, Dell Technologies, HPE, Seagate, and Western Digital offering different price, performance, and configuration options.

“The result is that we live in a software-defined and increasingly open world of data centers,” Tigli said.

“Only with open source software can the developer gain the freedom to understand the software in the context of heterogeneous hardware.”
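A small sketch makes Tigli’s hardware-agnostic argument concrete: when storage software talks to hardware only through a narrow interface, an NVMe drive, a virtual disk, or anything else becomes interchangeable. The types below are invented for illustration and are not MinIO’s internals.

```go
// Sketch of "everything else is software": the same storage logic runs
// against any backing hardware through a narrow interface. Names are
// illustrative only.
package main

import "fmt"

// BlockDevice is all the software needs to know about the hardware.
type BlockDevice interface {
	Write(offset int64, data []byte) error
	Name() string
}

// nvmeDrive would wrap a physical device; Write is stubbed here.
type nvmeDrive struct{ path string }

func (d nvmeDrive) Write(off int64, b []byte) error { return nil }
func (d nvmeDrive) Name() string                    { return "nvme:" + d.path }

// virtualDrive backs the same interface with plain memory.
type virtualDrive struct{ buf []byte }

func (d *virtualDrive) Write(off int64, b []byte) error {
	copy(d.buf[off:], b)
	return nil
}
func (d *virtualDrive) Name() string { return "virtual" }

// store is vendor-neutral: it cannot tell what hardware it writes to.
func store(dev BlockDevice, data []byte) error {
	fmt.Println("writing to", dev.Name())
	return dev.Write(0, data)
}

func main() {
	store(nvmeDrive{path: "/dev/nvme0n1"}, []byte("hello"))
	store(&virtualDrive{buf: make([]byte, 16)}, []byte("hello"))
}
```

The design choice is the point: nothing above the interface knows or cares whose hardware sits below it, which is exactly the decoupling the software-defined data center depends on.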