Infrastructure, in the context of IT servers, is organized server support. It refers to how the servers are physically, logically, and functionally grouped together, and it includes the tools (mostly vendor-provided, occasionally custom-built by the system administrator) for supporting them. The infrastructure works only if the component servers are correctly set up, their services are properly managed, and their operations are diligently monitored. In the IT world, an infrastructure necessarily involves many servers, and for this reason there must be tools to keep those servers running.
Because many types of servers exist, and thousands of vendors provide them, a great variety of tools is available. The categories of tools include application deployment and management, configuration and change management, cluster management, network administration, web systems management, system performance testing, user management, security control, patch and update management, storage management, backup/restore and archiving, disaster recovery, IT asset and inventory management, and license management. The list, although lengthy, is only a tiny fraction of the product categories vendors have placed on the market, but it illustrates how much is required to run a server infrastructure.
Knowing what tools are available is not enough. To choose the appropriate tools and use them effectively, the IT professional in charge of running the servers must thoroughly understand what the infrastructure elements are generally used for; the hardware and software within the infrastructure, and how they are configured; and the location of the infrastructure components. Even with this knowledge, however, the choice of tools may be affected by such constraints as difficulty in comparing products, justifying ROI, and budget limits.
The task of running servers, particularly numerous servers, is never easy, and working around such constraints to acquire the needed tools is something an administrator absolutely must do.
The idea of converged infrastructure revolves around forming a single optimized IT package by putting together several components to meet present-day business needs. According to HP, converged infrastructure meets these needs “by bringing storage, servers, networking, and management together – simply engineered to work as one.” The end result is interoperability of IT components using resource pools built on a common platform. Convergence embraces all target resources at once rather than piecemeal.
Network performance, supported by uniform applications and resources, is critical for making infrastructure convergence work; this is especially true for convergence implemented in a virtual environment. Virtualization, as the experts tell us, is the starting point of convergence. There is one interesting observation regarding the relationship between networking and convergence: although the network may be a target for convergence, it is at the same time the “connective tissue” that binds the physical resources to the central abstraction upon which virtualization is based.
However, successful convergence also depends on other factors, such as the continuing evolution of NaaS (Network as a Service, a cloud service) and SDN (software-defined networking).
Infrastructure convergence can be achieved at the network and cloud levels. In fact, the evolution of the cloud has helped make the idea of fully converged infrastructure a reality. The cloud creates the need for abstraction of certain IT components such as servers, storage systems and network connections into virtual resources. Users manage these virtual abstractions through APIs (application programming interfaces). Using the network, the APIs distribute resources from a pool of various hardware elements to applications.
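The pattern described above, applications drawing virtual resources from a shared hardware pool through an API, can be sketched in a few lines. This is a minimal illustration with hypothetical names (`ResourcePool`, `allocate`, `release`); it is not any vendor's actual API.

```python
# Hypothetical sketch: an API through which applications claim virtual
# resources abstracted from a pool of hardware elements.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """Abstracted hardware capacity: CPU cores and storage in GB."""
    cpu_cores: int
    storage_gb: int
    allocations: dict = field(default_factory=dict)

    def allocate(self, app: str, cores: int, gb: int) -> bool:
        """API call an application uses to claim virtual resources."""
        if cores <= self.cpu_cores and gb <= self.storage_gb:
            self.cpu_cores -= cores
            self.storage_gb -= gb
            self.allocations[app] = (cores, gb)
            return True
        return False  # pool cannot satisfy the request

    def release(self, app: str) -> None:
        """Return an application's resources to the pool."""
        cores, gb = self.allocations.pop(app)
        self.cpu_cores += cores
        self.storage_gb += gb


pool = ResourcePool(cpu_cores=64, storage_gb=2048)
pool.allocate("web-frontend", cores=8, gb=256)
pool.allocate("analytics", cores=16, gb=512)
print(pool.cpu_cores, pool.storage_gb)  # 40 1280
```

The point of the abstraction is that applications never address individual machines; they see only the pooled capacity the API exposes.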
NaaS is a concept founded on two network missions created by the cloud: first, to connect the items that make up the resource pool collectively called the cloud (application, compute, and storage elements); and second, to support the connectivity needed by applications.
SDN is a service that supports the needs of NaaS. Three approaches to SDN have been developed: the virtual overlay network (a.k.a. SDN overlay network), centrally controlled SDN, and non-centrally controlled SDN.
In simple terms, the virtual overlay network allows NaaS to substitute for a traditional VPN (virtual private network). It is also a multi-tenant model, with partitioned services for each user and application.
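The multi-tenant partitioning can be illustrated with a small sketch: each tenant's traffic is tagged with its own virtual network ID before crossing the shared fabric, so tenants share the physical network without seeing one another. The names here (`OverlayNetwork`, `encapsulate`, the ID numbering) are illustrative assumptions, not a real overlay implementation.

```python
# Illustrative sketch of tenant partitioning in a virtual overlay network.
class OverlayNetwork:
    def __init__(self):
        self.next_vni = 100   # next virtual network ID to hand out
        self.tenants = {}     # tenant name -> virtual network ID

    def add_tenant(self, name: str) -> int:
        """Give a new tenant its own isolated virtual network ID."""
        vni = self.next_vni
        self.next_vni += 1
        self.tenants[name] = vni
        return vni

    def encapsulate(self, tenant: str, payload: str) -> tuple:
        """Tag a packet with the tenant's ID before it crosses the shared fabric."""
        return (self.tenants[tenant], payload)


net = OverlayNetwork()
net.add_tenant("tenant-a")
net.add_tenant("tenant-b")
print(net.encapsulate("tenant-a", "packet"))  # (100, 'packet')
```

Because every packet carries its tenant's ID, the shared physical network can deliver traffic for all tenants while keeping their services partitioned.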
Centrally controlled SDN has a software controller use the OpenFlow protocol to manage network traffic by installing appropriate rules in every device. All aspects of traffic management and connectivity are directed by software.
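The centrally controlled model can be sketched as a single controller object that decides forwarding rules and pushes them to every switch it manages. The class and method names below (`Controller`, `Switch`, `install_rule`) are hypothetical stand-ins for the controller-to-device exchange that OpenFlow standardizes, not a real OpenFlow library.

```python
# Hypothetical sketch of centrally controlled SDN: the controller, not the
# devices, creates the traffic rules; each device merely installs what it
# is told.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}  # match -> action, like an OpenFlow flow table

    def install_rule(self, match: str, action: str) -> None:
        self.flow_table[match] = action


class Controller:
    """Central point of control for all traffic decisions."""
    def __init__(self):
        self.switches = []

    def register(self, switch: Switch) -> None:
        self.switches.append(switch)

    def set_policy(self, match: str, action: str) -> None:
        # One decision, pushed to every managed device.
        for sw in self.switches:
            sw.install_rule(match, action)


ctl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctl.register(s1)
ctl.register(s2)
ctl.set_policy(match="dst=10.0.0.5", action="forward:port2")
print(s1.flow_table)  # {'dst=10.0.0.5': 'forward:port2'}
```

The design choice this illustrates is the one the text names: every rule originates in one place, so connectivity is directed entirely by software.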
The non-centrally controlled SDN model seeks the benefits of OpenFlow-based SDN without the burden of a central control function directing all connections and traffic. Instead of focusing on changes in network technology, this model relies on APIs for software control. By building on current network practices and protocols, it can converge existing network devices into future SDNs.
Market observers have noted indications that a unified SDN encompassing the three models described above is already in the works.