
Virtualizing VMs in Data Center Switches



Source: ZDNet Network Channel, August 30, 2012


  With IT demands rising rapidly, and resource-strained organizations challenged to meet them, a transformation within IT is taking place. Virtualization and cloud computing are significantly changing how computing resources and services are managed and delivered. Virtual machines have increased the efficiency of physical computing resources, while reducing IT system operation and maintenance (O&M) costs. In addition, they enable the dynamic migration of computing resources, enhancing system reliability, flexibility, and scalability.

  Virtual machines can be used to this effect across the data center, including on many network devices that function as critical network elements. For example, the next-generation Huawei Virtual System (VS) enables virtualization in data center switches.

  What is network virtualization?
 

  Simply speaking, virtualization turns IT resources into a utility, much like household utilities such as electricity, allowing users to obtain these resources on demand. This makes it a critical cloud computing technology. By abstracting resources from the physical layer, virtualization makes them elastic and enables sharing and isolation of these resources on the cloud. According to International Data Corporation (IDC), after virtualization is introduced into cloud computing, resource use efficiency increases from 15% to 80%, and IT resource O&M costs fall by a factor of tens.
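
  To make the efficiency figures concrete, a rough consolidation estimate can be worked through in a few lines of Python (the 15% and 80% utilization values come from the IDC figure above; the workload and per-server capacity numbers are purely illustrative assumptions):

    # Rough consolidation estimate: how many physical servers carry the same
    # aggregate workload at 15% vs. 80% average utilization. The workload and
    # per-server capacity values below are illustrative only.

    workload_units = 1200          # total work to be served (arbitrary units)
    capacity_per_server = 100      # peak capacity of one physical server

    def servers_needed(utilization: float) -> int:
        """Servers required when each one runs at this average utilization."""
        usable_per_server = int(capacity_per_server * utilization)
        return -(-workload_units // usable_per_server)   # ceiling division

    before = servers_needed(0.15)   # pre-virtualization utilization
    after = servers_needed(0.80)    # post-virtualization utilization
    print(f"Servers at 15% utilization: {before}")        # 80
    print(f"Servers at 80% utilization: {after}")         # 15
    print(f"Reduction factor: {before / after:.1f}x")     # ~5.3x

  At these illustrative numbers, raising average utilization from 15% to 80% cuts the number of physical servers needed by roughly a factor of five.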


 

  Virtualization in the cloud computing era consists of computing virtualization, storage virtualization, and network virtualization. Similar to server virtualization and desktop virtualization, network virtualization allows users to obtain network resources on demand. By enabling the deployment and management of network resources as logical resources, rather than physical resources, network virtualization brings great advantages for cloud network users, including increased flexibility and lower costs.

  There are two types of network virtualization: N-to-1 and 1-to-N. In N-to-1 virtualization, multiple physical network resources are virtualized as a single logical resource, as with stacking and cluster technologies. In 1-to-N virtualization, one physical resource is virtualized into multiple logical resources.

  Typical examples of 1-to-N virtualization are channel virtualization and service virtualization. Channel virtualization has been widely used in traditional networks: logical channels are provided over the network so that user traffic can be isolated, controlled, and processed using technologies such as VPN, VLAN, and QinQ. With service virtualization, multi-instance services are logically isolated using mechanisms such as multi-process MSTP or virtual firewalls.
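
  The two directions can also be sketched as a simple mapping exercise (a minimal illustration: the device names, counts, and VLAN ranges below are invented, and real stacking, cluster, VPN, or VS features are configured on the devices themselves rather than in Python):

    # Illustrative model of the two virtualization directions described above.
    # Device names are hypothetical.

    # N-to-1: several physical switches are presented as one logical switch
    # (e.g., stacking or cluster technologies).
    stack_members = ["switch-A", "switch-B", "switch-C"]
    logical_switch = {"name": "core-stack-1", "members": stack_members}

    # 1-to-N: one physical device is carved into several logical resources
    # (e.g., VLAN/VPN channels, virtual firewalls, or virtual systems).
    physical_switch = "dc-core-1"
    logical_contexts = {
        "vs1": {"device": physical_switch, "vlans": range(100, 200)},
        "vs2": {"device": physical_switch, "vlans": range(200, 300)},
    }

    print(f"N-to-1: {len(stack_members)} devices -> 1 logical "
          f"({logical_switch['name']})")
    print(f"1-to-N: 1 device ({physical_switch}) -> "
          f"{len(logical_contexts)} logical contexts")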

  Channel virtualization and service virtualization are partial virtualization technologies that apply only to certain scenarios. In many cases, network administrators have to integrate multiple virtualization technologies, which complicates network deployment and O&M. To simplify virtualization, a system-level technology, i.e., network device virtualization, is preferable. This technology virtualizes the entire network device rather than only certain services or channels.

  Network virtualization addresses market demands
 

  The continuous expansion of information and communications technology (ICT) networks, particularly data center networks, has enriched network services but also complicated network management. ICT networks place high demands on attributes such as service isolation, security, and reliability. With network hardware capabilities greatly improved, multi-chassis, cluster, and distributed routing and switching systems are developing rapidly, and the service processing capabilities of single physical network devices continue to reach unprecedented levels. But how can organizations effectively utilize this processing capacity to meet current service requirements and implement seamless network migration?

  The following are some key concerns that customers face in answering this question:

  Contradiction Between High Device Investment Costs and Low Device Resource Use Efficiency

  The rapid development of data centers and the expansion of ICT infrastructure have driven up the investment required: maintenance costs have increased considerably, the number of network devices keeps growing, network investment costs are surging, and O&M costs, device power consumption, and equipment-room space needs continue to rise.

  Meanwhile, customers are also generally selecting network devices with higher capacities than services actually require in anticipation of potential spikes in service demand. As a result, the workload of current network devices is inevitably imbalanced, and in some cases the use efficiency of these devices is quite low.

  Contradiction Between Centralized Multi-User Processing on Network Devices and Simplified Network Management, Isolation, and O&M

  The large-scale expansion and centralized evolution of data centers has spurred customers to integrate services from various internal and external user clusters in different departments. These services are processed on data center networks in a centralized manner, and services from various user groups are often processed on the same network device. Such centralized processing helps to streamline network management.

  However, because the service security, performance, and reliability needs of these user clusters can differ significantly, each must have strong management and isolation capabilities, and each needs to be able to deploy, manage, and maintain its own services independently of the others. As a result, network management personnel are challenged to effectively manage and isolate user groups while also reducing operating expenses (OPEX).

  Contradiction Between Centralized Multi-Service Processing on Network Devices and Reliable and Secure Service Isolation
 
  The development of next-generation data centers brings new network technologies, such as Transparent Interconnection of Lots of Links (TRILL), MAC-in-IP, Fibre Channel over Ethernet (FCoE), and various inter-data center connection technologies. Customers require that the services processed on these networks be increasingly diverse, so there is demand for broader processing capabilities and services on data center networks.

  Next-generation data centers urgently need to allow network devices to independently process these services using various technologies. Critical services of customers are increasingly being migrated to cloud data centers, so next-generation data centers will require greater reliability and security of network devices than traditional data centers.

  These increased market demands on service isolation, security, and reliability are matched by the capabilities of virtual machines. After a virtual machine is introduced in a data center switch, multiple virtualized devices can be deployed on a physical device. These virtualized devices manage various user groups and process various services. Accordingly, device resource use efficiency is significantly increased.

  Architecting for abstraction, isolation, and encapsulation
 

  A virtual machine in a data center switch removes barriers between physical devices, changing physical device resources into logical and manageable resources. These logical resources run transparently on a physical device platform, enabling isolation and on-demand distribution of resources.

  A key feature of Huawei’s Cloud Fabric Data Center solution, Huawei VS provides the technical architecture for network device virtualization, dividing a physical device into multiple logical or virtual systems. Each VS is a virtual machine on a network device and can be independently configured, managed, and maintained. In addition, each VS is isolated from other VSs, running and processing network services independently. Data center networks process various services and serve various user groups using VSs on physical devices, which enables the following:

• Improvements in service isolation, network reliability and security,
• Increase in device use efficiency,
• Reduction in user investment,
• Isolation between and management of user groups,
• Simplification of network O&M.


  To put virtualization technology into effect, devices must be abstracted, isolated, and encapsulated. The VS architecture is built on the following mechanisms:

• Abstraction

Software and applications are abstracted from physical devices into multiple virtual machines. Each virtual machine has its own independent, logical control and service plane, forwarding plane, and management plane. Hardware system resources, including ports, boards, memory, and central processing unit (CPU) resources, are abstracted into standardized virtual hardware to meet user requirements.

• Isolation

Process-level isolation is implemented among multiple virtual machines that run on the same physical device. Abstracted virtual hardware is managed as a virtual machine. Moreover, VSs do not affect one another.

• Encapsulation

Each virtual machine is encapsulated so that its virtual context is independent of any specific physical device. The full-service, distributed capabilities and the fine-grained, multi-process mechanism of Huawei VRPv8 are used to build system-level dynamic migration capabilities, which enable flexible service deployment and improve virtual machine reliability as well as device use efficiency.
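
  These three mechanisms can be pictured with a small, simplified model. The class names, resource fields, and numbers below are assumptions made only for illustration; the real behavior lives inside the switch software and is not exposed as a Python API. The 16-VS cap mirrors the per-device limit cited later in this article.

    # Toy model of abstraction, isolation, and encapsulation in a device-level
    # virtual system. All identifiers and numbers here are illustrative.

    class VirtualSystem:
        """Encapsulated VS: carries its own ports, quotas, and configuration."""
        def __init__(self, name, ports, vlan_quota):
            self.name = name
            self.ports = list(ports)      # abstracted, exclusively owned ports
            self.vlan_quota = vlan_quota  # per-VS share of a pooled resource
            self.config = {}              # isolated configuration namespace

    class PhysicalSwitch:
        MAX_VS = 16  # assumed upper bound on concurrent VSs per device

        def __init__(self, ports):
            self.free_ports = set(ports)
            self.virtual_systems = {}

        def create_vs(self, name, ports, vlan_quota):
            """Abstract a slice of hardware and hand it to a new, isolated VS."""
            if len(self.virtual_systems) >= self.MAX_VS:
                raise RuntimeError("VS limit reached")
            if not set(ports) <= self.free_ports:
                raise ValueError("port already assigned to another VS")
            self.free_ports -= set(ports)  # isolation: ports are not shared
            vs = VirtualSystem(name, ports, vlan_quota)
            self.virtual_systems[name] = vs
            return vs

    switch = PhysicalSwitch(ports=[f"10GE1/0/{i}" for i in range(1, 9)])
    vs1 = switch.create_vs("vs1", ["10GE1/0/1", "10GE1/0/2"], vlan_quota=1024)
    vs2 = switch.create_vs("vs2", ["10GE1/0/3"], vlan_quota=512)
    print(vs1.name, vs1.ports, vs2.name, vs2.ports)
    print("unassigned ports:", sorted(switch.free_ports))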

  Ensuring isolation, security, and reliability through software architecture
 

  Huawei VS uses a virtual, fine-grained, elastic, and distributed architecture. The entire VS is constructed on the full-service, distributed middleware of Huawei VRPv8. Much like a hypervisor managing server virtual machines, the VS control components uniformly schedule and manage multiple VSs. The control components virtualize the control and service plane, data plane, and management plane so that each VS can independently deploy services, upload configuration files, and control network management.

  Furthermore, the control components enable the VS to provide physical device capabilities. The VS also uses the full-service and distributed capabilities to implement fine-grained and distributed deployment of services. For example, various VS service modules can be distributed on different boards, which substantially increases hardware resource use efficiency.


  The virtual control and service plane runs network control protocols and processes user services. Both network reliability and secure isolation are critical here. Each VS can run in separate processes while still providing fine-grained process control. Inter-process isolation and exclusive virtual memory space prevent control protocols and services from affecting each other, considerably strengthening VS service reliability and secure isolation. The fine-grained process control mechanism also sharply reduces the overhead of each VS and allows a physical device to run up to 16 VSs simultaneously.
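
  The fault-containment effect of per-VS processes can be imitated, loosely, with ordinary operating-system processes. This is only an analogy: the control-plane function below is fictional, and the switch's VRPv8 software manages its own processes internally.

    # Analogy for process-level isolation: each "VS control plane" runs in its
    # own OS process with its own memory, so one failing does not take down
    # the others. The control-plane behavior here is entirely made up.
    import multiprocessing as mp

    def vs_control_plane(name, should_fail):
        if should_fail:
            raise RuntimeError(f"{name}: protocol process failed")
        print(f"{name}: running routing and service protocols normally")

    if __name__ == "__main__":
        procs = [
            mp.Process(target=vs_control_plane, args=("vs1", False)),
            mp.Process(target=vs_control_plane, args=("vs2", True)),   # faulty VS
            mp.Process(target=vs_control_plane, args=("vs3", False)),
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # Only vs2's process exits with an error; vs1 and vs3 are unaffected.
        print("exit codes:", [p.exitcode for p in procs])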

  The virtual forwarding plane uses independent forwarding environments and port resources. Data traffic of each VS is separated to ensure service isolation and security.

  The virtual management plane sets an independent management domain for each VS. This plane ensures service isolation in user, log, and alarm management and file configuration. Each VS is able to access only its own management information, therefore ensuring the independent management capability of each VS.

  System resource sharing boosts efficiency
 

  Physical device hardware system resources, including ports, boards, memory, and CPU resources, can be virtualized into multiple VSs, allowing each VS to have independent hardware resources. For example, when a port is designated to a specified VS, the VS occupies the port exclusively. Such virtualization ensures isolation among VSs and simplifies VS migration in devices.


  To ensure system resource use efficiency, certain system resources can be shared, allowing VSs to use them on demand. For example:

• Multiple VSs can be flexibly deployed so that they share the same MPUs and line cards.
• IPv4 and IPv6 route tables as well as VLAN and VRF resources can be shared by multiple VSs. Each VS's specifications are set to ensure appropriate distribution and use of system resources.
• VLAN IDs of different VSs can overlap.
• Two VSs can share a physical port using logical port isolation, which saves on physical links and networking costs.
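
  The exclusive-versus-shared split described above can be pictured with a short sketch (resource names, quota values, and port identifiers are invented for the example; on a real switch these allocations are made through the VS resource configuration):

    # Toy resource model: ports belong exclusively to one VS, pooled resources
    # (e.g., routing-table entries) are split by per-VS quota, and VLAN IDs may
    # overlap between VSs because each VS has its own VLAN space.

    port_owner = {}                               # port -> owning VS (exclusive)
    route_quota = {"vs1": 60000, "vs2": 20000}    # per-VS share of a shared pool
    vlan_space = {"vs1": set(), "vs2": set()}     # independent VLAN ID spaces

    def assign_port(port, vs):
        if port in port_owner:
            raise ValueError(f"{port} already belongs to {port_owner[port]}")
        port_owner[port] = vs

    def add_vlan(vs, vlan_id):
        vlan_space[vs].add(vlan_id)   # the same ID may exist in another VS

    assign_port("10GE1/0/1", "vs1")
    assign_port("10GE1/0/2", "vs2")
    add_vlan("vs1", 100)
    add_vlan("vs2", 100)              # overlapping VLAN ID, still isolated
    print(port_owner)
    print(route_quota)
    print(vlan_space)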

  Flexible VS management, operation & maintenance
 

  A key concern when implementing virtual machines in data centers is ensuring effective management and O&M of multiple user clusters. Huawei VS addresses this with its control components and virtual management plane, which allow for easy yet flexible control and management.

  After a VS is created, it can be independently controlled and managed in the same way as a physical device. For example, a VS can be reset and suspended and can switch services and allocate resources based on service requirements. Services can be deployed and configurations can be delivered independently in the VS view. Only specific network administrators can perform control and management, as well as service deployment, in the VS. Network administrators that have not been assigned rights to access the VS are unable to perform these tasks, allowing enterprise departments to manage their services independently.

  Each VS has its own file systems, configuration files, logs, alarms, and network management servers, allowing for independent O&M.

Each VS has exclusive network management channels and isolation rights, meeting multiple user clusters' requirements for independent management and secure isolation. This network management mode is called independent management mode. Each VS is managed as an independent network element that has its own topology.
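
  A minimal sketch of this independent management model follows (administrator names, VS names, and the permission check are hypothetical; in practice, rights are granted through the switch's own user management):

    # Toy model of per-VS management domains: each VS keeps its own logs and
    # configuration, and only administrators assigned to that VS may change it.

    vs_admins = {"vs1": {"alice"}, "vs2": {"bob"}}   # who may manage which VS
    vs_logs = {"vs1": [], "vs2": []}                 # per-VS log stores

    def configure(vs, admin, command):
        if admin not in vs_admins[vs]:
            raise PermissionError(f"{admin} has no rights in {vs}")
        vs_logs[vs].append(f"{admin}: {command}")    # visible only within vs

    configure("vs1", "alice", "add vlan 100")
    try:
        configure("vs1", "bob", "reset")             # rejected: wrong domain
    except PermissionError as err:
        print(err)
    print(vs_logs["vs1"])   # vs1 administrators see only vs1's own log entries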


  To satisfy customers' various network requirements, Huawei VS also provides a unified management mode. In this mode, each VS is uniformly managed on a physical network element and does not have its own topology. The unified management mode is applicable to service isolation, while the independent management mode integrates service isolation and network isolation.

  Benefits of VS in real-world applications
 

  Virtualization has countless applications that can help organizations meet their IT needs while reducing resource demands.

  Market Challenge 1: Contradiction Between High Device Investment Costs and Low Device Resource Use Efficiency
 

  Application Scenario 1: Network Node Virtualization
 

  The VS is divided by network node. For instance, when a device is divided vertically into two VSs, one at the core layer and one at the aggregation layer, a single physical device meets a networking requirement that would otherwise call for two physical devices. When devices are divided horizontally into two VSs each, the number of physical network devices needed is halved.

  With the same logical topology, the VS provides the following benefits in this application scenario:

• Reduces the number of physical network devices and shrinks O&M costs,
• Improves device use efficiency,
• Reduces the power consumption of components such as power modules and fans, as well as of auxiliary facilities such as equipment rooms and air conditioning,
• Provides a consistent service and management experience.


  Market Challenge 2: Contradiction Between Centralized Multi-Service Processing on Network Devices and Reliable and Secure Service Isolation
 

  Application Scenario 2: Service Virtualization
 

  The VS is divided by service, which helps address the uncertainty and risks that accompany service pilot projects. Deploying a specific service in an independent VS can reduce possible interference with other services. As shown in the following figure, Layer 3 services are deployed in VS 1, and TRILL services are deployed in VS 2.

  In this scenario, after services are isolated using VS assignment, services appear to run on an independent device. In addition, service resources are protected, and isolation, reliability, and security are enhanced.

[Figure: Layer 3 services deployed in VS 1; TRILL services deployed in VS 2]

  Market Challenge 3: Contradiction Between Centralized Multi-User Processing on Network Devices and Simplified Network Management, Isolation, and O&M
 

  Application Scenario 3: User Cluster Virtualization
 

  The VS is divided by network user cluster. For example, the VS can be divided by the following types of user clusters:

• User service departments, such as production, R&D, marketing, customer service, and network management departments,
• User attributes, such as the intranet, DMZ, and extranet,
• User types, such as internal office users, online banking users, and credit card service users in financial services.

In this application scenario, the VS provides the following benefits:

• Network service isolation and fault isolation are enabled among user clusters, ensuring high service reliability and security.
• Independent network management is enabled among user clusters, preventing information security risks.


  Application Scenario 4: Multi-Tenant Application
 

  In the public cloud, VSs are assigned to VIP tenants and can be assigned at the core and aggregation layers on demand. Tenants can be further divided into VLANs at the layers below the VS. As shown in the following figure, VS 1 serves tenant A and VS 2 serves tenant B.

  Applying the VS in multi-tenant scenarios has advantages when compared to the VRF isolation mode. These advantages include flexible service deployment, simplified O&M, streamlined management, high reliability, and secure isolation. Therefore, the VS can meet VIP customers' requirements for high-quality services.

[Figure: VS 1 serving tenant A and VS 2 serving tenant B]
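
  The tenant-to-VS mapping can be pictured roughly as follows (tenant names, VS names, and VLAN ranges are invented for the example):

    # Illustrative multi-tenant layout: each VIP tenant gets its own VS at the
    # core/aggregation layers, and ordinary separation below the VS uses VLANs.
    # All identifiers here are made up.

    tenants = {
        "tenant-A": {"vs": "vs1", "vlans": range(100, 200)},
        "tenant-B": {"vs": "vs2", "vlans": range(200, 300)},
    }

    for name, alloc in tenants.items():
        vlans = alloc["vlans"]
        print(f"{name}: served by {alloc['vs']}, "
              f"VLANs {vlans.start}-{vlans.stop - 1} below the VS")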

  Transform networks with Huawei VS
 

  The significance and value of virtual machines in data center switches are clear, meeting critical IT needs with the new IT paradigm of network virtualization. In this vein, Huawei VS, for example, provides a number of key benefits thanks to its new-generation virtualized architecture, including:

• Helping customers flexibly construct virtual machines in data center switches,
• Simplifying multi-user management,
• Improving service reliability and security,
• Making full use of network device resources to lower investment costs.

  Furthermore, Huawei VS integrates with other virtualization technologies, such as the Cluster Switch System (CSS), to separate or combine network devices on demand. The VS also provides flexible and scalable services to build data center networks into elastic, virtualized cloud networks, allowing customers to boost their services in the cloud computing era. Ultimately, the virtualization capabilities of Huawei VS promise to transform networks, delivering greater flexibility, security, reliability, and agility while lowering costs, and to better enable businesses to meet their objectives.
