Cloud Fabric Technology: Helping Data Center Networks Bridge New Chasms

The concerns of data center managers are shifting from network integration to how best to exploit network virtualization and cloud computing.

Source: ZDNet Networking Channel, August 30, 2012


  Abstract

  The concerns of data center managers are shifting from network integration to how best to exploit network virtualization and cloud computing. Many would like to “cross the chasm,” as Geoffrey Moore put it in his seminal book, but are hampered by uncertainty caused by rapid technological innovation and radical changes that result in immature technology and standards; different lifecycles for software, server hardware and CT networks; and the need to respond to shifting overall network architectures. Huawei’s recently introduced Cloud Fabric, which incorporates and extends Cloud Engine technology, offers a way to bridge the chasms that keep data center networks from reaching their most flexible, cost-effective potential—all while minimizing adoption risks.

 ____________________________________________________________

  Geoffrey Moore’s highly influential book Crossing the Chasm identifies five important marketing segments that influence the technology adoption lifecycle: innovators, early adopters, early majority, late majority and laggards.

  As Moore explains, the most difficult transition is often bridging the chasm between visionaries (early adopters) and pragmatists (early majority). Services, products or organizations that cannot cross the chasm will lose opportunities and fail. Sometimes, an organization can build enough momentum in time to get customers to jump the chasm, and its products become a de facto standard. In a similar fashion, data centers that can quickly adopt and exploit newer, faster technology can propel their organizations across the wide chasms of technological change, putting them in a better position to exploit new opportunities. The challenge for suppliers is to anticipate customer needs and provide products that help customers successfully maneuver a treacherous and shifting technological landscape.

  Cloud Computing Creates New Obstacles for Data Center Managers

  One of the most important obstacles for many pragmatists is understanding how to make the transition between network integration—something that most organizations have already achieved—and networks that are virtual and cloud-aware.

  “The trend now is to shift data centers away from network integration paradigms toward network virtualization and cloud computing, and many data centers are on the verge of major technological innovation,” says Linda Dunbar, IETF ARMD co-chair and Senior System Architect of Huawei’s Enterprise Network Business Unit.

  “What’s standing in their way are immature standards, rapidly evolving overall network architecture, insufficient network equipment performance, and more,” she adds.

  Indeed, Dunbar’s sentiments echo those of many data center managers. Next-generation data centers house dense and massive network applications and pose a harsh test for enterprise network technology in terms of bandwidth, port density, security, management and maintenance. For example, the traffic model of a data center based on cloud computing applications is quite different from previous paradigms, a mismatch that sometimes results in unpredictable network behavior and performance.

  Such an environment is a departure from that of traditional data centers, which are characterized by the simple convergence of relatively fixed interactive traffic. Consequently, the performance of traditional MAN switches is likely to suffer in future scenarios, where this condition will rarely be met. Power management also presents a challenge: devices in data centers are densely arranged, so the side-to-side (left-and-right) airflow of MAN switches cannot meet current requirements for energy conservation.

  Other challenges abound. Many new applications require next-generation data centers to provide unified switching, dense 10GE ports, 40GE/100GE support, virtualization and energy efficiency. Thus, the changing requirements imposed upon next-generation data centers constitute a formidable obstacle for data center managers to clear. The rest of this paper describes three major obstacles that must be overcome before many pragmatists can “jump the chasm” to products that incorporate better networking technologies.

  Obstacle 1: Fear of Risks From Immature Technology and Incomplete Standards

  Most new data center technologies are still being standardized, and early adoption of immature standards carries risks of network instability as well as orphaned or inadequate technology.

  For instance, although the framework of TRILL has been standardized, OAM, multi-tenant scalability and other parts of the technology are still being optimized. Similarly, technologies such as OpenFlow, OpenStack and SDN are still evolving and have faced perception challenges to their acceptance. Though some equipment providers already support these protocols, their implementations are frequently met with skepticism. Many providers believe that customer requirements should take priority over purely technological considerations, a belief that has hindered adoption.

  Obstacle 2: Different Lifecycles for IT and Network Infrastructure

  Another hurdle arises from the difference in the lifecycle of IT components and networking infrastructure. IT systems, including servers and applications, are upgraded frequently—with lifecycles ranging from three months to three years—while the lifecycles of networks range from five to eight years.

  These gaps between the lifecycles of ever-changing IT systems and relatively fixed CT networks generate additional problems for networks. Early adoption of cutting-edge technologies in network infrastructures often leads to wasted investment. And from a technical risk-management perspective, it is also inadvisable to frequently upgrade existing networks, since such upgrading might affect the reliability of overlying applications and users’ quality of experience, if only temporarily.

  Obstacle 3: Integrating Network Equipment Spanning Multiple Generations

  At present, a large number of data centers have egress bandwidth over 10GE, a technology whose adoption is growing by about 50-400 percent each year. According to IDC, 10GE servers will become mainstream in 2012, and 40GE/100GE servers will come into widespread commercial use over the next five years. Therefore, network planning should take into account the need to combine different generations of equipment to address emerging requirements in the era of cloud computing.

  Consider the changes that experts expect over the next five years: Most core switches will probably be Clos-based dynamic routing switches. Most line cards and single boards will use packet-processing (PP) chips for complex processing, and 10GE/40GE wire-speed forwarding and processing will be realized. There will be a few 100GE uplink ports. The bandwidth per line-card slot will mostly support 1 T to 4 T capacities, and backplane links will start at 10 Gbps and evolve toward 25 Gbps.

  By contrast, the minimum line-card capacity today is 48 to 96 × 10GE wire-speed forwarding, with complete QoS processing and deep buffering of up to 100 ms per port. Clearly, major upgrade initiatives will be required if all of tomorrow’s network equipment performance standards are to be met.
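  To make the scale of these figures concrete, the short Python sketch below does the back-of-envelope arithmetic implied above. The port counts, rates and buffer time come from the paragraphs in this section, not from any vendor datasheet.

      # Back-of-envelope check of the line-card figures quoted above (illustrative
      # only): aggregate wire-speed capacity per slot, and the buffer size implied
      # by "up to 100 ms per port" on a 10GE port.
      PORTS_PER_CARD = (48, 96)   # 10GE ports per line card, per the text above
      PORT_RATE_GBPS = 10         # 10GE port rate
      BUFFER_TIME_MS = 100        # per-port buffering target

      for ports in PORTS_PER_CARD:
          slot_gbps = ports * PORT_RATE_GBPS
          print(f"{ports} x 10GE -> {slot_gbps} Gbit/s of wire-speed capacity per slot")

      # Buffer needed to absorb 100 ms of traffic on a single 10GE port:
      buffer_bits = PORT_RATE_GBPS * 1e9 * (BUFFER_TIME_MS / 1000)
      print(f"100 ms at 10GE = {buffer_bits / 8 / 1e6:.0f} MB of buffer per port")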

  Huawei Cloud Fabric: Meeting Long-term Cloud Computing Goals

  The ideal cloud computing network architecture is a non-blocking, self-healing, plug-and-play, black-box network platform. It allows continuous evolution and provides open network services.

  Despite astounding breakthroughs in data center core switches, data centers still face formidable challenges, including limited network expandability, uncoordinated network and application virtualization, and limited transparency of network behavior. Until these problems are resolved and the solutions communicated to data center customers, it is unlikely that suppliers will see their customers “jump the chasm” in great numbers.

  Suppliers have proposed various ideas and options to address these changes and the trends of a future dominated by the needs of cloud computing.

  Two strategies predominate. The first treats the network as a single set of equipment, with one core and multiple remote line cards, providing a high-performance network platform with complete traffic and load balancing, and it presupposes true centralized management and control.

  For instance, Cisco CloudVerse’s unified data center uses FCoE, DCB, FabricPath, OTV, vPC and other technologies for convergence and expandability. And to improve network performance, H3C virtualizes multiple devices into a single logical device with an intelligent and elastic architecture, eliminating complex Layer 2 protocols and changing the active-standby mode of links into the more efficient load-balancing mode. This appears to be a tradeoff that gives up some parts of an ideal platform due to technical difficulties that cannot yet be overcome.
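  As a rough illustration of that last point, the Python sketch below contrasts an active-standby link pair, where only the primary link carries traffic, with per-flow hashing that spreads flows across all available links. The link names and flow identifiers are invented for the example and do not correspond to any vendor’s implementation.

      # Illustrative contrast (not vendor code) between active-standby links and
      # per-flow hash-based load balancing across an aggregated group of links.
      import hashlib

      LINKS = ["link-1", "link-2", "link-3", "link-4"]

      def active_standby(flow_id: str) -> str:
          # All traffic rides the primary link; the others sit idle until failover.
          return LINKS[0]

      def load_balanced(flow_id: str) -> str:
          # A stable per-flow hash spreads traffic across every available link.
          digest = hashlib.md5(flow_id.encode()).digest()
          return LINKS[digest[0] % len(LINKS)]

      flows = [f"10.0.0.{i} -> 10.0.1.{i}" for i in range(8)]
      print([active_standby(f) for f in flows])  # only link-1 carries traffic
      print([load_balanced(f) for f in flows])   # flows spread over all four links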

  A second approach is to redefine the network forwarding architecture and fully open the capability of networks. To this end, Juniper is developing equipment supporting OpenFlow, a shared data center network that includes (mostly dynamic) segmentation of infrastructure to support various applications and user groups and connect resource pools with maximum agility. In most cases, this involves virtualization technology that allows different logical operations to run on one physical object, such as a switch, router, or dedicated device.

  In the new OpenFlow marketplace, suppliers usually focus on one of three types of components: switches, controllers or applications. It is not technically difficult for equipment to support OpenFlow; the question is whether interfacing is easy, or even possible, when diverse suppliers support OpenFlow but few supply a unified set of components designed to work together under it. Data center managers therefore take a risk in adopting a strictly OpenFlow model.
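  To show what the controller side of such a split looks like in practice, here is a minimal sketch of an OpenFlow 1.3 application written against the open-source Ryu controller framework; it simply installs a table-miss rule when a switch connects. It is a generic illustration of the OpenFlow model, not part of any supplier’s product discussed in this paper.

      # Minimal OpenFlow 1.3 controller app using the open-source Ryu framework.
      # On switch connect, it installs a table-miss flow entry that sends unmatched
      # packets to the controller for a forwarding decision (a generic illustration,
      # not any vendor's data center product).
      from ryu.base import app_manager
      from ryu.controller import ofp_event
      from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
      from ryu.ofproto import ofproto_v1_3


      class TableMissApp(app_manager.RyuApp):
          OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

          @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
          def switch_features_handler(self, ev):
              datapath = ev.msg.datapath
              ofproto = datapath.ofproto
              parser = datapath.ofproto_parser

              # Match everything; punt unmatched packets to the controller.
              match = parser.OFPMatch()
              actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                                ofproto.OFPCML_NO_BUFFER)]
              instructions = [parser.OFPInstructionActions(
                  ofproto.OFPIT_APPLY_ACTIONS, actions)]
              flow_mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                           match=match, instructions=instructions)
              datapath.send_msg(flow_mod)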

  To resolve these problems, Huawei launched Cloud Fabric at Interop 2012 in Beijing. Designed to support cloud-exploiting data centers, Huawei Cloud Fabric achieves elastic expandability of data centers, ease of use, flexibility and openness. Efficient forwarding is concentrated at the core to realize higher capacity. The access layer is intelligent, so it can identify the migration, modification and deployment of services and support flexible deployment through network virtualization mapping. Independent network controllers open network capabilities to the upper layer so that the network can respond quickly to changes in overlying services.

  Cloud Fabric is expandable and can incorporate evolving technologies better than other existing solutions, something that will become increasingly important as performance and capacity requirements increase. Consider that in 2012, the capacity of the world’s leading data center core switches is 480 Gbit/s per slot, with front panels supporting 48 × 10GE. To achieve a service lifetime of five to ten years, such products must support an eight-fold bandwidth increase over today’s capacity, to roughly 4 Tbit/s per slot. CloudEngine switches provide total bandwidth of 360 T, which can be used to expand non-blocking networks and meet the requirements of server access evolution from GE/10GE to 40GE/100GE.
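  The scaling argument above can be checked with a line of arithmetic; the snippet below uses only the figures quoted in this paragraph (48 × 10GE per slot today, and an eight-fold increase over a five-to-ten-year lifetime).

      # Quick check of the scaling claim above, using only the figures quoted in
      # the text: a 48 x 10GE front panel gives 480 Gbit/s per slot today, and an
      # eight-fold increase over the product's lifetime lands near 4 Tbit/s per slot.
      current_slot_gbps = 48 * 10                # 48 x 10GE front panel
      target_slot_gbps = current_slot_gbps * 8   # eight-fold increase
      print(f"today:  {current_slot_gbps} Gbit/s per slot")
      print(f"target: {target_slot_gbps / 1000:.2f} Tbit/s per slot (roughly 4 T/slot)")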

  To support virtualization, CloudEngine switches support in-band transmission and drive network virtualization for close coupling with applications via out-of-band service and network management interfaces. A combination of technical measures is also used to realize end-to-end virtualization adaptation.

  Cloud Fabric Lets Data Center Managers Expand and Improve Networks, Cross Chasms

  One thing is certain: In the near future, the need for higher capacity, performance, security, management, power consumption, compatibility and openness will increase in an effort to improve efficiency and productivity, reduce complexity, and drive return on investment. These Huawei Cloud Fabric features improve the industry’s ability to reassure data center customers so that no matter what obstacles lie ahead, their data centers can meet them with flexible, expandable, adaptable infrastructure—all tied together with Cloud Fabric technology. Only then will the next wave of data-center pragmatists adopt these emerging technologies and cross the chasm ahead.
