Understanding Edge Routing and the WAN in the Virtualized IT Data Center

The purpose of the data center is to host business-critical applications for the enterprise. Each component in the data center architecture is designed and configured to ensure the highest quality user experience possible. This document describes the critical role that edge routers play in the virtualized IT data center architecture.

Edge Routing

The edge is the point in the network that aggregates all customer and Internet connections into and out of the data center. Although high availability and redundancy are important throughout the data center, they are most vital at the edge: the edge is a choke point for all data center traffic, and a failure at this layer takes the entire data center out of service.

At the edge, full hardware redundancy should be implemented using platforms that support control plane and forwarding plane redundancy, link aggregation, MC-LAG, redundant uplinks, and the ability to upgrade software and hardware while the data center remains in service (unified in-service software upgrade, or ISSU). Platforms in this role should also support a full range of protocols so that the data center can accommodate any interconnect type that might be offered.
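
For reference, the following Junos-style fragment is a minimal, illustrative sketch of the control plane and link redundancy features described above: graceful Routing Engine switchover (GRES), nonstop active routing (NSR), synchronized commits, and an LACP-managed aggregated Ethernet bundle. The interface name is an assumption, and a full MC-LAG deployment additionally requires ICCP and mc-ae configuration between the two edge routers.

    chassis {
        /* GRES: packet forwarding continues across a Routing Engine switchover */
        redundancy {
            graceful-switchover;
        }
    }
    routing-options {
        /* NSR: routing protocol state is maintained on the backup Routing Engine */
        nonstop-routing;
    }
    system {
        /* keep the configuration identical on both Routing Engines */
        commit synchronize;
    }
    interfaces {
        /* LACP-managed aggregated Ethernet bundle toward the core (name is an assumption) */
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;
                }
            }
        }
    }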

Edge routers in the data center require support for IPv4 and IPv6, as well as the ISO and MPLS protocol families. Because the data center might be multi-tenant, the widest array of routing protocols should also be supported, including static routing, RIP, OSPF, OSPF-TE, OSPFv3, IS-IS, and BGP. With large-scale multi-tenant environments in mind, it is important to support Virtual Private LAN Service (VPLS) through support for bridge domains, overlapping VLAN IDs, integrated routing and bridging (IRB), and IEEE 802.1ad VLAN stacking (Q-in-Q). The edge should support a complete set of MPLS VPNs, including L3VPN, L2VPN (the Martini and Kompella approaches, specified in RFC 4905 and RFC 6624, respectively), and VPLS.
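
As an illustration of this protocol breadth, the fragment below is a minimal Junos-style sketch of an MPLS-enabled edge running OSPF-TE, LDP, and BGP signaling for Kompella-style L2VPN/VPLS, plus a single BGP-signaled VPLS routing instance for one tenant. Interface names, addresses, autonomous system numbers, and instance names are assumptions; a production edge would carry many such instances alongside L3VPNs.

    protocols {
        mpls {
            interface xe-0/0/0.0;
        }
        ospf {
            traffic-engineering;
            area 0.0.0.0 {
                interface xe-0/0/0.0;
                interface lo0.0 {
                    passive;
                }
            }
        }
        ldp {
            interface xe-0/0/0.0;
        }
        bgp {
            group provider-core {
                type internal;
                local-address 192.0.2.1;
                /* BGP signaling for Kompella L2VPN and VPLS (RFC 6624) */
                family l2vpn signaling;
                neighbor 192.0.2.2;
            }
        }
    }
    routing-instances {
        /* one BGP-signaled VPLS instance per tenant; names and IDs are assumptions */
        tenant-a-vpls {
            instance-type vpls;
            vlan-id 100;
            interface ge-1/0/0.100;
            route-distinguisher 192.0.2.1:100;
            vrf-target target:65000:100;
            protocols {
                vpls {
                    site-range 10;
                    site tenant-a-site1 {
                        site-identifier 1;
                    }
                }
            }
        }
    }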

Network Address Translation (NAT) is another factor to consider when designing the data center edge. It is likely that multiple customers serviced by the data center will have overlapping private network address schemes. In environments where direct Internet access to the data center is enabled, NAT is required to translate between routable public IP addresses and the private IP addressing used in the data center. The edge must support Basic NAT44, NAPT44, NAPT66, Twice NAT44, and NAPT-PT.
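
On MX-class platforms, NAT of this kind is typically performed on a services PIC or MS-MIC and tied to traffic through a service set. The fragment below is a minimal, illustrative sketch of a NAPT44 pool and rule only; the pool address, prefixes, and names are assumptions, the required service-set and service-interface bindings are omitted, and the exact hierarchy varies by Junos release.

    services {
        nat {
            /* public pool used for translated source addresses (assumed range) */
            pool tenant-a-public {
                address 203.0.113.0/27;
                port {
                    automatic;
                }
            }
            rule tenant-a-napt {
                match-direction input;
                term t1 {
                    from {
                        /* overlapping private space used inside the data center */
                        source-address 10.0.0.0/8;
                    }
                    then {
                        translated {
                            source-pool tenant-a-public;
                            translation-type napt-44;
                        }
                    }
                }
            }
        }
    }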

Finally, because the edge is the ingress and egress point of the data center, the implementation should support robust data collection so that administrators can verify and prove strict service-level agreements (SLAs) with their customers. The edge layer should support collection of traffic flow records and aggregate statistics and, at a minimum, should be able to report the exact number of bytes and packets received, transmitted, queued, lost, or dropped, per application.
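
At an operational level, the per-interface and per-queue accounting described above can be checked with standard Junos CLI commands such as the following (the hostname is a placeholder; per-application flow records additionally require flow monitoring or sampling to be configured and exported to a collector):

    user@edge> show interfaces xe-0/0/0 extensive
        (exact input and output byte and packet counters, errors, and drops for the port)
    user@edge> show interfaces queue xe-0/0/0
        (per-queue counts of packets and bytes transmitted, queued, and dropped)
    user@edge> show services accounting flow
        (flow records collected by the services PIC when flow monitoring is enabled)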

Figure 1 shows the location of the edge routing function in this solution.

Figure 1: Edge Routing


The WAN

The WAN role provides transport between end users, enterprise remote sites, and the data center. Several different WAN topologies can be used, depending on the business requirements of the data center. A data center can connect directly to the Internet, using plain IP-based access to servers in the data center or a secure tunneled approach based on generic routing encapsulation (GRE) or IP Security (IPsec). Many data centers serve a wide base of customers and favor Multiprotocol Label Switching (MPLS) interconnection, allowing customers to connect directly into the data center over the service provider's managed MPLS backbone.
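
As a concrete example of the tunneled approach, the fragment below is a minimal Junos-style sketch of a GRE tunnel terminating at the data center edge. The tunnel-services statement creates the gr- tunnel interfaces on an MX line card; the FPC/PIC location, interface names, and all addresses are assumptions, and an IPsec variant would instead use the security or services configuration appropriate to the platform.

    chassis {
        fpc 1 {
            pic 0 {
                /* reserves line-card bandwidth and creates the gr- tunnel interfaces */
                tunnel-services {
                    bandwidth 10g;
                }
            }
        }
    }
    interfaces {
        gr-1/0/0 {
            unit 0 {
                tunnel {
                    source 198.51.100.1;
                    destination 198.51.100.2;
                }
                family inet {
                    address 10.255.0.1/30;
                }
            }
        }
    }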

Another approach to the WAN is to enable direct peering between customers and the data center; this approach lets customers bypass transit peering links by establishing a direct connection (for example, a private leased line) into the data center. Depending on the requirements of the business and the performance requirements of the hosted applications, the choice of WAN interconnection is the first decision that shapes the performance and security of the data center applications. Private peering or an MPLS interconnect offers improved security and performance at a higher cost. Where the hosted applications are less sensitive to security and performance, or where the application protocols provide built-in security, a simple Internet-connected data center can offer an appropriate level of security and performance at a lower cost.

To implement the edge routing and WAN portions of the virtualized IT data center, this solution uses MX240 Universal Edge routers. Because the MX240 router offers dual Routing Engines and unified ISSU at a reasonable price point, it is preferred over the smaller MX80 router.
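
For reference, dual Routing Engine status and a unified ISSU can be verified and initiated with standard operational commands such as the following (the hostname and package name are placeholders, and GRES/NSR must already be configured for ISSU to proceed):

    user@edge> show chassis routing-engine
        (confirms that both Routing Engines are present and which is master)
    user@edge> show system switchover
        (run on the backup Routing Engine to confirm graceful switchover readiness)
    user@edge> request system software in-service-upgrade /var/tmp/<junos-install-package>.tgz
        (performs a unified ISSU without taking the router out of service)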

Figure 2 shows the Edge routing design.

Figure 2: Edge Routing Design


This design for the network segment of the data center meets the requirements of this solution for 1-Gigabit, 10-Gigabit, and 40-Gigabit Ethernet ports, converged data and storage, load balancing, quality of experience, network segmentation, traffic isolation and separation, and time synchronization.