
Posted by Mervyn Kelly

Big Data is gradually becoming the next big trend in the ICT world. While it is still nascent and many of its technical specifications are yet to emerge, one can already imagine the numerous ways in which the huge sets of largely unstructured data that exist today, and that will be created in the future, can be used to our advantage.

In fact, the application of Big Data technologies is likely to become pertinent in every aspect of our daily lives: retail, healthcare and transport will all benefit from the ability to better understand the myriad transactions and interactions consumers have with technology throughout the day.

One other big trend in the ICT world – Cloud Computing – is clearly emerging as a major enabler of Big Data. The sheer quantities of data involved mean that a distributed computing model will be essential: the immense compute power needed to exploit the resource is only likely to reside in large data centres, and few businesses are comfortable owning such facilities as part of their IT infrastructure.
This means that “rentable” storage space and compute power, both sitting at the heart of the Cloud concept, will likely form the core of most Big Data projects, as more and more enterprises want to take advantage of Big Data without the inherent (and non-core to their business) costs of acquiring, maintaining and developing the necessary IT infrastructure. Some interesting technologies in this space have already emerged – NoSQL databases, for instance, and Apache Hadoop from the Apache Software Foundation – which enable the management and distributed processing of these immense quantities of data.
As these two industry mega-trends meet and we enter the era of Big Data in the Cloud, from a technology perspective the focus will, to a certain extent, shift away from the software powering Big Data projects and towards the infrastructure necessary to support it. This in turn will reveal a stark truth: in order to power the projects of the future we need to take a radically different approach to networking. The “brute force” approach of simply adding more bandwidth to support new services and the inexorable rise in data will no longer be sustainable. Instead, the networks of the future – especially those connecting the data centres powering the Cloud – will need to evolve towards a more flexible and considerably more intelligent resource, a Cloud backbone of sorts, able to deliver the required network performance on demand.
Connecting data centres through a Cloud backbone
Broadly speaking, there are two different classes of inter-data centre traffic: high-bandwidth, extremely Quality of Service (QoS)-sensitive flows such as live virtual machine migration traffic and active-active data storage replication traffic; and lower-bandwidth, more QoS-tolerant flows such as redirected user-application traffic. Both require controlled performance—the former on an individual, absolute basis; the latter on an aggregate, relative basis. The key role of the Cloud backbone is to supply switching and transport resources among data centres that can be allocated and isolated for these different traffic flows, as dictated by the data centre and Cloud orchestration systems.
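To make the distinction concrete, here is a minimal sketch, in Python, of how these two classes of flow might be represented. The class and field names (QosClass, InterDcFlow and so on) are invented for the example rather than taken from any particular orchestration system.

```python
from dataclasses import dataclass
from enum import Enum


class QosClass(Enum):
    # High-bandwidth, QoS-sensitive flows (e.g. live VM migration or
    # active-active storage replication) needing absolute, per-flow guarantees
    SENSITIVE = "sensitive"
    # Lower-bandwidth, QoS-tolerant flows (e.g. redirected user-application
    # traffic) needing relative guarantees enforced on the aggregate
    TOLERANT = "tolerant"


@dataclass
class InterDcFlow:
    """A single traffic flow between a pair of data centres."""
    src_dc: str          # originating data centre
    dst_dc: str          # destination data centre
    bandwidth_mbps: int  # requested or observed bandwidth
    qos_class: QosClass  # which of the two broad classes it falls into


# One example flow of each class
vm_migration = InterDcFlow("dc-east", "dc-west", 8000, QosClass.SENSITIVE)
user_redirect = InterDcFlow("dc-east", "dc-west", 200, QosClass.TOLERANT)
```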
For the QoS-sensitive service-layer “client” traffic flows, it is necessary to first establish end-to-end connection-oriented transport-layer “servers” (one per destination data centre), into which these flows may individually be admitted only if there are sufficient resources. These are point-to-point connections that have deterministic paths and reserved capacity that may be continuously accounted for and explicitly allocated. The QoS-tolerant traffic flows may use these same point-to-point connections, as long as they are marked at a lower priority. More practically, such traffic will be allocated a separate point-to-point connection or carried over a multipoint-to-multipoint transport server shared among a set of data centres.
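As a rough illustration of the admission decision described above, the sketch below admits a QoS-sensitive flow onto a point-to-point connection only while the reserved capacity has headroom, and refuses it otherwise. The connection object, its fields and the bandwidth figures are assumptions made purely for the example.

```python
from dataclasses import dataclass


@dataclass
class PointToPointConnection:
    """End-to-end transport connection towards one destination data centre."""
    dst_dc: str
    reserved_mbps: int      # capacity reserved along a deterministic path
    admitted_mbps: int = 0  # capacity already granted to admitted flows

    def admit(self, flow_mbps: int) -> bool:
        """Admit a QoS-sensitive flow only if sufficient capacity remains."""
        if self.admitted_mbps + flow_mbps <= self.reserved_mbps:
            self.admitted_mbps += flow_mbps
            return True
        return False  # no headroom: the flow must be refused or rerouted


# A 10 Gb/s connection to a destination data centre
conn = PointToPointConnection(dst_dc="dc-west", reserved_mbps=10_000)

print(conn.admit(8_000))  # True: a VM migration flow fits within the reservation
print(conn.admit(4_000))  # False: this flow would exceed the reserved capacity
```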
Depending on network operator preference and the magnitude of traffic between a particular pair of data centres, the point-to-point connection types supported include photonic wavelengths, OTN circuits, MPLS LSPs and PBB-TE tunnels. Multipoint-to-multipoint constructs include VPLS and SPBM. These connections originate and terminate at Cloud Backbone Edge (CBE) nodes and are switched and transported through the aggregation (CBA) and core (CBC) nodes.
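Purely to illustrate how operator preference and traffic magnitude might drive the choice between those connection types, here is a toy selection rule; the thresholds and the mapping itself are assumptions made for the sketch, not guidance implied by the architecture.

```python
from enum import Enum


class P2PConnectionType(Enum):
    PHOTONIC_WAVELENGTH = "photonic wavelength"
    OTN_CIRCUIT = "OTN circuit"
    MPLS_LSP = "MPLS LSP"
    PBB_TE_TUNNEL = "PBB-TE tunnel"


def pick_connection_type(traffic_gbps: float, prefer_packet_layer: bool) -> P2PConnectionType:
    """Illustrative mapping only: real deployments weigh many more factors."""
    if traffic_gbps >= 100:
        return P2PConnectionType.PHOTONIC_WAVELENGTH   # fill a whole wavelength
    if traffic_gbps >= 10:
        return P2PConnectionType.OTN_CIRCUIT           # sub-wavelength circuit
    if prefer_packet_layer:
        return P2PConnectionType.MPLS_LSP              # packet-layer tunnel
    return P2PConnectionType.PBB_TE_TUNNEL             # Ethernet-layer tunnel


print(pick_connection_type(120, prefer_packet_layer=False).value)  # photonic wavelength
print(pick_connection_type(3, prefer_packet_layer=True).value)     # MPLS LSP
```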
The Cloud backbone edge connects to the data centre fabric and is the performance control gatekeeper to the wide area Cloud backbone network. On the client side, it connects to the data centre core switch with an Ethernet interface, and potentially to the Storage Area Network (SAN) border switch with a Fibre Channel interface. The Cloud backbone edge also classifies and polices pre-authorized service flows and maps them to the appropriate end-to-end connection. These connections are then aggregated onto Wide Area Network (WAN) ports for direct interconnection over dedicated wavelengths to the appropriate destination nodes on the edge, or transported to an aggregation site for further aggregation and switching. The edge is also responsible for Layer 2 extension (and potentially LAN protocol interworking) of the local data centre fabric to the fabric of other federated data centres, and for isolating the virtual network services of multiple tenants over shared resources.
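A much-simplified sketch of that gatekeeping role, assuming hypothetical flow identifiers and a basic rate check standing in for a real policer, might look like this:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ServiceFlowPolicy:
    """A pre-authorised service flow and the connection it maps onto."""
    flow_id: str          # e.g. a (tenant, VLAN) pair in a real system
    rate_limit_mbps: int  # policing rate agreed for this flow
    connection_id: str    # end-to-end connection it is mapped onto


# Policies provisioned by the data centre and Cloud orchestration systems
policies = {
    "tenant-a:vm-migration": ServiceFlowPolicy("tenant-a:vm-migration", 8000, "conn-p2p-dc-west"),
    "tenant-b:web-redirect": ServiceFlowPolicy("tenant-b:web-redirect", 500, "conn-shared-vpls"),
}


def classify_and_police(flow_id: str, offered_mbps: int) -> Optional[str]:
    """Return the connection a flow should be mapped onto, or None if it is refused."""
    policy = policies.get(flow_id)
    if policy is None:
        return None                   # not pre-authorised: refuse at the edge
    if offered_mbps > policy.rate_limit_mbps:
        return None                   # out of profile: police (drop or re-mark)
    return policy.connection_id       # in profile: map onto its connection


print(classify_and_police("tenant-a:vm-migration", 6000))  # conn-p2p-dc-west
print(classify_and_police("tenant-c:unknown", 100))        # None
```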
Except for the direct wavelength scenario, the CBA nodes aggregate and, along with the core nodes, switch these connections over a core mesh network to the destination CBE nodes. Together, the nodes provide reliable, deterministic interconnection bandwidth between pairs of CBEs in a 1-to-1 or many-to-1 fashion that allows the network edge to enforce absolute performance control over that bandwidth. The aggregation and core portions of the network can also optionally reallocate raw capacity to wherever current traffic demand requires it, by reconfiguring wavelengths along heavily used segments and re-tuning switch DWDM ports to establish connectivity across the new wavelengths.
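As a loose illustration of that reallocation step, the following sketch adds a wavelength to any segment whose utilisation crosses a threshold; the segment names, capacities and threshold are all assumptions made for the example.

```python
# Capacity (in 100G wavelengths) and current demand (Gb/s) per core segment
segments = {
    "CBA-1 <-> CBC-1": {"wavelengths": 4, "demand_gbps": 390},
    "CBA-2 <-> CBC-1": {"wavelengths": 4, "demand_gbps": 120},
}

WAVELENGTH_GBPS = 100        # capacity added by re-tuning one more DWDM port
UTILISATION_THRESHOLD = 0.9  # trigger reallocation above 90% utilisation


def rebalance(segments: dict) -> None:
    """Add a wavelength to any segment whose utilisation exceeds the threshold."""
    for name, seg in segments.items():
        capacity = seg["wavelengths"] * WAVELENGTH_GBPS
        utilisation = seg["demand_gbps"] / capacity
        if utilisation > UTILISATION_THRESHOLD:
            seg["wavelengths"] += 1  # reconfigure a wavelength onto this segment
            print(f"{name}: {utilisation:.0%} utilised, adding a wavelength")


rebalance(segments)
```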
In this way, the network seamlessly connects a larger pool of resources shared between both enterprise and provider data centres, creating an IT model that federates enterprise customer data centres with multiple geographically distributed Cloud provider data centres to unlock the full power of Cloud computing. This involves the virtualisation and pooling of all data centre and network assets to enable fluid placement and migration of workloads according to changing needs – effectively creating a “data centre without walls.” Only that level of flexibility in the Cloud will be able to truly meet the infrastructure challenge posed by Big Data and allow this phenomenal trend to deliver on its promise.