The Any-to-Any Shift: Not Only About Speed
By Brian Doricko, CyrusOne Vice President of Strategic Sales
For Lease: 50,000 square feet of prime office space in downtown’s newest premier office tower. The 20th-floor space includes high-speed wiring directly to individual offices and to the cube farms in the bullpen, where each desk can plug into the internet with its landline and modem. From there, workers can dial right into the Public Switched Telephone Network (PSTN) hub-and-spoke infrastructure and connect predictably to anyone in the world.
Would anyone sign that lease today? Of course not! This sort of infrastructure and technology hasn’t been state-of-the-art or optimal since “Seinfeld” was the No. 1 network show in the United States and Nokia made the top-selling cellphone. It’s unlikely anyone today would design the office space described above, and they certainly won’t in the future. Forward-looking engineers instead master-plan cities around data centers that include gigawatts of power and mass-scale fiber.
Yet we still need more innovation, creativity and a change in mindset before we deploy the next generation of scale. The global COVID-19 pandemic fast-tracked a major shift. Collectively, we have proven what works best now and for the foreseeable future, both technically and economically. When the world went into lockdown in March 2020, virtually everyone was forced to shift to remote work. That massive disruption provided an opportunity to test, in real time and at unprecedented scale, how IT networks would hold up as we stayed home for work and school and ordered our consumer goods online.
In the past, users in offices accessed internal systems in predictable ways from predictable locations. During the pandemic, traffic turned into an Any-to-Any flow across internal systems, cloud and hybrid cloud. This spanned every type of communication: internal users; external channels such as supply chain partners, SaaS and digital services; and, most certainly, customers, consumers and work- and school-from-home users.
Traffic patterns for our external and internal clients shifted dramatically. Cloud spend jumped 37% to $29 billion in the first quarter of 2020 alone. Amazon Web Services (AWS), Google Cloud and Microsoft Azure all saw unprecedented demand during the early stage of the pandemic, Forbes reported, as companies began to understand the real value and need for cloud technology. It’s a lesson that will only continue to gain traction – as evidenced in the first quarter of 2021, when cloud spend spiked 35% to $42 billion, according to analyst firm Canalys.
“The key to survival for many organizations has been the rapid digitization of their business and the adoption of cloud, which has enabled them to pivot their business models and ways of working to better meet their employees’ and customers’ rapidly evolving needs,” CIO magazine reported in March 2021.
While it has allowed and supported access, the current hub-and-spoke system often forces traffic onto longer, slower paths than the direct routes an Any-to-Any architecture offers. We also need to help clients deploy infrastructure that minimizes latency and the distance between routes. Hub-and-spoke must evolve to Any-to-Any architectures for both technical and economic advantage. Existing deployments look a lot like the phone networks of 2000 and earlier. Voice networks have since evolved into cell- and WiFi-based networks that no longer use a hub-and-spoke architecture. IP (TCP/IP)-based networks are making the same shift, albeit slowly.
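The hub-and-spoke penalty is easy to see with back-of-the-envelope propagation math. The sketch below is illustrative only: the distances are hypothetical, and fiber speed is approximated at roughly 200 km per millisecond (about two-thirds the speed of light).

```python
# Illustrative comparison of hub-and-spoke vs. direct (Any-to-Any) routing.
# All distances are hypothetical; fiber propagation is assumed to be
# ~200 km per millisecond (roughly 2/3 the speed of light in vacuum).

FIBER_SPEED_KM_PER_MS = 200.0

def one_way_latency_ms(path_km: float) -> float:
    """Propagation delay for a one-way path of the given length."""
    return path_km / FIBER_SPEED_KM_PER_MS

# Two sites 300 km apart, each homed to a hub 800 km away.
via_hub = one_way_latency_ms(800 + 800)  # traffic hairpins through the hub
direct = one_way_latency_ms(300)         # Any-to-Any: the sites peer directly

print(f"via hub: {via_hub:.1f} ms, direct: {direct:.1f} ms")
# → via hub: 8.0 ms, direct: 1.5 ms
```

The point is not the exact numbers but the shape of the result: whenever two endpoints are closer to each other than to the hub, hairpinning through the hub buys extra distance, latency and transport cost for nothing.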
And while latency remains top of mind in the IT world, it may be time to rethink it. Solutions should be engineered based on the application performance characteristics required by the user community.
Is fastest always best? Certainly it is for applications tasked with perishable workloads, such as trading and ad serving. These are essentially “foot race” applications where first wins. But what about other applications? The majority, in fact, have different requirements: a performance target, but no “get there first” mandate, especially if getting there first costs more.
Let’s think about this another way. Consider a scenario where you order something from an online retailer. You order around lunchtime and are given choices for delivery speed. For $10 extra you can have your delivery between 1 a.m. and 3 a.m. For no extra you can get your delivery between 7:30 a.m. and 8 a.m. If you sleep between 11 p.m. and 7:30 a.m., would you ever choose the faster, more expensive option? And is it really faster if it arrives in the middle of the night?
If we learn to design infrastructure based upon application requirements, then in this scenario the package arriving at 3 a.m. is NOT faster. Why? Because you are asleep when it is delivered. From your standpoint, the package still arrives around the time you wake up. In fact, it may be much worse to get it in the middle of the night: it will sit on your front step, where it may get wet if it rains or even be stolen. This scenario illustrates how often extra dollars go into infrastructure that could be designed more efficiently and securely for less money.
How about a restaurant that delivers your five-course meal and dessert all at once, as fast as possible? Would that be faster? Maybe. But as defined by the user (you), one could argue they ruined your entire meal by bringing dessert, appetizers and entrees at the same time. Further, it is much more expensive to build and staff a kitchen large enough to prepare everything at once.
Many applications behave the same way. Even as data gets bigger and bigger, the distance to move the data matters in a different way. Big data, AI, machine learning and neural network applications need data gathered from many places and then crunched by massive compute engines. If data is getting too big and if data gravity is influencing deployments, isn’t that another massive argument for bigger centralized compute instead of more distributed edge infrastructure? We must have more fiber everywhere, and definitely move away from the PSTN-like hub-and-spoke infrastructure, which often gives data a longer and more expensive path to travel.
Consider shopping at a large national retailer with hundreds of stores across all 50 states. You buy a shirt off the rack, which is scanned when you check out and leave. If the data center were in the store itself, it would take one-eighth of a millisecond to let the retailer know that shirt was sold. If the scan routes to a massive data center in Iowa, it takes 29 milliseconds. Is that microscopic difference in milliseconds really going to change the retailer’s inventory and shipping schedule?
Probably not. It’s more helpful and logical to let the retailer know it sold 53 of those shirts in the last six hours, and where and when it can get replacement stock. That inventory view is best kept centrally, at one data center, to achieve maximum elasticity in filling demand and responsiveness to clients. A retailer like this can’t “do it locally” when it has 43 warehouses in remote locations across the United States.
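The retail numbers above can be sanity-checked the same way. This sketch is purely illustrative: the path distance to Iowa and the fixed switching/processing overheads are assumptions chosen to be consistent with the one-eighth-millisecond and 29-millisecond figures in the example, not measured CyrusOne data.

```python
# Back-of-the-envelope check of the retail latency example.
# Fiber propagation assumed at ~200 km per millisecond; the distance
# and per-path overheads below are illustrative assumptions only.

FIBER_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float, overhead_ms: float = 0.0) -> float:
    """Round-trip propagation time plus fixed switching/routing overhead."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS + overhead_ms

# In-store: propagation is negligible; switching/processing dominates.
in_store = round_trip_ms(0.01, overhead_ms=0.12)       # ≈ 1/8 ms total

# To Iowa: ~2,400 km of fiber path plus router-hop overhead.
to_iowa = round_trip_ms(2400, overhead_ms=5.0)         # ≈ 29 ms total

print(f"in-store: {in_store:.3f} ms, to Iowa: {to_iowa:.1f} ms")
# → in-store: 0.120 ms, to Iowa: 29.0 ms
```

Both results are far below anything a human, an inventory batch job or a shipping schedule would notice, which is exactly the article’s point: for this class of workload, paying for the shortest possible path buys nothing.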
This is one reason why CyrusOne built a data center in Council Bluffs, Iowa.
There are so many different use cases for the diversity of applications used globally that flexibility has become one of the most important requirements in infrastructure, along with availability, manageability and cost. CyrusOne built and operates a data center in Council Bluffs, Iowa, adjacent to one of the largest cloud data centers in the world. This site allows enterprises to deploy dedicated, owned equipment as part of a hybrid mix in our facility and utilize the adjacent cloud platform for certain workloads and for variable-demand applications that require additional compute from time to time.
If you are undergoing a business transformation that leverages infrastructure and technology, give us a call. CyrusOne would love to help you evolve away from traditional, expensive, inflexible hub-and-spoke designs into a less expensive, more reliable, more flexible design that optimizes your architecture and your application performance – and thereby the user experience. Every company’s journey is different, and we can help you through that journey to make your IT environment as efficient as possible based on your parameters, not somebody else’s.
For more insights, read “Time to Embrace the Any-to-Any Shift,” the first blog post authored by Brian Doricko on this topic.