WAN Connectivity Isn’t Getting Any Less Expensive
The cost of internet access depends greatly on where you are located. For many, the price of internet connectivity hasn’t changed in years, and it doesn’t look to be getting less expensive any time soon. This has real-world consequences for organizations of all sizes.
In developed nations it is a rare organization that can profit without access to the internet. Today, a website and an email address are arguably more important than a storefront and a phone line. Both require internet access.
Virtually every organization uses some form of public cloud service. This can range from internet-delivered SIP phones to online banking to complex data analysis. The more sophisticated the public cloud solution, the higher the likelihood that the organization making use of it will require high-end internet connectivity.
The internet is also used to interconnect organizations with multiple locations. In a very real way, it has become the glue that holds society together. We back up data from our on-premises operations over the internet, transfer massive datasets into the public cloud for analysis, and have customers submit work to us over the internet.
The internet is a modern necessity.
The Cost of Access
Large organizations with equally large IT budgets have traditionally been able to afford to have fibre optic internet access provisioned. The costs of such an endeavour vary wildly, but even at the lower end of the scale, Internet Service Providers typically begin the discussion at tens of thousands of dollars.
With the exception of a few lucky cities experiencing a Fibre to the Premises (FttP) rollout, the cost of high-speed internet access for small and midmarket companies has stalled. In addition, even where FttP (or comparable) internet access is available, ISPs often charge for total consumption.
Worst of all, the current political climate around regulation of the service provider industry doesn’t provide much hope that competition will create price wars any time soon. The United States in particular is in the process of rolling back a decade’s worth of customer-friendly gains.
The cost of internet access is directly related to how much bandwidth one needs. The more you order, the lower the cost per megabit/second as well as per gigabyte/month. In many parts of the developed world, a small organization looking to get access to 100 megabits/second of throughput and 10 terabytes/month of capacity could easily be asked to pay several thousand dollars per month. Access to 10x that throughput and capacity, however, would likely be less than double the price.
This steep decline in per-unit cost has resulted in a huge gap between organizations of different sizes.
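To make the economics concrete, the per-unit costs implied by the figures above can be sketched with illustrative numbers (the specific prices below are hypothetical assumptions for demonstration, not quotes from any real ISP):

```python
# Hypothetical ISP pricing tiers: (megabits/sec, gigabytes/month, USD/month).
# These figures are illustrative assumptions, not real-world quotes.
tiers = [
    (100, 10_000, 3000),    # small-business tier: 100 Mbps, 10 TB/month
    (1000, 100_000, 5500),  # 10x the throughput/capacity, <2x the price
]

for mbps, gb, price in tiers:
    print(f"{mbps:>5} Mbps: ${price / mbps:.2f} per Mbps, "
          f"${price / gb:.3f} per GB")
```

With these assumed prices, the small organization pays $30 per megabit while the larger buyer pays $5.50, more than a fivefold difference in unit cost for a tenfold difference in scale.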
This disparity mattered little when small organizations were simple consumers of websites. As new technologies gain mainstream acceptance, however, the ability to selectively handle workloads on-premises could result in significant savings.
The Internet of Things (IoT), Augmented Reality (AR) and even driverless cars are all examples of emerging workload categories that can generate significant amounts of data. This data can’t be processed on the devices that generate it. Furthermore, usefully processing that data requires combining it with larger datasets and then returning results to the originating devices.
Early vendors in these emerging spaces designed their products such that those workloads would stream all of their data to the public cloud for processing and analysis. This hasn’t worked out as planned.
Many of these emerging workloads not only generated enough data to overwhelm the capabilities of small and medium business internet connections, they also ran into latency issues. A driverless car’s cloud connectivity might be great when it’s being tested in the same city that hosts a large public cloud datacenter and be completely unusable when tested in another city across the country.
The answer to this challenge has – thus far – been edge computing. Public cloud providers are locating smaller data centers closer to the workloads that are using their services. This solves the latency issue, but not the connectivity issue.
Unlike cloud storage, backups, or video on demand, these emerging workloads demand real-time access to computation. Organizations can’t simply stick a caching server on-premises and solve the problem of smoothing out demand spikes by stretching out internet utilization over the whole day.
The solution is a new form of hybrid cloud computing. Vendors of emerging technologies are increasingly looking at locating some or all of the compute and storage capacity required inside customer networks. In some cases this takes the form of dedicated appliances, but in many cases it takes the form of black-box virtual appliances that customers are expected to run on infrastructure they provide.
This places smaller organizations in a difficult position. To remain competitive with larger rivals, they need to devote what IT talent they have to automating and streamlining business processes, not keeping their IT infrastructure operational.
Larger organizations can afford both more capable internet connectivity and a larger public cloud footprint. This one-two punch makes the IT operations of larger organizations more efficient than those of their smaller competitors, especially competitors still reliant on traditional IT implementations.
The IT efficiency gap leaves smaller organizations looking for any advantage they can find. Technologies like Hyperconverged Infrastructure (HCI), which remove multiple layers of IT management, become a sort of “easy button” solution.
On-premises IT isn’t going anywhere soon, especially while the costs of internet connectivity remain stalled. Until that changes, the only practical solution is to focus on making on-premises IT as automated and easy to use as possible in order to close the capability and efficiency gaps between organizations of different sizes.