Mike is Service Director for the Current Analysis Business Technology and Software service. Mike and his analyst team monitor and evaluate activities in the markets for Application Platforms, Collaboration Platforms, Data Center Technology, Enterprise Mobility Technology, Enterprise Networking, Enterprise Security, and Unified Communications and Contact Centers. Additionally, Mike reports on major technological, strategic and tactical developments of companies that provide networking solutions deployed on-premises to support enterprise business operations.
Enterprises struggle with whether a programmatic network is a developer concept, a networking concept, or both
The long-term success of SDN will ultimately depend on solutions being simple to integrate across multi-vendor environments
At the Open Networking Summit this week in Santa Clara, the largest SDN conference and the marquee event for the Open Networking Foundation (the leading SDN standards body), it is not lost on this attendee that the event runs concurrently with the OpenStack event, put on by a parallel body that also fosters open initiatives and technology (though for compute and the software stack rather than networking and the L1-3 services stack). While this particular conflict was not intentional, it is an apt symbol of the challenge faced by enterprises seeking to incorporate more “open” technologies into their ecosystems. The question is whether to pursue the early-adoption path, as with SDN and the several solutions that today are configured more by code than by CLI, or to wait for the fully “baked” solutions expected to arrive in the future. The skill sets, staffing challenges, and operational paradigms of the two paths differ radically. One is often pursued for a solution that cannot be accomplished by other means; the other is more focused on resource optimization, solution maintenance, and minimized disruption. (Even without technical interruption, introducing a new technology such as SDN is, at minimum, highly disruptive to people and processes.)
The IT management toolkit consists of a dozen or more tools addressing element management, event stream correlation and trending, business process automation, and virtualization control, to name a few; integrating them is a complex task, and one that typically falls to consultants or to DevOps.
APIs and pre-tested integrations will become a priority feature that enterprises evaluate when making technology decisions.
Gone are the days of choosing a point management product for a specific problem or vendor device and installing it in parallel with other dedicated-task tools. Today’s IT management buying centers must also evaluate integrations with their existing toolsets, many of which were never tested by the vendor. Network management vendors’ partner programs assist with integration and testing, but only for the small subset of third parties that have joined the program. These systems include element management, virtualization software, an event framework for operations and security streams, server and storage optimization tools, network tools, and business process toolsets, all of which should, but may not, work together today. The average enterprise’s list of management software is much longer still, rarely well integrated, and a hurdle to greater IT efficiency. Much of this integration falls to a role that has always been a jack-of-all-(integration)-trades: the DevOps administrator. Continue reading “What Does Management Mean to You, How Big is It, & Can It Be Done?”→
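In practice, much of the integration glue the DevOps role provides boils down to small scripts bridging one tool’s API to another’s. A minimal sketch of that pattern, assuming hypothetical REST endpoints (the URLs, field names, and token handling here are illustrative stand-ins, not any vendor’s actual API):

```python
import json
import urllib.request

# Hypothetical endpoints -- stand-ins for an element manager and an
# event-correlation framework; real products each expose their own APIs.
NMS_EVENTS_URL = "https://nms.example.com/api/v1/events"
CORRELATOR_URL = "https://correlator.example.com/api/v1/ingest"
API_TOKEN = "changeme"

def fetch_events(url, token):
    """Pull the latest alarms from the element manager's REST API."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def normalize(event):
    """Map one tool's event schema onto the correlator's expected fields."""
    return {
        "source": event.get("device", "unknown"),
        "severity": event.get("sev", "info"),
        "message": event.get("text", ""),
    }

def forward(events, url, token):
    """Push normalized events into the correlation framework."""
    body = json.dumps([normalize(e) for e in events]).encode()
    req = urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The schema-mapping step is where most of the real effort lives: every additional tool pair means another `normalize()` to write and maintain, which is exactly why pre-tested integrations carry so much weight in buying decisions.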
SDN may have begun academically focused on enterprise LAN needs, but carrier interest is intense and driving innovation as well.
More SDN solutions are materializing out of PowerPoints and into reality today, with some creative and innovative answers to previously challenging problems – but when will adoption begin to accelerate?
SDN may be the most exciting networking technology since the advent of Internet Protocol (IP). At its most fundamental, the concept of SDN provides a construct by which we can discuss the implementation of services within the fabric of the network itself, whether by decoupled routing and flow tables or by the abstraction of more advanced services embedded within the network intelligence itself. There are many approaches, many proposals, and of course many vendors vying for a piece of the pie that is still in the oven. It is likely that 2013 is when early momentum will begin. Cisco’s C-Scape in 2012 promised that several elements of its onePK solution would materialize in H1 2013; Juniper has come out with its own rather encompassing SDN vision; and HP has had enough time in the market to get traction (given the long sales cycles on a solution this complex), to name a few. There are some truly great technology suppliers working (mostly) in concert to move the proverbial ball forward. However, the question I am asked most often remains “Is the need real?”, i.e., whether the market currently has a particular need that cannot be addressed or solved another way, to which I have previously replied “Not yet, but soon.” Continue reading “Software Defined Networks: Is the Technology Catching Up with the Hype?”→
Enterprise access networks are still largely wired today, but with wireless stability and performance improvements providing a relatively similar experience, the all-wireless campus access environment may be imminent.
How much will access switch port demand taper off once 802.11ac begins to ship?
In a recent conversation with a colleague, we were discussing how quickly (or whether) the enterprise access environment will shift from traditional wired access to an all-wireless environment. While nearly every enterprise has some wireless support today (of the many enterprises with which I have spoken, I cannot name one that does not), very few have committed to solely wireless access for clients. Printers, the odd workstation or two, and other peripherals may always demand some wired access, but with the prevalence of the mobile worker and the multitude of devices they tote around, it is easy to envision the WLAN in any campus becoming the access method of choice. In the past year, the market has seen an aggressive maturation of unified access solution messaging, with some vendors extending into the adjacent space of mobile device management (where acquisition and/or consolidation will likely occur in the next 18 months). Continue reading “Where Is the Enterprise Campus Network Heading?”→
The market’s early optimism towards cloud may have been tempered by skeptics and by the overuse of ‘cloud washing’ campaigns (i.e., everything in the cloud, attached to the cloud, or solved by a cloud of some sort)
Enterprises remain optimistic, though, as many have embraced some form of cloud with measured success and are asking good questions about what to do next, how to move forward, and how to leverage the experiences and proofs of concept others have produced
Last week’s Interop show was a success by many measures. It offered users and vendors the opportunity to interact on critical topics. The track sessions were reasonably attended, though no one had to fight for seats at this event. There were few logistical issues, due in large part to the efforts by UBM TechWeb, the company behind the Interop magic (and a great crew running the show). Continue reading “Interop New York 2012: A Variably Cloudy Perspective”→
In the race to get OpenFlow and SDN onto new networking RFPs, enterprises must remember that controlling flow-based traffic patterns will serve to address a couple of weaknesses of networks past; however, edge-to-edge switching latency, performance, and more remain crucial.
For the first two to three years, as enterprises prove OpenFlow and early SDN technologies within their environments (and to themselves), the prevalent model will be a hybrid one, in which a vendor’s high-speed fabric and flow control run concurrently on a device (Cisco, Brocade, Juniper, Arista, etc.).
I find it amusing that the OpenFlow discussion has polarized pockets of the IT industry so completely. It is a great innovation, absolutely, and it will address certain limitations and free up otherwise locked networking resources. However, you get the sense that any given author of one of these articles is slightly biased toward applications, servers, or networks. The application purist, who consumes all resources in service of application architecture, wishes to remove inhibitive deployment times from the infrastructure and therefore does not focus on the minutiae of each domain’s critical factors. The server teams have long sought to let their own domain constituency deploy high-speed interconnects between adjacent servers; in fact, several technologies from the biggest server vendors provide just such an answer. The network team members, thrust into the infrastructure limelight by the efficiencies to be gained, struggle with this newfound stardom and with the education they must acquire to elevate all of the network attributes for which they are responsible. Many enterprise IT buyers writing RFPs are adding (or have already added) some flavor of SDN language to the mix, which is good, but there is merit in having each discipline’s experts contribute to the RFP itself. Server administrators are best at understanding memory riser architectures and dealing with firmware ‘fun’ on their platforms; network administrators are best suited to defining the wired architecture and its intricacies; and application specialists can best address acceleration needs and OSI layers 4-7. OpenFlow and SDN are amazing, but fundamental architecture needs remain. Continue reading “Despite OpenFlow’s Promises, Switch Architecture Still Matters”→
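At its core, the flow-based control the discussion revolves around reduces to a table of match criteria and actions, consulted in priority order, with a controller installing the entries. A minimal sketch of that lookup (the field names and miss behavior are simplified illustrations, not the OpenFlow wire protocol):

```python
# Each entry: a priority, a match (a subset of packet fields), and an action.
FLOW_TABLE = [
    {"priority": 200, "match": {"dst_ip": "10.0.0.5", "tcp_port": 80},
     "action": "forward:port3"},
    {"priority": 100, "match": {"dst_ip": "10.0.0.5"},
     "action": "forward:port2"},
    {"priority": 0, "match": {},
     "action": "send_to_controller"},
]

def matches(match, packet):
    """True if every field constraint in the match is satisfied by the packet."""
    return all(packet.get(k) == v for k, v in match.items())

def lookup(packet, table=FLOW_TABLE):
    """Return the action of the highest-priority matching entry.

    The empty match at priority 0 acts as the table-miss entry: unmatched
    traffic is punted to the controller, which can then install a new flow
    entry -- the reactive model early SDN deployments used.
    """
    for entry in sorted(table, key=lambda e: -e["priority"]):
        if matches(entry["match"], packet):
            return entry["action"]
    return "drop"
```

A web packet (`{"dst_ip": "10.0.0.5", "tcp_port": 80}`) hits the most specific entry, other traffic to that host falls through to the broader one, and everything else goes to the controller. What the model does not capture is exactly the post’s point: the silicon executing those lookups still determines edge-to-edge latency and throughput.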
The software-defined data center is a concept that encapsulates networking, virtualization, storage, orchestration, and ultimately, a truly agile framework.
Orchestration and manageability must be designed into a solution, rather than being bolted on, to yield the best results.
It became evident during VMworld that the notion of a software-defined data center is central to VMware’s strategy. However, when you pause a moment and reflect on where the tech industry has been heading for the last five to ten years, it is easy to see elements of this notion accelerating over time, coming to dominate design principles across the disciplines that constitute the DC (storage, compute, network, and operations platforms) in the last few years. Software-defined networking (SDN) is perhaps the most visible and actively marketed software-defined concept, but once one realizes that virtualization is just another software-defined concept (compute/machines), it is easy to see the theme encompassing practically every element of DC technology, not to mention platforms and applications already managed as software elements themselves. The logical question here is: if all elements within a data center are software-controlled, what becomes of the technology characteristics of fabrics, SPB-M/TRILL, FCoE, and the other physical network elements? The answer is that the technology differentiation of the devices constituting the infrastructure does not go away or diminish with the SD DC; rather, it becomes instrumental, as the devices themselves must each integrate with upper-level orchestration platforms (e.g., VMware vCenter/vCloud Director). Continue reading “Is Your Network Ready for the Software-Defined Data Center?”→
Software-defined networking (SDN) is a massive, all-encompassing concept which spans campus, data center, WAN, and carrier backbone networks (pretty much every type of networking infrastructure imaginable) and is being touted by some as capable of solving nearly every networking issue that has plagued us for the last 20 years; and yes, it does make coffee in the morning for you (no, not really).
Eventually, SDN may do most of the things claimed, but getting there will take a long time and some IT fundamentals and best practices will remain critical moving forward.
The OpenFlow protocol and (more recently) SDN have been discussed and put forth as solutions to complex, hierarchical, legacy architectures that were built up over years to solve the complex performance and management needs of enterprises and service providers alike. Yes, the technology for each type of deployment was different (MPLS vs. OSPF vs. multicast, etc.), based on various criteria, but regardless of the technology, each vertical or segment executed on best practices learned over years of (sometimes painful) experience. The result was a set of processes and instructions, if you will, that each IT or production environment team could leverage as they looked to new protocols or ports or architectures to avoid the same pitfalls encountered before. SDN promises to eliminate the need for several of these, but a few still demand strict adherence or consideration. Continue reading “SDN Market Frenzy: Your Network Best Practices Remain Important!”→
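One way those hard-won best practices carry over to an SDN world is as automated sanity checks run before new flow rules are pushed, much as change-control checklists once vetted CLI changes. A hedged sketch under assumed conventions (the rule schema and the specific checks are illustrative, not from any standard):

```python
def validate_rules(rules):
    """Apply a few classic change-control checks to a proposed rule set.

    Returns a list of warnings; an empty list means the set passed.
    The three checks mirror long-standing network best practices:
    an explicit default behavior, unambiguous ordering, and no rule
    silently shadowed by a broader, higher-priority one.
    """
    warnings = []

    # 1. There must be an explicit catch-all (table-miss) rule.
    if not any(r["match"] == {} for r in rules):
        warnings.append("no default (table-miss) rule defined")

    # 2. Duplicate priorities make match order implementation-defined.
    priorities = [r["priority"] for r in rules]
    if len(priorities) != len(set(priorities)):
        warnings.append("duplicate priorities found")

    # 3. A higher-priority rule whose constraints are a subset of a
    #    lower-priority rule's matches everything that rule would match,
    #    shadowing it -- a classic source of 'why is this traffic not
    #    flowing' tickets.
    for hi in rules:
        for lo in rules:
            if (hi["priority"] > lo["priority"]
                    and set(hi["match"].items()) <= set(lo["match"].items())):
                warnings.append(
                    f"rule at priority {lo['priority']} shadowed by {hi['priority']}")
    return warnings
```

Running such a validator in the deployment pipeline is the SDN analogue of a peer-reviewed change window: the protocols change, but the discipline does not.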
802.11n, which topped out at roughly 500 Mbps in ideal cases, never filled the 1 Gbps links to which many APs were connected, avoiding bottlenecks at the access port itself (though potentially congesting aggregation links).
802.11ac, with its initial specification release capably supporting 1.3 Gbps throughput on a single AP, may force a ‘re-think’ on access point attachment and how traffic will be routed onto the physical infrastructure and ultimately back to the data center or services location.
Wireless enterprise networks are a must today for both efficiency and convenience; more frankly, they are necessary to be competitive. The market gets this, as indicated by the continued healthy growth of the WLAN segment. Originally, 100 Mbps links often connected 802.11a/b/g APs, and given that actual throughput rarely exceeded 802.11g’s nominal 54 Mbps, no bottlenecks were encountered. Then came 802.11n; in many cases it was either preceded by or coupled with a Gigabit network upgrade, sufficient to support the initial 150/300 Mbps rates, scaling to 600 Mbps (in a perfect world) across multiple radio technologies. This is still well below the 1 Gbps links that in some cases supply both connectivity and power (PoE) to the 802.11n access points. However, with the next-generation 802.11ac specification nearing completion and its initial release providing up to 1.3 Gbps of throughput, we reach the first bottleneck from the AP to the wired environment. No debate has yet surfaced in public forums over how one would wire and architect an 11ac network, but it is certain to become an issue in the coming quarters as commercial products become available. There is currently no specification for 10GBASE-T PoE, few (if any) access points have historically offered multiple Ethernet ports to connect to the network, and the current link technology employed (1GbE) will be oversubscribed. Continue reading “Wireless: 802.11ac May Break Your Wired Network”→
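The bottleneck arithmetic above is easy to make concrete. Taking the nominal spec-sheet figures cited in the text (54 Mbps for 802.11g, 600 Mbps peak for 802.11n, 1.3 Gbps for first-wave 802.11ac, a 1 GbE uplink), a quick back-of-envelope:

```python
def uplink_oversubscription(radio_mbps, uplink_mbps=1000):
    """Ratio of peak radio throughput to the AP's wired uplink.

    A ratio <= 1.0 means the wired link can absorb the radio's peak;
    > 1.0 means the AP can offer more traffic than its uplink carries.
    These are nominal spec rates -- real-world throughput is lower.
    """
    return radio_mbps / uplink_mbps

# 802.11g AP on a 100 Mbps link: 54/100 -> no bottleneck.
assert uplink_oversubscription(54, 100) <= 1.0
# 802.11n at its 600 Mbps ceiling on 1 GbE: still headroom.
assert uplink_oversubscription(600) <= 1.0
# First-wave 802.11ac at 1.3 Gbps on the same 1 GbE uplink:
# 1.3x oversubscribed -- the wire, not the air, is now the constraint.
assert uplink_oversubscription(1300) > 1.0
```

The ratio understates the later problem: subsequent 802.11ac waves promise multi-gigabit radio rates, pushing the gap wider unless AP uplinks change.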
You really can’t run an enterprise without some level of support contract these days, given infrastructure complexity
Your own talent pool & business needs will drive the level of support contract required for your environment
There are many case studies and hot topics that have circulated for years (and will continue to for many more, I’d wager) about how much support contracts cost. However, I’ll ask you this: “Do you want to be the one responsible for explaining that the network outage could have been avoided, or considerably shortened, with expert help available?” The question isn’t whether you should have access to expert help. The question is what level of expertise is appropriate for your organization. This in turn depends on the systems in question; how many vendors are involved (at which point you begin to drift from a vendor-specific support contract into a more involved services engagement with an integrator/partner, which is out of scope for this particular blog); and what kind of investment in IT staffing you’ve made, and will continue to make: certifications, time out of office, headcount, expertise focus, business metrics, uptime requirements, line-of-business commitments for network uptime, etc. It’s quite simple, right? (Tongue firmly in cheek.) At minimum, you should have a standard business-hours call center contract, which also gives you access to software updates. Not every vendor requires a contract for this, and where it comes free it is a significant perk for customers. In mission-critical situations, though, when a problem can range from a simple configuration error (which, in my experience, is increasingly rare) to the more grievous hardware failure for which you may not have a hot spare on site (these lessons are learned once, painfully, and then never repeated), you need expedited assistance. When a two- or four-hour support contract is put in place, a vendor or local partner is trained and carries inventory for every SKU that such a high-alert contract may require.
After all, when an outage occurs, it could be trivial, it could represent millions of dollars per hour in lost revenue, or it could result in potential litigation (think about emergency services or when lives are on the line). This is the vendor-side support model. Continue reading “Help! My Network is Broken!”→
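The cost question the post raises can be framed as a simple expected-value comparison. A hedged sketch with made-up numbers (the outage rates, hourly revenue loss, repair-time deltas, and contract price below are purely illustrative; plug in your own):

```python
def expected_outage_cost(outages_per_year, hours_per_outage, cost_per_hour):
    """Expected annual cost of downtime for a given mean repair time."""
    return outages_per_year * hours_per_outage * cost_per_hour

# Illustrative inputs -- not benchmarks.
OUTAGES_PER_YEAR = 2
COST_PER_HOUR = 50_000       # lost revenue per hour of outage
HOURS_UNSUPPORTED = 8        # mean time to repair without vendor help
HOURS_WITH_CONTRACT = 2      # with a two/four-hour response contract
CONTRACT_PRICE = 120_000     # annual support contract cost

savings = (expected_outage_cost(OUTAGES_PER_YEAR, HOURS_UNSUPPORTED, COST_PER_HOUR)
           - expected_outage_cost(OUTAGES_PER_YEAR, HOURS_WITH_CONTRACT, COST_PER_HOUR))
# 2 outages x (8 - 2) hours saved x $50,000/hour = $600,000 in avoided
# downtime -- the contract pays for itself whenever that exceeds its price.
worth_it = savings > CONTRACT_PRICE
```

The litigation and life-safety scenarios above don’t fit neatly into such a formula, which is precisely why the answer is rarely “no contract at all” and usually “which tier.”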