Mike is Service Director for the Current Analysis Business Technology and Software service. Mike and his analyst team monitor and evaluate activities in the markets for Application Platforms, Collaboration Platforms, Data Center Technology, Enterprise Mobility Technology, Enterprise Networking, Enterprise Security, and Unified Communications and Contact Centers. Additionally, Mike reports on major technological, strategic, and tactical developments of companies that provide networking solutions deployed on premises to support enterprise business operations.
Customers have been apprehensive about continued significant investment in 802.11n with the 802.11ac technology on the horizon.
Cisco's 802.11ac guarantee, delivered via a simple tool-less module available in 2013, provides forward compatibility with 11ac for a capable, enterprise-class 802.11n access point purchased today.
I have had several conversations that started with the question of whether continued investment in 802.11n platforms is wise given the pending standardization of 802.11ac and the benefits it will bring (in late 2012/early 2013). Since the standard is not yet fully ratified, there has been no guarantee that the final specification would be supported by an enterprise vendor… until now. Cisco has announced that the Aironet 3600 access point will be eligible for a tool-less module upgrade (it simply snaps in and is secured with two thumbscrews on the back) in early 2013 (release date: TBD), allowing customers to take advantage of the 802.11n features the AP possesses today while ensuring investment protection via a forward-looking upgrade to 802.11ac. The module is not free, of course; as of this writing, it carried a suggested retail price of around $500 (potentially subject to change). However, given the access point's suggested retail of around $1,500 and the module SRP of $500, each access point would carry a CapEx of $2,000 (list) and provide a simple evolution from 11n to 11ac. Continue reading "Cisco Becomes First Enterprise WLAN Vendor to Commit to 802.11ac Support"→
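The per-AP math above is simple enough to sketch. The SRP figures are the list prices cited at the time of writing and remain subject to change:

```python
# Back-of-the-envelope CapEx for Cisco's snap-in 802.11ac module path.
# Both figures are list (SRP) prices cited in the post, subject to change.
AP_3600_SRP = 1500      # Aironet 3600 802.11n access point, list
MODULE_11AC_SRP = 500   # tool-less 802.11ac upgrade module, list

def capex_per_deployment(ap_count: int) -> int:
    """Total list CapEx to deploy 11n today and add the 11ac module later."""
    return ap_count * (AP_3600_SRP + MODULE_11AC_SRP)

print(capex_per_deployment(1))    # 2000 -- matches the $2,000 per-AP figure
print(capex_per_deployment(100))  # 200000 for a hypothetical 100-AP campus
```

Street pricing will of course differ from list, but the shape of the trade-off holds: roughly a third more CapEx per AP buys a migration path without a forklift replacement.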
Intelligent embedded network agents and sophisticated software heuristics provide key insights into information and performance patterns for predictable data consumption, but interpreting these requires talent
Humans remain the most valuable troubleshooting tool in the IT arsenal
Having worked in infrastructure in the '90s, doing my fair share of troubleshooting vampire taps, thick-LAN, and eventually thin-LAN (and those finicky terminations), I can say we've come a long way. Granted, at its most basic, we're troubleshooting low-voltage electrical wires in most wired infrastructure. Many switching platforms now embed sophisticated tools that can immediately detect a link loss, determine whether a damaged cable or connector is to blame, or correlate alerts from multiple devices to pinpoint the exact location of a 'noisy' device polluting the network. Advances such as these have increased efficiency, reduced trouble ticket resolution times, and freed up valuable resources to work on more complex challenges. With wireless access becoming the norm as more and more client devices go solely mobile, tools have generally kept pace and network management systems have slowly grown more capable and feature-rich. As cloud adoption rates increase and systems grow more diverse, though, the tools are likely to suffer a setback, with many disparate elements, both physical and virtual, contributing to a single application connection. Troubleshooting these will once again require significant technician involvement to determine root cause during an outage (and no, rebooting your client isn't the answer, Mr. Helpdesk). Physical and virtual agents must be deployed to collect statistics in real time and aggregate these bits into a collective perspective on the health of the network. Whether this is done with one of the extensible "framework" NMS systems or via vendor element management systems does not matter; at the heart of it, enterprises need to embrace a more sophisticated management model than they have in the past. Continue reading "IT Pains Evolving: Where's Holmes & Watson?"→
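As a rough illustration of the multi-device alert correlation described above, consider a minimal sketch; the device names, segment labels, and the simple "most-reported segment" rule are invented for the example, not any vendor's heuristic:

```python
from collections import Counter

# Hypothetical link-error alerts reported by several switches, each naming
# the cable segment on which errors were observed.
alerts = [
    {"switch": "sw-access-1", "segment": "closet-B/patch-12"},
    {"switch": "sw-access-2", "segment": "closet-B/patch-12"},
    {"switch": "sw-core-1",   "segment": "closet-B/patch-12"},
    {"switch": "sw-access-3", "segment": "closet-A/patch-04"},
]

def likely_culprit(alerts, min_reports=2):
    """Correlate alerts across devices: the segment independently reported
    by the most switches (at least min_reports of them) is the probable
    location of the noise source."""
    counts = Counter(a["segment"] for a in alerts)
    segment, n = counts.most_common(1)[0]
    return segment if n >= min_reports else None

print(likely_culprit(alerts))  # closet-B/patch-12
```

Real platforms weigh far more signal (error types, timing, topology), but the principle is the same: corroboration across vantage points narrows the search before a technician ever picks up a cable tester.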
With today’s modern professional so dependent on remotely located files and real-time, Web-based applications (sales force, Web portals, etc.), downtime is painful.
Device failures, misconfiguration issues, congestion, and interference all make the job of the IT infrastructure specialist more complex as dependence on the infrastructure increases daily.
On a recent trip, while I sat waiting for my flight to depart (the airport shall remain nameless), I hopped on the wireless network, connected via VPN, and started to download some material from the company intranet. About 50% into a large file download, the network link was lost, dropping the VPN and, of course, stopping the file transfer. The signal strength was good; since I was short on time, I did not break out the wireless troubleshooting tools to see how much additional noise in the area may have interfered. Instead, I pulled out my phone, tethered via strong 4G (yes, I'm lucky), and grabbed the file in a minute. However, I could see that several others in the immediate area had issues with the WLAN and were growing frustrated. It struck me how dependent we are on having convenient access to remain productive in these moments of lull time (unless you can get through an airport in 15 minutes consistently, you know what I'm referring to). Unfortunately, public area WLANs are not yet universally enterprise-grade, and a solid connection is not a given. I had grown accustomed to being able to connect in the airport and assumed it would be working as usual. We have this same assumption in our enterprise environments; why not in the highly trafficked areas? Continue reading "All I Ask For Is a Stable Connection"→
Interop attendance and vendor participation are often a bellwether for interest across much of enterprise IT
Several strong announcements once again highlight the importance of this show to vendors and the anticipated press and customer reaction
The weather is nice, the Mandalay Bay is busy, and the Eye Candy lounge is once again packed in the evenings, all of which points to one thing: Interop is back in Vegas. Over 350 vendors are demonstrating on the trade floor this year and the pitches appear to be well attended. Additionally, the various technology and business tracks are also nearing capacity for the most part, which indicates significant buyer interest. On the whole, it looks like the show is regaining its glory with new management and the associated benefits of some economic recovery. Continue reading "Interop 2012: Attendance Improved, Vendor Excitement, Energy High"→
Interop 2012 promises to be larger than the 2011 show, a good sign of enterprise interest and investment in network technologies.
Mobility, virtualization, fabrics, and cloud services will dominate much of the discussion surrounding the trade show.
With the 2012 Las Vegas Interop just over a week away, inquiries and invitations have been flooding in. From wireless to fabrics to virtualization and everything cloud-related, there is a great deal of energy and excitement around enterprise challenges and how best to address them (with a great deal of differentiation between offerings). UBM has brought together a compelling track list and the open sessions are almost certain to be full every day, so get registered and get to the rooms early to ensure a seat. Last year, many popular sessions were standing room only, and this year is almost certain to command similar audiences. Virtualization challenges, evolving management platforms, and vendor interoperability will be key for data center-centric pitches, while most campus and organizational issues touch upon consumerized IT and the host of challenges around BYOD. Continue reading “Interop 2012: Virtualize, Mobilize, Exercise (Bring Walking Shoes)”→
NEC, IBM BNT, HP, and Juniper have committed to support and released OpenFlow-capable code.
The greatest impediment to mass-market adoption will be commercial support.
If you have read the trade magazines and press materials around network technology over the last six to twelve months, you have noticed a decided uptick in the number of articles that focus on SDN and OpenFlow. These articles have covered the topics from the technology vendor perspective, as well as highlighting the academic and research communities' support and interest. There are commercial products available today through a joint effort from NEC and IBM. HP announced the general availability of OpenFlow-capable switch software on a large swath of its switches, and Juniper committed to supporting OpenFlow within the SDK for its Junos developer community. This is a demonstration of the networking community's belief that there is promise and a bright future for this enabling technology. Though originally conceived in the academic world to give researchers network control and segregation, it has since come to be seen as a remedy for certain enterprise ailments around network scale and flexibility. It is clear from the increase in inquiries from both enterprises and observers, however, that interest in this movement is reaching critical mass. The question is: When will it be commercially available, at scale, and supported in complex environments? Continue reading "Software-Defined Networking: Part 2"→
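For readers new to the concept, the core OpenFlow abstraction, a controller installing prioritized match/action rules into switch flow tables, can be sketched in a few lines. This is a toy model for illustration only; field names loosely echo OpenFlow 1.0 match fields, and nothing here is a vendor API:

```python
# Toy model of the OpenFlow idea: a central controller programs match/action
# rules into switch flow tables instead of each switch deciding forwarding
# logic locally.
flow_table = []

def install_flow(match: dict, action: str, priority: int = 0):
    """Controller-side: push a rule into the (single) flow table,
    kept sorted so higher-priority rules are checked first."""
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda f: -f["priority"])

def forward(packet: dict) -> str:
    """Switch-side: the first matching rule wins; packets matching no rule
    are punted to the controller for a decision."""
    for flow in flow_table:
        if all(packet.get(k) == v for k, v in flow["match"].items()):
            return flow["action"]
    return "send_to_controller"

install_flow({"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:port3", priority=10)
install_flow({"ip_dst": "10.0.0.5"}, "output:port1", priority=1)

print(forward({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # output:port3
print(forward({"ip_dst": "10.0.0.9"}))                 # send_to_controller
```

The commercial-readiness question in the post is precisely about hardening this model: flow-table scale in silicon, controller redundancy, and vendor support contracts, none of which a sketch like this captures.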
Many vendors offer embedded application platforms within either WAN or LAN equipment (or both), touting performance benefits.
Customer adoption remains tepid, however, and many often opt for appliances or servers/virtual machines due to convenience or familiarity.
Nearly every major networking vendor provides an application platform with which either their partners or customers themselves may embed applications. These platforms come in several forms, such as HP's ONE module, which resides in a switch; Cisco's UCS Express, a router/switch application services device; or Arista's new 7124SX switch, to name just a few. Potential benefits include, for example, improved packet processing performance, faster application response times, and deployment simplicity. Whether it is a lightweight application such as a DNS or DHCP server, or something more robust such as Exchange or a call management suite, these emerging application platforms appear to be gaining steam in the market. Vendors say ease of use, tight integration, and performance/responsiveness top their customers' lists of benefits, though operational simplicity and (perhaps more important) network team control also play a role. This last element is one of the most notable, as it demonstrates the divide that remains and inhibits enterprise growth into a more aggressive cloud adoption curve. The storage, server, application, and network teams often remain separate functions; therefore, appropriation of resources to their peer groups can oftentimes be slow. These quasi 'network appliances', however, give the network team back the keys to a server resource, with administration and control remaining within their domain. Continue reading "Embedded Network Applications: Friend or Foe?"→
For years, enterprises invested in ‘good enough’ networks merely to make sure the plumbing connected everything together functionally.
With cloud adoption rapidly increasing, fewer applications residing on-premises, and business continuity depending on 24×7 network access, enterprises need to re-think the network design and approach.
Enterprise networks were designed for years (and, to a large degree, still are) around three application areas: campus (or access), core, and data center. With cloud and 'anywhere' access to mission-critical applications, users must have quality access to resources no matter the connection point. Whether wired Ethernet, WLAN, wireless 3G/4G, or other means, downtime is unacceptable. Yet, as RFQs go out, access resilience is missing or getting surprisingly low priority. I contend that enterprises must raise the stakes and invest in redundant power, resilient management (whether in-box or in-stack), resilient protocols, and ultimately solid management interfaces (assurance, monitoring, orchestration, etc.). Now, it is true that redundant links have become more prominent with the availability of commercial cable and DSL at aggressive prices (relative to fractional T and frame a decade ago), yet within the campus, surprisingly few switches or WLAN deployments have redundant power supplies (RPS) or resilient, distributed uplinks. Continue reading "With Network Dependence Critical, Is Downtime Acceptable?"→
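The case for redundancy is easy to quantify. A minimal sketch, using illustrative availability figures rather than measured data, shows how a second independent uplink changes the downtime picture:

```python
# Rough availability math behind the argument for redundant uplinks and
# power: n independent components in parallel are down only if all n fail.
# The 99% per-component figure below is an assumption for illustration.

def parallel_availability(a: float, n: int = 2) -> float:
    """Availability of n redundant components, each with availability a,
    assuming independent failures."""
    return 1 - (1 - a) ** n

single = 0.99                          # one uplink at "two nines"
dual = parallel_availability(single)   # redundant, independent uplinks

HOURS_PER_YEAR = 24 * 365
print(f"single uplink downtime: {(1 - single) * HOURS_PER_YEAR:.1f} h/yr")  # 87.6
print(f"dual uplink downtime:   {(1 - dual) * HOURS_PER_YEAR:.2f} h/yr")    # 0.88
```

The independence assumption is the catch: uplinks sharing a conduit, a power feed, or a carrier fail together, which is exactly why the post argues for redundant power and distributed uplinks in combination rather than either alone.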
Carrier and enterprise data centers share many common elements.
The messaging will be different and tailored to each segment; however, the core technology remains the same.
As I was preparing for an upcoming event in London (Layer123’s Cloud-Net Summit, March 12-14) and working on characterizing the data center profiles that carriers are considering, it struck me that carrier and enterprise data centers are discussed and defined in completely separate ways. However, as one evaluates the technology within, there are fewer differences than similarities. Both enterprise and carrier data centers consider low-latency mechanics and optimizing point-to-point communications; provide secure domains that enable customers access to their resources; and rely on highly resilient, scalable architectures that allow for both growth and ironclad operational uptime. The list does not stop there. Continue reading “Degrees of Separation Between Carrier and Enterprise Data Centers Are Few”→
Even minor incremental upgrades may pay significant dividends
IT departments should consider additional WLAN surveys post-deployment due to the increase in radio noise and potential coverage issues that result
In the last six months, several vendors have announced products that incrementally improve the 802.11n solution, whether through clever antenna designs, intelligent noise suppression, or improved throughput performance (radio or otherwise). The key element is that most of these require additional investment in WLAN hardware or software. To some this may seem odd: in the past, unless a new technology or significant advantage was on offer, procuring budget was rarely worth the hassle of the RFP and budgeting process. However, we contend that with the demands placed on the WLAN today, even minor incremental investment in the network could pay handsome dividends, with returns in months. The number of tablets, smartphones, and other wireless-dependent devices being brought into the enterprise and consuming WLAN cycles continues to increase at an incredible pace. This in turn increases access to, and therefore usage of, the enterprise applications easily reachable from these mobile devices, the most notable being email. As more advanced applications become widely available and these users further increase their productivity, the correlation between WLAN performance and user productivity grows clearer. Therefore, improved WLAN coverage and performance may translate directly into increased user productivity and ultimately into increased revenue productivity (or at least user efficiency). Continue reading "Incremental WLAN Investments May Pay Significant Dividends"→
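A back-of-the-envelope payback calculation makes the "returns in months" claim concrete; every figure below is a hypothetical assumption for illustration, not data from the post:

```python
# Illustrative payback period for an incremental WLAN upgrade, treating
# recovered user time (at a loaded labor rate) as the benefit.

def payback_months(upgrade_cost: float, users: int,
                   hours_saved_per_user_per_month: float,
                   loaded_hourly_rate: float) -> float:
    """Months for monthly productivity gains to repay the one-time cost."""
    monthly_benefit = users * hours_saved_per_user_per_month * loaded_hourly_rate
    return upgrade_cost / monthly_benefit

# e.g. a hypothetical $20,000 antenna/software refresh serving 500 WLAN
# users, each regaining half an hour a month, at a $50/hr loaded cost:
print(f"{payback_months(20_000, 500, 0.5, 50):.1f} months")  # 1.6
```

Even halving the assumed time savings leaves payback within a quarter, which is the point of the post: incremental WLAN spend need not clear the same bar as a forklift upgrade to justify itself.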