High-profile outages, apprehension about data security, and compliance questions make many enterprises wary about moving mission-critical workloads to the cloud.
Yet, the flexibility, efficiency, and geographically dispersed nature of the cloud may make it a cost-effective disaster recovery/business continuity option for organizations, large and small.
There is a distinct push/pull dynamic to the cloud. Businesses are drawn to the flexibility, lower cost, and simplicity that the on-demand model promises. However, there is enough mystery in the cloud to raise questions about security, as well as enough headline-making outages to put up red flags about stability. Incidents such as Amazon Web Services’ twin outages this past summer, which affected both small customers and marquee businesses such as Netflix, make customers of all sizes wary about the cloud.
Pay attention to basic security procedures and attitudes.
Explore quantifying the risk from an insurance perspective.
According to Australia’s Defence Signals Directorate, winner of the SANS Institute’s 2011 US National Cybersecurity Innovation Award, most attacks on most networks could be defeated with just four key strategies: patching applications and always running the latest version of the software; keeping operating systems patched; keeping admin rights under strict control (and forbidding the use of administrative accounts for e-mail and browsing); and whitelisting applications. The basis of these recommendations is that security is a behavioral problem, not a technical problem. In other words, if users don’t follow basic security procedures and bring the right attitude, no amount of technology investment is going to create the needed security. Continue reading “KISS Your Security Measures”→
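Of the four strategies above, application whitelisting is the one most often misunderstood. The core idea can be sketched in a few lines of Python; this is a hypothetical illustration built on a SHA-256 digest list, not any vendor’s product or enforcement mechanism:

```python
# Minimal sketch of application whitelisting (illustrative only):
# a binary may run only if its SHA-256 digest is pre-approved.
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_whitelisted(path, approved_digests):
    """Allow execution only when the binary matches an approved fingerprint."""
    return sha256_of(path) in approved_digests
```

Real whitelisting enforcement lives in the operating system or an endpoint agent, but the principle is the same: anything not on the approved list simply does not run, which blunts most malware regardless of how it arrives.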
At a high level, U.S. businesses are taking similar strategic approaches to the introduction of both tablets and smartphones into the enterprise.
Not surprisingly, the majority of survey respondents want to buy these devices and manage them.
Current Analysis recently completed a survey of enterprises in the U.S. to determine strategic direction for the adoption of tablets and smartphones in enterprise networks. While it is often assumed that enterprises will approach the adoption of tablets and smartphones differently (with tablets treated as simple laptop replacements), our research suggests this is not the case. Continue reading “Addressing the Adoption of Tablets and Smartphones in the Enterprise”→
Your network service provider should be able to provide applications performance management and WAN optimization.
Poor applications performance damages corporate productivity; conversely, understanding how applications are performing and behaving should lead to cost savings and a faster network.
Like a trip to the dentist, a regular network performance audit is easy to put off; IT managers should nonetheless encourage their data network suppliers to work together to conduct such audits regularly, to learn how the data WAN is running between all connected business locations, including data centres. Performance at off-net, IPSec-connected sites continues to raise challenges, but all of the major telecom companies (AT&T, BT, Telefonica Multinational Solutions, Tata Communications, T-Systems, Orange Business Services and Verizon, for example) now support broad ranges of applications performance management and WAN optimization/acceleration tools, which can help considerably to improve applications response times over the available bandwidth. Typical third-party partners that carriers use for delivering such services include Blue Coat, Cisco, Juniper, Ipanema and Riverbed. Just to frame what one could be missing: a major European carrier has claimed 20 times faster WAN response and a 63% bandwidth reduction following one such consultation. Applications performance management is also evolving rapidly for mobile networks and devices; a handful of telcos are deploying a Gomez (Compuware) platform to assess end-user experience from mobile devices to corporate Web sites and Web applications, whilst Riverbed provides acceleration to mobile users via a ‘Steelhead Mobile’ client.
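To frame the arithmetic behind such claims: if optimization removes 63% of the traffic on the wire, the same payload consumes only 37% of its former bandwidth, so the link effectively carries roughly 2.7 times as much payload. A quick illustrative calculation (our own arithmetic, not vendor data):

```python
# Illustrative arithmetic only: what a quoted bandwidth reduction
# implies for the effective carrying capacity of an unchanged link.
def effective_capacity_multiplier(reduction_fraction):
    """If optimization removes `reduction_fraction` of wire traffic,
    the same link carries 1 / (1 - reduction_fraction) times the payload."""
    return 1.0 / (1.0 - reduction_fraction)

print(round(effective_capacity_multiplier(0.63), 2))  # prints 2.7
```

In other words, a claimed 63% reduction is equivalent to nearly tripling link capacity without buying more bandwidth, which is why such consultations can pay for themselves.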
The OpenSocial standard is being woven into enterprise collaboration applications, promoting both interoperability and extensibility.
OpenSocial is not yet complete or consistently implemented, requiring careful screening by would-be adopters.
For decades, enterprise IT departments have sought out and experimented with component development models that would both allow for application interoperability and speed development. Throughout that time, ORBs, Beans, and portlets have all had their turn and played influential roles in shaping how applications are built and interoperate. However, a small standard created by Google in 2007 may put all of those efforts in its rear view mirror, at least within the collaboration platform marketplace.
The “golden thread” linking carrier infrastructure and enterprise CPE for managed services is a myth.
There should be added value, but most suppliers have failed to deliver it.
It seems to be such a straightforward, simple proposition. Service providers build managed network and application solutions using a certain supplier’s infrastructure, and the resultant service is terminated elegantly and powerfully with enterprise premises infrastructure from the same supplier. Goodness flows liberally in this setup, through supplier-specific value-add features, increased manageability and better security. The evidence suggests, however, that this rarely happens. The managed services golden thread is a myth. Continue reading “The Mystifying Struggle to Mix Enterprise and SP Infrastructure”→
‘Hardware-assisted security’ is Intel’s preferred phrase to describe how security features in its silicon can be used to deliver additional functionality to new or enhanced software-based threat protection products.
McAfee has been working hard to make Intel’s vision a reality, first with last month’s DeepSAFE announcement and then this week with the first look at Deep Defender and Deep Command.
Last month, Intel and McAfee made a bit of a stir with their announcement of DeepSAFE, a technology that provides a foundational element for McAfee software to leverage security features in Intel silicon. DeepSAFE is important to Intel because it helps to justify the McAfee acquisition. It got the market’s attention because the technology was described repeatedly as “game-changing.” Fast forward to this week, and Intel/McAfee have released the first products that build on the DeepSAFE technology: Deep Defender and Deep Command. Deep Defender monitors system activity (i.e., CPU and memory) to detect and block rootkits. Deep Command is a plug-in for McAfee’s ePO management system that leverages Intel’s Active Management Technology to enable some very cool remote management and update capabilities on devices running Intel Core i5 vPro and Core i7 vPro processors.
The current generation of solutions for dealing with the use of personal mobile devices in the enterprise has been an unsatisfactory compromise between IT control and employee flexibility.
A new generation of technologies is poised to solve the dual-persona problem with less complexity and more flexibility for both the business and the employee.
Enterprises are changing their minds about allowing employees to bring their own mobile devices to work, because it’s actually a huge money-saver. Why shell out scarce dollars for new corporate-owned cell phones when employees are already buying the latest devices? Corporate-liable ownership is starting to go the way of company cars and even company-owned laptops. The problem is that smartphones are now frighteningly capable computers: they can access internal corporate information behind the firewall, store confidential emails, documents and customer data, surf the internet, and become virus-ridden, and they are much more likely than laptops to be left in a taxicab (or a bar). Continue reading “New Developments in BYOD: How to Keep IT and Employees Happy at the Same Time”→
OpenFlow will move from the academic to the commercial in the next 24 months.
Vendors’ perception of OpenFlow will determine whether they resist or embrace the technology.
From its beginnings at Stanford as a research project to becoming a technology movement that has start-ups building businesses around it, OpenFlow has emerged as a topic of discussion in many networking circles today. OpenFlow is essentially a proposal to add a hook into an existing network device that enables control and forwarding actions to be centrally managed off-device and then implemented identically across all devices in the network. Whether this control involves table replication, routing actions, security policies, or even access control list (ACL) population, all are possible with the OpenFlow architecture. A device could merely execute packet-handling directions rather than having to compute each forwarding decision itself. This, in turn, could radically reduce the processing requirements on the device, in addition to enabling consistent policy application across a very large number of devices (theoretically, tens to hundreds of thousands), which would vastly simplify management. OpenFlow also offers the operator the ability to integrate intelligence into the network, executing and applying QoS and security policies without relying on the network device’s operating system or application awareness. While some vendors offer devices that possess this ability today on a select portion of their portfolios, there are exceptionally few environments that are 100% standardized on a single vendor’s latest generation of products. Continue reading “OpenFlow: Distributed Network Nirvana or Academic Science Project”→
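The split between centralized control and dumb, fast forwarding can be sketched in a few lines of Python. These are toy classes for illustration; they are not the OpenFlow wire protocol or any real controller’s API:

```python
# Toy sketch of the OpenFlow control/forwarding split (illustrative only):
# a central controller computes match->action rules once and pushes
# identical copies to every switch; each switch merely looks up and
# executes, never making policy decisions itself.

class Switch:
    def __init__(self):
        self.flow_table = {}  # (src, dst) -> action string

    def install_flow(self, match, action):
        self.flow_table[match] = action  # pushed down by the controller

    def handle_packet(self, src, dst):
        # Pure table lookup: no local routing or ACL logic on the device.
        return self.flow_table.get((src, dst), "send_to_controller")

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        # One decision, applied identically network-wide.
        for sw in self.switches:
            sw.install_flow(match, action)

switches = [Switch() for _ in range(3)]
ctl = Controller(switches)
ctl.push_policy(("10.0.0.1", "10.0.0.2"), "forward:port2")
ctl.push_policy(("10.0.0.1", "badhost"), "drop")  # ACL-style rule

print(switches[0].handle_packet("10.0.0.1", "10.0.0.2"))  # forward:port2
print(switches[2].handle_packet("10.0.0.1", "badhost"))   # drop
print(switches[1].handle_packet("10.0.0.9", "10.0.0.2"))  # send_to_controller
```

Note the default action: a packet matching no installed flow is punted to the controller, which is exactly how a real OpenFlow switch learns new flows, and why the controller can apply one consistent policy across thousands of devices.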
Look at how service providers support communication solutions, not just at the vendors and products. For starters, ask about customer support center locations and processes, professional services staff numbers by geography, the number of people with certifications, and SIP trunking availability.
Consider and compare carriers and IT service providers, depending on your needs; you may be surprised to find that carriers are looking more like ‘integrators’ these days.
Service providers have been facing tough times, with voice and data revenues falling year over year, new ‘advanced services’ slow to make up the shortfall, and enterprises desiring (and often needing) to get more for less. The need to transform is ongoing, though the focus on technology and new ‘products’ can distract service providers. While innovation and technology are important, pretty soon every service provider ends up selling a similar IP PBX or Microsoft or Cisco UC service, and differentiation in the competitive landscape disappears. Enterprises will end up choosing the service providers who can deliver outstanding customer service and support together with a ‘cost-effective solution’ designed for their business needs, one that can be adapted for future needs by skilled integration and professional services staff. Continue reading “Communicate Using Solutions, Not Products”→