by Roberta J. Witty | February 20, 2013 | 3 Comments
On 12 February 2013, emergency/mass notification services (EMNS) vendor xMatters purchased the intellectual property of Bamboo, an enterprise-level incident management mobile app, from Deloitte Australia for an undisclosed amount. Members of Deloitte’s risk practice are assisting with the transition, and application developers from the Bamboo team are employed in the build-out. The acquisition should appeal to companies looking to integrate EMNS and offline access to recovery plans in a single mobile platform.
Bamboo has now found a software development home in which to enhance its business continuity management software. Gartner believes xMatters has the greatest opportunity to grow Bamboo adoption by supporting the import of Microsoft Word and Excel files, as well as a SharePoint Web service API, for organizations that do not currently use business continuity management planning (BCMP) tools. Gartner also believes xMatters should evaluate its EMNS pricing strategy to make it more competitive with the rest of the market, increasing Bamboo adoption among prospects that do not already have an EMNS tool.
xMatters adds a mobile app that supports push technology for recovery plan updates, role-based and offline recovery plan access, and GIS-enabled tracking, all capabilities used for real-time incident management. Integration with the xMatters IT alerting system may be a future enhancement.
Before this acquisition, Gartner observed limited Bamboo adoption by our clients, who cited additional costs compared to perceived benefits; Australia-only product support with uncertain future support from Deloitte (which is not known for mobile application development); and limited business continuity management tool integration.
In both current and combined forms, Bamboo powered by xMatters lacks many of the capabilities of the larger BCMP market, particularly related to planning functions, including:
- Business impact analysis
- Risk assessment
- Recovery plan development, maintenance and exercising
But the offering could appeal to xMatters customers that lack a mobile app for real-time incident management.
BCMP tool customers: If you are looking for EMNS and enhanced real-time incident management capabilities through a mobile device, encourage your BCMP vendor to integrate with xMatters.
EMNS tool prospects: Consider xMatters because it now has an enhanced mobile app for offline recovery plan access, emergency contact list dialing and GIS for resource tracking — all used for real-time incident management support.
BCMP vendors that only have mobile Web browser access: If you are looking for an EMNS tie-in, either integrate with xMatters or enhance your mobile app to provide push technology for recovery plan updates, role-based and offline access to plans through the mobile device, and EMNS integration.
xMatters EMNS competitors: Enhance your mobile app to support push technology for recovery plan updates, role-based and offline recovery plan access and GIS-enabled resource tracking. (EMNS leaders currently support GIS-enabled resource tracking.)
Existing Bamboo customers: Discuss with your EMNS vendor whether it will continue supporting Bamboo, as your vendor may be a direct competitor of xMatters.
“Best Practices: EMNS Implementation Advice” — EMNS implemented without a well-considered plan can hurt the constituencies that rely on these services for everything from basic safety to basic survival. By Roberta Witty and John Girard
“Market Analysis in Depth: EMNS Magic Quadrant” — Buyers of EMNS should use this research to guide their vendor selection projects. By Roberta Witty, John Girard and Catherine Goldstein
Category: Event Technology Tags: Bamboo, BCM, BCM planning, BCMP, Business Continuity Management, Business Continuity Planning, Business Impact Analysis, Emergency Notification, EMNS, Gartner, Mass Notification, Recovery Planning, Recovery Plans, Roberta Witty, xMatters
by Roberta J. Witty | February 8, 2013 | Comments Off
Few of us paid much attention to the forecasting systems that meteorologists use. When we did in the U.S., we were cynical: the forecasts so often seemed wrong that they were almost a joke. But Superstorm Sandy quickly brought the real problem to light: the European system predicted a direct hit on NYC three days before the U.S. system did, and the U.S. system got there only on the day of the storm – a bit too late, I would say. WOW! Why?
There is a significant difference between the two: the European system has faster computers, more observational data and better initialization. This blog post from www.accuweather.com and a video from the Today Show explain it all.
Fortunately for the U.S., the Europeans are allowing us to use their data, and have helped us figure out why the U.S. model isn’t performing as well. But it will take time and money to fix. So when planning your next outdoor party, make sure the forecast you consult is from the European model.
Category: Advisory BCM Process Event Technology Tags: BCM, Business Continuity Management, Business Continuity Planning, Crisis Management, ECMWF, Gartner, GFS, NOAA, Roberta Witty, weather, weather forecast
by Roberta J. Witty | February 4, 2013 | Comments Off
Friday’s Bank of America outage reminded me of an increasingly frequent question we receive on third-party liability for an operating outage. The use of cloud service providers is making this question top of mind for many organizations. But it’s not just cloud providers you need to worry about: it’s all of your third-party providers, including business processors and IT service providers. Nearly all contracts contain a force majeure clause excluding liability for outages caused by acts of God, war, terrorism, civil disturbance, court order, third-party performance or nonperformance, strikes, work stoppages and the like. Another interesting twist we’ve started to see in contracts is a $0 valuation of the data being held or processed by the third party.
Neither I nor Gartner is a legal advisor, so consult your own legal advisor on how to address the liability issue in your contracts. Our findings from recent research on third-party liability and data valuation might provide useful background for those discussions.
- Data valuation is very difficult and largely unaddressed.
- Since few if any of us have perfect foresight into the future uses of data, the most one can do is estimate the probable maximum value of data elements, which is no way to do risk management.
- Organizations can buy data insurance, but it is very expensive and insurers have no standard approach to setting policy premiums.
- We consider it extremely unlikely that a vendor/service provider would take on business impact liability for an outage based on data valuation. One method might be to have customers pay a premium for the SP service, with that premium going into a pool the vendor would use for liability payouts if an outage occurs.
- We do see some contracts (for cloud SPs) with a “per incident” minimum that the SP will pay the customer if there is an outage. Most of these payouts relate to data loss, especially when the SP is processing personally identifiable information (PII). How these minimums are calculated is unknown, but organizations should try to recover more from the SP than just the fees for the outage period. One option is to cap recovery at, say, 12 months of fees; another is to craft contract terms that base the fees returned to the customer on the duration of the outage.
- Customers require the SP to hold higher levels of liability insurance:
- Commercial general liability (CGL); example: $1 million for each occurrence of bodily injury, including death, and $1 million for each occurrence of property damage. This coverage protects against all liability exposures of a business except those specifically excluded. Note that it is limited to bodily injury and property damage; it includes the cost of defending against suits from third parties, and pays only if the insured is found liable for the loss.
- Liability insurance for professionals; example: One Million Dollars ($1,000,000) per occurrence and Three Million Dollars ($3,000,000) in the aggregate, including coverage for X, Y and Z. The policies name the client as an additional insured and are written as primary policies, not contributing to any other policy the client may have. The provider must supply certificates of insurance. This coverage protects professionals in various fields (e.g., lawyers’ professional liability insurance, manufacturers’ professional liability insurance). It essentially covers “errors and omissions” and is not limited to bodily injury or property damage.
- Umbrella (excess) liability insurance; example: not less than four million dollars ($4,000,000) per occurrence. CGL and professional liability insurance are written on a “primary” basis, usually with a deductible or “self-insured retention,” and usually with a limit of liability of about $1 million per occurrence. Excess liability policies increase the limit of liability on a specific CGL or professional liability policy; umbrella liability policies increase the limit across several such policies. Limits of liability in this market can reach hundreds of millions of dollars.
- Organizations can buy contingent business interruption insurance (CBII) to cover supplier outages. To buy CBII you first need to have a business interruption insurance (BII) policy in place. To buy BII you need to have a property insurance policy in place. BII and CBII are property insurance policies that cover primarily “loss of earnings” following a property insurance loss. Sometimes these coverage points are included in a company’s property insurance policy, and sometimes they are written separately. Casualty policies do not come into play.
- Valuing lost revenue (in the case of business interruption insurance) is a tricky calculation, and usually involves looking at the average revenue of a company for the three months prior to a loss, and adjusting for the seasonal revenue ups and downs of some businesses.
- We do not have data regarding a SP’s liability to all of its customers if the SP has an outage.
- After suffering the impact of an SP outage, an organization can sue the SP because the returned fees are rarely enough to compensate the customer. Valuing losses in these cases sometimes depends on the creativity of the attorneys and on case law. SPs and storage vendors provide remedies in contracts to limit their own exposure, not to make their customers whole. This also keeps the vendors’ insurance costs lower than they would be if remedies were based on the value of the data lost and its impact on a company’s reputation, revenue and future success.
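The two calculations sketched in the list above, a fee refund tied to outage duration (capped at 12 months of fees) and a lost-revenue estimate based on the trailing three months of revenue with a seasonal adjustment, can be illustrated as follows. This is a minimal sketch; the function names, the 730-hour month, the cap and the seasonal factor are illustrative assumptions, not contractual or actuarial standards.

```python
# Illustrative only: variable names, the 730-hour month and the
# 12-month cap are assumptions, not standards from any contract.

def prorated_fee_refund(monthly_fee, outage_hours, hours_in_month=730):
    """Fee credit proportional to outage duration, capped at 12 months of fees."""
    refund = monthly_fee * (outage_hours / hours_in_month)
    return min(refund, 12 * monthly_fee)

def estimated_lost_revenue(trailing_three_months, seasonal_factor=1.0):
    """Average the three months of revenue prior to the loss, then
    adjust for the seasonal ups and downs of the business."""
    avg_monthly = sum(trailing_three_months) / 3
    return avg_monthly * seasonal_factor

# A 72-hour outage against a $10,000/month service fee:
print(round(prorated_fee_refund(10_000, 72), 2))                  # 986.3
# One month of lost revenue, with a 20% seasonal uplift:
print(estimated_lost_revenue([100_000, 120_000, 110_000], 1.2))   # 132000.0
```

The point of the sketch is the asymmetry it exposes: the pro-rated refund for a three-day outage is under $1,000, while the business impact for the same month can run into six figures, which is why contract remedies alone rarely make the customer whole.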
Category: Advisory BCM Process Event Tags: Availability Risk, Backup and Recovery, BCM, BCP, Business Continuity Management, Business Continuity Planning, Business interruption insurance, Business Resiliency, Cloud Computing, Commercial general liability insurance, compliance, Contingency Planning, contingent business interruption insurance, Continuity of Operations, COOP, Data Protection, data protection insurance, Disaster Recovery, Gartner, Governance, Liability Insurance for Professionals, Operational Risk Management, Property and casualty insurance, Recovery Planning, Resiliency, Risk Assessment, Roberta Witty, Umbrella (Excess) Liability Insurance
by Roberta J. Witty | January 15, 2013 | Comments Off
For the first time in more than three decades, NYC is bracing for a strike, set for Wednesday, January 16, 2013, by the city’s largest school bus driver union, Local 1181 of the Amalgamated Transit Union (see the NYC Department of Education’s Pupil Transportation page and “School Bus Drivers’ Union Calls for Strike on Wednesday”). The strike announcement set off a rapid review of options for transporting students to NYC schools, ranging from putting students on a city bus or subway to paying parents mileage to drive their children to school.
It also highlights the need to be prepared for an outage, intentional or not, of all suppliers to your business processes, not just your IT vendors. Organized labor must be considered a supplier of business services, and you had better have a contingency plan in place well ahead of a strike. As additional guidance, do not announce a change (or potential change) of supplier until that contingency plan is in place and tested, at least through a tabletop exercise.
It won’t be pretty tomorrow morning: disabled children and those too young to ride a city bus or subway alone seem particularly at risk of not getting to school. We could see the second-largest workforce availability issue for city agencies and private enterprises after Superstorm Sandy in October 2012, as parents arrive late to work or don’t show up at all because their kids are home.
I hope you are prepared…somehow I think many aren’t.
Category: Event Tags: BCM, Business Continuity Management, Contingency Planning, crisis communications, Crisis Management, Department of Education, NYC, organized labor, strike, Workforce Continuity, workforce resilience
by Roberta J. Witty | November 16, 2012 | Comments Off
Over the last five years, considerable attention has been paid to the rise of public social media as the dominant tool for communications during large-scale disasters. In major crises such as the Queensland floods, the Christchurch and Haitian earthquakes, and the Japanese tsunami that triggered the Fukushima nuclear failures, individuals and organizations have leveraged a variety of social media outlets to make personal contact, broadcast information to the public and correct misinformation. In the most recent major disaster, Hurricane Sandy’s march through the northeastern United States, we again witnessed heavy use of social media.
This time, though, it was different. No one was surprised or found the activity remarkable. A massive surge of tweets before, during and after Sandy hit? Sure, of course. People informing friends of their survival through Facebook status updates? Meh. Thousands of YouTube videos of storm impact? Uh huh.
The predictability and casual acceptance of this reliance on public social media platforms is the important message Sandy delivered. This is the new normal. When New Jersey and New York residents powered up their smartphones off a free generator, they used the electrons to check and post updates to Facebook, Twitter, YouTube and thousands of blogs. They did not open web browsers to watch broadcast news. This is a clear indication of where consumers place their trust when it comes to critical communications.
If your organization has been hesitating to use social media to communicate with customers and employees, it is time to wake up and smell the coffee. If you are not driving your corporate image and communications through social media, someone else is driving them for you, and they may not have your interests in mind. Misinformation continues to appear in social media during disasters and in normal life. Your customers and employees are looking for information in social media, and you should make sure they are getting the correct information when they need it, on the platforms they have selected.
Category: Advisory BCM Process Event Technology Tags: #HurricaneSandy, Andrew Walls, BCM, Business Continuity Management, Business Continuity Planning, crisis communications, Crisis Management, Disaster, Disaster Recovery, Emergency Management, Emergency Notification, Emergency Preparedness, Facebook, Gartner, Sandy, social media, Twitter
by Roberta J. Witty | November 7, 2012 | 2 Comments
With BYOD and telework gaining adoption in the U.S., the question arises: how much backup and recovery support is the employer willing to provide for personal devices and home offices (generators, computer backup devices, iPads for easier field-level work, dual Internet connections, dual telecom connections and the like)?
I asked this question on LinkedIn and got only one response, which added the insurance angle. So I’m looking for what your organization is doing to support these two initiatives.
Respond with your ideas and feedback.
Category: BCM Process Technology Tags: Business Continuity Management, Business Continuity Planning, Business Resiliency, BYOD, COOP, Disaster Recovery, Gartner, Hurricane Sandy, IT Disaster Recovery, Personal preparedness, Roberta Witty, Sandy, Superstorm, Telework, WAH, work at home
by Roberta J. Witty | November 7, 2012 | 2 Comments
Sandy’s impact on mobile wireless service was, if anything, a reminder that the best backup systems will never replace the need for redundant communications channels when it comes to standalone or lifeline services.
The FCC indicated that at one point last week, up to 25% of the cell sites in affected areas of the region from Virginia to Massachusetts were not working. In the hardest-hit areas of New York and New Jersey, that figure probably was a lot higher, although AT&T, Verizon Wireless, Sprint Nextel and T-Mobile USA have not detailed just how badly their networks suffered in those areas.
Customers in areas that did have service often experienced differences in the apparent resiliency of different carrier networks. One Gartner colleague in Middlesex County, New Jersey reported significant disruption to his AT&T voice and mobile data service at home while his wife had no problems with her Verizon service.
That kind of disparity and the widespread loss of service in some areas highlighted the inherent weakness of the cellular network: Even with cell sites girded by backup batteries and diesel generators, the macro cellular system is not a very resilient network. Each Sandy-related site outage could have resulted from any or all of these factors:
- The site did not have an on-site backup generator to recharge batteries or supply power to the base station. Verizon claims all of its tower sites have at least eight hours of backup power, but any site that lost power at the storm’s outset, say from a tree falling on the local power line, would easily have exceeded the eight-hour threshold before the storm passed. By Nov. 6, Verizon was reporting that 99% of its towers in the affected storm area were operating, while AT&T put its figure at 98%.
- Major physical damage occurred, such as a tower toppling in high winds.
- Ancillary damage occurred due to flooding or falling debris, which may have knocked backup power supplies or the local optical backhaul network element out of commission. Any towers backhauling traffic through flooded Verizon central offices in lower Manhattan and other areas were essentially cut off, even if the tower itself maintained power.
- Regional factors intervened, such as roads blocked by fallen trees or flooding, making it impossible for fuel trucks to resupply backup generators or to move portable cell sites (COWs or COLTs) into place once service went out.
The FCC has attempted to address the robustness of the backup power issue before, with a 2007 rule requiring a minimum of eight hours backup power for cell sites. A federal court effectively blocked the rule in 2008 amid objections by the Bush administration and mobile carriers, who objected to the purported cost of the mandate.
Carriers also raised the salient point that for some disasters such as Hurricane Katrina, it is virtually impossible to prevent some towers from going out of service. As Sprint Nextel noted in seeking a stay of the FCC rule: “Backup power supplies—whether they provide electricity for eight hours or eighty hours—are useless when sites and lines are submerged in flood waters.”
That will continue to be the reason why users, especially enterprises using mobile wireless for business-critical functions, need backup communications platforms, not just backup power. Where available, POTS lines that do not rely on a user power source are still a reliable backup, assuming the local central office (typically built like a fortress) has not flooded or burned down.
Satellite systems also may provide backup for the most critical communications. For example, AT&T offers a specialized handset that can connect U.S. customers via the 3G cellular or satellite networks. AT&T also recently introduced its Remote Mobility Zone product, a portable kit that essentially provides an on-the-spot 2G cell site that will backhaul voice and data to the AT&T network via satellite.
In addition, ask your mobile service provider to substantiate their network resiliency measures in locations important to your business. If the carrier’s cell tower serving an important manufacturing facility does not have backup power, for example, factor that into your buying decision.
Category: Advisory BCM Process Event Technology Tags: 2G, AT&T, Bill Menezes, Business Continuity Management, Business Continuity Planning, Business Resiliency, cell phone, COOP, Disaster Recovery, Emergency Management, Emergency Notification, Emergency Preparedness, enterprise mobility, FCC, Gartner, Hurricane Sandy, IT Disaster Recovery, Personal preparedness, Remote mobility, Sandy, satellite phone, Sprint Nextel, T-Mobile, Telecommunications, Telework, TMobile, Verizon, Verizon Wireless
by Jay Heiser | November 2, 2012 | Submit a Comment
Our home telephone is totally dependent upon the electrical power grid, and a lead acid battery of unknown age is all that stands between us and total loss of external connectivity.
Fiber to the home, which we’ve now had in two different houses, offers high speed, flexibility and economy, providing a single source for television, telephone and Internet. But unlike analog phones and broadcast TV, ‘advanced residential communications’ in ‘Smart Neighborhoods’ offering ‘Blazing Speed’ that will ‘exceed your expectations’ are totally dependent upon a powered-up interface box. Unlike an old-fashioned copper phone line or a TV antenna, you can’t receive a fiber optic transmission without a powered device that splits out the three services and interfaces them to the in-home wiring. If the power goes out, the fiber no longer blazes; it flares out.
To maintain telephone service, high-tech homes have a backup battery hidden in the customer premises equipment. Nobody claims these batteries last over eight hours, they are not routinely maintained, and common wisdom is that they often do not last even that long.
It isn’t just the home fiber interface that requires power. The (currently unapproved) franchise agreement between our provider and the county requires two hours of backup for all distribution amplifiers and fiber optic nodes, 24 hours for all head-end towers and HVAC, and at least one dispatchable portable generator to do something, somewhere. I don’t know how reassuring that is to people who have already experienced two multiday power outages this year.
Clearly, there are reliability advantages to the plain old telephone system (POTS), which only requires emergency power at the central office. Given a choice, telecommuters with 2 lines sometimes do decide to make one of them analog—but increasingly, you don’t get that choice. Once a neighborhood switches over to fiber, the providers become extraordinarily reluctant to support copper. Our new neighborhood has no POTS, and the single telecom provider has exclusive cabling rights for the remainder of my lifetime—and well beyond.
Obviously, there are many advantages to wireless, which becomes the channel of choice when the home or office phone is powered out. Unfortunately, it tends to fail when it is most needed. After Hurricane Katrina, the FCC attempted to force providers to include eight hours of backup power for all cells (which would barely last past the excitement of the storm). This 2007 blog post, correctly noting how unlikely that was to happen, states: “Well, we are likely headed for the big one here soon and it stands to reason we’ll want to have some cell phone service in the aftermath. As we saw last month during a 5.6 earthquake, you don’t have to have cell towers go down to lose service. There was enough congestion in that first hour to bring conversations to a halt. But in a much bigger scenario, having additional power could keep information flowing in the hours after a disaster, helping speed aid and relief to the right places.” New York and New Jersey have just had their big ones, and information is still not flowing in the aftermath of that disaster.
Reporters based in New York City, and Gartner staff living in the areas hardest hit by Sandy, have reported total failures of cell phone service in their neighborhoods, with some providers apparently faring worse than others. The FCC reported yesterday that “the number of cell site outages overall has declined from approximately 25 percent to 19 percent” (the perceptive observer might ask: percentage of what population of sites?).
In addition to significant traffic increases during a natural disaster, there are at least three reasons for cell phone failure, the first being particularly acute for cell systems:
- Power: Batteries get drained pretty quickly. While a growing number of cells do have generators, the generators need fuel replenishment, which in the post-Sandy world is becoming a logistical problem for several reasons. At the same time that the power grid is coming back online, a growing number of cell sites are running out of backup power.
- Physical damage: Wind damage to antennas or water damage to electronics can impact service, and it takes time after a disaster to deploy existing repair crews across a transportation-challenged region.
- Network failures: The backhaul networks between towers and the switching offices are subject to physical damage, especially from flood water, and they require electrical power (see the first point above).
There’s a lot to be said for the continuity advantages of POTS and analog phones, but outside of rural areas, they are likely to be phased out in favor of home digital connectivity and cell phones. If you want to do some contingency planning, you might want to scout your neighborhood for pay phones.
Specific details on the post-Sandy status of each wireless provider can be found in yesterday’s NYT blogs.
Category: Advisory BCM Process Event Technology Tags: Business Continuity Management, Business Continuity Planning, cell phone, COOP, Disaster Recovery, Emergency Management, Emergency Notification, Emergency Preparedness, Gartner, Hurricane Sandy, IT Disaster Recovery, Jay Heiser, mobile Technology, Sandy, smartphone, social media
by Roberta J. Witty | November 2, 2012 | Comments Off
I was wondering how insurance companies keep track of all their covered assets and then assess the damage from catastrophic events such as Superstorm Sandy. Such storms clearly put tremendous stress on insurance industry operations due to property losses and the associated claims submissions. Insurers are overloaded with work determining coverage and payment under the policy provisions for each covered asset; just getting to the asset may not be possible, or the asset may no longer exist.
So I asked my colleague Kimberly Harris-Ferrante for some insights into how insurance companies can use GIS, geo-spatial, mapping and other technologies to help them speed up their operations and make accurate assessments and payments for damages. She talked about “location intelligence technologies”. Here’s what she has to say.
“P&C insurers offering property coverage must embrace new techniques to help lower the cost of physical inspection, increase the accuracy of risk assessment, improve claims handling, reduce fraud, and improve accuracy of pricing and underwriting through greater insight into property risks. For years, companies have questioned how to accomplish this, and began to adopt technologies, such as GIS and geocoding, in isolated instances throughout the enterprise. Initiatives often started with underwriting, but did not expand to other business units or emerging technologies, nor were they integrated into the core insurance systems or desktop technologies used by insurance professionals. This research outlines the emerging area of location intelligence, explaining its use for property insurers.
Location intelligence is the use of new data sources — both structured and unstructured — to assist P&C insurers that are conducting property valuations and risk assessments with determining the accurate risk associated with a physical location or property. This includes mapping technologies, such as GIS and geocoding, as well as the use of Internet-based maps and digital/aerial imagery offered by specialty companies servicing the real estate, insurance and government market. It is important to note that data is not available in all countries, cities and locations. Many countries have limited information that is provided by governmental agencies. In rural areas, imagery may be updated infrequently or is not available at all. Assessing the quality and accessibility of data must be a fundamental step in planning for location intelligence.
Location intelligence today is not a well-known concept among P&C insurers. Tier 1 companies mostly made investments in GIS and geocoding in the past, but this has yet to become commonplace in Tier 2 and Tier 3 companies, or in geographies outside the U.S., Canada, and the U.K., where vendors are prevalent.”
Read this note by Kimberly Harris-Ferrante, “Location Intelligence and Property Insurance: Underused Assets to Improve Risk Decisions,” for more information on location intelligence.
Category: Uncategorized Tags: Availability Risk, BCM, BCP, Business Continuity Management, Business Continuity Planning, Business interruption insurance, Business Resiliency, Contingency Planning, Continuity of Operations, COOP, crisis communications, Crisis Management, Disaster Recovery, Emergency Management, Emergency Preparedness, Hurricane Sandy, Incident Management, Operational Risk Management, P&C insurance, property damage, property insurance, records management, Recovery Planning, Recovery Plans, Resiliency, Risk Assessment, Sandy