by French Caldwell | July 1, 2014 | 1 Comment
I’m in the middle of a doctorate program at Northeastern University, and my research focus is on the impact of disruptive technologies on the public policy making process. Before I ever engaged in any behavioral research, Northeastern required that I achieve certification on protecting human research participants. The certification was based in large part on principles outlined in The Belmont Report, which was commissioned by Congress following the public outrage over the syphilis study conducted by the U.S. Public Health Service at the Tuskegee Institute. The Belmont Report is the basis for the ethical guidelines for any academic research involving human subjects. Any such research must be approved by an Institutional Review Board (IRB) at the university or other research institution that is conducting the study.
“Respect for persons” is a first principle of Belmont’s ethical guidelines. This principle requires informed consent of the research participants. If informing participants of the exact nature of the research would compromise the validity of the study, the participants don’t have to be told the exact nature of the research, but they do have to be told that there will be experiments. The general notice to users of Facebook that their data could be used for research purposes may be fine for experiments involving pre-existing data, but it does not explicitly address live experimental research (and when the study was done in 2012, the research clause did not yet exist in Facebook’s terms). In the case of the experiment on over 600,000 Facebook users that was published recently and reported in The Atlantic and elsewhere, subjects’ behaviors were directly manipulated. They were shown varying degrees of positive and negative content on Facebook and then evaluated as to whether they subsequently behaved in a negative or positive way.
“Respect for persons” also requires that vulnerable persons be protected. That could apply to minor children, to individuals with a limited mental capacity to understand the nature of the experiment, and to persons in a precarious emotional or mental state. Notably, the researchers noticed a “withdrawal effect” in their study, stating: “People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days, addressing the question about how emotional expression affects social engagement online.” Such a lingering effect on the individuals involved in behavioral research makes it important that protection for vulnerable individuals is established. These persons would need to be identified and then additional measures taken to prevent any negative outcomes for them. It is not adequate to do a statistical calculation and then from that determine that the risks to the group of potentially vulnerable persons are minimal. For a waiver of the disclosure requirement, the specific individuals should be identified and action taken to protect them as individuals. Furthermore, when the research is concluded, the Belmont principles require that research subjects be debriefed. The study does not state whether a debriefing occurred.
Analysis of social media usage data raises serious ethical considerations, and my colleague Frank Buytendijk at Gartner is working to provide guidance to our clients about big data ethics. Taking that data and then using it to manipulate individuals into buying a certain product or service, engaging in a certain activity, or even voting a certain way is the core of the business model of social media firms. With that business model, perhaps in the world of social media businesses this experiment seemed mild. Certainly, the researchers appear to have been caught off guard by the public backlash. If nothing else positive comes of this experiment, it offers a sober lesson on the fine line between the business of marketing and the intentional manipulation of emotions.
Facebook states that it has significantly upgraded its research safeguards since 2012 when this experiment was conducted. Perhaps Facebook, and other social media firms, could consider openly sharing the details of those safeguards.
Category: public policy Social Technology Tags: Belmont Report, cornell, facebook, research, university of california
by French Caldwell | May 13, 2014 | 2 Comments
Today, the European Court of Justice ruled that Google, and by implication other search engines, must allow individuals to have certain personal data blocked from search results. The case involved a Spanish national who wished to have personal data about a foreclosure 16 years ago, an issue that has long since been resolved, removed from Google’s search results. The ruling is being hailed as a major step forward in establishing a “right to be forgotten.” Critics will claim that judges should stay away from making public policy and leave that role to legislators.
A huge issue facing the courts these days is that technology adoption is accelerating much more rapidly than public policy can evolve, and the political process cannot keep up. The Audiencia Nacional, the Spanish high court, explicitly recognized that issue, and that’s why it referred this case to the European Court of Justice. Thus the ECJ had to interpret a law that preceded the technology within the context of how that technology is currently used. Here’s what the ECJ said in section 19 of its opinion:
… the actions raise the question of what obligations are owed by operators of search engines to protect personal data of persons concerned who do not wish that certain information, which is published on third parties’ websites and contains personal data relating to them that enable that information to be linked to them, be located, indexed and made available to internet users indefinitely. The answer to that question depends on the way in which Directive 95/46 must be interpreted in the context of these technologies, which appeared after the directive’s publication.
We can consider the action of the Court to be a stopgap measure, and should the politicians decide to catch up, they will modify the EU Data Protection Directive. However, in this case, I would be surprised if the politicians actually do anything on this matter anytime soon. The public is pretty stirred up these days about surveillance, whether by private sector entities like Google and Facebook or by government entities like the NSA and GCHQ. The only way the EC would intervene here would be if the Court were too far out in front on this issue. The fact is that the Court is not way out in front of the public.
We are going to be in for at least a decade or so of technology public policy issues being settled more by courts than by parliaments – much like desegregation in the South between 1954 and 1964. Except in this case, the courts are not ahead of the population – they are just picking up the slack in the political process.
Keep in mind that while this ruling seems to have tremendous implications from a technology standpoint, from a public policy process standpoint it is a narrow ruling. Based on its interpretation of the EUDPD, the Court could have required Google to establish means of screening out personal information altogether. And by personal information, this means anything that could be linked to an EU citizen’s name. This would have put ad-paid search out of business. Instead, the individual must request that the information be removed, and then that information is subject to a balancing test — balancing the individual’s right to be forgotten versus the public’s need to know — so for public figures and for sex offenders, for instance, the standard will be much higher. Furthermore, the ruling is specific to a search on an individual’s name and the inclusion of the data in a list of search results. Other types of searches that may surface the information are not affected by this ruling.
Bottom line is that we have technologies that are fundamentally changing the world in a very rapid way, and the risk that companies and governments that are exploiting these technologies will bump up against fundamental societal values is very real. There is no way in liberal democracies that traditional legislative and executive rule-making processes can keep up. Until we reach some balance between the rate of technology adoption and the pace of legislative and rule-making processes, courts will play a larger part in the technology public policy process than they have in the past. So long as they don’t get out in front of the public on these issues, and keep their rulings narrow, they can serve a useful function of balancing commercial, individual and societal interests.
Category: compliance public policy Tags: ECJ, google, Privacy, right to be forgotten
by French Caldwell | May 8, 2014 | 3 Comments
Photo: Planet Killing Asteroid – Los Alamos National Laboratory
A giant planet-killing asteroid helps. Short of that, perhaps losing millions of your customers over a data breach incident. Actually, neither of those will create a truly risk aware culture. When the risk probability is 100%, your people will tend to focus on that one risk and ignore those with lower probabilities. So the next risk to get you will be the one that you are not focused on.
Risk aware culture is the objective du jour of executives and their consultants. I’m not sure why anyone wants one — frankly, it sounds like it could be paralyzing. One of my skippers in the submarine service was so risk aware that he hung up an embroidered plaque from his wife in the wardroom that spelled out “Don’t Screw Up” in signal flags. When you have the clear signal from the top not to screw up, and you are in an inherently risky business, it leads people at lower levels to do a lot of little cover-ups. Risk wariness at the top leads to a loss of transparency, and that in turn increases risks.
A risk culture can be paralyzing too. That’s what I thought when I read that a big bank is firing some of its top customers just because they’ve worked for a foreign government. Foreign to whom? If you’re an international bank, are there any foreigners, really? It seems that the compliance department at the bank has decided that the best way to ensure that they can comply with the U.S. Foreign Corrupt Practices Act is to get rid of customers. No doubt they had some consultants point out to them that 80% of their risks came from 20% of their customers. So you can just lop off that 20%. But guess what, after you do that, you’ll still have 80% of your risk coming from 20% of your customers — where do you stop?
Despite that skipper, most of the lessons I learned from submarines about risk management are really good ones. I learned that you can take some really hairy risks if your crew is well trained, you’re open with what the risks are, and you’re communicating well. And you don’t take those extraordinary risks every day — if you do that, you’ll die. No one can stay at an extraordinary level of risk awareness for long.
And something else I learned long after my submarine duty — you can’t really change the culture. What you can do is understand your company’s culture, understand the risks, and find ways that fit your culture that enable your company to take risks in a responsible way and get the job done.
Category: Risk Management Transparency Tags: culture, culture change, Risk, Risk Management
by French Caldwell | May 1, 2014 | 2 Comments
Last week my colleagues Andrew Walls, Stessa Cohen and I published the “Regulated Social Media Survival Guide.” While not all enterprises have strict regulations that limit how they can use social media, all have in common the need to manage risk to brand and reputation. I’ve been at the MetricStream GRC Summit today and yesterday, and keynoters former Secretary of Defense Bill Cohen and Kaiser Permanente Chairman and CEO Bernard Tyson both emphasized that all risks really boil down to your reputation.
Taking this issue of reputation right to the individual employee level, Cohen emphasized the importance of hiring ethical people — that is the surest bulwark against enterprise risks. He shared one of his favorite sayings from his father, “You can sell your integrity, but once you’ve done that, you’ll never in your life be rich enough to buy it back.”
Tyson said that he wakes up every day thinking of the brand and reputation of Kaiser Permanente. He said that while compliance with regulations is essential, it is not the motivator for employees to do the right thing. Brand and reputation are.
With social media, both good news and bad about an organization spreads quickly, and even those organizations that are not heavily regulated and consumer facing should consider their social media risk management capabilities. But as both Cohen and Tyson emphasized, ethical people are the ultimate guarantors of your reputation. So whatever you do in setting up your risk management and compliance programs, focus the most on how to help your good people do the right things.
Category: compliance ethics Risk Management Social Technology Tags: compliance, ethics, governance, reputation, reputational risk, Risk Management, social media
by French Caldwell | April 14, 2014 | 3 Comments
While it does not directly affect U.S. legal and constitutional considerations on the NSA phone records program, it is still worth noting that last week the European Court of Justice declared that the EU Data Retention Directive violates the fundamental rights of EU citizens under the Charter of Fundamental Rights of the European Union — the equivalent of the Bill of Rights in the U.S. A fundamental right is a legally protected right – such as the right to due process, the right to equal protection under the law, or the right to free speech – or one of the inalienable rights in the US Declaration of Independence.
The Charter provides for fundamental rights of respect for private life (Article 7), which includes private communications, and protection of personal data (Article 8). The EU Data Retention Directive required that telecoms and ISPs retain phone records and some internet service records for at least six months and up to two years and make these available to government agencies as needed for law enforcement. The requirement that telecoms hold on to phone record data instead of the NSA storing the data is likely to be part of the White House proposals for NSA reforms in response to public concerns over domestic spying.
The courts are ultimately the arbiters of what rights are, and of what infringements are allowed. In order to infringe on a fundamental right, the government must prove that the infringement serves a significant governmental purpose that cannot be achieved in some other way. Even when that is proved, the infringement must be narrowly tailored. It is the latter that the EU Court appears to say was not done – that is, the data retention directive did not narrowly tailor the means of meeting the government’s interest in law enforcement. This ruling thus leaves open the ability of the EU to revisit the directive and tailor it more narrowly. The Court described six ways in which the directive is too broad, and the EU could issue a new directive that addresses those six objections.
Notably the directive was intended to harmonize activities in which many EU member states were already engaged. And the directive was phrased in terms of law enforcement, where the EU has some standing, not national security where the EU has very little standing. We should expect that EU member states that have a history of this type of activity will continue to require telecoms and ISPs to store the data for national security purposes. However, this ruling will balkanize the data, making pan-EU law enforcement and anti-terrorism analysis more difficult.
Category: Cybersecurity Legal IT public policy Surveillance Tags: EU, NSA
by French Caldwell | April 8, 2014 | 4 Comments
A couple of months ago, the conference chair for Gartner’s Dubai Symposium, Mary Mesaglio, presented me a challenge. She said, “French, we need more local content and more security content. What’s possible?”
Having made some trips to the Gulf region in the last year, I’d met some really interesting people and heard some great stories. I told Mary that perhaps we could do a panel. I shared this idea with some other Gartner associates who have experience in the Middle East and some who work there, and there was real skepticism as to whether we could find panelists willing to share their stories and best practices on security. Some colleagues told me that the culture just wouldn’t support that kind of open sharing around topics as sensitive as security and risk management. When I told them that I was going to get the audience to participate in the discussion as well, I met with even more skepticism.
With the assistance of our Gulf region account executives, I reached out to two security and risk management leaders in the region whom I had met on earlier trips: José Rossi at RasGas in Qatar, and Amair Saleem at the Dubai Roads and Transport Authority. RasGas had been the target of a highly publicized cyber attack in 2012, which I knew would grab the attention of attendees, and the RTA operates one of the most technologically advanced driverless metro systems in the world, which presents a breadth of risk management challenges. Their two organizations also demonstrate the convergence of operational technology (OT) and IT security and risk management.
José and Amair agreed to join the panel, and my colleague Kristian Steenstrup, who leads our OT research community at Gartner, also joined. Not only did this panel work out extremely well, the audience itself joined the panel — it was an hour-long, lively discussion among the attendees and panelists about security and risk management challenges, with sharing of practices for dealing with them. The idea that security and risk management leaders, and CIOs — there were a number of them joining in as well — will not openly share their challenges and solutions with each other is as much a myth in the Gulf as it is in all the other regions where I have tried this interactive format. Clearly the panelists and the audience participants saw value in sharing and connecting with each other.
Here are key takeaways from the audience and panelists:
1 — Security awareness: Inducements are very important, such as including metrics in performance appraisals, rewards for tip of the week, and even providing security for personal IT in the home
2 — Risk Management: Should start from business objectives, can’t be a stand-alone function, and risk ownership must be unambiguous
3 — Cloud risk management: Data classification is essential in deciding what can go on the cloud and the type of cloud allowed
This panel and audience were the most dynamic and engaging that I have seen in a long time, and I am grateful to Amair, José, Kristian and the audience participants for contributing, to Mary for insisting that we do this, and to our events program manager Rutuja Vadhavkar for making the arrangements to add this session.
Join us for the first ever Gartner Security and Risk Management Summit in Dubai, 15-16 September 2014.
Category: Cloud Cybersecurity Risk Management Tags:
by French Caldwell | March 25, 2014 | 7 Comments
Like most of us, since the Target hack, I’ve heard statements on how EMV is THE answer to credit card fraud, and how it’s been working great in Europe, which has had it for 20 years. If the business case were so compelling, wouldn’t EMV have made the trip across the Atlantic a long time ago? Let’s take a look at the numbers.
According to a report from Aite and ACI, with just 10% of credit card users reporting they’ve experienced fraud in the last five years, Germany’s fraud rate would seem to be very low compared to the US (37%). While there must be many more factors than just technology involved, at first blush, with such a huge disparity, this looks very promising for EMV. But then, taking a look at the UK, which adopted EMV in 2006, the fraud rate for Britons is 31% — not so far behind the US. So is credit card fraud in the UK really three times that in Germany?
Perhaps there could be other factors involved. According to data from the European Central Bank, Britons use their cards more. With twice as many transactions per card and more cards per person, 2.4 for each Briton and 1.68 for each German according to the ECB, Britons have almost three times as many transactions per inhabitant. So, Britons use their cards three times as much as Germans, and they have three times the fraud. That’s at least one way of looking at the data – I’m sure there are others.
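That “almost three times” figure follows from multiplying the two ratios. Here is a quick back-of-the-envelope check; the cards-per-person numbers are the ECB figures quoted above, while the 2x transactions-per-card multiple is my rounding of the text, not an exact ECB value:

```python
# Back-of-the-envelope check of UK vs. German card usage intensity.
# Cards per inhabitant are the ECB figures quoted above; the 2x
# transactions-per-card multiple is an approximation from the text.
cards_per_briton = 2.4
cards_per_german = 1.68
tx_per_card_ratio = 2.0  # Britons make roughly twice the transactions per card

# Transactions per inhabitant scale as (cards/person) x (transactions/card)
usage_ratio = tx_per_card_ratio * (cards_per_briton / cards_per_german)
print(f"UK vs. Germany, transactions per inhabitant: ~{usage_ratio:.1f}x")
```

That lands at roughly 2.9x, close enough to “almost three times,” and in line with the roughly threefold difference in reported fraud.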
So, perhaps culture and payment habits have something to do with the fraud rate.
Now let’s take a look at the US, where the number of credit cards is 3.5 per person. Is the US fraud rate really that much higher than the UK’s? Americans have a consumer lifestyle much like Britons’, and I would think they use their cards in a similar fashion. That’s just a working assumption and certainly open to challenge.
As noted above, 37% of Americans and 31% of Britons report experiencing credit card fraud. Since Americans have 3.5 cards per person and Britons 2.4, this would mean a fraud probability per card of 10.6% in the U.S. and 12.9% in the UK. Hence, one reason that Americans may experience more incidents of fraud than Britons is that they have more cards per person. There are other reasons as well – such as the percentage of cards that are authorized online or offline in a particular country. All I am trying to point out here is that EMV is not going to solve the problem of consumer credit card fraud.
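The per-card arithmetic behind those percentages is just division of the survey rate by cards per person (a simplification, since it treats every card as equally exposed to fraud):

```python
# Implied fraud incidence per card, from the survey and card-count figures above.
fraud_rate_us, cards_per_american = 0.37, 3.5  # 37% of Americans, 3.5 cards each
fraud_rate_uk, cards_per_briton = 0.31, 2.4    # 31% of Britons, 2.4 cards each

per_card_us = fraud_rate_us / cards_per_american
per_card_uk = fraud_rate_uk / cards_per_briton
print(f"US: {per_card_us:.1%} per card, UK: {per_card_uk:.1%} per card")
```

By this crude measure, the per-card incidence in the post-EMV UK actually comes out a bit higher than in the pre-EMV US.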
No doubt, EMV chip-and-PIN could have a big impact on face-to-face point-of-sale fraud, but it will push fraud to other means, and once the big honeypot of US consumers is on EMV, I’d expect that Europe will see an uptick in cross-border fraud. The numbers as best I can tell for the fraction of transaction value lost to fraud are around 0.0005 in the US and roughly 0.0004 in Europe, with many countries well below that and several well above it.
With fraud incidence per card roughly equal in the US and Europe, and the cost of fraud only a tiny fraction of the transaction value — much less even than card fees — it’s easy to see why EMV has not yet made the leap across the Atlantic. EMV will be helpful, yes – particularly for merchants doing face-to-face transactions – but looking at the data, the best way to avoid credit card fraud is to follow the German example and just avoid using credit cards.
Category: Cybersecurity fraud Standards Tags: credit cards, EMV, fraud, target
by French Caldwell | March 7, 2014 | 3 Comments
I’ve spoken to a few corporate boards on IT governance and risk management, and there’s one question that I always ask — but first, let me clarify this Target CISO tweet with my Twitter handle on it.
In an internal Gartner e-mail thread about the Target CIO resigning, I added some irony, writing: “Another good reason to have CISO — so the CISO can resign.” Violating all manner of e-mail and twitter etiquette, my good friend and colleague Doug Laney blasted my snarkiness to the world in a tweet — thanks, Doug! I mean it — thanks — wish I’d thought to tweet it.
But it’s really not funny, is it, when a CIO must resign her post over something she probably had been trying to fix for some time. I’ve no special inside knowledge of Target, but we’ve all seen other large organizations that have had big security, risk management, or compliance failures, and typically someone, somewhere has made the problem known, but other business priorities — making a project deadline, opening new big box stores in an emerging market, or closing the deal for a merger — seem more tangible to the powers-that-be (PTB) than dealing with security or risk issues. ‘We’ve lived with it so far — how do you know something bad will happen, anyway, Ms CIO?’ It’s a real stumper when the PTBs just don’t get it — especially when one fail after another is in the news!
Two factors often emerge when there is a big failure — 1) There’s no one outside of IT who acknowledges ownership of the risk; 2) There’s no one coordinating and providing oversight of the many different risk silos.
Target is just the latest in a long line of consumer giant security fails — remember TJX, remember Sony?
So, after the fail, they all get religion. The answer lies not just in getting a real corporate CISO; it also requires getting true business-leader ownership of the risks. That can only come from the very top, from those who are truly responsible to the shareholders for governance — the Board. Tone at the top is the one ingredient of risk management where, even if you are just a pinch short, your recipe will end in disaster.
So besides running an effective coordinated security program, there’s another role for a CISO in a large dynamic enterprise, and that’s working with the leadership of the company and ensuring that IT risk management issues are addressed in business initiatives. For large organizations, the CISO will have her hands full running a corporate-wide IT security program and organization, and to have that kind of oomph, she must have a direct line to the board.
So, if you’re a corporate director, I have just one question for you: ‘Can you tell me the name of your CISO?’
Category: Cybersecurity IT Governance Tags: board, ciso, Risk Management, security, target
by French Caldwell | February 28, 2014 | 4 Comments
The first ever Gartner legal IT scenario is out, and it’s both controversial and not. Many of the disruptions that we discuss in the scenario are well underway, such as the increasing demand for legal process outsourcing (LPO) and the use of advanced analytics — so what’s new? Well, new are the dramatically disruptive effects arising from the accelerating adoption of legal IT. Here are a few predictions:
- By 2020, 75% of U.S. and U.K. corporations will use LPO.
- By 2019, 75% of corporate legal and IT departments will have shared staff.
- By 2018, legal IT courses will be required for the graduates of at least 20 U.S. Tier 1 and Tier 2 law schools.
If you want more, please read the research. We’ve provided analysis and recommendations for CLOs, law firms, CIOs, and legal IT vendors and service providers in each of the four futures in the scenario. And we’ve laid out current day evidence and future indicators to guide your legal IT strategy and investments.
One big hint though for all those legal IT vendors — it’s time to get big or get out. Frankly, half of you guys will be gone within another 36 months. Good luck!
Category: Legal IT Tags: analytics, compliance, legal, legal process outsourcing, LPO, smart machines
by French Caldwell | February 27, 2014 | 2 Comments
Vendor Risk Management Is Flashing Hot
I went to the RSA conference once — it was really busy and hearing from my buddies at the front, it’s now busier than ever. So much for the boycott, eh?
A lot of my security buddies are at RSA this week, and are broadcasting the buzz back to the rest of us here at Gartner. One piece of gossip that got my attention was shared by Erik Heidt. He said that many of the financial services attendees are talking about the FS regulators ramping up vendor risk oversight requirements on FS firms. Third party risk management is the one area where I do get involved in security — I always say I’m a risk management analyst whenever anyone asks me a really tricky security question.
Third party risk management is pretty broad. It covers downstream risks associated with customers and prospects, business partners and resellers — and downstream is where much of the fraud, bribery and corruption comes into play. It also covers upstream risks associated with suppliers in manufacturing, mining, oil and gas, retail and other supply chains, plus the risks associated with vendors that provide business process outsourcing, provide information services, or manage IT assets — vendors that can range from a major outsourcer to a visiting nurse. We group the vendors that somehow touch information which you own, or for which you are accountable, into VRM, which is focused mostly on the logical supply chain, whereas supplier risk management focuses on the physical goods supply chain. For more on this, I’ve included at the bottom of this post our working definition for the upcoming Magic Quadrant for Vendor Risk Management, which is slated for Q4 this year.
Anyway, is the buzz about VRM at RSA right? Yes. Ever since late October, when the Office of the Comptroller of the Currency published guidelines saying that VRM should be part of the ERM program, we have seen an uptick in inquiry on vendor risk management. I expect other FFIEC regulators — the FRB, FDIC, NCUA and CFPB — to continue raising the bar as well.
Now there are a couple of immediate problems with complying with the OCC guidelines. First, they assume that FS firms have ERM programs; I’m sure most do in name, but frankly, in practice many don’t. Second, most FS firms don’t have a vendor management function, and if you don’t do vendor management, then how can you do vendor risk management?
To deal with the onslaught of client interest, we’ve been ramping up on VRM here at Gartner. First we formed a dedicated vendor management team, headed up by Linda Cohen, and including my good friends Helen Huntley, Chris Ambrose, and Gayla Sullivan. You may remember that Helen and I led a special report on VRM in 2009 when the first bubbles of VRM began to appear in the risk management pond. Now the pond is in full boil, and we’re worried about a steam flash!
By the way, it’s not just FS clients driving the demand — healthcare and E&U are getting into this too in a big way, and no industry vertical will be left behind. You can thank cloud computing for that!
We’re getting behind the demand for VRM research in a big way. Very soon, you’ll see a note from Kristian Steenstrup and Gayla Sullivan on VRM for operational technology. Looking ahead, Chris Ambrose and I are working on updating Gartner’s Simple Vendor Risk Management Framework. There’s nothing wrong with it now, and it’s very popular, but we want to add more detail on sources of risk data, and key risk indicators for VRM.
Chris, Gayla and I are also working on the new VRM magic quadrant, and we’re starting to track services for VRM. This is in addition to the work that other analysts like Debbie Wilson, Ray Barger and Noha Tohamy are doing on supplier risk management, Jay Heiser and Rob McMillan on VRM standards, and Khushbu Pratap on auditing vendors. And I’m also working on some of those downstream risk issues — expect a note on FCPA solutions in Q2.
VRM Technology Definition
Vendor risk management (VRM) is the process of ensuring that the use of third-party service providers and IT suppliers does not create an unacceptable potential for business disruption or a negative impact on business performance. VRM solutions support enterprises that must assess, monitor and manage their risk exposure from third parties that provide IT products and services, or that have access to enterprise information.
Category: Cloud compliance Cybersecurity Risk Management Third Party Risk Management Vendor Contracts Tags: cloud, cybersecurity, rsa conference, vendor risk management