Dan Ariely is fond of pointing out that “door locks only keep the honest out”. Locks are basically a reminder, like a sign, that it’s socially unacceptable to go further.
One could argue the same holds for human behavior around scams: if you have a personal relationship with someone, you are less likely to scam them. Unless you are pathological. It falls under the same category as that old saw, “generalize the negatives, individualize the positives.”
So what does that mean in a world where your face-to-face connections keep shrinking compared to the world of social acquaintances you have online – like those 1,000 LinkedIn or Facebook contacts? Enter social engineering. It’s my thesis that as online social networks depersonalize relationships, the psychological barrier to scamming someone is lowered, increasing the likelihood, scale, number, and effectiveness of social engineering attacks.
In other words, social engineering opportunities are there, much like the opportunity to walk down the street trying doors is there. Yet, unlike the door analogy, there is no psychological barrier in our online lives that makes entering that metaphorical door – or crossing that line – less attractive. I am talking about those of us who do NOT have pathological issues. Evidence of this is the number of Millennials downloading music, movies, and books for free.
So if my theory is correct, we can only expect the scale and tenacity of social engineering attacks to increase, and from some unlikely (normally trustworthy) sources. And that will only change if one of two conditions is met:
1. The likelihood of capture and retribution, or a high barrier to entry (for instance, a foolproof lock on a reinforced door), gives the scam attempt a very low success rate – a rate infinitesimal compared to the increased ability, and low cost, of pulling off an online scam (think spam); or,
2. Our social-network lives rise to a level of trust and familiarity where even the smallest attempt is viewed as unacceptable social behavior – and can be communicated and outed. Think something like “Snopes” for social engineering alerts, plus strong identity frameworks that publicly tie attempts to their instigators. But that gets precariously close to a 1984 scenario (ignore the fact that we are almost 30 years past that date).
If my thesis rings true, what other conditions can you foresee that would mitigate my expectation? Which of these two is most likely to happen?
Come to our Catalyst conference in July for more on topics like this…