Ben Tomhave

A member of the Gartner Blog Network

Ben Tomhave
Research Director
1 year at Gartner
19 years IT Industry

Ben is conducting research on the Security and Risk Management Strategies team within Gartner for Technical Professionals.

Things That Aren’t Risk Assessments

by Ben Tomhave  |  July 24, 2014  |  2 Comments

In my ongoing battle against the misuse of the term “risk,” I wanted to spend a little time here pontificating on various activities that ARE NOT “risk assessments.” We all too often hear just about every scan or questionnaire described as a “risk assessment,” and yet when you get down to it, they’re not.

As a quick refresher, to assess risk, you need to be looking at no fewer than three things: business impact, threat actor(s), and weaknesses/vulnerability. The FAIR definition talks about risk as being “the probable frequency and probable magnitude of future loss.” That “probable frequency” phrase translates to Loss Event Frequency, which is composed of estimates that a threat community or threat actor will move against your org, that they’ll have a certain level of capabilities/competency, and that your environment will be resistant to only a certain level of attacks (thus representing weakness or vulnerability).
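To make that decomposition concrete, here is a minimal sketch of how frequency and magnitude estimates might be combined into an annualized loss exposure. To be clear, this is an invented illustration, not the FAIR standard itself, and every number in it is a placeholder:

```python
import random

# Illustrative sketch only -- invented numbers, not a FAIR-compliant model.
# Loss Event Frequency (LEF): how often a threat actor acts AND overcomes resistance.
# Loss Magnitude: the business impact per loss event, in dollars.

def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        # Threat Event Frequency: attempts per year (low, high, mode are guesses).
        tef = random.triangular(1, 12, 4)
        # Vulnerability: probability an attempt overcomes our resistance strength.
        vuln = random.triangular(0.05, 0.40, 0.15)
        lef = tef * vuln  # expected loss events per year
        # Probable magnitude per event: response, replacement, fines, reputation...
        magnitude = random.triangular(10_000, 500_000, 75_000)
        losses.append(lef * magnitude)
    losses.sort()
    return {
        "median_annualized_loss": losses[trials // 2],
        "90th_percentile_loss": losses[int(trials * 0.9)],
    }

if __name__ == "__main__":
    print(simulate_annual_loss())
```

The point isn’t the math; it’s that you cannot produce either output without estimating threat activity, resistance strength, and business impact.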

Oftentimes, that “probable magnitude” component is what is most lacking from alleged “risk-based” discussions. And, of course, this is where some of these inaptly described tools and capabilities come into play…

Questionnaires

A questionnaire is just a data gathering tool. You still have to perform analysis on the data gathered, supplying context like business impact, risk tolerance/capacity/appetite, etc, etc, etc. Even better, the types of questions asked may result in this tool being nothing more than an audit or compliance activity that has very little at all to do with “risk.” While I realize that pretty much all the GRC platforms in the world refer to their questionnaires as “risk assessments,” please bear in mind that this is an incorrect characterization of a data gathering tool.

Audits

The purpose of an audit is to measure actual performance against a desired performance. Oftentimes, audits end up coming to us in the form of questionnaires. Rarely, if ever, do audits look at business impact. And, one could argue that this is ok because they’re really not charged with measuring risk. However, we need to be very careful about how we handle and communicate audit results. If your auditors come back to you and start flinging the word “risk” around in their report, challenge them on it (hard!!!) because dollars-to-donuts, they probably didn’t do any sort of business impact assessment, nor are they even remotely in the know on the business’s risk tolerance, etc.

Vulnerability Scans and AppSec Testing

My favorite whipping-boy for “risks that aren’t risks” is vulnerability scans. I had a conversation with a client last week who had a very large (nearly 100-page) report dropped on them that was allegedly a “risk assessment,” but was in reality a very poor copy-n-paste of vuln scan data into a template, with several pages of preamble on methodology and several pages of generic explanatory notes at the end. There did not appear to be any manual validation of findings (the list of findings also wasn’t deduplicated).

Despite the frequent use of “risk” on these reports, they most often describe “ease of exploit” or “likelihood of exploit.” However, even then, their ability to estimate likelihood is pretty much nonsensical. Take, for instance, an internal pentest of a walled-off environment. They find an open port hosting a service/app that may be vulnerable to an attack that could lead to remote root/admin and is easy to exploit. Is that automatically a “high likelihood” finding? Better yet, is it a “high risk” finding? It’s hard to say (though likely “no”), but without important context, they shouldn’t be saying anything at all about it.

Of course, the big challenge for the scanning vendors has always been how to represent their findings in a way that’s readily prioritized. This is why CVSS scores have been so heavily leveraged over the years. However, even then, CVSS does not generally take into account context, and it absolutely, positively is NOT “risk.” So, again, while potentially useful, without context and business impact, it’s still just a data point.
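As a thought experiment, here is one way “context” could be bolted onto a raw CVSS score to drive prioritization. The weighting scheme and asset figures below are entirely made up for illustration, and this is not a substitute for actual risk analysis:

```python
# Hypothetical prioritization sketch: a CVSS base score is only one input.
# The weights and asset values below are invented for illustration.

def contextual_priority(cvss_base, asset_value_usd, internet_exposed, compensating_controls):
    """Blend a raw CVSS base score with business context into a rough priority value."""
    score = cvss_base / 10.0                     # normalize to 0..1
    score *= 1.0 if internet_exposed else 0.4    # a walled-off environment changes things
    score *= 0.5 if compensating_controls else 1.0
    # Scale by a crude proxy for potential business impact.
    return round(score * asset_value_usd, 2)

# The same CVSS 9.8 finding yields very different priorities once context is applied:
print(contextual_priority(9.8, asset_value_usd=2_000_000, internet_exposed=True, compensating_controls=False))
print(contextual_priority(9.8, asset_value_usd=50_000, internet_exposed=False, compensating_controls=True))
```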


I could rant endlessly about some of these things, but I’ll stop here and call it a day. My exhortation to you is to challenge the use of the word “risk” in conversations and reports and demand that different language be used when it’s discovered that “risk” is not actually being described. Proper language is important, especially when our jobs are on the line.

2 Comments »

Category: Risk Management     Tags:

Three Epic “Security” Mindset Failures (“ignorance is bliss”)?

by Ben Tomhave  |  May 6, 2014  |  2 Comments

I don't care if it's a bug or a feature as long as Nagios is happy

I saw this graphic a couple weeks ago while trolling Flickr for CC BY 2.0-licensed images, and it got me thinking… there are a number of mindset failures that lead us down the road to badness in infosec.

Consider this an incomplete list…

  • As long as [monitoring/SIEM] is happy, you’re happy.
  • As long as [auditor/checklist] is happy, you’re happy.
  • As long as [appsec testing / vuln scanner] is happy, you’re happy.

I’m sure we could all come up with a few dozen more examples, but for a Tuesday, this is probably enough to start a few rants… :) Part of what triggered this line of thinking for me was the various reports after the retail sector breaches about tens of thousands of SIEM alerts that were presumed to be false positives, and thus ignored. Kind of like trying to find a needle in a haystack.

(Image Source (CCby2.0): Noah Sussman https://www.flickr.com/photos/thefangmonster/6546237719/sizes/o/)

2 Comments »

Category: Uncategorized     Tags:

New Research: Security in a DevOps World

by Ben Tomhave  |  April 30, 2014  |  Comments Off

Hot off the presses, new research from Sean Kenefick and me titled “Security in a DevOps World,” which is available to Gartner for Tech Professionals subscribers at www.gartner.com/document/2725217.

Some of the key takeaways from this research include:

  • Automation is key! AppSec programs must find ways to integrate testing and feedback capabilities directly into the build and release pipeline in order to remain useful and relevant.
  • Risk triaging is imperative to help differentiate between apps with lower and higher sensitivity. We recommend leveraging a pace-layered approach to assist with building a risk triage capability.
  • Engineer for resilience. It’s impossible to stop all the bad things. Instead, we need to build fault-tolerant environments that can rapidly detect and correct incidents of all varieties.

We also have found there continues to be some confusion around DevOps, and much trepidation as to the ever-shifting definition. Within security, this uncertainty translates into a negative perception of DevOps and, typically, asinine resistance to change. To that end, I wish to challenge five incorrect impressions.

DevOps doesn’t mean…

  • …no quality. QA doesn’t go away, but shifts to being more automated and incremental. This can be a Good Thing ™ as it can translate to faster identification and resolution of issues.
  • …no testing. Patently untrue. There should be testing, and likely lots of testing at different levels. Static analysis should occur on nightly code repository check-ins, and possibly at code check-in time itself. Dynamic analysis should occur as part of standard automated build testing (a minimal build-gate sketch follows this list). For high-risk apps, there should be time built in for further detailed testing as well (like manual fuzzing and pentesting).
  • …no security. Security should be involved at the outset. Ideally, developers will work from a pre-secured “gold image” environment that already integrates host and network based measures. Further, as noted above, appsec testing should be heavily integrated and automated where possible. This automation should free appsec professionals to evolve into a true security architect role for consulting with developers and the business.
  • …no responsibility. On the contrary, DevOps implies empowerment and LOTS of shared responsibilities.
  • …no accountability. Perhaps one of the more challenging paradigm shifts is the notion that developers will now be held directly responsible for the effects of their code. This accountability may not be as dire as termination, but it should include paging devs in the middle of the night when their code breaks something in production. The telephone game must end. This doesn’t mean removing ops and security from the loop (quite the opposite), but it changes the role of ops and security (IRM).
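To make the testing point concrete, here is a minimal, hypothetical build-gate sketch. The scan functions are stand-ins for whatever SAST/DAST tooling you actually run, and the failure threshold is an arbitrary example:

```python
# Hypothetical CI build gate -- the scan functions are stand-ins for real SAST/DAST tools.
import sys

def run_static_analysis(commit_id):
    # Placeholder: invoke your static analysis tool here and return findings as dicts.
    return [{"id": "SQLi-001", "severity": "high"}]

def run_dynamic_analysis(target_url):
    # Placeholder: point your dynamic analysis tool at the freshly deployed build.
    return [{"id": "XSS-042", "severity": "medium"}]

def gate(findings, fail_on=("critical", "high")):
    """Fail the build if any finding meets the blocking severity threshold."""
    blockers = [f for f in findings if f["severity"] in fail_on]
    for f in blockers:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return len(blockers) == 0

if __name__ == "__main__":
    findings = run_static_analysis("HEAD") + run_dynamic_analysis("https://staging.example.com")
    sys.exit(0 if gate(findings) else 1)
```

The value here is that the gate runs on every build, so security feedback arrives at the same speed as every other test result.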

I hope you enjoy this new research!

“Security in a DevOps World” (published April 29, 2014)

Comments Off

Category: Uncategorized     Tags:

Where I’ll Be: Spring/Summer 2014 Events

by Ben Tomhave  |  March 27, 2014  |  Comments Off

A quick post… I’ll be traveling a bit this Spring and Summer to speak at a number of events. For non-Gartner events, we’re actively looking for GTP sales opportunities, so if you’ve been thinking about getting a subscription to Gartner for Technical Professionals, this could be your chance to meet face-to-face to discuss! :) For Gartner events, I will be available for 1-on-1s, as well as sales support as needed.

Here’s what I have scheduled through August:

Please drop me a note if you’ll be at any of these events and we can arrange to meet up. Also, if you’re interested in exploring a GTP subscription, please contact us and a sales rep will reach out to help coordinate. We love meeting with clients!

Hope to see you soon!

Comments Off

Category: Uncategorized     Tags:

Discussing RA Methods with CERT

by Ben Tomhave  |  March 26, 2014  |  Comments Off

As a follow-up to our paper, “Comparing Methodologies for IT Risk Assessment and Analysis” (GTP subscription required), Erik Heidt and I were given the wonderful opportunity to be guests on the CERT Podcast to discuss the work.

You can listen to the episode, as well as view program notes and a full transcript, at the CERT Podcast page here.

Comments Off

Category: Uncategorized     Tags:

Incomplete Thought: The Unbearable “Bear Escape” Analogy

by Ben Tomhave  |  March 20, 2014  |  Comments Off

“You don’t have to run faster than the bear to get away. You just have to run faster than the guy next to you.”

The problem with this analogy is that we’re not running from a single bear. It’s more like a drone army of bears, which are able to select multiple targets at once (pun intended). As such, there’s really no way to escape “the bear” because there’s no such thing. And don’t get me started on trying to escape the pandas…

So… if we’re not trying to simply be slightly better than the next guy, what approach should we be taking? What standard should we seek to measure against?

Overall, I’ve been advocating for years, as part of a risk-based approach, that the focus should be on determining negligence (or, protecting against such claims). Unfortunately, evolving a standard of reasonable care takes a lot of time. It’s been suggested in some circles (particularly on The Hill) that the NIST CSF may fill that void (for better or worse). One challenge here, however, is that the courts are charged with determining “what’s reasonable,” and so in many ways we’ll be challenged in evolving this standard (that is, it’ll take a while).

At any rate, I believe that there is an opportunity for constructing a framework (or, perhaps rubric would be a better outcome) by which people can start determining whether or not they’ve met a reasonable standard of care. Of course, one might also point out the myriad other standards in place that could serve a similar capacity. I don’t think CSF is remotely sufficient in its current incarnation, but that may improve over time.

It is probably worthwhile here to reinforce the point that “bad things will happen” and that it’s not so much a matter of “stop all the bad things,” but rather “manage all the bad things as best as possible” (at least after having exercised a degree of sound basic security hygiene). Anyone who’s familiar with my pre-Gartner writings will recognize the topics of resilience and survivability as key foci for risk management programs (and implicit in my thoughts and comments here).

But, how do you get to that point of a healthy, robust risk management program? Where do you start? How do you prioritize your work?

Here’s the priority stack I’ve been using lately:

  1. Exercise good basic security hygiene
  2. Do the things required of you by an external authority (aka “things that will get you fined/punished”)
  3. Do the things you want to do based on sound risk management decisions

This stack should tell you two key things. First, a reasonable standard has to consider a basic set of security practices applied across the board. It would probably be composed of policies, awareness programs, and foundational practices like patch mgmt, VA/SCA, appsec testing (for custom coding projects), basic hardening, basic logging/monitoring/response, etc. Second, from the perspective of considering a negligence claim (bearing in mind that IANAL!), I think looking at high-level practices will be key, rather than delving into specific technical details.

For instance: Did a breach occur because a system wasn’t up to full patch level? If so, is a reasonable patch mgmt program in place? If so, why wasn’t this patch applied? What does the supporting risk assessment show about why this particular patch was not applied?

Lather, rinse, repeat.

Obviously, more could be said… but, hopefully this stub gets you started thinking about how the business may need to protect itself from legal claims in the future, and how an evolved standard for “reasonable care” (as determined in court) may impact security practices and expectations for security performance.

Comments Off

Category: Uncategorized     Tags:

Join Us! SRMS has an opening!

by Ben Tomhave  |  March 20, 2014  |  Comments Off

We’re hiring for the Security & Risk Management Strategies (SRMS) team within Gartner for Technical Professionals. Full details here.

The official listing covers a LOT of territory, but here are some things to consider:

  • This position is with Gartner for Technical Professionals (GTP), which is distinctly different from the IT Leaders (ITL) team (which produces MarketScopes, Magic Quadrants, etc.). Our focus is on an architectural perspective, and we often approach research by addressing technical questions.
  • Positions are generally work-from-home/remote! :)
  • The amount of travel required tends to be fairly low.
  • Doing research is the top priority!
  • Work-load is a nice balance between research, writing, speaking, and “other stuff” (like sales support).
  • There are plenty of opportunities for client interactions.
  • The pace of research is very nice, in large part because our documents tend to be longer and more thorough. For example, a typical GTP document will run anywhere from 25-40 pages, whereas ITL papers will typically be under 15 pages. Our audiences tend to be different, though security and risk management do tend to have some overlap.
  • SRMS is an awesome team! This is really a fun bunch to work with.
  • The sky’s the limit! No kidding. There is no limit to the available research opportunities. It makes the job a ton of fun!

So… those are my quick thoughts. I joined this team in June 2013 and am loving it. If you’re looking for an opportunity to do technical research, then this might be the opportunity you’ve been waiting for!

Apply here.

If for some reason that link doesn’t work, please visit careers.gartner.com and search for “IRC26388.”

Feel free to reach out to me if you have questions or just want to chat about the opportunity.

Good luck!

ps: You can read Anton’s thoughts on the position, too.

Comments Off

Category: Uncategorized     Tags:

RSA 2014 Round-up: From Predictive Analytics to Denied Taco Service

by Ben Tomhave  |  March 13, 2014  |  2 Comments

One of the most challenging aspects of RSA each year is not just attending, but also recovering from, the event. :) It occurs to me as I finally get this recap post drafted that it’s been almost two weeks since I returned, and I’m only now getting a chance to put virtual pen to virtual paper to share my thoughts from the event. So, here goes… :)

Another USA edition of the RSA Conference is now in the books, and it was a doozy! For the first time in years, there seemed to be an air of hope and innovation, which was really quite refreshing. There were a few themes throughout the event; some were overt, while others were more covert. Overall, though, it seemed like we’re nearing a tipping point. The end-users are returning, the vendors are starting to evolve, and maybe – just maybe – there is cause for hope that we’ll find ourselves “jumping to the next curve” in the not too distant future.

Amid Hopefulness, Innovation Returns

In a surprising twist, innovation seems to be returning to the industry. Emerging from the doldrums of “business as usual,” there were a number of excellent conversations occurring all around the event. Mind you, many happened away from the show floor, or at least in the south expo away from mega-vendor-land. :) People seemed truly hopeful for the first time in a long while, even as the size and frequency of data breaches seem to be growing. There is, in fact, some hope that automation and DevOps will help transform enterprises, with security finally starting to catch up and realize the opportunities.

Predictive Analytics Starting to Emerge (for realz)

Speaking of automation, a major (official?) theme seemed to be around “predictive analytics.” While I’m not necessarily sure what that means in the vendor PR context, I do think there is something to be said for enterprises finally being shown how this nebulous “analytics” beast might be turned to a useful advantage. We saw vendors across numerous product verticals (such as appsec, VA/SCA, endpoint protection, etc.) starting to build in analytical capabilities that, in some cases, even went so far as to include quantitative risk analysis capabilities. It’s too early to say how long it will be until these emerging notions are truly ready for primetime, but again, the mood was hopeful. Perhaps the most interesting notion is being able to take automated scan/test results from varying sources and run them through a “risk engine” that in turn uses asset information and associated valuation information to automate impact scores put in terms of $$$. From a prioritization perspective, this advance could be very interesting going forward.
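As a back-of-the-napkin illustration of that last idea (and it is only an illustration: the asset register, likelihood figures, and dollar values below are all invented), such a “risk engine” might look roughly like this:

```python
# Invented data and weights -- a sketch of joining scan findings to asset valuation.

ASSET_REGISTER = {
    "web-01": {"business_value_usd": 1_500_000, "exposure": 1.0},  # internet-facing
    "hr-db":  {"business_value_usd": 4_000_000, "exposure": 0.3},  # internal only
}

SCAN_FINDINGS = [
    {"asset": "web-01", "finding": "outdated TLS", "likelihood": 0.20},
    {"asset": "hr-db",  "finding": "missing patch", "likelihood": 0.05},
]

def expected_impact(finding):
    asset = ASSET_REGISTER[finding["asset"]]
    # Expected loss = likelihood x exposure x business value (crude, but in dollars).
    return finding["likelihood"] * asset["exposure"] * asset["business_value_usd"]

for f in sorted(SCAN_FINDINGS, key=expected_impact, reverse=True):
    print(f"{f['asset']:8} {f['finding']:15} ~${expected_impact(f):,.0f} expected impact")
```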

Reducing Friction

One phrase I heard over and over again while speaking with vendors was “reducing friction.” Regardless of context, the general meaning here is that you don’t want security activities or functions to be one-offs that end up derailing business as usual. In an appsec context, this means feeding appsec testing findings directly into native bug tracking systems so that developers tackle the fixes as part of standard practice (as opposed to handing them a big report that will get ignored or burned because it isn’t in a native format for their consumption). I heard similar phrasing in other realms, too, such as around authentication and authorization. Consider, for instance, the notion of inherent, transparent (low friction!) authentication based on analysis of many factors (akin to what I described in my September 2013 post “AuthN TNG: Many Factors, Confidence, and Risk Scoring”). There are now several kits available for mobile devices that allow for built-in continuous monitoring capabilities that essentially profile users when they run an app, adding that contextual information to the overall authentication picture. One vendor described this as a “new” 4th type of authentication factor (“what you’re doing” or “contextual authentication”).
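To illustrate the bug-tracking point (purely a sketch, with a made-up tracker endpoint, token, and field names), the integration is conceptually as simple as this:

```python
# Sketch only: the tracker URL, token, and field names below are hypothetical.
import requests

TRACKER_API = "https://tracker.example.com/api/issues"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"

def file_bug(finding):
    """Push an appsec finding into the developers' own backlog instead of a PDF report."""
    payload = {
        "title": f"[AppSec] {finding['name']}",
        "description": finding["evidence"],
        "labels": ["security", finding["severity"]],
    }
    resp = requests.post(
        TRACKER_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example (commented out; requires a real tracker):
# file_bug({"name": "Reflected XSS in /search", "evidence": "request/response capture", "severity": "high"})
```

Findings that land in the same queue as every other bug get triaged by the same people, on the same cadence, with far less friction.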

Overall, it will be very interesting to see how this concept of “reducing friction” plays out going forward. I think it certainly plays well to a DevOps-oriented crowd, and I’m hopeful (there’s that word again!) that it can lead to a shift in how security architecture, technologies, and decisions are considered, composed, and executed.

Automating Lower Risk Decisions/Remediation

Speaking of automation and reducing friction, an interesting idea I encountered in a couple of places was the notion of automating lower-risk decisions or remediation. For example, you run a vuln scan and you find a list of ports or services that are open, but they’re not really high-risk items. What do you do with them? Up until this point, most enterprises will simply ignore these as “low risk, no concern” findings. But what if you could push a button and have the changes automatically made for you, such as after a quick vetting discussion? This notion could potentially scale nicely over time, and if you give it some hooks into a DevOps build and release pipeline, then you might even start to see some very interesting changes, too.
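A minimal sketch of what that “push a button” flow could look like follows; the remediation actions are placeholders that would, in practice, be wired to your configuration-management or SDN tooling:

```python
# Sketch: auto-remediate only findings below a risk threshold; queue the rest for humans.
# The remediation actions are placeholders for real config-management/SDN API calls.

REMEDIATIONS = {
    "open_telnet":  lambda host: print(f"[auto] disabling telnet on {host}"),
    "weak_ciphers": lambda host: print(f"[auto] tightening TLS config on {host}"),
}

def triage(findings, auto_fix_max_risk=3):
    for f in findings:
        action = REMEDIATIONS.get(f["type"])
        if action and f["risk_score"] <= auto_fix_max_risk:
            action(f["host"])  # low risk: fix it automatically
        else:
            print(f"[queue] {f['type']} on {f['host']} needs human review")

triage([
    {"type": "open_telnet", "host": "10.0.0.5", "risk_score": 2},
    {"type": "unknown_service", "host": "10.0.0.9", "risk_score": 7},
])
```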

Part of this idea ties into the notion of “configuration as code” that we’re starting to hear more about, especially as it pertains to Software Defined Networks (SDN) and Software Defined Perimeters (SDP). In fact, in many ways, as SDN and SDP become increasingly automatable, there is a good opportunity to start encoding security requirements in such a manner that they also just become configuration items that are automatically applied to an environment (dare I even suggest that we may some day see “policies as code”?). It’s an interesting notion, which, when combined with risk analytics engines, could have some very interesting results in the near future.
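To show what “policies as code” might mean in practice (again, just a sketch, with invented policy keys and a made-up sample configuration), a security requirement expressed as data can be evaluated automatically against whatever configuration the pipeline is about to deploy:

```python
# Sketch: a security policy expressed as data, checked automatically at deploy time.
# The policy keys and the sample config below are invented for illustration.

POLICY = {
    "ssh_from_internet": False,  # no inbound SSH from 0.0.0.0/0
    "encrypt_at_rest": True,
    "max_open_ports": 3,
}

def check(config):
    violations = []
    if config.get("ssh_from_internet") and not POLICY["ssh_from_internet"]:
        violations.append("inbound SSH exposed to the internet")
    if POLICY["encrypt_at_rest"] and not config.get("encrypt_at_rest"):
        violations.append("storage is not encrypted at rest")
    if len(config.get("open_ports", [])) > POLICY["max_open_ports"]:
        violations.append("too many open ports")
    return violations

print(check({"ssh_from_internet": True, "encrypt_at_rest": False, "open_ports": [22, 80, 443, 8080]}))
```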

Colbert’s Closing Keynote

Despite the various “protests” being lodged against EMC/RSA Security for an alleged business interaction with the NSA, Stephen Colbert did take the stage for the closing keynote as planned. As he put it, he looked at the requests for him to back out, and then he looked at the contract he signed, and thought that following through on his commitments was probably more important, at least so long as the check cleared.

As an aside, it should be noted that the planned protests had no real perceived impact on the event, which is rumored to have had attendance in the 25-30k range (I’m waiting on “official” numbers from RSA). Yes, the Vegas 2.0 crew did run their awareness event on the Wednesday of RSA, and some people were handing out pamphlets around the event, but really, that was about all that people noticed. I spoke to several people who planned to attend the competing TrustyCon event, but most of those people were also RSA speakers or attendees. Basically, the protests seemed to amount to much ado about nothing…

Overall, I found Colbert’s keynote to be one of the most enjoyable in recent years (which have included people like Bill Clinton, Condoleezza Rice, and Tony Blair, as well as Adam Savage and Jamie Hyneman of Mythbusters). Colbert delivered a prepared talk that seemed to be reasonably well researched, full of political jabs, as well as a recurring theme about his “new startup,” CloudFog. After the address, he then did something truly unique… he sat down in a chair, alone on stage, and took questions from the audience. During this period he effectively shifted out of his “Colbert Report” persona and responded largely out of character, which was quite fascinating. Unsurprisingly, Colbert was thoughtful, intelligent, and insightful, even when lampooning politicians or even the event’s namesake.

For a truly gack-worthy summary of the closing keynote with Stephen Colbert (which was, I thought, well done), check out CNN’s coverage.

Also, for as long as it lasts, you can check out the entire session here on YouTube:
Stephen Colbert at RSA Part 1
Stephen Colbert at RSA Part 2
Stephen Colbert answering questions…


And, well, that’s about it. Overall, RSA seemed to be fun again this year, despite it being my first (grueling) year as a Gartner analyst. I spoke to dozens of vendors (officially and unofficially) and, of course, chatted with hundreds of end-user attendees. As always, I found the event to be very useful in gauging the timbre of the industry.

2 Comments »

Category: Event Notes     Tags:

Fatal Exception Error: The Risk Register

by Ben Tomhave  |  March 7, 2014  |  8 Comments

I read this article a few weeks ago and set it aside to revisit. In it, the author states that “Risk management used to be someone else’s job,” and then later concludes that “…in a global business arena that is increasingly unforgiving when it comes to missteps, the message is clear: Everyone—including you—now has to be a vigilant risk manager.” Yes, well, sort of, maybe, kind of… hmmm…

During RSA 2013 (last year) I had the opportunity to sit in on a half-day event around IT risk management. When I joined the closing panel, I asked how many people in the audience had “risk manager” in their titles, and then asked them to leave their hands up if they actually made decisions based on their risk analysis, or if they simply made recommendations. Unsurprisingly, the vast majority (possibly all) of the hands went down. You’re not “managing” anything if you’re not empowered to make a decision. And, inevitably, that means you’re going to be one of those people contributing to the “risk register,” which is the place where all good risk conversations seem to go to die.

My opinion is not necessarily shared by the rest of Gartner, or even by the rest of my team, but I want to make a few points about these risk registers and why I think they’re a faulty concept that needs to be deprecated within our environments. Similarly, I think these surveys (like the one noted in the article referenced earlier) are also silly. “What are your top concerns?” If you’re a business, it’s going to be “staying in business” and “growing revenue” and “avoiding foolishness.” The specifics of each of these vary year-to-year, but let’s be honest for a moment and admit that, at least within the US, this is really what execs are “worried” about (if you can even call it that – I’m convinced most really don’t think too much about it, instead preferring to focus on making good decisions that lead to upside realization).

Here are three reasons why I think the risk register is really a silly notion:

Shouldn’t risk findings be driving actual remediation activities?

One of the reasons I hate risk registers is because, as a former consultant, auditor and assessor, I’ve often seen the same items maintained on the list year after year after year. What’s the point of that list? If you have a risk finding worth recording on the “really important scary things” list, then you doggone well better have a remediation plan or compensating controls. Your risk management program serves to inform, as well as to drive good decision-making. Risk registers don’t meet this need at all. I would far prefer that enterprises resolve to have a clear “register” every year (or quarter!) so that all risk assessment findings either drive directly to remediation or are summarily managed through compensating controls or are summarily dismissed as unconcerning. Failing to take action strikes me as an indefensible approach that will some day land your business in hot legal waters.

What exactly are you trying to accomplish with it, anyway? (it’ll never be complete)

You’ve built a risk register, probably over the course of a few years. Now what? What was the objective of making this list? Are you trying to give your executives a migraine? Or, maybe you secretly hope that hackers will find the list and start taking advantage of your weaknesses? I’ve heard of enterprises that make these lists and then keep them super-secret, but to what end? More importantly, though, is that these lists will never be complete. “Risk” evolves over time. Moreover, a lot of operational risks, particularly under IT, get short shrift and are underrepresented within risk registers. Or, even worse, they get rolled up into meaningless aggregate statements like “cybersecurity risk is high” (whatever that means?!). If your goal is prioritization, then improve your risk analysis and risk assessment capabilities. If your goal is to make better decisions, then turn that data into something actionable. But, know that the list is temporal and should always be in flux. If it’s not… if your risk register tends to be very static… then I submit you’re not truly doing something useful.

Risk registers reinforce the really bad idea of the “annual risk assessment.”

One of my other pet peeves around risk registers is that they tend to reflect the fatally flawed notion of the “annual risk assessment.” I’ll address this topic in depth in another blog post, but suffice it to say, if you’re only “assessing risk” on an annual basis, you’re doing it wrong. Risk assessment and risk management are ongoing activities that should be leveraged to make good decisions throughout the business calendar, rather than just ahead of the annual budget cycle. All meaningful decisions should be supported by at least a lightweight risk assessment that helps analyze key factors toward ensuring that due diligence is performed and that a reasonable standard of care is met.


When all is said and done, the risk register typically becomes a dumping ground for “things we don’t know how to manage” or “things we don’t care enough about to manage.” This is unacceptable, and often a cop-out. Any finding worth listing is worth listing in an action plan for remediation. Can’t do everything this year? No problem, put it on your strategic roadmap, documenting how you’re going to address it. Or, document your compensating controls (like insurance) and then move on. Yes, documentation should exist, but not as a list of “really scary things.”

8 Comments »

Category: Risk Management     Tags:

Patch Your Internet Router/Gateway!

by Ben Tomhave  |  February 14, 2014  |  1 Comment

Just a friendly fyi… if you’re running an Internet router/gateway from Asus or Linksys, please make sure that you’ve updated the firmware recently! In some ways, this strikes me as another example of attacks on the Internet of Things (IoT). If you’ve been following IoT attack trends, then you may have read about the possibility that a refrigerator may have been found sending out spam.

Things seem to be getting worse, and quickly. First, for a little background, please note that the Asusgate vulnerability in question was first disclosed in June 2013.

While Asus fixed the bug, many, many, many routers have not been updated, and thus there has been some significant data disclosure (a non-Gartner colleague has looked through some of the compromised data and found file names suggesting highly sensitive info from all sectors, including law firms and DoD).

Now we also learn that there appears to be a worm out there affecting Linksys devices (now owned by Belkin, btw, in case you missed that announcement last year).

Read more from SANS ISC: “Linksys Worm ‘TheMoon’ Summary: What we know so far”

So… what’s the take-away here? Well, quite simply, it’s this: You need to monitor and patch ALL your Internet-connected devices, whether that be mobile or desktop or streaming media or even your routers/gateways. Failing to do this can very well lead to compromise and abuse.

Welcome to a brave new world of interesting times…

1 Comment »

Category: Common Sense     Tags: