David Norton

A member of the Gartner Blog Network

David Norton
Research Director
7 years at Gartner
25 years IT industry

In his role as research director with Gartner's application development and architecture team, David Norton supports clients by developing and delivering quality and timely research.

The Rise of the Digital Invisibles

by David Norton  |  February 23, 2014  |  Comments Off

In the future there will be two tribes: the "digital visibles" and the "digital invisibles".  One will embrace digital technology without question; the other will limit it to their own terms. One will be on the grid 24/7; the other will appear and disappear like a stealth fighter.  One will be a digital open book to be consumed, marketed to, and observed; the other elusive, a digital enigma, a shadow in your data.

But this is not another message of Luddites and technophobes proclaiming that technology has gone too far and society is rapidly moving towards the dystopia of Metropolis or 1984.  Digital invisibles will be some of the most tech-savvy people in society. They will embrace technology, understand it and use it, in a way that makes them part of the digital society but at the same time apart from it. Digital invisibles will seek to use technology to their advantage whilst also using it to shield themselves from their government, employers, businesses and peers.

We commonly hear "if you have nothing to hide, why would you worry?", the assumption being that an individual who tries to remain anonymous online is up to no good.  At this point I could say that all the publicity around the NSA and GCHQ has changed people's attitudes to anonymity and driven more of us to become digital invisibles; the truth is it has not.  The driver for the digital invisibles will not be the state; it will be business and their peers.

Imagine the following. You are sitting in a coffee shop. Your tablet computer (or glasses) contains a built-in visible-light and infrared (IR) camera and a Doppler-shift motion sensor.  Gesture and body-heat data provide a real-time analysis of the likely emotional state of the other coffee shop customers, plus their general state of health, i.e. are they running a fever and likely to have flu.  The facial recognition software is matching faces to names and providing data on who they are: do you know them, did they go to the same school, did they date your sister?  And where physical sensors fail, near-field comms and cloud-based services will fill in the gaps using information from the other customers' own digital devices and social networks.  This might sound a bit too much like Star Trek now, but what about in 10 years?

It's in these situations that we may see the biggest contrast between the digital visibles and the digital invisibles.  Your augmented view will have gaps: individuals who appear as digital ghosts. You see them, but your digital world does not.  Their anti-thermal clothing limits your IR sensor's accuracy, they are wearing anti-facial-recognition glasses, and your tablet returns no data on them.

Let's make it more interesting. Imagine you are selling your house and a prospective buyer asks whether you will accept an offer of £300K.  The truth is you will, but you're hoping for a figure closer to £320K, so you say "sorry, no".  The prospective buyer's digital glasses' IR, gesture sensors and microphones all detect a rise in your stress levels.  The glasses also pull from the cloud the information that you sold your last two properties at 20% under the asking price.  In the buyer's glasses a message reads: "85% probability the last statement was false; 92% probability they will accept £300K".
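For the curious, here is a back-of-an-envelope sketch of how such a verdict could be computed: fuse a few normalised sensor readings through a logistic score. Every feature name, weight and threshold here is invented for illustration; any real affect-recognition system would be far more sophisticated.

```python
import math

# Hypothetical, illustrative weights -- not from any real product.
WEIGHTS = {"ir_temp_rise": 2.0, "gesture_stress": 1.5,
           "voice_pitch_shift": 1.2, "sold_under_asking_before": 0.8}
BIAS = -2.5

def deception_probability(signals):
    """Logistic fusion of normalised (0..1) sensor signals."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-score))

# A relaxed seller vs. one whose stress spikes after saying "sorry, no".
calm = {"ir_temp_rise": 0.1, "gesture_stress": 0.1,
        "voice_pitch_shift": 0.1, "sold_under_asking_before": 1.0}
stressed = {"ir_temp_rise": 0.9, "gesture_stress": 0.8,
            "voice_pitch_shift": 0.7, "sold_under_asking_before": 1.0}
print(round(deception_probability(calm), 2))
print(round(deception_probability(stressed), 2))
```

The point is not the maths; it's that the buyer's device only needs your sensor signature and a bit of cloud history to make a confident guess.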

It's not hard to imagine a similar situation in the boardroom during an M&A or licensing negotiation.  In these situations one party has a very clear advantage if the other party is a digital visible. The digital invisibles will have the lead in a world that combines game theory and real-time digital data; they will see your cards whilst showing you a digital poker face.

But what of digital business?  The digital invisibles are not going to bring down your digital strategy, but they could make big holes in it. At one extreme they may choose simply not to engage with you on a digital level at all, but that would be more Luddite behaviour.  More likely the tech-savvy digital invisibles will take advantage of the incentives you offer them to engage with your digital strategy, but then leave you high and dry when it comes to the payback you were expecting in the form of rich customer data.  I will take your app that gives me a 10% discount at the till, but block it from obtaining and sending location data from my device. This may not have a major impact on your strategy unless you have the misfortune of attracting digital invisibles, in which case your data mining and analytics will be working off incomplete data. Think what that will do to your dashboards, forecasting and decision making.
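To see how quickly the numbers go wrong, here is a toy example (all names and figures invented): a loyalty app whose analytics silently drop the transactions where location was blocked.

```python
# Toy loyalty-app dataset: digital invisibles redeem the discount but
# block location, so their rows carry None.
transactions = [
    {"spend": 4.50, "location": "Store A"},
    {"spend": 6.20, "location": "Store A"},
    {"spend": 5.10, "location": "Store B"},
    {"spend": 7.80, "location": None},   # invisible: discount taken, no data
    {"spend": 9.40, "location": None},   # invisible
]

def spend_by_location(rows):
    totals = {}
    for r in rows:
        if r["location"] is None:
            continue  # silently dropped -- this is where the dashboard lies
        totals[r["location"]] = totals.get(r["location"], 0.0) + r["spend"]
    return totals

visible_total = sum(spend_by_location(transactions).values())
actual_total = sum(r["spend"] for r in transactions)
print(f"dashboard sees £{visible_total:.2f} of £{actual_total:.2f} real spend")
```

Two opted-out customers and the location dashboard is already missing half the spend; scale that up and your forecasts are built on sand.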

So how real are the digital invisibles?  Anti-thermal-imaging clothing is on sale now; there are apps that limit how much data you provide to other apps, social sites and individuals.  There are bags that block near-field comms and mobile signals.  There are even makeup styles that interfere with facial recognition software.  Jamming technology is being used, often illegally, to limit location and tracking information (see http://www.bbc.co.uk/news/technology-17119768).  Executives are being advised to limit how much information they provide on social sites, and sales staff are being told not to crow about their latest deal on Facebook.

Socially, digital invisibles will be driven by a desire to maintain a level of personal mystique and to stand out from what they perceive as "digital sheep"; or simply not to tweet "the game was great" after calling in sick, forgetting their boss follows them on Twitter (they may not even know).  As customers they will be hard to reach and quantify, an anomaly in your data.  And across the table from you in business they will have the advantage.

Well, I am going to wrap this blog up before I get too carried away with digital ethics, government digital policy and the digital proletariat.


Category: Uncategorized

The Debt Collectors

by David Norton  |  October 25, 2013  |  Comments Off

 KNOCK  KNOCK  KNOCK – “Come on, open the door”

"Just a minute, I am coming," said Jo the CIO as he walked slowly to the door.  He knew this day was coming, the day when all those "we will worry about that tomorrow"s would catch up with him.

“Come on, open the door, don’t make us break it down”   

 “Sorry, I have to disable my security before I can open the door”

There was the sound of collective laughter from the other side of the door. "Well, that won't take long, we installed it. Just type user "admin", password "admin" and it will open right up."

The CIO did as the voice said; click, and the door lock opened. All those years typing user = IamTheBoss and password = IamTheBoss123, and a simple "admin", "admin" would have done.

The increasingly nervous CIO opened the door to come face to face with four large men dressed in black suits, sporting dark sunglasses and black gloves; worryingly, they all had scuff marks on their knuckles.

"I am Mr Technical Debt, representing Poor Agile Inc; Mr TD to my friends, so you can call me Mr Technical Debt.  These are my associates: Mr Code Complexity, Mr Inf-Destructor and Mr Anti Pattern.  Mr Bad Security has accidentally locked himself in the car but will be here shortly."

"Well, what can I do for you, gentlemen?" said the CIO, noticing that Mr Anti Pattern had a bad smell about him.

"Don't play dumb with us, you know why we are here. You borrowed from us and now it's pay-up time," barked Mr Code Complexity.

"But... but I can't pay, I didn't know the debt interest was so high. Come on guys, I need more time. I tell you what, come back next budget cycle and I will have it then."

Mr Technical Debt's cold dead stare became even colder and a bit deader. "Did not know the interest on the debt was so high, you say! Hmm, I seem to recall that when you went into debt with my associates and me, you did not care about the debt interest. We make it clear to all our borrowers from Poor Agile Inc that we have, shall we say, elevated interest rates on all debts and severe penalties for non-payment."

It was Mr Inf-Destructor's turn to speak. "You were warned. Your developers told you not to get into debt with us, your development manager told you to start refactoring the debt out, even Gartner told you. But no, you didn't care, you ignored them all; you just wanted to get the business off your back and get the stuff out the door."

"What do you mean, severe penalties?" asked the CIO, not really wanting to hear the answer.

Mr Anti Pattern gave a slight grin. "Well, it will start with me making it hard for your developers and maintenance guys. The odd break here, the odd break there, then bam, I am gonna hit them with a major performance problem."

"And once my associate has finished, I am gonna hit you with code so complex you will be lucky to release a new feature once a year, and you can kiss goodbye to that new mobile app you've been dreaming about," added Mr Code Complexity.

"And then it is my turn," Mr Inf-Destructor said. "I am gonna max out your servers, overload your network and turn your virtual environment into a real nightmare."

Mr Technical Debt leant forward and in a forceful whisper said, "And if you still don't pay back the debt, we are going after the things you hold most dear."

"OMG, not my users, not the business, please, no!" cried the CIO.

"Yes," Mr Technical Debt replied. "We are gonna bring down their apps, slow down their processes, and break their capabilities one by one. We are going to make them hurt so bad, and it's all your fault."

"Please, I beg you, it's not my fault, the business made me do it. I only went into debt to please them; they kept asking for more and more. It's not my fault."

"We sympathise, we really do," said a slightly embarrassed Mr Bad Security, who had now joined the party, "but it was not the business who came to us, it was you."

"But what could I do? I was under so much pressure. I can't say no to the business, you must understand that," said an ever more panicky CIO.

"Even now you don't get it. It was your job to make the business see what would happen; it was your job to pay the debt back, not get into more. As for the business forcing you: you don't give an alcoholic another drink just because he asks," said an increasingly impatient Mr Technical Debt.

"So it's payback time – NOW!"

The CIO gave one last pleading “Noooooo…………..”

“Wake up, wake up dear, you’re having a bad dream, wake up. Look you’re soaked in sweat, what were you dreaming about?”

"Sorry love, it was that crazy dream again, just a crazy dream. I will speak to my analyst about it; maybe he can tell me what all these IT debt dreams are about. Night, love."

KNOCK  KNOCK  KNOCK

 “Jo, wake up, someone is at the door”


Category: Agile Application Development

The Nexus Will Demand Continuous Everything and Smart Systems

by David Norton  |  June 28, 2013  |  Comments Off

I have been thinking about the Nexus of Forces (cloud, mobile, social and big data) and what it really means to me and my clients. I have come up with an analogy; it's a bit of an odd one, but it demonstrates two areas I think are important.

Imagine there is a spoon you have been asked to balance in a jar, with three liquids to try.  The first is jam (not technically a liquid, but never mind).  You place your spoon in the middle of the pot of jam, remove your fingers, and wait. The spoon will stay upright in the jam with no help from you; it balances easily.  Why? Because the viscosity of jam is so high (roughly 8,500 mPa·s) that it stops the spoon from moving.  You will notice that over time the spoon falls towards the edge of the jar, but it's so slow you can correct it, either by changing the angle of the jar or by poking the spoon back into the vertical position with your finger.

Now you have to repeat the trick with honey.  Honey has a much lower viscosity than jam, about a quarter.  Once you place the spoon in the vertical position and remove your finger, it will start to fall towards the edge of the jar. But it falls really slowly; you have time to correct the fall by changing the angle of the jar. Unlike jam, which only required infrequent intervention, honey needs you to make many little adjustments each minute.

The final liquid is water.  Not surprisingly, given the viscosity of water (roughly 0.9 mPa·s), once you place the spoon in and remove your finger it instantly falls towards the jar edge. It's almost impossible for you to balance the spoon; it's moving too quickly and you react too slowly.  The only way to keep the spoon vertical is to keep your finger on it, i.e. cheat.

With all three liquids you are sensing the position of the spoon relative to the edge of the jar, deciding the best way to correct the spoon's position, then moving the jar accordingly.  It's a classic John Boyd OODA loop: observe, orient, decide, act. (John Boyd is one of my heroes; if you want to read more, have a look at my paper "The Fly-By-Wire Organization", 2007.)
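If you fancy playing with the analogy, here is a crude simulation: the spoon drifts at a rate inversely proportional to viscosity, and the human OODA loop only corrects it every so many time steps. The drift model and all the numbers are invented purely to make the point.

```python
def balance(viscosity_pas, react_every=10, steps=100, theta=0.01):
    """Return the step at which the spoon 'falls' (tilt > 1.0), or None
    if the periodic human corrections keep it up for the whole run."""
    for t in range(1, steps + 1):
        theta *= 1 + 0.01 / viscosity_pas  # thinner fluid -> faster drift
        if t % react_every == 0:
            theta *= 0.1                   # observe, orient, decide, act
        if theta > 1.0:
            return t
    return None

print(balance(8.5))      # jam: never falls
print(balance(2.1))      # honey: never falls, but drifts faster between corrections
print(balance(0.0009))   # water: falls before the first human correction arrives
```

With jam and honey the occasional correction is enough; with water the spoon has fallen long before the human loop gets its first turn, which is exactly the problem below.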

At this point you are asking: what does spoon balancing have to do with the Nexus?  Well, let me explain.  The liquid's "viscosity" is the business and IT environment, while the "spoon" is the systems.  Jam was the late 80s and early 90s, when the viscosity of business change and IT systems was relatively high.  Systems delivery was measured in years (the average delivery time for a mid-size system was 2 years 6 months), and application languages and architectures did not lend themselves to rapid change. For most people, desktop support meant the legs that held the table up.  The internet had only just started (remember Gopher, before the WWW?); there were no mobile apps, and the closest thing to social media was IRC and USENET.

Honey is the late 90s and 00s, when the pace of change picked up rapidly. The internet became all-pervasive and mobile computing was taken more seriously.  The business woke up to IT as a business differentiator, and CIOs found "balancing the spoon" much, much harder.  But legacy processes and legacy systems acted like a brake and in fact did us a favour, allowing us time to catch our breath and decide which way the "spoon" was falling and what to do about it.

Now we come to the last jar: water.  This is the period we are moving into now, with low business and IT "viscosity": the period of the digital native, cloud, BYOD, "gold" in the data and everyone connecting to everyone else; in short, the Nexus.  Process and capability owners, CIOs and application managers will not have time to balance the spoon using the old approaches; the pace of change will just be too fast.

So what to do?

First, let's look at the systems. In the future, humans will be removed from "balancing the spoon".  In our last jar (water), imagine if the spoon were somehow self-aware and could change its own direction to keep itself vertical. Now, a spoon cannot do this unless it's some sort of Hogwarts magic spoon, but IT systems can.  Applications can be built in such a way that they sense their environment, autonomously decide on a course of action and monitor the outcome. In the future, applications will use many of the techniques associated with agent-based systems, being goal-directed with learning capabilities.  High-frequency trading systems already do this; I worked on a logistics system that dynamically changed based on the goals of the business without human intervention; and we have networks that can self-diagnose and self-heal.  Applications will "balance the spoon" in ways we have never even thought of; the spoons are getting smart.
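As a toy illustration (drift model and numbers invented), the difference a "smart spoon" makes is simply that the sense-decide-act loop runs inside the system itself on every tick, with no human in it:

```python
def smart_balance(viscosity_pas, steps=100, theta=0.01, goal=0.01):
    """A 'self-aware spoon': every tick it senses its own tilt and
    corrects back towards its goal, no human in the loop."""
    worst = theta
    for _ in range(steps):
        theta *= 1 + 0.01 / viscosity_pas  # sense: drift since last tick
        if theta > goal:                   # decide: off-goal?
            theta = goal                   # act: self-correct immediately
        worst = max(worst, theta)
    return worst

# Even in "water" the tilt never gets anywhere near falling over (1.0).
print(smart_balance(0.0009))
```

Same turbulent environment, but because the loop runs at machine speed instead of human speed the spoon stays up; that is the shift from human-corrected systems to goal-directed ones.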

The second change is how we react to the lowering of business and IT viscosity and the demand for faster response.  We are all going to have to get used to a world of continuous delivery, where features are added, updated and removed based on customer feedback and big data opportunities.  The current shift to agile development, PaaS, iBPMS and DevOps is just the tip of the iceberg. PMO and application governance will have to be continuous and less reliant on stage gates. Applications will be managed as products, continuously from concept to retirement, instead of as stop-start projects.  We will need to look constantly at our application and project portfolios to maintain business value and quality.

The Nexus CIO will have to look at every function, from enterprise architecture to help desk, and ask how it can add business value on a continuous basis. And the Nexus CIO will have to make sure that this continuous IT delivery is all joined up end to end.

There will be a more practical follow-up to this blog in the coming months as I take some of these ideas and put them into a system dynamics model I have been working on.  Until then, see how long you can balance a spoon in a glass of water, and don't cheat by using ice!


Category: Uncategorized

Gaming Velocity or What Star Trek Taught Me About Metrics

by David Norton  |  September 7, 2012  |  Comments Off

Agile development is not immune to commercial pressures and the adverse effects they can have on individuals and teams.  We are already seeing issues with "under-promise, over-deliver" in companies new to agile and in organizations with more conservative command-and-control cultures. What is more, it is an increasing problem with agile sourcing.

The issues we currently see arise when teams are under excessive pressure to show value. This can lead to a problem of "group-think": the team wants to show its value but is also fearful that if it over-commits and fails, this will be held against it (the blame culture). The net result is that the team over-estimates to give itself more contingency than it actually needs.  Normally this will self-correct as the team grows more confident with the backlog and the burndowns show more bandwidth, exactly as it should with a good empirical feedback system.

But on occasion it does not self-correct, as the team expands the work to fill the extra time gained by being conservative about its abilities and over-estimating task effort. In this scenario the team can show dramatic improvement and over-delivery when really under pressure; the business comments, "those guys pulled out all the stops", and the project is deemed a success, but in truth the team has been operating at a lower productivity level and maintaining an artificially low velocity.

However, let us be fair: most of the gaming of velocity and scope is done by managers who want to show themselves in a good light or are fearful for their position. This might sound counterproductive; logically, if they wanted to show their value they would push their teams to deliver more business value.  Well, let's have one of the world's, indeed the galaxy's, most famous engineers explain: Montgomery "Scotty" Scott, from Star Trek.

Captain Kirk: “How long to re-fit?”

Scotty: “Eight weeks. But you don’t have eight weeks, so I’ll do it for you in two.”

Captain Kirk: “Do you always multiply your repair estimates by a factor of four?”

Scotty: “How else to maintain my reputation as a miracle worker?”

Captain Kirk: “Your reputation is safe with me.”

The sad fact is that "under-promise, over-deliver" is easier to do and, in the short term, less risky than changing team behavior and actually improving productivity.  As a manager, gaming the metrics is something that is under your control: you can tell the team to add task contingency or not to commit to risky stories. When penalty and incentive clauses in contracts are involved, as they are with agile sourcing, there is real pressure to game the system.

Monitoring team efficiency will not reveal this issue, as efficiency is derived from velocity (efficiency = velocity / resource days). The best approach is external or internal benchmarking against similar projects and teams, so we can see whether the velocity is lower than we could reasonably expect. Another approach is to push the velocity up to the point where the burndowns start to show failure and overtime requests start coming in, then throttle back by 10% (I want highly productive teams, not dead ones).
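As a quick sketch of that benchmarking idea, using efficiency = velocity / resource days (team names and all numbers are made up for illustration):

```python
def efficiency(velocity_points, resource_days):
    """Efficiency as defined above: story points delivered per person-day."""
    return velocity_points / resource_days

# Hypothetical sprint data: (story points delivered, person-days spent).
teams = {"Team A": (40, 50), "Team B": (22, 50), "Team C": (38, 50)}

# Benchmark: median efficiency of the comparable teams.
effs = sorted(efficiency(v, d) for v, d in teams.values())
benchmark = effs[len(effs) // 2]

# Flag teams sitting suspiciously far below the benchmark -- a possible
# sandbagged (artificially low) velocity, worth a closer look.
for name, (v, d) in teams.items():
    e = efficiency(v, d)
    if e < 0.75 * benchmark:
        print(f"{name}: {e:.2f} points/day vs benchmark {benchmark:.2f} -- investigate")
```

The flag is not proof of gaming, of course; it is only a prompt to go and look at the backlog, the estimates and the team's context.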

Agile is about trusting people to do their best, and it sounds very un-agile to suggest that agile teams may not be doing their best for the customer. The reality is that people react in different ways under pressure ("fight, flight or freeze"), and the more agile goes mainstream, the more we will see its principles manipulated or outright abused as it is pushed into organizations with the wrong type of culture for agile.  Until organizational culture changes, the issue of agile under-promise, over-deliver is going to be a reality.

So do it right and "boldly go where no agile team has gone before".

 


Category: Agile Application Development

The Ticking Time Bomb Of Technical Debt

by David Norton  |  December 4, 2011  |  1 Comment

The timer flashes red: 5:32, 5:31, 5:30, counting down to its final terrible conclusion. James Bond calmly leans over the device. "So is it the red or the green wire? Let's go with lucky red," snip. The counter jumps from 5:24 to 0:30. "Ahh, not so lucky red; let's try the green," snip. The counter stops at 0:07. "Hmm, my lucky number," says our hero.

And that’s the way it is in the movies; the hero disarms the bomb with 3 seconds to spare on the clock and is home in time for tea, while the world sleeps soundly in its bed.

But this is not the movies. The ticking time bomb of technical debt is ticking louder and louder, and the clock is counting down faster and faster, so where is James Bond when you need him? Well, I will tell you where he is: he's been outsourced, and the only contract out on James Bond these days is the one that says deliver to this date, at this price, or else! He is too busy trying to keep his head above water to disarm the technical debt bomb; in fact, he cannot even hear it ticking.

But let's be fair, it's not just the trend of outsourcing that has generated the technical debt crisis. Technical debt started with the very first program 60 years ago: the first "I'll fix that later", the first "the design's not great but it will do", the first cry of "just get it out the door".

So if the bomb has been ticking away for 60 years and we have been blissfully ignoring it for just as long, why should we care now?

First, as my colleague Andy Kyte has stated, technical debt and its big brother, IT debt, will break the trillion-dollar mark in the next five years.  That's a trillion dollars of development that needs to be done to remove bad code, poor architecture and ill-thought-out technical strategy, or simply to deal with time catching up with good design.

Second, the pace of business and technical change, coupled with faster delivery methods like agile and citizen development, is speeding up the timer.  Agile is a double-edged sword: when done right, practices like refactoring can help us remove technical debt and stop it being introduced in the first place; when done wrong, agile can be a technical-debt-generating machine. The trend of agile outsourcing, driven by margins, often ends with the outsourcer saying "refactoring looks like re-work, and re-work is hard to bill for, so we won't do it".
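The "interest" metaphor can be made concrete with a toy compounding model (all rates invented for illustration): every release shipped without refactoring makes the next change a bit more expensive, while refactoring pays the debt back down.

```python
def cost_of_change(releases, refactor=False):
    """Toy model: each release without refactoring adds 'interest' that
    taxes every later change. All rates invented for illustration."""
    cost = 100.0                 # cost of a change in a clean codebase
    for _ in range(releases):
        cost *= 1.25             # debt interest: changes get 25% harder
        if refactor:
            cost = max(100.0, cost * 0.78)  # refactoring pays debt back down
    return round(cost)

print(cost_of_change(10))                 # debt left to compound
print(cost_of_change(10, refactor=True))  # debt paid down each release
```

The shape, not the numbers, is the point: unpaid debt compounds, so the cost of change grows exponentially with each release, while steady refactoring keeps it flat.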

If you think this is analyst FUD, or me being negative on agile, consider this: in 2011 I had over 400 calls, over 20 workshops and 50-plus face-to-face meetings at conferences, all related to agile, and not one started with "Dave, I am concerned about my technical debt". Not a single one.  If pushed to give a figure, I would say fewer than 30% of organizations using agile are really refactoring to the levels they should.

And what happens when your organization's technical debt bomb goes off? Well, first, it does not go off with a bang; it's more of a slow burn.  Change starts to take longer, you cannot react to the needs of the business, mobile and cloud initiatives start to run into trouble, and opex costs start to spiral. It will not be a single cataclysmic event; it will be death by a thousand cuts.

What to do? Start by acknowledging that the ticking sound is not a server hard drive on the blink but a much larger problem. Don't wait for James Bond to abseil into your data centre and disarm your technical debt bomb; you're going to have to do it yourself (abseiling optional).  You need to get a handle on the size of your technical debt and take steps to make sure you're not adding to it more than you have to.  And then you can start actively removing the debt and disarming the bomb.

Good luck, Mr Bond. Tick, tick, tick…


Category: Agile Application Development, IT Governance, SDLC

Countering Cyber-Warfare With Conventional Force Is Not News, It’s Doctrine

by David Norton  |  June 10, 2011  |  Comments Off

For this blog I am going to be wearing my defence hat, or should I say cap. I spent the best part of 15 years in defence, working as a systems specialist, including on urgent operational needs during the first Gulf War. So it's not surprising that I help look after defence at Gartner. And that brings me to the topic of this blog.

The last two weeks have seen both the US and the UK make public announcements on the use of sanctions and conventional force as a response to cyber-attacks. The Pentagon and UK MoD proposals to formalize cyber-warfare policy and extend the conventional battlespace to include cyberspace are needed to counter the growing threat of cyber-warfare to both nations. The option of a conventional defensive response, or even the offensive pre-emptive use of conventional force to neutralise a foreign power's or irregular force's cyber-warfare capability, is a natural extension of military doctrine and strategy.

If the enemy knows you will limit your response to the same means they deployed against you, they can use "salami" tactics: using superior cyber-warfare capability to knock out your infrastructure "slice by slice" without triggering an escalation. Ultimately it does not matter whether I destroy your infrastructure by cyber means or by strategic interdiction (aerial bombing of railheads, power nodes, command-and-control lines); the net effect is that I have reduced your ability to operate, both militarily and as a nation.

We need only look back to the Cold War to see this is not a new problem. NATO made it clear in the 50s that tactical nuclear weapons (TNW) like the tiny US M-29 Davy Crockett were a very real option when facing down Warsaw Pact armour formations in Europe.  Now, no one is suggesting that a cyber-attack would be repaid with a TNW on your data centre, but what we can learn from NATO policy on TNW is that the threat of escalation helped keep the peace.  It sent a clear message to Moscow that slicing the "salami" with superior armour could "turn hot" (TNW was nicknamed pizza delivery: "served hot and fast").

Modern warfare is based on manoeuvrist and network-centric warfare (NCW) doctrine: using strength against weakness, combining violent and non-violent means, and disrupting the enemy's command and control (C2) and decision-making capability.  It means making an enemy, or potential enemy, doubt their strategy by making them doubt what your response might be. And that means keeping conventional forces as an option for countering cyber-warfare, even to the point of the offensive use of conventional forces against a cyber-warfare capability.

Any potential aggressor must feel the threat of conventional force is credible; if they doubt your resolve, they will dismiss the threat as sabre-rattling.  Part of a credible response is target identification, and that is a problem, with many DDoS cyber-attacks being carried out behind a wall of plausible deniability: you may suspect it was me, but can you prove it?

But it would be a mistake to think that if they cannot positively identify you as the instigator of the attack you are safe; this is cyber-warfare, not cybercrime. Gathering evidence of a standard that would secure a prosecution in a cybercrime case takes time, time you may not have in a cyber-war. If the cyber-attack is a prelude to war, or part of a combined cyber and conventional terrorist operation, or is paralysing vital infrastructure, would you wait? Cyber-warfare exists within the "fog of war", where it is understood that action will be taken on the basis of probability, assumptions, the risk of inaction and the rules of war. And this is an open question, fundamental to the issue at hand: what are the rules of war for the new reality of combined battlespace and cyberspace?

Following on from the US and UK comments, NATO must consider how cyber-warfare will affect Article 5: "if a NATO Ally is the victim of an armed attack, each and every other member of the Alliance will consider this act of violence as an armed attack against all members and will take the actions it deems necessary to assist the Ally attacked". An attack on one is an attack on all. Would NATO stand by if a member state was knocked out by a major state-sponsored cyber-attack in which no armed force was used?

Cyber-warfare is another piece in the game of international brinkmanship that takes place in between hot conflicts: Cold War 2.0.  Nations will use cyber-warfare just below the level they think will elicit a conventional response, but as with all games of brinkmanship there will be mistakes and miscalculations. The Cuban missile crisis, the Falklands War, the Gulf War and Korea are all examples of one side over-estimating how far it could push its opponent and under-estimating the opponent's response.


Category: Cyber-warfare

Kicking off the BPA MQ

by David Norton  |  January 28, 2011  |  Comments Off

Today I kicked off the Business Process Analysis (BPA) Magic Quadrant. Writing an MQ is always a demanding task: coordinating with vendors, taking up customer references and making sure the MQ process is followed. But the hardest part is not producing the MQ, it's making sure it is relevant and helpful to our clients.

And that brings me back to the point of this blog (it's not just to say "hey, I am doing an MQ"). BPA is a mature market; it's so far right on the Hype Cycle it's almost in the margin. So how relevant is it? Am I just reporting on a bunch of grey-suited vendors gathering dust on an MQ past its sell-by date? Well, in my honest opinion, no (you knew I was going to say that), but I did a lot of soul-searching to come to that conclusion.

If I said BPA was exciting you would tell me to get out more. But I have seen a shift in the BPA market that leads me to believe we are witnessing its next evolutionary step. I am seeing more clients using BPA in a far more dynamic fashion; it has gone from a small set of BPA specialists to a tool that is being used operationally, day-to-day. Yes, lots of users are focused on basic process modelling or business analysis, but more and more organizations are finding value in BPA as a strategic decision-support tool.

Simulation is finally starting to be used the way it was meant to be: by the business, for the business. The "if" in "what-if" analysis no longer cynically means "if" you trust the model and "if" you trust the data. We can validate the models and data before committing to a course of action based on them. And finally, BPA is opening up to the masses for process discovery, consensus building and operational use.

The lines between BPA, EA and BPM tools are blurring, and added to the mix is the ever-increasing need for BI. All these technologies are coalescing into something that is more than the sum of its parts: a tool that will help the business navigate the dynamic and complex world we live in.

That’s why the BPA MQ is exciting.


Category: BPA, BPM

Will 2011 See Our Love Affair With Scrum End?

by David Norton  |  January 16, 2011  |  2 Comments

If you talk about agile to a developer you often hear the reply "oh, you mean Scrum" – the two have become synonymous.  By any measure Scrum is the most well known of the agile methods (whether Scrum is a method is for another blog); by search results, blogs, books or surveys, it's top of the agile hit parade.  There are a lot of reasons for this: first, it works when done right and within the right type of organization, but success is only part of the story.  Other methods like FDD and Crystal have had their successes too but do not have the same status as Scrum.  A strong community has helped to popularize Scrum, but DSDM has a strong community too and it does not come close to Scrum's fame.

So what else is there? Well, it comes down to a mix of marketing, the push for certification, consultancy self-interest and, let's be honest, it's cool.  Scrum is the Apple of the methods world; what I mean by that is yes, it does the job, but often coolness and peer status play a role in adoption. (Just in case you're thinking I am knocking Apple, I just switched from PC to MacBook Pro and am very happy, even with the dent in the lid after I dropped a radio on it.)

The big question is: does it matter how Scrum has been popularized, as long as it is moving agile into the business and bringing success?   I think yes.

If a single agile method gets a monopoly it has the potential to stifle innovation. But the chance of any one method getting a monopoly is unlikely; there are just too many variables to ever have a single silver-bullet approach, agile or otherwise.

My bigger concern is that I am starting to see a lack of due diligence when selecting an agile approach.  Scrum is being adopted within many organizations by osmosis.  That's fine if the organization's culture is a strong fit for Scrum, but if it is not we reach a crisis point, a point where Scrum fails.  Now this is true for any approach if misapplied, but the hype around Scrum often means its adoption is not questioned until there is a problem.

"I don't know why we are having problems, we are using the best approach!!" I hear clients cry.  And when I ask what evaluation process they used before adopting Scrum, 90% of the time I hear "none".  They adopted Scrum because it was already used ad hoc within the organization, or they had some ScrumMaster-certified developers, or Fred Bloggs Consulting recommended it.

As agile becomes more and more popular and is applied to bigger projects, outsourcing, package development and legacy, the "Scrum effect" will start to be a real problem. In 2010 I saw a number of failures of Scrum in SOA projects and two SAP implementations.  None of these failed projects had done a good job of method evaluation; they were driven by a real business need to deliver fast and in their haste jumped to Scrum. These failed projects would have been better suited to DSDM, FDD or Agile Modelling given the organization types, architecture and technology involved.

So will 2011 see a move away from Scrum? No. But it will see more organizations run into problems with Scrum, which will either result in a hybrid approach or, worst case, dropping agile altogether.  It would be too easy to say a Scrum failure is a result of the organization not implementing it correctly and therefore not a failure of Scrum at all – the classic "It was not Scrum that was the problem, it was the company".  And yes, many Scrum failures and issues will be down to the organization not really embracing Scrum practices, but just as many will be down to selecting the wrong method.

So before you go down the Scrum road, ask yourself: are we adopting it for the right reason, and will it work for us? And if the answer is yes (and in most cases it will be) you have at least asked the question and looked at the options.
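Even a back-of-the-envelope evaluation beats "none". One way to sketch it is a weighted fit score per candidate method — the criteria, weights and scores below are entirely hypothetical, not a Gartner model; the point is only that you write the criteria down before choosing.

```python
# Hypothetical fit criteria -- weight = how much each matters to *your* organization.
CRITERIA = {
    "team_colocated": 3,
    "requirements_volatile": 3,
    "strong_product_owner": 2,
    "regulatory_oversight_low": 2,
    "architecture_greenfield": 1,
}

def method_fit(scores):
    """Weighted fit (0-1) for one candidate method.
    `scores` maps each criterion to 0-5: how well your situation suits the method."""
    total = sum(weight * scores[c] for c, weight in CRITERIA.items())
    return total / (5 * sum(CRITERIA.values()))

# Illustrative scores only -- a regulated, distributed organization:
scrum = method_fit({"team_colocated": 5, "requirements_volatile": 5,
                    "strong_product_owner": 2, "regulatory_oversight_low": 1,
                    "architecture_greenfield": 3})
dsdm  = method_fit({"team_colocated": 3, "requirements_volatile": 4,
                    "strong_product_owner": 4, "regulatory_oversight_low": 4,
                    "architecture_greenfield": 3})
```

Whether the winner is Scrum, DSDM or a hybrid matters less than having a record of the question being asked.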

2 Comments »

Category: Agile Application Development IT Governance SDLC     Tags:

Enterprise Agile in 2010

by David Norton  |  January 20, 2010  |  2 Comments

Well, three weeks into 2010 and it's already clear we're going to have a busy year regarding agile. If last year saw the tipping point for agile, this year will see the blood on the boardroom carpet. When clients told me of their plans to use Scrum on a $5 million project with 400 developers in three countries I found myself excited and a tad scared – a bit like sitting in a roller coaster for the first time.

As agile becomes a strategic tool at the enterprise level we are going to see some great successes, often in surprising areas – agile development for defence systems, for example. But we are also going to see some spectacular cock-ups. Yes, you heard right – agile can fail.

Don't get me wrong, I don't want to be negative about agile; after all, I spend most of my time evangelising it. But we have to be realistic: no method is perfect, and being the fallible human beings that we are, we will misapply the principles, use it on the wrong project and run before we can walk. So there are risks. What's new? We take a risk crossing the road.

Enterprise Agile (Agile 2.0, sorry I could not resist) needs to raise its game to face the challenges of greater funding oversight, large and complex architectures, legacy and package implementations, and the ever present integration problem.

The work already undertaken by the agile community around PMP, PRINCE2, SOX and CMMI needs to be consolidated into a consistent set of practices that support agile as a strategic differentiator. It's not the engineering practices that will trip us up; continuous integration, test-first, refactoring – these things are understood. It's governance that's going to be the problem.

2 Comments »

Category: Agile Application Development SDLC     Tags:

Want to see the perfect component? – just open your eyes.

by David Norton  |  September 25, 2009  |  1 Comment

Monday night saw me settle down in front of the TV to watch "What Darwin Didn't Know" on the BBC, a documentary described as "the story of evolution theory since Darwin postulated it in 1859 in 'On the Origin of Species'." Towards the end of the programme, and just before I nodded off (having two boys under 4, staying up past 22:00 is quite an achievement), the presenter introduced the tree of life and the DNA evidence for it. OK, nothing new there, but then he brought up the subject of shared genes. Yes, he said the usual stuff about humans and chimpanzees sharing 98% of their DNA. But he also went on to give a very specific example – a gene called Pax-6.

Pax-6 is a control gene; it triggers eye development in human embryos. It also triggers eye development in chimpanzees, apes, mice, rats, cats, bats, and fruit flies. The Pax-6 gene triggers eye development in basically every creature that has eyes. So even though the eyes of a spider and a dog look very different, they have the same starting point.

But there is something more important about Pax-6 – it's old, very, very old. Over 500 million years ago, sometime in the Cambrian period, the first proto-eye developed. Those early eyes just detected the presence or absence of light, a simple but major evolutionary milestone. And Pax-6 was there, the trigger for that first proto-eye.

Pax-6 is so good at what it does that it has been passed down and across species unchanged. Its position within a species' chromosomes changes, but it's still Pax-6. Its replication across species is so good that you can take the Pax-6 of a mouse and transplant it into a fruit fly embryo, which will then go on to develop eyes as normal. Genes for arms and fins have come and gone (and in some cases come back again) but Pax-6 remains.

So at this point you probably think: thanks for the lesson on gene evolution, but what has this to do with software components? The answer: Pax-6 is an example of a fundamental building block in nature, a tiny component. It's simple but so perfect that it's on every creature's top 10 list of genes I'd like to have – ultimate reuse.

Could you imagine the equivalent of cross-species reuse in IT? Taking a core component of SAP or Oracle and dropping it into every packaged application, regardless of version, on the planet and expecting the component to still work!!! We still have trouble doing that within the same package, even when we design for it.

The most impressive thing about Pax-6 is that it was not designed to be this uber-gene with the power to cross species. The first creature to have the gene did not evolve it as some sort of altruistic gesture designed to be reusable by other animals. The reason why it's still doing the job today is because it's simply the best way of getting the job done.

When developing services and components we talk about designing for reuse. Maybe we should take a lesson from Pax-6 and focus less on designing for reuse and more on designing the simplest, most stable component for the task. And let reuse come from the component being selected and pulled because it's the obvious choice, not pushed (rammed down your throat in some cases) into the solution.
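The software equivalent of Pax-6 is a component with a tiny, stable contract and no knowledge of its callers – a pure function is the simplest case. This sketch is purely illustrative, my own example rather than anything from a product:

```python
def clamp(value, low, high):
    """Tiny, stable contract: no config flags, no environment assumptions.
    It earns reuse by being the obvious choice, not by being 'designed for reuse'."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# The same component drops unchanged into very different "species" of code:
audio_gain   = clamp(1.7, 0.0, 1.0)   # signal processing -> 1.0
retry_delay  = clamp(45, 1, 30)       # network backoff   -> 30
progress_pct = clamp(104, 0, 100)     # UI widget         -> 100
```

Note what it does not have: no mode parameters, no callbacks, no awareness of audio, networking or UIs. That narrowness is exactly what lets it cross "species" untouched.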

Now I can hear you say, "But Dave, one of the principles of evolution is that mutation may do nothing or even harm the species – and nature may have many attempts at getting it right." Yes, that's true; our tiny Pax-6 component was probably not the first attempt – the control gene that placed an eye on your tail was never going to catch on. But that highlights another problem with IT. You either try to design the perfect "gene component" from the start, which is a bit like trying to create a complex multicellular organism from nothing, or you start with basic building blocks and evolve, adapt, add, and take away over time. The former carries the risk of outright failure; the latter the risk of individual "component mutations" failing, but a higher probability of overall success. The above is a long-winded way of saying incremental change is better than big bang.

We could take this analogy further and say gene evolution is an example of an open system with feedback – adaptive to changes in the environment. Individual sub-systems and components may be unaware of their outer environment, but the system they are in is aware. In IT we often ignore that feedback or react to it too late; we need to make a conscious effort to seek out and act on feedback. This is also a guiding principle of agile and adaptive systems development.

So what can half a billion years of evolution tell us? Focus on doing the job in hand; reuse needs low-level simplicity even when dealing with complex systems; and complex systems do not just come into being, they are built up over time based on continuous feedback.

1 Comment »

Category: Agile Application Development SDLC System of Systems     Tags: