by David Norton | September 7, 2012 | Comments Off
Agile development is not immune to commercial pressures and the adverse effects they can have on individuals and teams. We are already seeing issues with “under-promise, over-deliver” in companies new to agile and in organizations with more conservative command-and-control cultures. What is more, it is an increasing problem with agile sourcing.
The issues we currently see arise when teams are under excessive pressure to show value. This can lead to a problem of “group-think”: the team wants to show its value but is also fearful that if it over-commits and fails, this will be held against it – the blame culture. The net result is that the team over-estimates tasks to give itself more contingency than it actually needs. Normally this will self-correct as the team feels more confident with the backlog and the burndowns show there is more bandwidth, exactly as it should with a good empirical feedback system.
But on occasion it does not self-correct: the team expands the work to fill the extra time it has gained by being conservative about its abilities and over-estimating task effort. In this scenario the team can show dramatic improvement and over-delivery when really under pressure. The business comments, “those guys pulled out all the stops”, and the project is deemed a success, but in truth the team has been operating at a lower productivity level and maintaining an artificially low velocity.
However, let us be fair: most of the gaming of velocity and scope is done by managers who want to show themselves in a good light or are fearful for their position. This might sound counterproductive – logically, if they wanted to show their value, surely they would push their teams to deliver more business value? Well, let’s have one of the world’s, indeed the galaxy’s, most famous engineers explain: Montgomery “Scotty” Scott, from Star Trek.
Captain Kirk: “How long to re-fit?”
Scotty: “Eight weeks. But you don’t have eight weeks, so I’ll do it for you in two.”
Captain Kirk: “Do you always multiply your repair estimates by a factor of four?”
Scotty: “How else to maintain my reputation as a miracle worker?”
Captain Kirk: “Your reputation is safe with me.”
The sad fact is that “under-promise, over-deliver” is easier to do and, in the short term, less risky than changing team behavior and actually improving productivity. As a manager, gaming the metrics is something that is under your control: you can tell the team to add task contingency or not to commit to risky stories. When penalty and incentive clauses in contracts are involved, as they are with agile sourcing, there is real pressure to game the system.
Monitoring team efficiency alone will not reveal this issue, as efficiency is derived from velocity (efficiency = velocity / resource-days). The best approach is external or internal benchmarking against similar projects and teams, so we can see whether the velocity is lower than we could reasonably expect. Another approach is to push the velocity up to the point where the burndowns start to show failure and overtime requests start coming in, then throttle back by 10% (I want highly productive teams, not dead ones).
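A rough sketch of that benchmarking check in code may make it concrete. This is only an illustration, assuming velocity is measured in story points over a period; the team names, figures, benchmark and 15% tolerance are all invented for the example:

```python
# Efficiency as defined above: story points delivered per resource-day.
def efficiency(velocity: float, resource_days: float) -> float:
    return velocity / resource_days

# Flag teams whose efficiency sits well below a benchmark drawn from
# similar projects. A flag is a prompt for a closer look, not proof of gaming.
def flag_low_velocity(teams, benchmark, tolerance=0.15):
    return [name for name, (velocity, days) in teams.items()
            if efficiency(velocity, days) < benchmark * (1 - tolerance)]

# Hypothetical teams: (velocity in story points, resource-days in the period).
teams = {
    "alpha": (40, 100),  # 0.40 points per resource-day
    "bravo": (25, 100),  # 0.25 points per resource-day
}
print(flag_low_velocity(teams, benchmark=0.38))  # ['bravo']
```

Against a benchmark of 0.38 points per resource-day, only “bravo” falls more than 15% short, so that is the team worth a conversation – which is the point: the internal burndowns look healthy, and only the external comparison raises the question.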
Agile is about trusting people to do their best, and it sounds very un-agile to suggest agile teams may not be doing their best for the customer. The reality is that people react in different ways under pressure – “fight-or-flight-or-freeze” – and the more agile goes mainstream, the more we will see its principles manipulated or outright abused as it is pushed into organizations with the wrong type of culture for agile. Until organization culture changes, the issue of agile under-promise, over-deliver is going to be a reality.
So do it right and “boldly go where no agile team has gone before”.
Category: Agile Application Development Tags:
by David Norton | December 4, 2011 | 1 Comment
The timer flashes red 5:32, 5:31, 5:30, counting down to its final terrible conclusion. James Bond calmly leans over the device: “So is it the red or green wire? Let’s go with lucky red”, snip. The counter jumps from 5:24 to 0:30. “Ahh, not so lucky red, let’s try the green”, snip. The counter stops at 0:07. “Hmm, my lucky number,” says our hero.
And that’s the way it is in the movies; the hero disarms the bomb with 3 seconds to spare on the clock and is home in time for tea, while the world sleeps soundly in its bed.
But this is not the movies. The time bomb of technical debt is ticking louder and louder, and the clock is counting down faster and faster, so where is James Bond when you need him? Well, I will tell you where he is: he’s been outsourced, and the only contract out on James Bond these days is the one that says deliver to this date, at this price, or else! He is too busy trying to keep his head above water to disarm the technical debt bomb; in fact he cannot even hear it ticking.
But let’s be fair, it’s not just the trend of outsourcing that has generated the technical debt crisis. Technical debt started with the very first program 60 years ago: the first “I’ll fix that later”, the first “the design’s not great but it will do”, the first cry of “just get it out the door”.
So if the bomb has been ticking away for 60 years, and we have been blissfully ignoring it for just as long, why should we care now?
First, as my colleague Andy Kyte has stated, technical debt and its big brother IT debt will break the trillion-dollar mark in the next 5 years. That’s a trillion dollars of development that needs to be done to remove bad code, poor architecture and ill-thought-out technical strategy, or simply to let time catch up with good design.
Second, the pace of business and technical change, coupled with faster delivery methods like agile and citizen development, is speeding up the timer. Agile is a double-edged sword: done right, practices like refactoring can help us remove technical debt and stop it being introduced in the first place; done wrong, agile can be a technical-debt-generating machine. The trend of agile outsourcing, driven by the margins, often ends with the outsourcer saying “refactoring looks like re-work, and re-work is hard to bill for, so we won’t do it”.
If you think this is analyst FUD, or me being negative on agile, consider this. In 2011 I had over 400 calls, over 20 workshops and 50-plus face-to-face meetings at conferences, all related to agile, and not one started with “Dave, I am concerned about my technical debt” – not a single one. If pushed to give a figure, I would say fewer than 30% of organizations using agile are really refactoring to the levels they should.
And what happens when your organization’s technical debt bomb goes off? Well, first, it does not go off with a bang; it’s more a slow burn. Change starts to take longer, you cannot react to the needs of the business, mobile and cloud initiatives start to run into trouble, and opex costs start to spiral – it will not be a single cataclysmic event, it will be death by a thousand cuts.
What to do? Start by acknowledging that the ticking sound is not a server hard drive on the blink but a much larger problem. Don’t wait for James Bond to abseil into your data centre and disarm your technical debt bomb; you’re going to have to do it yourself (abseiling optional). You need to get a handle on the size of your technical debt and take steps to make sure you’re not adding to it more than you have to. And then you can start to actively remove the debt and disarm the bomb.
Good luck, Mr Bond. Tick, tick, tick…
Category: Agile Application Development IT Governance SDLC Tags:
by David Norton | June 10, 2011 | Comments Off
For this blog I am going to be wearing my defence hat, or should I say cap. I spent the best part of 15 years in defence working as a systems specialist, including on urgent operational needs during the first Gulf War. So it is not surprising that I help to look after defence at Gartner. And that brings me to the topic of this blog.
The last two weeks have seen both the US and the UK make public announcements on the use of sanctions and conventional force as a response to cyber-attacks. The Pentagon and UK MoD proposals to formalize cyber-warfare policy and extend the conventional battlespace to include cyberspace are needed to counter the growing threat cyber-warfare poses to both nations. The option of a conventional defensive response, or even offensive pre-emptive use of conventional force to neutralise a foreign power’s or irregular force’s cyber-warfare capability, is a natural extension of military doctrine and strategy.
If the enemy knows you will limit your response to the same means they deployed against you, they can use “salami” tactics. The enemy could use superior cyber-warfare capability to knock out your infrastructure, “slice by slice”, without triggering an escalation. Ultimately it does not matter whether I destroy your infrastructure by cyber-attack or by strategic interdiction (aerial bombing of railheads, power nodes, command-and-control lines); the net effect is that I have reduced your ability to operate, both militarily and as a nation.
We need only look back to the Cold War to see this is not a new problem. NATO made it clear in the ’50s that tactical nuclear weapons (TNW) like the tiny US M-29 Davy Crockett were a very real option when facing down conventional Warsaw Pact armour formations in Europe. Now, no one is suggesting that a cyber-attack would be repaid with a TNW on your data centre, but what we can learn from NATO policy on TNW is that the threat of escalation helped keep the peace. It sent a clear message to Moscow that slicing the “salami” with superior armour could “turn hot” (TNW were nicknamed pizza delivery – “served hot and fast”).
Modern warfare is based on manoeuvrist and network-centric warfare (NCW) doctrine: using strength against weakness, combining violent and non-violent means, and disrupting the enemy’s command and control (C2) and decision-making capability. It means making an enemy, or potential enemy, doubt their strategy by making them doubt what your response might be. And that means keeping conventional forces as an option for countering cyber-warfare, even to the point of offensive use of conventional forces against a cyber-warfare capability.
Any potential aggressor must feel the threat of conventional force is credible; if they doubt your resolve they will dismiss the threat as sabre-rattling. Part of a credible response is target identification, and that is a problem, with many DDoS cyber-attacks being carried out behind the wall of plausible deniability – you may suspect it was me, but can you prove it?
But it would be a mistake to think that if they cannot positively identify you as the instigator of the attack you are safe; this is cyber-warfare, not cybercrime. Waiting to gather evidence of a standard that would gain a prosecution in a cybercrime case takes time, time you may not have in a cyber-war. If the cyber-attack is a prelude to war, or part of a combined cyber and conventional terrorist operation, or is paralysing vital infrastructure, would you wait? Cyber-warfare exists within the “Fog of War”, where it is understood that action will be taken on the basis of probability, assumptions and the risk of inaction, and within the rules of war. And that is an open question, fundamental to the issue in hand: what are the rules of war for the new reality of combined battlespace and cyberspace?
Following on from the US and UK comments, NATO must consider how cyber-warfare will affect Article 5: “if a NATO Ally is the victim of an armed attack, each and every other member of the Alliance will consider this act of violence as an armed attack against all members and will take the actions it deems necessary to assist the Ally attacked.” Attack on one is attack on all. Would NATO stand by if a member state was knocked out by a major state-sponsored cyber-attack but no armed force was used?
Cyber-warfare is another piece in the game of international brinkmanship that takes place in between hot conflicts – Cold War 2.0. Nations will use cyber-warfare just below the level they think will elicit a conventional response, but as in all games of brinkmanship there will be mistakes and miscalculations. The Cuban missile crisis, the Falklands War, the Gulf War and Korea are all examples of one side overestimating how far it could push its opponent and underestimating the opponent’s response.
Category: Cyber-warfare Tags:
by David Norton | January 28, 2011 | Comments Off
Today I kicked off the Business Process Analysis (BPA) Magic Quadrant. Writing an MQ is always a demanding task: coordinating with vendors, taking up customer references and making sure the MQ process is followed. But the hardest part is not producing the MQ, it’s making sure it’s relevant and helpful to our clients.
And that brings me back to the point of this blog (it’s not just to say “hey, I am doing an MQ”). BPA is a mature market; it’s so far right on the hype cycle it’s almost in the margin. So how relevant is it? Am I just reporting on a bunch of grey-suited vendors gathering dust on an MQ past its sell-by date? Well, in my honest opinion, no (you knew I was gonna say that), but I did a lot of soul-searching to come to that conclusion.
If I said BPA was exciting you would tell me to get out more. But I have seen a shift in the BPA market that leads me to believe we are witnessing its next evolutionary step. I am seeing more clients using BPA in a far more dynamic fashion; it’s gone from a tool for a small set of BPA specialists to one being used operationally day-to-day. Yes, lots of users are focused on basic process modelling or business analysis, but more and more organizations are finding value in BPA as a strategic decision-support tool.
Simulation is finally starting to be used the way it was meant to be: by the business, for the business. The “if” in “what-if” analysis no longer cynically means “if” you trust the model and “if” you trust the data. We can validate the models and data before committing to a course of action based on them. And finally BPA is opening up to the masses for process discovery, consensus building and operational use.
The lines between BPA, EA and BPM tools are blurring, and added to the mix is the ever-increasing need for BI. All these technologies are coalescing into something that is more than the sum of its parts: a tool that will help the business navigate the dynamic and complex world we live in.
That’s why the BPA MQ is exciting.
Category: BPA BPM Tags:
by David Norton | January 16, 2011 | 2 Comments
If you talk about agile to a developer you will often hear the reply “oh, you mean Scrum” – the two have become synonymous. By any measure Scrum is the best known of the agile methods (whether Scrum is a method at all is a topic for another blog): by search results, blogs, books or surveys, it’s top of the agile hit parade. There are a lot of reasons for this. First, it works when done right and within the right type of organization, but success is only part of the story. Other methods like FDD and Crystal have had their successes too but do not have the same status as Scrum. A strong community has helped to popularize Scrum, but DSDM has a strong community as well, and it does not come close to Scrum’s fame.
So what else is there? Well, it comes down to a mix of marketing, the push for certification, consultancy self-interest and, let’s be honest, the fact that it’s cool. Scrum is the Apple of the methods world; what I mean by that is that yes, it does the job, but often coolness and peer status play a role in adoption. (Just in case you’re thinking I am knocking Apple, I just switched from PC to MacBook Pro and am very happy, even with the dent in the lid after I dropped a radio on it.)
The big question is does it matter how Scrum has been popularized as long as it is moving agile into the business and bringing success? I think yes.
If a single agile method gets a monopoly it has the potential to stifle innovation. But the chance of any one method getting a monopoly is slim; there are just too many variables for there ever to be a single silver-bullet approach, agile or otherwise.
My bigger concern is that I am starting to see a lack of due diligence when selecting an agile approach. Scrum is being adopted within many organizations by osmosis. That’s fine if the organization culture is a strong fit for Scrum, but if it is not we reach a crisis point, a point where Scrum fails. Now, this is true for any approach if misapplied, but the hype around Scrum often means its adoption is not questioned until there is a problem.
“I don’t know why we are having problems, we are using the best approach!” I hear clients cry. And when I ask what evaluation process they used before adopting Scrum, 90% of the time I hear “none”. They adopted Scrum because it was already used ad hoc within the organization, or they had some ScrumMaster-certified developers, or Fred Bloggs Consulting recommended it.
As agile becomes more and more popular and is applied to bigger projects, outsourcing, package development and legacy, the “Scrum effect” will start to be a real problem. In 2010 I saw a number of failures of Scrum in SOA projects and two SAP implementations. None of these failed projects had done a good job of method evaluation; they were driven by a real business need to deliver fast and in their haste jumped to Scrum. These failed projects would have been better suited to DSDM, FDD or Agile Modelling, given the organization types, architecture and technology involved.
So will 2011 see a move away from Scrum? No. But it will see more organizations run into problems with Scrum, resulting either in a hybrid approach or, worst case, in dropping agile. It would be too easy to say a Scrum failure is the result of the organization not implementing it correctly and therefore not a failure of Scrum at all – the classic “it was not Scrum that was the problem, it was the company”. And yes, many Scrum failures and issues will be down to the organization not really embracing Scrum practices, but just as many will be down to selecting the wrong method.
So before you go down the Scrum road, ask yourself: are we adopting it for the right reasons, and will it work for us? And if the answer is yes (and in most cases it will be), you have at least asked the question and looked at the options.
Category: Agile Application Development IT Governance SDLC Tags:
by David Norton | January 20, 2010 | 2 Comments
Well, 3 weeks into 2010 and it is already clear we are going to have a busy year as regards agile. If last year saw the tipping point for agile, this year will see the blood on the boardroom carpet. When clients told me of their plans to use Scrum on a $5 million project with 400 developers in three countries, I found myself excited and a tad scared – a bit like sitting in a roller coaster for the first time.
As agile becomes a strategic tool at the enterprise level we are going to see some great successes, often in surprising areas – agile development for defence systems, for example. But we are also going to see some spectacular cock-ups. Yes, you heard right – agile can fail.
Don’t get me wrong, I don’t want to be negative about agile; after all, I spend most of my time evangelising it. But we have to be realistic: no method is perfect, and being the fallible human beings that we are, we will misapply the principles, use it on the wrong project and run before we can walk. So there are risks. What’s new? We take a risk crossing the road.
Enterprise Agile (Agile 2.0 – sorry, I could not resist) needs to raise its game to face the challenges of greater funding oversight, large and complex architectures, legacy and package implementations, and the ever-present integration problem.
The work already undertaken by the agile community around PMP, PRINCE2, SOX and CMMI needs to be consolidated into a consistent set of practices that support agile as a strategic differentiator. It’s not the engineering practices that will trip us up – continuous integration, test-first, refactoring – these things are understood. It’s governance that’s going to be the problem.
Category: Agile Application Development SDLC Tags:
by David Norton | September 25, 2009 | 1 Comment
Monday night saw me settle down in front of the TV to watch “What Darwin Didn’t Know” on the BBC, a documentary described as “the story of evolution theory since Darwin postulated it in 1859 in ‘On the Origin of Species’.” Towards the end of the program, and just before I nodded off (with two boys under 4, staying up past 22:00 is quite an achievement), the presenter introduced the tree of life and the DNA evidence for it. OK, nothing new there, but then he brought up the subject of shared genes. Yes, he covered the usual stuff on humans and chimpanzees sharing 98% of their DNA. But he also went on to give a very specific example – a gene called Pax-6.
Pax-6 is a control gene; it triggers eye development in human embryos. It also triggers eye development in chimpanzees, apes, mice, rats, cats, bats and fruit flies. The Pax-6 gene triggers eye development in basically every creature that has eyes. So even though the eyes of a spider and a dog look very different, they have the same starting point.
But there is something more important about Pax-6 – it’s old, very, very old. Over 500 million years ago, sometime in the Cambrian period, the first proto-eye developed. Those early eyes just detected the presence or absence of light, a simple but major evolutionary milestone. And Pax-6 was there, the trigger for that first proto-eye.
Pax-6 is so good at what it does that it has been passed down and across species unchanged. Its position within a species’ chromosomes changes, but it’s still Pax-6. Its replication across species is so faithful that you can take the Pax-6 of a mouse and transplant it into a fruit fly embryo, which will then go on to develop eyes as normal. Genes for arms and fins have come and gone (and in some cases come back again), but Pax-6 remains.
So at this point you are probably thinking, thanks for the lesson on gene evolution, but what has this to do with software components? The answer: Pax-6 is an example of a fundamental building block in nature, a tiny component. It’s simple, but so perfect that it’s on every creature’s top-10 list of genes to have – ultimate reuse.
Could you imagine the equivalent of cross-species reuse in IT? Taking a core component of SAP or Oracle and dropping it into every packaged application, regardless of version, on the planet, and expecting the component to still work! We still have trouble doing that within the same package, even when we design for it.
The most impressive thing about Pax-6 is that it was not designed to be this uber-gene with the power to cross species. The first creature to have the gene did not evolve it as some sort of altruistic gesture designed for reuse by other animals. The reason it’s still doing the job today is that it’s simply the best way of getting the job done.
When developing services and components we talk about designing for reuse. Maybe we should take a lesson from Pax-6, and focus less on designing for reuse and focus more on designing the simplest, most stable component for the task. And let reuse come from the component being selected and pulled because it’s the obvious choice, not pushed (rammed down your throat in some cases) into the solution.
Now I can hear you say, “But Dave, one of the principles of evolution is that mutation may do nothing or even harm the species – and nature may have many attempts at getting it right.” Yes, that’s true; our tiny Pax-6 component was probably not the first attempt – the control gene that placed an eye on your tail was never going to catch on. But that highlights another problem with IT. You either try to design the perfect “gene component” from the start, which is a bit like trying to create a complex multicellular organism from nothing, or you start with basic building blocks and evolve, adapt, add and take away over time. The former risks outright failure; the latter risks individual “component mutations” failing but has a higher probability of overall success. The above is a long-winded way of saying incremental change is better than big bang.
We could take this analogy further: gene evolution is an example of an open system with feedback, adaptive to changes in the environment. Individual sub-systems and components may be unaware of their outer environment, but the system they belong to is not. In IT we often ignore that feedback, or react to it too late; we need to make a conscious effort to seek out and act on feedback. This is also a guiding principle of agile and adaptive systems development.
So what can a billion years of evolution tell us? Focus on doing the job in hand; reuse needs low-level simplicity even when dealing with complex systems; and complex systems do not just come into being, they are built up over time based on continuous feedback.
Category: Agile Application Development SDLC System of Systems Tags:
by David Norton | September 16, 2009 | Comments Off
Last week found me at the Defence Systems & Equipment International exhibition (DSEi) in London. DSEi bills itself as the world’s largest defence trade show, with over 1,000 exhibitors and 25,000 delegates over 4 days. Attending the show is an event in itself: you need to be security-checked weeks beforehand, and it took nearly two hours to get in. The event organizers were paranoid that anti-arms-trade protestors would get in and start trouble. “Gentlemen, you can’t fight in here! This is the War Room.” comes to mind.
Arms shows are a bit of an anticlimax. They are not full of Yuri Orlov types (Nicolas Cage, Lord of War) offering to sell a dodgy batch of AK-47s or a secondhand T-72 tank, one careful owner. Instead, a large part of what’s on show relates to the more mundane side of defence: pots and pans (in green), connectors, cables, torches and so on. But there is also the business end of defence – APCs, small arms, rockets, special ops kit and helicopters. Situational awareness was a big theme, from miniature UAVs like the Maviric, with 45 minutes’ endurance, to the large Watchkeeper and Euro Hawk, with endurance in the tens of hours.
This year DSEi had a softer edge to it. There was greater focus on civilian usage scenarios – homeland security, civil defence, policing and commercial security. For example, using miniature UAVs to look for lost people or for crowd control – a lot cheaper than a helicopter.
So after looking at rocket launchers and dry-firing a couple of Heckler & Koch and FN sub-machine guns, I got down to why I was there: looking at system of systems (SoS).
For the last 15 years the military has had a strong focus on Network Centric Warfare (NCW). NCW is based on the idea that modern warfare, conventional and irregular, is a set of interacting systems sharing information and collaborating towards a common goal (a simplification, but it will do). For example, a British soldier can send target information to a US helicopter, which in turn relays it to a frigate that fires a GPS-guided missile to destroy the target. The soldier, helicopter and ship are all autonomous systems working together as a system of systems to deliver a desired end effect spanning land, sea and air. DSEi affords a rare opportunity to see these SoS all in one place.
SoS really takes traditional systems theory to the next level and has important implications outside defence. One very exciting area is telemedicine (see Gartner Hype Cycle for Telemedicine, 2009), from monitoring your blood pressure over a mobile network to carrying out complex surgery remotely. External devices see the individual as a system to be monitored, with functions like a circulatory system – which is convenient, as medicine already thinks in terms of systems. Information is transferred, either manually or automatically, to your mobile phone, which then sends it to a healthcare provider system that analyses the data for potential health problems. You, the monitoring device, the mobile phone and the network are all separate systems; only together, as a system of systems, do we have telemedicine.
All of these examples – defence, medical and social networking – are based on systems collaborating, either physically or virtually, towards common and individual goals. This has serious implications for architecture, design and the whole process of development. How do I understand my customer’s world? How do I architect my capabilities in a way that lets them be part of a larger system of systems? How can I use and extend existing systems in new and innovative ways?
The future belongs to those who can see and understand the big picture but also understand the wants and drives of the individual systems – you and me.
Category: Application Development SDLC System of Systems Tags:
by David Norton | August 26, 2009 | 4 Comments
“Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” When Winston Churchill spoke those words in 1942 he was talking about a turning point in WW2; I am talking about a turning point in agile development.
In my last blog I mentioned that agile has reached its tipping point. My internal indicators – inquiry rate, requests for agile workshops, agile vendor briefings and adoption metrics – have all shot off the chart. And the external indicators – books, blogs and, my personal favorite, pub conversation – all indicate agile is now mainstream.
But that’s only half the story. It’s not simply a matter of greater XP or Scrum adoption. IT organizations are applying Lean and agile practices to their whole SDLC, including architecture, PMO, maintenance and operations. It’s no longer small collocated teams but large distributed projects, mission-critical solutions and even non-IT work. For example, this week at the Agile 2009 conference I have met people using Scrum in sales and marketing, in legal departments and to support venture capital funding decisions.
My attendance at Agile 2009 has confirmed my view that we have reached a major milestone. Dr Alistair Cockburn’s keynote on Tuesday, entitled “I Come to Bury Agile, Not to Praise It”, was both dramatic and thought-provoking. The drama came in the form of a lone bagpiper playing “Amazing Grace” at the start of the presentation, followed by Alistair reciting a modified version of “Friends, Romans, countrymen, lend me your ears” from Shakespeare’s Julius Caesar.
“I come to bury Agile, not to praise it;
The evil that methods do lives after them,
The good is oft interred with their bones,
So let it be with Agile.”
Alistair went on to say that software engineering in the 21st century will draw on craft, cooperative games, lean principles and knowledge acquisition. So whilst not burying agile, he emphasized that what we call agile today is very different from what we called agile 10 years ago. I recommend having a look at Alistair’s full presentation and the related article.
I Come to Bury Agile, Not to Praise It
From Agile Development to the New Software Engineering
Category: Agile Application Development SDLC Tags: #agile2009
by David Norton | August 24, 2009 | Comments Off
This week I am in Chicago attending the Agile 2009 conference. Now in its 8th year, this is the major event in the agile calendar. The event kicked off today boasting over 1,400 delegates, a measure of the interest in agile in these austere days of travel bans and belt-tightening. With over 300 sessions it looks to be a busy week; it’s times like these I wish I had a clone.
Even though the conference has grown year on year it still has its agile vibe. The coffee breaks were a mix of ice-breaker conversations, enthusiastic exchanges of ideas and “call me, I’d love to help”.
With 2009 being the tipping point for agile adoption (in no small part due to the economic meltdown), it’s no surprise Agile 2009 is a more mature affair: fresh-faced developers rubbing shoulders with gray-haired senior management. And there is no shortage of Fortune 100/500 delegates walking around – GE and Nokia are a couple I spotted, with a large contingent from finance.
One thing I am really pleased to see is a great emphasis on agile with a capital “A” – it’s not just XP and Scrum. There are sessions on project management, the product owner role, agile and CMMI, and even agile with systems engineering. All signs agile has gone mainstream.
Finally, one of the hardest things to do in the world is to grow up with your sense of wonder and inquisitiveness intact. It’s easy to accept the status quo and stop asking why. Why do we have to do it this way? Why can’t we do it differently? It’s good to see agile growing up but still asking why and challenging the established view of IT.
I am looking forward to a very busy and interesting week.
Category: Agile Application Development Tags: #agile2009