Posts Tagged ‘Agile’

Throughout industry, whether business-led or IT-led, the mantra still seems to be that if you are not ‘agile’ then you can’t survive, or rather succeed, in today’s marketplace. But what does ‘agile’ mean? In my role, I speak to numerous companies, whether in a business development capacity, networking at industry events or after presenting at conferences. The views I hear are polar opposites, sometimes disingenuous or contradictory to the manifesto guidelines. Some of the questions and statements that I hear on a regular basis include:

  • “Do you develop products or technology in an agile way?”
  • “Does your company adopt business agility models?”
  • “Are you a Scrum fan or Kanban?”
  • “Actually we operate lean principles”
  • “We’re not quite there yet, so we are more scrumfall than anything”
  • “It sounds good, but it’s just so difficult to be heard”
  • “We have a daily stand up every day”
  • “The business/IT Management just don’t get it”
  • “The pressure of multiple releases is just intense”
  • And my personal favourite “We’re so agile, we go beyond agile”

The objective of agile is no different from that of more traditional approaches, i.e. a working product (Quality), delivered on or under budget (Cost), in an acceptable timeframe (Time), to gain advantage over the competition by increasing the company’s reputation and trust as the ideal solution provider.

There is a lot of talk in blogs about how ‘agile’ must adapt to keep up with the ever-changing demands of the marketplace. I’m not sure that I completely agree with that: from the examples above, there is already confusion in the marketplace, so change for the sake of change may make the gulf in ‘maturity’ even wider. A good example of change is the kudos that DevOps brings. If you ask 10 different people to describe agile, you’ll get many different views. If you ask the same people to describe DevOps, you’ll likely receive more than 10 views, as was proven in my office when we discussed what our testing offering could bring to a DevOps environment. Everyone had a view about what it was, though the biggest consensus gained was on how to spell it!

For me, that is the bottom line: if your project, company or industry wants to operate in an agile way, or adopt a certain flavour of agile, you need to lay the foundations – in effect, define your individual manifesto and publicise it.

“Working software over comprehensive documentation” could translate to “working software with just enough documentation to meet our regulatory demands”.

I could go on, but I think the message here is clear: to be agile, you need to collaborate. To collaborate, you need to know what you are working towards. Define a top-to-bottom understanding within your organisation or project team to enable change, transform your approach and deliver success – and if you want to badge this as ‘agile’, that’s fine too.

Working for a global consultancy like Sogeti UK, where the majority of our business is in the testing space, when we are asked to assist with introducing agile testing to engagements we look at the bigger picture. Understanding the landscape of the project and the company determines the type of consultants that we deploy to the engagement. If a client still needs to set their own manifesto, there isn’t much reason to deploy highly technical testers, as reputation could suffer if collaboration – or rather the mechanics of the project – is not smooth and delays occur or release dates are missed because the end game wasn’t known. Instead, we’d look to utilise more rounded consultants that can embed into teams and coach and steer the team or company towards defining “what looks good for us?”, setting the manifesto and expectations for all to understand, commit to and collaborate on.

At Sogeti, we look to help companies achieve their own manifesto through the use of workshops, involving senior stakeholders through to junior testers, with the aim of aligning objectives, and by embedding consultants to assist companies with their transformation journey, defining an individual working ‘agile’ approach, irrespective of the ‘flavour’ applied. At every step of the way, we look to support those needs as maturity – i.e. success – is gained, moving towards the seeming utopia of DevOps (I think that was the agreed spelling!).

If you’d like to find out more about the services Sogeti UK can offer, visit our website http://uk.sogeti.com/ or drop me a line.

Wherever you go, whatever blog post you read, whatever company you do business with, there is likely to be a general journey theme when it comes to implementing agile, especially the transition from positivity through to surrender: “We are doing agile”; “my velocity is bigger than yours”; “I can’t believe our release is delayed because our downstream system isn’t agile”; “this agile thing sucks, it just doesn’t work, and now they want to use offshore”; “great, so now we death march every second week”; “my technical debt list is bigger than my backlog”.

OK, so I added a bit of poetic licence to the last one, but I think the point holds: at the outset of the bright new world, positive vibes and optimism are plentiful; as problems arise, if they are not dealt with, they escalate, and confidence turns to confusion, which leads to defeatism. This attitude isn’t an agile problem, it’s a standard project problem: if ‘risks and issues’ arise on a project they are managed or mitigated; in agile, the same holds true. For the purpose of this post, I’ll concentrate mainly on Scrum approaches.

Let’s look at the statements outlined:

“We are doing agile” – are you sure? Asking the same three questions each morning doesn’t mean you are “doing” Scrum (agile). How are you finding the journey? Do all of your team members and sponsors understand the investment required, in time and people, to realise the benefits of applying agile principles? Do they understand the role that each needs to play, especially during sprint planning and retrospectives?

“my velocity is bigger than yours” – great! So how are you using a backward-looking measure to assist your future release train? How are you applying what you have learnt to road-mapping the next set of features in the backlog?
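As a rough illustration of putting that backward-looking measure to forward-looking use, here is a minimal sketch in Python – all figures and names are hypothetical, not taken from any real project – that averages recent sprint velocities to estimate how many sprints the remaining backlog might take:

```python
import math

# Hypothetical figures: story points completed in the last four sprints
recent_velocities = [21, 18, 24, 20]
remaining_backlog = 130  # story points still on the roadmap

average_velocity = sum(recent_velocities) / len(recent_velocities)

# Simple forecast: assume future sprints resemble the recent ones
sprints_needed = math.ceil(remaining_backlog / average_velocity)

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Estimated sprints to clear the backlog: {sprints_needed}")
```

The point is not the arithmetic, which any tool will do for you, but that velocity only earns its keep once it feeds the release train and the roadmap.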

“I can’t believe our release is delayed because our downstream system isn’t agile” – is it really the downstream system’s fault? Turn the situation round and ask, “as a scrum team, why didn’t we understand our dependencies? Why didn’t we have an acceptance test that covered this integration?” As a self-organising team, each story that goes into a sprint needs to add business value, needs to have a clear objective (i.e. the scrum team will agree what the expected sprint outcome is for a story, such as whether it will demonstrate the UI layout via the use of stubs, or be partly or fully integrated), needs to be technically feasible, has to be testable, and its dependencies need to be understood. Having a project roadmap (the “Release Train”) will help visualise the technical priority; aligning development to fit with downstream or third-party systems will keep the train moving, and allow stakeholders to apply pressure where necessary on suppliers to meet the demands of the train.
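To make that concrete, here is a minimal sketch in Python (using unittest, with entirely hypothetical service and story names) of an in-sprint acceptance test that exercises an integration point through a stub; running it against the stub at least makes the assumed message contract explicit, and the same test should be repeated against the real downstream system before the release train departs:

```python
import unittest


class StubPaymentsService:
    """Hypothetical stand-in for a downstream system not yet available in-sprint."""

    def authorise(self, order_id: str, amount: float) -> dict:
        # The stub encodes the message contract the scrum team believes applies
        return {"order_id": order_id, "status": "AUTHORISED", "amount": amount}


def place_order(payments, order_id: str, amount: float) -> str:
    """Code under test: depends on the downstream contract."""
    response = payments.authorise(order_id, amount)
    return "CONFIRMED" if response["status"] == "AUTHORISED" else "REJECTED"


class OrderAcceptanceTest(unittest.TestCase):
    def test_order_is_confirmed_when_payment_is_authorised(self):
        # Acceptance criterion agreed at sprint planning: an authorised payment
        # confirms the order. Against the stub this documents the assumed
        # contract; re-run it against the real system before release.
        self.assertEqual(place_order(StubPaymentsService(), "ORD-1", 99.99), "CONFIRMED")


if __name__ == "__main__":
    unittest.main()
```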

“this agile thing sucks, it just doesn’t work, and now they want to use offshore” – again, do the scrum team, stakeholders and sponsors understand the commitment required? What problems are being faced? Are the retrospectives being used effectively to identify common problems in the approach that can be tackled by the team to improve collaboration and quality? If these are addressed, what is the next area that “sucks”? Continuous, manageable improvement builds confidence better than a widespread overhaul. In terms of offshore, what elements are being offshored? If it’s development, are the offshore team working in-sprint, or are they feature-based, complementing the onshore development team – perhaps running a more Kanban-style, just-in-time approach, limiting the amount of work in progress in order to demonstrate progress? What about testers offshore? If we think about the obvious benefit and alignment to agile/Scrum principles in terms of constant early feedback, utilising offshore resources means that the developers create the nightly build, the offshore team test it, and the feedback is available first thing when the developers arrive; they can fix and build, and the onshore testers can retest and close the cycle down. Maybe the offshore team are focused on regression capability via automation, taking pressure off the onshore team so it can focus on acceptance and collaboration? In short, there are many ways to improve the approach and find the one that provides the benefits the stakeholders envisaged when they decided to adopt agile.

“great, so now we death march every second week” – if this is the case, we need to go back to sprint planning: what did we do wrong? Did we understand the complexity and dependencies? Was the product owner available to give us the insight? Were the UX designs ready? What did we forget? For Scrum to succeed, we need to manage the backlog and, during the sprint, not overburden the team, either by underestimating or by taking on too many stories; doing so will add to the technical debt, either through “hacks” to get the function working or through outstanding defects that cannot be fixed in-sprint.
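A back-of-the-envelope check at sprint planning can surface overcommitment before it turns into a death march. The sketch below (Python, with purely illustrative numbers and names) simply compares candidate stories against the capacity the team believes it has:

```python
# Illustrative sprint-planning sanity check: compare candidate stories
# against the capacity the team actually has available.

candidate_stories = {"login rework": 8, "audit trail": 5, "UX polish": 3, "reporting API": 13}

team_members = 5
days_in_sprint = 10
focus_factor = 0.7           # allowance for meetings, support and interruptions
points_per_person_day = 0.6  # derived from past velocity; purely hypothetical

capacity = team_members * days_in_sprint * focus_factor * points_per_person_day
committed = sum(candidate_stories.values())

print(f"Capacity: {capacity:.0f} points, candidate commitment: {committed} points")
if committed > capacity:
    print("Overcommitted: drop or split stories now, not mid-sprint.")
```

Crude as it is, making the comparison visible at planning stops that conversation happening for the first time mid-sprint.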

“my technical debt list is bigger than my backlog” – following on from the last point, why are we building technical debt? How can we change? Is the technical debt related to new versions of development tools or architecture? Do we need to refactor the code to migrate to new tools in order to achieve new business goals? If yes, do we assign a technical sprint to refactor in order to kick on?

Only the team can decide… well, the team and the now fully educated stakeholders!

Some of the articles and tweets that I have found useful this month include:

  • (via CIO.com) Why ‘Agile Project Management Controls’ Isn’t an Oxymoron http://m.cio.com/article/731594/Why_Agile_Project_Management_Controls_Isn_t_an_Oxymoron?page=3&taxonomyId=3170
  • (via techtarget) “Agile methodology techniques: Unit test, automation and test-driven development” http://searchsoftwarequality.techtarget.com/tip/Agile-methodology-techniques-Unit-test-automation-and-test-driven-development
  • (via techtarget) “Estimates in Agile development: Capacity matters in sprint planning” http://searchsoftwarequality.techtarget.com/tip/Estimates-in-Agile-development-Capacity-matters-in-sprint-planning
  • (via techtarget) “Continuous integration, automation and test-driven development explained” http://searchsoftwarequality.techtarget.com/Continuous-integration-automation-and-test-driven-development-explained
  • (via @parasoft) How Can Test-Driven Development (TDD) Be Automated? http://blog.parasoft.com/bid/52034/How-Can-Test-Driven-Development-TDD-Be-Automated
  • (via @agilescout) Being an Agile Coach – Dealing with Conflict http://bit.ly/19dXGuf #agile #pmot
  • (via mironov.com) Agile Fundamentals, ROI and Engineering Best Practices http://www.mironov.com/assets/Agile_SFO_Mironov.pdf

Having worked over many years on projects of various sizes, I have found that similarities can be drawn irrespective of who runs the project, what the business priority is and how much budget is available. Wherever you go, you are likely to face the same issues, or anti-patterns, when it comes to process and approach.

The focus of this blog post will be the Agile sprint process – more specifically, what prevents us from achieving near “Production Quality” code at the end of a sprint, and how in turn that code becomes “Production Ready”. I will be looking at some of the main barriers to achieving this status, and outlining where things should change to give the project team the best chance of success.

Before going into detail, I’d like to start by explaining what I mean by “Production Quality” and “Production Readiness”:

Production Quality – I see this as the minimum acceptable level for any code package delivered at the end of a sprint cycle. To achieve production quality, I assume that before the sprint an inclusive game-planning session was conducted, where the scope of each story was agreed and the acceptance criteria (inclusive of tests) were defined in agreement with the Product Owner. The “Quality” is determined by the fact that the project team has developed and reviewed the agreed stories within the sprint, meeting all of the acceptance criteria. By achieving this, the project team are asserting that a production candidate is available that can be handed over to pre-release QA and/or Business Acceptance (UAT) teams for validation.

Production Readiness – Once the release candidate has been subjected to an ‘appropriate’ level of QA and UAT analysis, and provided no blocking defects have been found and all of the functional and non-functional requirements have been validated, we can label the package “Production Ready”. The package can now be handed over to the Release Management team in order to schedule a deployment to Production.
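Expressed as a simple gate, the two definitions might look something like the following sketch (Python, with field names of my own invention); the point it makes is that “Production Ready” is “Production Quality” plus a clean pre-release validation:

```python
from dataclasses import dataclass


@dataclass
class ReleaseCandidate:
    # Hypothetical fields summarising the state of a sprint's output
    acceptance_criteria_met: bool       # all agreed criteria demonstrated in-sprint
    code_reviewed: bool
    blocking_defects: int               # found during pre-release QA / UAT
    functional_checks_passed: bool
    non_functional_checks_passed: bool


def production_quality(rc: ReleaseCandidate) -> bool:
    # Minimum bar at the end of the sprint: a candidate worth handing to QA/UAT
    return rc.acceptance_criteria_met and rc.code_reviewed


def production_ready(rc: ReleaseCandidate) -> bool:
    # Production quality plus a clean bill of health from pre-release validation
    return (production_quality(rc)
            and rc.blocking_defects == 0
            and rc.functional_checks_passed
            and rc.non_functional_checks_passed)
```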

Now that I have clarified what I mean by quality and readiness, I’d like to address the title of this post, “What cost a lack of acceptance?”. If elements of the sprint process are ignored or overlooked – whether to save time or money, or simply to get things done – the impact can be quite high. In my opinion, these shortcuts are a false economy, as you are potentially shifting the issues later in the development or release cycle; and as we know from traditional analysis, the later in the lifecycle you detect a defect, the greater the cost to fix it. So what pressures can be experienced that lead to a lack of acceptance?

1. Development Versus QA Ratios – An imbalance within a project team in the ratio of QA personnel to developers can have a significant impact on the ability of the QA team to keep pace with in-sprint tasks (such as defining acceptance tests for future sprints and providing daily feedback during the current sprint) whilst also conducting more traditional QA tasks, such as release candidate testing. If this situation arises, a trade-off may need to happen. One such trade-off could be to reduce the number of acceptance criteria on a story. If this happens, we could be adding risk to our release schedule, as only limited validation has taken place before entering final QA, leading to more bugs being found during QA and requiring additional builds and retest cycles. Having to pull developers off in-sprint duties to fix defects raises the cost to the project – not only in terms of the time to fix (and merge to code branch(es)), but also the potential impact their withdrawal has on the current sprint.

2. Waterfall in a sprint – I have worked on projects which claim to be Agile but which, on inspection, stretch the truth just a little! All that happens is that work packages are split into three-week blocks; the developers code for 2 to 2.5 weeks, then ‘throw over the fence’ to the in-sprint QA for testing. Adopting this approach does not maximise the potential gains of providing regular (daily?) feedback from the QA team. The earlier in the sprint issues are found, the quicker they can be fixed, raising the perception of quality on a daily basis.

3. Lack of Product Owner buy-in – If your Product Owner is not fully engaged with the project, this can lead to a lack of acceptance. If the Product Owner does not provide (or at least provide input into) the minimum acceptance criteria at the beginning of a sprint, the team will be fighting a losing battle: come the walkthrough at the end of the sprint, any vagueness will more than likely have been interpreted incorrectly, and the Product Owner may insist on instant fixes before providing sign-off for the feature in question.

4. How integrated should you be? – If, during a sprint, the acceptance criteria overlook particular integration dependencies, including the target go-live dates of dependent systems, this can undermine the project’s attempts to achieve production readiness. A lack of understanding of release schedules for dependent systems can lead to reputational damage for the project team with sponsors, as features will be unusable in production, preventing the company from rolling the product or feature out to prospective clients.

I have experienced all of these pressures on various projects over the years. They are always likely to come up during software delivery projects, but it is how far they are allowed to manifest themselves that will determine the size of the impact they have. Keeping control of these factors should increase confidence and promote knowledge within the team. How these influences are handled, and how closely the team sticks to the agreed approach, will play a significant part in determining whether the delivered product is fit for purpose. Keeping control of acceptance criteria, from clear definition through to implementation, is one of the key factors of project success.

 

Having worked on various projects over the years where the main driver was time-to-market pressure, I thought that, working as part of an Agile team, the days of the death march were over, as we agree our work packages upfront.

Currently, I am working on a project that is going all out to meet the deadline, at the expense of the agile approach. Though it has its moments of chaos, the job is getting done.

This leads me to the topic of this post. What are people’s views on the death march approach – is it an inevitable consequence of changing business demands? Or can agile fight its corner and prove its flexibility to deliver against a change in focus?

All comments / views welcome

Reading about and practising agile within a team, the aim is to promote collaboration and commitment to delivery on a sprint basis. Process improvement is what all teams should strive for: making the process ‘lean’ will allow the team to take on more work or deliver increased complexity. I’ve come across a couple of interesting articles that highlight the case for doing something and the case for doing nothing until you have a question to answer.

The first article from tool vendor VersionOne – http://bit.ly/drDGUG highlights the need for constant improvement to keep the process fresh.

The second article My Flexible Pencil – http://bit.ly/9dj2pD shares a different view, that sometimes doing nothing is the ‘best’ thing that could happen to the process.

From my point of view, I’ll take a few splinters and sit on the fence with this one; though if I had to commit, I’d probably fall on the side of My Flexible Pencil.

The important thing for me is that process change has to be worthwhile and display a tangible benefit to the team, whether that be reducing admin overhead or encouraging collaboration through workshops. The outcome of retrospectives is an important one: small, achievable tweaks to a process on a sprint-by-sprint basis should easily show tangible benefits. Vast overhauls of process can lead to the dangerous position where team members become lost, in turn affecting delivery.

So in summary, ‘how often’ should be driven by the outcome of retrospectives. ‘How much’ depends on the evolution of the team: has the team ‘outgrown’ the current process? If yes, then a major overhaul may be the answer. If no, then retrospective tweaks may be sufficient – but the team will decide…

In an earlier post, we highlighted “The Definition of Done”, concentrating on the factors that make up a complete story that is potentially shippable at the end of a sprint.

In this post, I want to focus on Production releases, integration headaches and how to account for Production Support in later sprints.

Recently, a situation arose where a third-party vendor could not react quickly enough to the requirements of our sprint. Rightly or wrongly, the decision was made to create fake services to test out the code that was being written. All was merry and bright and there was a good feeling within the team, until reality dawned that there was an imminent production rollout and our application was far from shippable, due to the lack of integration.

What followed was death march city, as the team worked through a raft of integration defects, where the message contracts between the systems had either been incorrectly applied or exposed edge cases that had previously gone undefined.
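With hindsight, much of that pain came from the fake services drifting away from the real message contracts. The sketch below (Python, with hypothetical message fields and service names) shows the kind of lightweight contract check that could be run against a fake during the sprint; it would not remove the need for real integration testing, but it would have surfaced some of the mismatches earlier:

```python
# Illustrative check that a fake service still honours the agreed message
# contract, so that in-sprint testing against fakes does not drift too far
# from the real downstream system. All names and fields are hypothetical.

AGREED_CONTRACT = {
    "quote_response": {"quote_id": str, "price": float, "currency": str, "expires_at": str},
}


class FakeQuoteService:
    def get_quote(self, instrument: str) -> dict:
        return {"quote_id": "Q-123", "price": 101.5, "currency": "GBP",
                "expires_at": "2014-06-30T17:00:00Z"}


def conforms(message: dict, schema: dict) -> bool:
    """True if the message has exactly the agreed fields, with the agreed types."""
    return (set(message) == set(schema)
            and all(isinstance(message[field], schema[field]) for field in schema))


fake_response = FakeQuoteService().get_quote("XYZ")
assert conforms(fake_response, AGREED_CONTRACT["quote_response"]), "Fake has drifted from the contract"
print("Fake service still matches the agreed quote_response contract")
```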

A harsh lesson learnt by all concerned. This situation got me thinking about post-production – specifically support and the inevitable patch releases, as end users uncover missing business rules, fine-grained entitlement issues and the like.

How do we support these in an Agile sprint? The main problem that I foresee is that the developers will have moved on to the next feature delivery in line with the roadmap; therefore, if production issues do arise, that codebase is likely to be 4–6 weeks old. There is the danger that fixes become ‘over-engineered’, as the developer may be working on new functions in that area as part of the current sprint, which may affect the fix on the ‘old’ code branch.

Also, there is the issue of delivery: production support estimates are always finger-in-the-air estimates, yet if you do not provide sufficient cover, your current sprint may be at risk. The counter-argument is that if you take too cautious an approach and assign too many points to support, your Product Owner will lose faith, as the team will not be delivering much new functionality.

The approach currently being taken is to agree a proportion of points for production support (up to 20–25% of capacity), splitting these between our development teams. Within each team there is a rota of who is on support each day; the objective is to ensure everyone contributes to the current sprint whilst also being mindful of the need to support an old code branch.
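As a rough illustration of what that split looks like in points, here is a small sketch (Python, with invented team names and numbers): reserve a slice of each team’s capacity for support, and plan new stories against what is left.

```python
# Illustrative split of sprint capacity between new features and
# production support, along the 20-25% line mentioned above.

teams = {"Team A": 40, "Team B": 32}   # hypothetical sprint capacities in points
support_share = 0.25                   # proportion reserved for production support

for team, capacity in teams.items():
    support_points = round(capacity * support_share)
    feature_points = capacity - support_points
    print(f"{team}: {support_points} points on support, {feature_points} points on new stories")
```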

In line with my previous post, we are looking to improve our ‘doneness’ by relying less on the fakery – or, if fakery is required, ensuring that future tasks are captured and estimated to reduce pain points in the run-up to subsequent production releases. This includes testing considerations such as multi-region testing, external network testing and compatibility.

Implementing Agile throws up a number of challenges, but I guess the testament to how well you are doing lies in how well you can react and implement new strategies and considerations…