Posts Tagged ‘Testing’

Throughout industry, whether business-led or IT-led, the mantra still seems to be that if you are not ‘agile’ you can’t survive, let alone succeed, in today’s marketplace. But what does ‘agile’ mean? In my role I speak to numerous companies, whether in a business development capacity, networking at industry events, or after presenting at conferences. The views I hear are polar opposites, sometimes disingenuous or contradictory to the Agile Manifesto’s guidelines. Some of the questions and statements that I hear on a regular basis include:

  • “Do you develop products or technology in an agile way?”
  • “Does your company adopt business agility models?”
  • “Are you a Scrum fan, or Kanban?”
  • “Actually, we operate lean principles”
  • “We’re not quite there yet, so we are more scrumfall than anything”
  • “It sounds good, but it’s just so difficult to be heard”
  • “We have a daily stand-up every day”
  • “The business/IT Management just don’t get it”
  • “The pressure of multiple releases is just intense”
  • And my personal favourite “We’re so agile, we go beyond agile”

The objective of agile is no different from that of more traditional approaches, i.e. a working product (quality), delivered on or under budget (cost), in an acceptable timeframe (time), to gain advantage over the competition by increasing the company’s reputation and trust as the ideal solution provider.

There is a lot of talk in blogs about how ‘agile’ must adapt to keep up with the ever-changing demands of the marketplace. I’m not sure that I completely agree: the examples above show there is already confusion in the marketplace, so change for the sake of change may make the gulf in ‘maturity’ even wider. A good example of change is the kudos that DevOps brings. If you ask 10 different people to describe agile, you’ll get many different views. If you ask the same people to describe DevOps, you’ll likely receive more than 10 views, as was proven in my office when we discussed what our testing offering could bring to a DevOps environment. Everyone had a view about what it was, though the biggest consensus gained was on how to spell it!

For me, that is the bottom line: if your project, company or industry wants to operate in an agile way, or adopt a certain flavour of agile, you need to lay the foundations; in effect, define your individual manifesto and publicise it.

“Working software over comprehensive documentation” could translate to “working software with just enough documentation to meet our regulatory demands”.

I could go on, but I think the message here is clear: to be agile, you need to collaborate. To collaborate, you need to know what you are working towards. Define a top-to-bottom understanding within your organisation or project team to enable change, transform your approach and deliver success; and if you want to badge this as ‘agile’, that’s fine too.

Working for a global consultancy like Sogeti UK, where the majority of our business is in the testing space, when we are asked to help introduce agile testing to an engagement we look at the bigger picture. Understanding the landscape of the project and the company determines the type of consultants we deploy. If a client still needs to set their own manifesto, there is little reason to deploy highly technical testers: reputation can suffer if collaboration, or rather the mechanics of the project, is not smooth, and delays occur or release dates are missed because the end game wasn’t known. Instead we look to utilise more rounded consultants who can embed into teams and coach and steer the team or company towards defining “what does good look like for us?”, setting the manifesto and expectations for all to understand, commit to and collaborate on.

At Sogeti, we look to help companies achieve their own manifesto through workshops, involving senior stakeholders through to junior testers, with the aim of aligning objectives, and by embedding consultants to assist companies on their transformation journey: defining an individual working ‘agile’ approach, irrespective of the ‘flavour’ applied. At every step of the way we look to support their needs as maturity, i.e. success, is gained, moving towards the seeming utopia of DevOps (I think that was the agreed spelling!).

If you’d like to find out more about the services Sogeti UK can offer, visit our website http://uk.sogeti.com/ or drop me a line.


A nice read from InfoQ on the benefits of testing in production, shared by mdavey.wordpress.com:

http://www.infoq.com/articles/mobile-app-performance-testing

Having spoken to various clients across various industries, there seems to be a general shift away from offshore test centres of excellence towards a more proximate solution. This is especially apparent with respect to mobile device testing. Companies want to call upon services, but don’t want to rely on ‘follow the sun’ approaches.

Couple instantly accessible device solutions with the experience of offshore partners and you have the foundation of a solution that covers the main demands of today’s market: tests on new devices performed onshore, with older, more unusual device configurations tested by offshore partners via a cloud solution.

With regard to automation approaches, we are looking for frameworks that address the multi-channel aspect of the digital transformation age.

Finding an automation framework that can be used across multiple devices, channels and browsers will provide stakeholders with great solace, as, in theory, any expenditure on automation will yield greater ROI: “one script fits all” across various desktop and mobile browsers.

Providing companies with a “script once” approach will be very appealing.
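To make the “script once” idea concrete, here is a minimal sketch assuming Selenium WebDriver in Python; the channel names, emulated device and URL are illustrative rather than taken from any real engagement. The point is that the test logic is written once and only the driver factory varies per channel:

```python
# A minimal "script once" sketch using Selenium WebDriver (an assumed
# toolchain; any cross-browser framework could play the same role).
from selenium import webdriver

def make_driver(channel):
    """Return a WebDriver for the requested channel."""
    if channel == "chrome":
        return webdriver.Chrome()
    if channel == "firefox":
        return webdriver.Firefox()
    if channel == "chrome-mobile":
        # Emulate a mobile device inside desktop Chrome.
        options = webdriver.ChromeOptions()
        options.add_experimental_option(
            "mobileEmulation", {"deviceName": "Pixel 2"})
        return webdriver.Chrome(options=options)
    raise ValueError(f"unknown channel: {channel}")

def check_homepage_title(driver):
    """The single script, reused unchanged across every channel."""
    driver.get("https://example.com/")  # placeholder URL
    assert "Example" in driver.title

for channel in ("chrome", "firefox", "chrome-mobile"):
    driver = make_driver(channel)
    try:
        check_homepage_title(driver)
    finally:
        driver.quit()
```

The same pattern extends to cloud device farms: the factory simply returns a remote driver pointed at the provider’s grid instead of a local browser, and the script itself never changes.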

Tools and frameworks are trying to redress the balance, providing efficiency in the framework space, but until that is verified there will be a need for maintenance to ensure the current framework remains compatible.

However, for me the most important thing is that, as testers, we need to be constantly innovating and developing our skills. Technology is constantly changing and so must our testing methods; frameworks that limit maintenance and maximise channel coverage are a must, but the most important thing (for now) is that the individual develops and keeps pace with the ever-changing technology landscape.

Wherever you go, whatever blog post you read, whatever company you do business with, there is likely to be a common journey theme when it comes to implementing agile, especially the transition from positivity through to surrender: “We are doing agile”; “my velocity is bigger than yours”; “I can’t believe our release is delayed because our downstream system isn’t agile”; “this agile thing sucks, it just doesn’t work, and now they want to use offshore”; “great, so now we death march every second week”; “my technical debt list is bigger than my backlog”.

OK, so I added a bit of poetic licence to the last one, but I think the point holds: at the outset of the bright new world, positive vibes and optimism are plentiful; as problems arise, if they are not dealt with, they escalate, and confidence turns to confusion, which leads to defeatism. This attitude isn’t an agile problem, it’s a standard project problem: if risks and issues arise on a project, they are managed or mitigated; in agile, the same holds true. For the purpose of this post, I’ll concentrate mainly on Scrum approaches.

Let’s look at the statements outlined:

“We are doing agile” – are you sure? Asking the same three questions each morning doesn’t mean you are “doing” Scrum (agile). How are you finding the journey? Do all of your team members and sponsors understand the investment required, in time and people, to realise the benefits of applying agile principles? Do they understand the role that each needs to play, especially during sprint planning and retrospectives?

“My velocity is bigger than yours” – great! So how are you using this backward-looking measure to assist your future release train? How are you applying what you have learnt to road-mapping the next set of features in the backlog?
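To illustrate with invented numbers, the backward-looking measure only earns its keep when it feeds the forward-looking forecast:

```python
# Toy example of turning historical velocity into a release forecast.
# All numbers are invented for illustration.
import math

completed_points = [21, 18, 25, 22]   # points delivered in past sprints
velocity = sum(completed_points) / len(completed_points)

backlog_points = 180                  # points remaining on the roadmap
sprints_needed = math.ceil(backlog_points / velocity)

print(f"average velocity: {velocity:.1f} points/sprint")
print(f"forecast: ~{sprints_needed} sprints to clear the backlog")
```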

“I can’t believe our release is delayed because our downstream system isn’t agile” – is it really the downstream system’s fault? Turn the situation around and ask, “as a Scrum team, why didn’t we understand our dependencies? Why didn’t we have an acceptance test that covered this integration?” As a self-organising team, each story that goes into a sprint needs to add business value; needs a clear objective (i.e. the Scrum team will agree what the expected sprint outcome is for a story, such as demonstrating the UI layout via stubs, or being partly or fully integrated); needs to be technically feasible; has to be testable; and needs its dependencies understood. Having a project roadmap (the “release train”) will help visualise the technical priority; aligning development to fit with downstream or third-party systems will keep the train moving, and allow stakeholders to apply pressure on suppliers where necessary to meet the demands of the train.
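As a sketch of what such an acceptance test might look like, a downstream dependency can be stubbed so the integration contract is exercised in-sprint. The pricing service and its endpoint here are invented for illustration:

```python
# Sketch: an in-sprint acceptance test that exercises the integration
# contract with a downstream system by stubbing it out.
# The "pricing" service and its payload are invented for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubDownstream(BaseHTTPRequestHandler):
    """Stands in for the downstream pricing system we depend on."""
    def do_GET(self):
        body = json.dumps({"price": 9.99, "currency": "GBP"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def test_price_lookup_contract():
    server = HTTPServer(("localhost", 0), StubDownstream)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        port = server.server_address[1]
        # In a real test this call would go through the system under test;
        # here we hit the stub directly to show the agreed contract.
        with urlopen(f"http://localhost:{port}/price/SKU-123") as resp:
            payload = json.load(resp)
        assert payload["currency"] == "GBP"
        assert payload["price"] > 0
    finally:
        server.shutdown()

test_price_lookup_contract()
```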

“This agile thing sucks, it just doesn’t work, and now they want to use offshore” – again, do the Scrum team, stakeholders and sponsors understand the commitment required? What problems are being faced? Are the retrospectives being used effectively to identify common problems in the approach, which the team can tackle to improve collaboration and quality? If these are addressed, what is the next area that “sucks”? Continuous, manageable improvement builds confidence better than widespread overhaul. In terms of offshore, what elements are being offshored? If it’s development, are the offshore team working in-sprint or are they feature-based, complementing the onshore development team, perhaps running a more Kanban, just-in-time approach, limiting the amount of work in progress to demonstrate progress? What about testers offshore? Think about the obvious benefit and alignment to agile/Scrum principles in terms of constant early feedback: with offshore resources, the developers create the nightly build, the offshore team test it, and feedback is available first thing when the developers arrive; they can fix and build, and the onshore testers can retest and close the cycle down. Maybe the offshore team focus on regression capability via automation, taking pressure off the onshore team so it can focus on acceptance and collaboration? In short, there are many ways to improve the approach and find the one that delivers the benefits the stakeholders envisaged when they decided to adopt agile.

“Great, so now we death march every second week” – if this is the case, we need to go back to sprint planning: what did we do wrong? Did we understand the complexity and dependencies? Was the Product Owner available to give us the insight? Were the UX designs ready? What did we forget? For Scrum to succeed, we need to manage the backlog and not overburden the sprint, whether by underestimating or by taking on too many stories; doing so adds to the technical debt, either through “hacks” to get the function working or through outstanding defects that cannot be fixed in-sprint.

“My technical debt list is bigger than my backlog” – following on from the last point, why are we building technical debt? How can we change? Is the technical debt related to new versions of development tools or architecture? Do we need to refactor the code to migrate to new tools in order to achieve new business goals? If yes, do we assign a technical sprint to refactor in order to kick on?

Only the team can decide… well, the team and the now fully educated stakeholders!

Having worked over many years on projects of various sizes, I can draw similarities irrespective of who runs the project, what the business priority is and how much budget is available. Wherever you go, you are likely to face the same issues or anti-patterns when it comes to process and approach.

The focus of this blog post is the agile sprint process: specifically, what prevents us achieving near “Production Quality” code at the end of a sprint, and how in turn that code becomes “Production Ready”. I will look at some of the main barriers to achieving this status, and outline where things should change to give the project team the best chance of success.

Before going into detail, I’d like to start by explaining what I mean by “Production Quality” and “Production Readiness”:

Production Quality – I see this as the minimum acceptable level for any code package delivered at the end of a sprint cycle. To achieve production quality, I assume that before the sprint an inclusive game-planning session was conducted, where the scope of each story was agreed and the acceptance criteria (inclusive of tests) were defined in agreement with the Product Owner. The “quality” is determined by the fact that the project team has developed and reviewed the agreed stories within the sprint, meeting all of the acceptance criteria. By achieving this, the project team are saying that a production candidate is available that can be handed over to pre-release QA and/or Business Acceptance Teams (UAT) for validation.

Production Readiness – Once the release candidate has been subjected to an “appropriate” level of QA and UAT analysis, if no blocking defects have been found and all of the functional and non-functional requirements have been validated, we can label the package as “Production Ready”. The package can now be handed over to the Release Management team to schedule a deployment to Production.
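One way to make “Production Quality” verifiable rather than aspirational is to encode the agreed acceptance criteria as executable checks that gate the end of the sprint. This is a sketch only; the story and its criteria are invented:

```python
# Sketch: acceptance criteria agreed at game-planning, encoded as
# executable checks (pytest style). Story and criteria are invented.

def submit_order(basket):
    """Stand-in for the feature under test."""
    total = sum(item["price"] * item["qty"] for item in basket)
    return {"status": "confirmed", "total": round(total, 2)}

# Acceptance criteria for the story "customer can submit an order":

def test_order_is_confirmed():
    result = submit_order([{"price": 10.0, "qty": 2}])
    assert result["status"] == "confirmed"

def test_total_is_calculated():
    result = submit_order([{"price": 10.0, "qty": 2},
                           {"price": 2.5, "qty": 1}])
    assert result["total"] == 22.5

# The story is only "done", and the build only "Production Quality",
# when every agreed criterion passes.
```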

Now that I have clarified what I mean by quality and readiness, I’d like to address the title of this post: “What cost a lack of acceptance?”. If elements of the sprint process are ignored or overlooked, whether to save time or money or simply to get things done, the impact can be quite high. In my opinion these shortcuts are a false economy, as you are potentially shifting issues later in the development or release cycle; and as we know from traditional analysis, the later in the lifecycle you detect a defect, the greater the cost to fix it. So what pressures can be experienced that lead to a lack of acceptance?

1. Development versus QA ratios – An imbalance within a project team in the ratio of QA personnel to developers can significantly impact the QA team’s ability to keep pace with in-sprint tasks (such as defining acceptance tests for future sprints and providing daily feedback during the current sprint) whilst conducting more traditional QA tasks, such as release candidate testing. If this situation arises, a trade-off may be needed. One such trade-off could be to reduce the number of acceptance criteria on a story. If this happens, we could be adding risk to our release schedule, as only limited validation has taken place before entering final QA, leading to more bugs being found during QA and requiring additional builds and retest cycles. Having to pull developers off in-sprint duties to fix defects raises the cost to the project, not only in the time to fix (and merge to code branch(es)), but also in the impact their withdrawal has on the current sprint.

2. Waterfall in a sprint – I have worked on projects which claim to be agile but which, on inspection, stretch the truth just a little! All that happens is that work packages are split into three-week chunks; the developers code for 2 to 2½ weeks and then ‘throw it over the fence’ to the in-sprint QA for testing. This approach does not maximise the potential gains of regular (daily?) feedback from the QA team. The earlier in the sprint issues are found, the quicker they can be fixed, raising the perception of quality on a daily basis.

3. Lack of Product Owner buy-in – If your Product Owner is not fully engaged with the project, this can lead to a lack of acceptance. If the Product Owner does not provide (or at least provide input into) the minimum acceptance criteria at the beginning of a sprint, the team will be fighting a losing battle: come the walkthrough at the end of the sprint, any vagueness will more than likely have been interpreted incorrectly, and the Product Owner may insist on instant fixes before providing sign-off for the feature in question.

4. How integrated should you be? – If during a sprint the acceptance criteria overlook particular integration dependencies, including the target go-live dates of dependent systems, this can undermine the project’s attempts to achieve production readiness. A lack of understanding of the release schedules of dependent systems can cost the project team its reputation with sponsors, as features will be unusable in production, preventing the company from rolling the product or feature out to prospective clients.

I have experienced all of these pressures on various projects over the years. They are always likely to come up during software delivery projects, but it is how far they are allowed to manifest themselves that determines the size of their impact. Keeping control of these factors should increase confidence and promote knowledge within the team. How these influences are handled, and how closely the team sticks to the agreed approach, will play a significant part in determining whether the delivered product is fit for purpose. Keeping control of acceptance criteria, from clear definition through to implementation, is one of the key factors in project success.

 

http://www.agile-community.com/profiles/blog/show?id=4633438%3ABlogPost%3A1764

A number of texts on the subject of ratios suggest a 4:1 developer-to-tester split. I would agree that if the majority of your development work concentrates on delivering functionality, then you will need to increase the number of testers on the team. But what about the early phases of a project, where infrastructure needs to be installed and integrated, or the run-up to that first release, where teams have to tackle additional technical debt to harden the application? Running a 4:1 ratio in these situations may seem overkill to your client, or lead to utilisation problems. In these situations I have been working to a 6:1 ratio; anything greater than that and cracks began to appear, leading to resourcing conversations.

No project is going to be the same, and as the project matures and, hopefully, the technical backlog/debt mountain reduces, team ratios will move closer to 4:1 or even 3:1, so that velocity can be maintained and the testers can provide feedback early in the sprint whilst maintaining the automation effort as part of the continuous build.
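As a back-of-the-envelope illustration (the team size is invented), the difference those ratios make to headcount is easy to see:

```python
# Back-of-the-envelope tester headcount implied by different
# developer-to-tester ratios. The team size is invented.
import math

developers = 12
phases = [("early/infrastructure", 6), ("feature delivery", 4),
          ("mature product", 3)]

for phase, ratio in phases:
    testers = math.ceil(developers / ratio)
    print(f"{phase}: {developers} devs at {ratio}:1 -> {testers} testers")
```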

I asked in my previous post whether the agile tester’s input finishes at the end of a sprint; as I outlined, my current thinking is ‘no’, so this also needs to be factored into discussions around team size and ratios.