Having worked in and around the testing industry for over 15 years, I have seen many changes in the world of testing. The importance of the tester has gone through peaks and troughs over those years. Potential business failures such as Y2K and the implementation of the Euro saw companies invest heavily in "career testers", paying increased salaries and bonuses to get the job done. Both Y2K and the Euro had issues, but not the catastrophic impact that analysts predicted. The immediate aftermath of both events led to questions being raised about why testers were needed in such numbers and at such cost to projects.

That to-and-fro continues, and market events often play a big role in the answer.

In more recent years, the popularity of agile, and in particular the rise of approaches such as TDD, has raised the question of a tester's value still further, with some companies relying on development-led activity to provide the necessary quality. Other companies, such as Microsoft, have invested in engineering roles for testers, or "Developers in Test": people who can bridge the gap between testing and development.

So, with all these changes in approach, where does the humble tester sit? Do they become coders? Do they become business analysts? Some will, some won't. Others may increase their skills by learning the basics of both so that they can continue to contribute and ensure a development area covers the 'what ifs'. We hear terms such as 'Shift Left'; testers in this arena can still add value: static testing of documentation or architecture diagrams will highlight risk areas and contribute to coverage models, environment management and performance strategies.

The technology world is becoming more service-based, with enterprise architects looking to SOA services to provide efficiency. Testing is playing catch-up again: defining a framework that provides optimal coverage of service calls is imperative, striking a balance between manual and automated coverage is key, and supporting continuous integration is another way that testers can bring value back to the project lifecycle. Complementing development and business approaches builds confidence and should contribute to companies meeting their business objectives around the cost, time and quality triangle.
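
To make that concrete, here is a minimal sketch of the kind of automated service-call check that could run on every continuous integration build. The base URL, endpoint and response fields are hypothetical, invented purely for illustration:

```python
# Minimal smoke test for a hypothetical SOA endpoint, runnable under pytest.
# The base URL and response fields are illustrative placeholders.
import requests

BASE_URL = "https://example.internal/api"  # hypothetical service host


def test_get_account_meets_contract():
    response = requests.get(f"{BASE_URL}/accounts/12345", timeout=5)

    # Contract checks: status code, content type and mandatory fields.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")

    body = response.json()
    for field in ("accountId", "status", "balance"):
        assert field in body, f"missing mandatory field: {field}"
```

A handful of contract checks like this, run per build, frees the tester to focus on the exploratory 'what if' scenarios that automation misses.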

But what if the tester doesn't want to code? Doesn't want to use automation frameworks? Well, the digital age is here, and growing. The rise of consumerisation in the marketplace is an important factor for companies when committing budget to projects. Consumers hold the key: demands for information to be displayed on multiple channels are leading companies to define their digital strategies. The technology race to bring newer and better wearable devices raises the question of consistency and quality again. Each device, whether static, mobile or wearable, will need to portray consistent design and functionality to keep the consumer interested. Deviation in the level of service will lead to consumers voting with their feet (or fingers!) and moving to a competitor.

Usability is key, and device interaction and gesture support are vital to maintaining market share. Can automation tools cover this? Not entirely. The need for the humble tester is therefore raised again: filling the gaps that automation frameworks cannot support, operating in an assurance role to keep products relevant and maintain consumer interest.

In short, the manual tester is dead, long live the manual tester!

Having spoken to various clients across various industries, there seems to be a general shift away from offshore test centres of excellence towards a more proximate solution. This is especially apparent with respect to mobile device testing. Companies want to call upon services, but don't want to rely on 'follow the sun' approaches.

Couple instantly accessible device solutions with the experience of offshore partners and you have the foundation of a solution that covers today's main requirements: new device testing performed onshore, with older, more unusual device configurations handled by offshore partners via a cloud solution.

With regard to automation approaches, we are looking for frameworks that address the multi-channel aspect of the digital transformation age.

Finding an automation framework that can be used across multiple devices, channels and browsers will provide stakeholders with great solace, as, in theory, any expenditure on automation will result in greater ROI because "one script fits all" across various desktop and mobile browsers.

Providing companies with a “script once” approach will be very appealing.
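
As an illustration of the "script once" idea, here is a minimal sketch using Selenium WebDriver with pytest. The browser list and URL are placeholder assumptions; a real multi-device setup would typically substitute webdriver.Remote pointed at a cloud device grid:

```python
# "Script once" sketch: one test body, parametrised across browsers.
# The browser list and URL are illustrative; a cloud device grid would
# substitute webdriver.Remote with the vendor's desired capabilities.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}


@pytest.fixture(params=list(BROWSERS))
def driver(request):
    drv = BROWSERS[request.param]()  # launch the browser for this run
    yield drv
    drv.quit()


def test_homepage_title_is_consistent(driver):
    # The same assertion runs unchanged against every configured browser.
    driver.get("https://example.com")
    assert "Example" in driver.title
```

One test body multiplied across every configured channel is exactly where the ROI argument comes from: the script is written once, and the maintenance cost is paid once.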

Tools and frameworks are trying to redress the balance, providing efficiency in the framework space, but until that capability is verified there will be a need for maintenance to ensure the current framework remains compatible.

However, for me the most important thing is that, as testers, we need to be constantly innovating and developing our skills. Technology is constantly changing, and so must our testing methods. Frameworks that limit maintenance and maximise channel coverage are a must, but most important (for now) is that the individual develops and keeps pace with the ever-changing technology landscape.

Wherever you go, whatever blog post you read, whatever company you do business with, there is likely to be a familiar journey theme when it comes to implementing agile, especially the transition from positivity through to surrender: "We are doing agile"; "my velocity is bigger than yours"; "I can't believe our release is delayed because our downstream system isn't agile"; "this agile thing sucks, it just doesn't work, and now they want to use offshore"; "great, so now we death-march every second week"; "my technical debt list is bigger than my backlog".

OK, so I added a bit of poetic licence to the last one, but I think the point holds: at the outset of the bright new world, positive vibes and optimism are plentiful; as problems arise, if they are not dealt with, they escalate, and confidence turns to confusion, which leads to defeatism. This attitude isn't an agile problem, it's a standard project problem: if risks and issues arise on a project they are managed or mitigated, and in agile the same holds true. For the purpose of this post, I'll concentrate mainly on Scrum approaches.

Let’s look at the statements outlined:

"We are doing agile" – are you sure? Asking the same three questions each morning doesn't mean you are "doing" Scrum (agile). How are you finding the journey? Do all of your team members and sponsors understand the investment required, in time and people, to realise the benefits of applying agile principles? Do they understand the role that each needs to play, especially during sprint planning and retrospectives?

"My velocity is bigger than yours" – great! So how are you using a backward-looking measure to assist your future release train? How are you applying what you have learnt to road-mapping the next set of features in the backlog?

"I can't believe our release is delayed because our downstream system isn't agile" – is it really the downstream system's fault? Turn the situation around and ask, "as a scrum team, why didn't we understand our dependencies? Why didn't we have an acceptance test that covered this integration?" As a self-organising team, each story that goes into a sprint needs to add business value, needs a clear objective (i.e. the scrum team will agree what the expected sprint outcome is for a story, such as demonstrating UI layout via the use of stubs, or being partly or fully integrated), needs to be technically feasible, has to be testable and needs its dependencies understood. Having a project roadmap (the "Release Train") will help visualise the technical priority; aligning development to fit with downstream or third-party systems will keep the train moving, and allow stakeholders to apply pressure where necessary on suppliers to meet the demands of the train.
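
To illustrate the acceptance-test point, here is a sketch of a test that makes a downstream dependency explicit by stubbing it. The names (OrderService, the payment gateway and its charge call) are invented for the example, not taken from any real codebase:

```python
# Acceptance-test sketch: the downstream payment gateway is stubbed so the
# integration dependency is explicit and testable in-sprint. All names here
# (OrderService, charge) are invented for illustration.
import unittest
from unittest.mock import Mock


class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # The downstream call the story depends on, isolated behind a seam.
        result = self.payment_gateway.charge(amount)
        return "CONFIRMED" if result["status"] == "ok" else "REJECTED"


class PlaceOrderAcceptanceTest(unittest.TestCase):
    def test_order_confirmed_when_downstream_payment_succeeds(self):
        gateway = Mock()
        gateway.charge.return_value = {"status": "ok"}
        self.assertEqual(OrderService(gateway).place_order(100), "CONFIRMED")

    def test_order_rejected_when_downstream_payment_fails(self):
        gateway = Mock()
        gateway.charge.return_value = {"status": "declined"}
        self.assertEqual(OrderService(gateway).place_order(100), "REJECTED")


if __name__ == "__main__":
    unittest.main()
```

Writing the stubbed test forces the team to name the dependency during sprint planning, which is precisely when the "why didn't we know?" question should be answered.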

"This agile thing sucks, it just doesn't work, and now they want to use offshore" – again, do the scrum team, the stakeholders and the sponsors understand the commitment required? What problems are being faced? Are the retrospectives being used effectively to identify common problems in the approach that can be tackled by the team to improve collaboration and quality? If these are addressed, what is the next area that "sucks"? Continuous, manageable improvement builds confidence over wholesale overhaul. In terms of offshore, which elements are being offshored? If it's development, are the offshore team working in-sprint, or are they feature-based, complementing the onshore development team, perhaps running a more Kanban, just-in-time approach, limiting work in progress to demonstrate progress? What about testers offshore? Think about the obvious benefit and alignment to agile/Scrum principles in terms of constant early feedback: the developers create the nightly build, the offshore team test it, feedback is available first thing when the developers arrive, they can fix and build, and the onshore testers can retest and close the cycle down. Maybe the offshore team focus on regression capability via automation, taking pressure off the onshore team so they can focus on acceptance and collaboration? In short, there are many ways to refine the approach and find the one that provides the benefits the stakeholders envisaged when they decided to adopt agile.

"Great, so now we death-march every second week" – if this is the case, we need to go back to the sprint planning: what did we do wrong? Did we understand the complexity and dependencies? Was the product owner available to give us the insight? Were the UX designs ready? What did we forget? For Scrum to succeed, we need to manage the backlog and not overburden the sprint, either by underestimating or by taking on too many stories; doing so will add to the technical debt, either through "hacks" to get the function working or outstanding defects that cannot be fixed in sprint.

"My technical debt list is bigger than my backlog" – following on from the last point, why are we building technical debt? How can we change? Is the technical debt related to new versions of development tools or architecture? Do we need to refactor the code to migrate to new tools in order to achieve new business goals? If yes, do we assign a technical sprint to refactor in order to kick on?

Only the team can decide… well, the team and the now fully educated stakeholders!

Sprint planning in a Scrum team is an essential piece of the jigsaw. Allowing teams to collectively understand, detail and size stories provides (or at least aims to provide) predictability in sprint delivery.

Detailing the tasks, integrations, need for stubs and level of acceptance for each story allows us to determine what 'Done' looks like. Which brings us to the title of this post, "when 2 + 2 doesn't equal 4", by which I refer to the concept of story points.

In my experience, one of the key decisions to be made by the Scrum team at Sprint 0 is what a story point looks like and refers to. Agreeing with your stakeholders upfront what that size indicator means will avoid confusion during sprint planning sessions.

Some stakeholders will see 2 people available for two 8-hour days as 4 person-days, or 32 hours, of effort. Then, as soon as you enter the sprint, those same stakeholders will want a 2-hour workshop, instantly reducing the capacity of our 2 team members, which will pressure the sprint and (potentially) create the perception of failure.

Adopting the point system, whether that means setting an individual's realistic daily capacity at 2, 4 or 6 hours, is less important than paving the way for successful sprint completion. Providing a measure of effort that stakeholders can refer to, understand and gain predictability from is key, allowing sufficient time to plan ahead and keep the backlog machine working.
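
To make the arithmetic of the title concrete, here is a small worked example. The focused-hours figure and the workshop overhead are illustrative assumptions, not recommendations:

```python
# Worked example of "2 + 2 doesn't equal 4": two people, two 8-hour days.
# The focused-hours figure and workshop overhead are illustrative only.
TEAM_SIZE = 2
SPRINT_DAYS = 2
HOURS_PER_DAY = 8
FOCUSED_HOURS_PER_DAY = 5   # the agreed realistic daily capacity
WORKSHOP_HOURS = 2          # the stakeholder's mid-sprint workshop

naive = TEAM_SIZE * SPRINT_DAYS * HOURS_PER_DAY                                 # 32 hours
realistic = TEAM_SIZE * (SPRINT_DAYS * FOCUSED_HOURS_PER_DAY - WORKSHOP_HOURS)  # 16 hours

print(f"Naive capacity:     {naive} hours")
print(f"Realistic capacity: {realistic} hours")
```

The naive sum says 32 hours; the realistic figure is half that, and it is exactly this gap that an agreed story point definition makes visible to stakeholders.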

Except for our stakeholder, I have made no reference to specific roles in our Scrum team; the point system should be applied to all members of that team. Each story will have a number of tasks, whether that is for a BA to define a set of business rules, a QA to implement a set of acceptance tests or a developer to write some code; all tasks are important to that story and work towards an agreed definition of 'Done'. Adopting a standard measure at least allows a collaborative understanding of the task in hand.

Some of the articles and tweets that I have found useful this month include:

(via CIO.com) Why 'Agile Project Management Controls' Isn't an Oxymoron http://m.cio.com/article/731594/Why_Agile_Project_Management_Controls_Isn_t_an_Oxymoron?page=3&taxonomyId=3170

(via TechTarget) "Agile methodology techniques: Unit test, automation and test-driven development" http://searchsoftwarequality.techtarget.com/tip/Agile-methodology-techniques-Unit-test-automation-and-test-driven-development

(via TechTarget) "Estimates in Agile development: Capacity matters in sprint planning" http://searchsoftwarequality.techtarget.com/tip/Estimates-in-Agile-development-Capacity-matters-in-sprint-planning

(via TechTarget) "Continuous integration, automation and test-driven development explained" http://searchsoftwarequality.techtarget.com/Continuous-integration-automation-and-test-driven-development-explained

(via @parasoft) How Can Test-Driven Development (TDD) Be Automated? http://blog.parasoft.com/bid/52034/How-Can-Test-Driven-Development-TDD-Be-Automated

(via @agilescout) Being an Agile Coach – Dealing with Conflict http://bit.ly/19dXGuf #agile #pmot

(via mironov.com) Agile Fundamentals, ROI and Engineering Best Practices http://www.mironov.com/assets/Agile_SFO_Mironov.pdf

Travelling around different clients, with different objectives and different risk appetites, the one question that always comes up relates to governance.

Irrespective of what methodology your project is following, be it Scrum, Waterfall or "Scrumfall", there will always be a stakeholder or two who will want to know exactly what return they are getting on their investment. The title of this post includes the phrases "Too Much", "Too Little" and "Too Late"; I am going to outline my views on each of these situations and the potential impact each has on the project lifecycle and the perception of the stakeholder.

“Too Much”

At first glance this seems a little counter-intuitive; surely demonstrating full control of the project via the medium of metrics can only be good? To a point, yes, but this is a case of knowing your audience. What happens if your project is only one of a wider portfolio that your stakeholder has budgetary responsibility for? Too much detail may lead to disinterest or confusion. Key messages may become lost among countless graphics and metrics.

To avoid this situation, it is imperative that boundaries and expectations are set as early as possible in the project lifecycle. Agree the format and level of detail with your stakeholder, and ensure that they are engaged from the outset so that week on week you are providing decision-grade information that assists the stakeholder in managing risk within the project.

“Too Little”

Again this seems an obvious statement, but sometimes when you are in the detail, or under mounting pressure to deliver, these relatively "simple" steps can be overlooked. So what can happen in this situation? Well, depending on the vision and focus of your stakeholder, a lack of governance can create the perception of panic, or rather a lack of control, in your project space. Not having the latest defect trends and execution numbers to hand will not instil confidence that there is control within the project. If governance meetings are scheduled and conflicting messages are delivered by each of the project teams, the battle is lost. Stakeholders losing faith in their project team could be an ominous precursor to budget reductions and smaller releases, to reduce their perceived risk exposure.

Setting the boundary and expectation upfront will provide the foundation for delivering key messages throughout the project lifecycle. Delivering bad news in a controlled, consistent manner, though not necessarily welcomed, will still maintain confidence in the project team, and decision-grade information will allow options to be explored, mitigating risk where appropriate.
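
As a small illustration, decision-grade information does not need heavyweight tooling; a sketch like the following, using made-up defect records, produces the kind of weekly trend summary a stakeholder can absorb at a glance:

```python
# Sketch: turning raw defect records into a weekly trend summary.
# The records and severity labels are made up for illustration.
from collections import Counter

defects = [
    {"week": "W1", "severity": "high"},
    {"week": "W1", "severity": "low"},
    {"week": "W2", "severity": "high"},
    {"week": "W2", "severity": "high"},
    {"week": "W2", "severity": "low"},
]

raised = Counter(d["week"] for d in defects)
high = Counter(d["week"] for d in defects if d["severity"] == "high")

for week in sorted(raised):
    print(f"{week}: {raised[week]} raised ({high[week]} high severity)")
```

The point is consistency: the same few numbers, produced the same way every week, so that good and bad news arrive in a format the stakeholder already trusts.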

“Too Late”

Trying to apply governance when things start to go wrong, or after stakeholders have raised concerns with delivery, is risky. Introducing governance in this situation could lead to the perception of the team being "reactionary", and the stakeholder may feel that they need to follow 'progress' closely.

In this situation, I believe the key is to start small. Introducing one or two measures to overcome the 'reactionary' tag and increase confidence with the stakeholder is key. Once the value is demonstrated, the team can work with the stakeholder to increase the level of decision-grade information, to restore faith that the project team is working in partnership, and is in control.

In all of these situations, the key is communication, which is too often overlooked. Setting the expectation upfront will ensure the level of governance is acceptable. Agreeing the format and data collection tools is also key. Governance should be a useful project tool, rather than a massive overhead and an invitation for criticism. Delivering good and bad messages on a project is par for the course; both need to be delivered with confidence and consistency to ensure stakeholders remain confident, giving projects the best chance to succeed.

Having worked on numerous high-profile projects, it is still surprising to find pushback from stakeholders and delivery teams against including dedicated story points and capacity for the not-so-'sexy' error handling and UI error/warning messages from iteration (sprint) 1.

UX design is increasingly accepted as part of the project team, providing the vision through design prototypes that help secure budgets and buy-in from stakeholders; yet many of these design presentations concentrate on the happy path scenario only. Story backlogs are created, estimates generated and iterations sliced and diced, giving everyone a sense of "we can do this", with a clear, agreed roadmap in place.

Sound bites such as 'Minimum Credible Release' are likely to be shared; pats on the back, high fives and talk of just how big the bonus will be may become commonplace.

But what happens on the run-in to production implementation? End-to-end system testing starts, or worse, UAT; tests are passing, confidence is rising… then bang! A downstream system goes down for maintenance, and nobody on the project team knew. The UI can't cope and starts crashing at will. Those in the business responsible for UAT are now panicking, as it's two weeks until launch and the platform is not stable. That bonus that was a sure bet that morning is shrinking from a deposit on the Ferrari to a child's toy Ferrari. How could this have happened?

No one thought about the what-if scenario; now it's all hands to the pump, and we must go live in two weeks (I at least want to test drive the Ferrari!). This is where project management need to ensure that, during project initiation and backlog creation, the definition and estimation phase includes representation from a variety of functions, such as QA, business analysis, development and the business. Working from the UX design should lead to a decomposition of each functional area. All of the integration touch-points need to be clearly defined so that resiliency and error-handling stories can be captured, from which the QA team can define suitable acceptance tests to aid in-sprint development and testing.
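
As an illustration of what such a resiliency acceptance test might look like, here is a sketch in which the downstream system is stubbed to fail; the class and method names (QuoteScreen, get_price) are invented for the example:

```python
# Unhappy-path acceptance test sketch: the downstream pricing system is
# stubbed to fail, and the UI layer must degrade gracefully, not crash.
# QuoteScreen and the pricing client names are invented for the example.
import unittest
from unittest.mock import Mock


class DownstreamUnavailable(Exception):
    """Raised when the downstream system cannot be reached."""


class QuoteScreen:
    def __init__(self, pricing_client):
        self.pricing_client = pricing_client

    def render_quote(self, product_id):
        try:
            price = self.pricing_client.get_price(product_id)
            return f"Price: {price}"
        except DownstreamUnavailable:
            # The resiliency story: a friendly message instead of a crash.
            return "Pricing is temporarily unavailable. Please try again later."


class UnhappyPathAcceptanceTest(unittest.TestCase):
    def test_friendly_message_when_pricing_system_is_down(self):
        client = Mock()
        client.get_price.side_effect = DownstreamUnavailable()
        screen = QuoteScreen(client)
        self.assertIn("temporarily unavailable", screen.render_quote("P1"))


if __name__ == "__main__":
    unittest.main()
```

Tests like this are cheap to write once the integration touch-points are listed, and they turn the vague "what if a downstream system goes down?" into a story the team can size and schedule.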

Taking the time upfront to fully size stories for the functional and non-functional requirements, though increasing the initial estimate, should help to reduce risk factors and potential failure points, especially in the run-up to production implementation. There is likely to be pushback from the business from the outset as estimates increase, and it is difficult for stakeholders to visualise the value being added to the system. The project team will need to provide a solid case for the investment, calling upon previous experience and championing the case for the production support team, to minimise outages once the product is deployed to production.

If these unhappy paths are not included from the outset, those teams that are "Agile" (quotes intentional!) will soon be forced to support production maintenance; the likelihood is that 'special' teams, boldly given names such as "Rapid Response" or "Tiger", will be formed, tasked principally with ensuring the production environment is stable and resilient to failover. Creating such teams will impact the project team's velocity against the delivery roadmap that the stakeholders have provided budget for.

As with all projects, there is likely to be a trade-off; it is very unlikely that a project team will be able to code for all eventualities, and the stakeholders may have a high appetite for risk and decide not to invest fully in unhappy path resiliency. However, the exercise is still a useful way to populate the backlog. Architectural analysis should quickly highlight the main project and integration risks so that mitigation plans can be agreed sooner; engaging downstream systems earlier should assist the coordination of production release dependencies, reducing risk still further.

In summary, learning from mistakes to make the next phase or project better is the goal. Providing a robust and resilient system to users and stakeholders will raise confidence in the platform, which will be useful at budget allocation time.