How to make good use of user stories in Agile? (Part 2)

Note: This article follows How to make good use of user stories in Agile? (Part 1)


Trap 4: Non-standard use?


In daily work, problems such as incomplete formats, unclear users, and vague value descriptions are very common, and they sometimes lead to delivery failures. So what does a well-formed user story look like?

A good user story should have a standardized format, complete elements, and independence. Interdependence between stories should be avoided as much as possible, because dependencies cause problems in prioritization and iteration planning and make stories difficult to estimate.

We must therefore standardize the use of user stories. How do we achieve a standardized format, complete elements, and independence?


Guide to avoiding Trap 4:


In order to standardize the use of user stories, we should:

1. Use the standard story format: the template “As a <role>, I want <function>, so that <business value>”.

2. Follow the INVEST principle: Independent, Negotiable, Valuable, Estimable, Small, Testable.

3. Define the “definition of ready”: a checklist, agreed by the whole team, that a story must satisfy before entering an iteration.

4. Define the “definition of done”: a checklist, agreed by the whole team, that determines when a story counts as complete.

5. Define the “acceptance criteria”: the team should agree on how each user story will be verified, accepted, and tested.
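The standard-format check in point 1 can be sketched as a small validator. This is a minimal sketch under assumptions: the regular expression and the sample story are illustrative, not a fixed standard.

```python
import re

# Hypothetical checker: verifies that a story follows the
# "As a <role>, I want <function>, so that <value>" template.
STORY_PATTERN = re.compile(
    r"^As (?:a|an) (?P<role>.+?), I want (?P<function>.+?), "
    r"so that (?P<value>.+)$"
)

def check_story(text: str) -> dict:
    """Return the extracted role/function/value, or raise ValueError."""
    match = STORY_PATTERN.match(text.strip())
    if not match:
        raise ValueError("Story does not follow the standard template")
    return match.groupdict()

story = ("As a registered user, I want to reset my password, "
         "so that I can regain access to my account")
print(check_story(story)["role"])   # registered user
```

A check like this only guards the format; whether the role, function, and value are meaningful still requires human review.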

If dependencies are unavoidable, try the following measures.

1. Re-integrate user stories and divide stories in another way.

2. Arrange dependencies so that low-priority stories depend on high-priority ones, not the other way around.

3. Independent stories should be given higher story points, and stories that rely on other stories should be given lower story points.

4. Put interdependent stories into the same iteration for development and delivery

5. Assign interdependent stories to the same person


Trap 5: Story points for comparison? For commitment?


Conversations like the following often occur in daily work.

Boss: The team next to yours completed 120 story points this iteration. Why did your team complete only 50?

Employee: They committed to 130 story points but completed only 120. We committed to 45 story points and completed 50, which exceeded our commitment.

Setting story points requires weighing factors such as complexity, uncertainty, workload, and risk. A story point is only a rough, relative estimate of the work to be completed: there is no uniform size or universal measurement benchmark, and each team's story-point baseline is different.

Therefore, there are two misunderstandings in the above conversation.

1. Use story points to compare the performance of different teams horizontally.

2. Use story point estimation as team commitment.


Guide to avoiding Trap 5:


To avoid the above errors, you can try the following measures.

1. Focus on value, not story points: pay attention to how much functionality the iteration delivers, not to the points.

2. Use story points only to track the team’s velocity and to plan iterations.

3. Do not make horizontal comparisons between teams.

4. Choose a baseline story first, then size other stories relative to that baseline and order them accordingly.

5. Understand that estimates contain a great deal of uncertainty: there is no need to pursue 100% accuracy, and an estimate is not a binding commitment from the team.

6. As estimates grow larger, the uncertainty, complexity, and risk they carry grow disproportionately. Use the Fibonacci sequence (1, 2, 3, 5, 8, 13, ...) for story-point values; the widening gaps between larger values reflect that extra uncertainty and risk.

7. Use planning poker: a collective estimation technique derived from the Delphi method that also incorporates some principles of psychology, such as estimating independently before revealing to avoid anchoring.


Trap 6: Are the stories ready?


Before user stories are put into an iteration for development, they need to be discussed and refined in depth to ensure that they are “ready”. This requires the users and the team to agree on a clear definition of “ready”; only stories that meet it should be included in the iteration plan.


The definition of “ready” is usually a checklist that clearly lists the inspection items agreed by consensus. Only when every item on the checklist is met can a user story be included in the iteration plan; stories that do not meet the “ready” condition should not be planned into an iteration.


Guide to avoiding Trap 6:


Define the “definition of ready”: the team needs to agree in advance on what “ready” means.

1. Make the standard explicit: create a checklist that clearly lists the agreed inspection items. A story counts as “ready” only when every item on the checklist is met.

2. Within a team or project, different definitions of “ready” can be used for different types of user stories, but there is no need to create a separate definition for each story.

3. Review the items in the definition of “ready” regularly and update them as circumstances change.
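A "definition of ready" checklist can be sketched as data plus a single gate function. The checklist items below are hypothetical examples of what a team might agree on, not a fixed standard.

```python
# Hypothetical team-agreed checklist; real teams define their own items.
READY_CHECKLIST = [
    "follows the standard story template",
    "acceptance criteria are written",
    "dependencies are identified",
    "story is estimated",
    "fits within one iteration",
]

def is_ready(story_checks: dict) -> bool:
    """A story enters iteration planning only when every item passes."""
    return all(story_checks.get(item, False) for item in READY_CHECKLIST)

story = {item: True for item in READY_CHECKLIST}
print(is_ready(story))                      # True
story["story is estimated"] = False
print(is_ready(story))                      # False -> keep refining
```

Treating the checklist as data makes point 2 cheap: a different story type just swaps in a different list.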


Trap 7: Is it really done?


When can a user story be considered complete? Without consensus, you are likely to end up with “I think the story is finished, but others don’t think so”.

For example, is a user story complete once the functional code is written? After functional testing passes? Or only when the function is live in the production environment?

What’s more, is the “done” the team has in mind consistent with the “done” users have in mind? The most important thing in a user story is customer value, so even when the team finishes its work, the user still needs to confirm final completion.

These are two different “completion” criteria. The team needs to define “done” at the iteration level to ensure that all necessary work has been performed; users need to accept the completed functions and confirm that they have received the software capabilities they required.

The former is the team’s definition of done, often referred to as the DoD; the latter is the acceptance criteria (AC).


Guide to avoiding Trap 7:


Determine the “definition of done”: the team agrees together on what counts as complete:

1. Define corresponding completion checklists for stories of different levels.

2. Develop corresponding completion checklists for different types of stories.

3. Write the checklist down: record the consensus and list the completion check items explicitly.

4. In the initial stage, the definition of done can be simple; as the project infrastructure and teamwork mature, stricter completion standards can be added gradually.

5. At the end of each iteration, review and update the definition of done.

Define the “acceptance criteria”: users, product owners, and the team should discuss the details of the functions to be implemented and how they will be tested.

1. Acceptance tests are written by the user or the product owner, and are best performed by the user or business personnel.

2. Acceptance criteria can also be written or determined by the team but must be confirmed by the customer or product owner.

3. Each user story needs its own specific set of acceptance criteria, based on its functionality.

4. Automate acceptance testing as much as possible.
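An automated acceptance test typically mirrors the story's acceptance criteria as given/when/then steps. The sketch below is hypothetical: `PasswordService` is an assumed stand-in for the real system under test, and the story it checks is invented for illustration.

```python
# Assumed system under test for the story: "As a registered user,
# I want to reset my password, so that I can regain access to my account."
class PasswordService:
    def __init__(self):
        self._users = {"ann": "old-secret"}   # seeded test data

    def reset_password(self, user: str, new_password: str) -> bool:
        if user not in self._users:
            return False
        self._users[user] = new_password
        return True

    def login(self, user: str, password: str) -> bool:
        return self._users.get(user) == password

def test_password_reset_lets_user_log_in():
    # Given a registered user who has forgotten her password
    service = PasswordService()
    # When she resets the password
    assert service.reset_password("ann", "new-secret")
    # Then she can log in with the new password, and not the old one
    assert service.login("ann", "new-secret")
    assert not service.login("ann", "old-secret")

test_password_reset_lets_user_log_in()
print("acceptance test passed")
```

Because each assertion maps to one acceptance criterion, the test doubles as an executable record of what "done" means for that story.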


Summary


This article and the previous one, How to make good use of user stories in Agile? (Part 1), have explored seven problems that technical teams often encounter when using user stories and provided countermeasures for each.

User stories are one of the basic practices of agile software development and play a very important role. Effective and efficient use of user stories to manage and track user needs is vital to the success of software projects.
