I’ve always been fascinated by Agile projects and their ability to adapt and deliver results in a fast-paced, ever-changing business landscape. However, it’s not always easy to measure the success of these projects. That’s why I’ve compiled a list of five essential metrics that can help you track and evaluate the effectiveness of your Agile endeavours. With these metrics in hand, you’ll have a clearer understanding of how well your project is performing and be empowered to make informed decisions that can drive its continued success.
Customer Satisfaction

Customer satisfaction is one of the most important metrics for evaluating the success of an Agile project. It indicates how well the team is meeting the needs and expectations of its customers. Several metrics can be used to measure customer satisfaction, including:

Net Promoter Score (NPS)

The Net Promoter Score (NPS) is a widely used metric that measures the likelihood of customers recommending a product or service to others. It is calculated from the responses to a single question: “On a scale of 0–10, how likely are you to recommend our product/service to a friend or colleague?” Customers are classified as Promoters (9–10), Passives (7–8), or Detractors (0–6), and the NPS is calculated by subtracting the percentage of Detractors from the percentage of Promoters.

Customer Satisfaction Score (CSAT)

The Customer Satisfaction Score (CSAT) measures customers’ satisfaction with a specific interaction or experience. It is usually collected through a survey that asks customers to rate their satisfaction on a numerical scale (e.g., 1–5) or a descriptive scale (e.g., Very Dissatisfied to Very Satisfied). The CSAT score is calculated by averaging the responses.

Customer Effort Score (CES)

The Customer Effort Score (CES) measures how easily customers can achieve their goals when interacting with a product or service. It is typically collected through a survey that asks customers to rate the level of effort required to complete a specific task or resolve an issue. The CES score is calculated by averaging the responses.

User Satisfaction Surveys

User satisfaction surveys can provide valuable insights into users’ overall satisfaction with a product or service. These surveys typically include questions covering various aspects of the user experience, such as ease of use, functionality, and customer support. User satisfaction scores can be calculated by averaging the responses to the survey questions.
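The scoring rules above are straightforward to implement. As a hedged sketch (the function names are my own, not part of any standard library), NPS and CSAT can be computed from raw survey responses like this:

```python
def nps(scores):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def csat(ratings):
    """Customer Satisfaction Score: average of the survey ratings (e.g., 1-5)."""
    return sum(ratings) / len(ratings)

# Ten NPS responses: 5 Promoters, 3 Passives, 2 Detractors
print(nps([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))  # 30.0
print(csat([5, 4, 4, 3, 5]))                  # 4.2
```

Note that CES follows the same averaging shape as `csat`, just over effort ratings instead of satisfaction ratings.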
Customer Complaints and Feedback

Customer complaints and feedback are also important indicators of customer satisfaction. Monitoring and analyzing them can help identify areas for improvement and address any issues or concerns that customers may have. This feedback can be collected through various channels, such as customer support tickets, online reviews, or social media comments. By regularly monitoring and analyzing these metrics, Agile teams can make data-driven decisions, identify areas for improvement, and continuously enhance their processes, practices, and outcomes. Ultimately, the goal is to deliver high-quality products or services, meet customer expectations, and create value for the organization and its stakeholders.

School of Science and Technology, Polytechnic Higher Institute of Gaya, 4400-103 Vila Nova de Gaia, Portugal
* Author to whom correspondence should be addressed.
Submission received: 26 April 2023 / Revised: 5 June 2023 / Accepted: 9 June 2023 / Published: 11 June 2023

Abstract: Metrics are key elements that can give us valuable information about the effectiveness of agile software development processes, particularly in the Scrum environment. This study aims to learn about the metrics adopted to assess agile development processes and to explore how the role performed by each member in Scrum increases or reduces the perception of the importance of these metrics. The impact of years of experience in Scrum on this perception was also explored. To this end, a quantitative study was conducted with 191 Scrum professionals in companies based in Portugal. The results show that the Scrum role is not a determining factor, while individuals with more years of experience have a higher perception of the importance of metrics related to team performance. The same conclusion is observed for the business value metric of the product backlog and the percentage of test automation in the testing phase.
The findings extend the knowledge about Scrum project management processes and their teams, in addition to offering important insights into the implementation of metrics for software engineering companies that adopt Scrum.

1. Introduction

Agile methodologies have gained great importance in the project management field, having been strongly influenced by Japanese philosophy. As argued by Poth et al. [], the practices related to planning, controlling, and streamlining are strongly related to techniques and principles of Lean production that can be applied to any industry with the goals of reducing waste and creating value. Within the agile methodologies, we can find different methods such as Kanban, Lean, Scrum, Extreme Programming (XP), and the Rational Unified Process (RUP), among others. Data obtained by KPMG in 2019 [] indicate that 91% of organizations consider the adoption of agile a priority, and Digital.ai [] registered an increase from 37% in 2020 to 86% in 2021 in the number of agile adoptions in software teams, with Scrum standing out as the most popular framework, followed by Kanban and Lean. The agile manifesto emerged in 2001, when a group of 17 representatives from various software development practices and methodologies met to discuss the need for lighter and faster alternatives to the existing traditional methodologies. From this meeting, the Agile Alliance presented the Manifesto for Agile Software Development to elucidate the approach known today as agile development. The values of the agile methodology are based on four pillars []: (i) individuals and interactions over processes and tools; (ii) working software over comprehensive documentation; (iii) customer collaboration over contract negotiation; and (iv) responding to change over following a plan.
Agile methods were designed to use a minimum of documentation, which improves flexibility and responsiveness to change; that is, in this methodology, flexibility and adaptability are much more important than planning, unlike in the traditional methodology [,,,,,]. Scrum was conceived by Jeff Sutherland and Ken Schwaber in 1993 with the intention of being a faster, more effective, and more reliable way to develop software for the technology industry []. Scrum emerged in response to the traditional waterfall method, which was often too slow, more expensive, and prone to delivering a product the customer did not want [,]. Alternatively, agile methodologies follow an incremental and iterative process whose objective is to identify the priority tasks in each phase and manage time effectively with efficient teams [,,,,]. Agile methodologies thus emerged to address the difficulties that occur during project management. The processes inherent to the several phases of project management must be effective and efficient. According to Flores-Garcia et al. [], the technological advances of recent decades have provided business managers with a large volume of tools to help them make decisions. This new business scenario has forced development companies to constantly seek technologies and methods that guarantee the quality of the products offered, so that they do not lose competitiveness in the market where they operate. In this context, the use of metrics that can characterize the performance of projects, as well as of the products resulting from them, has taken on an increasingly important role in industry and, consequently, in academia, which is preparing to meet the challenges posed by these organizations. The most well-known and commercially successful software companies, such as Google and IBM, have adopted metrics to evaluate the success of their project management and campaigns [,].
Indeed, it is not only in the waterfall development model that metrics are needed to evaluate the performance of processes and teams. In agile development, too, it is necessary to measure the effectiveness and efficiency of the processes using metrics. Planning and monitoring are necessary for projects developed in Scrum []. Previous works by Almeida & Carneiro [], Kurnia et al. [], and López et al. [] were important in synthesizing these metrics considering the whole Scrum development cycle and its ceremonies. However, none of these studies provide a measure of the relative importance of these metrics across the various Scrum roles (i.e., Product Owner, Scrum Master, and development team). Understanding the importance of these metrics from the perspective of each specific Scrum role is important to increase team cohesion and the quality of the work produced. It is also a way to gain practical insight into the contribution of each Scrum role within teams and to establish policies that promote increased effectiveness and efficiency of project management in Scrum. Therefore, we have developed a quantitative study based on the perception of the importance of metrics considering several profiles of actors in the Scrum process. The rest of this manuscript is organized as follows: first, a literature review is performed on metrics that can be found in a Scrum environment. After that, the several methodological phases of the study are presented. This is followed by the presentation and discussion of the results, considering their contribution to knowledge in the field. Finally, the main conclusions are listed, considering their theoretical and practical contributions. It is also in this last section that the main limitations of the study are exposed and suggestions for future work are presented.

2. Background

Velimirovic et al.
[] note that to monitor the progress of projects and promote the necessary improvements during their execution, the use of performance indicators is necessary. Therefore, before starting project management, the first step to ensure success is to define the metrics for follow-up. Having performance indicators for each stage of project implementation is essential to optimize the results and guide the team’s path. In the software industry, metrics are used for several reasons, such as project planning and estimation, project management and monitoring, understanding quality and business goals, and improving communication, processes, and software development tools []. The first paper identifying metrics for the Scrum environment was published in 2015 by Kupiainen et al. []. That study sought to identify metrics reported both in scientific papers and in industry. Despite this dual aspect, the methodology did not involve the collection of primary data but only a survey of secondary sources through a systematic literature review. The rationale for and effects of the use of metrics in areas such as sprint planning, software quality measurement, impediment resolution, and team management were identified. The results of that study indicate that the most important metrics are related to customer satisfaction and to monitoring backlog progress, considering the product specifications and the work developed in each sprint. Other studies have followed. Kurnia et al. [] collected 34 metrics related to Scrum development. The metrics covered the entire Scrum development cycle, including sprint planning, daily meetings, and retrospective sessions. The findings contributed to the identification of the most common metrics in the literature and identified new metrics related to the value delivered to the customer and the rate of development throughout the sprint.
In Almeida & Carneiro [], a review of Scrum metrics was performed through a primarily quantitative study with software engineers. The study involved 137 Scrum engineers and concluded that “delivered business value” and “sprint goal success” were the most relevant metrics. In López et al. [], a total of 61 studies from the last two decades were reviewed to explore how quality is measured in Scrum. Two important conclusions were drawn from this study. First, despite a large body of knowledge and standards, there is no consensus regarding the measurement of software quality. Second, there is a very diverse set of metrics for this purpose, with the top three being related to performance, reliability, and maintainability. In Kayes et al. [], a different perspective was used, with the goal of proposing a metric for measuring the quality of the testing process in Scrum. Therefore, instead of looking at the whole Scrum cycle, attention was focused on a specific activity. The Product Backlog Rating (PBR) provides a complete picture of the testing procedure of a product throughout its development cycle. The PBR takes into consideration the complexity of the features to be produced in a sprint, evaluates the test ratings, and provides a numerical score of the testing process. A similar line of research is the work conducted by Erdogan et al. [], which looked at the value of metrics in the process of analyzing sprint retrospectives. This study found metrics for the inspection process, improving team estimates and increasing team productivity. It further found that the metrics of “actual quality activity effort rate” and “subcomponent defect density” helped improve product quality. The metrics collected as part of this study are synthesized, together with their definitions, in Table 1.
Metrics related to effort estimation can be used to prioritize features to be developed or to prioritize activities based on relative value and effort, and velocity can be used to improve effort estimates for the next iteration, helping the team to verify that the planned scope has been completed. Metrics related to defect identification can be used to inspect the defects in the backlog, allowing this information to be shared among team members. In the same vein, we found metrics that measure defects that appear during a sprint. Finally, there are metrics, such as return on investment, that measure the delivery of software and can be used to understand the relationship between the result and the investment in software. These metrics apply to the Scrum software development process but do not specifically explore their importance across the three Scrum roles (i.e., Product Owner, Scrum Master, and development team), which prompts us to establish the first research hypothesis.

H1: There are significant differences in the perceived importance of Scrum metrics according to the Scrum role.

Scrum governance is a challenging task because it cannot focus only on software development processes; it must involve multiple domains and include interdisciplinary members. Empirical studies [,,,] show that organizations implementing Scrum must concentrate their efforts on process improvement in a controlled and limited number of areas to face the high complexity of continuous improvement processes and the strong interconnection between them. This requires keeping track of the metrics of the Scrum activities. As reported by Kadenic et al. [], the experience of a Scrum professional in understanding the Scrum work processes is an important element for the success of the implementation and diffusion of Scrum in the organization.
In this sense, this study also aimed to understand whether years of experience in Scrum affect the perceived relative importance of Scrum metrics. Two further research hypotheses were established:

H2: The number of years of Scrum experience is a differentiating element in the perception of the importance of Scrum metrics.

H3: Years of experience in a specific Scrum role is a differentiating element in the perceived importance of Scrum metrics.

3. Materials and Methods

Figure 1 schematizes the three phases of the methodological process. In the first phase, a literature review is conducted on the main activities associated with Scrum development and the metrics available to measure the effectiveness of these activities. It is also in this phase that the research hypotheses are established according to the existing gaps in the area. In the exploration stage, the questionnaire is built and distributed among Portuguese software engineering professionals who apply the Scrum methodology to manage and develop software projects. Finally, the analysis stage is responsible for the statistical analysis of the data. This is followed by the discussion of the results, which allows us to explore the main innovative contributions of this work and its importance to knowledge about the Scrum methodology. Finally, the main conclusions of the study are drawn. A questionnaire was distributed through the partner network of the Portuguese Chamber of Commerce and Industry in the field of software engineering to collect perceptions of the relative importance of metrics in each Scrum activity. The questionnaire was sent by email and was also shared on the LinkedIn social network. The questionnaire was completed only by companies adopting the Scrum methodology. It was available online between 6 February 2023 and 30 March 2023. A total of 191 valid responses were received. Partially completed questionnaires were not included in the data analysis.
The questionnaire data were statistically explored using SPSS v.21. The Cronbach’s Alpha, Composite Reliability (CR), and Average Variance Extracted (AVE) of each construct were calculated to determine its internal consistency. According to Shrestha [], the first two coefficients should be higher than 0.7, while it is recommended that the AVE be higher than 0.6. All constructs have a Cronbach’s Alpha and CR greater than 0.7, and the AVE is greater than 0.6 in all constructs. An ordinal scale (i.e., very low, below average, average, above average, very high) was used and then converted, for statistical analysis purposes, to a Likert scale from 1 to 5. To answer the previously formulated research hypotheses, it was necessary to collect information on three control variables: (i) role in the Scrum team; (ii) number of years of experience in Scrum; and (iii) number of years of experience in the current Scrum role. The distribution of the sample by respondent profile shows that the majority of respondents play a role on the development team, almost double the number of respondents who play the role of Product Owner, which is consistent with the composition expected of a Scrum team, where the majority of the team consists of development team members. It is also noted that more than 43% of the individuals have more than 5 years of experience in Scrum. Consistent with this information is the number of years in the same Scrum role. Nevertheless, there is greater homogeneity of the sample in this third question, indicating that individuals may assume several Scrum roles throughout their professional careers.

4. Results

The results provide an understanding of the perceived importance of each Scrum metric, reported as the mean, mode, and standard deviation. The mode represents the most frequent response given by the respondents.
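As a hedged illustration of the internal-consistency check described above (the item scores below are invented; the study's raw data are not reproduced here), Cronbach's Alpha for one construct can be computed as:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a construct.

    `items` is a list of columns, one per questionnaire item, each holding
    the scores of all respondents for that item (illustrative helper).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    sum_item_variances = sum(statistics.variance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum_item_variances / statistics.variance(totals))

# Three 1-5 Likert items answered by four respondents (made-up data)
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))  # 0.818
```

A value above the 0.7 threshold cited from Shrestha would indicate acceptable internal consistency for the construct.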
Only two metrics have a perceived importance higher than 4.2 (i.e., business value and number of impediments). Furthermore, in these two cases, the mode equals 5, which indicates that most of the respondents attributed maximum importance to these two metrics. Other metrics (i.e., number of user stories, number of concluded tasks, number of tasks completed in a sprint, number of user stories completed in a sprint, accuracy of estimation, and velocity) also have a mode of 5, although their average is lower than 4.2. The standard deviation analysis shows the homogeneity of the respondents’ behavior. The largest dispersion of answers is registered for the metric “number of deleted user stories”, while the metric “functional tests per user story” presents a standard deviation around half of the highest value, indicating greater homogeneity of answers for this metric. Figure 2 complements the previous analysis by considering the relative importance of the metrics for each activity. The activities were ranked according to their position in the Scrum work methodology. Metrics related to team performance evaluation proved the most important, followed by those related to testing. It is noteworthy that metrics related to the final phases of the Scrum process, which enable a more comprehensive perception of the organizations’ strategy in managing Scrum processes and their teams, are more relevant than metrics in the early phases of the process related to operational Scrum processes such as the daily Scrum, sprint backlog, or product backlog. A statistical analysis of variance was also performed between the groups of collected responses. Three experimental parameters were considered: (i) role in the Scrum team; (ii) years of experience (YE) in Scrum; and (iii) years of experience in the same Scrum role (YE-SR). An F-test was performed to determine the probability that the variances of two different samples are equal.
The F-test uses a statistic known as the F-statistic to test hypotheses about the variances of the distributions from which the samples were drawn. A significance level of 5% was considered. The findings of these tests are discussed in the following section.
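The group-comparison logic described above can be sketched in pure Python as a one-way ANOVA F-statistic (the responses below are invented for illustration; the study itself used SPSS):

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group mean square / within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Perceived importance (1-5) of one metric, grouped by Scrum role (made-up data)
po, sm, dev = [4, 5, 4, 3], [4, 4, 5, 4], [3, 4, 4, 5]
print(round(one_way_anova_f([po, sm, dev]), 3))  # 0.158
```

An F-value below the critical value at the 5% significance level would indicate no significant difference between the role groups, consistent with the paper's finding for H1.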
5. Discussion

5.1. The Importance of Metrics

The need for agility has led organizations to explore new ways of managing projects based on agile methodologies. Scrum has been widely disseminated and explored by IT companies in the construction of their technology solutions. However, the search for agility should not compromise the efforts of evaluation and continuous improvement of software engineering processes. The collection of metrics is a fundamental strategy for organizations to improve their work processes [,,,,]. The findings of this study confirm the importance that metric collection assumes for Scrum teams. However, metrics do not assume the same importance in all phases of Scrum. Metrics related to the daily execution phases of Scrum (e.g., the daily Scrum) are perceived as less important than those related to the planning (e.g., sprint planning meeting) or retrospective (e.g., sprint retrospective) phases. Furthermore, metrics related to team performance and the testing process are those that received the highest importance from Scrum professionals. Team performance is mainly evaluated by considering the metrics of velocity, targeted value increase, and work capacity. Velocity is a widely known and used metric in Scrum that measures the work rate []. Using this metric in isolation can prove detrimental, as Doshi reports []: “If two teams have similar skillset, shouldn’t their velocities be similar?”, “Team A’s Velocity is 2 times that of Team B’s—shouldn’t Team A work on the remaining Product Backlog Items for faster delivery?” The answer to these questions lies in the differences in the starting point between the teams and the estimates made for each user story. Thus, velocity comparisons between teams can have negative effects and make the teams uncomfortable []. Another equally relevant metric is the “targeted value increase”, which results from an analysis of the velocity increase along the process.
In Downey & Sutherland [], this metric is defined as “Current Sprint’s Velocity ÷ Original Velocity”. Unlike velocity, which measures the team’s current performance, this metric allows for exploring process improvements. Finally, work capacity measures the total time the team is available to work during a sprint []. In [], it is suggested that team performance metrics be combined in a team performance histogram, in which the most popular metric, velocity, can be combined with others, such as predicted and actual capacity. By incorporating additional metrics, teams can gain deeper insights into their performance, make more informed decisions, and improve their overall efficiency and effectiveness [,]. This approach is especially useful to complement metrics such as velocity, which is highly unstable, inconsistent across teams, and context dependent. For example, by combining velocity with other metrics such as backlog size, cycle time, or lead time, it becomes easier to predict future delivery dates or release milestones. This helps stakeholders and product owners plan and set expectations more accurately. Another suggestion is to combine velocity with metrics like team capacity and individual workload to help identify whether the team is overburdened or underutilized. By analyzing the relationship between velocity and capacity, teams can make better decisions about how much work to take on in each sprint and allocate resources effectively. The testing phase is another area highlighted as particularly amenable to measurement. The percentage of test automation is the metric that gathered the most interest from the respondents. In general, test automation consists of the use of tools to gain control over the execution of tests, allowing an increase in the coverage of software testing [,,,].
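A minimal sketch of the team-performance metrics discussed above (function names are my own; the targeted value increase formula follows the Downey & Sutherland definition quoted in the text, and the focus factor follows the definition in Table 1):

```python
def targeted_value_increase(current_velocity, original_velocity):
    """Downey & Sutherland: current sprint's velocity / original velocity."""
    return current_velocity / original_velocity

def focus_factor(velocity, work_capacity):
    """Velocity divided by the team's internal work capacity for the sprint."""
    return velocity / work_capacity

# A team whose original velocity was 30 points now delivers 36 points
# in a sprint with a 48-hour work capacity (made-up numbers)
print(targeted_value_increase(36, 30))  # 1.2
print(focus_factor(36, 48))             # 0.75
```

A targeted value increase above 1.0 indicates the team is speeding up relative to its baseline, which is the process-improvement signal the metric is designed to capture.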
Considering the major goals of test automation, the ease of control over test execution and the reduction of resources are major attractions when implementing this approach in software engineering. However, these are not the only advantages of this model. As reported in [], automation also provides consistency and reliability in test repetition, because validations are always performed in the same way; easy access to information related to test execution, because automation tools make it easier to extract data such as test progress, error rates, and performance; and, above all, a reduction in repetitive manual work, a task that over time can lose reliability, causing a decrease in software quality.

5.2. The Impact of Control Variables

The results of this study provided an understanding of the importance of these metrics in light of the role and years of experience of a Scrum professional. The role played by the individual in the Scrum team has no impact on the perceived importance of these metrics, since the significance of the test is higher than 0.05. Scrum is a methodology in which the role of each individual matters less than working toward the success of the deliverables. Harmony emerges as a fundamental factor for successful delivery. In the work developed by Aldrich et al. [], the role that the Scrum Master has in promoting harmony between the developers and the Product Owner is highlighted. Visibility, inspection, and adaptation are three core elements of the Scrum philosophy [,]. They contribute to making all professionals feel part of the whole project and not overly focused on a specific task. Consequently, all Scrum practitioners, regardless of their role, recognize the importance of metrics in each activity in a similar way. By contrast, years of experience in Scrum is a factor that impacts the perceived importance of Scrum metrics.
However, this impact is only reflected in the metrics related to team performance and in two specific metrics of the product backlog (i.e., business value) and testing (i.e., test automation percentage), which present a test significance lower than 0.05. Benchmarking team performance is a process associated with continuous improvement, as reported in []. The perceived importance of this phenomenon is not equally recognized by all individuals. Individuals with fewer years of experience in Scrum tend to give less importance to measuring team performance and focus on the operational factors of the technical implementation of Scrum. In the same vein, business value is a metric that individuals with less experience in Scrum have more difficulty recognizing, not least because Product Owner positions tend to be held by individuals with more years of experience. The percentage of test automation is the last metric that showed significant differences. The gains from implementing test automation strategies are not limited to development processes but impact other areas such as version management, teams, and execution environments []. All these dimensions are more recognized by professionals with more years of experience. Finally, the impact of years of experience in the same Scrum role was also assessed, and it was concluded that its effects are negligible. Effectively, the dynamics of adaptability and progression within a Scrum environment make the performance of a Scrum professional broader and not restricted to a specific role. In this sense, the importance of metrics is not affected by this factor.

6. Conclusions

6.1. Theoretical Contributions and Practical Implications

This study offers relevant theoretical contributions by extending the literature on the adoption of metrics for evaluating agile development processes in Scrum, exploring the impact of the Scrum professional’s role and experience of involvement in Scrum teams.
The study concluded that the role played by the individual in the Scrum environment is not a relevant factor in the perception of the importance of metrics, while years of experience is a relevant factor in the perception of the importance of metrics related to team performance analysis activities. It was also found that the business value metric of the product backlog and the test automation percentage of the testing phase are metrics more valued by individuals with more years of experience in Scrum. It was also concluded that years of experience in the same Scrum role is not relevant in perceiving these differences. In the practical dimension, the results of this study can be used by organizations that are adopting Scrum in their software development methodology or that intend to migrate to it in the near future. Given the high complexity of collecting and processing metrics, it is important that organizations focus on the metrics that offer the greatest visibility into Scrum processes and that enable teams to continuously improve their software engineering processes.

6.2. Limitations and Future Research Directions

This study presents some limitations that are relevant to address. First, the study focuses specifically on the Scrum methodology. In future work, it is suggested that a similar approach be applied to other agile methodologies such as XP, RUP, Feature Driven Development (FDD), and Lean Software Development (LSD), among others. It becomes specifically relevant to explore the cadences found in each of these frameworks, which differ from what happens in the Scrum environment. Equally relevant as future work would be to consider the adoption of Scrum in large-scale environments, in which the complexity of process management is greater. In this sense, metrics can play an even more relevant role in providing greater visibility over the processes.
Another limitation is the difficulty of including in the study all the metrics recommended in the literature, such as story points, function points, and COSMIC function points. Furthermore, this study only measures an individual’s prior experience with Scrum, ignoring that many individuals, before adopting agile methodologies, have experience in waterfall development environments. Consequently, it becomes relevant to explore whether this prior experience in traditional waterfall development and project management contributes positively or negatively to the perception of the importance of metrics in Scrum. The difficulty in formally defining Scrum metrics is another limitation. In this sense, the respondents’ perceptions of the importance of metrics are strongly related to the way these metrics were implemented in their organizations and not to the formal definition associated with each of them. This study did not obtain information about the size and organizational structure of the companies. Scrum implementation models can be very diverse, given the size of each company and the constitution of their teams, which are increasingly geographically distributed. Exploring the impact of these two factors is also a relevant suggestion for future work. Finally, this study collects the perceptions of the various actors in the Scrum process and, consequently, does not collect information about the impact of metrics on the success of software engineering processes. In future work, it would be relevant to explore the impact of collecting each metric on software development processes.

Author Contributions: Conceptualization, F.A. and P.C.; investigation, F.A.; writing—original draft, F.A.; supervision, P.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Data are available on request from the authors.

Conflicts of Interest: The authors declare no conflict of interest.
Figure 1. Phases of the adopted methodology.

Figure 2. Perceived importance of metrics by activity.

Table 1. Overview of Scrum metrics.

Daily scrum
- Number of tasks: Total number of tasks in the sprint backlog.
- Number of tasks in progress: Number of tasks "in progress" in the sprint backlog.
- Number of concluded tasks: Number of tasks completed in the sprint backlog.
- Estimated hours for a task: Time needed in hours to complete a task.
- Remaining hours for a task: Remaining time in hours to complete a task.
- Number of impediments: Number of impediments, obstacles, or issues that hinder the progress of a Scrum team during the implementation of a sprint.
- Workload distribution: Measure of how much work is assigned to each development member for the current sprint.

Product backlog
- Number of user stories: Total number of user stories in the product backlog.
- Number of added user stories: Number of new user stories added to the product backlog.
- Number of deleted user stories: Number of user stories removed from the product backlog.
- Business value: Importance of a user story considering the product owner's vision. It should reflect the value generated for the organization in terms of revenue, customer satisfaction, market share, competitive advantage, or any other relevant business objective.

Sprint backlog
- Number of user stories: Number of user stories in the sprint backlog.
- Number of tasks: Number of tasks in the sprint backlog.
- Hours spent to implement a task: Hours spent in a day to implement a given task.
- Hours remaining to finish a sprint: Time in hours remaining to finish the current sprint.
- Sprint burndown: Graphic representation of the rate at which work is completed and how much work remains to be performed in a sprint.

Sprint planning meeting
- Sprint length: Duration of a given sprint.
- Size of team: Number of developers in the development team.
- Team members' engagement: Level of engagement of the team members in their work and workplace.

Sprint retrospective
- Number of tasks in a sprint: Number of tasks assigned to a sprint.
- Number of tasks completed in a sprint: Number of tasks completed in a sprint.
- Number of user stories completed in a sprint: Number of user stories implemented during the sprint.

Sprint review
- Number of accepted user stories: Number of user stories accepted by the customer during the sprint review.
- Number of rejected user stories: Number of user stories rejected by the customer during the sprint review.

Team performance
- Accuracy of estimation: Percentage of correctness of the estimated implementation time of the user stories compared to their actual implementation.
- Focus factor: The speed of implementation divided by the internal capacity of the team.
- Targeted value increase: Team's speed in the current sprint divided by its initial speed.
- Team member turnover: Indicates the turnover of team members considering a full development cycle.
- Team satisfaction: Degree of satisfaction of the team with the Scrum environment and adopted methodologies.
- Velocity: Amount of work a development team can do during a sprint. It can be calculated by considering the story points divided by actual hours or the estimated hours divided by actual hours.
- Work capacity: The total time the team is available for work during a sprint. It is usually measured in hours.
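To make the team performance definitions above concrete, here is a minimal Python sketch of how accuracy of estimation, velocity, focus factor, and targeted value increase could be computed from sprint figures. The `Sprint` fields and the min/max form of the accuracy ratio are assumptions for illustration; the paper gives only the verbal definitions, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    """Raw figures a team might export from its board for one sprint (assumed fields)."""
    estimated_hours: float      # sum of estimates for the completed user stories
    actual_hours: float         # hours actually spent implementing them
    story_points_done: int      # story points completed in the sprint
    work_capacity_hours: float  # total hours the team was available

def accuracy_of_estimation(s: Sprint) -> float:
    """Estimated vs. actual implementation time, as a percentage (100 = perfect)."""
    return 100.0 * min(s.estimated_hours, s.actual_hours) / max(s.estimated_hours, s.actual_hours)

def velocity(s: Sprint) -> float:
    """Story points delivered per actual hour of work."""
    return s.story_points_done / s.actual_hours

def focus_factor(s: Sprint) -> float:
    """Implementation speed divided by the team's internal capacity."""
    return s.actual_hours / s.work_capacity_hours

def targeted_value_increase(current: Sprint, initial: Sprint) -> float:
    """Current sprint velocity divided by the initial sprint velocity."""
    return velocity(current) / velocity(initial)

first = Sprint(estimated_hours=80, actual_hours=100, story_points_done=20, work_capacity_hours=120)
latest = Sprint(estimated_hours=90, actual_hours=90, story_points_done=27, work_capacity_hours=120)

print(accuracy_of_estimation(latest))                    # 100.0
print(round(focus_factor(latest), 2))                    # 0.75
print(round(targeted_value_increase(latest, first), 2))  # 1.5
```

A targeted value increase above 1.0 indicates the team is delivering faster than in its initial sprint; real dashboards would aggregate these per sprint rather than per pair of sprints.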
Tests
- Acceptance tests per user story: Number of acceptance tests per user story.
- Defects count per user story: Total number of defects per user story.
- Defects density: Number of defects found divided by the size of the considered module/software.
- Functional tests per user story: Number of functional tests per user story.
- Tests automation percentage: Percentage of automated tests relative to the total of automated and manual tests.
- Unit tests per user story: Number of unit tests per user story.

Table 2. Reliability analysis of the constructs.

Construct | Cronbach's Alpha | CR | AVE
Control variables | 0.722 | 0.836 | 0.641
Daily Scrum | 0.848 | 0.893 | 0.636
Product backlog | 0.828 | 0.871 | 0.670
Sprint backlog | 0.821 | 0.874 | 0.661
Sprint planning meeting | 0.737 | 0.849 | 0.629
Sprint retrospective | 0.788 | 0.862 | 0.688
Sprint review | 0.713 | 0.854 | 0.670
Team performance | 0.866 | 0.910 | 0.659
Tests | 0.833 | 0.885 | 0.673

Table 3. Sample characteristics.

Variable | Absolute Frequency | Relative Frequency
What is your role?
- Product Owner | 47 | 0.246
- Scrum Master | 66 | 0.346
- Development team | 78 | 0.408
How many years of experience in Scrum?
- Less than 1 year | 23 | 0.120
- Between 1 and 2 years | 27 | 0.141
- Between 3 and 4 years | 58 | 0.304
- More than 5 years | 83 | 0.435
How many years of experience in your current Scrum role?
- Less than 1 year | 26 | 0.136
- Between 1 and 2 years | 39 | 0.204
- Between 3 and 4 years | 61 | 0.319
- More than 5 years | 65 | 0.340

Table 4. Statistical analysis of the importance of Scrum metrics.
Activity | Metric | Median | Mode
Product backlog | Number of user stories | 4 | 5
Product backlog | Number of added user stories | 4 | 4
Product backlog | Number of deleted user stories | 3 | 4
Product backlog | Business value | 5 | 5
Sprint planning meeting | Sprint length | 4 | 4
Sprint planning meeting | Size of team | 4 | 4
Sprint planning meeting | Team members' engagement | 4 | 4
Sprint backlog | Number of user stories | 4 | 5
Sprint backlog | Number of tasks | 4 | 4
Sprint backlog | Hours spent to implement a task | 4 | 4
Sprint backlog | Hours remaining to finish a sprint | 4 | 4
Sprint backlog | Sprint burndown | 4 | 3
Daily Scrum | Number of tasks | 3 | 3
Daily Scrum | Number of tasks in progress | 4 | 3
Daily Scrum | Number of concluded tasks | 4 | 5
Daily Scrum | Estimated hours for a task | 3 | 3
Daily Scrum | Remaining hours for a task | 3 | 3
Daily Scrum | Number of impediments | 5 | 5
Daily Scrum | Workload distribution | 3 | 3
Sprint review | Number of accepted user stories | 4 | 3
Sprint review | Number of rejected user stories | 3 | 3
Sprint retrospective | Number of tasks in a sprint | 3 | 3
Sprint retrospective | Number of tasks completed in a sprint | 4 | 5
Sprint retrospective | Number of user stories completed in a sprint | 4 | 5
Team performance | Accuracy of estimation | 4 | 5
Team performance | Focus factor | 4 | 4
Team performance | Targeted value increase | 4 | 4
Team performance | Team member turnover | 4 | 3
Team performance | Team satisfaction | 4 | 4
Team performance | Velocity | 4 | 5
Team performance | Work capacity | 4 | 4
Tests | Acceptance tests per user story | 4 | 4
Tests | Defects count per user story | 4 | 4
Tests | Defects density | 4 | 4
Tests | Functional tests per user story | 4 | 4
Tests | Test automation percentage | 4 | 4
Tests | Unit tests per user story | 4 | 4

Table 5. Statistical analysis of variance between groups.
(YE: years of experience in Scrum; YE-SR: years of experience in the current Scrum role.)

Activity | Metric | Role F | Role Sig. | YE F | YE Sig. | YE-SR F | YE-SR Sig.
Product backlog | Number of user stories | 1.677 | 0.203 | 2.003 | 0.142 | 1.915 | 0.152
Product backlog | Number of added user stories | 1.428 | 0.239 | 1.735 | 0.195 | 1.679 | 0.209
Product backlog | Number of deleted user stories | 2.581 | 0.102 | 3.118 | 0.079 | 3.300 | 0.071
Product backlog | Business value | 2.784 | 0.081 | 10.916 | <0.001 | 9.875 | <0.001
Sprint planning meeting | Sprint length | 1.566 | 0.225 | 2.012 | 0.133 | 2.455 | 0.083
Sprint planning meeting | Size of team | 1.311 | 0.288 | 1.515 | 0.229 | 1.890 | 0.160
Sprint planning meeting | Team members' engagement | 1.561 | 0.227 | 1.684 | 0.205 | 1.788 | 0.194
Sprint backlog | Number of user stories | 1.733 | 0.196 | 2.056 | 0.133 | 2.158 | 0.129
Sprint backlog | Number of tasks | 1.688 | 0.200 | 2.122 | 0.124 | 2.237 | 0.115
Sprint backlog | Hours spent to implement a task | 1.820 | 0.168 | 1.900 | 0.157 | 2.245 | 0.113
Sprint backlog | Hours remaining to finish a sprint | 1.711 | 0.198 | 1.967 | 0.151 | 2.156 | 0.130
Sprint backlog | Sprint burndown | 1.555 | 0.229 | 1.890 | 0.162 | 1.908 | 0.155
Daily Scrum | Number of tasks | 1.232 | 0.301 | 1.505 | 0.237 | 1.670 | 0.220
Daily Scrum | Number of tasks in progress | 1.455 | 0.232 | 1.788 | 0.188 | 1.712 | 0.199
Daily Scrum | Number of concluded tasks | 1.347 | 0.294 | 1.711 | 0.197 | 1.600 | 0.228
Daily Scrum | Estimated hours for a task | 1.670 | 0.203 | 1.990 | 0.153 | 2.103 | 0.137
Daily Scrum | Remaining hours for a task | 1.870 | 0.163 | 2.246 | 0.110 | 2.056 | 0.150
Daily Scrum | Number of impediments | 1.824 | 0.167 | 2.198 | 0.118 | 2.256 | 0.110
Daily Scrum | Workload distribution | 1.569 | 0.226 | 1.756 | 0.193 | 1.790 | 0.191
Sprint review | Number of accepted user stories | 1.522 | 0.231 | 1.678 | 0.208 | 1.890 | 0.161
Sprint review | Number of rejected user stories | 1.967 | 0.150 | 2.289 | 0.096 | 2.099 | 0.139
Sprint retrospective | Number of tasks in a sprint | 2.455 | 0.113 | 2.565 | 0.088 | 2.450 | 0.086
Sprint retrospective | Number of tasks completed in a sprint | 2.311 | 0.130 | 2.812 | 0.083 | 2.491 | 0.084
Sprint retrospective | Number of user stories completed in a sprint | 1.915 | 0.153 | 2.450 | 0.101 | 2.255 | 0.110
Team performance | Accuracy of estimation | 1.240 | 0.297 | 5.784 | <0.001 | 7.122 | <0.001
Team performance | Focus factor | 1.233 | 0.301 | 8.120 | <0.001 | 8.770 | <0.001
Team performance | Targeted value increase | 1.499 | 0.226 | 7.665 | <0.001 | 8.233 | <0.001
Team performance | Team member turnover | 1.367 | 0.291 | 9.125 | <0.001 | 7.990 | <0.001
Team performance | Team satisfaction | 1.299 | 0.285 | 7.900 | <0.001 | 7.458 | <0.001
Team performance | Velocity | 1.317 | 0.287 | 4.752 | 0.006 | 5.341 | 0.002
Team performance | Work capacity | 1.567 | 0.228 | 5.890 | <0.001 | 7.111 | <0.001
Tests | Acceptance tests per user story | 1.671 | 0.204 | 1.878 | 0.173 | 2.156 | 0.133
Tests | Defects count per user story | 1.502 | 0.238 | 1.923 | 0.162 | 2.178 | 0.130
Tests | Defects density | 1.788 | 0.177 | 1.998 | 0.158 | 2.091 | 0.141
Tests | Functional tests per user story | 1.245 | 0.295 | 1.652 | 0.213 | 1.890 | 0.161
Tests | Test automation percentage | 1.348 | 0.297 | 8.239 | <0.001 | 7.799 | <0.001
Tests | Unit tests per user story | 1.290 | 0.289 | 1.566 | 0.225 | 1.670 | 0.201

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Share and Cite: Almeida, F.; Carneiro, P. Perceived Importance of Metrics for Agile Scrum Environments. Information 2023, 14, 327. https://doi.org/10.3390/info14060327

Is peer review effectiveness applicable to Agile projects?
The beauty of code reviews is that they can not only be timed to add the most value to your existing process but can also scale easily in Agile environments to accommodate changing requirements and software quality benchmarks. Effective code reviews can be invaluable to the Scrum methodology.

Which are the four Agile metrics for success?
Four metrics to improve Agile team performance:
- Cycle Time (Productivity)
- Escaped Defect Rate (Quality)
- Planned-to-Done Ratio (Predictability)
- Happiness Metric (Stability)

How is Agile effectiveness measured?
This should ideally be measured with an Agile project dashboard. At a simple level, you can measure productivity by using dashboards to track the amount of work completed in a sprint or cycle, from when the work started to when it finished. The shorter the cycle time, the more work is getting done.

What kind of metrics do you use when measuring a project's progress in Agile?
The most common agile metrics for scrum teams are burndown and velocity, while kanban teams typically track cycle time, throughput, and work in progress (WIP).
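The dashboard measurements mentioned above (cycle time, throughput, WIP) can all be derived from plain start/finish timestamps on work items. A minimal Python sketch, assuming a hypothetical list of (started, finished) pairs where finished is None while an item is still in progress:

```python
from datetime import date

# Hypothetical work items for one period: (started, finished); None = still in progress.
items = [
    (date(2024, 1, 2), date(2024, 1, 5)),
    (date(2024, 1, 3), date(2024, 1, 4)),
    (date(2024, 1, 4), None),
    (date(2024, 1, 5), date(2024, 1, 9)),
]

done = [(s, f) for s, f in items if f is not None]

# Cycle time: days from when work started to when it finished, averaged over done items.
cycle_times = [(f - s).days for s, f in done]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished in the period; WIP: items started but not yet finished.
throughput = len(done)
wip = len(items) - throughput

print(round(avg_cycle_time, 2))  # 2.67
print(throughput, wip)           # 3 1
```

The same aggregation works per sprint for scrum teams or per rolling window for kanban teams; a shorter average cycle time indicates work is flowing through the process faster.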