
Okay, It’s Stopped. We No Longer Need It!

11/10/2011

How to Give an Accurate Answer
Written by Scott G. Ames
Thursday, 10 November 2011

Published: December, 2011 in the Agile Journal

“How long’s it gonna take?” My response: “Six weeks, plus or minus two days. No more.” In this article, I’ll offer a rebuttal to Daryl Kulak’s article, “Let’s Stop the Wishful Thinking,” and show why his beliefs about software estimating, while understandable, are questionable at best, given the advent of the Test Requirements Agile Metric (T.R.A.M.).

We Are Such Good Estimators!
That exchange comes from a real-world experience. The head of development at NEC was asking, on behalf of the customer, how long it would be until the final release of the software. The customer wanted it within three weeks, and everyone felt a release in that timeframe would be possible. I knew a six-week estimate would be more realistic, and TRAM helped me justify that estimate, not just to the rest of the development staff, but also to the client.

Storypoints
This is where Scrum’s “storypoints” would have failed us.
Equating storypoints to an amount of time is ludicrous for a development team. As Mr. Kulak says, “It’s ridiculous. The power of the storypoint in estimating user stories is that it is vague. Keep that power.” Since we are trying to eliminate that vagueness, we will eliminate the storypoints.

“Oh no! How will we estimate?” you ask. By using TRAM’s method of Verification points. Verification points, or VP, are a defined metric rather than a fuzzy one. Each requirement is given a score based on the type of defect it would cause if it failed, as follows:

Catastrophic: The defect could cause disasters like loss of life, mass destruction, economic collapse, etc. This severity level should only be used in mission or life-critical systems and must have at least Exciter priority.
Showstopper: The defect makes the product or a major component entirely unusable. In mission or life-critical systems, failure could be hazardous. This severity level must have at least Recommended priority.
High: The defect makes the product or a major component difficult to use, and the workaround, if one exists, is difficult or cumbersome.
Medium: The defect makes the product or a major component difficult to use, but a simple workaround exists.
Low: The defect causes user inconvenience or annoyance, but does not affect any required functionality.

These are then modified by the priority level:

Mandatory: The defect is highly visible or has critical impact upon the customer; it must be repaired prior to general availability release.
Exciter: The defect has significant impact upon the customer, and inclusion of this functionality would greatly increase customer satisfaction with the product.
Moderate: The defect has moderate impact upon the customer and should be repaired before a general availability release, but it is not necessary unless at least Medium severity. This level is also used for requirements that have not been prioritized.
Recommended: The defect has some impact upon the customer and should be repaired before a general availability release, but it is not necessary unless at least High severity.
Desired: The defect has a low impact upon the customer and should be repaired before a general availability release, but it is not necessary.

As you can see, while defects are still prioritized, some requirements will end up with the same priority. This is not a problem; the developer chooses which defect to fix next, or the requirement is re-scored.

The team decides on the requirement’s severity, and the Product Owner decides on its priority. Because the estimate is expressed in “team-weeks” rather than “man-hours,” it has more value: all those little bumps in time are smoothed out. You don’t even need to account for an employee’s time off, since other members of the team can pick up his tasks.
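To make the scoring concrete, here is a minimal sketch of how a requirement’s severity (set by the team) and priority (set by the Product Owner) might combine into a Verification point score. The numeric weights are illustrative assumptions of mine; this article does not publish TRAM’s actual scoring table.

```python
# Minimal sketch of Verification point (VP) scoring.
# NOTE: the numeric weights below are illustrative assumptions, not the
# official TRAM values, which are not published in this article.

SEVERITY_WEIGHT = {
    "Catastrophic": 5,   # decided by the team
    "Showstopper": 4,
    "High": 3,
    "Medium": 2,
    "Low": 1,
}

PRIORITY_MODIFIER = {
    "Mandatory": 1.5,    # decided by the Product Owner
    "Exciter": 1.25,
    "Moderate": 1.0,
    "Recommended": 0.75,
    "Desired": 0.5,
}

def verification_points(severity: str, priority: str) -> float:
    """Score one requirement: a severity weight modified by its priority."""
    return SEVERITY_WEIGHT[severity] * PRIORITY_MODIFIER[priority]

# Example: a Showstopper requirement the Product Owner marked Mandatory.
print(verification_points("Showstopper", "Mandatory"))  # 6.0
```

Because the table is fixed, anyone scoring the same requirement arrives at the same number, which is the consistency that makes VP comparable across teams and sites.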

Verification points, being a defined metric, are the same no matter who calculates them. New York, Denver, or New Delhi, a Verification point is a Verification point and means the same thing. This is not true of storypoints. With VP, simple velocity will tell you which of two teams should develop faster. The total Verification points for a project are a good metric for the overall size of the development effort. With a form of iterative development, you can accurately determine, over time, how many Verification points the team clears during each iteration. Cleared Verification points represent deliverable software. The Verification points cleared by the team per day, week, iteration, or sprint are a valuable metric that can be used to show how much effort was required per Verification point, to determine how rapidly a project can be completed, and to estimate a project’s duration and cost.

At NEC, I implemented the TRAM on the mobility project to aid in determining how much functionality to attempt during each sprint. Fourteen weeks into the project, the customer asked us to make final delivery within 3 weeks. Management hoped that we could do that for them; however, we had cleared only 280 VP of product during those 14 weeks, which gave us a velocity of 20 VP/week. With 120 VP of product still in the backlog, we told them that our best estimate for completion was 6 weeks. It is worth noting that the TRAM analysis estimate was 100% accurate: we made the final delivery of the project in exactly 6 weeks.

One thing I was asked at NEC was, “What if we made more overtime mandatory to attempt to get the project out in 3 weeks?” We had already had mandatory overtime on Saturdays for the prior three weeks, and the effects were not helpful. During the first week, the team worked an additional day and produced an additional day’s worth of product. In the second week, production started to fall; the team produced only 90% of what it had accomplished during a normal work week. By the third week, production had slipped to 80%. Obviously, the team was burning out. Rather than let it slip further to 60%, which is where the team was heading, cessation of mandatory overtime was recommended and implemented. Velocity then returned to pre-overtime levels. This saved the company two weeks of development time and the associated costs. For a team of 60, this was a significant monetary savings for NEC. This is the power of Verification points.
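For readers who want the arithmetic behind that estimate spelled out, here is a minimal sketch using the figures quoted above for the NEC mobility project; the helper names are my own, not part of TRAM.

```python
# Velocity-based projection, using the NEC mobility project figures
# quoted in the article. The function names are illustrative, not TRAM's.

def velocity(cleared_vp: float, weeks: float) -> float:
    """Verification points cleared per week."""
    return cleared_vp / weeks

def weeks_to_complete(backlog_vp: float, vp_per_week: float) -> float:
    """Projected weeks to clear the remaining backlog at current velocity."""
    return backlog_vp / vp_per_week

nec_velocity = velocity(cleared_vp=280, weeks=14)                        # 20 VP/week
estimate = weeks_to_complete(backlog_vp=120, vp_per_week=nec_velocity)   # 6 weeks
print(f"{nec_velocity:.0f} VP/week -> {estimate:.0f} weeks to final delivery")
```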


Verification points from the Test Requirements Agile Metric are a very good way to save your company time and money while producing accurate estimates that business people will find useful.


About the Author
Scott G. Ames has 15 years of experience in software quality, is a Certified ScrumMaster, and is the Chief TRAM Engineer at Good-To-Go!

Stress Testing - What a Load!

11/10/2011

Published: February, 2006 in Better Software

Over the years, I have seen many stress testing projects, and one question that I am often asked is how to go about ramping up the amount of stress in a project. Now, please understand, I know that what is really being asked is how to go about ramping up the amount of load, not stress, in a test. Since I, however, am a quality engineer by trade and like specifics, this article will attempt to answer the question, not as it was intended, but rather, as it was asked. If you really want to increase the overall stress on your people and in your testing project, these are some of the best practices to follow.

 Skip any “unnecessary” steps.

Defining goals, requirements, and test specifications takes a long time. These steps not only cause test engineers to create tests that will execute properly with something other than just simple test data, but they also seriously cut into their functional testing of Bejeweled and load testing of internet web servers. If a test works with any possible type of data, it significantly reduces the amount of stress in a testing project. Not taking the time to do this right increases stress by forcing people to do it over, and possibly over again, under deadline pressure. Also, defined goals, requirements, and specifications remove any possibility of scapegoating outsourced resources for any failures that occur in the test project. This is especially important when validation that a system works is more important than actually fixing any problems that might arise.

 Have a meeting.

Better still, have lots of meetings. Require that everyone connected to the project attend, no matter how small his or her role, and without regard to his or her other responsibilities. The meetings should require preparation and follow-up tasks that take at least twice as long as the meetings themselves. To increase stress, it is far more important that everyone know everybody else's responsibilities than that anything productive actually get accomplished. This process creates stress in two ways: it increases micromanagement, and it aids in choosing potential scapegoats.

 Delegate, Delegate, Delegate.

You probably think this actually reduces stress, but proper delegation to increase stress is almost an art form. Delegate critical tasks to people who are completely inappropriate for those tasks. For example, make your testing tool consultants, who will have little or no knowledge of your business processes, responsible for collecting the data necessary to execute the tests. Give them no direction as to how to collect the data and no authority to obtain assistance from the subject matter experts who know how to collect and validate the data. Be careful to not go overboard here. Later on, when the project is approaching its deadline, you can re-delegate this task to people who are capable of actually accomplishing it. In this manner, you can increase stress on multiple fronts and still keep the project from failing. If the project does fail, you can always scapegoat any of the people to whom these tasks were originally delegated. In the case above, since it is impossible for the consultants to deliver a successful project, it may even be possible to avoid paying them for their work. This is best accomplished when used in combination with the previous items.

 Don't bother to validate the test environment.

This is really an important step for increasing the stress in a testing project. It's even better to ignore any piddling little details like test tool environmental requirements, such as the operating system, networking protocols, and any other necessary software and system configuration settings. Just make blanket statements like:

    "All of the machines have exactly identical configurations."
    "The test system is an exact duplicate of the system we have here."
    "Everything has been set up as you requested."

Later, when these statements are proven inaccurate, the project stress will greatly increase. The third statement above was once combined with "Our people spent the whole weekend working on it." Of course, that was before an automatic system restore early the next morning undid all of the system configuration work that had just been completed. Having to rebuild an environment two or three times while working on a project deadline is a wonderful way to increase stress. You will be amazed by the amount of stress added to a project by not having a properly configured environment and by making people scramble to make inadequate work-arounds in a short amount of time.

Better yet, don't bother with a test system at all. Simply use the production system as the test bed, and don't bother to back anything up. Remember, backups are only for people who make mistakes. And finally,

 Ignore the stupid questions. Just make assumptions.

This is a combination of all the above items. Assumptions, made at all levels, keep the project from running smoothly and greatly increase the amount of stress on the project. As complex and detailed as a software testing project is now, there are no stupid questions, only bad assumptions.

On second thought, just do it right the first time; otherwise, you may not be around to do it over. Who needs all that extra stress anyway?


    Author

    Scott G. Ames is a Certified ScrumMaster with 17 years of experience in Software Quality and Estimation.

