Algorithmically Estimating Developer Time

Vic Cherubini recently came up with an interesting way of calculating developer estimates that get more accurate over time. When I began reading it, I simply said to myself: “Well, if you estimate over time… the averages should just come out in the wash anyway…” Whether or not that's true, what is interesting about Vic’s system is that, per the algorithm, you should improve over time.

  1. Set up an Issue Tracker – Something to contain your issues/stories/etc., like a developer backlog.
  2. Give points to issues – Fibonacci or doubling works here. Points should be integers greater than 0. The point system is entirely arbitrary, but points should reflect how hard an issue is relative to the other issues in the project.
  3. Estimate the total number of hours to complete each issue – Based on personal experience to start.
  4. Complete each issue – Track the total amount of time it took to complete. This does not include waiting around for the client, sitting in meetings, or doing other necessary administrative work. This is simply an algorithm for when you’re actually coding, architecting, or otherwise engineering what the issue specifically asks for.
  5. Reflection – Calculate your efficiency ratio, or ER. The ER is the ratio of the number of hours estimated to the actual number of hours taken. This needs to be calculated for each issue. Next, multiply the ER by the point value of the issue. I call this the developer effectiveness, or DE. Ideally, your ER will be close to 1, so your DE will be approximately equal to the issue's point value.
  6. Summary – At the end of the project, developers who are good at estimating their time have a sum(DE) >= sum(possible_points). Note: The possible_points variable is only the sum of the points for issues that developer worked on. Developers who are not as good at estimating their time would have an ER less than 1, and a sum(DE) < sum(possible_points).
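The steps above can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the issue data, function names, and point values are made up for the example, not taken from Vic's system:

```python
# A minimal sketch of the ER/DE bookkeeping described in steps 5 and 6.
# All issue data below is hypothetical example data.

def efficiency_ratio(estimated_hours, actual_hours):
    """ER = hours estimated / actual hours taken (step 5)."""
    return estimated_hours / actual_hours

def developer_effectiveness(er, points):
    """DE = ER multiplied by the issue's point value (step 5)."""
    return er * points

# Completed issues for one developer: (points, estimated hours, actual hours)
issues = [
    (3, 4.0, 5.0),   # took longer than estimated: ER < 1
    (5, 8.0, 8.0),   # estimated exactly right: ER = 1
    (2, 3.0, 2.0),   # finished faster than estimated: ER > 1
]

total_de = 0.0
possible_points = 0
for points, estimated, actual in issues:
    er = efficiency_ratio(estimated, actual)
    total_de += developer_effectiveness(er, points)
    possible_points += points

# Step 6: a good estimator ends up with sum(DE) >= sum(possible_points).
print(f"sum(DE) = {total_de:.1f}, possible_points = {possible_points}")
```

With this sample data the ERs are 0.8, 1.0, and 1.5, so sum(DE) is 10.4 against 10 possible points — the over- and under-estimates roughly cancel, which is the "comes out in the wash" effect.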

Over time, you will be able to track your effectiveness at estimating time. Thus, if your ER approaches 0.5, you might think about doubling your time estimates for future tasks. Alternatively, if it approaches 2.0, perhaps shave some time off the estimates.
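Since ER is estimated-over-actual hours, correcting a future estimate amounts to dividing it by your running ER. A small sketch of that calibration (the function name and the sample values are mine, not from the article — the 0.5 and 2.0 ratios echo the examples above):

```python
# Hypothetical helper: scale a raw estimate by historical estimating accuracy.
def calibrated_estimate(raw_estimate_hours, running_er):
    """Adjust a raw estimate using a running efficiency ratio.

    ER = estimated / actual, so dividing a raw guess by ER yields the
    hours you have historically actually taken for similar work.
    """
    return raw_estimate_hours / running_er

print(calibrated_estimate(4.0, 0.5))  # ER of 0.5 -> double the estimate: 8.0
print(calibrated_estimate(4.0, 2.0))  # ER of 2.0 -> halve the estimate: 2.0
```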

What are your thoughts? Make sense?

[HT: LeftNode]

7 Replies to “Algorithmically Estimating Developer Time”

We use a similar approach for estimating how much capacity devs should be allocating to support. We create a support story/tasks in each iteration and measure support estimates/actuals (we are using Rally). Based on these patterns, we move up/down the amount of capacity they allocate to support time in a given iteration. This has worked well for us and makes the amount of support the teams are working on very visible.
