here’s one thing that somewhat worked for my team. when we first started using LLMs we decided to run the same process as if they did not exist: same sprint planning meetings, same estimation. we did this for 6 months and saw roughly a 55% increase in output compared to pre-LLM usage. there are biases in what we tried to achieve; it is not easy to estimate that something will take XX hours when you know there are portions (for example writing documentation or parts of the test coverage) you won’t have to write, but we did our best. after we convinced ourselves of the productivity gains we stopped doing this.
wow, great experiment. I'm amazed the whole team went through with duplicating everything for that long. Nice work :)
I resorted to feels. After decades of programming, I know when I'm being productive, and I can reasonably estimate when a colleague is being productive. I extrapolate that to the LLM, too. Absolutely not an objective measure, but I feel that I can get the LLM to do in a day a task that would take me 2-3 weeks (post-Nov 25 and using parallel agents).
I too generally resorted to feels. However, as a team we decided we needed to convince ourselves that LLMs are an accelerator and that they are not introducing flaws into our process (regressions, performance issues, security issues...). My team is pretty awesome and we generally try not to "fall for FOMO" type deals, like "everyone is using ____, surely we must as well."
just like HN, we had team members who were hesitant (to say the least) in the beginning, as well as team members who were "convinced" a lot earlier that LLMs can be a great accelerator for what we are doing. I would venture a guess that a similar situation exists (or existed) in many places. so we tried to figure out a way where it is not just a portion of the team "putting their foot down" so-to-speak, but more like "OK, let's see if this can be measured so that everyone (or let's say an overwhelming majority) is on board." I wish I'd read about more teams at least attempting this approach...