I just finished reading Steve McConnell's Rapid Development: Taming Wild Software Schedules. Well, much of the time it was not exactly "reading", more like page-skipping. Obvious. Obvious. Obvious. But every time I thought I knew what was coming, there was something unexpected hidden inside this big recapitulation of Fred Brooks's findings. With big hard-data charts in the margins. And then I'm horrified that I can relate to most of what is written, and that nothing's really new to me any more. Even if I didn't expect to hear it from McConnell.
So the excitement of the days when I wrote stuff like Do You Understand XP? is gone. Rapid Development is a good book that reviews every practice it preaches with a critical eye, but I'm not overly excited. Yes, I learned, again, that many so-called Agile practices have been known for a long time. But hey, I already knew that after reading The Mythical Man-Month. Excitement is hard to find in books these days. If only I could find one of those thought-provoking books that rips your universe apart.
Wednesday, July 1, 2009
Saturday, June 6, 2009
All Programmers Are Code Narcissists
01001101011110010010000001100011011011110110010001100101
00100000011100100111010101101100011001010111101000100001
I finally discovered the truth about why developers would rather rewrite a 1 MLOC project from scratch than try to understand a fellow programmer's code:
We're all code narcissists!
And the reason for that can easily be deduced in a tiny logical chain:
(The best code is easy to understand) ^ (I can understand my own code the easiest, duh!)
=> (My own code is the best code)
Unfortunately this is only true for me, or perfect clones of myself. Which rules out everybody else I work with. Which reminds me... What was the reason that programming is done in teams?
Sunday, February 8, 2009
Test Everything That Could Possibly Break - A Guide To Better Testing
Joe: "Writing this test will make sure that we find bugs quicker. It will let us change the code without breaking anything and it will help us to write decoupled code."
Jim: "Maintaining this test will be a nightmare. It is tightly coupled to the class we're writing and we cannot change anything without changing the test. It will be a pain."
Joe: "How do you know?"
Jim: "Well, how do you know?"
Joe: "I have 20 years of experience not writing unit tests."
So what?
When I'm writing new code I am never sure whether I test enough, or whether my tests are at the right level of abstraction. For the complicated core functionality of a distributed system this is a no-brainer: I use TDD, which almost by definition gives me close to 100% code coverage, and I add some nice integration and acceptance tests on top. But there are a myriad of cases where going forward in baby-step TDD fashion seems like a waste of time, and it would really help me to find some sensible rules to apply.
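To make the baby-step part concrete, here is a minimal sketch of one red-green TDD cycle in Java with JUnit 4. The Money class and its behavior are invented purely for illustration; nothing here comes from a real project.

    // Step 1 (red): write the smallest test that can fail.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class MoneyTest {
        @Test
        public void addingTwoAmountsSumsTheCents() {
            Money five = new Money(500);
            Money three = new Money(300);
            assertEquals(800, five.add(three).cents());
        }
    }

    // Step 2 (green): write just enough production code to make the test pass.
    class Money {
        private final int cents;

        Money(int cents) { this.cents = cents; }

        Money add(Money other) { return new Money(cents + other.cents); }

        int cents() { return cents; }
    }

The rhythm is the point: one tiny failing test, the minimal code to make it green, then the next tiny test - and it is exactly this granularity that feels like a waste of time in the cases above.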
The simplest and best rule I have found so far is an idea from Extreme Programming:
Test Everything That Could Possibly Break
Unfortunately this rule is not as simple as it looks. My approach to it is the good old mantra of try-measure-adapt, where try means to just do whatever some randomly selected guy claims is the new Kool-Aid, measure means to listen to the whimsical thoughts my brain produces while doing it, and adapt means to look at the results and change my behavior.
So here's my guide to better testing, from beginner to expert level:
Beginner: Do some unimportant project by following the description of TDD step by step. Don't waste your employer's money on it if you don't have any idea of how to do it - the first time you use it will be a disaster. Writing yet another Sudoku solver in your favorite programming language might be a cool idea.
The important part is that you don't yet have an idea of what "test everything that could possibly break" means, so your best bet is to assume that everything might break. Even those getters and setters over there. Remember that you're not allowed to rant about made-up scenarios of why too many tests might be bad if you have never experienced what it means to maintain a program with too many tests. Do that first, then come back and read on.
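To see what taking "everything" literally means, here is a hypothetical example in that spirit: a test for a plain getter/setter pair, about as trivial as code gets. The Person class is made up for illustration.

    // Beginner level: assume even trivial accessors could possibly break.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PersonTest {
        @Test
        public void nameIsStoredAndReturnedUnchanged() {
            Person person = new Person();
            person.setName("Ada");
            assertEquals("Ada", person.getName());
        }
    }

    class Person {
        private String name;

        public void setName(String name) { this.name = name; }

        public String getName() { return name; }
    }

A test like this will almost certainly never fail. Writing it anyway is how you find out from your own experience where "could possibly break" actually starts.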
Advanced: So you already have some experience doing TDD and know how it feels to write all those little unit tests. You got some feedback on when those tests caught a stupid bug that would have taken an hour to find if you hadn't written the test. You now know what impact unit tests have on your ability to refactor. Now go and break the rules to various degrees. Try to be less exhaustive with your tests and bundle your baby steps into bigger units of work. See how that affects your ability to find bugs. Test your assumptions and notice when they break. When you find a new bug that takes some hours to debug, think about what kind of test would have helped you find it quicker, and write that kind of test from now on.
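One way to bundle baby steps into bigger units of work is to test a whole behavior through its public entry point instead of asserting every intermediate step. Here is a hypothetical sketch; the ShoppingCart class is invented for illustration.

    // Advanced level: one coarser-grained test per behavior instead of one per baby step.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShoppingCartTest {
        @Test
        public void totalReflectsAddedItemsAndAppliedDiscount() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("book", 2000);   // prices in cents
            cart.add("pencil", 150);
            cart.applyDiscountPercent(10);
            // One assertion on the visible result covers adding, summing and discounting.
            assertEquals(1935, cart.totalCents());
        }
    }

    class ShoppingCart {
        private int totalCents;

        void add(String item, int priceCents) { totalCents += priceCents; }

        void applyDiscountPercent(int percent) { totalCents = totalCents * (100 - percent) / 100; }

        int totalCents() { return totalCents; }
    }

When such a test fails it tells you less precisely where the bug is, but there are far fewer tests to rewrite when the internals change - which is exactly the trade-off you want to experience for yourself.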
Expert: If I were an expert I could probably tell you more about what to do in that case. I still hope that repeating the advanced guidelines will finally make me as wise as Joel and Jeff in their discussions or Jay Fields when he writes about developer tests. Perhaps listening to those guys will enlighten you. You could even take a look at the very interesting discussion of the idea to test everything that could possibly break.
In a nutshell:
Jim: "Maintaining this test will be a nightmare. It is tightly coupled to the class we're writing and we cannot change anything without changing the test. It will be a pain."
Joe: "How do you know?"
Jim: "Well, how do you know?"
Joe: "I have 20 years of experience not writing unit tests."
So what?
When I'm writing new code I am never sure whether I test enough, or if my tests are on the right level of abstraction. For complicated core functionality of a distributed system this is a no-brainer - I use TDD, which by the very definition gives me 100% code coverage, and add some nice integration and acceptance tests. But there are a myriad of cases where going forward in the baby-step TDD way seems a waste of time, and it would really help me to find some sensible rules to apply.
The simplest and best rule I have found so far is an idea from Extreme Programming:
Test Everything That Could Possibly Break
Unfortunately this rule is not as simple as it looks. My approach to that is the good old mantra of try-measure-adapt, where try means to just do whatever a randomly selected guy thinks is the new cool-aid, measure means to listen to that whimsical thoughts my brain produces while doing it and adapt means to look at the results and change my behavior.
So here's my guide to better testing, from the beginner to the professional level:
Beginner: Do some unimportant project by following the description of TDD step-by-step. Don't waste your employers money with that if you don't have any idea of how to do it - the first time you use it will be a disaster. Writing yet another Sudoku solver in your favorite programming language might be a cool idea.
The important part is that you don't have an idea of what "test everything that could possibly break" means, so your best bet is to assume everything might break. Even those getters and setters over there. Remember that you're not allowed to rant about made-up scenarios of why too many tests might be bad if you have never experienced what it means to maintain a program with too many tests. Do that first and come back later and read on.
Advanced: So you already have some experience doing TDD and know how it feels to write all those little unit tests. You got some feedback on when those tests caught a stupid bug that would have taken an hour to find if you hadn't written the test. You now know what the impact of unit tests on your ability to do refactorings is. Now go and break the rules by various degrees. Try to be less exhaustive with your tests and bundle your baby steps into bigger units of work. See how that affects your ability to find bugs. Test your assumptions and be aware of when they break. When you find a new bug that takes some hours to debug, think about what kind of test would have helped you find it quicker and write those tests from now on.
Expert: If I were an expert I could probably tell you more about what to do in that case. I still hope that repeating the advanced guidelines will finally make me as wise as Joel and Jeff in their discussions or Jay Fields when he writes about developer tests. Perhaps listening to those guys will enlighten you. You could even take a look at the very interesting discussion of the idea to test everything that could possibly break.
In a nutshell:
- Start by testing everything, even if it looks stupid (don't do it at work).
- Do slightly bigger steps and see what happens.
- Adapt whenever you experience a situation in which different behavior would have made more sense.