tag:blogger.com,1999:blog-30814426841290168222024-03-08T04:38:39.928-08:00Manuel Klimekklimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.comBlogger58125tag:blogger.com,1999:blog-3081442684129016822.post-74679680326007141342009-07-01T15:23:00.000-07:002011-05-30T13:07:33.181-07:00Review: Rapid Development<img style="float:left" src="http://ecx.images-amazon.com/images/I/41+sSYBlD9L._SL160_.jpg" alt="Rapid Development Cover" />I just finished reading Steve McConnell's <a href="http://klimek.box4.net/blog/index.php?now_reading_author=steve-mcconnell&now_reading_title=rapid-development-taming-wild-software-schedules">Rapid Development: Taming Wild Software Schedules</a>. Well, much of the time it was not exactly "reading", more like "page-skipping". Obvious. Obvious. Obvious. But every time I thought I knew what was coming, there was something unexpected hidden inside this big recapitulation of Fred Brooks's findings. With big hard-data charts in the margins. And then I'm horrified that I can relate to most of what is written, and that nothing's really new to me any more. Even if I didn't expect to read it from McConnell.<br/><br/>So the excitement of the days when I wrote stuff like <a href="http://klimek.box4.net/blog/2007/02/19/do-you-understand-xp/">Do You Understand XP?</a> is gone. Rapid Development is a good book that reviews every practice it preaches with a critical eye, but I'm not overly excited. Yes, I learned, again, that many so-called Agile practices have been known for a long time. But hey, I knew that after reading The Mythical Man-Month. Excitement is hard to find in books these days. 
If only I could find one of those thought-provoking books that rips your universe apart.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com5tag:blogger.com,1999:blog-3081442684129016822.post-68046321808094615642009-06-06T17:05:00.000-07:002011-05-30T13:07:33.181-07:00All Programmers Are Code Narcissists01001101011110010010000001100011011011110110010001100101<br/>00100000011100100111010101101100011001010111101000100001<br/><br/>I finally discovered the truth about why developers would rather rewrite a 1MLOC project from scratch than try to understand a fellow programmer's code:<br/><br/><b>We're all code narcissists!</b><br/><br/>And the reason for that can easily be deduced in a tiny logical chain:<br/><em>(The best code is easy to understand) ^ (I can understand my own code the easiest, duh!) <br/>=> (My own code is the best code)</em><br/><br/>Unfortunately this is only true for me, or for perfect clones of myself. Which rules out everybody else I work with. Which reminds me... What was the reason that programming is done in teams?klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com7tag:blogger.com,1999:blog-3081442684129016822.post-58989664517249449452009-02-08T22:43:00.000-08:002011-05-30T13:07:33.181-07:00Test Everything That Could Possibly Break - A Guide To Better TestingJoe: "Writing this test will make sure that we find bugs quicker. It will let us change the code without breaking anything and it will help us to write decoupled code."<br/>Jim: "Maintaining this test will be a nightmare. It is tightly coupled to the class we're writing and we cannot change anything without changing the test. It will be a pain."<br/>Joe: "How do you know?"<br/>Jim: "Well, how do <em>you</em> know?"<br/>Joe: "I have 20 years of experience not writing unit tests."<br/><br/><strong>So what?</strong><br/>When I'm writing new code I am never sure whether I test enough, or whether my tests are on the right level of abstraction. 
For complicated core functionality of a distributed system this is a no-brainer - I use TDD, which by definition gives me 100% code coverage, and add some nice integration and acceptance tests. But there are a myriad of cases where going forward in the baby-step TDD way seems a waste of time, and it would really help me to find some sensible rules to apply.<br/><br/>The simplest and best rule I have found so far is an idea from Extreme Programming:<br/><strong>Test Everything That Could Possibly Break</strong><br/><br/>Unfortunately this rule is not as simple as it looks. My approach to it is the good old mantra of <em>try-measure-adapt</em>, where <em>try</em> means to just do whatever a randomly selected guy thinks is the new Kool-Aid, <em>measure</em> means to listen to the whimsical thoughts my brain produces while doing it, and <em>adapt</em> means to look at the results and change my behavior.<br/><br/>So here's my guide to better testing, from the beginner to the professional level:<br/><br/><strong>Beginner:</strong> Do some unimportant project by following <a href="http://klimek.box4.net/blog/index.php?now_reading_author=kent-beck&now_reading_title=test-driven-development-by-example-addison-wesley-signature-series">the description of TDD</a> step-by-step. Don't waste your employer's money on this if you don't have any idea of how to do it - the first time you use it will be a disaster. Writing yet another Sudoku solver in your favorite programming language might be a cool idea.<br/>The important part is that you don't have an idea of what "test everything that could possibly break" means, so your best bet is to assume everything might break. Even those getters and setters over there. Remember that you're not allowed to rant about made-up scenarios of why too many tests might be bad if you have never experienced what it means to maintain a program with too many tests. 
Do that first and come back later and read on.<br/><br/><strong>Advanced:</strong> So you already have some experience doing TDD and know how it feels to write all those little unit tests. You got some feedback on when those tests caught a stupid bug that would have taken an hour to find if you hadn't written the test. You now know what the impact of unit tests on your ability to do refactorings is. Now go and break the rules by various degrees. Try to be less exhaustive with your tests and bundle your baby steps into bigger units of work. See how that affects your ability to find bugs. Test your assumptions and be aware of when they break. When you find a new bug that takes some hours to debug, think about what kind of test would have helped you find it quicker and write those tests from now on.<br/><br/><strong>Expert:</strong> If I were an expert I could probably tell you more about what to do in that case. I still hope that repeating the advanced guidelines will finally make me as wise as <a href="http://joelonsoftware.com/items/2009/01/31.html">Joel and Jeff in their discussions</a> or <a href="http://java.dzone.com/articles/thoughts-developer-testing">Jay Fields when he writes about developer tests</a>. Perhaps listening to those guys will enlighten you. 
You could even take a look at the very interesting discussion of the idea to <a href="http://c2.com/cgi-bin/wiki?TestEverythingThatCouldPossiblyBreak">test everything that could possibly break</a>.<br/><br/>In a nutshell:<br/><ul><br/> <li>Start by testing everything, even if it looks stupid (don't do it at work).</li><br/> <li>Do slightly bigger steps and see what happens.</li><br/> <li>Adapt whenever you experience a situation in which different behavior would have made more sense.</li><br/></ul>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com5tag:blogger.com,1999:blog-3081442684129016822.post-5027815973766282872008-12-22T15:02:00.000-08:002011-05-30T13:07:33.181-07:00Leaving the Comfort ZoneA week ago I gave my first talk since university. It was the first talk I ever gave in English, and I was scared as hell. It felt like living through those exam days back at school all over again - except that this time I thought I was old enough to realize that it'll all be okay. I'm obviously not.<br/><br/>I acted just like I did back in school. Or worse. I waited until the last possible moment before starting to prepare the talk. When I finally started with the slides, a little perfectionist devil Manuel sat down leisurely on my left shoulder, telling me that this crap is just not up to my own standards. I worked long hours the night before the big day, and woke up early just to be able to rehearse the whole play before entering the stage.<br/><br/>Of course everything worked out just fine. Well, besides me saying basic-a-lly all the time. So why was I so freaked out? Well, I was obviously leaving my comfort zone.<br/><br/>Now the interesting thing beneath all that personal drama I'm ranting about is that I suddenly realized how long it had been since I really stepped out of my safe little comfort zone. Granted, applying to Google is not the most un-stressful experience I ever had. 
And obviously starting a new job with all those bright people around me did not exactly make me feel warm and cozy.<br/><br/>But at that very moment, as I stood there, my peers gazing absent-mindedly into their laptops, my hands slightly sweating, I suddenly realized that ever since I started working I had been doing everything exactly the way I was most comfortable with. And that was kind of a shock.<br/><br/>Of course after giving the talk I felt great. I had known that in advance, but I realized that without being nudged enough I'd probably have tried to wiggle out somehow. <br/><br/>Lessons learned:<br/><ol><br/> <li>It's easier to step out of the comfort zone when somebody kicks your ass.</li><br/> <li>It's a long way from leaving your comfort zone once to <a href="http://www.ibm.com/developerworks/rational/library/nov07/pollice/index.html">real change</a>.</li><br/> <li>I must learn how to leave my comfort zone on my own.</li><br/></ol><br/><br/>Do you know tricks that make it easier to leave the comfort zone?klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com4tag:blogger.com,1999:blog-3081442684129016822.post-5465403779519359102008-06-07T13:01:00.000-07:002011-05-30T13:07:33.181-07:00One Day In My Life As A Googler<font color="grey"><i>Disclaimer: this is a personal entry. Which means that if you are here for technical revelations or to gather super secret information about Google, go away, or I'll bore you to death. Seriously. Don't tell me I didn't warn you.</i></font><br/><br/><i>6:00 am:</i> The iPod station starts playing and lifts me gently from the land of dreams. <br/><a href='http://klimek.box4.net/blog/wp-content/uploads/2008/06/stanford.jpg' title='Stanford'><img style="float:right; margin: 0 0 10px 10px" src='http://klimek.box4.net/blog/wp-content/uploads/2008/06/stanford.thumbnail.jpg' alt='Stanford' /></a>"... 
Don't run away the time is now the place is here ..."<br/>Sasha's swinging voice finally makes me cautiously open my eyes: another sunny day in Mountain View. The weather gadget on my iPod informs me that the temperature is currently 11 degrees Celsius, probably rising to a comfortable 21 throughout the day. I realize that I still don't know how to convert Celsius to Fahrenheit without accessing the Internet.<br/><br/><i>6:40 am:</i> Armed with a gbike and a helmet I finally leave the apartment, trying to get warmed up for my workout on the 3 mile ride to work. While I pass the occasional early jogger I can see the first rays of sunlight painting the landscape in warm colors. Pushing the pedals gets my pulse up to 150. Sweet.<br/><br/><i>6:55 am:</i> The gym is wonderfully empty at this time of the day. With just a handful of Googlers around me I enjoy a quiet workout. Today I'm torturing my upper body. <br/><br/><i>7:45 am:</i> I end my workout with an <a href="http://www.gmap-pedometer.com/?r=1887536">easy 50 minute run</a> along Mountain View's beautiful shoreline.<a href='http://www.gmap-pedometer.com/?r=1887536' title='Jogging Track'><img style="float:left; margin: 0 0 10px 10px" src='http://klimek.box4.net/blog/wp-content/uploads/2008/05/jogging.png' alt='Jogging Track' /></a> Squirrels pass my way while two chatting women overtake me. I realize that I've got a long way to go before I become a decent runner. Well, at least I can program computers.<br/><br/><i>8:50 am:</i> Breakfast time. I get myself some freshly made scrambled eggs with crispy bacon, two chocolate croissants, some healthy orange juice and a steaming cup of coffee. Throughout the next few minutes some of my colleagues from the build tools team arrive, and a discussion evolves about espresso machines, barbecue grills, or why we don't want to discuss Agile. The day can come. <br/><br/><i>9:10 am:</i> Time to do some work! And, no, that doesn't involve foosball. Or ping pong. 
I'm actually <i>writing code</i>. Unfortunately I am not allowed to chat about what we do, or even how we do it. So just imagine me typing. And talking to people. And typing again. More talking. More typing. You get the idea.<br/><br/><a href='http://klimek.box4.net/blog/wp-content/uploads/2008/05/pintxo.jpg' title='Lunch at Pintxo'><img style="float:right; margin:0 0 10px 10px;" src='http://klimek.box4.net/blog/wp-content/uploads/2008/05/pintxo.thumbnail.jpg' alt='Lunch at Pintxo' /></a><i>12:10 pm:</i> The hardest question at lunch time is which cafe to choose. I just run with the crowd of people working next to me. Today we're heading over to <a href="http://money.cnn.com/galleries/2007/fortune/0701/gallery.Google_food/9.html">Pintxo</a>. One thing that constantly surprises me about food at Google is that I actually like the dessert. Chocolate cookies, sliced fruit, hot brownies, some chocolate cream topped with strawberries. Take your pick.<br/><br/><i>1:00 pm:</i> Work, work, work. Getting some coffee. Work, work, work. More coffee. Work, work, work. Grabbing a snack, which leads to some discussion about pair programming and the state of the world in general. Work, work, work.<br/><br/><i>7:30 pm:</i> Going to Charlie's to get my usual treat for the evening: a self-designed burger and a coke. This certainly feels like America. I meet Nicolai and his friend, who doesn't work for Google. I learn that we're allowed to bring visitors to dinner one or two times a month. We talk about real estate prices and I realize that living here is even more expensive than Munich. Dang!<br/><br/><a href='http://klimek.box4.net/blog/wp-content/uploads/2008/06/shoreline.jpg' title='Mountain View Shoreline'><img style="float:left; margin: 0 0 10px 10px" src='http://klimek.box4.net/blog/wp-content/uploads/2008/06/shoreline.thumbnail.jpg' alt='Mountain View Shoreline' /></a><i>8:15 pm:</i> I finally arrive at the apartment. I kill half an hour by starting Yet Another Blog Post. 
I want to write something about unit test size that features some modestly comical adult references (go figure). After some time I realize that I don't have enough high-energy content for this entry. I'll finish it eventually. Blogger's shortest joke.<br/><br/><i>9:00 pm:</i> A new episode of House starts. For some strange reason the ability to watch a show that doesn't run in Germany yet makes me feel childishly happy. This probably reveals things about my personality I don't even want to think about.<br/><br/><i>10:00 pm:</i> My favorite part of the day is talking to my wife. And this is PRIVATE! ... <br/><br/><i>10:30 pm:</i> Another day at Google has passed. I do some reading and eventually fall asleep. Just for the record: my dreams are private, too. Just in case you'd hoped for something. Good night!klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com6tag:blogger.com,1999:blog-3081442684129016822.post-38053551629616006222008-03-11T16:46:00.000-07:002011-05-30T13:07:33.181-07:00An Editor Independent Unittest ExecutorSince I got test infected I have somehow been unable to write a single line of untested code <a href="http://klimek.box4.net/blog/2007/11/01/fasten-your-seat-belts-chances-and-pitfalls-of-test-driving-your-development/">without feeling uneasy</a>. When I just want to write a tiny script containing a few lines of code in whatever text editor is installed on a system, it seems to be a daunting task to set up a programming environment that allows you to execute unit tests with a single click. But this single click is what makes writing unit tests unobtrusive enough to keep doing it.<br/><br/>So I'm quite fond of using a simple script to execute my script's unit tests whenever I save it. 
This concept is not new, and certainly not an original idea in itself, but the simplicity of an editor independent unit test executor in 10 lines of code has a certain appeal for me:<br/><pre class="code"><br/>#!/bin/bash<br/># Re-run the script's tests whenever its modification time changes.<br/>file_name=$1<br/>last_modification=""<br/>while true; do<br/>  current_modification=$(stat -c '%Y' "$file_name")<br/>  if [ "$current_modification" != "$last_modification" ]; then<br/>    clear<br/>    "$file_name" --test<br/>    last_modification=$current_modification<br/>  fi<br/>  sleep 1<br/>done<br/></pre><br/>This script stats the script file until it detects a change. Whenever a change is detected, the script is called with <i>--test</i>, which is my personal way to tell a script that it should just execute its unit tests and exit. See my blog post about <a href="http://klimek.box4.net/blog/2008/02/04/integrating-unit-tests-in-ruby-scripts/">integrating unit tests in Ruby scripts</a> to learn how this can be done in Ruby. A very similar approach is possible for Python:<br/><pre class="code"><br/>#!/usr/bin/python<br/>import unittest<br/>import sys<br/><br/>if sys.argv.count("--test") > 0:<br/>  sys.argv.remove("--test")<br/>  unittest.main()<br/></pre><br/><br/>Now I can simply call the test bash script, giving it the script under test as parameter:<br/><pre class="code"><br/>./run_tests.sh ./script_under_test.py<br/></pre><br/><br/>The beauty lies in the simplicity of the solution: even when I remote edit a script on some server with vi, I can simply launch a new console and execute run_tests.sh, watching the test results whenever I type ":w". <br/><br/><b>Update: The "sleep 1" really helps to keep I/O load down. Thanks to Philip for pointing this out. 
And yet another nice example of how hard it is to write 10 lines of bug-free code without a test.</b>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com2tag:blogger.com,1999:blog-3081442684129016822.post-83020082888403316382008-02-04T15:21:00.000-08:002011-05-30T13:07:33.182-07:00Integrating Unit Tests In Ruby ScriptsWhen I write Ruby scripts I like to use a single file, containing the program and all unit tests. It took me some time to find out how to add a command line switch to my Ruby scripts that makes them run their tests with full access to the Test::Unit command line arguments, while still being able to run the script normally without the test framework interfering with execution:<br/><br/><pre class="code"><br/>#!/usr/bin/ruby -w<br/>require 'test/unit'<br/><br/>if ARGV.include?("--test")<br/>  ARGV.delete_at(ARGV.index("--test"))<br/>else<br/>  Test::Unit::run = true<br/>  puts "Running..."<br/>end<br/></pre><br/><br/>Now you can simply run the program by typing<br/><pre class="code"><br/>$ ./testme.rb<br/>Running...<br/></pre><br/>or run the tests with<br/><pre class="code"><br/>$ ./testme.rb --test<br/>Loaded suite ./testme<br/>Started<br/><br/>Finished in 0.0 seconds.<br/><br/>0 tests, 0 assertions, 0 failures, 0 errors<br/></pre><br/>The nice thing is that only the first "--test" will be removed, so you can still leverage the Test::Unit command line argument interface.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com5tag:blogger.com,1999:blog-3081442684129016822.post-77242311883340931182007-12-25T17:19:00.000-08:002011-05-30T13:07:33.182-07:00My First OCaml Tests - So Close To Heaven!Some time ago a discussion in the testdrivendevelopment Yahoo-Group evolved around the concept of <a href="http://tech.groups.yahoo.com/group/testdrivendevelopment/message/26384">"testable languages"</a>. 
I thought about this for a while and came up with the idea that I want to be able to have <a href="http://tech.groups.yahoo.com/group/testdrivendevelopment/message/26424">expressions as first class citizens</a>:<br/><pre class="code"><br/>assertThat { assertThat(false) } abortsWith TestFailedException<br/></pre><br/>Today I played a little with <a href="http://caml.inria.fr/ocaml/">OCaml</a>, sorted my functional programming skillz out, and finally arrived at my first test driven unit test environment for OCaml! Note that it's nearly what I wanted to be able to write, but unfortunately there's not enough syntactic sugar for lambda expressions (or I didn't find out, yet), so I'm stuck with using the quite ugly ( function () -> expression ) syntax. But hey, it's really close to heaven.<br/><pre class="code"><br/>exception TestFailed<br/>exception TestError<br/><br/>let failsWith expectedError expression =<br/> try<br/> expression ();<br/> false<br/> with error -><br/> expectedError = error<br/><br/>let isTrue expression: bool =<br/> expression<br/><br/>let isFalse expression: bool =<br/> not (expression)<br/><br/>let assertThat expression conditionMatchesOn =<br/> if not (conditionMatchesOn (expression)) then <br/> raise TestFailed <br/> else <br/> ()<br/><br/>let _ = (<br/> assertThat true isTrue;<br/> assertThat (isTrue true) isTrue;<br/> assertThat (not (isTrue false)) isTrue;<br/> <br/> assertThat false isFalse;<br/> assertThat (isFalse false) isTrue;<br/> <br/> assertThat (TestFailed = TestFailed) isTrue;<br/> assertThat (failsWith TestError (function () -> raise TestError)) isTrue;<br/> assertThat (failsWith TestFailed (function () -> raise TestError)) isFalse;<br/> assertThat<br/> ( function () -> assertThat false isTrue )<br/> ( failsWith TestFailed );<br/><br/> Printf.printf 
"OK\n";<br/>);;<br/></pre>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com0tag:blogger.com,1999:blog-3081442684129016822.post-3917525351615112742007-11-25T10:28:00.000-08:002011-05-30T13:07:33.182-07:00XPDays Germany 2007 - Ideas Going MainstreamI had the opportunity to take part in the <a href="http://xpdays.de/2007/de/index.html">XPDays Germany</a> last week. The company I work for enabled Uwe, our project lead, Holger and me to participate. It all started with a three and a half hour ride from Munich to Karlsruhe where we heroically overcame a nearly empty tank, a shaking car that felt like it just drank the wrong kind of gas and my own card reading skills - or lack thereof. <br/><br/><h3>Day 1</h3><br/>In the end we arrived on time for a <a href="http://en.wikipedia.org/wiki/Randori">Randori</a> session by Dave Nicolette and Rod Coffin. From the moment I learned of this <a href="http://xpdays.de/2007/sessions/TDD-Randori-and-Fishbowl.html">experimental learning session</a> where two people sit in front of a computer and test drive a piece of code while the whole audience is throwing in questions, I was kind of scared of the prospect of being watched while writing code by a hundred people - which is probably kind of normal, given that some people even <a href="http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=42&t=000931">dislike being watched while pair programming</a>.<br/><br/>The session turned out to be very interesting. One of the key elements of this style of learning is that the audience's energy level stays high for a long time - you have to pay close attention, since due to the random selection of the next person to come to the front you could always be this person. And since you don't want to look like a fool when doing stuff in front of a hundred people (um, what was the problem, again?) 
my adrenaline level alone was enough to keep me awake.<br/><br/>The other thing I learned from this experience, besides a new way to coach technical stuff, is that there is a very good reason we do "pair programming" and not "group programming". Throwing two brains at a problem can be a very powerful way to solve programming tasks, but in some situations throwing a hundred brains at a very simple programming example felt like sitting in one of Dilbert's most unproductive meetings:<br/><br/>"So we add the game to the character: character.addGame(game)"<br/>"But why don't you add the character to the game?"<br/>"I see duplication, I see duplication!"<br/>"But isn't this, um, less expressive?"<br/>"Duplication is bad!"<br/>"Why do you assign null to a variable, this is done automatically"<br/><br/>After some time of silent observation I realized that sometimes I am exactly like this! So to all of you who have to cope with me on a daily basis: just hit me on the head with a big club from time to time.<br/><br/>The next session was a series of "lightning talks" about all kinds of Agile topics where I learned about <a href="http://alistair.cockburn.us/index.php/Crystal_methodologies_main_foyer">Alistair Cockburn's Crystal</a>, which is a set of methodologies built upon the Agile principles and an extreme tailoring approach.<br/><br/><h3>Day 2</h3><br/>Despite the enormous amount of a whole liter of "badisches Helles" I had in the evening I was wide awake and ready to suck in new ideas during the conference's main part. The day started with an interesting presentation by Dave Nicolette about how to communicate TDD and design debt to your management. 
I was particularly stunned by the fact that he talked about the cost of design debt and the refactoring part in the TDD cycle for a very long time without mentioning the cost of fixing an error relative to when it is found, or the shortened feedback cycles that TDD provides.<br/><br/>The next presentation was titled "why Agile projects fail", but turned out to be about why projects fail in general and provided some insight into the ideas of root cause analysis (<a href="http://www.isixsigma.com/library/content/c020610a.asp">5 Why</a>), the dimensions in which failure can occur (<a href="http://www.ambysoft.com/essays/brokenTriangle.html">The Broken Triangle</a>), and the psychological factors that are the real cause of ineffective development practices.<br/><br/>After lunch the keynote by "Dark Side" Rod Austin from HBS showed how Agile development fits the icy wind of change that is sweeping the world's leading companies from a cost-competitive to an innovation-driven business model. After all, who doesn't want a <a href="http://www.myvipp.com/">designer trash bin</a>?<br/><br/>Stefan Roock's talk on "Simplicity in Software Projects" was a very entertaining lecture on how easy it is to get so accustomed to complexity that you don't even realize how simple things could be. Well, that and that the Borg are the only entities in the universe who understand that when you travel through space <a href="http://en.wikipedia.org/wiki/Image:Borg_Cube_Model_1.JPG">aerodynamics is pointless</a>.<br/><br/>In the end it was very interesting to see big German companies like SAP, EADS and Siemens take an interest in Extreme Programming. 
Looks like those ideas are finally going mainstream.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com5tag:blogger.com,1999:blog-3081442684129016822.post-29443687926764954752007-11-16T16:26:00.000-08:002011-05-30T13:07:33.182-07:00Does "Test After" Work For You?"Why not write a test for this?"<br/>"Why should I, it works..."<br/><br/>The idea of Test-After Development is to write a set of automated white-box tests after writing your production code. Since probably every CS student in the world has learned that unit tests are a good idea, you'd expect unit testing to have been an industry standard for quite a while now. Interestingly, the idea of automated unit and integration tests has only lately become more popular, due to the widespread use of Test-Driven Development.<br/><br/>So why do we need Test-Driven Development to be able to efficiently write automated unit tests?<br/><ul><br/><li>If you write your code first and don't think about how to test the code, the code will not be testable. Thus testing becomes expensive and frustrating. Test-Driven Development will guide your software design by the old mantra of "how-do-I-want-to-use-this-class", leading to a highly decoupled design.</li><br/><li>When you write your tests, you'll discover a lot of errors. But instead of the red bar in Test-Driven Development, which you <em>expect</em>, the red bar in Test-After Development is the demotivating sword of reality.</li><br/><li>The most important reason why I have never seen Test-After Development work is that developers just don't believe in errors once they have written the code. This seems to be an eternal wisdom of software development psychology: once the code works, why bother testing it? 
Let's just implement the next feature.</li><br/></ul>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com5tag:blogger.com,1999:blog-3081442684129016822.post-7673242615596456942007-11-01T17:17:00.000-07:002011-05-30T13:07:33.182-07:00A Program Is Born: QCMake's First Functional TestWhen you write a GUI that is just a thin layer for an existing business layer and you don't see how to integrate test fixtures into this business layer, you'll be down in the dirty functional testing work very quickly. This happened to me today when I tried to write my first test for a small Qt facade object for cmake.<br/><br/>I started the test very enthusiastically: To test cmake I create a directory, cd into that directory, create a CMakeLists.txt and let cmake create a CMakeCache.txt. In the end I know that cmake ran when CMakeCache.txt exists.<br/><pre class="code"><br/>void QCMakeControlTest::shouldExecuteCMakeInTheCurrentDirectory()<br/>{<br/> QDir currentDirectory;<br/> QDir testDirectory(currentDirectory.path() + "/ExecuteInCurrentDirectory");<br/> QVERIFY(currentDirectory.mkdir(testDirectory.dirName()));<br/> QVERIFY(QDir::setCurrent(testDirectory.path()));<br/>}<br/></pre><br/><br/>I hit F5 and everything runs just fine. Once. The second time the directory ExecuteInCurrentDirectory already exists. Of course to have a nice and clean starting point the test must remove the test directory if it already exists. So I added:<br/><pre class="code"><br/>void QCMakeControlTest::shouldExecuteCMakeInTheCurrentDirectory()<br/>{<br/> QDir currentDirectory;<br/> QDir testDirectory(currentDirectory.path() + "/ExecuteInCurrentDirectory");<br/> if(currentDirectory.exists(testDirectory.dirName())) <br/> {<br/> QVERIFY(currentDirectory.rmdir(testDirectory.dirName()));<br/> }<br/> QVERIFY(currentDirectory.mkdir(testDirectory.dirName()));<br/> QVERIFY(QDir::setCurrent(testDirectory.path()));<br/>}<br/></pre><br/>Green. Perfect. 
Now let's create a CMakeLists.txt.<br/><pre class="code"><br/>void QCMakeControlTest::shouldExecuteCMakeInTheCurrentDirectory()<br/>{<br/> QDir currentDirectory;<br/> QDir testDirectory(currentDirectory.path() + "/ExecuteInCurrentDirectory");<br/> if(currentDirectory.exists(testDirectory.dirName())) <br/> {<br/> QVERIFY(currentDirectory.rmdir(testDirectory.dirName()));<br/> }<br/> QVERIFY(currentDirectory.mkdir(testDirectory.dirName()));<br/> QVERIFY(QDir::setCurrent(testDirectory.path()));<br/><br/> QFile cmakeLists("CMakeLists.txt");<br/> QVERIFY(cmakeLists.open(QIODevice::ReadWrite));<br/>}<br/></pre><br/>Green again. Once. The test fails the second time it's executed:<br/><pre class="code"><br/>********* Start testing of QCMakeControlTest *********<br/>Config: Using QTest library 4.3.2, Qt 4.3.2<br/>PASS : QCMakeControlTest::initTestCase()<br/>FAIL! : QCMakeControlTest::shouldExecuteCMakeInTheCurrentDirectory()<br/> 'currentDirectory.rmdir(testDirectory.dirName())' returned FALSE. ()<br/>..\..\..\..\..\Source\CMake\Source\QTDialog\qcmaketest\QCMakeControlTest.cpp(30) :<br/> failure location<br/>PASS : QCMakeControlTest::cleanupTestCase()<br/>Totals: 2 passed, 1 failed, 0 skipped<br/></pre><br/>Yep, no problem, all I need to do is rmdir recursively. Just a quick glance into the Qt docs. But I found nothing. Well, it's not too hard to implement a recursive rm -rf, but still... I was so sure that this function would be hidden somewhere that I spent more time googling and doc-reading than implementing it when I finally realized that I was on my own. 
So in the end the test looked a little bloated:<br/><pre class="code"><br/>#include "qcmaketest/QCMakeControlTest.h"<br/><br/>#include "qcmakeui/QCMakeControl.h"<br/><br/>bool removeRecursiveForced(QDir& directory, const QFileInfo& entry)<br/>{<br/> if(!entry.isDir())<br/> {<br/> return directory.remove(entry.fileName());<br/> }<br/> QDir directoryEntry(entry.filePath());<br/> QList<QFileInfo> entries(directoryEntry.entryInfoList<br/> (QDir::Dirs | QDir::Files | QDir::NoDotAndDotDot));<br/> for(int entryIndex = 0; entryIndex < entries.count(); ++entryIndex)<br/> {<br/> if(!removeRecursiveForced(directoryEntry, entries.at(entryIndex)))<br/> {<br/> return false;<br/> }<br/> }<br/> return directory.rmdir(entry.fileName());<br/>}<br/><br/>void QCMakeControlTest::shouldExecuteCMakeInTheCurrentDirectory()<br/>{<br/> QDir currentDirectory;<br/> QDir testDirectory(currentDirectory.path() + "/ExecuteInCurrentDirectory");<br/> if(currentDirectory.exists(testDirectory.dirName())) <br/> {<br/> QVERIFY(removeRecursiveForced(currentDirectory, <br/> QFileInfo(currentDirectory, testDirectory.dirName())));<br/> }<br/> QVERIFY(currentDirectory.mkdir(testDirectory.dirName()));<br/> QVERIFY(QDir::setCurrent(testDirectory.path()));<br/><br/> QFile cmakeLists("CMakeLists.txt");<br/> QVERIFY(cmakeLists.open(QIODevice::ReadWrite));<br/><br/> QCMakeControl qCMakeControl;<br/> qCMakeControl.configure();<br/><br/> QFile cmakeCache("CMakeCache.txt");<br/> QVERIFY(cmakeCache.exists());<br/>}<br/><br/>#include "QCMakeControlTest.moc"<br/></pre><br/>At least I have an idea where this could lead me - a nice class to generate a clean cmake directory. But let's see whether I'll be right, perhaps YAGNI will finally get back at me. 
And if you know an easier way to delete a directory recursively with Qt, please leave a comment.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com2tag:blogger.com,1999:blog-3081442684129016822.post-77921813801441567562007-10-31T17:32:00.000-07:002011-05-30T13:07:33.182-07:00Fasten Your Seat Belts! - Chances And Pitfalls Of Test Driving Your
DevelopmentTen years ago I had a rendezvous with a beautiful girl, and at the end of the evening I gave her a ride home. Back then I thought playing the gentleman was posh, so I opened the door for her. She slid with one elegant movement into her seat and I paced around the car and folded my wiry frame behind the steering wheel. I looked at her.<br/><br/>"Don't you want to start?", she asked and watched me curiously. "Um", I said, obviously always finding the right words at the right moment. "Um. - Not before you buckle up...". She frowned at me: "But I never buckle up". I replied "Well, if you don't buckle up, we're not gonna go anywhere tonight". "Oh come on!", she now somewhat furiously stated. "Nope!" I insisted eloquently, finally feeling her shield of stubborn resistance falter. It took a few seconds before she realized that I really wouldn't drive her home without her being properly protected from falling through the windshield and decomposing her pretty head by hitting the nearest fireplug. So she buckled up.<br/><br/>Even back then I was so used to the secure feeling of the protective belt that just thinking of driving unbelted drove an uneasy quiver through my guts. This feeling is so strong that even when I move the car just a few inches without buckling up, a nagging awareness makes me want to fasten the seat belt immediately.<br/><br/>Today I felt exactly the same way while writing code.<br/><br/><h3>The path of the test</h3><br/>At the beginning of the last iteration we identified a story that affected some legacy modules in our code base. We recognized that the changes we needed to make would touch more code than we had thought, so we decided to try to test drive a part of the system from scratch to replace the tangled old code. So Richard, Reinhard and I started to pair on the story alternately.
Besides some private experiments with a Sudoku solver in Java this was the first time I was doing real full-time TDD pair programming for a couple of days. Aside from some initial irritation and the constant realization that pair programming is hard to learn I was quickly pulled into the red-green-refactor cycle, as usual. But this time I held the pace for longer than ever. And was pulled into the cycle deeper and deeper. Write a test, make it work, look for redundancy. Write a test, make it work, look for redundancy. Write a test...<br/><br/>Today I wanted to quickly integrate the changed interface into an existing module. I didn't have a test yet. The cpp file was readily opened in my editor. Just a quick edit, nothing more than integrating this interface. A few simple edits. Only three lines or something. And suddenly a nagging question materialized in my head:<br/><br/><b>How can I make sure this works?</b><br/><br/>At this moment I felt like driving unbuckled. I felt unsafe. I wanted my cozy safety net back. Like an addict I went for the next test.<br/><br/><h3>What use is a seat belt when you hit a tree at 200 MPH?</h3><br/>Since I'm the one driving the adoption of XP in our company, I wanted to try TDD for myself on a safe playground to learn more about the ins and outs before applying it at work. Java has really nice tools for TDD, so I started test driving a small Sudoku solver in Java. This was my first real test driven code and I often wondered how nicely the test suite covered my errors. True to Agile fashion, I began with a really straightforward brute force implementation. Everything went a lot more smoothly than I had expected and after some coding I had a simple solution that needed over 90 seconds for one simple Sudoku.<br/><br/>After a while I wanted to optimize the runtime. So I introduced some caching variables.
I struggled with the failing tests as my solution grew more and more sophisticated, but the tests helped me get a deeper understanding of the real problem. Finally I arrived at a point where the algorithm managed to work through 1400 Sudokus in less than a second. I was thrilled. And I wanted more. So I installed a profiling framework to find out where the next optimization sweet spot would be hidden. When I browsed the profiling data I realized that the real solver didn't even <em>call</em> the algorithm. So I had benchmarked a program that didn't solve any Sudoku at all.<br/><br/><b>At this moment I felt like hitting a tree at 200 MPH, suddenly realizing that it is not a good idea to drive that fast into a 90-degree turn on a wet street, even if you have a seat belt.</b><br/><br/>After the blood had returned to my head on its way to my brain I implemented a test into the main program to check every solution with a simple algorithm before claiming to have solved anything. In the meantime I have a solution that runs 1400 Sudokus in 6 seconds on my Core 2 notebook. I'm even quite convinced that I got the solution part correct...<br/><br/><h3>Do I get my driver's license?</h3><br/>So, here are the lessons I learned on my path to the test:<br/><ul><br/><li><b>Don't rely on your tests too quickly.</b><br/>If you want to heed the XP advice to "test everything that can possibly break", be aware that it's often the things you think can't break that finally do.<br/></li><br/><li><b>Use a healthy mixture of tests on all abstraction levels.</b><br/>Unit and functional tests are orthogonal - they cover different aspects of the code. But of course you'll already have a lot of unit <em>and</em> functional tests if you don't rely on your tests too quickly.<br/></li><br/><li><b>Buckle up!</b><br/>The unsafe feeling while trying to modify code without having a test was a very impressive experience for me.
I know that from now on I'll fasten my code's seat belt.<br/></li><br/></ul>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com7tag:blogger.com,1999:blog-3081442684129016822.post-65219897853670864502007-10-29T16:03:00.000-07:002011-05-30T13:07:33.183-07:00Can You Remember Hungarian TLAs?<b>Scene 1.</b> Karl and J.B. are pairing on a small web service. Karl is just returning to the workplace with a steaming hot cup of coffee in his hand.<br/><br/><b>Karl:</b> 'Let me see what you wrote just now... <br/><pre class="code">usUserName = request.getParameter("UserName");</pre><br/>Um... This variable, usUserName, what does the <em>us</em>-prefix stand for?'<br/><b>J.B.:</b> 'Well, that is the unescaped user name the way we get it from the user. I wanted to make sure that we don't accidentally write it into a database or send it back in its evil, unescaped form to the web browser. If we use the us-prefix every time we have an unsafe string, we'll immediately recognize any error that could otherwise escape us because we will learn to look for such errors. This is the application Hungarian notation I read about over at <a href="http://www.joelonsoftware.com/articles/Wrong.html">Joel's site</a>, where you actually use a prefix that has a meaning instead of just a shorthand for the type.'<br/><b>Karl:</b> 'So... why not just call it <em>unescapedUserName</em>?'<br/><br/><h3>Confusion</h3><br/>In a time when you enter veryLongVariableNames by typing 'v', 'e', 'r', Ctrl-Space, I don't see why we can't finally get rid of <a href="http://en.wikipedia.org/wiki/Three-letter_acronym">TLAs</a>. You know that the Hungarian notation got out of hand when your colleagues check in code that changes ucpBuffer to pucBuffer ("fixed a segfault"). Why not just name a variable for what it contains, in plain old English?
I definitely know that I should think about my method name if my partner asks during a pairing session: "And what exactly do you intend to do in this method?".<br/><br/>In which example is the error easier to spot? Does the second example really take longer to write? To read? To understand?<br/><pre class="code"><br/>for(unsigned int i = 0; i < iLineCount(); ++i)<br/>{<br/> for(unsigned int j = 0; j < iNodeCount(i); ++i)<br/> {<br/> pGetNode(i, j)->layout();<br/> }<br/>}<br/></pre><br/><pre class="code"><br/>for(unsigned int lineIndex = 0; lineIndex < getLineCount(); ++lineIndex)<br/>{<br/> for(unsigned int nodeIndex = 0; nodeIndex < getNodeCount(lineIndex); ++lineIndex)<br/> {<br/> getNode(lineIndex, nodeIndex)->layout();<br/> }<br/>}<br/></pre><br/>If you can really remember mnemonic prefix TLAs (or any TLAs for that matter) and think Hungarian notation or abbrVarNames are a great way to save yourself some typing, please let me know.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com2tag:blogger.com,1999:blog-3081442684129016822.post-73248465520836711262007-10-28T11:29:00.000-07:002011-05-30T13:07:33.183-07:00Struggling to TDD a GUI application<a href="http://www.cmake.org">CMake</a> is one of the best build tools out there. It has a nice command line interface and comes with an even nicer GUI. Unfortunately the GUI is MFC based, which means you need a VC professional license to build it for Windows and you can't use it on Linux.<br/><br/>Since <a href="http://trolltech.com">Trolltech</a> released its wonderful GUI framework Qt for open source Windows development some time ago, I decided to combine my eagerness to learn TDDing GUI apps with my need for a nice cmake GUI - and to start developing <a href="http://www.sourceforge.net/projects/qcmake">qcmake</a>.<br/><br/>The first priority for me was to learn how to TDD a GUI application.
CMakeSetup, the MFC application qcmake should be able to replace, has a very simple single-window interface, so this should be the ideal playground to get an idea of the basic GUI testing problems.<br/><br/><h3>Setting up the testing framework.</h3><br/>The first step to successful TDD is to set up a test environment where you can execute your tests with a single keystroke from within your development environment. I spent some time integrating Qt's testing framework <a href="http://doc.trolltech.com/4.3/qtestlib-manual.html">qtestlib</a> into ctest. Hitting F5 from my Visual Studio Express executes all the tests. If something goes wrong, the qtestlib framework prints the debug output into the Visual Studio output window. This way I can just click on the error message to find the offending code, or just set a breakpoint and step through my personal mess...<br/><br/><h3>Top-Down or Bottom-Up - the duck's decision</h3><br/>The testing framework is ready and eagerly waiting for its first real test. But somehow I don't know where to start.
The options are quite simple: either the good ol' bottom-up approach, implementing one layer upon another until I reach the top, or the top-down development 2.0 methodology where everything is faked or mocked, slicing the whole vegetable vertically until the feature is finished.<br/><br/>Since the top-down approach resembles the design-driven process the most (plus the running tests, minus some heavy documents) and Heusser & McMillan's presentation <a href="http://www.youtube.com/watch?v=PHtEkkKXSiY">Interaction Based Testing</a> at GTAC made my mouth water (I really like chocolate flakes), I thought I'd go for the top-down method.<br/><br/><h3>My first user interface test</h3><br/>And finally my first test looks like this:<br/><pre class="code"><br/>#include "QCMakeTest.h"<br/>#include "QCMakeWidget.h"<br/><br/>#include <QtTest/QTestMouseEvent><br/><br/>void QCMakeTest::shouldEmitConfigureSignalOnConfigurePressed()<br/>{<br/> QCMakeUi::QCMakeWidget* qCMake = new QCMakeUi::QCMakeWidget();<br/> QSignalSpy configurePressed(qCMake, SIGNAL(configure()));<br/> QTest::mousePress(qCMake->getConfigureButton(), Qt::LeftButton);<br/> QCOMPARE(configurePressed.count(), 1);<br/>}<br/><br/>QTEST_MAIN(QCMakeTest)<br/>#include "QCMakeTest.moc"<br/></pre><br/><br/>That was a lot of work just to get started with a simple test and basically no functionality. Fortunately I have some TDD experience to build upon, and right now this experience tells me that the up-front effort will pay off in the short run due to not debugging a lot. Up-front effort, quicker development, isn't that what BDUF was all about? I'm curious where all this will lead me...klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com2tag:blogger.com,1999:blog-3081442684129016822.post-77304497399237604542007-10-20T07:40:00.000-07:002011-05-30T13:07:33.183-07:00Big Design Up Front vs. Just Enough Design InitiallySoftware is complicated. More often than not it's a complicated mess.
Sometimes even a tangled complicated mess. And wherever you look all you see is tradeoffs. There are no easy solutions (tm). The <a href="http://en.wikipedia.org/wiki/DLL_hell">dll hell</a> is replaced by the side-by-side hell. Emacs is better than Vim, Vim is better than the Visual Studio editor and the Visual Studio editor is better than Emacs. The Visual Studio editor even has a kill-ring (Ctrl-Shift-Ins). But Emacs has a web-browser. Vim is way cooler because I can't remember the commands, even though they're orthogonal. To what? Why not write a new editor in Erlang. Well, no, not me, I just want an editor that has all the features of Emacs, Vim and Visual Studio. Now. But without the bloat of Emacs. Or Visual Studio. More slick, just like Vi.<br/><br/>Since the early days of computer science, when software developers still had to wear suits at work and wrote A.I.s in Cobol, um, COBOL, with their feet, people tried to find out how software development could be made less complicated. And they soon discovered that the secret sauce is <a href="http://en.wikipedia.org/wiki/Abstraction">abstraction</a>. <br/><br/><h3>Abstraction</h3><br/>Layers. Components. Modules. Interfaces. Design. Architecture. It's so easy: define an architecture, think of layers, interfaces, modules. Create a nice design that meets this architecture's goals. Hire a bunch of developers to implement the components. <br/><br/>From this level of abstraction it really sounds easy. This is why it's called abstraction: it hides the complicated details. The good thing is that as long as you work on this level of abstraction, it's cheap to change your concept. Or as Joel Spolsky says:<br/><blockquote cite="http://www.joelonsoftware.com/items/2007/09/06.html">Designing a feature by writing a thoughtful spec takes about 1/10th as much time as writing the code for that feature—or less.</blockquote><br/><br/>Well, then it's obviously a very good idea to do all the design first.
After all, if you change your design, you'll have to change your implementation. As long as you didn't start writing code, changing your design is easy. Or even better, start at the architecture level. Hire the best consultants to create the perfect architecture. Hire some really bright guys to do your design. In the end, a bunch of monkeys can do the implementation. The dream of the pointy-haired boss came true!<br/><br/>"Um. Sounds easy. So, how do we know that our design is good?"<br/>"This is easy: experience."<br/>"But to get experience I'd have to actually <em>try</em> the design, wouldn't I?"<br/>"Yes, of course."<br/>"So, if my design is not perfect in the first place, I'll learn this only when I try to implement it?"<br/>"Well, yes, get to the point."<br/>"Then how can I finish my design before the implementation phase?"<br/>"Um. Well. You just do iterations. Big iterations, I guess, because design is so much easier to change."<br/>"So I work for months on a design that I don't even know I'll be able to implement?"<br/>"Perhaps... easier to. Um, change..."<br/>"And when I finally find out that my design was crap, I'm in the implementation phase, a deadline looming on the horizon and no time to change the design and all the code that was already written?"<br/>"... - well, is there a different way?"<br/><br/><h3>Feedback</h3><br/>Tradeoffs again. Working with abstractions means getting less feedback. "I'll take the chair and hit the sentinel" will be a hard job if the chair turns out to weigh a hundred pounds. <br/><br/>And feedback is important. One of the laws of software development is:<br/><b>The longer it takes until you find out that you made an error, the more costly it is to fix that error.</b><br/><br/>This means that you should try to find your errors as quickly as possible.
But when you're working on a high abstraction level, you just don't know all the complicated details because, well, that's why you're working on that high abstraction level, isn't it? So you'll find out that your design is crap when you're in the "implementation phase", at which point nobody has time to change the design. So you just live with the crappy design and run around cursing the designer and hating your job.<br/><br/><h3>Fail!</h3><br/>One solution for discovering your errors early is to do <q>Ultra Extreme Elite Programming</q> (Joel Spolsky). Design just enough up front that you get an idea of where you're going, write the target down as a test and sit down with a colleague to find a redundancy-free implementation. When you find out that your initial design is crap, which you'll do very quickly, rely on your tests to help you refactor your code to a better design. Of course, as Joel puts it so beautifully, this is like <q>driving around with the handbrakes on</q>.<br/><br/>The question is whether driving around with the handbrakes on is really slower than driving at full speed with your eyes closed and a plan. I think it mostly depends on where you want to end up.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com4tag:blogger.com,1999:blog-3081442684129016822.post-11873165011837193662007-08-28T10:13:00.000-07:002011-05-30T13:07:33.183-07:00Defects On Sale!Today after our planning game I did a short poll on how the guys perceive test driven development and pair programming. We've been trying to do both for some time now, and since I take the blame for introducing both practices, I feel I'm somewhat - um - preoccupied on that matter. A few days ago, I was caught totally off guard when Richard told me that, well, he doesn't believe programming in pairs is more productive. Bummer.
And I had believed my show to be grand circus.<br/><a name='more'></a><br/><br/>Coming down to earth from my alien space shuttle of imaginative knowledge in the face of uncertainty I realized that I didn't really have a clue what my teammates thought about our recent process improvement tactics. I figured the easiest way to find out would be to ask them. So I did a short poll. There were six people. Including me. I asked four questions:<br/><br/><table border="0" cellspacing="3" width="100%"><br/> <tr><br/> <td>Question</td><br/> <td>Yes!</td><br/> <td>No!</td><br/> </tr><br/> <tr><br/> <td><em>Do you think TDD makes you more productive?</em></td><br/> <td>3</td><br/> <td>3</td><br/> </tr><br/> <tr><br/> <td><em>Do you think TDD leads to better quality?</em></td><br/> <td>6</td><br/> <td>0</td><br/> </tr><br/> <tr><br/> <td><em>Do you think pair programming makes you more productive?</em></td><br/> <td>3</td><br/> <td>3</td><br/> </tr><br/> <tr><br/> <td><em>Do you think pair programming leads to better quality?</em></td><br/> <td>6</td><br/> <td>0</td><br/> </tr><br/></table><br/><br/>Now this is an interesting bite from the apple of knowledge: while we all seem to agree that pair programming and TDD increase code quality, half of the guys think that this rise in quality comes with a cost in overall productivity. Unfortunately shooting them with my nerf gun didn't help to teach them reason, so I concluded that the half I am in may be wrong. Perhaps.<br/><br/>But since I usually don't give in that fast I pondered over this anomaly of perception during our two-year wedding anniversary dinner. While I munched down a deliciously flavorsome tenderloin, Anna proposed that maybe, if you believe that TDD and pair programming don't increase productivity, you don't expect to make any errors.
While the implication would be true, the poll's data seems to suggest that <em>all</em> of the guys think that the practices improve quality - which implies that they expect to make errors.<br/><br/>So when we arrive at a point where we are self-conscious enough about our code to expect ourselves to err frequently, a simple question remains:<br/><br/><b>What Is The Relation Between Quality And Effort?</b><br/><br/>This is where a little math may help... Let's define the overall effort of a feature as the effort it takes to produce a certain function in lines of code (how crude!) plus the effort to fix the expected errors. The oversimplified measure of programming tasks in lines of code is, of course, questionable to the degree of calling it excrement of horned mammals. On the other hand it allows me to do a quick-and-dirty worst-case pi-times-thumb calculation.<br/><pre class="code"><br/>effort(feature) -><br/> codingEffort(linesOfCode(feature)) + <br/> expectedFixingEffort(linesOfCode(feature))<br/></pre><br/>Let's further simplify (yuk) by defining the coding effort as directly proportional to the lines of code of the feature:<br/><pre class="code"><br/>codingEffort(numberOfLines) -><br/> codingEffortPerLine * numberOfLines<br/></pre><br/>Excessive googling (and IEEEing) informs us that the defect rate is normally defined as <em>defects per thousand lines of code</em>. So without test driving my functions I'd expect the expected fixing effort to be something along the lines of:<br/><pre class="code"><br/>expectedFixingEffort(numberOfLines) -><br/> fixingEffortPerDefect * (defectRate / 1000) * numberOfLines<br/></pre><br/><br/>But where does this lead? Good question. My answer is even more assumptions: Perhaps we can agree that if we make errors (and we do, don't we?) introducing practices that increase quality allows us to trade coding effort (up-front effort) for fixing effort.
If you read carefully, perhaps you ask whether I may exchange effort for cost arbitrarily... well, technically, no, but since I'm a software developer the Flying Spaghetti Monster may smile forgivingly upon my unworthy soul.<br/><br/>For example, when I do pair programming and my partner finds an error that I didn't see, the effort of this lapse is about:<br/><ol><br/><li>"hey, shouldn't that read '>=' instead of '>'?"</li><br/><li>"oh, yeah, 'course"</li><br/><li>*clickety-click*</li><br/></ol><br/>-- 3 seconds --<br/><br/>When such a defect is not found until the product is in the field, the effort of fixing the error is:<br/><ol><br/><li>Cost of the error for the customer (lost money, lost customers, being angry, beating up the pup)</li><br/><li>Reporting the error to the provider</li><br/><li>Checking the error logs and dealing with the customer</li><br/><li>Reporting the error to our hotline</li><br/><li>Checking the error at our site and finding out what the error really is</li><br/><li>Reporting the error to our development</li><br/><li>Prioritizing the error</li><br/><li>Trying to reproduce the error and find out what the customer <em>really</em> did</li><br/><li>Finding the error</li><br/><li>Fixing the error</li><br/><li>Building a new patch-release</li><br/><li>Testing the patch-release</li><br/><li>Getting the patch-release approved by the customer</li><br/><li>Updating the live units with a certain probability of update-death</li><br/><li>(More indirect cost due to loss of trust, etc)</li><br/></ol><br/>-- um, more than 3 seconds, definitely --<br/><br/>I think it is not presumptuous to claim that increasing quality <em>may</em> also increase overall productivity if the expected effort to fix an error is high enough with regard to the expected decrease of errors due to better quality.
The refined question is<br/><br/><b>What does a worst case error effort scenario look like in the break-even point of quality against productivity?</b><br/><br/>Let's assume we know a practice that increases our coding effort by a factor (additionalEffort > 1) and improves our error rate by a different factor (defectRateImprovement in [0;1[). For the practice to be effort-efficient the overall effort without implementing this practice must be greater than the overall effort when using the practice. Using the already defined formulas this yields:<br/><pre class="code"><br/>(codingEffortPerLine * numberOfLines) + <br/>(fixingEffortPerDefect * (defectRate / 1000) * <br/> numberOfLines)<br/>><br/>(additionalEffort * codingEffortPerLine * numberOfLines) +<br/>(fixingEffortPerDefect * <br/> (defectRate * defectRateImprovement / 1000) * <br/> numberOfLines)<br/></pre><br/>Tackling this equation with a load of 7th-grade mathematics gives:<br/><pre class="code"><br/>fixingEffortPerDefect * (defectRate / 1000) * <br/> (1 - defectRateImprovement)<br/>><br/>codingEffortPerLine * (additionalEffort - 1)<br/></pre><br/>Should this innocent-looking inequality be close enough to reality to make any sense, we could conclude that<br/><ul><br/><li>After you cut down the defect rate by a factor of two, cutting it by yet another factor of two would require twice the opportunity cost. Which means that halving your defect rate gets more and more expensive with regard to the opportunity cost of letting the defect go wild.</li><br/><li>If you know your current defect rate and your current price per defect, you can guess whether <em>the defect-reducing effort</em> spent for a certain practice will be cost efficient. Of course a practice may and probably will have other impacts. But that's a different bed-time story.
Featuring a hungry gorilla and a beautiful princess.</li><br/></ul><br/><br/>Now that we've got a nice equation we can torment it with some values, fed to our greedy mouths by the power of the Flying Spaghetti Monster. Let's assume that we have a defect rate of 20 defects per 1000 lines of code (which a google search reveals to be considered somewhat "normal"). Let's now assume that our practice increases coding effort by a factor of 2 (which is the worst case for pair programming, obviously). Let's further assume that this will find one tenth of all errors directly when they're introduced (fixing the errors in this phase is covered easily by the effort factor of 2). Watch and behold, 3rd-grade maths:<br/><pre class="code"><br/>fixingEffortPerDefect * (20 / 1000) * (1 - 0.9)<br/>><br/>codingEffortPerLine * (2 - 1)<br/></pre><br/>... or ...<br/><pre class="code"><br/>fixingEffortPerDefect > codingEffortPerLine * 500<br/></pre><br/>This means that for a defect rate of 20 errors per 1000 lines of code using a practice that doubles your coding effort and finds a tenth of the errors during coding will save you some bucks if the expected effort of fixing an error is more than 500 times the effort of writing a single line of code.<br/><br/>If you want even more numbers, let's further assume that <a href="http://www.qsm.com/FPGearing.html">in C++ you need 60 lines of code per function point</a> (now we get really braggy) and that you can somehow earn $200 per function point. This means that our practice lowers overall cost if the expected price per defect is greater than about $1600.<br/><br/>It all boils down to this: If you work in an environment where the average price per defect found outside the holy halls of your development team is greater than 2000 bucks, introducing a technique that doubles the coding effort to prevent a tenth of the errors will reduce development cost and thus increase productivity.
Well, if I really did a worst case analysis and didn't mess up the seventh grade maths up there, that is. <br/><br/>Do you think a total expected cost of $2000 per defect is a lot? Does this apply to your work environment? Do you actually have any clue how much your favorite defect costs you today?klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com1tag:blogger.com,1999:blog-3081442684129016822.post-41712440578162116442007-07-01T13:58:00.000-07:002011-05-30T13:07:33.183-07:00Understanding The Fuzz About Engineering In Software DevelopmentSteve McConnell responds to Eric Wise's article <a href="http://codebetter.com/blogs/eric.wise/archive/2007/06/26/rejecting-software-engineering.aspx">Rejecting Software Engineering</a> that <a href="http://forums.construx.com/blogs/stevemcc/archive/2007/06/28/software-engineering-ignorance-part-ii.aspx">Rumors of Software Engineering's Death are Greatly Exaggerated</a>. There's a lot of fuzz about the usage of the word "engineering" when it comes to software development. What's this all about?<br/><br/><h3>What is engineering?</h3><br/>According to wikipedia, <a href="http://en.wikipedia.org/wiki/Engineering">engineering</a> is defined by the ECPD as<br/><blockquote><br/>The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.</blockquote><br/><br/>When I look at this definition I don't really see anything that would not be applicable to software development.
Steve McConnell generously points out that software engineering has been recognized and practiced for some time now, so where does this new-fashioned stubborn refusal to call the child by its name come from?<br/><br/><h3>The engineering process</h3><br/>The real problem comes to light when you look at the engineering process. You'll find a description of the <a href="http://en.wikipedia.org/wiki/Concurrent_engineering">engineering process</a> on wikipedia. The article describes the engineering process in four stages:<br/><ul><br/><li>Conceive</li><br/><li>Design</li><br/><li>Realize</li><br/><li>Service</li><br/></ul><br/>Now there are some engineers who map the engineering stages to software development in a rather funny way: They think that "Design" is the process of drawing good-looking UML diagrams and that "Realize" is "Coding". When you look at the wikipedia article, you'll see that for engineers "Realize" stands for "Manufacturing". In a software context, manufacturing means running the compiler and pressing some CDs or deploying some binary over the Internet.<br/><br/>So when engineers claim that software developers should look at how engineers do their design and all this talk about software processes would be settled once and for all, they're ignoring that software development is a design-only activity and that software has far fewer problems with the "Realization" stage than traditional engineering.<br/><br/><b>When a software developer writes code she is building an executable, mathematical model of reality.</b><br/><br/><h3>Conclusion</h3><br/>When software developers prefer not to use the title "engineer" to describe what they're doing, they're trying to avoid a mapping of the engineering process onto the software development process that is wrong. In the end, software developers will build mathematical models (source code) and apply scientific methods (complexity analysis) to solve problems.
If this is not engineering, then we're not engineers.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com14tag:blogger.com,1999:blog-3081442684129016822.post-9494788423609377642007-06-29T16:23:00.000-07:002011-05-30T13:07:33.183-07:00How You Can Start Improving Your Software Process TodayAre you a developer who dreams about a better software development process in the organization you work for? Maybe you read something about fancy practices on your favorite blog or mayhap you even touched one of those old-style paper collections called books? Do you have some concrete ideas on how to improve, but don't know how to start? I was in the same situation a year ago. Here's what I did and what I would do differently today.<br/><br/><a name='more'></a><br/><br/>One of the big obstacles when trying to implement change is your own doubt. When you've still got some sense left you'll be very insecure about things you haven't even tried yet. Perhaps you even mentioned your idea already to some team members or a project lead, but all you got was skepticism. This is the point where it is easy to give up. How can you as a mere team member introduce any big change without the full support of the team and your management? The answer is simple: you can't. At least not in one fell swoop.<br/><br/>For me doing nothing is not an option when I see the opportunity to improve. So how can you introduce a big change in small, safe steps and, more importantly, how can you convince your team members and management to try out your proposed changes? <br/><br/><h3>First: How do you measure up?</h3><br/>The practice I expected to be the hardest to implement proved to be an easy sell: the story board. This is why I would aim for this target first. The story board will not only make your development process more transparent, it will also provide a simple process metric, your <i>velocity</i>.<br/><br/>To set up a story board is not that hard. 
Get a pin board, write your requirements and the predicted effort on small paper cards and pin all the cards you plan to finish within a fixed time (we use two weeks) to the board in a column headed "todo". Whenever you start to work on an item, move the card to a column labeled "in progress" and when your task is done (<i>really</i> done) move it over to the other finished cards where it can happily indulge in self-display.<br/><br/>After a fixed amount of time you take all the cards that are finished and count the effort you managed to implement. This is your velocity. Simple, easy, and hard to game. If you want to learn more about it, there's an excellent <a href="http://www.xprogramming.com/xpmag/jatRtsMetric.htm">article by Ron Jeffries about the metric of "running, tested features"</a>.<br/><br/>But wait, isn't measuring the software process <a href="http://www.joelonsoftware.com/items/2006/11/10b.html">what evil consultants do to make huge amounts of money by playing with fear</a>? Yes, it is. Then again, no, it's not. It depends on what you do with the data. <a href="http://klimek.box4.net/blog/2006/12/28/review-object-oriented-metrics-in-practice/">Metrics are a double-edged sword</a>. You can use them to learn or to judge. Never do the latter.<br/><br/>Introducing the story board is not hard. I did it without asking anybody for permission. I just organized the pin board and started to pin up stories I worked on. It doesn't take a lot of time, so it's not too hard to get your team members to buy in. When you show your management that you finally managed to produce an easy, transparent metric they'll find it useful, too.<br/><br/><h3>Second: The problem with problems.</h3><br/>If you want people to change you'll have to come up with a good reason. Change is never easy, so you have to convince people that you're going to scratch <i>their</i> itches. To identify problems and assets you can use a <i>retrospective</i>. 
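As an aside to the velocity measurement above: the bookkeeping is little more than a sum over the finished cards. A minimal, hypothetical sketch (card titles and effort numbers are invented):

```python
# One card per story: (title, estimated effort, column on the board).
# Titles and efforts are invented for illustration.
board = [
    ("Login dialog",    3, "done"),
    ("CSV export",      5, "done"),
    ("Search filter",   8, "in progress"),
    ("Admin password",  2, "todo"),
]

def velocity(board):
    """Velocity: the summed effort of the cards that are really done."""
    return sum(effort for _, effort, column in board if column == "done")

print(velocity(board))  # 8 - only finished cards count
```

Because only truly finished cards count, half-done work contributes nothing, which is what makes the number hard to game.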
Propose a meeting where you discuss what runs smoothly, what doesn't run so smoothly and how to improve. Try to get the commitment to do this on a regular basis. <br/><br/>This should not be too hard to implement, either. From what I've heard management people usually don't object to a process improvement loop. And why should they? But what about your team mates? Do you think they will let the opportunity slip to vent their anger?<br/><br/><h3>Third: The solution.</h3><br/>Now the really hard part begins: the actual change. The first two steps will make introducing the change easier since they provide a platform to introduce a change as a solution to a real problem and safeguard the change by measuring its outcome.<br/><br/>The platform is the retrospective. Let your team mates say what bothers them. If your observation of your development process was correct, they'll bring up the same issues you already identified. After the rumble dies down, propose your change or set of changes as a possible solution. Now you have to face the storm.<br/><br/>But we have the velocity metric as a safeguard. Propose to try your improvement and see how it affects productivity. You can simply try it until the next retrospective and judge based on more experience then. This makes it easier for your management to support the idea, too, since it is clear that the final decision is yet to be made.<br/><br/><h3>Steering your development process.</h3><br/>If you are familiar with the concept of <a href="http://en.wikipedia.org/wiki/Test-driven_development">Test Driven Development</a> you know the concept of <i>steering</i> your own design. The same principle is used in Extreme Programming on a higher level to steer the product features by the planning game. The key principle is that each day you gain experience and are able to make better decisions based on your expanded knowledge. 
Or, as Ron Jeffries likes to put it: Today is the dumbest day of the rest of your life.<br/><br/>Using a simple meta process you are able to steer your development process. Identify a problem, try a solution, measure the outcome, inspect what you learned. Redo from start. It's not easy. In fact, it takes a lot of work to keep yourself and your environment self-aware and open to change since people seem to avoid change - it always means risk and effort. But using the same methods I used it's not impossible, either. Taking the test driven way to process improvements will force you to make baby steps - sometimes you'll hardly recognize movement, and this can drive you up the wall. But I'm still idealistic enough to believe that those baby steps will add up and in the end you'll reach the sun. Wherever that is.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com0tag:blogger.com,1999:blog-3081442684129016822.post-20226984376363092262007-06-12T16:26:00.000-07:002011-05-30T13:07:33.184-07:00Today The Test Suite BrokeWhen I arrived at work today I fired up Outlook and checked my mail. I found five mails from our auto-build server telling me that the build broke. Since we introduced test driven development and continuous integration only a short time ago this did not come out of nowhere - the build usually breaks at least once a day.<br/><br/>But today was the first day a <em>unit test</em> broke since we introduced TDD and CI.<br/><a name='more'></a><br/><br/>At first I thought that perhaps somebody checked in a broken test yesterday evening. But then I dug into the whole thing and found out that yesterday the build was perfect. Somewhat bewildered I got myself a nice cup of hot, steaming coffee, started Visual Studio and tried to find the reason for the failed test. 
Since there was some cryptography involved and I didn't know the ins and outs of this particular part of the system I asked the pair that wrote the code for help and we began a thorough debugging session.<br/><br/>As you can probably already imagine it was a date problem. A part of the system's cryptography was dependent on the system time and one of the algorithms broke <em>today</em>. We extracted a small mock-up that enabled us to simulate an arbitrary date as system date for the algorithm and found that this particular error would show up on two out of every 500 days since January 1st.<br/><br/>So today was a 1:250 chance day.<br/><br/>What would have happened if this part of the code had no unit test? Well, most likely some day in the future there would have been a bug report stating that the key that was entered by the service technician wasn't accepted. Since at that point the same procedure would have worked a thousand times we'd probably have blamed the poor technician and told ourselves that this "can't possibly happen".klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com2tag:blogger.com,1999:blog-3081442684129016822.post-64385373440441573852007-06-10T08:37:00.000-07:002011-05-30T13:07:33.184-07:00Ubuntu Gutsy Ximeta NDAS HowtoA month ago I bought a TREKSTOR NDAS device. This device promises on its package to be linux-compatible. So after I unpacked the hardware and everything was running in Windows I tried to install it in linux. Unfortunately the stock feisty debian package I found didn't work with my WLAN configuration.<br/><br/>Now after I reinstalled ubuntu and upgraded to gutsy, which currently comes with a 2.6.22 kernel, I tried to build the driver from source. 
I had to patch the sources to make it work, but since it works flawlessly right now I provide my patch and a little compilation howto.<br/><br/>Download the <a href="http://code.ximeta.com/dev/current/linux/">current NDAS sources</a> and my <a href='http://klimek.box4.net/blog/wp-content/uploads/2007/06/ndas-11-2_kernel-2622.patch' title='NDAS patch for linux kernel 2.6.22'>NDAS patch for linux kernel 2.6.22</a>.<br/><br/><pre class="code"><br/># install some packages. I don't know which exactly, but you'll need<br/># at least the following:<br/>apt-get install build-essential checkinstall linux-headers-generic<br/><br/># extract and patch the ndas sources...<br/>tar xvzf /path/to/ndas-1.1-2.tar.gz<br/>cd ndas-1.1-2<br/>patch -p1 < /path/to/ndas-1.1-2_kernel-2.6.22.patch<br/><br/># you only need to set NDAS_KERNEL_VERSION if you <br/># don't want to compile ndas for the currently running <br/># kernel, for example if you're compiling from within colinux<br/>NDAS_KERNEL_VERSION=2.6.22-6-generic<br/>make<br/><br/># ndas_root must be exported for make install and <br/># checkinstall to work<br/>export ndas_root=$(pwd)<br/># somehow I had to make install before checkinstall...<br/># this is no problem, since checkinstall will clean up<br/># the whole mess again<br/>sudo make install<br/>sudo checkinstall<br/></pre><br/><br/>After that you can start the NDAS service by issuing<br/><pre class="code"><br/>/etc/init.d/ndas start<br/></pre><br/>Configure your device by following the <a href="http://code.ximeta.com/trac-ndas/wiki/Usage">Ximeta NDAS driver documentation</a>.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com16tag:blogger.com,1999:blog-3081442684129016822.post-11156273826690278612007-06-08T16:38:00.000-07:002011-05-30T13:07:33.184-07:00Mobile Ubuntu Colinux SetupMy Vista Home Premium finally arrived. Since I have an ATI 9250 at work, which is a smartly rebranded DirectX 8 card, I was craving the full "vista experience". 
After doing some backup I installed Vista on my laptop and spent some time setting up the basic programs I need. Since installing colinux is one of the great challenges of the Game Of Windows I'll try to present you a step-by-step guide to a working mobile colinux setup in Vista.<br/><a name='more'></a><br/><br/>My goal is to set up a colinux configuration for truly mobile computing. That means that I use wireless networking in Windows and Kubuntu natively as my primary networking interface. I want colinux networking to work whenever my Windows has network access and I want a single configuration on linux and Windows. The configuration you find here is based on the two years I have now been using colinux and constant fiddling with the configuration parameters.<br/><br/>As a prerequisite you need a running GNU/Linux on your computer. I installed a Kubuntu Feisty onto a spare partition. Kubuntu is my favorite distribution at the moment, because you get up-to-date basic packages like in debian/sid, and up-to-date KDE packages like in SuSE. After installing GNU/Linux I rebooted into Vista. Let's rumble.<br/><br/>First I installed colinux 0.8.0 pre from Henry Nestler. My experience with colinux is that the latest unstable you can get at <a href="http://www.henrynestler.com/">Henry's site</a> is the most stable version available.<br/><br/>Once you have installed it (you can safely skip downloading the linux image, we'll boot from a real partition instead) head to C:\Programme\coLinux and copy example.conf to colinux.conf. Now edit colinux.conf with WordPad. Since I have activated UAC I can't simply edit the system file but have to use an administrator notepad. 
Start - Type "Notepad" - Right Click the appearing notepad and select "Run as administrator...".<br/><br/>Now my colinux configuration looks like this:<br/><pre class="code"><br/>kernel=vmlinux<br/><br/># my swap partition<br/>sda3=\\Device\\Harddisk0\\Partition3<br/><br/># my root partition<br/># I usually have to play around with the partition numbers before<br/># I get the final digit right. <br/>sda4=\\Device\\Harddisk0\\Partition4<br/><br/># this should point to the root partition...<br/>root=/dev/sda4<br/>ro<br/>initrd=initrd.gz<br/>mem=768<br/><br/># a slirp device for internet access; when I boot<br/># kubuntu natively, this is my wired ethernet device<br/>eth0=slirp<br/><br/># when I boot kubuntu natively this is my wireless<br/># connection; since I don't want any configuration<br/># changes from native to colinux I inserted an<br/># unusable eth1 device in the colinux configuration<br/>eth1=pcap-bridge,"Unknown"<br/><br/># internal high-speed connection between colinux<br/># and vista only<br/>eth2=tuntap<br/><br/># I want to see my Vista files and cdrom in colinux...<br/>cofs0=c:<br/>cofs1=d:<br/>cofs2=e:<br/></pre><br/><br/>Now I can start colinux. Configuring the network devices takes some time, since the stuff is not yet configured on the linux side. <br/><br/>Edit /etc/network/interfaces. Remove the default entry for eth1 (it will be handled by knetworkmanager later, we don't want to use it in colinux) and change eth2 to a private network:<br/><pre class="code"><br/>auto eth2<br/>iface eth2 inet static<br/>address 192.168.42.2<br/>netmask 255.255.255.0<br/></pre><br/>Now configure the tuntap device on Windows to IP 192.168.42.1. Start colinux and you should be able to ping colinux at 192.168.42.2 from Windows. You can't ping back since the Windows Vista Firewall is active on the tuntap device.<br/><br/>Install <a href="http://sourceforge.net/projects/xming">Xming</a> and <a href="http://cygwin.com">cygwin</a>. 
Make sure you select openssh when installing cygwin; you'll need it later to log into your colinux.<br/><br/>Run XLaunch (from Xming) and configure an X Server without access control and save the configuration to your local documents directory. To start the X server without access control at login time link the config.xlaunch file into your Autostart menu. Make sure that you allow your X through the firewall. Don't do this if you don't have an extra firewall to the internet, since otherwise people from outside would be able to contact your X server! <br/><br/>Now you have to make sure you can log into your colinux from cygwin without a password. Luckily ssh features public-key based authentication, and it's not that hard to set up. <br/><br/>Start a cygwin shell.<br/><br/><pre class="code"><br/>ssh-keygen -t dsa<br/></pre><br/><br/>Copy the file to the .ssh directory on the colinux server. Make sure you have a .ssh directory in your $HOME before doing this:<br/><br/><pre class="code"><br/>scp .ssh/id_dsa.pub manuel@192.168.42.2:~/.ssh/authorized_keys<br/></pre><br/><br/>Now we're able to ssh manuel@192.168.42.2 without using a password. Create a file konsole.bat on your desktop:<br/><pre class="code"><br/>c:<br/>cd \cygwin\bin<br/>run bash --login -c 'ssh manuel@192.168.42.2 "export DISPLAY=192.168.42.1:0; konsole > /dev/null 2>&1"'<br/></pre><br/><br/>Now all that is left to do is to run colinux as a service. At an administrator command prompt (Start->Run "cmd", Right-Click->Run As Administrator) in c:\Programme\coLinux write:<br/><br/><pre class="code"><br/>colinux-daemon.exe @colinux.conf --install-service colinux<br/></pre><br/><br/>Open "Services" as administrator and edit the properties of the colinux service. Set the start mode to automatic. 
Voila.<br/><br/>To access your windows partitions from colinux you can simply add them to your /etc/fstab:<br/><pre class="code"><br/>cofs0 /media/c cofs user,defaults,rw 0 0<br/>cofs1 /media/d cofs user,defaults,rw 0 0<br/>cofs2 /media/e cofs user,defaults,rw 0 0<br/></pre><br/>Now your init-scripts will mount your cofs devices (a.k.a. Windows Drives) when colinux boots.<br/><br/>I use this configuration on a Dell XPS M170 laptop. Of course you can use it for workstations, too. I myself prefer a winpcap'ed configuration for workstations, though. It's easier to get access to the colinux that way. Unfortunately winpcapping the device doesn't work on most wireless networks in the wild.<br/><br/>See also:<br/><br/><a href="http://klimek.box4.net/blog/2007/04/09/information-overflow-colinux-wlan-networking/">Information Overflow & Colinux WLAN Networking</a><br/><br/><a href="http://klimek.box4.net/blog/2006/11/24/colinux-on-windows-vista/">Colinux On Windows Vista</a>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com14tag:blogger.com,1999:blog-3081442684129016822.post-81449127602949866362007-05-20T12:37:00.000-07:002011-05-30T13:07:33.184-07:00IT Security: YOU Are The Weakest Link<a href='http://klimek.box4.net/blog/wp-content/uploads/2007/05/theweakestlink.jpg' title='The Weakest Link'><img src='http://klimek.box4.net/blog/wp-content/uploads/2007/05/theweakestlink.jpg' alt='The Weakest Link' /></a><br/><br/>There is a single most important rule to IT security:<br/><br/><b>Always address the weakest link.</b><br/><br/><a name='more'></a><br/><br/>This rule may seem obvious at first glance, but don't forget the secret society of NOKEY (1). The members of NOKEY make companies spend huge amounts of money to strengthen the strong links, skillfully steering their attention away from the weakest spots. 
Sometimes the weakest links are so hard to improve that I believe they really do us a favor.<br/><br/>But what <em>is</em> the weakest link in an IT security environment? In 99.9999% of all cases this is easy to answer:<br/><br/><b><em>You</em> are the weakest link. Goodbye.</b><br/><br/>If you don't use passwords like "opensesame01" or "k00lN4M3" and always play around with the magnetic card reader before you put your credit card into it when you buy a nice necklace for your wife, I don't mean you personally. And of course you don't do this. <br/><br/>But as long as people just hack their PIN into every beeping box that asks for it and use passwords that are as random as the unpredictable zero, it seems to be a job-creation measure to build a certification process that asks for high security standards. Why should a talented criminal bother to spend tens of thousands of Euros to hack an operating system when the data is easily accessible via the careless user?<br/><br/>There may be a solution besides not allowing a system to be used. Educate the user. Go out and spread the word. If you read this, you're probably a person with a strong understanding of basic security principles. Explain the necessity for randomness in a user password. Make people around you use a tool like <a href="http://passwordmaker.org/">PasswordMaker</a> and threaten them with endless lectures about cryptographic algorithms if they <em>ever</em> write down a PIN.<br/><br/>And when you have really managed to build a system where the user is not the weakest link anymore, we can talk about algorithms.<br/><br/>(1) Nameless Organization of Kernel Error Yieldklimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com0tag:blogger.com,1999:blog-3081442684129016822.post-75062459205779150312007-05-16T14:51:00.000-07:002011-05-30T13:07:33.184-07:00The Perfect Engineering LieDo you estimate the time it will take you to finish your task in the mythical unit called "perfect engineering day"? 
Have you ever wondered why?<br/><a name='more'></a><br/>A perfect engineering day is the time you assume a task would take if you could work uninterruptedly, knowing exactly what to do and without making errors for one day. It's about as real as dowsing.<br/><br/>The big problem with the perfect engineering day is that it has nothing in common with what normal people consider to be a day. If you multiply it by pi you may be in the right order of magnitude, but then again it's not really a time span, but a probability distribution of a time span.<br/><br/><strong>The next time you say "I'll be finished in five minutes, honey" at six in the evening and find yourself facing your wife and the dirty laundry at nine, I'd like to see her face when you try to explain the concept of "perfect engineering five minutes".</strong><br/><br/>But, some may argue, isn't it better to truthfully explain the complex context to your customer? Maybe, if you either know exactly how long it will take (which you don't - just face it), or if you always communicate the context clearly: it will take two perfect engineering days, and there are 1.8 of these in a week, but in the last few weeks we had a standard deviation of 50%.<br/><br/>If you ever forget about the context and tell product management at the coffee machine that "it will take two days", you're lying. A perfect engineering lie.<br/><br/><b>Update:</b> Hello, this is Manuel's wife. I just want to say that his "perfect engineering five minutes" are more like two weeks than three hours - at least when dirty socks are involved.klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com4tag:blogger.com,1999:blog-3081442684129016822.post-515464435483191302007-05-12T04:05:00.000-07:002011-05-30T13:07:33.185-07:00Good Code: A Value-Oriented ApproachThe first human beings had a hard time. 
When they weren't on edge due to the Neanderthals who constantly tried to get their unprofitable genes into the big pool again, they had to deal with a real challenge: <a href="http://en.wikipedia.org/wiki/Human_self-reflection">self reflection</a>.<br/><br/>When I exercise my introspectional skills I often think fondly of my ancestors. I imagine their first grunted discussions of values, time and the meaning of soccer. And I believe these discussions closely resembled those we see about <i>good code</i> nowadays. But without the discussion we'd probably still be fighting naked over the affection of women. Um...<br/><br/>In this article I'll try to define good code from a business value perspective. I'll come to this conclusion:<br/><b>Good Code executes a set of features correctly in a specified time (present value) and maximizes future value (minimizes future cost) by adhering to the dynamic nature of code through an ROI-oriented design, a test suite, process automation and VCS-usage.</b><br/><br/><a name='more'></a><br/><br/><h3>Value</h3><br/>I would like to investigate the topic of good code by looking at the code's value. From a financial point of view the <a href="http://en.wikipedia.org/wiki/Present_value">present value</a> of Bilbo's ring is the sum of discounted future payments. Those include negative payments (being lectured by Gandalf) and, of course, positive payments (becoming a hero is not that bad, after all).<br/><br/>But how does this relate to code? What <i>future payments</i> can you get out of a computer program? There is a myriad of ways to get value from a computer program:<br/><ul><br/> <li>Sell it for money.</li><br/> <li>Change it and sell it for even more money.</li><br/> <li>Learn from it.</li><br/> <li>Have fun while changing it.</li><br/> <li>Have fun using it.</li><br/></ul><br/>Without further discourse, I'll head straight for the topics that cover the money making aspects.<br/><br/><h3>Current Value</h3><br/>You can sell software. 
The value of the software today is the expected sum of discounted future income from this software. So if you can sell the software without changing it ever again, a binary-only version that meets your needs may be a very good piece of code. The present value of software is a <b>set of features</b> it executes <b>correctly</b> in a <b>specified time</b>, and is therefore independent of the source code.<br/><br/>It's interesting that the current value is already based on <i>expectations</i>. Software value is not like a piece of gold, it's more like an investment that pays dividends. As such, software is risky and highly speculative. And since our expectations are usually not good at predicting the future and changing software is a lot easier than changing hardware, the software is going to be changed. A lot. And this is why software is <i>grown</i> rather than built.<br/><br/><h3>Expected Value Growth</h3><br/>All those expected future payment series become a lot more complicated when you plan to increase the software's value over time by changing it. Now you have not only an expected series of growing payments, but also an expected series of cost for maintenance. The expectations become even fuzzier when you realize that you have to guess the future growth of value without knowing the present value (remember that this is an expectation, too) or the future requirements.<br/><br/>But there's still light at the end of the tunnel: you don't have to calculate the expected value, you just have to maximize it. You don't need to know all the details before you can make a plan. There are a few things about those future payments that are rather obvious.<br/><br/><b>The value of software grows if the growth of the present value is greater than the maintenance cost.</b><br/><br/>So the faster new features get into the code the better. 
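The discounting arithmetic above can be made concrete. This is a hedged sketch, not from the post: the payment series, the 10% discount rate and the feature numbers are all invented for illustration.

```python
def present_value(payments, rate):
    """Discount a series of expected end-of-year payments back to today."""
    return sum(p / (1 + rate) ** year for year, p in enumerate(payments, start=1))

# Expected net income from selling the software over three years (invented).
income = [1000, 1200, 800]
pv = present_value(income, rate=0.10)
print(round(pv, 2))  # 2501.88 - worth less than the nominal 3000

# "The value of software grows if the growth of the present value is
# greater than the maintenance cost."
added_value = present_value([400, 400], rate=0.10)   # value a new feature adds
maintenance = present_value([150, 150], rate=0.10)   # cost of keeping it alive
print(added_value > maintenance)  # True: this feature grows the software's value
```

The same comparison, run per feature, is the ROI argument the rest of the post builds on: ship the features whose discounted value beats their discounted maintenance cost first.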
Since most of the time the expected future value growth is a lot bigger than the present value, the most important aspect good code must have is:<br/><br/><b>Good code is easy to change.</b><br/><br/>How can you design your code in a way that it is easy to change? Is it only the code? No. Code that is easy to change is a lot more than just a few written statements in a programming language. It's everything that's involved in the build process and all the tools that handle the code or your documentation. So what is needed for good code? What are the minimum requirements for the environment?<br/><br/><br/><script type="text/javascript" src="http://klimek.box4.net/blog/wp-content/themes/manuel/mm/flashobject.js"></script><br/><div id="flashCustomerValue" height="100%"> Flash plugin or Javascript are turned off. Activate both and reload to view the mindmap</div><br/><br/><h3>ROI-Oriented Design</h3><br/>Design to maximize the return on investment. Make the code easy to change. This means modularity, abstraction, low coupling and high cohesion and all the other wisdom of software development that has been known for ages.<br/><br/>And of course it is highly dependent on the people involved whether code is easy to change. Some people can't read perl, others don't like the structure of python. Some can only work efficiently with static typing, others like dynamically typed languages. Most of the time you'll hear people say that a feature of their most valued tool is the only way to produce good code. Usually you can safely ignore it.<br/><br/>Good design is always in the eye of the beholder. Take low coupling and high cohesion for example. Modularization is the art of minimizing your interfaces. But most of the time you have more than one option to split up your design. 
There may be two or more ways to break up dependencies, and different people will find different solutions easier to understand, because they have different models in their minds. Some people envision a graphical representation of the system in their mind. Others think of names and relationships.<br/><br/>Find the model that fits best for your team and make the code easy to change so that the poor souls that will maintain your code when you leave earth on your mission to Pluto are able to refactor it to make change even easier for themselves.<br/><br/><h3>Automated Test Suite</h3><br/>An automated test suite makes it easier to change the system without breaking it. Even if you have never implemented a new feature only to realize later that you've broken a different one, remember that you write code not for yourself, but for other people who have to maintain your code. Sometimes I myself feel like a different person when I read my code from six months ago. You can find out more about automated tests in Kent's book <a href="http://klimek.box4.net/blog/library/kent-beck/test-driven-development-by-example-addison-wesley-signature-series/">Test Driven Development</a> and read why Martin Fowler regards all code that is not unit tested as legacy code in his book <a href="http://klimek.box4.net/blog/library/martin-fowler/refactoring-improving-the-design-of-existing-code/">Refactoring</a>.<br/><br/><h3>Process Automation</h3><br/>Human beings inherently suck at doing complicated things. We make errors. If you have to do stuff that could also be done by a small program, you're wasting resources, increasing maintenance cost and therefore diminishing value. Good code comes with a completely automated build process. The output of that build process is a package that is ready for release.<br/><br/><h3>VCS-Usage</h3><br/>A lot of people have written on what a good VCS tool should do. Today, there are many good VCS tools out there. 
A good VCS tool helps you to increase the value of your code by being able to work efficiently in teams and to access historic information about your code.<br/><br/>Some of the possibilities to analyze the historic information are active research topics today. Silvia Breu and Thomas Zimmermann work on <a href="http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/ase/2006/2579/00/2579toc.xml&DOI=10.1109/ASE.2006.50">the extraction of cross-cutting concerns from version history</a>. There is a lot of information waiting in your VCS to be used. If you don't use a VCS tool you lose this information, which can help you understand the reason why your big god-class evolved the way it did. And this may help you to improve your design more easily.<br/><br/><h3>Conclusion</h3><br/>The code itself is the most important piece in the puzzle. But the processes around the code are what keeps it in good shape. There is a lot more to good code than a few design principles or using the best programming language or paradigms. Good code involves everything needed to keep your code easy to change and maintain in the future.<br/><br/>Resources:<br/><a href="http://jamesshore.com/Agile-Book/quality_with_a_name.html">Quality With A Name</a><br/><br/><a href="http://www.computer.org/portal/cms_docs_software/software/content/promo/s2005_07.pdf">What's good software, anyway</a>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com0tag:blogger.com,1999:blog-3081442684129016822.post-51213752681220675242007-04-19T16:11:00.000-07:002011-05-30T13:07:33.185-07:00A Matter Of Time - Guesswork, Points And Yesterday's WeatherEd carefully tiptoes towards Karl's desk. "Hey Karl", he cheerily announces: "I still need this time estimation for the Globster project!".<br/><br/>"Yea, right... That was the Irish Coffee feature for their coffee machines at the management offices, wasn't it? 
I just have to read some specs to figure out what I have to do - give me two days."<br/><br/>Ed hesitates for a second: "So you're telling me you need two days to give me an <i>estimation</i>?"<br/><br/>"Yeah, so?" Karl shoots Ed an inquiring look. He really hates time estimations. They always seem to come back at you, mostly when you want to leave work in the evening. Looking straight into Ed's face, Karl knows what's expected of him: "Ok, let's just say it takes 5 days."<br/><br/>Leaving Karl's desk, Ed heads straight for the coffee machine. What on earth is the developers' problem with time estimations? Getting the coffee machine to deliver Irish Coffee can't be that hard, after all... and it would be a nice addition to their own machine. <i>Somebody</i> has to do the testing.<br/><br/>"Hey Ed, good that I ran into you. I've got Mr. Globster on hold - how long will the changes for the project take?" - "Oh, hey Dave, Karl said it's about 5 days..." - "Thanks, Ed!"<br/><br/>"Mr. Globster? Yes, the new software will be ready on Friday, no problem. A manager password and taxi service will be included."<br/><br/><a name='more'></a><br/><br/><h3>Just a matter of time</h3><br/><br/>On software projects there's not just <i>time</i>. There's real time and ideal time (and, of course, lunchtime - if there's enough real time left). Ideal time is what you get when you ask a developer to "estimate" a task (though if you ask the developer, she will probably call it guessing, not estimating). Real time is, well, the time that it takes.<br/><br/>When Karl says the Irish Coffee feature will take 5 days, he means it could very well take about 5 days (or 10 - remember, he hasn't read the spec yet) if he somehow manages to work uninterrupted for 5 days without introducing subtle bugs. See why it's called ideal?<br/><br/>Somehow the ideal time finds its way to Dave. 
In the heat of the battle, Dave forgets what Karl has told him four times in the past two weeks about ideal time and real time: they're not equal. Dave sets an artificial deadline based on ideal time, which will eventually make the cleaning lady wonder uneasily about the bite marks in Karl's desk.<br/><br/><blockquote>"Time is an illusion, lunchtime doubly so." Douglas Noel Adams</blockquote><br/><br/>Why does Karl hate to guess times? Well, he doesn't like it when a task takes him longer than he guessed. He really hates it. That's why you can choose his reaction to the ideal-time deadline situation from this list:<br/><ul><br/><li><b>Overcommitment</b><br/>But this time I'll do it in 5 days! Really!</li><br/><li><b>Undercommitment</b><br/>If I guess way too much, I feel better when I'm wrong. And perhaps my lunch will still be warm.</li><br/><li><b>Artificial Pressure</b><br/>I guessed 5 days and Dave already promised it to Mr. Globster. So let's not do this refactoring now.</li><br/><li><b>Demotivation</b><br/>Being wrong makes me feel bad. Especially if Dave is involved somehow.</li><br/><li><b>Problems communicating with product management</b><br/>"But you said you'd do it in 5 days, and now it has already taken 10!"<br/>"But I said it's ideal time!"<br/>"Then tell us the actual time it will take next time!"<br/>"But I can't do that."<br/>...<br/></li><br/></ul><br/><br/><h3>Points vs. Hours</h3><br/>But what can you do? 
You can explain to the developers that ideal time has nothing to do with real time, and that they should not worry too much about the correctness of single guesses. You could also explain to product management that there are different kinds of time estimations (some in ideal time, some in real time) and make them actually <i>remember</i> all this stuff the next time they talk to the customer.<br/><br/>You could also try to give free copies of "Time Thief" to your developers and wait to see if they start slicing time and getting things done before they have even started.<br/><br/>Or you can use <a href="http://c2.com/cgi/wiki?StoryPoints">points</a> instead of ideal time. Story points are an agile practice that helps companies keep the cost of replacing bite-marked desks to a minimum. They are closely related to the <a href="http://c2.com/cgi/wiki?ProjectVelocity">project velocity</a>.<br/><br/><blockquote>"Why are our days numbered and not, say, lettered." Woody Allen</blockquote><br/><br/>You guess a rough size for every task in the next <a href="http://c2.com/cgi/wiki?IterationPlan">iteration</a>. The task size is measured in points. At the end of the iteration you sum up all the points and see how many the team managed to complete. Using this metric at the team level averages out some of the statistical blur. This way you get a good impression of how many points the team can finish per time unit (we don't need real and ideal time any more - it's just time now (at least until lunchtime, of course)). In the next iteration you simply schedule the number of points you finished in the last one. Yesterday's weather.<br/><br/>So how is this different from measuring ideal time in hours and summing up the ideal time at the end of the iteration to get an impression of "ideal time per real time unit"? It isn't. Well, not technically. But psychologically. Do you feel bad finishing 3 points in two weeks? 
How about "3 days of work" in two weeks? Can you really let go of our strange human attachment to time when you talk about "ideal time"?<br/><br/><h3>So what?</h3><br/>If the alternatives are trying to break people's attachment to time while nit-picking about nerdy concepts, or simply communicating my intent distinctly, I prefer the latter. That way I have more time to write code.<br/><br/>See also: <a href="http://klimek.box4.net/blog/2007/02/19/do-you-understand-xp/">Do you understand XP?</a>klimekhttp://www.blogger.com/profile/04044731490885944160noreply@blogger.com15
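The yesterday's-weather bookkeeping from the post above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the post itself: the function name `plan_next_iteration` and the example tasks are made up, and real planning would of course involve picking which stories fit, not just filling a budget greedily.

```python
def plan_next_iteration(points_completed_last_iteration, candidate_tasks):
    """Yesterday's weather: schedule at most as many points for the next
    iteration as the team actually finished in the last one."""
    budget = sum(points_completed_last_iteration)  # the team's velocity, in points
    planned = []
    for task, points in candidate_tasks:
        if points <= budget:  # greedily fill the iteration up to the velocity
            planned.append(task)
            budget -= points
    return planned

# The team finished stories worth 3 + 5 + 2 = 10 points last iteration,
# so the next iteration gets at most 10 points of new work.
tasks = [("Irish Coffee feature", 8), ("Manager password", 3), ("Taxi service", 2)]
print(plan_next_iteration([3, 5, 2], tasks))  # → ['Irish Coffee feature', 'Taxi service']
```

Note that no hours appear anywhere: the points only ever get compared to last iteration's points, which is exactly the psychological trick the post describes.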