Thursday, November 30, 2006

Why Do Programmers Hate To Throw Away Code?

I just stumbled across The Joel Test, a simple best-practice checklist for rating the quality of a software team. Reading Joel's article immediately gave me a new insight into programming psychology.

Why do programmers hate to throw away code?


Like Joel says, programmers really tend to object to throwing away code. But how does this relate to the refactoring hype we've seen over the last few years? My suspicion is that refactoring is often done badly. Refactoring is all about throwing away old code so that the new code reflects the insights you gained from analyzing the problem domain, and it should be done thoroughly as soon as a design problem is detected.

In Joel's opinion, writing specs helps to prevent refactoring. But what is a design spec if not code in an abstract high-level language? So changing the design is basically nothing but throwing away code - again.

Writing more specs leads to more documents that must be maintained in addition to the code, and they become obsolete over time if they're not used on a daily basis. In my experience, many specs should be provided as comments in the code, or directly as code itself. Simply increasing the number of specs produced will not yield a better design. You need good developers who are able to design high-quality code.

High-quality code implements domain logic in a way that it reads just like a spec.
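
To make that concrete, here's a minimal, made-up C++ sketch (the order-processing rule and all names are hypothetical, not from Joel's article):

    // Spec: "An order is shipped when it is paid and all items are in stock."
    #include <vector>

    struct Item { int stock; };

    struct Order {
        bool paid;
        std::vector<Item> items;
    };

    // Each helper names one clause of the spec...
    static bool allItemsInStock(const Order& order) {
        for (const Item& item : order.items)
            if (item.stock <= 0)
                return false;
        return true;
    }

    // ...so the top-level rule reads almost verbatim like the spec sentence.
    bool readyToShip(const Order& order) {
        return order.paid && allItemsInStock(order);
    }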

Good programmers don't hate to throw away code - they are excited about throwing away code to achieve a better design.

Friday, November 24, 2006

Colinux on Windows Vista

This information is outdated! See Mobile Ubuntu Colinux Setup for more information, even if you're not using a "mobile" setup ;-)

Over the last few years, coLinux has become one of my most valued tools for cross-platform software development. CoLinux (Cooperative Linux) is a port of the Linux kernel that runs as a Windows process, so you don't need to dual boot anymore. And while other virtualization techniques exist, coLinux has some advantages that make it my top choice.

Over the last couple of days I switched my workstation to Vista. Getting coLinux to run was one of my major concerns, and one of the reasons why I will have to wait before I can use a 64-bit Windows. After a little research on the web I found out that the tuntap driver bundled with the coLinux 0.8 snapshot will render a Vista installation unusable. So at first I installed coLinux without networking support. Then I downloaded the latest OpenVPN beta and installed the tuntap driver from its installation package. But when I booted into coLinux, I couldn't get the network to work. On shutting down coLinux, Vista even bluescreened. So tuntap is not the way to go (yet).

My next attempt was to use WinPcap networking. I managed to get the network up and running, but I had some strange connection problems when connecting to the coLinux via ssh: at some point the ssh connection simply timed out. After a little experimentation I found out that I could open a TCP connection to the coLinux and even send data to a running netcat, but I couldn't get any data back.
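
The test itself was roughly the following (port and host name are placeholders; any netcat build on the Windows side will do, and telnet works for the sending direction too):

    # inside coLinux: listen on an arbitrary port
    nc -l -p 5000

    # on the Windows host: connect and type a few lines -
    # with the broken WinPcap setup, nothing ever came back
    nc colinux-host 5000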

Then I checked whether the problem still existed when I used the Microsoft Loopback Adapter in a setup very similar to this coLinux networking howto. Surprisingly, the network was not only faster but also very stable. I still don't know why the WinPcap solution didn't work reliably over the real network device - connections from/to outside my Windows box work without problems.
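
For the record, the networking part of my coLinux configuration ended up looking roughly like this (paths and the adapter name are specific to my machine, so treat it as a sketch; the name after pcap-bridge must match the connection name Windows shows for the loopback adapter):

    # excerpt from the colinux-daemon configuration
    kernel=vmlinux
    cobd0="c:\colinux\root_fs"
    # bridge eth0 onto the Microsoft Loopback Adapter via WinPcap
    eth0=pcap-bridge,"Loopback"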

After setting up coLinux with cofs as my cross-compilation toolchain, I was ready to use Vista as my primary development platform. So far I'm quite impressed. Vista is the first Windows where I can easily work as a "normal" user, entering the administrator password only when I need more access. This is a big security plus.
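
If you haven't used cofs before: it exposes a Windows directory inside the coLinux guest. My setup was along these lines (the paths are placeholders, and the exact device naming may differ between coLinux releases):

    # in the coLinux configuration: export a Windows folder as cofs0
    cofs0="c:\projects"

    # inside coLinux: mount the exported folder
    mount -t cofs cofs0 /mnt/projects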

Thanks to User Account Control, old programs like teraterm still work. It took some time until I figured out why nothing changed when I was editing the teraterm.ini file in the "C:\Program Files\" folder: since teraterm opens this file writable at startup, Vista silently set up a copy of the file in my local user account folder. This way you can edit the settings files of legacy programs without needing superuser privileges.
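
If you're hunting for such a silently redirected file: Vista keeps these copies under the per-user VirtualStore folder. The teraterm subdirectory below is just how it looks on my machine:

    rem the virtualized copy of a file under "C:\Program Files\" lives here:
    notepad "%LOCALAPPDATA%\VirtualStore\Program Files\teraterm\teraterm.ini"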

While setting up Visual Studio 8, I had one more encounter of the third kind with rights management. To make debugging easier, I insert information about my classes into "..\Microsoft Visual Studio 8\Common7\Packages\Debugger\autoexp.dat". I edited this file as superuser, since Visual Studio doesn't need to write to it. But nothing happened. Then I checked the file permissions, and it became evident that the file was not readable by my user. Changing the permissions fixed this problem, too.
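
In case you haven't touched autoexp.dat before: entries in its [AutoExpand] section tell the debugger how to render a type in the watch window. The class and member names here are made up:

    [AutoExpand]
    ; show size and capacity of a hypothetical MyVector in the watch window
    MyVector=size=<m_size,d> capacity=<m_capacity,d>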

Now I'm up and running and still quite impressed by Windows Vista. If Vista makes the same progress that Windows XP made during its life cycle, it will become a nice operating system for software development. Um, and yes, it was only a few years ago that I preferred Linux for my daily work - but back then Visual Studio 8 wasn't available, which is still my killer application for C++ development.

Update:
An updated article on a Mobile Ubuntu Colinux Setup for my laptop is available.

Saturday, November 18, 2006

Review: The Pragmatic Programmer

When I was a teenager I used to sit at my computer many hours a day, pondering interesting computing problems like writing a magic eye 3D creator or a cool BattleTech computer game. The first important computer book I read (after the Microsoft BASIC handbook) was "Spiele programmieren mit QBASIC" (Programming Games in QBASIC) by Lars Hennigsen. It explained the basic concepts of computer programming and mathematics in a way that fascinated me as a fifteen-year-old. It was by no means the best reference on computer programming available, but it multiplied my interest in programming by giving me just the right information to get started.

During my studies of computer science I mostly read scientific articles and lecture notes written by scientists for scientists. This information was invaluable for laying a foundation of technical knowledge. While studying, I learned mostly by trying out interesting new things, coding open source software, and having endless discussions with fellow students.

When I started to work I felt confident in my field. I expected to learn a lot by doing, and by listening closely to the veteran software developers around me. After two years of architecting and coding in the real world, my enthusiasm became mixed with the exhausting feeling that you just can't tackle the complexity of software development effectively.

When I visited New York City at Christmas '05 I spent a lot of my time rummaging through bookstores full of English books (which are rather sparse here in Germany). There I stumbled across a copy of "The Pragmatic Programmer". Since I had already read about the title somewhere on the internet and didn't have anything to read at the time, I bought it.

I read this book in one go on my flight from New York to Munich. Andrew Hunt and David Thomas managed to light that spark of hope inside me that there may be a silver bullet after all. They take an abstract view of software development; they help us step back and take a look at ourselves and at how we're doing things. They explain the world around the software developer and show why it is important to explain the process, not only the tools used to produce code. They inspired me to read "Extreme Programming Explained" by Kent Beck, and to join the IEEE and use its library to learn from the experience of fellow software developers. Like my first computer book, "The Pragmatic Programmer" is by no means as insightful as "The Mythical Man-Month" or as complete as "Code Complete", but it is inspiring in its mission to make programmers deliver better software to the world.

I'd definitely recommend this book to any programmer who hasn't read a book about software development for some time and is just not satisfied with the way software is created nowadays.

Tuesday, November 14, 2006

e-Petition against election machines

In its last elections, the USA managed to show the world that a missionary is not necessarily a role model. Germany is no showpiece of political leadership either, but at least we've got a working democracy - until recently, when they started to think about introducing 'election machines'.

The problem with election machines is that you can always simply 'switch' an election machine for a different machine that looks exactly the same. As a result, nobody (not even a highly paid technical expert) can say for sure that a vote is counted correctly without taking the hardware apart.

Of course, such high-level attacks can be detected afterwards by inspecting all the machines. Then we can just redo the election - that would surely save us some big money and would motivate many more people to vote.

And then there's the 'insider attack': an underpaid software developer working for the election machine company who needs the money to pay for an expensive medical operation for her terminally ill son. She has all the cryptographic keys, plus expert knowledge of the operational tests done during and after the election, to modify the program 'just a little' so that Mr. Money becomes chancellor.

And even if all those attacks could be eliminated - only a cryptographic expert would be able to understand and check those machines. The average German is not a mathematical genius. This will certainly boost voter participation.

Some Germans obviously remembered that democracy is all about the participation of the people and filed an e-Petition against election machines.

Friday, November 3, 2006

Google codesearch - a new way to track copyright violations?

When I first tried Google Code Search I was impressed. But then I tried to enter " sco " ibm. Follow the link, look at the first entry found - the file regexpI.h - and read the comment at the top.

OK, this is just a small header, and I couldn't find more information on SCO and IBM. But you can still search for disclosure agreement...

Is this a new way to track copyright violations?

Saturday, October 28, 2006

Testing doesn't increase quality

At first glance this is a rather surprising thought. But the explanation McConnell gives in "Code Complete" is obvious: testing doesn't include debugging or restructuring the code, so testing technically doesn't change the code at all. Finding and fixing the defects it uncovers hopefully does increase code quality, though.

Speaking of tests, there's another interesting observation in McConnell's writing: testing typically finds less than 50% of all defects. So no matter how much you test, you're still doomed.

Since one of my mental quirks is to state the extremes and think about the logic afterwards: what if we don't test code, but just write it? The worst we could get is about 1.8 times the defects (if testing catches roughly 45% of them, skipping it means about 1/0.55 ≈ 1.8 times as many defects reach the customer). The code is buggy anyway, so who cares? Go banana software! We just save a lot of development time (for managers: read "money") and our time to market rocks (did I hear anybody say the name of some company in Redmond?). We just don't test. Remember: the process is as easy as 1: write software, 2: release it to the customer without testing, 3: yea, um, right.

The point is that 3 is the magic number: software is not finished once it's released to the customer. In fact, many books state that most of the software development cost is spent after the program is out in the wild. This is called "software maintenance", which is just a nice term for "fixing defects that shouldn't have been there in the first place". Now what does this mean for our banana software?

For 40 years, software development textbooks have feverishly tried to get the message out that fixing a defect becomes more expensive the later it is found in the software life cycle. Why's that? Simple. Did you ever get an error report from a customer where you needed four phone calls and half an hour of talking to different people before figuring out that "The Big Red Button isn't working" means the program segfaults every time the customer presses the cancel button - while it works just fine on your own computer? Now you spend a day or two trying to get an exact copy of the customer's configuration, even installing Windows 95 (just to be sure), before finding the reason for the problem. And then you know what the problem is, but you still have to fix it. See the big glowing blue productivity cloud going <poof> over your head.

Now if we had found this error early, in a testing stage, it would have been a lot cheaper to fix. So, taking into account what we learned earlier - that testing doesn't increase quality - what does testing yield? Here's the answer: testing gives me the opportunity to gain productivity by fixing problems now [1]. Later the problems will stay clear of my neck and I'm free to do other things, like not staying at work late (again).

The other extreme to completely bananaing your software is found in Kent Beck's "Extreme Programming" practices: test-driven development. Although test-driven development is more than just "testing" [2], it addresses the old law of software engineering: in order to boost (mean) productivity, you have to boost quality. And improving quality by fixing defects at the earliest possible time is the cheapest you'll get.
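
For readers who haven't seen the test-first rhythm, here is a minimal sketch (the VAT example and all names are made up; a real project would use a test framework instead of bare assert):

    // 1. Write a failing test first...
    #include <cassert>

    int addVat(int netCents);  // ...which forces this function into existence.

    void testAddVatRoundsToNearestCent() {
        assert(addVat(100) == 119);  // assumes a 19% VAT rate for the example
        assert(addVat(0) == 0);
    }

    // 2. Write the simplest implementation that makes the test pass,
    //    then refactor while the test keeps you safe.
    int addVat(int netCents) {
        return netCents + (netCents * 19 + 50) / 100;  // round to nearest cent
    }

    int main() {
        testAddVatRoundsToNearestCent();
        return 0;
    }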

In my opinion there's still a big BUT. Test-driven development increases productivity, BUT only in the long run. You have to pay up front, which means that more capital is bound at an early stage of development. Kent's solution is to include the customer in the development cycle and to release as early and as often as possible, to get feedback and minimize risk. With my limited experience in software products, I really can't decide whether this approach is always feasible, or whether time-to-market constraints sometimes give a project enough momentum to compensate for the increased development cost. If an early time to market limits competition, history tells us that reality is a lot more complicated than textbook-optimized methodology.


[1]: Erdogmus, H.; Morisio, M.; Torchiano, M.: "On the Effectiveness of the Test-First Approach to Programming," IEEE Transactions on Software Engineering, vol. 31, no. 3, March 2005, pp. 226-237. DOI 10.1109/TSE.2005.37
[2]: Janzen, D.; Saiedian, H.: "Test-Driven Development: Concepts, Taxonomy, and Future Direction," IEEE Computer, vol. 38, no. 9, September 2005, pp. 43-50. DOI 10.1109/MC.2005.314

Wednesday, October 25, 2006

The Nerd and the Manager


Since code is the primary output of construction, a key question in managing construction is "How do you encourage good coding practices?" In general, mandating a strict set of standards from the top isn't a good idea. Programmers tend to view managers as being at a lower level of technical evolution, somewhere between single-celled organisms and the woolly mammoths that died out during the Ice Age, and if there are going to be programming standards, programmers need to buy into them.
- From Code Complete by Steve McConnell

Like Frederick P. Brooks, Kent Beck, and Alfie Kohn, Steve McConnell emphasizes the human nature of the software developer, who has her own values (a strong distaste for being managed and the ability to create lines of code from nothing but black coffee). All this leads to the developer-centric view of the development process that Kent Beck proposes as the silver bullet with which we can finally kill Brooks's werewolf.


But since the nerd is inherently weak on the communication side of life, it's not easy to create a developer-centric environment. Developers tend to fight holy wars among themselves about what the top-notch development workplace should look like (granted, apart from two or three zillion-gigahertz power machines).


So now I'll have to find a book that tells me how to get deep emotional information out of my fellow code wizards. Or I'll try to learn pi by heart.