Sunday, December 31, 2006

The "Model Driven" Paradigm in Software Architecture

Currently I'm researching "Model Driven" approaches to software development. Yesterday I discovered some groups on Xing discussing model driven architecture and model driven design. After browsing through some of the entries I found that the common understanding of model driven approaches seems to be "generate code from some diagram" or "introduce a new domain programming language". In my eyes this boils down to using a new programming language. And yes, I consider diagrams that generate code yet another programming language.

I usually don't rely on forum discussions as a primary source of information, so I wanted to find out more. A Google search on "model driven" yields the Wikipedia page on model driven architecture and the usual suspects: information from companies that want to sell you their brand new model driven tools. Since MDA seems to be a major topic at OO conferences, you'll probably agree with me that this is mostly due to the big bucks behind the movement.

But the concepts behind MDA are not really new. The wiki-page sums up the central statement of the model driven approach as:

One of the main aims of the MDA is to separate design from architecture and realization technologies (...)

This is old stuff. And I don't yet see how using UML models as a programming language will solve any issues. The domain language approach is interesting, though. But let me explain my view on this whole architecture thing...

Separating design from architecture and realization technologies has been one of the main concerns of good software design for some time now. Eric Evans, for example, proposes the "Domain Driven Design" concept, which basically means taking the programming language of your choice (state of the art: object oriented) and creating a set of objects with which you can build up a domain language that still follows the implementation rules of the underlying language.

The bad thing is that even on the domain level you are still dependent on the underlying language. But I know from experience that you can code even 95% of a C++ project in a technology agnostic way. This is the real challenge of software architecture: build or buy a technology layer that hides the underlying hardware architecture away, and create an easy to access domain layer that nonetheless captures your domain logic.

The good thing about not introducing a new language, be it a UML model or a new domain language, is that, well, you don't need a new language... The key question is: why should UML or a new domain language be better suited to the task of describing domain logic than <pick your favorite OO language>?

Let's consider the options. What about using UML as a programming language? One of the first comments I found in the Xing forum about MDA went along the lines of: "Hey, our programmers don't like the clumsy way of expressing all the complex logic in a UML diagram - can I create one from a textual description? That would especially make diffing much easier..." This says a lot.

When I wanted to learn UML, I asked my colleagues if anybody knew a good introduction to the topic. Andreas then got me some UML reference and handbooks. 1500 pages explaining how to use UML. But that wasn't what I was looking for. I was looking for an explanation of how to use UML in a way that actually generates business value. I didn't read those manuals but found "Domain-Driven Design" by Eric Evans instead. Now this book is technically not really about UML, but it introduces some very interesting practices regarding its use.

Eric Evans states that a diagram should be used to communicate a concept. He argues that trying to stuff all the information into a diagram tends to render it useless. This exactly matches my experience. And this is why I don't think UML diagrams (or any graphical representation, for that matter) are a good choice as a programming language.

Now for the domain language. A domain language is a new programming language that lets you write domain logic in a way that matches how rules are usually expressed in the problem domain. In my opinion this is often a very good idea. It has been in wide use for a long time now (Perl and the POSIX shell language started as domain languages) and there are myriads of calculator sized scripting interpreters used in commercial applications. You can probably think of some more...

But there is also a choice to be made with regard to general purpose languages. When a domain language is implemented (like PHP, which was originally developed for dynamic web sites), feature requests often turn it into a general purpose language over time (Perl, PHP, bash, just to name a few). At that moment you have a new general purpose language on the market and have to ask yourself whether it wouldn't have been easier to use an already stable language in the first place.

At the time of this writing, there are many general purpose languages that target different domain audiences. There are the functional programming languages that express mathematical concepts (while high performance maths is still done in Fortran), the logic programming languages that target, well, logic, and a bunch of object oriented languages that target web server development, web client scripting or web browser extensibility, or simply try to provide fine building blocks for object oriented domain frameworks and domain design (like Java and C#).

So what should you do if you are an architect? My advice is to look for a nice general purpose language first. If your choice is one of those diagram compilers, then fine. Just make sure programming in this language fits your needs. If you find a framework for your chosen language that makes developing the domain logic easier, use it. Note that I don't argue against using UML diagram compilers or frameworks or your own little language that you always wanted to implement. But I don't believe UML diagram compilers really enhance your productivity outside a small, limited set of applications. The same goes for elaborate frameworks or the use of awkward domain language wannabes like Gupta.

Most of your domain language needs can be met by today's general purpose languages.

The art is to use the language's concepts to implicitly express your domain language. This can be done with every general purpose programming language I know (even with Perl - which was built as a domain language for string manipulation and "grew" into a full-fledged Swiss Army knife object oriented programming language and - in my opinion - often into a maintenance nightmare).

So what remains of the "model driven" hype? If you take away the fuss about expensive programming tools, you get what Eric Evans calls "Domain-Driven Design": Use diagrams that really enhance understanding. Use diagrams to show only a few concepts at a time. Use your design skills to make your general purpose programming language feel like a domain language. And finally, introduce a new domain language where it really makes sense. Most of all, come down from your ivory tower and get your hands dirty on the project.

Thursday, December 28, 2006

Review: Object-Oriented Metrics In Practice

The book in one sentence


A short, modest and dryly written book with some brilliant new concepts and a lot of overly easy solutions - worth reading because of the brilliant ideas.

Preceding events


How did it come about that I read a book about software metrics? I will have to digress somewhat before answering that question. There is a strong common belief in software engineering that software metrics are a dangerous tool, especially in the hands of Joel's bright young management consultant. But sharp tools that can be used to harm people can often also be used to get things done.

What if our pre-pre-pre-(...)-pre-ancestors had condemned the use of the knife just because some of their brethren ran around slitting other people's throats? So I wanted to find out more about software metrics to decide for myself whether they provide a new, sharp tool or just a deadly weapon to be wielded by management consultants. But even with a lot of Google magic I couldn't find a good critical introduction to software metrics anywhere on the web.

Then, a few weeks ago, an IT consultant asked about software metrics in a forum about software quality assurance at Xing. I answered by reiterating the "dangerous tool" conception, emphasizing that software metrics alone can't be used as a quality assessment, which obviously didn't satisfy our questioner. Another member recommended "Object-Oriented Metrics in Practice", especially praising the solution oriented style of the book. So I figured that I should educate myself on the topic before broadcasting my opinion.

Reading...


The first good point the book makes is that software metrics can only show structural points of interest, not design problems (unless you find a metric that can measure the semantic aspects of variable and class names, of course). Therefore a software metric can be used to find conspicuous code fragments that must then be analyzed by hand to get substantial information. If Joel's consultant had read this book and understood this concept, we wouldn't need to be afraid of him any more.

The brilliant part of this book is the part about metrics visualization. If you have ever run cccc over a > 200 kloc project, you know that visualizing metric results is hard. When I used cccc at work to assess our C++ project (before reading this book), I didn't find anything new or exciting in the results. The metrics showed me two or three structural problems that I was already aware of without using any metrics, and quite a lot of false positives. After reading the book I came to the conclusion that it was not the metrics that were useless, but the simple brute force manner in which they were applied.

The idea of Lanza and Marinescu is to combine metrics in a way that answers specific questions about the code, and to visualize the results in a way that makes it easy for human beings to browse through the structure - especially to visualize the structure in combination with the metric results. If you want to learn how to utilize metrics efficiently and how much work you have to invest to be able to use metrics at all, I highly recommend this book.

In the remainder of the book the authors try to give examples of which metric combinations should be used to detect specific structural problems and which refactoring should be applied to a specific metric result. Those answers are often way too easy. If you're a fan of domain driven design, you are probably trying to base your design on the structure of the domain. So you can't just mechanically apply standard solution patterns to your problem - you have to find the right kind of design for your problem domain.

Conclusion


For me this book was definitely worth reading. I'll try the tools mentioned in the book and see how the visualization techniques can be applied in real life. Since even the authors admit that metrics are most useful as a starting point for assessing legacy code, I don't think this book helps me a lot in my current work environment, where I am still able to keep the complete project in my head. But the book helps you to view code and code metrics from a new angle, broadening your understanding.

Saturday, December 23, 2006

The Day The Mouse Broke

My wife and I are visiting our families over Christmas. This morning I sat down at my mother-in-law's computer to check my daily spam when the mouse broke. OK, it didn't really break in the literal sense of the word - I just had to explain to my mother-in-law that you have to charge rechargeable batteries before using them in your wireless mouse.

So I put the batteries into the recharger and realized that while they were powering up I had no mouse. Since not using the computer for a full day (my god!) was not an option, I had to figure out how to use Windows with nothing but a keyboard. The principle was not new to me, as I have long controlled my Unix flavored operating systems (like emacs) with keyboard FSAs and never really missed anything. But on a Windows XP box this turned out to be a whole new experience.

First I remembered the "Windows key". I just realized that I don't know what to do if you don't have a Windows key and a context key (the first one left of the right control key) on your keyboard. Fortunately my keyboard has them. Anyway, the Windows key helped me to get Thunderbird and Firefox up and running. Thunderbird is really nice to control via keyboard. It's intuitive, and while it's not quite as comfortable as using your mouse, the basic task of moving the spam mails into the spam folder was no problem.

Then I tried to use Firefox. At first this was rather awkward. Since the only keyboard control key I knew was the tab key, I tabbed endlessly through the user interface before I was able to extract the basic key combinations from various sources on the web:


  1. F6: Change frame

  2. Ctrl-L: Address box

  3. Ctrl-K: Search box

  4. Ctrl-W: Close current tab

  5. Ctrl-PageUp/PageDown: Previous/Next tab



Equipped with my new knowledge I entered my WordPress administration page and tried to start a blog entry. Don't try this. It simply doesn't work. I nearly tabbed my brain out of my head. So I needed a Firefox plug-in to save my day.

Searching for "keyboard" on the Firefox plug-in page revealed the NumberFox extension, which I couldn't get to work, and the Hit-A-Hint extension. Hit-A-Hint worked great for me, and I was able to do some serious browsing again. After a few minutes I stumbled across Mouseless Browsing. I hadn't found this one at first, because you usually search for "keyboard" and not "mouse" if you actually lack a mouse.

All of the solutions above share the same principle: for every link, button and edit field on a web page, a number is shown. If you enter this number in a special finite automaton mode, you can jump there directly without tabbing yourself to death.

For full blown keyboard control, Mouseless Browsing is better suited. You can use it from within edit fields and it has a consistent interface for switching between tabs. It even lets you select a link instead of following it, which makes Firefox show the link target in its status bar. But it feels a lot more sluggish than Hit-A-Hint.

Hit-A-Hint is a quick and small solution, but you have to re-enable it on every web page, and the default configuration uses "h" as the start key for the finite automaton, which is quite inconvenient if you want to enter "hello" into a text field.

While searching for Firefox extensions to make my mouseless life easier, I also found the English and German dictionaries that add inline spell checking support to Firefox. I hope my blog entries will gain some quality with regard to spelling...

Hey, I wrote quite a lot today. This shows that with the right combination of tools it's easy to blog, search the web and use dictionaries all at once without a mouse. Praise the inventors of finite automaton theory!

Wednesday, December 20, 2006

Top 10 Ways To Demotivate Your Programming Team

If you're in charge of an overly motivated programming team that meets all deadlines and produces high quality code, you may recognize that they don't really need you. Here are ten tips on how to regain control.

  1. Set up impossible deadlines!
    Repeated failure demotivates even the most steadfast member of your team. If you don't meet deadlines and don't do anything about it (like improving your software process), every new deadline will be a farce. You can be sure that in this case your team members will see every time estimation as torture, randomly guessing some numbers and hoping that this time everything will work out. But of course they'll know that it can't work (you set an impossible deadline, remember?), so they will be demotivated enough to get a nice vicious circle started.

  2. Let them work overtime!
    I wrote let them instead of make them intentionally. Often software developers actually like to program. To make sure that they will introduce a lot of errors, which will eventually demotivate them, you just have to let them work. And work. And work. After some hours they will get tired (but will not recognize this state themselves) and will just check in some messed up code. Time works for you on this issue. If they don't work overtime for fun, just make them (see 9 for a more humane way to achieve this).

  3. Don't allow breaks!
    This is tightly coupled to 2. If your employees work overtime but take a lot of breaks, you gain nothing. The geeky brain has surprisingly quick regeneration capabilities (especially if a lot of caffeine is involved). So you basically have to combine 2 and 3 to get the pack tired enough. This way you maximize the error rate, which will eventually yield the demotivation you aimed for.

  4. Place a ban on laughing!
    You can use this tip not only for programming teams. If you want creative workers to produce nothing useful, don't allow them to laugh, or even better: don't allow them to talk. When they're quiet and unhappy, you can be sure that they will not be able to write code.

  5. Break the coffee machine!
    Programmer (n): An organism that can turn caffeine into code.

  6. Don't shield them from the dirty daily business!
    Even the brains of programmers have limited capacity. So one easy way to demotivate your software developers is to burden them with tasks they hate. Tasks that have nothing to do with software development work best here. Make the developer lie to the customer about schedules, or make your team hold the customer's hand when they don't want to learn the basics of integrating your product into a complex environment. You can often achieve a nice demotivation by forwarding angry mails from other companies' CEOs to your development team or by letting them handle wobbly feature requests.

  7. Don't challenge them!
    Most developers are motivated when they can work on a real challenge. So don't let them. Of course, with software development being a challenge per se, this will inevitably lead to 6. But if you try to implement tip number 6, remember not to give them tasks that are too challenging.

  8. Underpay them!
    While paying more than a programmer is worth will usually not gain any additional productivity, you can easily get a good demotivation by paying less. The important thing is that the developer knows that he's underpaid - this maximizes the negative impact on his overall performance. You can easily drop productivity by a factor of two or three, depending on the basic motivation level of your employee.

  9. Bribe them!
    And do so generously! Promise them a lot of money if they meet some utterly impossible deadlines (see 1). You can be sure that this will motivate your programmer - to mess up. She will work overtime (see 2), sitting in front of her computer without a break (see 3), not accepting any interruptions by coworkers who want to cheer her up (see 4) or take her to the coffee machine (see 5). She will be concerned about the figures all the time to make sure that everything is all right (see 6).

  10. Infiltrate a team member who is demotivated anyway!
    If you don't want to use tips 1 to 9 for ethical reasons, you can always find people who are demotivated anyway. These are mostly people who don't really want to develop software and just do it for the money. Since it's mostly easy to make everything look bad, this is usually what they're really good at. And since they don't want to work, they'll pull everybody around them down into their little black hole of demotivation.

Sunday, December 17, 2006

Review: Domain-Driven Design

When I wrote my first computer program in BASIC I didn't know anything about software design. My variable names used to be mathematically short and contain a lot of numbers. I wrote tightly coupled functions without parameters and synchronized the whole mess by a myriad of unstructured global data.

This is what naturally happens when you grow software over a period of time. Short variable names seem to be a natural choice for human beings - I certainly didn't have the mathematical background at the age of thirteen to have picked up the habit there. You start with a seed and add functionality step by step, breaking up stuff at arbitrary boundaries.

I did appreciate the advantages of high level languages, though. When the 64k of BASIC data space and the interpreter's performance limited my possibilities with regard to game programming, I started using C as my primary language, with some inline assembler for the critical parts. At that time I created my first library package of graphical functions and prototyped some games built upon this foundation.

This was my first technical domain abstraction. Splitting up the software into a library component and a domain component came naturally back then. Of course the library design was crappy and the programs were still tightly coupled to the library implementation.

At that time I also tried to learn C++ from the Borland C++ programming reference, but I couldn't grasp the concepts. I probably didn't really try to, thinking that I already knew how to create working software - I had a nice working repository of gimmicks at that time.

The next critical step in my understanding of software design was to drop my arrogance.

One of my first software engineering classes at university covered the basics of object oriented software development in C++ using the STL. The first exercise we had to finish was a small program - I don't even remember what it was all about. A friend of mine and I worked on this exercise together. We both had programming experience and had a lot of fun hacking together a sharp-witted mess of code. The teaching assistant took his time commenting on every single part we had messed up, giving us an overall rating of zero points.

This was the day I started to realize that the main challenge in programming is not to get a working program, but to create a maintainable, readable and easy to debug program.

At university I took part in a few programming projects and started an open source project with a fellow student. During those experiences I developed a strong sense of how structured abstraction can simplify software development. Then I started to work as a software developer and architect. When trying to communicate my architecture and design ideas, I realized that there are more aspects to software design than abstraction, low coupling and high cohesion. But, again, I couldn't grasp the concepts.

"Domain-Driven Design" is all about this additional aspect of good software design - the domain language. You can build a piece of software with low coupling, high cohesion and superb technical abstractions, yet in a technical language that restricts software evolution.

Eric Evans proposes a domain language built upon phrases that describe the implementation of use cases in the software. This is a concept very similar to test driven development, where you write down how you want to use your objects before implementing them, and are able to change the design while writing the test.

But in "Domain-Driven Design" Evans demands a stronger concept: absorb the domain language from the domain experts. Build a coherent language to express concepts and design decisions.

The model couples the software to a pool of associations.

This is a new level of coupling encountered when building software. It is not related to the technical coupling of software modules. This is just what goes on in the brains of developers when confronted with software. The underlying model limits the way we think about software. This is not always bad, as a good model will bind the parts that are irrelevant during development and allows us to concentrate on the important aspects.

A domain driven model increases the probability that modules may be reused in the domain the software is developed for.

While many developers intuitively know what domain driven design is all about, Eric Evans manages to communicate the concepts in a way that makes them explicit, rather than implicitly hidden in the developer's train of thought.

The book is not a page-turner, since most of the concepts are not new per se. It lacks the driving spirit of Kent Beck or the deep insight of Fred Brooks' "The Mythical Man-Month". I can't say whether "Domain-Driven Design" can help a student boost design comprehension - I think it's rather hard to read if you lack the medium level knowledge the concepts in this book are built upon. But I recommend this book to those who already know intuitively what software design is all about and want to be able to put a finger on the "big picture".

Monday, December 11, 2006

NEPOMUK - Creating The Social Semantic Desktop

Yesterday the KDE Commit-Digest introduced a new project called NEPOMUK, which attracted my attention. Ignoring my buzzword alarm, I'd call it "Web 2.0 meets the Semantic Desktop". In a first step NEPOMUK implements a framework to store and manage metadata for files. The new idea is to bridge the gap between formal ontologies (the idea of the semantic web) and oceans of unstructured online data (folksonomies).

There's a lot of research on semi-formal methods going on, which looks very promising. Of course many projects tried to reach exactly the same goal, but I think that NEPOMUK has a real chance.

NEPOMUK starts with a simple integration as a metadata framework in KDE. This way it creates value right from the start. Since KDE 4 is going to be available for Windows, the metadata based semantics will be available there, too. It doesn't try to do everything at once like WinFS did. It is just an extension of the current file model we all know.

Desktop search engines like Strigi can integrate the new features of NEPOMUK and collaborate in a way that makes data access what WinFS wanted it to be. This way the desktop will be enhanced, not reinvented.

The next step is to integrate personal knowledge management with folksonomy based web applications. In my opinion this will be the great challenge, since computers have a hard time understanding semi-formal data. But since NEPOMUK is integrated into KDE, many developers will be available to create the semantic online integration step by step.

I'm very excited about this new project and will follow it closely - this could very well be the hour of birth of a new desktop concept: the social semantic desktop.

Sunday, December 10, 2006

Censorship Actionism In Germany

After the recent school shooting in Germany you see helplessness everywhere. But of course the important social and political organizations, from the Christian churches to the regulars' table, know the easy solution:

Just censor computer games and everything will be fine.

Of course killer games must be the reason people run around and kill. What else? Apart from original sin, humans are born without a spot. And history tells us that making things illegal that we don't approve of always works. Gain control by censorship. And the only solution to bad things happening is control. What else? Humans are bad by design, you have to control them.

Oh, wait, you say that computer games may not be the real cause, but merely a symptom - or, even worse, just a coincidence? You really say that most male teenagers play killer games on their computers? No, I just don't believe this. My son most definitely does not. We're good people.

I'd really like to live in the 1940s. There were no killer games. The world must have been a peaceful place back then.

When will people learn that censorship is not a solution? That censorship is a new problem artificially created by well-meaning people? When will people learn that a society must solve its problems from within?

I can think of a really good reason why people run amok: a cold, anonymous society where the only value is profit and where you're laughed at in public if you're a loser.

Friday, December 1, 2006

The Crazy Class Layout

When I read books about software development there is one thing that raises my blood pressure to clinically hazardous heights: the class layout.

Regardless of whether they're classics or quite modern, whether they use C++, Java or Python, whether the topic is software design or technical wisdom, they follow the same pattern for the layout of an object oriented class definition:

class FunnyExample
{
private:
    FunnyDataType funnyData;
    FunnyDataType2 moreData;

    void somePrivateFunction();

public:
    FunnyExample();
    ~FunnyExample();

    void theFirstPublicFunction();
};

The private data is at the top.

In all the books I read, I never found anybody explaining the reason for the private-at-the-top layout. In this blog entry I'll explain why I consider this "bad practice".

When a programmer opens a file like this, the first thing she sees is the data implementation details of the class. When she writes code using this class, she will probably not be able to forget about those implementation details and will use them subconsciously, which makes it harder to change those details afterwards.

This class layout contradicts the object oriented design idea: The class specifies an interface, a set of "messages" that can be sent to an object. In my experience this class layout often shows that the programmer didn't understand the object oriented idea. Classes are simply used as convenient data containers and methods are used as ways to manipulate their data.

But most of the time an object oriented class should represent a domain concept. Methods on this class should be ways to interact with a domain object. Data oriented design leads to a technical view of the solution rather than the problem, which will be hard to maintain later. Writing the public interface of a class at the top stresses the message oriented design.

Organize a class layout from most used to least used, from the perspective of a developer using the class.

This implies putting the private data at the end of the class definition, where it is hidden away if a developer just wants to use the class:

class FunnyExample
{
public:
    // this is what I see first when I open this source file

    FunnyExample();
    ~FunnyExample();

    void theFirstPublicFunction();

private:
    // if I just want to use the class, I can stop reading here

    FunnyDataType funnyData;
    FunnyDataType2 moreData;

    void somePrivateFunction();
};

Thursday, November 30, 2006

Why Do Programmers Hate To Throw Away Code?

I just stumbled across The Joel Test. The Joel Test is an easy best practice checklist you can use to optimize your productivity. When I read Joel's article I immediately gained a new insight into programming psychology.

Why do programmers hate to throw away code?


Like Joel says, programmers really tend to object to throwing away code. But how does this relate to the refactoring hype we've seen over the last few years? The point is, refactoring is probably often done badly. Refactoring is all about throwing away old code and making your new code reflect the insights you gained from problem domain analysis. Refactoring should be done thoroughly once a design problem is detected.

In Joel's opinion, writing specs helps to prevent refactoring. But what is a design spec if not code in an abstract high-level language? So changing the design is basically nothing but throwing away code - again.

Writing more specs leads to more documents that must be maintained in addition to the code. They become obsolete over time if they're not used on a daily basis. In my experience a lot of specs should be provided as comments in the code, or directly as code itself. Simply increasing the amount of specs produced will not achieve a better design. You need good developers who are able to design high quality code.

High quality code implements domain logic in a way that it can be read just like a spec.

Good programmers don't hate to throw away code - they are excited about throwing away code to achieve a better design.

Friday, November 24, 2006

Colinux on Windows Vista

This information is outdated! See Mobile Ubuntu Colinux Setup for more information, even if you're not using a "mobile" setup ;-)

Over the last years colinux has become one of my most valued tools for cross platform software development. Colinux is a cooperative Linux kernel running as a Windows process. This way you don't need to dual boot anymore. And while other virtualization techniques exist, colinux has some advantages that make it my top choice.

Over the last couple of days I switched to Vista on my workstation. Getting colinux to run was one of my major concerns and one of the reasons why I will have to wait before I can use a 64 bit Windows. After a little research on the web I found out that the tuntap driver that is bundled with the colinux 0.8 snapshot will render a Vista installation unusable. So at first I installed colinux without networking support. Then I downloaded the latest openvpn beta and installed the tuntap driver from their installation package. But when I booted into colinux, I couldn't get the network to work. On shutting down colinux, Vista even bluescreened. So tuntap is not the way to go (yet).

My next attempt was to use winpcap networking. I managed to get the network up and running, but I had some strange connection problems when connecting via ssh to my colinux. At some point the ssh connection simply timed out. After a little experimentation I found out that I could open a tcp connection to the colinux and could even send data to a running netcat, but I couldn't get any data back.

Then I checked if the problem still existed when I used the Microsoft Loopback Adapter in a setup very similar to this colinux networking howto. Surprisingly the network was not only faster, but also very stable. I still don't know why the winpcap solution didn't work reliably over the real network device - connections from/to outside my Windows box work without problems.
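For reference, the setups described above differ in a single eth0 line of the colinux configuration. The adapter names below are placeholders from my own setup, and the exact option syntax may vary between colinux versions - treat this as a sketch, not gospel:

```
# tuntap driver (broke my Vista install):
eth0=tuntap

# winpcap bridging on the real network device (unreliable here):
eth0=pcap-bridge,"Local Area Connection"

# winpcap bridging on the Microsoft Loopback Adapter (fast and stable):
eth0=pcap-bridge,"Loopback Adapter"
```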

After setting up colinux with cofs as my cross compilation toolchain, I was ready to use Vista as my primary development platform. So far I'm quite impressed. Vista is the first Windows where I can easily work as a "normal" user, entering the administrator password only when I need more access. This is a big security plus.

Thanks to the user account protection old programs like teraterm still work. It took some time until I figured out why nothing changed when I was editing the teraterm.ini file in the "C:\Program Files\" folder. Since teraterm opens this file writable at startup, Vista silently set up a copy of the file in my local user account folder. This way you can edit setting files for legacy programs without needing superuser privileges.
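In case you want to find or edit that silently created copy directly: Vista's file virtualization redirects such writes into the user's VirtualStore folder, so the teraterm.ini copy should end up somewhere like this (path reconstructed from how the feature works; adjust user and program folder names):

```
C:\Users\<your user>\AppData\Local\VirtualStore\Program Files\teraterm\teraterm.ini
```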

While setting up Visual Studio 8, I had one more encounter of the third kind with rights management. To make debugging easier, I'm inserting information about my classes in "..\Microsoft Visual Studio 8\Common7\Packages\Debugger\autoexp.dat". I edited this file as superuser, since Visual Studio doesn't need to write to it. But nothing happened. Then I checked the file permissions and it became evident that the file was not readable by my user. Changing the permissions fixed this problem, too.
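For anyone trying the same thing: entries in autoexp.dat live in its [AutoExpand] section and map a type name to the members the debugger should display. A hypothetical entry (the FunnyExample type and its members are made up) looks roughly like this:

```
; [AutoExpand] section of autoexp.dat -- the general format is
;   TypeName=[text]<member[,formatSpecifier]>
FunnyExample=data=<funnyData> more=<moreData>
```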

Now I'm up and running and still quite impressed by Windows Vista. If Vista shows the same progress that Windows XP showed during its lifecycle, it will become a nice operating system for software development. Um, and yes, it was only a few years ago that I preferred linux for my daily work - but back then Visual Studio 8 wasn't available, which is still my killer application for C++ development.

Update:
An updated article on a Mobile Ubuntu Colinux Setup for my laptop is available.

Saturday, November 18, 2006

Review: The Pragmatic Programmer

When I was a teenager I used to sit at my computer many hours a day, pondering interesting computing problems, like writing a magic eye 3d creator or a cool battletech computer game. The first important computer book I read (after the Microsoft BASIC handbook) was "Spiele programmieren mit QBASIC" (Programming games in QBASIC) by Lars Hennigsen. This book explained the basic concepts of computer programming and mathematics in a way that fascinated me as a fifteen-year-old. It was by no means the best reference on computer programming available, but it multiplied my interest in programming by giving me just the right information to get me started.

During my studies of computer science I mostly read scientific articles and lecture notes written by scientists for scientists. This information was invaluable for laying a foundation of technical knowledge. While studying I learned mostly by trying out new interesting things, coding open source software and having endless discussions with fellow students.

When I started to work I felt confident in my field. I expected to learn a lot on a learning-by-doing basis, or by listening closely to the veteran software developers around me. After two years of architecting and coding in the real world, my enthusiasm was mixed with the exhausting feeling that you just can't tackle the complexity of software development effectively.

When I visited New York City at Christmas '05 I spent a lot of my time rummaging through bookstores full of English books (which are rather sparse here in Germany). There I stumbled across a copy of "The Pragmatic Programmer". Since I had already read about the title somewhere on the internet and didn't have anything to read at the time, I bought a copy.

I read this book on my flight from New York to Munich in one go. Andrew Hunt and David Thomas managed to light that spark of hope inside me that there may be a silver bullet after all. They draw an abstract view of software development; they help us step back to take a look at ourselves and how we're doing things. They explain the world around the software developer and show why it is important to explain the process, and not only the tools used to produce code. They inspired me to read "Extreme Programming Explained" by Kent Beck, to join the IEEE and utilize their library to learn from the experience of fellow software developers. As with my first computer book, "The Pragmatic Programmer" is by no means as insightful as "The Mythical Man Month" or as complete as "Code Complete", but it is inspiring in its mission to make programmers deliver better software to the world.

I'd definitely recommend this book to any programmer who hasn't read a book about software development for some time and is just not satisfied with the way software is created nowadays.

Tuesday, November 14, 2006

e-Petition against election machines

In their last elections, the USA managed to show the world that a missionary is not necessarily a role model. Germany is not the showpiece of political leadership, but at least we've got a working democracy - until recently, when they started to think about introducing 'election machines'.

The problem with election machines is that you can always simply 'switch' an election machine for a different machine that looks exactly the same. This way nobody (not even a highly paid technical expert) can say for sure that a vote is counted correctly without taking the hardware apart.

Of course such high level attacks can be detected afterwards by inspecting all machines. And then we can just redo the election - that would surely save us some big money and would motivate many more people to vote.

And then there's the 'insider attack'. Picture an underpaid software developer working for the election machine company who needs the money to pay for an expensive medical operation for her terminally ill son. She has all the cryptographic keys and expert knowledge of the operational tests done during and after the election, so she can modify the program 'just a little' - and Mr. Money becomes chancellor.

And even if all those attacks could be eliminated - only a cryptographic expert would be able to understand and check those machines. The average German is not a mathematical genius. This will certainly boost voter participation.

Some Germans obviously remembered that democracy is all about participation of the people and filed an e-Petition against election machines.

Friday, November 3, 2006

Google codesearch - a new way to track copyright violations?

When I first tried google codesearch I was impressed. But then I tried to enter " sco " ibm. Follow the link and look at the first entry found - the file regexpI.h and read the comment at the top.

Ok, this is just a small header and I couldn't find more information on SCO and IBM. But, you can still search for disclosure agreement...

Is this a new way to track copyright violations?

Saturday, October 28, 2006

Testing doesn't increase quality

At first glance this is a rather surprising thought. But the explanation McConnell gives in "Code Complete" is obvious: Testing doesn't include debugging or restructuring the code. So testing technically doesn't change the code at all. Finding and fixing the defects hopefully does increase code quality, though.

Speaking about tests, there's another interesting observation in McConnell's writing: testing typically finds less than 50% of all defects. So regardless of how much you test, you're still doomed.

Since one of my mental quirks is to state the extremes and think about logic afterwards, what if we don't test code, but just write it - the worst thing we could get is about 1.8 times the defects. The code is buggy anyways, so who cares? Go banana software! We just save a lot of development time (for managers: read "money") and our time to market rocks (did I hear anybody say the name of some company in Redmond?). We just don't test. Remember: The process is as easy as 1: write software, 2: release it to the customer without testing, 3: yea, um, right.
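By the way, the factor of 1.8 follows directly from the detection rate above - a back-of-the-envelope calculation, assuming testing catches about 45% of all defects:

```
defects shipped without testing:  D
defects shipped with testing:     D * (1 - 0.45) = 0.55 * D

ratio:  D / (0.55 * D)  =  1 / 0.55  ~  1.8
```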

The point is that 3 is the magic number: Software is not finished once it's released to the customer. In fact, many books state that most of the software development cost is spent after the program is in outer space. This is called "Software Maintenance", which is just a nice term for "fixing defects that shouldn't have been there in the beginning". Now what does this mean for our banana software?

For 40 years software development textbooks have feverishly tried to get the message out that fixing a defect gets more expensive the later it is found in the software life cycle. Why's that? Simple. Did you ever get an error report from the customer where you needed 4 phone calls and half an hour of talking to different people before figuring out that "The Big Red Button isn't working" means that the program just segfaults every time the customer presses the cancel button - while it works perfectly on your own computer? Now you spend a day or two trying to get an exact copy of the customer's configuration, even installing Windows 95 (just to be sure), before finding out the reason for the problem. And then you know what the problem is, but you still have to fix it. See the big glowing blue productivity cloud going <poof> over your head.

Now if we had found this error early in a testing stage, it would have been a lot cheaper to fix. So taking into account what we learned earlier, namely that testing doesn't increase quality, what does testing yield? Here's the answer: Testing gives me the opportunity to gain productivity by fixing problems found now [1]. Later the problem will stay clear of my neck and I'm free to do other things, like not staying at work late (again).

The other extreme to completely bananaing your software is found in Kent Beck's "Extreme Programming" practices: Test Driven Development. Although test driven development is more than just "testing" [2], it addresses the old law of software engineering: In order to boost (mean) productivity you have to boost quality. And improving quality by fixing defects at the earliest possible time is the cheapest you'll get.

In my opinion there's still a big BUT. Test driven development increases productivity, BUT only in the long run. You have to pay up front. This means that more capital is bound at an early stage of development. Kent's solution is to include the customer in the development cycle and to release as early and as often as possible to get feedback and minimize risk. With my limited experience in software products, I really can't decide if this approach is always feasible, or if time to market constraints sometimes give a project enough trajectory to compensate for the increased development cost. If early time to market limits competition, history tells us that reality is a lot more complicated than textbook-optimized methodology.


[1]: Erdogmus, H.; Morisio, M.; Torchiano, M.: IEEE Transactions on Software Engineering, Volume 31, Issue 3, March 2005, pages 226-237. DOI 10.1109/TSE.2005.37
[2]: Janzen, D.; Saiedian, H.: IEEE Computer, Volume 38, Issue 9, September 2005, pages 43-50. DOI 10.1109/MC.2005.314

Wednesday, October 25, 2006

The Nerd and the Manager


Since code is the primary output of construction, a key question in managing construction is "How do you encourage good coding practices?" In general, mandating a strict set of standards from the top isn't a good idea. Programmers tend to view managers as being at a lower level of technical evolution, somewhere between single-celled organisms and the woolly mammoths that died out during the Ice Age, and if there are going to be programming standards, programmers need to buy into them.
- From Code Complete by Steve McConnell

Like Frederick P. Brooks, Kent Beck and Alfie Kohn, Steve McConnell emphasizes the human nature of the software developer, who has her own values (a strong distaste for being managed and the ability to create lines of code from nothing but black coffee). All this leads to a developer-centric view of the development process that is proposed by Kent Beck as the silver bullet with which we can finally kill Brooks' werewolf.


But, as the nerd is inherently weak on the communication side of life, it's not easy to create a developer-centric environment. Developers tend to fight holy wars among themselves about how the top notch development workplace should look (granted, apart from two or three zillion-gigahertz power machines).


So now I'll have to find a book that tells me how to get deep emotional information from my fellow code wizards. Or I try to learn pi by heart.

Tuesday, October 3, 2006

KDE 4 on Windows revisited

Today I took another shot at the KDE4 on windows target.

First, I found this link which tells you how to keep up to date with qt-copy: just download the correct snapshot from the troll's ftp site.

I downloaded the 10/02 snapshot and applied the latest msvc 2005 patches from the qtwin sourceforge project. Then I configured and built qt. Everything went fine.

Now back to building qtdbus. As mentioned in the kdelibs.com tutorial, I checked out qdbus from qt-copy in the kde svn and replaced the qdbus dir in the qt package. The problem is that DBUSDIR is not recognized any more in the build. So I opened qdbus/src/src.pro in my editor, and put in a few lines I found in an earlier version of that file in the subversion repository.

My win32 part of src.pro now reads:

win32 {
+ INCLUDEPATH += $$(DBUSDIR)
+ LIBS += -L$$(DBUSDIR)/lib
LIBS += -lws2_32 -ladvapi32 -lnetapi32
}

Then I started qmake -spec win32-msvc2005 -recursive and the build started. But there was yet another problem: the link of qdbus.exe was missing the xml library. So I added
QT = core xml
to qdbus/tools/qdbus/qdbus.pro.
Another nmake and everything builds fine. Let's see what's next...

Tuesday, September 26, 2006

ubuntu edgy + colinux 0.8.0

I finally managed to get colinux 0.8.0 to run with ubuntu edgy (using upstart).
After browsing the mailing lists I found out that hotplug is disabled in the stock 0.8.0 test release.
So I downloaded the 2.6.17 installer from Henry Nestler - and everything works.

Sunday, September 17, 2006

KDE4 on windows

There's quite a lot going on in the kde on windows world. kdelibs.com features a nice howto for kde 4 development on windows - unfortunately it is already outdated. A newer qt snapshot is required and paths for qdbus have to be hardcoded in the makefile (see this mailing list entry). As much as I'm looking forward to kde 4 on windows, with all this work happening everything's just moving too fast for me to stay tuned. So my xpertmud porting project has to wait...

Saturday, September 2, 2006

Windows PowerShell

I recently discovered the Windows PowerShell. This new shell has some really nice concepts. Unfortunately right now the documentation is rather sparse and you have to search a lot before being able to:

sort the process list by reversed process names:
ps | sort-object @{ Expression = {
    $chars = $_.name.ToCharArray()
    [System.Array]::Reverse($chars)
    New-Object String(,$chars)
} }

Unfortunately the reverse method works in-place, so it took me some time to get this rather easy example working, but it shows that the .Net library is readily accessible within msh.
Especially the piping of real .Net objects is a wonderful innovation in a shell context. In the Unix world the most daunting part of shell scripting is often finding the correct language independent regular expression to extract data from the output of the previous process. msh introduces yet another programming language to the scripting universe, but at first glance the designers did a very fine job - the orthogonal concepts are ready for the textbook.