Book: Leading Change

Leading Change is about how to implement significant changes in organizations. It discusses the Eight-Stage Process of Creating Major Change:

  1. Establishing a sense of urgency
  2. Creating the guiding coalition
  3. Developing a vision and strategy
  4. Communicating the change vision
  5. Empowering broad-based action
  6. Generating short-term wins
  7. Consolidating gains and producing more change
  8. Anchoring new approaches in the culture

These actions require leadership more than management, to define what the future should look like, align people with this vision, and inspire them to make it happen despite obstacles.

Establishing a sense of urgency is needed to overcome complacency, which can be caused by

  1. The absence of a major crisis
  2. Too many visible resources
  3. Low overall performance standards
  4. Organizational structures that focus employees on narrow functional goals
  5. Internal measurement systems that focus on the wrong performance indexes
  6. A lack of sufficient performance feedback from external sources
  7. A kill-the-messenger-of-bad-news, low-candor, low-confrontation culture
  8. Human nature, with its capacity for denial, especially if people are already busy or stressed
  9. Too much happy talk from senior management

Creating urgency can be done by attacking each of these, but these forces are not to be underestimated.

A guiding coalition is a powerful coalition that can act as a team. It is needed for introducing change, since no one individual has the information needed to make all major decisions, or the time and credibility needed to convince lots of people to implement those decisions. The following characteristics are essential for individuals in a guiding coalition:

  1. Position power to prevent others from blocking progress
  2. Expertise to make informed, intelligent decisions
  3. Credibility to be taken seriously by others
  4. Leadership to drive the process

Make sure to avoid individuals with egos that fill up the room. Also avoid so-called snakes, people who create enough mistrust to kill teamwork.

To make the guiding coalition into a team, you have to create trust (through lots of joint activities) and develop a common goal (that is sensible to the head and appealing to the heart).

Developing a vision simplifies the detailed decisions, motivates people to take action in the right direction, and coordinates the actions of different people. An effective vision is:

  1. Imaginable: conveys a picture of what the future will look like
  2. Desirable: appeals to the long-term interests of employees, customers, stockholders and other stakeholders
  3. Feasible: comprises realistic, attainable goals
  4. Focused: is clear enough to provide guidance in decision making
  5. Flexible: is general enough to allow individual initiative and alternate responses in light of changing conditions
  6. Communicable: is easy to communicate; can be successfully explained in 5 minutes

The most effective transformational visions:

  1. Are ambitious enough to force people out of their comfort zones
  2. Aim in a general way at becoming better and better at lower and lower costs
  3. Take advantage of fundamental trends, like globalization and new technology
  4. Exploit nobody and therefore have a certain moral power

Communicating the change vision requires:

  1. Simplicity: all jargon must be eliminated
  2. Metaphor, analogy, and example: a verbal picture is worth a thousand words
  3. Multiple forums: big and small meetings, memos and newspapers, formal and informal interaction
  4. Repetition: ideas sink in deeply only after they have been heard many times
  5. Leadership by example: behavior from important people that is inconsistent with the vision overwhelms other forms of communication
  6. Explanation of seeming inconsistencies: unaddressed inconsistencies undermine the credibility of all communication
  7. Give-and-take: two-way communication is always more powerful than one-way communication

Empowering employees for broad-based action faces these barriers:

  1. Formal structures make it difficult to act
  2. A lack of needed skills undermines action
  3. Personnel and information systems make it difficult to act
  4. Bosses discourage actions aimed at implementing the new vision

Generating short-term wins is also essential for major change. A win has to be:

  1. Visible: so people know it’s not just hype
  2. Unambiguous
  3. Clearly related to the goal

If you get them right, short-term wins:

  1. Provide evidence that sacrifices are worth it
  2. Reward change agents with a pat on the back
  3. Help fine-tune vision and strategies
  4. Undermine cynics and self-serving resisters
  5. Keep bosses on board
  6. Build momentum

You need to plan for these short-term wins, since they don’t just happen. Sometimes people don’t plan short-term wins, because:

  1. They are overwhelmed by the change process
  2. They don’t believe one can produce major change and achieve excellent short-term results
  3. They lack sufficient management skills

Consolidating gains and producing more change is needed because resistance is always waiting to re-assert itself. This resistance can come from interdependency, where a change in one part requires changes in many other parts. Attacking resistance results in:

  1. More change, not less: The guiding coalition uses the credibility afforded by short-term wins to tackle additional and bigger change projects
  2. More help: Additional people are brought in, promoted, and developed to help with all the changes
  3. Leadership from senior management: Senior people focus on maintaining clarity of shared purpose for the overall effort and keeping urgency levels up
  4. Project management and leadership from below: Lower ranks in the hierarchy both provide leadership for specific projects and manage those projects
  5. Reduction of unnecessary interdependencies: To make change easier in both the short and long term, managers identify unnecessary interdependencies and eliminate them

Anchoring new approaches in the culture is the final step in the change process. Culture refers to norms of behavior and shared values among a group of people. Norms of behavior are common or pervasive ways of acting that are found in a group and that persist because group members tend to behave in ways that teach these practices to new members, rewarding those who fit in and sanctioning those who do not. Shared values are important concerns and goals shared by most of the people in a group that tend to shape group behavior and that often persist over time even when group membership changes.

Culture is a powerful thing, because:

  1. Individuals are selected and indoctrinated so well
  2. Culture exerts itself through the actions of hundreds or thousands of people
  3. All of this happens without much conscious intent and thus is difficult to challenge or even discuss

Anchoring change in a culture:

  1. Comes last, not first
  2. Depends on results
  3. Requires a lot of talk
  4. May involve turnover
  5. Makes decisions on succession crucial

Book review: The Goal

The Goal, A Process of Ongoing Improvement, by Eli Goldratt is a business novel that introduced the Theory of Constraints.

Being a novel means that this theory is presented almost casually as a story. We follow Alex Rogo, a plant manager. His plant is in trouble. Big trouble. Alex finds out that it will be closed in three months if things don’t improve. Alex runs into his old physics teacher, Jonah, who starts helping him. Not by telling Alex what to do, but by asking questions. Questions that can only be answered by challenging fundamental assumptions about running the plant. By letting go of these false assumptions, Alex finds ways to dramatically improve his plant’s operations. Not only is the plant not closed down, Alex gets to run the entire division.

I don’t read much fiction. I have too little time as it is to read all the non-fiction that I want. So why did I read this book? Well, the story I outlined above isn’t what this book is about at all. This book tries to teach us to not lose track of the real goal, i.e. making money in the case of an enterprise. We make more money by

  1. Increasing throughput
  2. Decreasing inventory
  3. Decreasing operational expenses

Preferably all of those at the same time.
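
To see why these are the right levers, it helps to recall how the book defines them: throughput is the rate at which the system generates money through sales, inventory is the money the system has invested in things it intends to sell, and operational expense is the money spent turning inventory into throughput. Put roughly:

  Net profit = Throughput - Operational expense
  Return on investment = (Throughput - Operational expense) / Inventory

so pushing throughput up while pushing inventory and operational expense down improves both measures at the same time.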

The primary tool to help us with this is the Theory of Constraints. This is a scientific, i.e. evidence-based, approach to process improvement, based on the assumption that any process is limited by some constraints. To improve the process, you need to follow the 5 focusing steps:

  1. Identify the constraint
  2. Decide how to exploit the constraint
  3. Subordinate all other processes to the above decision
  4. Elevate the constraint
  5. If, as a result of these steps, the constraint has moved, return to Step 1.

It all starts with identifying the constraint. Visualizing the process helps here, e.g. using a Kanban board. We may need to do some root cause analysis to find the real constraint underlying the symptoms that our visual tool shows.

Exploiting the constraint means making sure that the constraint’s capacity isn’t wasted. For instance, if QA is the constraint in a software development process, then we must make sure they’re not sitting idle. One way of doing that would be to decrease iteration size, which will give QA a more even supply of work. Linking back to the 3 elements that help achieve the goal, we see that this is a way of decreasing inventory (of coded, but untested features).

Exploiting the constraint often requires the use of buffers before the constraint, to make sure it doesn’t run out of things to do. For instance, Scrum assumes the development team is the constraint in the organization, and uses a buffer called a backlog to keep it busy.

Subordinating all processes to the decision to exploit the constraint is not easy. It may mean you need to take counter-intuitive actions, such as keeping non-constraint resources idle. For instance, coding more features is not going to help us make more money if QA can’t keep up with testing. So if QA is using up all its capacity, then no more features should be coded. If the coders have more capacity, they should do something else than coding new features. I’m sure that raises some eyebrows somewhere, but it makes perfect sense when you think about it.

Elevating the constraint means improving the capacity of the constraint. For instance, we could hire more people for QA. Better yet, we could have coders use their idle time to write automated tests, which will prevent defects reaching QA, which means less work for QA per feature, which means more features per unit of time can be tested. We know this is a better solution when we compare the 2 proposals against the 3 ways of reaching our goal: the first proposal increases operational expenses, while the other doesn’t.

The final step embodies the “ongoing” part of process improvement. Once QA finds fewer bugs because of automated tests, they may stop being the constraint. That is good news, but no reason to sit back and relax. By assumption, something else is now the constraint, so we repeat the whole process for the new constraint.

The 5-step process is quite general. The Goal uses it in the context of a manufacturing plant, but the examples I gave talk about software development. This generality makes it a very useful tool, so if you haven’t read the book, I suggest you do. It’s a decent read as a novel as well.

Waterfall vs. Agile

I’ve been a fan of Agile methodologies for quite some time now. As an Agilist, I would scoff at the Waterfall process I was taught during my studies. I did read a couple of times that the original paper introducing the Waterfall model wasn’t really supportive of it at all, but I’d never read that paper myself. Until now.

Here’s what I found its author, Dr. Winston Royce, had to say.

Attitude
The heart of software development is analysis and coding, “since both steps involve genuinely creative work which directly contributes to the usefulness of the final product.” But for larger software systems, they are not enough. “Additional development steps are required […] The prime function of management is to sell these concepts to both groups and then enforce compliance on the part of development personnel.” The groups mentioned are customers and developers. Wow, not really people over processes, huh?

There are big problems with Waterfall
Royce then goes on to introduce the other steps, ending up with what we now call Waterfall. Right after that he adds feedback loops between each step and its predecessor. The caption to this figure says “Hopefully, the iterative interaction between the various phases is confined to successive steps.” Immediately following that, he points out a problem with this process: “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps”.

But there is a much worse problem. “The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. […] If these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. […] In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.” Yes. Been there, done that.

But these can be fixed
Stunningly, though, Royce goes on to claim “However, I believe the illustrated approach to be fundamentally sound.” We just need to tweak it a bit more: add a preliminary design phase before analysis, document the design, do it twice (he means simulate first, but others refer to this as “plan one to throw away”), look closely at testing, and involve the customer.

And these fixes look a lot like Agile
The first trick, doing some design before analysis, is also common in Agile methodologies. However, we usually don’t single out analysis and design, but apply the trick to all the phases. That’s how we end up with Behavior Driven Development.

Royce turns out to be a big fan of documentation: “In order to procure a 5 million dollar hardware device, I would expect that a 30 page specification would provide adequate detail to control the procurement. In order to procure 5 million dollars of software I would estimate a 1000 page specification is about right in order to achieve comparable control”. Why is documentation so important? One of the reasons is that “during the early phase of software development the documentation is the specification and is the design.” Agilists would rather argue that automated tests are both the documentation and the specification and drive the design. Royce could never have thought of that, since testing in his mind occurred at the end and was to be performed manually.

The do it twice trick is also used a lot in Agile. We call it a spike.

For testing, Royce notices that a lot of errors can be caught before the test phase: “every bit of an analysis and every bit of code should be subjected to a simple visual scan by a second party who did not do the original analysis or code”. Agilists would agree that pair programming is very useful. Also, Royce advises to “test every logic path in the computer program at least once”. He understands it is difficult, but should be done anyway. I agree that we should have (nearly) 100% test coverage, and TDD gives us just that.

For customer involvement, Royce notes that “for some reason what a software design is going to do is subject to wide interpretation even after previous agreement. […] To give the contractor free rein between requirement definition and operation is inviting trouble.” I don’t see how he can maintain this and still be a fan of written documentation. But I am with him in seeing the value of close collaboration with the customer.

So there is no dichotomy
In summary, the author of the Waterfall process clearly saw some problems with that approach. He even identified some solutions that look remarkably like what we do in Agile methodologies today. So why don’t we end this Waterfall vs. Agile false dichotomy and from now on talk just about software development best practices? Make progress, not war.

By the way, what I find amazing is that somehow people managed to get the Waterfall process out of this paper, but not the problems and solutions Royce presented. And it’s almost criminal that the Waterfall process is still taught in universities as a good way to do software development. Without the above fixes, it’s clearly not.

Using factory classes in Ant tasks

So you have this nice factory class that prevents your client code from knowing the implementation class of the instances it needs to create and that lets it program to an API only.

Of course, at some point somebody needs to know the implementation class. Since the factory is the one creating instances, it either needs to know itself or be told. And since the factory is probably in the same package as the API, it shouldn’t know the implementation class itself, since that would tie the API package to the implementation package. So the factory needs to be told:

import java.lang.reflect.Constructor;

public class MyFactory {

  private static Class implementationClass = null;

  private MyFactory() {
    // Utility class
  }

  /**
   * Create a new instance.
   * @param data Data needed to initialize the instance
   * @return The newly created instance
   */
  public static MyInterface newInstance(final Object data) {
    assertImplementationClass();
    final Class clazz = implementationClass;
    MyInterface result = null;
    if (data == null) {
      try {
        // Use the no-argument constructor
        final Constructor constructor = clazz.getConstructor();
        result = (MyInterface) constructor.newInstance(new Object[0]);
      } catch (final Exception e) {
        result = null;
      }
    } else {
      // Look for a one-argument constructor that accepts the data
      final Constructor[] constructors = clazz.getConstructors();
      for (int i = 0; result == null && i < constructors.length; i++) {
        final Constructor constructor = constructors[i];
        if (constructor.getParameterTypes().length == 1
            && constructor.getParameterTypes()[0].isInstance(data)) {
          try {
            result = (MyInterface) constructor.newInstance(
                new Object[] {data});
          } catch (final Exception e) {
            result = null;
          }
        }
      }
    }

    return result;
  }

  /**
   * Register a class that implements the interface.
   */
  public static void registerImplementation(
      final Class implementation) {
    implementationClass = implementation;
  }

  /**
   * Unregister the implementation class.
   */
  public static void unregisterImplementation() {
    implementationClass = null;
  }

  private static void assertImplementationClass() {
    if (implementationClass == null) {
      throw new IllegalStateException(
          "Implementation class not set");
    }
  }

}

Now, who’s going to tell the factory what class to instantiate? There must be some entry point in the application where this happens. In your tests (you do write tests, right?), you can do that in the setUp method. In a web application, you can do that in a ServletContextListener.
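
For instance, in a JUnit test the registration could look something like this (MyImplementation stands for whatever concrete class you want the factory to produce; the test class name is made up for illustration):

import org.junit.After;
import org.junit.Before;

public class MyFactoryClientTest {

  @Before
  public void setUp() {
    // Tell the factory which class to instantiate
    MyFactory.registerImplementation(MyImplementation.class);
  }

  @After
  public void tearDown() {
    // Leave a clean slate for other tests
    MyFactory.unregisterImplementation();
  }

  // ... tests that call MyFactory.newInstance(...) go here

}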

Ant

But what about in Ant tasks? You could create an Ant task that does just that and call it from a dependent target:

  <target name="--init-factory" unless="factory.inited">
    <property name="impl.class" 
        value="com.mycompany.myapp.MyImplementation"/>
    <taskdef name="register-impl"
        classname="com.mycompany.myapp.ant.RegisterTask" 
        classpath="..."/>
    <register-impl classname="${impl.class}"/>
    <property name="factory.inited" value="true"/>
  </target>

However, that doesn’t work. So what’s up?

Debugging Ant tasks

Our Ant task seems so simple that it is hard to see what could be wrong with it. So we want to debug it and find out.

You can debug Ant tasks by setting the environment variable ANT_OPTS:

SET ANT_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,address=6000,server=y,suspend=n

Now when you run your Ant script, you can attach your debugger on port 6000. You may want to use the input task to have the build wait while you attach your debugger.

Debugging reveals something interesting: The registerImplementation method does get called with the right parameter, but when newInstance is called, implementationClass is still null. Apparently Ant is doing some fancy classloader stuff that gets in our way.

The solution is to have the Ant task set a system property that the factory uses:

  private static void assertImplementationClass() {
    if (implementationClass == null) {
      final String className = (String) 
          System.getProperties().get(IMPLEMENTATION_CLASS_PROPERTY);
      if (StringUtils.isBlank(className)) {
        throw new IllegalStateException("Implementation class not set");
      }
      try {
        registerImplementation(Class.forName(className));
      } catch (final ClassNotFoundException e) {
        throw new IllegalStateException("Invalid implementation class: " 
            + className + "\n" + e.getLocalizedMessage());
      }
    }
  }
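
For completeness, here’s a sketch of what the Ant task itself could look like. I’m assuming the property name is exposed as a public constant IMPLEMENTATION_CLASS_PROPERTY on MyFactory (the factory above reads it, but its declaration isn’t shown):

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class RegisterTask extends Task {

  private String classname;

  public void setClassname(final String classname) {
    this.classname = classname;
  }

  @Override
  public void execute() throws BuildException {
    if (classname == null) {
      throw new BuildException("The classname attribute is required");
    }
    // System properties are JVM-wide, so they cross Ant's classloader
    // boundaries where a static field does not
    System.setProperty(MyFactory.IMPLEMENTATION_CLASS_PROPERTY, classname);
  }

}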

OSGi & Maven & Eclipse

If you’re involved in a large software development effort in Java, then OSGi seems like a natural fit to keep things modular and thus maintainable. But every advantage can also be seen as a disadvantage: using OSGi you will end up with lots of small projects. Handling these and their interrelationships can be challenging.

Enter Maven. This build tool makes it a lot easier to build all these little (or not so little) projects. Which is a necessity, since a command line driven build tool is essential for doing Continuous Integration. And we all practice that, right?

However, as a developer it’s a pain to keep switching between your favorite IDE and the command line. Not to worry, Eclipse has plug-ins that handle just about any situation. Using M2Eclipse, you can maintain your POM from within the IDE.

But an Eclipse Maven project is not an Eclipse OSGi project. For handling OSGi bundles, one would want to use the Eclipse Plug-in Development Environment (PDE) with all the goodies that brings to OSGi development. There is, however, a way to get the best of both worlds, although it still isn’t perfect, as we will see shortly.

The trick is to start with a PDE project. Make sure to follow the Maven convention for sources and classes and to use plain OSGi (so you’re not tied to Eclipse/Equinox).

Once you’ve created the project, you can add Maven support. Make sure to use the same identification for Maven as for PDE.

Now you have an Eclipse project that plays nice with both PDE (and thus OSGi) and Maven. The only downside to this solution is that some information, like the bundle ID, is duplicated.

Ubuntu 9.10 & Eclipse 3.5

I recently upgraded Ubuntu to its latest version (9.10, Karmic Koala) and it works great so far. Except for Eclipse.

I ran Eclipse 3.5 (Galileo), and apparently SWT in that version does something wrong in communicating with GTK. The end result is that buttons don’t react to mouse clicks anymore. Rather annoying. Luckily, there is a solution available. Alternatively, you can use the latest Eclipse 3.6 (Helios) milestone.

But that wasn’t the end of it. Eclipse would now perform extremely slowly on a variety of tasks. It turns out that this is caused by Eclipse now running on the GCJ Virtual Machine. I simply uninstalled everything with “gcj” in its name using Synaptic and all was well again.

JavaFX for GNU/Linux has arrived

Finally, the time has come: JavaFX is now supported on both GNU/Linux and Solaris.

It’s not really advertised, though, so here’s how to get it:

  • Go to the JavaFX website.
  • Click the Download now button. Yes, the one that reads JavaFX 1.1 SDK.
  • Click the JavaFX 1.2 SDK option, and click Download.
  • You’ll be prompted to download javafx_sdk-1_2-linux-i586.sh. Save it somewhere convenient.
  • Make the downloaded file executable with chmod +x javafx_sdk-1_2-linux-i586.sh
  • Run the shell script with ./javafx_sdk-1_2-linux-i586.sh
  • Page through the annoying legal stuff by pressing Space repeatedly. At the end, type yes.
  • You now have a javafx-sdk1.2 directory that you can play with.

Enjoy!

Oh, and in case you have some JavaFX code from pre-1.2 versions, here’s how to migrate it.

Update: There is also a new Eclipse plugin. Binaries only, the source will have to wait until it gets transferred to eclipse.org.

Supporting multiple versions of a data model

As an application evolves, its data model often does too. If you control both, this usually isn’t a problem. However, sometimes your power to change the data model is restricted. This happens, for instance, when the data model is published, and others may depend on it. An extreme case of this is when the data model is defined by another organization as, for example, with S1000D.

Having no absolute control over the data model isn’t much of a problem if you can leave one version behind completely, and move on to the next. But often you won’t be so lucky. I know I’m not: we need to support both S1000D 3.0 and 4.0.

There are different ways in which you can support multiple data model versions. The one I’m concerned with here is when your application needs to support multiple data models at the same time with the same code. That leaves out alternatives like having multiple branches of your code for the different data model versions.

One trick that can come to the rescue here is the Once And Only Once rule (also called the DRY principle). When applied to creating instances, this leads to the Factory pattern. If you have all your instances created by a factory, then there’s only one place where you need to decide which class (e.g. the 3.0 or 4.0 version) to instantiate. If those decisions are similar for all the classes in your model, then you could even extract them into a common base class for your factories.

Most of the time, the different versions of the data model will share a lot of similarities. It is tempting to extract those into a common base class. For example, in S1000D there is a type called descriptive data module, and you could derive DescriptiveDataModule30 and DescriptiveDataModule40 from DescriptiveDataModule.

But when the objects in your data model have inheritance relationships themselves, that can get ugly very fast. For instance, a descriptive data module is one of many kinds of data modules, and these data modules share a lot of characteristics. So in code, DescriptiveDataModule would descend from DataModule, and both would have aspects that differ in the 3.0 and 4.0 versions. This spells trouble.

Therefore, it is usually better to use composition instead. So DataModule would have a reference to a DataModuleIssue (where “issue” is used in the sense of the various issues of the S1000D specification, i.e. what I’ve been calling “versions” so far), which the DescriptiveDataModule would inherit. The factory would inject either a DescriptiveDataModuleIssue30 or a DescriptiveDataModuleIssue40 into the DescriptiveDataModule, where DescriptiveDataModuleIssue30 would descend from DataModuleIssue30, and DescriptiveDataModuleIssue40 from DataModuleIssue40.

The idea is to make the Issue classes very bare, dealing only with the stuff that differs between issues, so there is no need for a common base class (although both do implement the same interface). The things that are the same in all issues go into the core model objects (DescriptiveDataModule and DataModule in our example).
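
A bare-bones sketch of that structure, with all class bodies reduced to the minimum (the issue numbers come from the S1000D example; everything else is made up for illustration, and each type would normally live in its own file):

// Issue-specific behavior only; shared behavior lives in the core model classes
interface DataModuleIssue {
  String getIssueNumber();
}

class DataModuleIssue30 implements DataModuleIssue {
  public String getIssueNumber() {
    return "3.0";
  }
}

class DescriptiveDataModuleIssue30 extends DataModuleIssue30 {
  // 3.0-specific behavior of descriptive data modules goes here
}

class DataModuleIssue40 implements DataModuleIssue {
  public String getIssueNumber() {
    return "4.0";
  }
}

class DescriptiveDataModuleIssue40 extends DataModuleIssue40 {
  // 4.0-specific behavior of descriptive data modules goes here
}

// Core model object: identical across issues, delegates issue-specific work
class DataModule {
  private final DataModuleIssue issue;

  DataModule(final DataModuleIssue issue) {
    this.issue = issue;
  }

  public String getIssueNumber() {
    return issue.getIssueNumber();
  }
}

class DescriptiveDataModule extends DataModule {
  DescriptiveDataModule(final DataModuleIssue issue) {
    super(issue);
  }
}

// The single place that decides which issue classes to instantiate
class DataModuleFactory {
  static DescriptiveDataModule newDescriptiveDataModule(final String issue) {
    final DataModuleIssue impl = "3.0".equals(issue)
        ? new DescriptiveDataModuleIssue30()
        : new DescriptiveDataModuleIssue40();
    return new DescriptiveDataModule(impl);
  }
}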

Kanban

Lately, I’ve seen a lot of discussions on Kanban. For those of you who, like me, want to know what all that fuss is about, I collected a couple of links that I will try to merge into a coherent whole below.

So what exactly is Kanban? Literally, it means “visual card”, but that’s not very helpful. This introduction explains that Kanban revolves around a board that visualizes the software development flow.

In fact, flow is a very important concept here. Kanban is a pull system, in which Minimal Marketable Features (MMFs) flow through the development stages when there is capacity available. This contrasts with most Agile methods that push work items into iterations. Also, note that for most Agile methods, those work items (e.g. User Stories) would be smaller than MMFs.

The other big point is that Kanban limits Work In Progress (i.e. the number of MMFs per development stage). This naturally exposes the bottleneck(s) in the flow.
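
As a toy sketch of what a WIP limit boils down to (this isn’t taken from any Kanban tool; the class is made up): a stage only pulls a new MMF when a slot is free, so work queues up in front of whatever stage is always full, and that stage is your bottleneck.

import java.util.concurrent.Semaphore;

// Toy model of one column on a Kanban board with a WIP limit
public class KanbanStage {

  private final String name;
  private final Semaphore slots;

  public KanbanStage(final String name, final int wipLimit) {
    this.name = name;
    this.slots = new Semaphore(wipLimit);
  }

  // Pull an MMF into this stage; blocks until a slot is free
  public void pull(final String mmf) throws InterruptedException {
    slots.acquire();
    System.out.println(mmf + " entered " + name);
  }

  // Finish an MMF, freeing a slot for the next one
  public void finish(final String mmf) {
    System.out.println(mmf + " left " + name);
    slots.release();
  }

}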

This leads us nicely to the main reason to use Kanban: to improve your software development process. Other Agile methods deal with process improvement as well, but Kanban is different from e.g. Scrum.

So, if all this sounds cool and you want to give Kanban a shot, then apparently this is how you should get started. If you do, then you may see these effects. Also, make sure to get into a Kanban state of mind.

Update: here is a great compilation of Kanban resources.