Jeff's Blog

Musings about software development, Java, OO, agile, life, whatever.




Wednesday, January 20, 2010 
Tradeoffs

This graph, intended to show the differences in outcome between using TDD and not, is a sketch of my observations combined with information extrapolated from other people's anecdotes. One data point is backed by third-party research: studies show that it takes about 15% longer to produce an initial solution using TDD. Hence the graph shows the increased amount of time under TDD to get to "done."

The tradeoff mentioned in the title is that TDD takes a little longer than code 'n' fix to get to "done." TDD requires creating code incrementally, sometimes replacing it with incrementally better solutions--a process that often results in a smaller overall amount of code.

When doing TDD, the time spent to go from "done" to "done done" is minimal. When doing code 'n' fix, this time is an unknown. If you're a perfect coder, it is zero time! With some sadness, I must report that I've never encountered any perfect coders, and I know that I'm not one. Instead, my experience has shown that it almost always takes the code 'n' fixers longer to get to "done done" than they optimistically predict.

You'll note that I've depicted the overall amount of code in both graphs to be about the same. In a couple of cases now, I've seen a team take a TDD mentality, apply legacy test-after techniques, and subsequently refactor a moderate-sized system. In both cases they drove the amount of production code down to less than half the original size, while regularly adding functionality at the same time. But test code is usually as large as, if not a bit larger than, the production code. In both of these "legacy salvage" cases, the total amount of code ended up being more or less a wash. Of course, TDD provides a large bonus--it produces tests that verify, document, and allow further refactoring.

Again, this graph is just a rough representation of what I've observed. A research study might be useful for those who insist on one.






Wednesday, January 13, 2010 
Roaming Keymap Profiles

When pairing, I often learn little shortcuts and tips that I might never pick up otherwise (even though I'm the compulsive sort who comprehensively reads things like the entire list of commands, just to find some new trick). I try to impart at least as many tips as I pick up. Pairing makes it happen, as there's little better than an insistent partner who keeps reminding you to "just hit Ctrl-Alt-v" (introduce variable in IDEA, an essential refactoring). In the absence of a pair, IDEA's Key Promoter plugin helps a bit.

As someone who pairs across many teams, I regularly encounter different IDEs and programmer editors. Usually teams standardize on one tool, but not long ago I worked in a C++ team where there were 8, count 'em, 8 different editors in play. Currently I am working with a team where some people are piloting Eclipse, while the rest of the team uses IDEA.

Give me a few minutes and I can ramp up OK, but switching IDEs throughout the day will make just about anyone feel stupid. Ctrl-d, duplicate line in IDEA. Switch to Eclipse, oops, Ctrl-d, I just deleted a line. Nope, it's Alt-Ctrl-Down to duplicate a line. Move a line up: Ctrl-Shift-Up in IDEA. Wait, though, I can't remember the Eclipse counterpart, but I do have muscle memory, so I now have to go try it... wait for me... it's Alt-Up. And so on.

Why don't I just change the keymaps so that they're more in sync with each other? Or why not use, say, a vi keymap everywhere? The problem is that I'm pairing with others, and so the simplest thing is to go along with the IDE defaults, which is what most programmers use.

Here's an item from my wish list and/or backlog of things to work on: I'd love it if IDEs supported a standardized roaming keymap protocol, as well as a simple mechanism for toggling profiles. I would specify a URL from which the IDE would download my profile. From there on, a simple keystroke would toggle between the active keymap profile and mine, in order to expedite pairing.
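
To make the idea a bit more concrete, here's a minimal sketch of what the client side might look like, assuming the profile is nothing fancier than a properties file served from the programmer's URL. Everything here is hypothetical--none of it is a real IDE API:

import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

// Hypothetical sketch: a roaming keymap profile as a properties file
// mapping abstract editor actions to keystrokes.
public class RoamingKeymapProfile {
   private final Properties bindings = new Properties();

   public static RoamingKeymapProfile download(String url) throws Exception {
      RoamingKeymapProfile profile = new RoamingKeymapProfile();
      InputStream in = new URL(url).openStream();
      try {
         profile.bindings.load(in); // e.g. duplicate.line=ctrl D
      } finally {
         in.close();
      }
      return profile;
   }

   public String keystrokeFor(String action) {
      return bindings.getProperty(action);
   }
}

A plugin would translate these entries into the IDE's own keymap model; the toggle keystroke would simply swap the active keymap with the downloaded one and back.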

I've been hoping to see more support in IDEs for things like TDD and pairing. It's coming, albeit slowly. Any plugin fanatics out there who want to give this one a go?






Wednesday, December 16, 2009 
Framing Your Design

A while back, I worked with a team in a cube bullpen to help mentor them primarily in test-driven development (TDD). Their goal was to deliver a decent-sized web product--not huge, not trivial.

The first time I paired with the programmer nearest the bullpen opening, I noticed a large framed document on a nearby cube wall, outside the bullpen. All I could tell from that distance was that it was a highly detailed diagram of some sort.

I asked my pair about the document.
"It's our class model."
"How do you use it?"
"We don't. It's out of date."
Of course it was. We went back to building code.

When we took a break, I walked to the diagram. Sure enough, it was a UML class model--with gobs of detail. Public methods, attributes, private methods, annotations on the associations, attribute types, return types, and so on, all specified in what looked like a couple hundred classes. Arrows all over the place. As I previously mentioned, the document was framed (albeit cheaply), which meant that the model itself was enshrined in a protective layer of glass (plastic?).

The framed model sure looked pretty! And it no doubt looked quite impressive to any non-technical observer (such as a vice president): "They built all that!"

Of course, the team that actually produced the detailed model no longer found it useful. During the remainder of my engagement, I never once saw a developer look at the diagram. And most amusingly, when I took my one visit to inspect the model, a closer look revealed that the programmers had briefly attempted to keep it up to date: various scribbles and attempted modifications were written directly on the glass.

Your software is special, but your design models are not, and they change rapidly. Don't enshrine your design.






Thursday, August 06, 2009 
NetBeans

I was very excited to start using Infinitest once again. But as things seem to go with me, I ended up moving on to something else that dominated my mindshare--in this case, a new project where I am using NetBeans. Why NetBeans? Because someone set up the project to use Maven, and all I've heard is that NetBeans is magical when it comes to integrating with Maven (and Eclipse, not so much).

As I would expect, both Maven and Maven support in NetBeans are really nice as long as they're working well. But Maven seems to adhere to the 80/20 rule. It's great 80% of the time, up until you have to do something a little out of the ordinary, or when things aren't working like they're supposed to. Then the numbers reverse--you spend 80% of your time trying to work on the remaining 20%. In the case of Maven, I'm struggling with getting coverage reports against multiple modules, and it's also presenting me with a defect (operator error?) where NetBeans reports half the Maven modules as "badly formed Maven projects." Google searches, and assistance from others with more Maven experience than me, have not been very fruitful.

This is the third time I've used Maven, and I'm warming up to it more each time, but it's still been a frustration. Part of the frustration is always lack of knowledge, particularly around the behind-the-scenes workings that you need to know when you hit the 20% esoterica. Thanks to those who have helped!

On the other hand, this is also the third time I've used NetBeans heavily. I'm not really warming to it much. Granted, part of my disappointment with NetBeans stems from the same problem. The best way to learn not to hate something is to pair with someone who knows it well, and I've not been able to get much pairing time.

Other than the Maven support, NetBeans seems much weaker than Eclipse. It's slower (e.g. a Ctrl-O to search for types takes many seconds to return class names, compared to Eclipse's Ctrl-Shift-t, which is almost immediate). It's flakier (e.g. half the time, Ctrl-F6 to run a test pops up its JUnit GUI, the other half the time I see only the console output; also, the formatting mechanism appears to be broken). It seems to be the last of the three main Java IDEs for which people write plugins (e.g. Infinitest is only available for Eclipse and IDEA, not NetBeans). And it's just missing important things (e.g. there is no inline method refactoring until NetBeans 7.0).

I can live without every last bell and whistle. But it's always disappointing to see other people whistle in cool little ways, like Kerievsky's nice trick with inline method, and not get to play along.



Easier Testing Using the SRP

This simple file monitor class notifies listeners when a watched file is modified:

import java.io.*;
import java.util.*;

public class FileMonitor {
 private Set<FileChangeListener> listeners =
    new HashSet<FileChangeListener>();
 private int duration;
 private File file;
 private long lastModified;

 public FileMonitor(String filename, int duration) {
     this.file = new File(filename);
     lastModified = file.lastModified();
     this.duration = duration;
 }

 public void start() {
     boolean isDaemon = true;
     long initialDelay = duration;
     new Timer(isDaemon).schedule(
        new FileMonitorTask(), initialDelay, duration);
 }

 public void addListener(FileChangeListener listener) {
     listeners.add(listener);
 }

 class FileMonitorTask extends TimerTask {
     @Override
     public void run() {
         if (file.lastModified() > lastModified) {
             lastModified = file.lastModified();
             for (FileChangeListener listener: listeners) {
                 listener.modified();
             }
         }
     }
 }
}

A FileMonitor schedules a TimerTask that just checks the modified date from time to time, and compares it to the last modified date. The code above seems like a typical and reasonable implementation. FileMonitorTask could be an anonymous inner class, of course.
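
The FileChangeListener interface isn't shown above; given the single call to modified(), it's presumably nothing more than a one-method callback:

public interface FileChangeListener {
   void modified();
}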

I spent more time than I would have preferred test-driving this solution, since I made the mistake of test-driving it as a single unit. These tests had to deal with the vagaries of threading due to the Timer scheduling.

import java.io.*;
import java.util.*;
import org.junit.*;
import static org.junit.Assert.*;

public class FileMonitorTest {
 private static final String FILENAME = "FileMonitorTest.properties";
 private static final int MONITOR_DURATION = 1;
 private static final int TIMEOUT = 50;
 private FileMonitor monitor;
 private File file;
 private List<FileChangeListener> fileChangeNotifications;
 private Thread waitThread;

 @Before
 public void initializeListenerCounter() {
     fileChangeNotifications = new ArrayList<FileChangeListener>();
 }

 @Before
 public void createFileAndMonitor() throws IOException {
     FileUtils.writeFile(FILENAME, "property=originalValue");
     file = new File(FILENAME);
     monitor = new FileMonitor(FILENAME, MONITOR_DURATION);
 }

 @After
 public void deleteFile() {
     FileUtils.deleteIfExists(FILENAME);
 }

 @Test
 public void shouldNotifyWhenFileModified() throws Exception {
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(1);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();
     assertEquals(1, numberOfFileChangeNotifications());
 }

 @Test
 public void shouldSupportMultipleListeners() throws Exception {
     monitor.addListener(createCountingFileChangeListener());
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(2);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();
     assertEquals(2, numberOfFileChangeNotifications());
     assertAllListenersDistinctlyNotified();
 }

 @Test
 public void shouldOnlyReportOnceForSingleModification()
         throws InterruptedException {
     // slow--must wait on timeout. Better way?
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(2);

     monitor.start();

     alterFileToChangeLastModified();

     joinWaitThread();
     assertEquals(1, numberOfFileChangeNotifications());
 }

 @Test
 public void shouldReportMultipleTimesForMultipleModification()
     throws InterruptedException {
     monitor.addListener(createCountingFileChangeListener());
     waitForFileChangeNotifications(1);

     monitor.start();

     alterFileToChangeLastModified();
     joinWaitThread();

     waitForFileChangeNotifications(2);
     alterFileToChangeLastModified();
     joinWaitThread();

     assertEquals(2, numberOfFileChangeNotifications());
 }

 private FileChangeListener createCountingFileChangeListener() {
     return new FileChangeListener() {
         @Override
         public void modified() {
             fileChangeNotifications.add(this);
         }
     };
 }

 private int numberOfFileChangeNotifications() {
     return fileChangeNotifications.size();
 }

 private void waitForFileChangeNotifications(final int expected) {
     waitThread = new Thread(new Runnable() {
         @Override
         public void run() {
             while (numberOfFileChangeNotifications() != expected) {
                 sleep(1);
             }
         }
     });
     waitThread.start();
 }

 private void sleep(int ms) {
     try {
         Thread.sleep(ms);
     } catch (InterruptedException exception) {
         fail(exception.getMessage());
     }
 }

 private void alterFileToChangeLastModified() {
     file.setLastModified(file.lastModified() + 1000);
 }

 private void joinWaitThread() throws InterruptedException {
     waitThread.join(TIMEOUT);
 }

 private void assertAllListenersDistinctlyNotified() {
     Set<FileChangeListener> notifications =
         new HashSet<FileChangeListener>();
     notifications.addAll(fileChangeNotifications);
     assertEquals(fileChangeNotifications.size(), notifications.size());
 }
}

Wow, that's a bit longer than I would have hoped. Yes, there are better ways to test the threading. And there may be better threading mechanisms to implement the monitor, although I think the Timer is pretty simple. I don't do enough threading, so I usually code a "familiar" solution (i.e. something I already know well) initially, and then refactor into a more expressive and concise solution.
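
As an aside, FileUtils here is a small homegrown helper, not a library class, and I haven't shown it. A minimal sketch of what it might look like:

import java.io.*;

// Plausible stand-in for the FileUtils helper the tests reference.
public class FileUtils {
   public static void writeFile(String filename, String contents)
         throws IOException {
      FileWriter writer = new FileWriter(filename);
      try {
         writer.write(contents);
      } finally {
         writer.close();
      }
   }

   public static void deleteIfExists(String filename) {
      File file = new File(filename);
      if (file.exists())
         file.delete();
   }
}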

I'm pretty sure this test is flawed. But rather than invest in fixing it, I reconsidered my design (by virtue of wanting an easier way to test it). The SUT is really two classes, the monitor and the task. Each one can easily be tested separately, in isolation, as a unit. Sure, I want to see the threading in action, but perhaps that's better relegated to an integration test (which might also double as an acceptance test).

I decided to change the solution to accommodate more focused and more robust tests. The file monitor tests are pretty much what was there before (with some small additional refactorings), except that there's now no concern over threading--I just test the run method:

import java.io.*;
import org.junit.*;
import static org.mockito.Mockito.*;
import org.mockito.*;

public class FileMonitor_TaskTest {
 private static final String FILENAME = "FileMonitorTest.properties";
 private FileMonitor.Task task;
 @Mock
 private FileChangeListener listener;

 @Before
 public void initializeMockito() {
     MockitoAnnotations.initMocks(this);
 }

 @BeforeClass
 public static void createFile() throws IOException {
     FileUtils.writeFile(FILENAME, "property=originalValue");
 }

 @AfterClass
 public static void deleteFile() {
     FileUtils.deleteIfExists(FILENAME);
 }

 @Before
 public void createTask() {
     task = new FileMonitor.Task(FILENAME);
 }

 @Test
 public void shouldNotNotifyWhenFileNotModified() {
     task.addListener(listener);
     task.run();
     verify(listener, never()).modified();
 }

 @Test
 public void shouldNotifyWhenFileModified() throws Exception {
     task.addListener(listener);
     changeFileLastModifiedTime();

     task.run();

     verify(listener, times(1)).modified();
 }

 @Test
 public void shouldSupportMultipleListeners() throws Exception {
     task.addListener(listener);
     FileChangeListener secondListener = mock(FileChangeListener.class);
     task.addListener(secondListener);

     changeFileLastModifiedTime();
     task.run();

     verify(listener, times(1)).modified();
     verify(secondListener, times(1)).modified();
 }

 @Test
 public void shouldOnlyReportOnceForSingleModification() throws InterruptedException {
     task.addListener(listener);

     task.run();
     changeFileLastModifiedTime();
     task.run();
     task.run();

     verify(listener, times(1)).modified();
 }

 @Test
 public void shouldReportMultipleTimesForMultipleModification() throws InterruptedException {
     task.addListener(listener);

     task.run();
     changeFileLastModifiedTime();
     task.run();
     changeFileLastModifiedTime();
     task.run();

     verify(listener, times(2)).modified();
 }

 private void changeFileLastModifiedTime() {
     File file = new File(FILENAME);
     file.setLastModified(file.lastModified() + 1000);
 }
}

I introduced Mockito to provide a simple verifying stub for the listener. I suppose I should also stub out interactions with File, but I'll incur the speed penalty for now.

Next, I needed to test the remaining FileMonitor code. Doing so involved proving that it creates and schedules a task appropriately using a timer, and that it handles listeners appropriately.

import xxx.CollectionUtil;
import java.util.Timer;
import org.junit.*;
import org.mockito.*;
import static org.mockito.Mockito.*;

public class FileMonitorTest {
 @Mock
 private Timer timer;

 @Before
 public void initializeMockito() {
     MockitoAnnotations.initMocks(this);
 }

 @Test
 public void shouldScheduleTimerWhenStarted() {
     String filename = "filename";
     long duration = 10;
     FileMonitor monitor = new FileMonitor(filename, duration) {
         @Override
         protected Timer createTimer() {
             return timer;
         }
     };
     monitor.start();
     verify(timer).schedule(any(FileMonitor.Task.class), eq(duration), eq(duration));
 }

 @Test
 public void shouldDelegateListenersToTask() {
     FileChangeListener listener = mock(FileChangeListener.class);
     FileMonitor monitor = new FileMonitor("", 0);
     monitor.addListener(listener);

     CollectionUtil.assertContains(monitor.getTask().getListeners(), listener);
 }
}
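
CollectionUtil.assertContains is another small homegrown helper, also not shown; presumably it amounts to something like:

import java.util.Collection;
import static org.junit.Assert.assertTrue;

// Presumed implementation of the assertContains helper used above.
public class CollectionUtil {
   public static <T> void assertContains(Collection<T> collection, T item) {
      assertTrue("expected collection to contain " + item,
         collection.contains(item));
   }
}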

The design of the production code had to change based on my interest in better tests. A number of small, incremental refactoring steps led me to this solution:

import java.io.*;
import java.util.*;

public class FileMonitor {
 private final long duration;
 private Task task;

 public FileMonitor(String filename, long durationInMs) {
     task = new Task(filename);
     this.duration = durationInMs;
 }

 public void start() {
     long initialDelay = duration;
     createTimer().schedule(task, initialDelay, duration);
 }

 protected Timer createTimer() {
     final boolean isDaemon = true;
     return new Timer(isDaemon);
 }

 Task getTask() {
     return task;
 }

 public void addListener(FileChangeListener listener) {
     task.addListener(listener);
 }

 static class Task extends TimerTask {
     private final File file;
     private long lastModified;
     private Set<FileChangeListener> listeners = new HashSet<FileChangeListener>();

     Task(String filename) {
         this.file = new File(filename);
         lastModified = file.lastModified();
     }

     public void addListener(FileChangeListener listener) {
         listeners.add(listener);
     }

     Set<FileChangeListener> getListeners() {
         return listeners;
     }

     @Override
     public void run() {
         if (file.lastModified() > lastModified) {
             lastModified = file.lastModified();
             for (FileChangeListener listener: listeners) {
                 listener.modified();
             }
         }
     }
 }
}

Well, most interestingly, I no longer have any tests that must deal with additional threads. I test that the monitor delegates appropriately, and I test that the task works properly when run. I still want the integration tests, but I think they can be relegated to a higher, acceptance level test for the story:

reload properties at runtime
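
A sketch of what that higher-level test might look like--one slow, thread-dependent test instead of many (the durations and timeout here are guesses, and FileUtils is the same helper as before):

import java.io.File;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Possible integration test for the story. The sleep is crude; a
// polling wait with a timeout would be more robust.
public class FileMonitorIntegrationTest {
   private static final String FILENAME =
      "FileMonitorIntegrationTest.properties";

   @Test
   public void notifiesListenerWhenFileChangesOnDisk() throws Exception {
      FileUtils.writeFile(FILENAME, "property=originalValue");
      try {
         final boolean[] notified = { false };
         FileMonitor monitor = new FileMonitor(FILENAME, 10);
         monitor.addListener(new FileChangeListener() {
            @Override
            public void modified() {
               notified[0] = true;
            }
         });
         monitor.start();

         File file = new File(FILENAME);
         file.setLastModified(file.lastModified() + 1000);

         Thread.sleep(100);
         assertTrue(notified[0]);
      } finally {
         FileUtils.deleteIfExists(FILENAME);
      }
   }
}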

Otherwise, at this point, I have a high level of confidence in my code. I know the Timer class works, and I know my two classes work, and I've also demonstrated to myself that they all work together. I'm ready to move on.

Some additional relevant points:

  • Threading is tough to get right, and it can be even tougher to test.
  • My final solution is a few more lines of code, in support of my interest in more focused testing. I'm OK with the additional code because of the flexibility the tests give me; more often than not, they let me easily eliminate duplication elsewhere.
  • It's hard to overdo the SRP. Far more classes err in the direction of too many responsibilities than too few.
  • The original solution, probably fine otherwise, is what you might expect from someone uninterested in TDD. It works (I think), but it's rigid code--code that is more costly to change.






Monday, June 29, 2009 
Infinitest

I recently re-downloaded Ben Rady's Infinitest, having finally gotten back into some extended Java development work. Infinitest runs tests continually, so that you need not remember to proactively run them.

The last time I played with Infinitest, perhaps over two years ago, there was no Eclipse plugin. As I recall, on save, tests would run in a separate JUnit window external to Eclipse.

At the time, I had had a brief chance to sit with Ben and look at some of the code while at Agile 2007. I thought Infinitest was a cool idea. But I didn't use it extensively, partly because I'd begun to move into lots of C++ work, but also because I had a hard time fitting it into my very ingrained test-code-refactor cycle habits. Having to deal with an external window seemed disruptive.

Infinitest today is another matter entirely. Plugins have been developed for Eclipse and IntelliJ. In Eclipse, tests run as soon as you save. But they don't run in the JUnit view--instead, test failures are reported in the Problems window, just like compilation errors. Immediately, this means that Infinitest is not disruptive, but is instead built into the normal flow of development.

Showing test failures on save is an extremely important transition, one that will help change the outlook on what TDD means. No longer are tests an optional afterthought, something a developer might or might not remember, or choose, to run. With Infinitest, you don't think at all about running the tests; they just run themselves. Tests are continual gating criteria for the system: If the code doesn't pass all of its tests, Eclipse continues to point out the problem. This forces the proper mentality for the TDDer: The system is not complete unless the tests pass.

Infinitest uses a dependency tree to determine which tests must be run on save. The nice part for a die-hard TDDer like me is that my tests--my saves--will execute very quickly. This is because the die-hard TDDer knows how to design systems so that the unit tests run fast. I do wonder, however, about what this feels like for the developer on a system with lots of poor dependencies and slow-running tests. I'll have to experiment and see how that feels.

I hate to dredge up an annoying buzzphrase, but Infinitest is really a no-brainer. I've only used it regularly for a few days now, but I can't see doing TDD in Java without it.





Thursday, June 25, 2009 
The Bowling Game

Sometimes the "toy" examples we build lead to something real. A few years ago, I wrote a 13-part article series for Informit, demonstrating TDD using the game of hold 'em poker. Shortly thereafter, I received a call from a company looking for Java developers to help them build an automated poker application for in-casino tables. The pitch was that this would eliminate the need for costly human dealers; it would also speed up gameplay, allowing more hands and thus increasing the house's rake. Someone at the small startup had read my articles and figured I at least knew the domain!

That work never materialized. Today I received an email (probably broadcast to wide distribution) from a headhunter with an interesting need: "We’re looking for someone in DFW with C# and C++ development experience who loves to bowl! They will be creating software for the sport of bowling and should have an interest in OR knowledge of the sport…" Ha! I don't know that this is for an agile shop, but is there a hardcore agile developer out there who hasn't coded the bowling game at least once? I wonder if that qualifies. :-)

I'd alert Ron, but I'm sure he wouldn't consider moving to Dallas. Well, if you're interested, let me know and I'll forward the recruiter info. Or if you're looking for work in an agile shop elsewhere, check out the agile jobs list.





Wednesday, June 17, 2009 
Cute Little Abstractions

I'm refactoring several classes of over 2000 lines each. These are classic god-classes that mediate a handful of stubby little data-centric classes around them. I reduced the first, primary domain class by over 1100 lines, to under 1500, by simply pushing responsibilities around. Adding tests after the fact (using WELC techniques, of course) gave me the confidence to refactor. In just a few hours, I've been able to completely eliminate about 700 lines so far out of the 1100 that were shifted elsewhere. I'm loving it!

After cleaning up some really ugly code, I ended up with better (not great) code:

private boolean containsThreeOptionsSameType() {
   Map<String, Integer> types = new HashMap<String, Integer>();
   for (Option option : options)
      increment(types, option.getType());
   return count(types, Option.ALPHA) >= 3 || count(types, Option.BETA) >= 3
        || count(types, Option.GAMMA) >= 3 || count(types, Option.DELTA) >= 3;
}

private int count(Map<String, Integer> map, String key) {
   if (!map.containsKey(key))
      return 0;
   return map.get(key);
}

private void increment(Map<String, Integer> map, String key) {
   if (!map.containsKey(key))
      map.put(key, 1);
   else
      map.put(key, map.get(key) + 1);
}

I could clean up the complex conditional on the return statement (perhaps calling a method to loop through all types; see the sketch at the end of this post). Also, the count and increment methods contain a bit of duplication.

More importantly, the count and increment methods represent a missed abstraction. This is very common: We tend to leave little, impertinent, SRP-snubbing, OCP-violating methods lying about, cluttering up our classes.

(Aside: I'm often challenged during reviews about exposing things so I can test them. Long-term, entrenched developers are usually very good about blindly following the simple rule about making things as private as possible. But they're perfectly willing to throw OO concept #1, cohesion, completely out the window.)

I decided to test-drive the soon-to-be-no-longer-missing abstraction, CountingSet. I ended up with an improved implementation of increment by eliminating the redundancy I spotted above:

import java.util.*;

public class CountingSet<T> {
   private Map<T, Integer> map = new HashMap<T, Integer>();

   public int count(T key) {
      if (!map.containsKey(key))
         return 0;
      return map.get(key);
   }

   public void add(T key) {
      map.put(key, count(key) + 1);
   }
}
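
The tests I test-drove it with would look something like this (a representative sketch, not the originals):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Reconstructed tests for CountingSet--the originals aren't shown.
public class CountingSetTest {
   private CountingSet<String> set = new CountingSet<String>();

   @Test
   public void countIsZeroForKeyNeverAdded() {
      assertEquals(0, set.count("never"));
   }

   @Test
   public void countsASingleAdd() {
      set.add("key");
      assertEquals(1, set.count("key"));
   }

   @Test
   public void countsMultipleAdds() {
      set.add("key");
      set.add("key");
      assertEquals(2, set.count("key"));
   }
}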

Some developers would be appalled: "You built a cruddy little class with just four lines of real code? And for only one client?" And further: "It's only a subset of a counting set--where are all the other methods?"

Sure thing, buddy. Here's my client:

private boolean containsThreeOptionsSameType() {
   CountingSet<String> types = new CountingSet<String>();
   for (Option option : options)
      types.add(option.getType());
   return types.count(Option.ALPHA) >= 3 || types.count(Option.BETA) >= 3
      || types.count(Option.GAMMA) >= 3 || types.count(Option.DELTA) >= 3;
}

My client now only contains intent. The small ugliness with the parameterized map and boxing is now "information hidden." I also have a simple reusable construct--I know I've had a similar need several times before. As for building and/or using a fully-functional CountingSet, there's something to be said for creating "adapter" classes with the smallest possible, and most meaningful, interface.
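
And if I do get around to that complex conditional, one possibility is the loop-through-all-types method I mentioned (ALL_TYPES is a hypothetical constant; the threshold of 3 comes from the original conditional):

private static final String[] ALL_TYPES = {
   Option.ALPHA, Option.BETA, Option.GAMMA, Option.DELTA };

private boolean containsThreeOptionsSameType() {
   CountingSet<String> types = new CountingSet<String>();
   for (Option option : options)
      types.add(option.getType());
   for (String type : ALL_TYPES)
      if (types.count(type) >= 3)
         return true;
   return false;
}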

Good enough for now. Don't hesitate to create your own cute little abstractions!


