Technology

This morning I saw a tweet from Simon Brown that triggered some odd memories. The tweet was:

simonbrown
And, of course, we no longer live in a world where architects create a “logical view” that developers should go and implement … somehow … twitter.com/simonbrown/sta…
24/10/16 10:50

I remember how the architecture role used to be described in exactly those terms: the architect's role was to dictate how the system should work so that the developers could assemble it. What they probably forgot to ask themselves was which view the developers were supposed to work on. Was that the irrational solution to the logical view the architect had delivered?

When you turn the "logical view" on its head like this, it is not very logical. I know it wasn't meant that way. But language has a tendency to work its way into our minds. It shapes how we think about things and, eventually, how we deal with them, regardless of whether it's rational. When I saw that tweet this morning it became apparent where at least some of the misconceptions around software architecture come from. The way we describe the "logical view" is just a symptom of a deeper problem.

I think it comes down to the idea that we should create things using a top-down structure, using the control structures we are used to. Management wants some kind of software. They get the architect to design it, to create the "logical view". Only then can they entrust it to the developers. They are probably not even aware that most of the unknowns and risks are in the hands of the developers.

But we know software doesn't work like that. I actually think most of us also know that nothing else does either. It's a flawed view of humanity that makes us think good work can be done top down. It becomes especially apparent in complex situations such as software development and research, but I don't think it works well anywhere. The rational way to get good work done is bottom up: letting everyone involved feel responsible and have the mandate to make whatever changes they think are necessary to get the job done.

Oh well, enough of the rant. And thanks Simon for reminding me of the idiocy in some of the software architecture jargon.

I have been at many places, in many teams, where no one has ownership of the build scripts. It's an area where a company can bleed a vast amount of time and money without realising it. It may even be what makes senior members leave in the end, out of frustration with too much inaction.

What is unfortunate is that it is relatively simple to fix. Sure, there are risks associated with any legacy clean-up. But getting the build to work is far simpler than many other tasks in a legacy codebase. I find that the main reason it is not done is that the team can't communicate the value to management (including product owners). Therefore it gets low priority and ends up at the bottom of the backlog.

I have always found that one of the best ways to communicate value to management is in money, by putting forward a business case with a clear ROI (return on investment). When it comes to build scripts this is a pretty easy task. First, ask everyone on the team who uses the build scripts to keep track of how much time they spend on build-related work: how much time do they spend working around a broken or non-functional build? Try to count only time which is spent unnecessarily. Once you have enough data to know what the broken build costs on a weekly/monthly/quarterly basis (whichever suits your needs), it is time to look at how much it will cost to fix the build. I suspect the ratio will be quite scary. The difference is how much you lose each week/month/quarter. Hopefully this will get management's attention and a higher priority.
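As a small illustration of the arithmetic, with numbers that are entirely made up:

```javascript
// Hypothetical figures for the business case -- every number here is invented.
var developers = 5;                // people touching the build
var hoursLostPerDevPerWeek = 2;    // time spent working around the broken build
var hourlyRate = 100;              // cost per developer hour
var costToFixHours = 80;           // estimated effort to fix the build

// What the broken build costs per week, and what the fix costs once.
var weeklyLoss = developers * hoursLostPerDevPerWeek * hourlyRate; // 1000
var costToFix = costToFixHours * hourlyRate;                       // 8000

// How many weeks until the fix has paid for itself.
var paybackWeeks = costToFix / weeklyLoss;                         // 8

console.log('Weekly loss: $' + weeklyLoss);
console.log('The fix pays for itself in ' + paybackWeeks + ' weeks');
```

Everything lost after the payback point is pure saving, which is the number that tends to get management's attention.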

Now that fixing the build is at the top of the backlog, we need to make sure it doesn't fall back into disrepair. This is where ownership comes into play. In a well-functioning, self-organising team this is never an issue. Everyone owns the build. Everyone makes sure the build is just as snappy as everything else.

But if you don't work in a senior or self-organising team, who should be responsible? A good place to start is the team itself. Perhaps someone feels very strongly about it; passion goes a long way. Granted, you can't have just one person who owns the build. That knowledge is far too important not to share between at least two people. But with one lead it's usually easy to follow. Make sure to pair often to distribute knowledge.

If it's difficult to find someone who wants to take responsibility, the team should make sure to pass it around. Share the pain to lessen it for everyone. The main responsibility will, however, always fall on the architect. A working build is part of the non-functional requirements that make up an architect's main concerns. How will you, as the architect, guarantee that the solution you are the master builder of ever reaches its intended users or customers if it cannot be built?

Now, go and fix that build and remove the pain. It is so worth it. And remember, with a working build you'll have so much more time to code, and that is what's really fun!

For some reason I could not get Karma and Angular to work with Jasmine on my machine, and there was precious little debug help. So instead I decided to try out QUnit as the test runner inside Karma. This required some puzzling together of different blogs and other instructions, so I thought I'd put it here for future reference.

First of all I created a new project directory. This is where all commands are executed from and where all paths start.

Next we will need the libraries Angular and karma-qunit. Let's start with Angular. Download it from the website. I chose the zip which includes all the Angular files. Expand the zip into lib/angular.

To install Karma I use npm. Run the following command:
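The exact command is missing from this copy; assuming the package names on the npm registry, installing Karma and the QUnit adapter locally would look something like:

```shell
# A sketch: installs karma and its QUnit adapter into ./node_modules
# (package names assumed, run from the project root)
npm install karma karma-qunit
```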

This will install karma-qunit into the project folder. I prefer this to using a global version, since it happens all too often that tools break when upgraded, and having them locally means that I can control which version is used for each project. The drawback is that in order to run Karma you need to issue the command ./node_modules/karma/bin/karma every time. To make life easier for yourself you can add an alias for Karma like so:
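The original alias line is missing here; a sketch pointing the alias at the project-local binary would be:

```shell
# Assumes karma is run from the project root, where node_modules lives
alias karma='./node_modules/karma/bin/karma'
```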

Put the line at the bottom of your .zshrc or .bash_profile to ensure it's always loaded.

Next we need to configure Karma so that it can find the source code and libraries. This will also allow us to configure which browsers should be captured during testing. To do this we will use Karma's built-in init function. Run the command karma init and follow the instructions on the screen. Look at the config file below for guidance when answering the questions.

Now that we have a configuration file there are a couple of things that may need changing. Open the file karma.conf.js in a text editor and compare it to the example config above. I made the following changes to mine:

  • In the files section there are two expanded values, i.e. values in {}. When specifying a file pattern to Karma you can either use a plain string or expand it into an object to set some properties. Since we will not change the files we serve from Angular, there is no need to set watchers on them, which is why they are expanded. The lines below should be first in the list (make sure that the same file pattern is not in the list twice).

  • Make sure the following files are in the exclude list:
  • If you want to see more output from the running tests you can set the logLevel value to config.LOG_DEBUG.

You should now be able to start Karma from the root of your project. When it starts it will report an error:

Firefox 26.0.0 (Mac OS X 10.9): Executed 0 of 0 ERROR (0.235 secs / 0 secs)
This is as it should be. There are no tests available, and this is reported as an error. You can now leave Karma running. It will detect changes to the files in js/ and test/ and rerun any tests it finds.

Now let's add our first test. To make sure the test runner works with Karma we'll start with a "non test". Since I will be testing a file called controllers.js when I write the real tests, I'll add the test to the file test/controllerTests.js.
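The "non test" itself is missing from this copy; with QUnit's modern API it could be as small as this (older QUnit 1.x would use the global test()/ok() functions instead):

```javascript
// test/controllerTests.js -- a trivially passing "non test"
// whose only job is to prove the QUnit adapter is wired into Karma
QUnit.test('the test runner works', function (assert) {
  assert.ok(true, 'QUnit runs inside Karma');
});
```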

When I save the file Karma detects the change and runs the test. This should show up in your Karma console:

Firefox 26.0.0 (Mac OS X 10.9): Executed 1 of 1 SUCCESS (0.651 secs / 0.001 secs)

Let's add some real test code instead. I am going to create an application called freecycleApp. It will have a controller called NodeController. To load this into the test there is some scaffolding required, so I'll add the following to test/controllerTest.js:
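The scaffolding block is missing from this copy; a sketch of what it typically looks like with angular.injector (using QUnit's beforeEach hook, which QUnit 1.x called setup):

```javascript
// test/controllerTest.js -- boot the app and create the controller before each test
var injector, ctrl, scope;

QUnit.module('NodeController', {
  beforeEach: function () {
    // Load the application module together with Angular's core 'ng' module
    injector = angular.injector(['ng', 'freecycleApp']);
    injector.invoke(function ($rootScope, $controller) {
      scope = $rootScope.$new();  // the data object passed to the controller
      ctrl = $controller('NodeController', { $scope: scope });
    });
  }
});
```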

The module call will allow you to set up the scaffolding required to launch your Angular controller. The injector will load the application, the ctrl is your controller and the scope is the data object passed to the controller.

The first test added to test/controllerTest.js looks like this:
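The test body is missing from this copy; a sketch of such a first test, assuming the controller exposes a list called nodes on the scope (the expected count is invented):

```javascript
QUnit.test('NodeController exposes a list of nodes', function (assert) {
  assert.ok(scope.nodes, 'nodes are defined on the scope');
  assert.equal(scope.nodes.length, 2, 'the controller returned two nodes');
});
```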

The test will fail, complaining that there is no module called freecycleApp so we better create it. The production code goes into the file js/controller.js.
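A sketch of that production code (the node contents are invented for illustration):

```javascript
// js/controller.js -- the angular module and the controller
var freecycleApp = angular.module('freecycleApp', []);

freecycleApp.controller('NodeController', function ($scope) {
  // A hard-coded list for now; a real app would fetch these from a service
  $scope.nodes = [
    { name: 'first node' },
    { name: 'second node' }
  ];
});
```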

This will create the angular module freecycleApp and the controller NodeController, which returns a list of nodes.

This should set you up to start test driving your Angular development using QUnit. Have fun!

The other day a colleague asked if I had ever tried to set up any of my GitHub projects using one of the free CI services available. I hadn't but thought it an interesting thing to do. It proved to be a ridiculously easy task.

I could find two service providers on the market. One is BuildHive from CloudBees, the company behind the CI server Jenkins. The other is called Travis CI and is provided by Travis, a CI company new to me, which also seems to offer some enterprise services in the cloud.

I have used Jenkins a lot in the past and it's a very good CI server. The company behind it, CloudBees, also offers an enterprise service which I think is a very good fit for any company that wants first-class support with their CI and CD needs. However, I chose to go with Travis since they requested fewer permissions to my GitHub account. I have not contacted CloudBees yet to find out why they want write access to my account settings, followers and private repositories when I just want to build public ones, but I'd like to know.

There are three simple steps to go through to set up Travis CI with your GitHub projects:

  1. Log in to Travis CI using your GitHub OpenId.
  2. Enable the project you want to build in the Travis CI control panel.
  3. Add the .travis.yml build script file matching the needs of your build to your project on GitHub.
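For a plain Maven project like this one, the .travis.yml can be minimal (a sketch; Travis's Java builder invokes Maven by default, so no build command needs to be spelled out):

```yaml
# .travis.yml -- minimal configuration for a Maven build
language: java
```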

It is literally this simple. Granted, the project I selected has no special requirements. It builds nicely (well, actually it doesn't, since I have test failures) with just a simple mvn install command. But I think this is remarkably easy and something that everyone should try, if for no other reason than to have the experience of how simple some things in software can be when done well.

Thanks Travis CI for such a spotless integration!

I just started a new assignment and they use Subversion for version control. It has been years since I last worked with Subversion, and I have been using git exclusively since. So I decided that rather than going back to SVN and getting confused and frustrated, I will use git-svn.

After two days of struggling to check out the larger, and more important, repositories using git-svn I was starting to doubt the value of this. Sure, the reason it takes so long is that git fetches the complete history while SVN only fetches the last revision. But will this wait really pay off?

I then realised two things. Firstly, running three clone processes at the same time will choke the hard drive, leaving them hanging while waiting for each other. Running one at a time makes all of them complete faster.

Secondly, it's a good idea to use the flag --log-window-size. This flag tells git-svn how many revisions it should get each time it fetches history from the SVN server. If the SVN repository has a long history, the default value of 100 is way too low. I increased it to 1000 and this greatly improved the performance. However, reading about the flag, there is a warning: if this number is set too high the SVN server may consume too much memory, so don't go overboard.
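Put together, a clone command using the flag might look like this (the repository URL and target folder are hypothetical):

```shell
# Fetch 1000 revisions per request instead of the default 100
git svn clone --log-window-size 1000 http://svn.example.com/repo/trunk my-repo
```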

Another thing worth knowing: even if the target folder seems empty after a failed or interrupted git svn clone, there is usually a .git folder in it. This contains the work performed up to the point when the task was interrupted. It is possible to resume using git svn fetch rather than starting from scratch.

Edited to add:
Since the source code has some odd ways of adding auto-generated code, i.e. not into Maven's target folder but straight into the source tree, I tried out the command git svn create-ignore. This was, however, a mistake, at least for my repository. It added a new .gitignore file in almost every folder in the project, all looking pretty much the same. It was quite a hassle to get rid of them, so I'd say it's easier to craft your ignore file manually than to use the tool.