Diocletian’s Ruin: Opportunity Assessment

Intercepted Delaque House transmission:

Following a population resettlement, the Ruin is a sparsely settled offshoot of the hive, ripe for exploitation. The offshoot is connected to the hive proper by two main ducts, but who knows what tunnels run through the underhive.

Why was the population resettled? Where are the old hivers? Even the enforcers do not know. Now other houses are sending scouts to examine the opportunities. We must not hesitate. We recommend deploying expendable assets until the situation in the Ruin becomes clear.

Posted in Uncategorized

2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 2,900 times in 2015. If it were a cable car, it would take about 48 trips to carry that many people.

Posted in Uncategorized

Testing module pattern javascript with Jasmine & Karma

Testing module pattern (http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html) JavaScript browser code using test runners / build systems such as grunt or gulp is difficult. They seem to expect modules to be declared as Node modules. This is annoying when you have client code that uses module pattern syntax and you do not wish to change it immediately. I struggled to find a simple way to test some module pattern code in Jasmine with a test runner that could also watch my files. Karma (http://karma-runner.github.io/0.12/index.html) appears to do exactly that.

Module pattern

var myModule = (function(mod){
// Module code
return mod;
})(myModule || {});
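One property of this pattern worth calling out is loose augmentation: because the module is passed in as myModule || {}, the same module can be built up across several files loaded in any order, which is exactly the situation a list of script tags creates. A runnable sketch (the names are illustrative):

```javascript
// file one: create the module (or augment it, if another file got there first)
var myModule = (function (mod) {
  var count = 0;                          // private state, hidden in the closure
  mod.increment = function () { return ++count; };
  return mod;
})(myModule || {});

// file two, loaded later (or earlier — order does not matter):
var myModule = (function (mod) {
  mod.greet = function (name) { return 'hello ' + name; };
  return mod;
})(myModule || {});

console.log(myModule.increment());      // → 1
console.log(myModule.greet('karma'));   // → hello karma
```

Note that private state is per-file: the closure in file one is not visible to file two, which is usually what you want.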

Karma provides a watcher that can watch Jasmine tests alongside application code. Because the files are loaded in a browser, code using the module pattern can be tested with an automatic watcher.

The documentation for Karma is on the site. Files are loaded as <script> tags in the order that they are configured. E.g.

 // list of files / patterns to load in the browser
 files: [
 'app/libs/*.js',
 'app/scripts/*.js',
 'tests/specs/*.js'
 ],

 // list of files to exclude
 exclude: [
 ],

How it works

While there’s a lot more going on under the hood, the basic premise is that, without using require.js, Karma serves up a page that injects the scripts it finds in the files configuration (in the order they are specified) into the body of the page as <script> tags. This means that if your scripts work in a browser they are very likely to work in Karma. Fire up the debug Karma window and view source to see what is going on.

Yes, we could move to require.js and AMD, but if you have older JavaScript or other reasons to use the module pattern then Karma is a good fit.

It is also possible to run Karma from gulp so you can build a fuller workflow.
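A sketch of such a gulp task, assuming a recent Karma that exports a Server class (the 0.12-era releases linked above used require('karma').server.start instead):

```javascript
// gulpfile.js — run the Karma suite once as part of a build (sketch)
var gulp = require('gulp');
var Server = require('karma').Server;

gulp.task('test', function (done) {
  new Server({
    configFile: __dirname + '/karma.conf.js',
    singleRun: true     // run once and exit, rather than watching
  }, done).start();
});
```

Drop singleRun to keep the watcher behaviour inside the gulp task.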

Posted in Uncategorized

Brightness controls on Dell XPS 13 with Ubuntu 13.04

I am running a Dell XPS 13 with Ubuntu 13.04. Overall the installation is very nice, but one niggle is that the brightness controls do not always work. The following scripts have alleviated the problem:
(From the thread in the Launchpad comments for the bug: https://bugs.launchpad.net/ubuntu/precise/+source/linux/+bug/954661)

To fix the brightness controls, add the following to /etc/rc.local:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

echo 0 > /sys/class/backlight/intel_backlight/brightness
exit 0

And to restore the backlight when resuming from suspend, add the following script to /etc/pm/sleep.d:

#!/bin/sh
# to fix backlight issues on resume

case "${1}" in
resume)
echo 0 > /sys/class/backlight/intel_backlight/brightness
;;
esac

Posted in Uncategorized

Auto Hibernate in Ubuntu 13.04

Make your Ubuntu notebook hibernate after a period of sleep. The Ubuntu forums go into depth, but the script from the very helpful posters is:

#!/bin/bash
# Script name: /etc/pm/sleep.d/0000rtchibernate
# Purpose: Auto hibernates after a period of sleep
# Edit the "autohibernate" variable below to set the number of seconds to sleep.
curtime=$(date +%s)
autohibernate=7200
echo "$curtime $1" >>/tmp/autohibernate.log
if [ "$1" = "suspend" ]
then
 # Suspending. Record current time, and set a wake up timer.
 echo "$curtime" >/var/run/pm-utils/locks/rtchibernate.lock
 rtcwake -m no -s $autohibernate
fi
if [ "$1" = "resume" ]
then
 # Coming out of sleep
 sustime=$(cat /var/run/pm-utils/locks/rtchibernate.lock)
 rm /var/run/pm-utils/locks/rtchibernate.lock
 # Did we wake up due to the rtc timer above?
 if [ $(($curtime - $sustime)) -ge $autohibernate ]
 then
 # Then hibernate
 rm /var/run/pm-utils/locks/pm-suspend.lock
 /usr/sbin/pm-hibernate
 else
 # Otherwise cancel the rtc timer and wake up normally.
 rtcwake -m no -s 1
 fi
fi

Then

  1. Place the file in /etc/pm/sleep.d.
  2. Name it 0000rtchibernate so that it executes first.

See: askubuntu.com/questions/12383/how-to-automatically-from-suspend-into-hibernate

Posted in Uncategorized

aapt error for Android Studio on Ubuntu 13.04

While installing Android Studio on Ubuntu 13.04 and trying to compile a “Hello World” program, the following error occurred:

android-apt-compiler: Cannot run program ".../android-sdk-linux/platform-tools/aapt": java.io.IOException: error=2, No such file or directory

The reason is that aapt relies on some 32-bit libraries while Android Studio is running on a 64-bit machine. To fix it, install the 32-bit libraries with the following:

sudo apt-get install ia32-libs

Now, the program compiles and runs.

Posted in Android, Linux, Programming, Tools

Approaches to refactoring using Gilded Rose kata

As part of our weekly hour’s worth of group practice we have been working on the Gilded Rose kata. We worked with this kata for 5–6 weeks with different pairs. We found it a very interesting kata with a number of lessons.

Gilded Rose

The Gilded Rose kata (we chose the C# version) is a kata where there is some existing code, with certain restrictions, which needs changing. The code tracks the price, quality and sell-by dates of fantasy items for sale in the “Gilded Rose” store as they change over a number of days. The required change is that a new class of items, “Legendary items”, with their own behaviour for price, quality and sell-by date, must be added. By most measures the existing code smells, and it is not exactly clear where changes should be made.

Kata overview

With this kata our aims were to practice controlled refactoring. For us this meant creating characterisation tests for the existing code base, with suitable coverage, before moving on to add the new feature. Whilst adding the new feature the existing code should be refactored to reduce its smell. The testing approach, the approaches for the new feature and the extent of refactoring were the interesting lessons that came out of this kata.

Testing Approaches

Individual Tests

The main approach taken for characterisation tests was to take each existing item category and create individual tests that checked each item’s behaviour after a certain number of days had passed. The resulting tests ended up like the sort of tests that might have been written if TDD had been used to develop the Gilded Rose code in the first place: each test aims to test a single item under certain conditions, often checking only one property of the item.

This resulted in clean understandable tests that also served as a guide to the expected behaviour of the Gilded Rose system. Each test aimed to test only part of the behaviour and a failure in a test indicated the part of the behaviour that had been changed. Once sufficient coverage (~100%) had been achieved our pairs looked at refactoring and adding new behaviour. From these tests it was easy to start adding new tests for the new class of ‘Legendary’ items and develop the new code using TDD.

The downside of this approach was that it took a considerable amount of time and dedication to produce these tests, along with a coverage tool to make sure that the code was covered before refactoring started. The production of the tests was certainly tedious at times. These tests typically took one hour’s practice time, and we would work as the same pair the following session to start refactoring and adding the new code.

On reflection a team could spend a lot of time producing extensive tests for a minor change in a small sub-system worth only a small amount to the business. If this was the only approach available then the cost may just have to be paid as the result of working with legacy code. However, other approaches may be available.

Golden Output Tests

Another approach was to take the existing program, which runs through 30 days for a list of items covering all the existing item types and prints the results to the console. By capturing that output and saving it to a file – a golden output file – we were able to write a test that redirected standard out to a text buffer and compared this buffer with the captured file. This test took slightly longer to write than a single individual test, but once written we had 100% coverage according to our tool and could look at implementing the new feature and refactoring the existing code. Not a unit test at all, but a characterisation test nonetheless.

From this point on we used a TDD approach: write a small test for one part of the behaviour of the new Legendary items and then implement it. We focused on the areas of the existing code base that we would like to add the new functionality to, added the functionality and then refactored the code to make it cleaner. We would only refactor the parts that we had changed, and perhaps some small sections around the area of change. Repeat until all the new functionality had been added.

If our refactoring or new code failed the characterisation test then either a quick inspection told us the problem or we hit Ctrl+Z before trying a smaller step. With this approach we were able to add the new functionality and refactor parts of the code well within a single practice session.

Yet our characterisation test did not serve as a good example of the expected behaviour of the system. Another developer could not take the test and start to understand the system with it. Only our coverage tool indicated that we had 100% coverage – with individual tests it was possible to take the system spec and write each individual test to confirm it. Failures in the test did not point to a specific behaviour that had broken. Also, the approach made us focus only on the area of change – making a reasonably clean change and relatively minor refactoring.

Comparison

The difference in approach between a test suite and a single golden-output characterisation test is important when choosing to employ these approaches in production code. Both approaches bring the code under control and allow the system to be refactored where necessary. I find the difference is a judgement based on the needs of the business and the impact on the whole software system. If the module that needs changing is a small module with little history of change then the golden output approach would be a suitable starting point: it brings the module under control and allows suitable changes without spending undue effort on a small change.

Where the module is a critical part of the software, or is subject to repeated change, then individual tests are more likely to be suitable. These tests are our aim – they should be fast to run, operate in isolation and isolate the behaviour under test. This investment is best made when we know that we will be making further changes to the module and want a good suite of tests to control the changes.

Experience can be a judge of how to bring code under control. When choosing between techniques, lean principles would tend towards doing just enough to bring the code under control in the least time, and deferring the creation of individual tests until subsequent changes to the module are necessary.

(In order to introduce any of the tests, very careful small changes without surrounding tests were necessary. See Michael Feathers’s “Working Effectively with Legacy Code” for techniques for introducing a testing ‘seam’.)

Posted in Dojo, Practice, Programming, Refactoring, TDD

Installing Windows 7 Upgrade to VirtualBox SATA Drive

Install Windows XP & Windows 7

First install Windows XP Professional from the installation ISO image onto a suitably sized IDE drive. I chose a 60 GB disk as I intended to upgrade to Windows 7. Once the installation is complete, install Windows 7 from the upgrade ISO image. Once this installation is complete, restart the OS and prepare to switch the drive to a SATA drive.

Switching to SATA drive

Thanks to the instructions here: http://netmusician.org/2010/05/virtualboxsata/

  1. Power off the OS. Make sure that there is a VirtualBox IDE controller and a SATA controller attached to the virtual machine. The IDE controller will have your existing disk attached. The SATA controller does not need any drive attached.
  2. Power on the OS. In Device Manager remove all the IDE ATA/ATAPI controllers. This is the scary part. Try to remove all of them. Not all will be deleted. The OS will add the correct controllers back in after a restart. Power off the OS.
  3. In VirtualBox remove the hard disk from the IDE controller and add it to the SATA controller.
  4. Restart the OS. Restart it once more if necessary.
  5. Check that the drive icon in the bottom right of your VM shows a SATA drive if you hover over it.

Posted in Tools

Remap Caps Lock

Another quick tip. It has been said many times before but it bears repeating – how to remap Caps Lock in Windows to something far less annoying.

Caps Lock, apparently a left-over from typewriters, where holding down Shift required quite a lot of force, is easily the most annoying key on your keyboard. Its uses are very limited, it is often pressed by accident, and the effect is significant. So much so that OS password screens display pop-up balloons or similar when it is pressed.

The cure: remap it to Shift. That’s probably what your little finger went over there to find anyway and is the least surprising of the re-mappings available.

Remapping registry edit

Remapping in Windows requires a registry edit. The key is:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout\Scancode Map

Change or add “Scancode Map” as binary data:

00,00,00,00,00,00,00,00,02,00,00,00,2a,00,3a,00,00,00,00,00

A restart is required.

The meaning is:

  • The first 8 bytes are header information: unexciting, all zeros.
  • The next four bytes are the number of mappings in the data, including a null terminator (so in this case 2).
  • The next four bytes are the mapping, made up of two 2-byte words: the first is the key to map to, 0x2A (Shift); the second is the key we would like to change, 0x3A (Caps Lock).
  • The mapping is followed by a 4-byte null terminator.
  • All words are stored little endian.
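To check the layout, the same 20 bytes can be rebuilt programmatically; this little JavaScript sketch is just a calculator for the value above:

```javascript
// Build the Scancode Map value byte by byte.
const bytes = [0, 0, 0, 0, 0, 0, 0, 0];   // 8-byte header: version and flags, all zeros

const entries = 2;                        // one mapping + the null terminator
bytes.push(entries & 0xff, 0, 0, 0);      // entry count, 4 bytes little endian

const pushWord = (w) => bytes.push(w & 0xff, (w >> 8) & 0xff);  // 2-byte word, little endian
pushWord(0x2a);                           // first word: the key to map to (Left Shift)
pushWord(0x3a);                           // second word: the key being remapped (Caps Lock)

bytes.push(0, 0, 0, 0);                   // 4-byte null terminator

const hex = bytes.map((b) => b.toString(16).padStart(2, '0')).join(',');
console.log(hex);  // → 00,00,00,00,00,00,00,00,02,00,00,00,2a,00,3a,00,00,00,00,00
```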

The MSDN article with exact details is available at: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463447.aspx

Posted in Tools

Using Meld with Mercurial

A quick tooltip.

Meld is a nice diff and file comparison tool. It can be used instead of WinMerge or the built-in Mercurial diff tool, KDiff3. Where it improves upon WinMerge and KDiff3 is in the visualisation of differences between files and versions: the differences expand or balloon out from comparison to comparison, which makes it easier to understand where code has been added and removed. It also supports 3-way merging.

Meld Screenshot

It is primarily a Linux tool, but as it’s written in Python it will run under Windows with the right supporting tools. The notes for installing under Windows are here: https://live.gnome.org/Meld/Windows

Once you have installed Python, the PyGTK all-in-one installer and Meld, you can get it to work under Mercurial by:

1. Create a batch file called meld.bat. In the batch file use the following command (change the path to the path of your Meld installation):

C:\Python27\python.exe "C:\Program Files (x86)\meld-1.5.4\bin\meld" %*

Put the meld.bat file somewhere in your PATH environment variable.

2. Edit your mercurial.ini file (found in your C:\Users\<username> directory) and add the following:

[ui]
merge = meld

[extensions]
extdiff =

[extdiff]
cmd.vdiff = meld

[merge-tools]
meld.args = $local $base $other $output

Fairly simple. Type “hg vdiff” in a repository to see the diffs in Meld. As Meld notes, version control support under Windows is hit and miss; this “works on my machine” for Mercurial on Windows.

Posted in Programming, Tools