
Experiences of a Lead Developer

It has been a bit over three years since I started leading software teams, and recently I've been reflecting on my experiences since then.

I've led three teams, all working on core and large products for the companies I've worked for. My current (and third) team is in the midst of a rebirth, and is in many ways my fourth team.

No two teams are the same. What worked for one team did not work for another. Each team has had different approaches to testing, coding standards and work ethic.

What I'd like to reflect upon is what I did, what worked, and what didn't. I'm very much interested in what other lead devs do for their teams.

What does a lead developer / team lead do?

I’m not going to define the role here, only to describe what I’ve had to do as a lead dev so far.

Facilitate standups and retrospectives. Watch out for standup smells.

Act as a buffer between product ownership/project management and the team.

Make the call on technical decisions. Have the technical architect hat handy.

Mentor developers. Enforce TDD, refactoring and clean code practices. I say enforce, not encourage, because I believe in these so strongly that I'm not willing to let my team do anything else.

Build a team that learns, not a team that expects me to tell them what to do. Do regular katas and demos of new tools/techniques we can use. Do code reviews and refactoring exercises as a team, in front of a projector.

Keep the team focused. I've done this by having the team not work on anything other than what our card wall says.

Know the product/ project thoroughly. Learn the quirks, know where everything is. Know the source, read the source to figure out what is going on.

Own the release process, do releases.

Do the grunt work. As a lead dev, I've spent a great deal of time on build and deployment scripts, and on setting up source control and continuous integration with TeamCity.

Stay out of the way. Leave the majority of the work to the team. I expect to be pulled into meetings, asked to answer questions, and to deal with issues outside of the team. Part of this is good, as it takes distractions away from the rest of the team, but it also means that most of the time I can't work on something continuously and see it through to the end. Let someone else in the team own the work.

Be there for the team, be prepared to answer questions all the time. This could be design decisions, having to explain the domain, or the history of why something was done in a certain way. This can be exhausting but it is necessary. 

Listen to the team chatter. Ask what is going on, when there is a discussion around code or a problem. Listen, observe and break into the conversation if needed.

Know the strengths and weaknesses of each team member. Know what they can do and what they can't do. Manage pair programming; some pairs may not be productive. This is necessary in the early stages of a team, but there will always be differences in skill levels.

What works.

I've covered a few of these above, but the short list below is what I think every team needs.

Fostering a learning culture. The best teams I’ve had learned together and taught each other.

Strong discipline in the team. Stick to what we've agreed as a team.

Having a programming god on the team. An expert, for example someone who knows NHibernate or MVC inside out, or could write a book on TDD; someone who can help with technical decisions and tell you when what you are doing is wrong. Someone who can go away, figure out the hard things, and come back and present them to the team.

Automate, automate, automate.

Have people with passion for what they do. Encourage this.

What doesn’t work.

Team members with an aversion to pair programming and sharing. No rock stars or heroes. Notice this early, don’t ignore it. This will fester and kill team morale.

Fear of code, and fear of breaking things. People shouldn’t be afraid to make mistakes. Mistakes are ok, we learn by breaking things. Give team members the power and confidence to change things.

Indiscipline. Nip it in the bud. Tell people when they are not following the rules and agreed practices. This is something I’ve learned the hard way.

Self organization without direction. Self organization needs a goal. A team suffers when it doesn't know what it is working towards.

Team democracy is not always good; a benevolent dictatorship works much better.

It’s the team, not the project.

An important lesson I've learnt is that what matters is not the software we produce, but what the team learns while writing all that code. The code we write is an expression of what we learn, and every new line of code is a new learning opportunity.

A good project, with quality code and few bugs in production, is a side effect of a good team. The team, not the software it produced, is the most important asset a company has.

Thoughts on branching strategies.

There comes a time during every project when someone in the team asks the question “What is our branching strategy?”. Off we go trying to find out what the current branching best practice is, what other teams are using, what the Agile way is. We may find solutions in feature branching, per-story branches, release branches and so on.

Let's take a step back. Why do we need a branching strategy? What is a branch?

We want to put some code in a source control branch because the code carries the risk of breaking the software in the mainline of development, trunk.

Are we 100% sure that the code in the branch won't break what is in trunk? We won't know for sure until we integrate the branch with trunk. We won't know until it goes through automated and manual testing, and all this after going through merge hell.

We put code in a branch to reduce risk. However, we haven't reduced that risk. The risk is still there, and we bring it back into trunk. Why not look at ways of reducing the risk in the first place?

How do we reduce risk?

1. Cut up a big piece of risk into smaller pieces of risk.

The bigger the risk, the more chopping it needs. Now we have smaller things to work on. We do those small pieces one by one, and if something breaks, we know which change broke it. It is easier to fix because it was a small change.

2. Break down a big piece of risk, into parts that have no risk and parts that have risk.

Analyze the problem, and break it into parts that can be done without breaking our software. This is interesting: the part which we thought would be a problem might have become a non-issue. We've isolated it. We know exactly which change will break our software. See 1 above.

3. Write tests for the things that can be broken by the risky bit of code, and test continuously.

We know what the new change could break. Before we make the change, let's write tests for our software so we know when it is broken. Make a change. Is anything broken? No. Did the next change break it? Yes. Fix it. Keep going, as in the sketch below.
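A minimal sketch of this in C# with NUnit, pinning down the current behaviour of a hypothetical DiscountCalculator before we make a risky change to it:

using NUnit.Framework;

[TestFixture]
public class DiscountCalculatorTests
{
    // Pin down the current behaviour before touching the risky code, so any
    // breakage shows up on the very next small change committed to trunk.
    [Test]
    public void TenPercentDiscountIsAppliedToOrdersOverOneHundred()
    {
        var calculator = new DiscountCalculator();

        // 10% off a 200 order leaves 180.
        Assert.AreEqual(180m, calculator.Apply(200m));
    }
}

Run the tests after every small change; the moment a change breaks this behaviour, we know exactly which change did it.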

Instead of shoving our risky code off into a branch, we've learnt to manage the risk, and reduce it. If the risk is so high that we can't mitigate it by doing 1, 2 and 3, then let's create a branch for the code. We have a branching strategy based on risk. We create a branch only for the riskiest code, where we haven't been able to reduce the risk with a divide and conquer approach.

In my opinion this is a much better way than having a default branching strategy for every piece of work/story/feature/MMF. Working continuously on trunk has benefits. We can release a feature faster, and pick up refactoring changes others have made sooner. We don't go through merge hell, and don't risk losing code in the process. Along the way we've learnt how to break up a problem into smaller pieces. We've got better at writing tests. We've learnt how to structure our software so that one thing does not break everything else, and we get closer to the nirvana of continuous deployment, because we have only one production line of code to deploy from.

Do you need a branching strategy? Think again.


How to: Sign XML messages with a SHA-1 signature, for Adobe Content Server

The past week I've been doing a spike to talk to Adobe Content Server 4 (ACS4). Querying the content in ACS4 is done in a REST style: the client sends an XML message via HTTP POST to the admin endpoint. The endpoint details are in the documentation, but are vague. This is usually at http://youracs4server/admin/EndPoint.

The endpoints need a signed XML message in the POST body. A typical message looks like this:

<request>
    <nonce>ABCD123==</nonce>
    <hmac>XXXXXXXXX===</hmac>
    <distributor>uid:8888-43434-34343434</distributor>
    <resource>
      ......
    </resource>
</request>

The hmac element contains the SHA-1 signature of the XML message. The signature is generated using a shared secret, and covers the whole XML except the hmac element. Before signing the message, we have to construct the XML without the hmac, then sign it, and then add the hmac.

I did this by using a two-stage XML serialization process. This may not be the best way to do it, and there has to be a better solution. I created a class named SignedXmlSerializer, which inherits from XmlSerializer. SignedXmlSerializer serializes objects that are of the base type Signable. The Signable class has two properties, Nonce and HMAC. Any class that has to be sent as a signed XML message must inherit from Signable.

public abstract class Signable
{
    [XmlElement("nonce")]
    public string Nonce { get; set; }

    [XmlElement("hmac")]
    public string HMAC { get; set; }
}

In the Serialize method of the SignedXmlSerializer, we first generate the nonce. The nonce makes each message unique. Then we serialize the object to a string. The signature is generated from this string and assigned to the HMAC property.

We then serialize the signed object to the writer that was passed in.

public void Serialize(T o, XmlTextWriter writer)
{
    // The nonce makes each message unique.
    o.Nonce = GenerateNonce();

    // Serialize the object without the hmac, and sign the resulting string.
    StringBuilder stringBuilder = GetXMLToSign(o);
    string hmac = GetSignature(stringBuilder);
    o.HMAC = hmac;

    // Serialize the signed object to the writer that was passed in.
    Serialize(writer, o);
}
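The GenerateNonce and GetSignature helpers are in the gist linked below. As a rough sketch of what they could look like (my approximation, not necessarily the gist's exact code), assuming the HashAlgorithm passed into the constructor is kept in a hashAlgorithm field:

private string GenerateNonce()
{
    // Random bytes, Base64 encoded, so that each message is unique.
    // Requires System.Security.Cryptography.
    byte[] bytes = new byte[16];
    new RNGCryptoServiceProvider().GetBytes(bytes);
    return Convert.ToBase64String(bytes);
}

private string GetSignature(StringBuilder xmlToSign)
{
    // Hash the serialized XML (which excludes the hmac element) with the
    // keyed algorithm, e.g. HMACSHA1, and Base64 encode the result.
    byte[] data = Encoding.UTF8.GetBytes(xmlToSign.ToString());
    return Convert.ToBase64String(hashAlgorithm.ComputeHash(data));
}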
To use the SignedXmlSerializer, pass in one of the HashAlgorithm types; I've used HMACSHA1 in my case. This makes it easier to use different signing algorithms. Typical usage of the SignedXmlSerializer is as follows:
HMACSHA1 hmacsha1 = new HMACSHA1 { Key = Encoding.ASCII.GetBytes("consumerSecret") };
SignedXmlSerializer<Request> signedXmlSerializer = new SignedXmlSerializer<Request>(hmacsha1);

StringBuilder sb = new StringBuilder();
StringWriter writer = new StringWriter(sb);
signedXmlSerializer.Serialize(req, new XmlTextWriter(writer));

The final serialized string can be sent via HTTP POST to ACS4.
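For example, with WebClient (the endpoint path is the placeholder from above, and the content type is my assumption; check the ACS4 documentation for the actual values):

// Requires System.Net.
using (WebClient client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "text/xml"; // assumed content type
    string response = client.UploadString("http://youracs4server/admin/EndPoint", sb.ToString());
}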

Code for the SignedXmlSerializer class is here: http://gist.github.com/303962

Test Smells and Test Code Quality Metrics

The major highlight at XP Day 2009 was Mark Striebeck's talk on unit testing practices at Google. What makes a good test depends on experience, skill and school of thought. I had to agree when he said that developers can be almost religious when it comes to the topic of what makes a good test. This made them solve the problem the Google way, by gathering data. Let the data speak.

He went on to describe metrics that they were collecting on tests and test code. A test that has never failed is likely to be a bad test. If the test itself was changed to make it pass, that is also an attribute of a bad test. A test can be a good test if the code was fixed to make the test pass.

This got me thinking. Generally, I haven't gathered metrics on test code. We have a pretty good metrics dashboard for production code. What metrics can I gather on test code?

Metrics on test code should also focus on the readability of the code. Moderately long test methods are OK, but not too long. My opinion is that a test method with more than 20 lines is too big.

Tests should be concise, and the assert should be obvious. Some code duplication is fine to make the test readable. This is all fine, but how can I gather these as metrics? The only way to judge this is to eyeball the tests, and there are differences of opinion.

However, there are ways to measure what a test should not be. These are test smells, as described in xUnit Test Patterns.

I've listed a few test smells below, with NDepend CQL queries to find them. These can be automated in the build process and flagged up.

Large Test Methods

These can be a chore to read. Tests should be written as simply as possible. These also point to too many responsibilities and dependencies in the code being tested, as most of the test code is used to do setup for the test.

SELECT METHODS WHERE HasAttribute "NUnit.Framework.TestAttribute" AND NbLinesOfCode > 20

Large setup methods

Usually, when unit testing the same code, we tend to have a common setup method, in order to make the tests more readable. What happens is that more and more code is moved into the common setup method. We become blind to this after a while, and all the dependencies for the test are hidden away. If you do have [SetUp] methods, keep them small.

SELECT METHODS WHERE HasAttribute "NUnit.Framework.SetUpAttribute" AND NbLinesOfCode > 10

Deep inheritance trees in test fixtures

Again, common test code is moved up to a base class, and the base class is used in many tests. Then more base classes are created. This creates tighter coupling between test classes, which makes tests harder to change. Low coupling and high cohesion apply to test code as well. Make each unit test class as independent as possible.

SELECT TYPES WHERE HasAttribute "NUnit.Framework.TestFixtureAttribute" AND DepthOfInheritance > 2

Test fixture setup

TestFixtureSetUp is bad. A TestFixtureSetUp method runs once before all the tests in the fixture. This leads to fragile tests, and inadvertently leads to using shared state. Use SetUp instead.

SELECT METHODS WHERE HasAttribute "NUnit.Framework.TestFixtureSetUpAttribute"

Tests that fail when they are run in a different order

The xUnit test runner helps with this by randomizing the order in which tests are run.
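A hypothetical example of the smell: the tests below share mutable state, so ReadsCounter passes only if IncrementsCounter has already run.

using NUnit.Framework;

[TestFixture]
public class OrderDependentTests
{
    // Shared mutable state between tests is the root cause.
    private static int counter;

    [Test]
    public void IncrementsCounter()
    {
        counter++;
        Assert.AreEqual(1, counter);
    }

    [Test]
    public void ReadsCounter()
    {
        // Fails if this test runs before IncrementsCounter.
        Assert.AreEqual(1, counter);
    }
}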

Ignored tests

Ignored tests are like commented-out code: dead code that doesn't do anything. Either fix them or delete them.
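A CQL query in the same vein can flag these, assuming NUnit's IgnoreAttribute is used:

SELECT METHODS WHERE HasAttribute "NUnit.Framework.IgnoreAttribute"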

I have yet to find a way of detecting duplicated tests, shared state in tests, and multiple asserts in tests. What other ways can I find test smells?

Avoid the slow Add Reference dialog box in Visual Studio 2008


When you are in the zone and want to add a reference to Rhino, NUnit or any other common assembly, adding it via the Add Reference dialog can be painfully slow. Fortunately VS has a good automation interface, which lets you write macros.

I wrote a simple macro to add an NUnit reference to the current project. Add this to your Macros project in Visual Studio, and map a button or a shortcut key to it. This way you can add those common references pretty quickly.

This is a cleaned-up version of the sample here: http://msdn.microsoft.com/en-us/library/vslangproj80.reference3%28VS.80%29.aspx


Sub AddNUnitReference()
    AddNewReference(DTE, "C:\Tools\NUnit\nunit.framework.dll")
End Sub

Sub AddNewReference(ByVal dte As DTE2, ByVal referencePath As String)
    ' Get the VSProject wrapper for the project that contains the active document.
    Dim aVSProject As VSProject = CType(dte.ActiveDocument.ProjectItem.ContainingProject.Object, VSProject)

    ' Add an assembly reference to the project.
    Dim newRef As Reference = aVSProject.References.Add(referencePath)
End Sub