July 10, 2011
Back in 2004, I took a sabbatical from software development to pursue my passion for snowboarding. It was a decision that led to a parallel “career” as a snowboard, and later ski, instructor. Aside from a few communication skills, I never really thought the two would intertwine. However, instructing sparked an interest in coaching and mentoring, and those in turn gave me helpful ideas as I found myself more and more involved in a leadership role.
When I undertook my first training course, in the mountains of New South Wales, there was an important phrase I learnt: Safety, Fun and Learning. In the UK, we use the variation Safety, Enjoyment and Learning – which gives the acronym SEL (“sell”). We’re selling a suitable framework for a pleasant and productive lesson experience. Each word builds on the one before: it’s difficult to enjoy something if you’re not safe, and it’s tricky to learn something if you’re not enjoying it.
Safety, Enjoyment, Learning can be applied to many things other than snowboarding or skiing. As a team lead, senior developer or software coach, we also have to be mindful of these three words whether we realise it or not. All three are required to make sure our team has a pleasant and productive experience developing software.
Let’s look at safety first. The risk of injury, avalanches, sunburn or hypothermia is pretty low in software development – if it isn’t, I’d strongly recommend moving to a different office. But safety is about more than just physical danger. I’ve worked with teams where there’s a heavy fear of failure, an atmosphere where making a mistake is a heinous crime. The problem is, people become cautious – they stay inside their comfort zone, they don’t learn or progress, their work remains static. It’s not productive at all – for the team or the customer.
Mistakes are okay. We all make them. The important thing is that people feel they can own up to making a mistake, without retribution, and then take responsibility for correcting that mistake. Mistakes are learning experiences – make them once, try not to make them again. People need to know they can put their hand up and admit they did something wrong safely, and that the team will pull together to help that person take responsibility and correct the problem.
Leading on from this is the knowledge that someone can always ask for help, without feeling they will lose respect or be ridiculed. We all have a moment of stupidity and forget something obvious, we all stare at a problem for hours because our brain has masked the critical (and obvious) cause of the problem. Asking for help is often one of the most difficult things for people to do, so we need to ensure people feel safe enough that they can stick up a hand without fear.
Enjoyment is next. I’ve been very fortunate during my working career that I’ve nearly always worked with teams where we can laugh, joke and gossip – yet still get our work done. On the rare occasions I’ve been in a team that doesn’t have that environment, you certainly notice the difference in productivity – no one really gets anything done, and certainly not to a good standard. A former boss of mine used to say that he didn’t like a quiet office, because it meant no one was working. The social interaction feeds the mood, raises the spirits even when tackling a Project From Hell, and leads to better communication and thus higher productivity.
Work shouldn’t be a miserable experience, because it becomes a waste of time for everyone – if you don’t enjoy your job, dust off your CV and start looking for one you will enjoy. It was that kind of decision that resulted in me ending up on a mountain on the other side of the world, a decision that changed my life in a big way: it allowed me to discover a whole new set of skills, and it helped me rediscover my love of programming.
And finally learning. If you’re not learning new things each week in your job, something is wrong. It doesn’t have to be big things like a new programming language or framework – it could be a tiny optimisation, a better way of structuring your code, a new bit of business logic, an unusual hobby of the developer sat next to you. As human beings we need to keep growing and learning or we stagnate, and when we stagnate we become less productive. I know far too many programmers who know one programming language, one framework, one operating system. Some even get quite angry that they should learn something new! After all, BASIC and 6502 assembly should be all you could ever need…
I try to learn a new language every year or two: Erlang, PHP, C#, Clojure. Even if you don’t use them for any real-world work, sometimes you can gain a new insight or a better way of doing things with your existing toolset. Erlang helped me with concurrency and parallelism, Clojure taught me to write more functionally, PHP makes me appreciate Python more than ever, I wish C# wasn’t tied to .NET/Mono. The definition of “learn” changes each time: I mainly aim to get a feel for the nuances and pros/cons of the language before deciding to stop or carry on.
Code dojos are great fun. Code katas provide useful exercises for developers. Regular retrospectives encourage developers to sit back and think about what they have or haven’t achieved and find new ways to do things. Pair programming provides knowledge transfer. All that gossip and laughter I mentioned earlier can lead to sudden revelations and new ideas. Teams should be mindful of new things, new techniques, making progress in their own skills and knowledge. Programmers should be encouraged to try new things, push their comfort zone wider and improve themselves.
Safety, enjoyment, learning.
Next time you’re with your team, take a good look and ask yourself if these three concepts are being employed successfully. If not, perhaps it’s time you started making some changes?
April 25, 2010
I love books. Proper, paper books. I’m always recommending titles to people, so thought it might be a good idea to pluck ten books from my bookshelves that I regularly recommend to anyone who wants to learn about Agile development, or improve their skills.
Now, please note that my background is with XP as that is the process that has worked the best for myself and for the teams I have worked with. As such, recommendation number one is:
Extreme Programming Explained (2000) – Kent Beck
This is the book that kicked it all off for me back in 2000, and it still remains the most concise and readable introduction to XP. It’s also an important text for anyone wishing to understand what it means to be “Agile”.
User Stories Applied (2004) – Mike Cohn
Human society has used stories for a very long time to convey information, yet writing user stories is often not as easy as it should be. This is one of those books that I ummed and ahhed about buying: what could a book on such a “small” subject really offer me? The answer was quite a lot, because there’s a great deal involved in learning to write effective stories. More than that, writing stories is just the first step – what we do with those stories is an important part often glossed over.
Agile Estimating and Planning (2006) – Mike Cohn
In many ways, this leads on from User Stories Applied. How do we turn a stack of stories into a tangible piece of software that delivers value to the customer? The whole spectrum of estimation, prioritisation and planning is given coverage, explaining not just how we do things, but why we do things.
Domain-Driven Design (2004) – Eric Evans
Anyone who says that Agile doesn’t do design is talking nonsense. Unfortunately, the lack of books on the subject of Agile design doesn’t help to dispel this myth. Domain-Driven Design is a much needed addition to the Agile library. Two core concepts, the use of common language and a strong model, are presented and elaborated upon to form a simple, elegant, yet powerful framework for designing large systems. My only negative point is that the book could probably be condensed to half the size without losing the clarity of vision.
Refactoring (1999) – Martin Fowler
I picked up this book not long after reading Extreme Programming Explained. This book was very influential on me, because it made me realise we don’t have to be satisfied with the final implemented solution to a problem – people who haven’t read the book often raise an eyebrow when I mention things like “code smells”. Designs and code go stale more often than we like to admit, and this book provides a readable, in-depth guide to tackling the overhaul of code in order to improve the design, readability and maintainability. Those of us who use dynamic languages in our large-scale projects might find some of the refactorings unimportant, but the concepts are still invaluable.
Test-Driven Development (2003) – David Astels
To refactor code safely, you need to have an excellent test suite to ensure you don’t break anything or introduce (or even fix!) unexpected side-effects. This book is an excellent introduction to the subject – very clear and concise, yet often overlooked in favour of Kent Beck’s book. The bulk of the book uses examples with Java and JUnit, but covers other xUnit implementations for a variety of languages. The choice of language is unimportant as the concepts are equally valid.
Agile Testing (2009) – Lisa Crispin / Janet Gregory
An impulse purchase from Amazon that ended up being an essential book in my library. Although aimed more towards a QA team nervously wondering if they’re obsolete in the New Agile Order, this book is an incredibly good resource for anyone involved in a software project. Every aspect of testing is covered in a very approachable and comprehensive style, including the different roles testing plays and how testers can incorporate their role as part of an Agile team. Developers need to remember that there’s more to testing than their own TDD test suites, and that’s where an Agile QA team comes into play.
Continuous Integration (2007) – Paul Duvall / Steve Matyas / Andrew Glover
There are quite a few good books on continuous integration, but this one manages to provide both a decent introduction to the subject and a good balance between theory and practice. Although I’d used CI long before I’d read this book, I must admit I’d never really needed more than a superficial knowledge. This book helped fill in some massive gaps in my knowledge and understanding.
Agile Retrospectives (2006) – Esther Derby / Diana Larsen
This book covers a much-neglected part of the Agile process: reflection and self-improvement of teams. Regular retrospectives give teams time to identify where things are going well and where they aren’t, and to seek out ways to continually improve. The book introduces the concepts behind retrospectives, gives a framework for running them, and presents various supporting exercises that can be used.
Principles of Software Engineering Management (1988) – Tom Gilb
This might seem an unusual one to finish with. Why is it even on an Agile reading list? Because it’s easy to forget that “being Agile” is not new, or a fad, or sprung up suddenly. This book covers evolutionary delivery, estimating and dealing with risk, planning and code inspection. Although written for a “traditional” software engineering audience it reads in places more like a proto-Agile manifesto, a genuine reflection on the discipline and where it was heading at the end of the 80s. I really wish I’d discovered this book at university, rather than ten years later, because it would’ve changed my approach to software engineering outside of academia. Even read twenty years after it was published, it remains a very relevant text.
February 19, 2010
At university I was taught the principles of software engineering. The methodology we used was termed “the waterfall model”. Given I was a self-taught programmer until then, it sounded great – a structured, disciplined and “engineering”-like process that would turn me into a coding professional.
The trouble was, outside of the ivory towers of academia it just wouldn’t fit. I jokingly say to people that the process works great if you know exactly what you want at the beginning, everything works exactly to plan and you’re probably NASA. For everyone else, no project plan survives first contact.
I tried for some time to apply rigid software engineering principles in the hope the lack of success was down to inexperience and poor technical skills. Gradually I realised the problem wasn’t the methodology as such, it was that no step in the process was an exact science, nor did the steps follow a linear sequence. We can’t do a perfect design up front, because no customer knows exactly what they want. We can’t do exact time and cost estimations at the start, because no developer knows exactly how long it will take to build or even how it will be built. No wonder the poor project manager wonders why their Gantt chart goes stale so quickly, and the customer gets shipped a lemon six months or a year down the line.
There is only one time when everyone knows the whats, whens, whys and hows of a project with any certainty: when the project has finished. And given that many projects are, supposedly, shipped late and over budget that knowledge is often painfully gained. There’s a reason why some organisations call retrospectives “post-mortems”.
Wouldn’t it be great to have the benefit of hindsight for each and every project? Yes, it would. Better yet, we already do to an extent. For every project we work on, we gain knowledge and experience that we apply (one hopes) to future projects. Unfortunately, every project is different. Teams change, customers change, technologies change, budgets change, tasks change, deadlines change, risks change. Even redoing a known project from scratch, using exactly the same customer and team will result in something different. (If it doesn’t, you have a big problem).
Change is often seen as a bad thing, because people don’t normally like change. Go order a mass desk move to a team in mid-project and take copious notes on the response. Bonus points if the move is the day before a hard deadline.
For me, Agile is about managing change. Scratch that, it’s about embracing change.
When we start a project it’s fresh, new… and rather hazy. The customer has a vague idea of what they want, and the development team has a vague idea of what needs to be done. If you’re lucky, those two vague ideas are of the same hazy vision. As we progress through the project, our collective understanding of the domain improves, and the overall vision is refined. The customer slowly realises what they really want, and the developer slowly understands what the customer is aiming for. The developer’s estimates become more accurate, the design takes a more definite shape, the software yields more value to the customer. We all get better.
By embracing change, by acknowledging it’s going to happen, we work on ways to make change less painful. Changing something after a lot of time and effort is not fun, so short iterations of work mean more plentiful feedback and thus more regular course corrections. Short iterations of work mean concentrating on building only what we need for an iteration – we keep our design and our code supple, working on what we need to work on at a given time. We make our code simple and understandable, so changes are easy, and testable, so we can implement and refactor with confidence.
Agile at its very core is about embracing change. Change happens all the time, we can’t avoid it, so we need to take it into account when we develop software. No matter what Agile methodology you use, or want to use, XP, Scrum, RUP, whatever – it’s important that it allows you to deal with change effectively and naturally.
Without that, it just ain’t Agile.
September 6, 2009
I’ve been attending management training courses over the last few months, with one one-day course per month. The most recent was on Time Management. I wouldn’t say I’m the greatest time manager – in fact, I’m a little disorganised – but luckily my public image is much better than my own mental image: people tell me that I get things done on time. Checking back, it’s true – but I always feel I can do better.
So it was great that one thing in particular grabbed me during the course: the Urgent / Important Matrix. The matrix struck me as a great way to evaluate workloads and priorities in a lightweight way, and I’m going to start using it at work to see how it pans out. I believe it originated in Stephen Covey’s “Seven Habits of Highly Effective People”, a book I haven’t read but have just added to my Amazon wish list.
Grab your current To Do list, or whatever else you use to track tasks, and go through each in turn. For each task, you assign an Urgency rating and an Importance rating. To keep it simple, your urgency is either Urgent or Not Urgent, and your importance (yep, you guessed it) is either Important or Not Important. You might find it easier to draw a large 2×2 grid with Importance on the x-axis, and Urgency on the y-axis. Write down the tasks in the appropriate box for their Urgency/Importance combination.
Now take a look at those tasks on the matrix.
At one extreme, we have the tasks that are neither urgent nor important. These are likely to be time-wasters and should be dealt with appropriately: Do, Delegate, Dump or Defer. If they’re quick tasks, or you have nothing else to do, do them: get them completed and off your list as soon as possible. Better yet, delegate the tasks to people who want to tackle them and are able to get the work done. Otherwise, dump or defer the task. If it doesn’t need to be done, get rid of it from your list, or defer it until you know whether it can go. If it becomes important or urgent later, it’ll come back – we all know that from experience!
At the other extreme are the tasks that are both important and urgent. Your website has been Slashdotted, your data centre is currently engulfed by flames, or your new project rollout has failed spectacularly and it needs to be fixed ASAP. This section of the matrix is reserved for crises. You need to focus on dealing with those tasks straight away, but you also need to ensure two things: make sure you don’t end up firefighting the same issue again later on, and make sure other tasks don’t start creeping into this part of the matrix while you deal with a crisis task.
That leaves us two bits…
Urgent but Not Important tasks are ripe for delegation, if possible. If you can’t delegate them, they should be dealt with quickly and efficiently. Focus on resolving only the urgent aspect(s) of the task so you don’t use up valuable time needed for other tasks. Ultimately you’ll end up with a task struck off the To Do list, or one that is now no longer urgent or important.
Important but Not Urgent tasks are the interesting bit: these are usually the main tasks of your job role. I found this rather surprising, because I started off thinking most of my work was in the Urgent + Important category (my customers would certainly say they are), but it isn’t. Suddenly things seemed much clearer and more manageable. You need to keep on top of these tasks though, allocating sufficient time to get tasks done. Otherwise, they will end up rising in urgency and become crisis tasks.
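The whole matrix is simple enough to sketch in a few lines of Python – the task names and action labels below are mine, purely for illustration:

```python
# Map each (urgent, important) pair to the suggested action.
# The wording of the actions is illustrative, not from the course.
ACTIONS = {
    (True, True): "crisis: do it now",
    (False, True): "core work: schedule time for it",
    (True, False): "delegate it, or do it quickly",
    (False, False): "do, delegate, dump or defer",
}

def triage(tasks):
    """Group (name, urgent, important) tasks into the four quadrants."""
    matrix = {action: [] for action in ACTIONS.values()}
    for name, urgent, important in tasks:
        matrix[ACTIONS[(urgent, important)]].append(name)
    return matrix

# A hypothetical To Do list, one entry per quadrant.
todo = [
    ("Data centre on fire", True, True),
    ("Plan next month's work", False, True),
    ("Forward meeting minutes", True, False),
    ("Reorganise desk drawer", False, False),
]
```

Running `triage(todo)` files each task under its suggested action, which is really all the paper version of the grid does – the value is in being forced to make the two yes/no judgements per task.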
If you’re struggling to find the time to do everything on your To Do list, give the matrix a try and see if it helps. You’ll probably find some tasks that can be delegated, some others that can be dumped and a few that can be struck off the list by finding a spare few minutes during a hectic day (great when you’ve hit a problem and need to take a break and do something else). More importantly, you’ll spot the ones that absolutely need to be dealt with first, and the ones that really matter to your day-to-day work.
September 1, 2009
I’ve been playing with Django for a while now, but today was the first day writing proper production code which will end up on a public-facing website. It will be the first Django application in use by the company, so there’s a lot riding on getting it right.
Django is great for writing web applications quickly and easily – whenever I hit upon a problem, I find something in Django that already does what I need, or I can implement something small in Python that fills in the gap.
One really nice feature is that it’s easy to test Django using both of Python’s built-in testing frameworks: unittest and doctest. I’m not a fan of doctests, though I can see why some people find them useful. I do love unittest though, and so it’s great that Django has provided enhancements to unittest to support things like basic browser client tests, loading test data fixtures and using a test database (SQLite is great for this).
The client doesn’t aim to replace dedicated web test systems such as Selenium, Windmill or Twill. It just offers a lightweight way to test your Django application’s functionality out-of-the-box. You can check response codes, test which template is being rendered, fire test data to forms, or check for text in returned pages. This was very useful today when fleshing out the first part of the application: a form with various fields, buttons, drop-downs and validation logic. I could write my tests first, a bit at a time, then implement the functionality to pass the tests. Supported by the confidence given by the tests, I could check the specified logic was correct, keep refactoring the code, and keep the design supple at the same time.
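Stripped of the HTTP layer that Django’s client handles, the test-first rhythm on the form’s validation logic looks something like this with plain unittest – `validate_contact` and its rules are hypothetical stand-ins for illustration, not the production code:

```python
import unittest

def validate_contact(data):
    """Hypothetical form validation: return a dict of field -> error message."""
    errors = {}
    if "@" not in data.get("email", ""):
        errors["email"] = "Enter a valid email address."
    if not data.get("message", "").strip():
        errors["message"] = "This field is required."
    return errors

class ContactFormTest(unittest.TestCase):
    # Each test was written first, watched fail, then the
    # validation code above was filled in to make it pass.
    def test_valid_submission_passes(self):
        data = {"email": "user@example.com", "message": "Hello"}
        self.assertEqual(validate_contact(data), {})

    def test_blank_message_rejected(self):
        data = {"email": "user@example.com", "message": "   "}
        self.assertIn("message", validate_contact(data))

    def test_invalid_email_rejected(self):
        self.assertIn("email", validate_contact({"email": "nope", "message": "hi"}))
```

Run with `python -m unittest`. The Django TestCase wraps exactly this style of test around a request/response cycle, so the same loop works one level up with real form posts.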
Of course, I will eventually need to invest in dedicated web testing, probably Selenium, but for the moment the Django TestCase gives me a very quick and flexible way to check functionality while things are changing frequently. Django rocks!
August 29, 2009
When I taught myself to code, testing wasn’t mentioned. Things were so simple, with a limited set of inputs, that it was usually obvious when it gave the expected result or not. If it didn’t, out would come the print statements and debugging by trial and error.
During formal lessons on programming, testing became part of the process and rightly so. As I wrote more and more complex code, it became obvious that manual testing could rapidly become impossible if you wanted to test the range of possible inputs and outputs. Formal proofs were offered to ease the situation, but to paraphrase Donald Knuth: the code has not been tested, it’s only been proven correct. We’d manually perform tests to ensure the code worked. It sucked. Automating the process was possible, but it was time-consuming to write our own test harnesses. And we wrote the tests after the code – that sounded right, right?
When I started using XP-style agile development in 2000, I was introduced to the concept of Test-Driven Development or Test-Driven Design (depending on who you speak to). TDD is the practice of writing tests first. It sounds weird: how do you test code that hasn’t been written yet? Worse, you write tests that will fail first and then write (or fix) the code to make the tests pass. It took a while to get used to the idea, but now it’s a comfortable part of my software development toolkit.
I should make it clear that I’m not one of those people who insist all code should be written test-first. If you’re writing throw-away code, for example, you shouldn’t feel bad if you don’t write tests first. However, for production code it makes a lot of sense. The initial time taken to write those tests pays off later when you need to refactor code or perform maintenance days, weeks, even years later. If you’re suddenly thrust into a legacy project cold, you’ll appreciate the fact your predecessor supplied a comprehensive test suite so you can make changes with confidence. Think of test suites as part specification, part bug tracker and part developer documentation, as well as an equal member of the code base.
Writing tests first also helps with design and implementation. We need to think more carefully about inputs and outputs – not just what is expected, but also the unexpected (what can go wrong). We have to write code that makes testing easy, so we are encouraged and rewarded for writing code that is simpler and more loosely coupled. It also keeps you focused: you write the minimum code that makes the tests pass. If you want to add more functionality, you write more tests to express that functionality. Far too often we as programmers get side-tracked, adding features we might need – all that extra code might not end up being used, but every extra line is a potential hiding place for a bug or a future maintenance problem.
One thing that doesn’t get mentioned often is that you can only have confidence in code covered by the tests. I know some who have supreme confidence in their tests, believing that everything is covered fully. It isn’t. The old mantra of “there’s always one more bug” can be re-applied for the test-driven generation: “there’s always one more test”. Keep adding more tests to cover existing code. If you’re working on something and notice a potential problem: write a test. If something might be a source of trouble, write a test to check if your hypothesis is correct. If you get bug reports, write tests that reproduce the bug before you start coding a fix. Take that test and see if there are related cases that might trigger another bug, and write tests for those too. Use code coverage tools to verify how well you’re doing with test writing: programmers are humans, and humans make mistakes – but we, hopefully, learn from mistakes.
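To make the bug-report workflow concrete (the function and the bug here are invented for illustration): suppose a report comes in that the last, partial page of results goes missing. Pin the bug down with a failing test first, then fix the code, then probe the related cases:

```python
import unittest

def paginate(items, page_size):
    """Split items into pages of at most page_size.
    A hypothetical earlier version computed the page count with
    len(items) // page_size and silently dropped a final partial
    page; the first test below was written to reproduce that report."""
    pages = []
    for start in range(0, len(items), page_size):
        pages.append(items[start:start + page_size])
    return pages

class PaginateRegressionTest(unittest.TestCase):
    def test_final_partial_page_is_kept(self):
        # Reproduces the original bug report: 5 items, page size 2.
        self.assertEqual(paginate([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])

    def test_exact_multiple_unaffected(self):
        # A related case checked while in the area.
        self.assertEqual(paginate([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_empty_input_gives_no_pages(self):
        self.assertEqual(paginate([], 3), [])
```

The regression test stays in the suite forever, so the bug can never quietly return – that is the “part bug tracker” role of a test suite in action.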
Test suites are living and breathing parts of the code base. They need refactoring, they need reorganising, they need to be added to and maintained in order to stay fresh, useful and provide a source of confidence in the code they support. Like stale documentation or comments, stale test suites can be just as much a source of code smells as the code they claim to test.
As with other techniques for improving code craftsmanship, TDD isn’t a cure-all on its own. What it does provide is a powerful addition to your programming toolbox.
August 24, 2009
One problem faced by my team is that we’re not really a team, more a group of teams and individuals lumped under the generic title of data management. We’re Python and .NET programmers, SQL developers, DBAs, reporting and data warehousing specialists, data integration engineers, Salesforce developers, intranet tool builders… and we do more than our job roles. We’re all working on different things, but you appreciate that while our tasks and projects might not be the same, the issues we face are similar. Our customers, mostly internal, are similar – we talk to the same people and use a common business language / jargon. We face typical problems: inter-departmental communication, scheduling of work, dealing with defects, refactoring code, juggling priorities, coping with the office environment. In hindsight, it sounds obvious: it’s the same set of problems that face anyone in IT, or most lines of work for that matter.
I decided that we needed to pool knowledge, get a bit of team bonding going and try to figure out how we can collectively tackle the irritations we face. We also needed to share and celebrate our successes.
A few months ago I picked up a copy of “Agile Retrospectives” written by Esther Derby and Diana Larsen. Even if you don’t practise agile development, retrospectives can be a very useful way to close projects or provide regular team reviews. In my case, it was a great way to get the team talking to each other and sharing their problems and successes. We’ve opted for a 2-hour monthly retrospective, with a general reflection on the time between the current and previous retrospective. One thing I really need to do more is set some proper tasks and goals from the meeting as we tend to be a little vague at times. We also find ourselves focusing on problems we can’t easily fix because they lie outside our immediate control. We need to work on the things we can do ourselves first.
A typical retrospective, as defined in the book, is broken up into five sections:
- Set the Stage
- Gather Data
- Generate Insights
- Decide What to Do
- Close the Retrospective
Set the Stage is a simple welcome and introduction. This is a chance to get everyone to say a couple of words so they feel comfortable talking and contributing. So far, I’ve stuck with asking the question “in one or two words, what’s on your mind?”. It can be uncomfortable for some, but it can be quite revealing too, and it certainly takes the edge off things – once you’ve said one or two words, more will often follow.
For the Gather Data part of the retrospective, the aim is to gather as much raw information as possible about the project / iteration / month covered by the retrospective. Different people experience different things – in our case, the fact we’re all doing different things means we cover quite a lot of events. Some of these overlap (server problems, office temperature, distractions) while others can be specific to one or two people. I use the “Mad, Sad, Glad” game, accompanied by suitably coloured sticky notes, to get the team to write out anything and everything that comes to mind. We’re often talking quite a lot, and as people stick up notes on a wall it prompts other ideas and memories. When done, we collectively cluster the notes and then choose labels for each cluster – it’s perfectly fine if we discover the need to recluster as we label, because we’ve spotted something interesting or have found a better way to cluster.
Generate Insights is about analysing the data gathered: looking for patterns and discussing our stories. For our retrospectives, quite by accident, this ended up becoming part of the clustering process mentioned above. It felt a natural transition, so we’ve kept it that way for now. Things will obviously change when I decide to switch the game to something different.
Next up is Decide What To Do. Using sticky dots, we then vote on which cluster is most important for us to take a look at. Obviously, things that are affecting us negatively will be most likely to want attention, but it’s nice to see the good things voted for as well. We then brainstorm ways we can tackle the cluster before voting on which suggestion we want to act on. I’ve found making a list of things we need to Start Doing, Stop Doing and Keep Doing to be simple and interesting for the group. The only downside, which is my fault, is pinning the team down on setting actions that are achievable by us. Too often we hope for other people to change, when we could (and should) more easily work on change amongst ourselves.
Finally, Close the Retrospective. Wrap things up and let the retrospective coach know how things went. I like the Helped, Hindered, Hypothesis game where everyone writes a note to me to tell me what helped them during the retrospective, what didn’t help, and what things I can do next time (the “Hypothesis” part of the game – struggling for an H?).
It’s early days for our retrospectives, but there’s plenty of scope for improvement and mixing things up to keep it relevant and, above all, fun. It’s proved a great way to vent some steam, think about the way we do things and give ourselves a pat on the back for the good things we do. We don’t celebrate success as much as we should, which is a shame because we all want to do the best we can and get a little recognition when we achieve that. I’m enjoying hosting them and it’s a great help for me as it provides the opportunity to work on things I always felt I couldn’t do well: like leading a group, facilitating team discussion and planning something IT-related that isn’t a software project.
Give them a go sometime!