August 30, 2009
The first tech podcast I started listening to was LUG Radio, now sadly defunct. The second one was This Week In Django, which has been dormant since the beginning of the year. I worry that I jinx podcasts.
Podcasts give me a great way to keep up with what’s happening in areas I’m interested in. I sometimes listen to them at work, but usually in the evenings when I can listen while doing other things. My current list of tech-related podcasts is as follows (in order of appearance in my freedom-hating iTunes podcast list):
Distrocast – http://distrocast.org/
JD and Jeremy cast their highly critical gaze over various Linux distributions, and other things too. I love the fact nothing is sacred and they’re not afraid to rip things to shreds and hurl abuse. Not one for the easily offended or the “Linux and Open Source is Always Perfect” brigade. A much needed contribution to the Linux community. They make me chuckle, they make me think, and they introduced me to ArchLinux, which is a pretty cool distro.
FLOSS Weekly – http://www.twit.tv/FLOSS
Randal Schwartz, Leo Laporte, and guest appearances from the mighty Jono Bacon. FLOSS Weekly takes a look at different free/libre open source projects. Always entertaining and informative, it’s a great way to hear more about projects I use or have heard of, and an excellent way to discover new ones.
The MDN Show – http://www.mac-developer-network.com/category/shows/
Formerly Late Night Cocoa, which ended up becoming a paid-for service. Scotty and John discuss various aspects of Mac development in an easy-to-listen-to format, albeit with some very cheesy music. The supporting tips from “The World According to Gemmell” are always small, thought-provoking chunks of advice that are great to try out, whether you’re a Mac developer or not.
Linux Outlaws – http://www.linuxoutlaws.com/
Fab and Dan’s show has proved a more than capable replacement for LUG Radio, covering Linux and free open source software… with the odd beer review, creative commons music, and Fail. Linux Outlaws is laid-back, light-hearted and always interesting, packing a lot into each show.
MacCast – http://www.maccast.com/
Adam Christianson’s enthusiasm for all things Macintosh really shines through. MacCast is a great way to keep up-to-date with the latest news from Apple and the Mac community.
.NET Rocks – http://www.dotnetrocks.com/
I had a bit of trouble finding a decent .NET podcast when I realised I needed to start delving into the murky world of Microsoft and .NET development. Carl and Richard were my salvation. A slick podcast which covers the breadth and depth of .NET, it’s the perfect way for me to keep an eye on the .NET ecosystem. If you’re used to the world of open source and UNIX development, I’d recommend giving this show a go to see things from the other side of the fence. You’ll learn some new things and maybe appreciate what you’ve got a bit more.
Pragmatic Podcasts – http://www.pragprog.com/podcasts
Irregular, short podcasts from the excellent Pragmatic Programmers publishing house. The podcasts involve interviews with authors and are usually linked to new releases – it’s good PR because I’ve ended up checking out books based on the interviews.
Python411 – http://www.awaretek.com/python/
I’m not sure how to categorise Python411, produced by Ron Stephens. Since the dormancy of This Week in Django, it’s the only active Python-related podcast, albeit an irregular one. It has a pleasantly cosy and informal feel to it, and there are some useful gems of information to be found in the episodes.
Sod This – http://www.sodthis.com/podcast/
Billed as “Brain Burps for the Tech Savvy”, Gary and Oliver’s podcast is another one that’s difficult to categorise as it’s relatively new and quite an eclectic experience. I can’t write much more than that, but it’s worth a listen for things such as an interview with IronPython guru Michael Foord or the Women In Technology episode.
The Software Freedom Law Show – http://www.softwarefreedom.org/podcast/
Bradley and Karen from the Software Freedom Law Center cover the legal side of the open source community. If you’re an open source software developer, or a developer interested in the issues surrounding patents, copyright and intellectual property, I can highly recommend this podcast. It’s very accessible so you don’t need to be a lawyer to understand it.
TuxRadar Linux Podcast – http://www.tuxradar.com/podcast
Not as good as Linux Outlaws or LUG Radio, sorry guys, but still a worthy addition to anyone’s Linux podcast collection. News and chat on Linux and open source software from the Linux Format crew.
A VerySpatial Podcast – http://veryspatial.com/avsp/
I was umming and ahhing about adding this, but it is tech-related. A VerySpatial Podcast is a long-running weekly show covering geography and geospatial technology. Being quite new to GIS, and currently only dealing with very basic geocoding of data, I find parts of the show a little indecipherable to my newbie mind. It’s well presented and covers a good range of topics – I might not understand everything, but it’s helping me to learn fast. If you have any interest in geospatial technology, professional or amateur, check it out.
August 29, 2009
When I taught myself to code, testing wasn’t mentioned. Programs were so simple, with such a limited set of inputs, that it was usually obvious whether they gave the expected result or not. If they didn’t, out would come the print statements and debugging by trial and error.
During formal lessons on programming, testing became part of the process, and rightly so. As I wrote more and more complex code, it became obvious that manual testing could rapidly become impossible if you wanted to cover the range of possible inputs and outputs. Formal proofs were offered to ease the situation, but to paraphrase Donald Knuth: the code has not been tested; it’s only been proven correct. We’d manually perform tests to ensure the code worked. It sucked. Automating the process was possible, but it was time-consuming to write our own test harnesses. And we wrote the tests after the code – that sounded right, right?
When I started using XP-style agile development in 2000, I was introduced to the concept of Test-Driven Development or Test-Driven Design (depending on who you speak to). TDD puts writing the tests first. It sounds weird: how do you test code that hasn’t been written yet? Worse, you write tests that will fail first and then write (or fix) the code to make the tests pass. It took a while to get used to the idea, but now it’s a comfortable part of my software development toolkit.
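The cycle is easier to see than describe. Here’s a minimal sketch using Python’s standard unittest module – the function and test names are invented for illustration; in true TDD the tests below would be written (and watched fail) before slugify() existed:

```python
import unittest

# The code under test. In the TDD cycle this is written *after* the
# failing tests below, with just enough logic to make them pass.
def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Test Driven Development"),
                         "test-driven-development")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("Hello   World"), "hello-world")
```

Running the file with `python -m unittest` executes the suite; the point is the order of events, not the (deliberately trivial) code.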
I should make it clear that I’m not one of those people who insist all code should be written test-first. If you’re writing throw-away code, for example, you shouldn’t feel bad if you don’t write tests first. However, for production code it makes a lot of sense. The initial time taken to write those tests pays off later when you need to refactor code or perform maintenance days, weeks, even years later. If you’re suddenly thrust into a legacy project cold, you’ll appreciate the fact your predecessor supplied a comprehensive test suite so you can make changes with confidence. Think of test suites as part specification, part bug tracker and part developer documentation, as well as an equal member of the code base.
Writing tests first also helps with design and implementation. We need to think more carefully about inputs and outputs – not just what is expected, but also the unexpected (what can go wrong). We have to write code that makes testing easy, so we are encouraged and rewarded for writing code that is simpler and more loosely coupled. It also keeps you focused: you write the minimum code that makes the tests pass. If you want to add more functionality, you write more tests to express that functionality. Far too often we as programmers get side-tracked, adding features we might need – all that extra code might not end up being used, but every extra line is a potential hiding place for a bug or a future maintenance problem.
One thing that doesn’t get mentioned often is that you can only have confidence in code covered by the tests. I know some who have supreme confidence in their tests, believing that everything is covered fully. It isn’t. The old mantra of “there’s always one more bug” can be re-applied for the test-driven generation: “there’s always one more test”. Keep adding more tests to cover existing code. If you’re working on something and notice a potential problem: write a test. If something might be a source of trouble, write a test to check if your hypothesis is correct. If you get bug reports, write tests that reproduce the bug before you start coding a fix. Take that test and see if there are related cases that might trigger another bug, and write tests for those too. Use code coverage tools to verify how well you’re doing with test writing: programmers are humans, and humans make mistakes – but we, hopefully, learn from mistakes.
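To make the bug-report advice concrete, here’s a hedged Python sketch. The word_wrap() function and its bug are entirely hypothetical, but the pattern is the real one: turn the report into a failing test first, then fix the code until it passes, and keep the test as a permanent regression guard.

```python
# Hypothetical bug report: "word_wrap() drops the final line of text".
# The assertion at the bottom was written first, reproducing the bug;
# the line marked "the fix" is what made it pass.
def word_wrap(text, width):
    """Wrap text into lines no longer than width characters."""
    lines, current = [], ""
    for word in text.split():
        if current and len(current) + 1 + len(word) > width:
            lines.append(current)
            current = word
        else:
            current = current + " " + word if current else word
    if current:                      # the fix: don't forget the last line
        lines.append(current)
    return lines

# Regression test written from the bug report, before touching the code:
assert word_wrap("one two three", 9) == ["one two", "three"]
```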
Test suites are living and breathing parts of the code base. They need refactoring, they need reorganising, they need to be added to and maintained in order to stay fresh, useful and provide a source of confidence in the code they support. Like stale documentation or comments, stale test suites can be just as much a source of code smells as the code they claim to test.
As with other techniques for improving code craftsmanship, TDD isn’t a cure-all on its own. What it does provide is a powerful addition to your programming toolbox.
August 28, 2009
If you’ve not seen a Wiki, where have you been??? Thanks to projects like Wikipedia, the wiki has entered mainstream culture and deservedly so. They’re an amazingly simple, but incredibly powerful, collaborative tool and my thanks go to Ward Cunningham, creator of WikiWikiWeb in the mid-90s.
My first experience of a wiki was in something like 2000. I’d like to say it had an impact on me as deep as my first use of the World Wide Web in 1994, but it didn’t. It was more subtle, more natural than that – it felt like the next logical step from web pages, rather than some huge evolution.
I have a confession to make: I actually like writing documentation. I don’t always get the time to write as much documentation as I would like, but I don’t find it the chore some developers do. The main problem is that documentation gets stale very quickly, and when I write I often want to branch off at a tangent or start in the middle. A bit like the Web, really.
Wikis reduce the barriers to collaboration, which is why they appeal to me, and make it easier to keep content fresh. Rather than those horrible documentation systems that lock files and force you to deal with file format issues like whether you’re using a particular version of Word or OpenOffice, wikis are open through the “standard” interface of a web browser. Using a simple and usually intuitive markup language, you can begin adding and updating content quickly. Look how successful Wikipedia has become, because people can contribute as much or as little as they want in order to collaboratively build and refine the pool of knowledge.
I love the fact our wiki at work has similarly built up over time into a repository of all kinds of useful information. It’s so easy for people to contribute tiny changes and fixes, reorganise information and create cross-references. Best of all, a lot of the information is overkill for a formal document and would otherwise stay in people’s heads or even be lost. I fear the day when someone who doesn’t understand the power of wikis forces us to change to the proprietary, locked-down and inflexible world of something like SharePoint.
MoinMoin is my preferred wiki software at the moment. It’s open source, written in Python and provides more than enough features to cover my needs at home and at work. Our office wiki is powered by MoinMoin as well, and I’ve installed personal copies on my home machines to act like digital whiteboards. Using a wiki, I can keep project notes, build up documentation and organise my To Do lists easily.
So it may come as a surprise to find that I haven’t used wikis for one thing yet: testing! FitNesse uses a wiki front-end to build Fit-style acceptance tests. It sounds like it could be ideal for me because I want to start researching better ways to undertake acceptance testing. It’s on my (wiki-based) To Do list.
Perhaps a subject for a future post…
August 27, 2009
One of the many important skills I learnt in The Real World is to keep things simple. I don’t always achieve that goal, but I try to keep it in my mind as much as possible, especially when the pressure is on and people are making hasty demands. It’s in that situation that standards slip and short cuts are taken.
Keeping code simple is not easy. It’s far too easy to add a little extra here, a little extra there, and maybe throw in something you saw from the Gang of Four patterns book for good measure. I used to try to be clever a lot more than was wise for me, leading to over-engineered and overly complex code. I still have my odd moment, but at least I feel guilty and make a note to go back and refactor… time permitting.
Laziness is a virtue. Do the minimum amount of work required to achieve the goal at hand. Build incrementally, make small changes, test each change.
Simple code is:
- easier to write
- easier to read
- easier to understand
- easier to test
- easier to debug
- easier to change
It’s also more likely to work first time.
For most production code, we spend more time maintaining it than we do writing it. We fix the odd bug, add new features, modify existing features, refactor the code, come back to it days, months, years later and try to figure out what it does. That’s when you appreciate simplicity.
August 26, 2009
If there’s one language right now that I really want to master, it’s Erlang.
A couple of years ago I was, like many, starting to realise the world is multicore. Concurrency is the big area any serious programmer will have to start dealing with sooner or later, if they haven’t already. Now, the problem for me is that multicore is a hardware / operating system thing. As a humble application coder, I shouldn’t need to concern myself with the underlying hardware and whether I’m running on a single processor or a dual quad-core something-or-other. I decided after leaving academia that hardware had pretty much become a commodity and I should devote my time to the latest in software techniques.
Where am I going with this? Well, I still believe that for many situations, the developer shouldn’t need to know or care what their code is running on: a single core or many, something x86-like, something PowerPC- or ARM-like. The hardware should be abstracted away, handled by the OS and the implementation language / compiler. Of course, that presents a problem, because that kind of abstraction isn’t always easy to pull off behind the scenes.
Even though the concurrency debate has been raging since the 60s (maybe earlier), only now are many developers becoming exposed to the situation. Let’s face it, we’re not all well-prepared, and we’re often reinventing the wheel or following dodgy avenues like threads, which some consider the 21st-century equivalent of goto.
Meanwhile, the functional programming guys and gals have been sitting on the sidelines looking a bit smug. Side-effect-free programming tackles some of the nasty issues us imperative coders have to deal with, and makes concurrency a much more pleasant prospect. Functional programming is getting a long overdue resurgence in interest. Languages like Erlang, Haskell, F#, R or Scheme seem to be cropping up more and more, which can only be a good thing. I realised it was about time I dusted off my academic knowledge and made a serious effort to understand functional programming and see how I could apply the knowledge to concurrency.
My interest in Erlang came, funnily enough, from the Python community. I heard it mentioned many times in blogs, mailing lists and even in source code, but the big decider was at PyConUK last year. Erlang came up so much in conversation with other delegates and during Mark Shuttleworth’s keynote speech that I decided I really needed to take a look. I bought Joe Armstrong’s excellent “Programming Erlang” book, installed Erlang from MacPorts, and settled down for some quality programming over the Christmas holidays. It wasn’t quite the uninterrupted time I wanted, but it was enough to play around with the language and get a feel for things.
For me, Erlang’s approach to concurrency feels quite natural. I can express concurrency in a way that doesn’t bog me down with the low-level details like creating threads / processes or handling data integrity. The “Actor” model approach fits nicely in my mind and is conceptually analogous to object-oriented programming. Credit to the Wikipedia article on the Actor Model for helping me express that analogy!
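To give a flavour of the idea without Erlang itself, here’s a rough Python sketch that mimics the actor model using a thread and message queues. Erlang’s lightweight processes and mailboxes are far cheaper and cleaner than this, and every name below is my own invention – it’s the shape of the thing, not a faithful translation.

```python
import threading
import queue

def counter(mailbox, replies):
    """An 'actor': owns its state and reacts only to messages."""
    count = 0
    while True:
        msg = mailbox.get()        # block until a message arrives
        if msg == "increment":
            count += 1
        elif msg == "get":
            replies.put(count)     # reply by sending a message back
        elif msg == "stop":
            break

mailbox, replies = queue.Queue(), queue.Queue()
actor = threading.Thread(target=counter, args=(mailbox, replies))
actor.start()

for _ in range(3):
    mailbox.put("increment")       # fire-and-forget message sends
mailbox.put("get")
result = replies.get()             # no shared state was ever touched
mailbox.put("stop")
actor.join()
print(result)
```

The state (count) lives inside one actor, and the only way to read or change it is by sending a message – which is what makes the model feel so close to well-encapsulated objects.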
The big surprises, however, were in areas outside concurrent programming. Given Erlang’s background in high-reliability systems, it’s no surprise the language has excellent support for writing fault-tolerant code and for inserting new code into a running system. It’s quite elegantly done and a real joy – I’d love to use this capability for projects in my daytime job, but it’s not going to be possible to push Erlang into the company for the foreseeable future.
While not quite CPAN or the Python standard library, Erlang comes (as part of the OTP) with a decent enough standard library, and sites like CEAN fill in a few gaps. Someone has even written a library to let Python and Erlang talk to each other! I wasn’t expecting as much library support out of the box, for some reason, so it was an unexpected bonus. I haven’t really explored these much, so I might end up cursing some deficiencies in future!
So anyway, writing this has spurred me into trying to find the spare time to write more Erlang code – the only way I’m going to improve my knowledge and one day master the language is practice, practice, practice!
August 25, 2009
Over the years my thoughts on commenting code have varied quite dramatically. On the one hand, comments can lend clarity to code, provide valuable documentation and even help to break up the structure of code into manageable chunks. On the other hand, comments can be deodorant for smelly code, monstrous blobs of text no one can be bothered to read, or just plain wrong. In an ideal world, your code will be gorgeously self-documenting and as readable as a good book.
Since when have programmers worked in an ideal world?
At the moment, my code comments have reached a stage that is, or aims to be, simple and concise. I’m going to outline the concepts based on commenting of functions or methods. Comments can be one of the religious wars of computing, so I’m not expecting everyone to agree with me. It works for me, it may or may not work for you. To some, it might also be blatantly obvious.
The starting point is not the comments themselves, but the method / function declaration. Is the method name self-revealing? Do the arguments make sense? Calling something x() or foo() or doStuff() could mean anything, while is_numeric() or calculateSumOf() expresses more about what the function does. Of course, if is_numeric() actually returns climate data in XML, then you have a serious naming problem. Likewise, an argument name of x or y probably doesn’t reveal much compared to something like username or payment_type. Before you get as far as writing any comments, think carefully about the method name and arguments – and make sure they’re readable out loud!
Now write a brief comment describing the method. We’re not looking for an essay because no one seriously reads a comment of more than a few lines. In one or two lines, what does it do? What do the arguments mean? If you’re using a dynamically-typed language, what are the arguments expected to be? What is returned? Can the method throw any exceptions? Keep it clear and simple because people want to be able to see at a glance the information they need. If your implementation language supports some kind of documentation comment then use it so IDEs, documentation systems and other tools can parse the text.
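As a sketch of what I mean, here’s a hypothetical Python method with the kind of brief documentation comment I’m aiming for – a few lines covering purpose, arguments, return value and exceptions, and nothing about the implementation:

```python
def monthly_payment(principal, annual_rate, months):
    """Return the fixed monthly repayment for a loan.

    principal   -- amount borrowed (float)
    annual_rate -- yearly interest rate, e.g. 0.05 for 5% (float)
    months      -- term of the loan in months (positive int)
    Raises ValueError if months is not positive.
    """
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)
```

A reader (or an IDE tooltip) gets everything they need at a glance; the amortisation formula itself stays in the code where it belongs.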
If you can’t write the description in one or two lines, that could indicate a problem: your method might be doing too much or could be quite complex. I’ve found the hard way that if I’m struggling to describe what a method does, I haven’t thought the problem through sufficiently. Having to write out the details of lots of arguments is another warning sign of smelly code. You might have too many arguments, you might even lack a clear idea of what each argument does.
Keep comments general so people get an overview of what the method does. The code should indicate the implementation details, not the comments. Generalised comments also help if you refactor the function: changing the internals shouldn’t mean an overhaul of the comments just because you went from a linked list to a binary tree. Keeping it general also helps keep things in people’s minds. Much as we like to boast of our mental prowess to our peers, the simple fact is that the human brain can’t hold much in the front of the mind. Be gentle to the reader.
Another great advantage of short, general comments is that it encourages keeping comments fresh. Updating a one-liner to reflect some refactoring you’ve made is easy, adjusting a 30 line essay is not. Stale or contradictory comments rapidly become serious problems.
Comments should be part of your coding toolbox. As well as the obvious documentation aspect, they’re a great way to figure out how understandable your code is. When you write a comment, take a good look at the code, because too many comments are a sign of something wrong. Comments support the code, providing clarity and insight; they should never contradict it or state the obvious. Finding the right level of comments in your code is something that comes with practice.
August 24, 2009
One problem faced by my team is that we’re not really a team, more a group of teams and individuals lumped under the generic title of data management. We’re Python and .NET programmers, SQL developers, DBAs, reporting and data warehousing specialists, data integration engineers, Salesforce developers, intranet tool builders… and we do more than our job roles. We’re all working on different things, but we’ve come to appreciate that while our tasks and projects might not be the same, the issues we face are similar. Our customers, mostly internal, are similar – we talk to the same people and use a common business language / jargon. We face typical problems: inter-departmental communication, scheduling of work, dealing with defects, refactoring code, juggling priorities, coping with the office environment. In hindsight, it sounds obvious: it’s the same set of problems that face anyone in IT, or most lines of work for that matter.
I decided that we needed to pool knowledge, get a bit of team bonding going and try to figure out how we can collectively tackle the irritations we face. We also needed to share and celebrate our successes.
A few months ago I picked up a copy of “Agile Retrospectives” written by Esther Derby and Diana Larsen. Even if you don’t practise agile development, retrospectives can be a very useful way to close projects or provide regular team reviews. In my case, it was a great way to get the team talking to each other and sharing their problems and successes. We’ve opted for a 2-hour monthly retrospective, with a general reflection on the time between the current and previous retrospective. One thing I really need to do more is set some proper tasks and goals from the meeting as we tend to be a little vague at times. We also find ourselves focusing on problems we can’t easily fix because they lie outside our immediate control. We need to work on the things we can do ourselves first.
A typical retrospective, as defined in the book, is broken up into five sections:
- Set the Stage
- Gather Data
- Generate Insights
- Decide What to Do
- Close the Retrospective
Set the Stage is a simple welcome and introduction. This is a chance to get everyone to say a couple of words so they feel comfortable talking and contributing. So far, I’ve stuck with asking the question “in one or two words, what’s on your mind?”. It can be uncomfortable for some, but it can be quite revealing too, and it certainly takes the edge off things – once you’ve said one or two words, more will often follow.
For the Gather Data part of the retrospective, the aim is to grab as much raw information as possible about the project / iteration / month covered by the retrospective. Different people experience different things – in our case, the fact we’re all doing different things means we cover quite a lot of events. Some of these overlap (server problems, office temperature, distractions) while others can be specific to one or two people. I use the “Mad, Sad, Glad” game, accompanied by suitably coloured sticky notes, to get the team to write out anything and everything that comes to mind. We’re often talking quite a lot, and as people stick up notes on a wall it prompts other ideas and memories. When done, we collectively cluster the notes and then choose labels for each cluster – it’s perfectly fine if we discover the need to recluster as we label, because we’ve spotted something interesting or have found a better way to cluster.
Generate Insights is about analysing the data gathered: looking for patterns and discussing our stories. For our retrospectives, quite by accident, this ended up becoming part of the clustering process mentioned above. It felt a natural transition, so we’ve kept it that way for now. Things will obviously change when I decide to switch the game to something different.
Next up is Decide What To Do. Using sticky dots, we then vote on what cluster is most important for us to take a look at. Obviously, things that are affecting us negatively will be most likely to want attention, but it’s nice to see the good things voted for as well. We then brainstorm ways we can tackle the cluster before voting on what suggestion we want to do. I’ve found making a list of things we need to Start Doing, Stop Doing and Keep Doing to be simple and interesting for the group. The only downside, which is my fault, is pinning the team down on setting actions that are achievable by us. Too often we hope for other people to change, when we could (and should) more easily work on change amongst ourselves.
Finally, Close the Retrospective. Wrap things up and let the retrospective coach know how things went. I like the Helped, Hindered, Hypothesis game, where everyone writes a note to me to tell me what helped them during the retrospective, what didn’t help, and what things I can do next time (the “Hypothesis” part of the game – struggling for an H?).
It’s early days for our retrospectives, but there’s plenty of scope for improvement and mixing things up to keep it relevant and, above all, fun. It’s proved a great way to vent some steam, think about the way we do things and give ourselves a pat on the back for the good things we do. We don’t celebrate success as much as we should, which is a shame because we all want to do the best we can and get a little recognition when we achieve that. I’m enjoying hosting them and it’s a great help for me as it provides the opportunity to work on things I always felt I couldn’t do well: like leading a group, facilitating team discussion and planning something IT-related that isn’t a software project.
Give them a go sometime!
For further reading….
August 23, 2009
I’ve been fortunate enough to have used a variety of different programming languages over the years. Some for home projects, some for use at work, some just to tinker with to see what the fuss is about. Five languages in particular have had a big effect on me:
Pretty much everyone who cut their programming teeth on 8-bit micros started off on BASIC. You switched the computer on and there you were: an inviting command prompt waiting for you to type in some BASIC code. No wait, no fuss. We’d laboriously type out listings from computer mags, manuals supplied with the computer or the old Usborne books. A great experience for learning about computers and programming which I worry is lacking these days. It encouraged you to play around and experiment, and the environment BASIC was running on was fairly limited so it was easy to understand the system as a whole. Maybe I’m intertwining BASIC and 8-bit computers?
BASIC was my introduction to programming. I’d sit there hunched over my CBM Plus/4 hooked up to a blurry TV and bash out programs that were crude and contrived, but were fun to write and gave a real sense of achievement. Even when I moved to an Amiga A500, I continued to use BASIC (in the form of the much maligned AmigaBASIC) for a long time because it was quick and easy to write software that met my needs. I remember writing more serious tools on the Amiga, including code for things like GCSE maths coursework.
BASIC might be frowned upon by many these days, but it remains a decent language for introducing people to the general idea of programming – which was its original purpose. The chapter on BASIC in the excellent O’Reilly “Masterminds of Programming” book is a terrific read, if you like that sort of thing.
From BASIC, I went towards assembly language (6502 and then 68000) and C, but was never particularly serious about either at the time. When I started university, the course taught Modula-2 as the language used to express the concepts of programming properly, and the serious art of computer science (sic). C might’ve been more powerful, lower-level and more widely supported, but it had sharp edges that could catch the unwary, including me.
Modula-2 was a revelation and it helped move me completely away from BASIC. It retained a clear, readable syntax and I found it great for writing structured, robust code thanks to features such as its use of modules for encapsulating data and procedures. There was even support for coroutines, something that at the time was a relatively specialised feature for general microcomputer programming.
I’d heard of Prolog back in the 80s and knew that it stood for Programming in Logic (“programmation en logique” if you want to be correct), but that was the extent of my knowledge. I didn’t encounter it properly until late in my university course when I began to study logic programming and artificial intelligence. It was the first time I’d encountered a language that was very different from the imperative programming I had undertaken before.
Prolog is a declarative language. Instead of writing code as a sequence of steps and flow control statements, i.e the “how” of solving a problem, you declare the problem space itself and the rules defining that space. It’s a very different approach to tackling the writing of code in languages like C and was an excellent mental exercise for myself. It had a big effect on the way I looked at programming and its influence can be seen in my fascination with things like domain specific languages or my recent look at Erlang.
Perl was in some ways a return to my roots and in others a radical departure. I learnt Perl in a weekend when I started at my first job in 1997. There were some CGI scripts that needed maintaining and they were written in Perl, a language no one at the company had any in-depth knowledge of.
For all the jokes about Perl being a write-only language, which admittedly can be very true, and the Swiss army chainsaw of text utilities, it’s a pretty powerful language. Perl, to me, was always about getting the job done. The lack of a compilation step made it easy to make quick changes and experiments, just like BASIC. It excelled at text manipulation, and the support for regular expressions turned me into a serious regex fanatic. I find regular expressions incredibly useful and have felt comfortable writing and debugging some insanely complex pattern matching in other languages as a result.
Perl gave me a great stepping stone into developing dynamic web sites (as opposed to static HTML) as well as dealing with the processing of large amounts of text. It didn’t occur to me at the time, but it was also the origins of my love for dynamic programming languages. I might not have an interest in working in Perl again, but I’m glad I spent the first few years of my professional programming career using it.
Ah, Python. Truly the best all-round language I have used to date – dynamically typed, supports quick experiments as well as large-scale software development, clean syntax, simple to learn, object-oriented, with a few nods to functional programming in places. It also taught me that life without braces or begin/end statements is not only possible, but actually quite liberating. Python is fun.
Once I’d got past the whole indentation weirdness, Python proved to be an excellent replacement for Perl. I could do everything I did in Perl easily, and more. It also proved to be the first object-oriented language that made me appreciate objects. I never really saw the point of object-oriented code until I started working with Python and Zope. Suddenly it all made sense.
The great thing about Python was that the indentation gave rise to a rather interesting side-effect: readable code. No more wars over bracing style, no more complaints about differing layouts of code amongst team members. You could step into someone else’s code and figure out what it did far more easily than in other languages. It’s still possible to write obfuscated Python code, but you have to make a special effort.
Python’s standard library also covers quite a wide range of functionality, which has been a great help when writing code – when people say that Python has batteries included, they aren’t kidding! Perl might have CPAN, which is very large, but I find myself having to hunt out third party libraries less often than I did with Perl, and even less than I did with languages like Modula-2 or C/C++.
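As a small, contrived taste of “batteries included” – the data here is made up, but every module used comes with a stock Python install:

```python
import csv
import io
import json
from collections import Counter

# Parse some CSV text, tally a column, and emit the result as JSON,
# all without reaching for a single third-party library.
raw = "lang,year\nPython,1991\nPerl,1987\nErlang,1986\nPython,2000\n"
rows = list(csv.DictReader(io.StringIO(raw)))
counts = Counter(row["lang"] for row in rows)
print(json.dumps(counts, sort_keys=True))
# prints {"Erlang": 1, "Perl": 1, "Python": 2}
```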
I don’t know if or when something will take over from Python, but for now it’s my main choice of language.