Friday, August 24, 2012

PyCon AU write-up

Well, PyCon AU 2012 was definitely the best Olympics ever. The quality of the talks was outstanding, and the event organisation went flawlessly. Kudos all around.

I presented twice, videos available on YouTube:
  "Visualising Architecture" : http://www.youtube.com/watch?v=BGOtqXA_y1E
  "Virtual Robotic Car Racing with Python and TORCS" : http://www.youtube.com/watch?v=BGOtqXA_y1E

"Visualising Architecture" presents some tools you can easily use to examine codebases and running systems and reveal their internal structures, along with some discussion of good design in context. Robotic cars is pretty much as the title indicates. Thank you to everyone who showed up, sat through them, and special thanks to those who asked some great questions at the end. Speaking is a real pleasure when the audience is happy to talk afterwards.

Other talks I attended (and would recommend) were:

"What to build. How to build it. Python can help!" by Mark Ramm. A great piece on, essentially, good management practices that use evidence-gathering to make decisions. Presented examples based on product management at SourceForge. People should do this more.

"The Lazy Web Dev's Guide to Testing Your Web API" by Ryan Kelly. Ryan is a great speaker, and he showed some good techniques for reducing the amount of effort in testing web APIs.

"Python Dark Corners Revisited". Definitely worth a watch for anyone working with Python. A good explanation of Python's types and data structures, presented as a series of short, surprising, and challenging questions and explorations in Python.

"Funcargs and other fun with pytest" by Brianna Laugher. Everyone should know more about testing, py.test is a great tool, and the presentation was very practical, with applied real-world examples and problem-solving.

"Python Powered Computational Geometry" by Andrew Walker. A great exploration of tools and techniques for representing and visualising 3D objects. Cool!

There were a couple of other presentations I didn't see but plan to watch later. I'd also queue up "Think, Create, and Critique Design" by Andy Fitzsimon (video not available, but the slides he posted can be found via Google); and "An unexpected day" by Aaron Iles.

Monday, August 20, 2012

Committing to Git

I've been doing a bit more developing lately, and a bit less hanging around in meetings. As a result, it has become somewhat painfully obvious that I don't quite know all the right things any more. One of the moves I have made is to use git-svn to interact with our SVN repo. This basically gives me all the relevant advantages of git, without having to go and have a fight (er, well-reasoned discussion) with the manager of said repo to bring about the change.

There are some things I've learned painfully, and some positive workflow changes I have come across. These are my notes on the topic so far.

Basic workflow:
  1.) git svn clone the repo.
  2.) ALWAYS WORK ON A BRANCH. You do *not*, I repeat do *not*, want to do a git svn fetch and discover you have introduced merge conflicts into your master branch. Masters of git kung fu can probably get themselves out of this hole, but it's fairly painful.
  3.) ALWAYS WORK ON A BRANCH.
  4.) Okay, so you're working on a branch. Switching branches is pretty easy. However, you do need to notice that creating a new branch is not the same thing as working on it. Either use git checkout -b, or check out the branch after creating it. You'll get used to this.
  5.) Your local file changes follow you around as you switch branches. I have no idea what would happen if two branches had, say, wildly different directory structures. Probably you would have to rely on git being awesome.
  6.) Sometimes, I have had problems with the local git log not matching the remote SVN log after doing a git svn dcommit. I once followed some script from the internet to "fix" it. It worked the first time. The second time, it corrupted my entire git repository just before a deadline, and basically everything was terrible. Luckily I had an SVN repo as well, so I copied my changes there manually before blowing away my whole git repository. The moral of the story is that while learning, you should maintain a parallel repo using plain SVN, so that you have a separate, clean system to switch to if the going gets scary.

Covering Your Ass

I would recommend this. I don't know about you, but my fellow devs tend to get a bit twitchy if I commit too much lint or break tests -- I can't think why. I have a tendency to get frustrated about 80% of the way through complex work, and it's really useful to have automation in place to protect against stupid mistakes.

Fortunately, git comes with an inbuilt ass-covering system, called hooks. It lives inside the git repository, which makes me feel faintly suspicious about it, but it both exists and works. Inside .git/hooks you will find a collection of *.sample files. If you make copies of these without the .sample extension, you can add automated checks and processes to the system. As follows.

pre-commit:
    This runs, as the name says, before anything gets *really* committed. This is a good spot to add a pylint check, a pep8 check, and maybe a short run of unit tests. Before you even get the chance to type your commit message, there is a safety net. You can skip it with git commit --no-verify.

prepare-commit-msg:
   This allows you to template your commit messages. For example, if your system requires a bug ID to go along with each commit message, you can insert template text into the message here.
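As a sketch of the idea, here is a hypothetical prepare-commit-msg hook in Python. Git invokes this hook with the path to the commit message file as its first argument; the "BUG-0000: " placeholder and template format below are my own assumptions, so adapt them to whatever your bug tracker expects.

```python
#!/usr/bin/python
# Hypothetical prepare-commit-msg hook: prepends a bug-ID template
# line to the commit message so the committer just fills in the ID.
# The "BUG-0000: " placeholder is an assumption -- use your tracker's
# real format.
import sys

TEMPLATE = "BUG-0000: "

def prepend_template(message_path, template=TEMPLATE):
    with open(message_path) as f:
        original = f.read()
    # Don't double up the template if it is somehow already there
    if not original.startswith(template):
        with open(message_path, "w") as f:
            f.write(template + original)

if __name__ == "__main__" and len(sys.argv) > 1:
    # git passes the commit message file as the first argument
    prepend_template(sys.argv[1])
```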

These are just shell scripts. Since I basically hate everything that is not Python, my first step is to start the files with
   #!/usr/bin/python
which will cause them to be interpreted by Python rather than bash. FTW!
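Putting the two ideas together, a minimal pre-commit sketch in Python might look like this. The specific pep8/pylint command lines and the "mypackage" name are assumptions, not part of any real setup, so substitute your own checks; a non-zero exit status is what actually aborts the commit.

```python
#!/usr/bin/python
# Hypothetical pre-commit hook: run some fast checks and abort the
# commit (exit non-zero) if any of them fail.
import subprocess

def run_check(command):
    """Run a check command; return its exit status (0 means pass)."""
    try:
        return subprocess.call(command)
    except OSError:
        # The tool isn't installed; skip rather than block the commit
        print("check not installed, skipping: %s" % command[0])
        return 0

def main():
    # Assumed commands -- swap in your own linters and test runner
    checks = [
        ["pep8", "."],
        ["pylint", "--errors-only", "mypackage"],
    ]
    failures = [cmd for cmd in checks if run_check(cmd) != 0]
    if failures:
        print("pre-commit checks failed, aborting commit")
        return 1
    return 0

# In the real hook file, finish with:
#     import sys
#     sys.exit(main())
```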

Initiate debugger using signals

So, I was watching a lightning talk at PyCon AU yesterday, and I think the speaker's first name was Matt. Apologies for not giving a better reference. It all went by pretty quickly, but I think I heard something like "Why not use a signal to start the debugger in Python?"

And so I did. I can't work out if this is too trivial to upload to the cheese shop, but if you create a file called "signal_handler.py" and import it from anywhere in your code, you will magically link up SIGINT (the signal sent by Ctrl-C) to the Python debugger. For extra win, it will try to use ipdb if you have it. I haven't actually tested it with plain pdb, but it's hard to see how it could fail to work. Besides, you should be using ipdb (pip install ipdb).

import signal

try:
    # Prefer ipdb if it is installed
    import ipdb as pdb
except ImportError:
    import pdb

def handle_signal(signal_number, frame_stack):
    # Drop into the debugger wherever execution happens to be
    pdb.set_trace()

signal.signal(signal.SIGINT, handle_signal)

Voila! Now, next time you are watching your app run and you need to start the debugger when you least expect it, you can just do it!

A small note of caution -- it's probably wrong to override the expected SIGINT behaviour in this way. You could also wire it up to, say, SIGUSR1, but then you would have to explicitly send the signal with "kill -10". That would work perfectly fine, but it's a bit less convenient than just slamming Ctrl-C wherever processing happens to be. I'm not sure what else might want or need to send the occasional SIGINT and rely on the normal behaviour, so use this at your own risk!
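The SIGUSR1 variant is only a one-line change to the registration. Here is a runnable sketch of the wiring (Unix only); a plain flag-setting handler stands in for pdb.set_trace() so the example can run non-interactively, and the process signals itself with os.kill where you would normally run kill -USR1 from another terminal.

```python
# Sketch: wire the debugger hook to SIGUSR1 instead of SIGINT, so
# Ctrl-C keeps its normal behaviour. Unix only.
import os
import signal

def handle_usr1(signal_number, frame_stack):
    # In real use this body would be pdb.set_trace(); a flag stands
    # in here so the sketch can run without an interactive debugger.
    handle_usr1.fired = True

handle_usr1.fired = False
signal.signal(signal.SIGUSR1, handle_usr1)

# Simulate "kill -USR1 <pid>" by signalling ourselves
os.kill(os.getpid(), signal.SIGUSR1)
```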

When you press Ctrl-D to exit the debugger, the program will exit too.