Thursday, August 7, 2014

PyCon AU 2014 Writeup

I recently attended PyCon AU in Brisbane, Australia. It was an amazing conference, and I wanted to record my thoughts. I will organise this post in time order.

Videos are coming out, and will all eventually be published online.

Friday Miniconfs

The first day consisted of "miniconfs", which are independently organised streams focused on specific topics. I attended the "Science and Data" miniconf. It is clear that this is a huge and growing component of the Python community. However, science and data still suffer from a lack of integration with the general Python community. The tools being put in place do appear to be having a transformative effect on the scientists who adopt them (notable technologies include the IPython Notebook, scipy, numpy, and efforts such as Software Carpentry). However, general best practises around software design, team workflow, testing, version control and code review have not been so enthusiastically adopted. Going the other way, data-oriented techniques and self-measurement have not been widely adopted within open source.

One of the major "new" tools is "pandas", which provides extremely strong data management for row/column-based data. The tool is a few years old, but is really coming into its own. It supports very strong indexing and data-relation methods, some basic statistical techniques for handling missing data, and basic plots. More advanced techniques and plots can be achieved with existing Python libraries by accessing the pandas data structures as numpy arrays.
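For readers who haven't used it, a minimal sketch of that workflow -- label-based indexing, simple missing-data handling, and the hand-off to numpy -- might look like this (the column names and values are invented for illustration):

```python
import pandas as pd

# A tiny frame with a missing value, indexed by date.
frame = pd.DataFrame(
    {"temp": [21.0, None, 23.5], "rain": [0.0, 1.2, 0.4]},
    index=pd.date_range("2014-08-01", periods=3),
)

frame = frame.fillna(frame["temp"].mean())  # basic missing-data handling
recent = frame.loc["2014-08-02":]           # label-based (date) slicing
arr = recent.values                         # hand off to numpy for other libraries
```

From here, `arr` is an ordinary numpy array that any scientific library can consume.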

Saturday: Main Conference Day One

The main conference was opened by a keynote from Dr James Curran, who gave an inspiring presentation discussing the new Australian national curriculum. This will include coding from the early years through to year ten as a standard part of the education given to all Australians. This is an amazing development for software and computing, and it looks likely that Python may have a strong role to play in it.

I presented next on the topic of "Verification: Truth in Statistics". I can't give an unbiased review, but as a presenter, I felt comfortable with the quality of the presentation and I hope I gave the audience value.

I attended "Graphs, Networks and Python: The Power of Interconnection" by Lachlan Blackhall, which included an interesting presentation on applying the NetworkX library to a variety of network-based computing problems.
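As a small illustration of the kind of interconnection problem NetworkX handles (the node names and weights here are invented, not taken from the talk):

```python
import networkx as nx

# A toy weighted network: which route from gateway to server is cheapest?
g = nx.Graph()
g.add_weighted_edges_from([
    ("gateway", "switch-a", 1.0),
    ("gateway", "switch-b", 2.5),
    ("switch-a", "server", 2.0),
    ("switch-b", "server", 1.0),
])

# Dijkstra under the hood; total cost 3.0 via switch-a beats 3.5 via switch-b.
path = nx.shortest_path(g, "gateway", "server", weight="weight")
```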

For those looking for a relevant introduction, "IPython parallel for distributed computing" by Nathan Faggian was a good overview.

"Record linkage: Join for real life" by Rhydwyn Mcguire gave an interesting discussion of techniques for identity matching, in this case for the purpose of cross-matching partially identified patients in the medical system to reduce errors and improve medical histories.

"The Quest for the Pocket-Sized Python" by Christopher Neugebauer was an informative refresh of my understanding on Python for developing mobile applications. Short version: still use Kivy.

Sunday: Main Conference Day Two

The keynote on day two was given by Katie Cunningham on the topic of "Accessibility: Myths and Delusions". This was a fantastically practical, interesting and well-thought-out presentation and I highly recommend that everyone watch it. It left a strong impression on many members of the audience, as would become apparent later during the sprint sessions.

"Software Carpentry in Australia: current activity and future directions" by Damien Irving further addressed many of the issues hinted at during the data and science miniconf. It covered familiar ground for me, in that I am very much working at the intersection of software, systems and science anyway. One of the great tips for breaking down barriers when presenting software concepts to scientists was to work directly with existing work teams: scientists are more comfortable working together where they have a good understanding of their colleagues' work practises and levels of software experience. In a crowd of strangers, it can be much more confronting to talk about unfamiliar areas. It strikes me that the reverse is probably also true when talking about improving scientific and mathematical skills for developers.

"Patents and Copyright and Trademarks… Oh, why!?" by Andrea Casillas gave a very thorough and informative introductory talk on legal issues in open source IP management. She is involved with a group that does legal work to protect IP for open source projects.

"PyPy.js: What? How? Why?" by Ryan Kelly was a surprisingly practical-sounding affair once you get over the initial surprise of implementing a Python interpreter in JavaScript. One argument for doing this, rather than building a customised web browser, is uniformity of experience across browsers. If a reasonably effective Python implementation can be delivered via JavaScript, that could help pave the way for more efficient solutions later.

The final talk was one of the highlights of the conference: "Serialization formats aren't toys" by Tom Eastman. It highlighted the frankly wide-open security vulnerabilities of ingesting XML or JSON (and presumably a variety of other serialisation formats) without a high degree of awareness. Many parsers will interpret parts of a document as executable code, allowing anyone who can inject a document into your system to execute arbitrary commands against it. For example, if you allow the uploading of XML or JSON, then a naive implementation of reading that data can permit untrusted, arbitrary code execution. I think this left a big impression on a lot of people.

Monday and Tuesday: Developer Sprints

One of the other conference attendees (Nick Farrell) was aware of my experience in natural language generation, and suggested I help him to put together a system for providing automatic text descriptions of graphs. These text descriptions can be used by screen reader applications used by (among others) the visually impaired in order to access information not otherwise available to them.

Together with around eight other developers over the course of the next two days, I provided coordination and an initial design for a system which could do this. The approach taken combines standard NLG design patterns (data transformation --> feature identification --> language realisation) with a selection of appropriate modern Python tools. We utilised "Jinja2", a templating language usually used for rendering dynamic web page components, to provide the language realisation. This had the distinct advantage of being a familiar technology to the developers present at the sprint, and provided a ready-to-go system for text generation. I believe this approach has significant limitations around complexity which may become a problem later; however, it was an excellent choice for getting the initial prototype built quickly.
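A toy sketch of that three-stage pipeline (using the stdlib's string.Template in place of Jinja2, purely to keep the example self-contained; the function names are illustrative, not the actual wordgraph API):

```python
from string import Template

def transform(raw_points):
    """Data transformation: normalise raw (x, y) pairs."""
    return sorted(raw_points)

def identify_features(points):
    """Feature identification: pull out facts worth describing."""
    ys = [y for _, y in points]
    return {
        "n": len(points),
        "minimum": min(ys),
        "maximum": max(ys),
        "trend": "rising" if ys[-1] > ys[0] else "flat or falling",
    }

def realise(features):
    """Language realisation: render the features into English."""
    tmpl = Template(
        "The series has $n points, ranging from $minimum to $maximum, "
        "and is $trend overall."
    )
    return tmpl.substitute(features)

print(realise(identify_features(transform([(0, 1), (1, 3), (2, 5)]))))
```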

You can find the code and the documentation online. "Wordgraph" is the initial working name, chosen quickly during the sprints -- it may be that a more specific name should be chosen at some point. The documentation provides acknowledgments for all the developers who volunteered their time over this period.

It was very exciting working so fast with an amazing group of co-contributors. We were able to complete a functional proof-of-concept in just two days, capable of producing English-language, paragraph-length descriptions of data sets produced by "graphite", a standard systems-metrics web application which produces time-series data. The wordgraph design is easily extensible to other formats and other kinds of description. If the system proves to be of wider use, there is a lot of room to grow. However, there is also a long way to go before the system could be said to be truly generally useful.

Concluding Remarks

This was a fantastic achievement by the organising committee, and the strong set of presentations made it highly worthwhile and valuable for anyone who might be considering attending in future. It sparked a great deal of commentary among attendees; I have a lot of ideas for the future, and I am sure my work practises will also benefit.

The conference vibe was without doubt the friendliest I have ever experienced, improving even further on previous years' commitment to openness and welcoming new people to the community. This was no doubt partially a result of the Indigenous "Welcome to Country" which opened the first day, setting a tone of acceptance, welcoming and diversity for the remainder of the event. The dinners and hallway conversations were a true highlight.

I hope that anyone reading this may be encouraged to come and participate in future years. There are major parts of the conference that I haven't even mentioned yet, including the pre-conference workshops, the Django Girls event, organised icebreaker dinners and all the associated activities. It is suitable for everyone from those who have never programmed before through to experienced developers looking for highly technical content. It is a conference, as far as I am concerned, for anybody at all who is interested or even merely curious.

Finally, I would just like to extend my personal thank you to everyone that I met, talked to, ate with, drank with or coded with. I'd also like to thank those people I didn't encounter who were volunteering, presenting to others, or in any way making the event happen. PyCon AU is basically the highlight of my year from a personal and professional development perspective, and this year was no exception.

Thursday, March 6, 2014

Pushing state through the URL efficiently

I am building a web app. I want to be able to share URLs to particular things, and more specifically, to parameter-tuned views of particular things. The kind of tuning might be a database query, e.g. to restrict a date range. Or, it might be setting the X and Y axes to use for a chart.

Either way, I needed to gather the necessary state somehow. Doing it server-side or using session state was out of the question, since that would make it hard to email a URL to a friend.

One option would be to use something like a URL shortener to store the config on the server, and share the key to that set of configuration through the URL. That would work fine, but it has some downsides:
  (1) The state is not user-readable
  (2) You can't blow away the server data and start again without affecting URLs
  (3) Remember, cool URIs don't change

For those reasons, I thought something like json would be perfect. It's well-described, human-readable, and very standard. However, it makes your URLs look a bit ... meh. I wanted an alternative which to some degree hid what was going on, but was still reverse-engineerable.

So I hit upon encoding the data. Python supports, for example, string.encode('hex'). This meets some of the brief -- it happily turns a string into a hexadecimal representation which can be trivially converted back again. This can be used to encode config into a less visibly clumsy way of passing state. It just tends to be a bit long.
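For instance (translated to run on modern Pythons, where the 'hex' string codec is gone, via the stdlib binascii module; the sample string is invented):

```python
import binascii

state = "x=1"
# hexlify is the modern spelling of the old 'hex' codec round-trip.
hexed = binascii.hexlify(state.encode("utf-8")).decode("ascii")
assert binascii.unhexlify(hexed).decode("utf-8") == state
```

Note the doubling: every input byte becomes two hex characters, which is why this approach tends to be long.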

I then hit the tubes to see how one could more efficiently pack data. There were a lot of good answers for really long strings which provided an efficient encoding, but few examples for short strings of ascii. What people were doing, however, was minifying the json.

I ended up using the following process to achieve my goals:
  -- minify the json
  -- call base64.urlsafe_b64encode(minified.encode('bz2'))

This first packs the json down into an efficient number of ascii characters, then applies a bz2 compression technique, and then packs that into a url-safe parameter which can be easily interpreted by the server (or anyone else). It also puts the JSON config data into a fairly safe packet. There's not a lot of risk on the server-side that poor decoding of the url component will result in some kind of security exception.
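Sketching the whole pipeline on a modern Python (the post's minified.encode('bz2') idiom is Python 2; the stdlib json, bz2 and base64 modules do the same job, and the sample config is invented):

```python
import base64
import bz2
import json

def pack_state(config: dict) -> str:
    """Minify the JSON, bz2-compress it, then URL-safe base64 encode."""
    minified = json.dumps(config, separators=(",", ":"))  # no spaces = minified
    compressed = bz2.compress(minified.encode("utf-8"))
    return base64.urlsafe_b64encode(compressed).decode("ascii")

def unpack_state(token: str) -> dict:
    """Reverse the pipeline: base64 decode, decompress, parse the JSON."""
    compressed = base64.urlsafe_b64decode(token.encode("ascii"))
    return json.loads(bz2.decompress(compressed).decode("utf-8"))

config = {"title": "Thunderstorm track error", "x_axis": "time"}
assert unpack_state(pack_state(config)) == config
```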

So, how does it perform? Well, here is the non-minified json snippet:

    {
        "title": "Thunderstorm track error",
        "x_axis": "time",
        "x_labels": "time_labels",
        "y_axis": "dist",
        "y_labels": "dist_labels",
        "series_one": "blah"
    }

The input json snippet was 177 characters long.
The minified json was 140 characters long.
The bz2-compressed data was 126 'characters' long.
The base-64 url encoding was 168 characters long.

For larger json files, the saving from minifying is even greater. Also, for much larger json files, I would expect the saving from the bz2 compression to be a much higher proportion also.

The final url string was slightly shorter than the original string. It's not a big saving, but at least it's not larger. By contrast, if I just hex encode the minified string, the length is 280 characters. Each step of the process is important to keeping the shared string as short as possible while still keeping a sensible transport format.

I'd be curious whether anyone else has done any work on sharing shortish ascii configuration strings via URL parameter.

Tuesday, February 4, 2014

Help required: Python importing (2.7 but could be convinced to change)

I am writing an application which also incorporates the ability for users to run -- and test -- their own custom sub-projects. These sub-projects are called "experiments". The structure on disk is as follows:

   -- application/

   -- data/
At the moment, the test code includes sys.path.append('../scripts') in order to find the scripts which are being tested.

If I run py.test (or python -m unittest) from the data/experiments/tests/ directory, that's fine. If I run the application tests from the tld (top-level directory), that is also fine. But I can't run the experiments tests from the tld using e.g. py.test or python -m unittest discover. I'm in module-importing hell, because '../scripts' is evaluated relative to the executing directory (the tld), not relative to the test file's directory.

What is the right thing to do here?
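One common approach -- offered here as a sketch, not necessarily the canonical answer -- is to resolve the path relative to the test file itself rather than the working directory, via __file__:

```python
import os
import sys

# Resolve '../scripts' relative to this test file, not the current working
# directory, so the tests import the same modules no matter where py.test
# or unittest discover is launched from.
SCRIPTS_DIR = os.path.abspath(
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "scripts")
)
sys.path.append(SCRIPTS_DIR)
```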


Saturday, December 15, 2012

Please help support the lamp with a lamp stack... and here's why

I'm about to engage in blatant fanboi marketing, so if you don't want to experience that, stop reading now.

"The Light by MooresCloud" is the name of an amazing product. It's a computer, inside a lamp. The lamp is attractive, and would be worthy of the $100 price even if it just sat there like a rock making your room light up.

But what's truly amazing about it goes far deeper...

Perhaps you have heard about the "Internet of things". This refers to the idea that everyday appliances will be internet connected. We are starting down that path already. Our phones are internet connected, and they became computers almost overnight. Now they are channels and platforms, delivering not just phone calls, but text messages, emails, movies, web pages, notifications, shopping transactions and limitless other information exchanges. Our televisions are going the same way -- they don't just suck down sound and images from the sky any more. They give us internet TV, apps and more.

This is only the beginning.

Software freedom true believers and bleeding-heart optimists will know that the beating heart of the internet is software built by volunteers, for free, for the love of the game. People who cared sat down, figured out how to make a million computers talk to each other efficiently and at great distances, and then just gave it all away. They mostly had day jobs, because creating the internet out of nothing didn't earn them a paycheck. It was an essentially creative exercise, a solution to an out-of-context problem which nobody knew existed. They probably didn't even know what they were building.

Nobody really wants their bedside lamp to do all of these things.  At least, not exactly. But it could certainly do with some upgrades. Like, maybe it could turn on in the morning automatically when the alarm clock goes off, so you don't have to fumble for the switch... and maybe some more...

This is the internet of things. Not powerful phones, or powerful televisions, delivering the same content. Rather, it is the seamless and intelligent integration of tiny appliances, operating in concert based on our intentions. For example, it's 2am. You bump your lamp on. Its onboard computer notifies the Philips Hue LED lamp down the corridor to the bathroom. They both recognise the 2am timestamp, and light dimly rather than blazing 60 watts straight into your sleepy eyes.

The Cathedral and the Bazaar is a seminal work on the economics of open source software. It discusses the traditional, capitalist, business-based model of invention and monetary return. It accepts that by creating intellectual property, protecting it, and extracting a return, one can make invention profitable. But it also outlines another approach. Not all work is profitable. Some work is done simply to address costs. For example, if you are in the business of selling fishing lines, you don't care much about phones. You'll pay to get a better one, but you don't mind whether that improvement goes only to you, or to everyone at the same time. Imagine a world where, every time you paid for something, *everyone in the world* got the benefit. That's open source. Imagine if every time you paid to get a software bug fixed, it got fixed for everyone. And imagine if, every time, anywhere in the world, someone else paid to fix a software bug, your world automatically got better, for free. That's the key. Imagine if you could concentrate on the business you are really in, while everything else just got better for free.

Moore's Cloud have done something amazing. They will sell you a light (well, reward you with one at the kickstarter stage). But they will give you everything else for free. Including instructions for building your own light. The software. Oh, and their business model. You can simply download their financial documents and business plan. Just like that. Why? Because they don't care about that. They believe they can do a better job of developing the leading edge than anyone else, and that open developments will drive out closed developments in the short and long run. Nobody can steal their ideas because everybody can have them for free.

So, how does the rubber hit the road? Open source software is still largely a volunteer exercise, although major corporations invest in it for precisely the reasons outlined in The Cathedral and the Bazaar. Google doesn't want to own your web browser and compete against Microsoft. They want to own your search results, and make browser competition irrelevant. Which they pretty much have. Many pieces of software cost money, representing substantial intellectual property and value, and kudos to their inventors. But just as many are free, getting quietly and continually better, like a rising tide lifting all boats.

Moore's Cloud live at the intersection of the Open Source movement, the modern startup innovation culture, a commercial business and the obvious strategic trend toward an Internet of Things. Like the early internet pioneers, those people participating in this space are solving an out-of-context problem for the 99%. In twenty years, when the world around us is profoundly inter-connected, and this profound interconnection becomes the environment in which we live, this movement will seem every bit as profound as any other major innovation in our built environment.

Building the internet, and building open-source software takes trust, commitment and skill. It takes people to work together at a distance, with little direct obligation. It takes time and it takes money. It takes donations. It requires a business model which will allow the makers and dreamers to try, fail and succeed. It needs your help. For the price of any other piece of quality industrial design, why not also take part in the revolution?

Check out their kickstarter pitch. Let them tell you their story in their own words. Here's the trick. If they fail, backing on kickstarter is free. You can help with as little as a $1.00 contribution. For $100, one of the lights can be yours, and you can own a part of history. And get a bedside lamp to be proud of. 

  -- This post was made without consultation with the team behind Moore's Cloud
  -- I'm definitely not making any money out of this. I've backed them, but I have no vested interest.
  -- I've probably made lots of mistakes. This is a blog post on the internet, get over it. I did it in a rush.
  -- That said, I'll make any and all corrections required / desired

Thursday, December 13, 2012

[solved] LG LM7600 Wifi Connection Password not accepted

Hi all,

Some breadcrumbs for anyone else experiencing this problem.

   Problem: The LG LM7600 will not connect to the wireless network. It appears not to accept your wireless password, but you're sure it's correct.

   Cause: Your password may have spaces in it. The LG LM7600 is too stupid to recognise a password with a space in it.

   Fix: Change your wireless password to not have any spaces in it.

Tuesday, November 13, 2012

Career options for ICT staff in Australia

This post is a response to an article about ICT careers in Australia.

The article is fine. However, I think one of the main reasons that ICT careers are not fully appealing is that people have seen that, when the rubber hits the road, an ICT pro will NEVER get the big promotion into management over people from other tracks within an organisation. This post is based purely on personal opinions, and has not undergone any real fact-checking. In fact, as soon as I started thinking too hard, I started poking holes in my own arguments. But, rather than sink the entire thing, I've posted it for crumbs of insight and general discussion...

Only a few companies make big bucks directly out of ICT: Apple, Facebook, Google, hardware vendors, maybe a few others. By this I mean companies whose core business does not extend beyond ICT: companies that aren't in business mainly as part of a value chain which leads to something else.

For example, stock-trading companies. Trading is ludicrously heavily automated and involves a lot of IT. However, a software engineer is never going to grow up to run the business themselves. They know too much about systems engineering, and not enough about running the business. Other tracks, like sales, project management, or product development, know far more about what it takes to stick with the trends and grow the business of taking other people's money in return for a service. And it's those people who will always run the business.

Another example: airline companies. These businesses require autopilots that work, their flight routes are automatically determined, and check-in is self-serve. But the fundamental transaction -- a ticket for money -- is defined, grown and managed outside the ICT branch. No ICT professional will ever know as much, or be as trusted to make business decisions, as someone who has come out of the business part of the business.

ICT is simply not at the big table in most companies. There might be a CIO or CTO who is responsible for things like enterprise architecture, or for negotiating large contracts for computing services. Frequently, said CIO or CTO will not have come from the systems engineering, software engineering or system administration areas. They will only really exist to solve a problem and efficiently manage what looks to most people like a big fat cost centre that everyone needs but nobody really wants to be friends with.

Same with lawyers.

There are big companies, full of lawyers and full of ICT people, going around plying their trade. Within those firms, ICT staff can develop into business managers. But they're still the minority. Most ICT staff are still fundamentally embedded people inside other people's businesses, and with that model, there is always an uphill battle to the next promotion when competing with others who are inherently more trusted by that business. Most people just don't want to deal with the boring details of a technical issue.

There is obviously a strong startup culture in ICT, especially in places like the US where it's practically the standard way of doing business. But not every country has a Silicon Valley, and even those that do still have huge numbers of ICT staff embedded in other businesses, part of a branch which might be important but is never really part of the trunk. To break free of this, ICT entrepreneurs mainly find that they have to go it alone.

I think there are a few reasons for this:
  (1) ICT is both more expensive and more valuable than most businesses can easily plan for
  (2) ICT is both harder and more technical than most people can easily accommodate
  (3) It's really hard to balance technical and business priorities at the same time in the same head
  (4) There is such a major history of ICT project failures
  (5) Most business people would rather be managing and doing business than thinking technically, and they have all the money

Is it any surprise that most capable people, when considering a career, don't pick a highly technical and difficult profession, that is generally paid at best a solid middle-class income?

One figure quoted in the article claims that people don't choose ICT as a university course because they don't understand what an ICT career is, and think it's basically just programming. I think it's true that people think that, but I think that is in large part because of how dead boring most IT in Australia is. You get paid okay, which is a good start, but not so well that it seems glamorous or important. Nobody sees ICT as the fast track to a BMW and private school fees for the kids. Doctors and lawyers spring to mind as examples of people who make the big bucks for their primary activity. ICT staff who make big bucks do so by transitioning out of doing ICT work and making the leap into another profession: managing people and running a business.

Most ICT is dead boring. Relatively few people have the chance to work on something that is even visible to a person outside the company, let alone something important. Mostly you get treated like you're not really a part of the business, which you're not. Or like you can't be trusted with business decisions, which you often can't, because you're never given a playground to learn and make mistakes in. If you want rewarding work, you either have to excel at your job, or go out and find it, deliberately and painstakingly. That's what I did.

Which is all completely stupid.

Because most ICT problems are exactly the frickin' same as everyone else's problems. ICT staff are, mainly, technically competent general problem-solvers. Sounds like the ideal manager to me. They can tell when something is worth doing and when it's not, because every day they are confronted with a general problem, loosely specified, expressing somebody's need, and are expected to turn it into something that people can use to Get Stuff Done. As an ICT worker, I have seen a wider range of business problems than most: financial issues, legal ones, systems issues, scientific problems, and the list just goes on and on.

However, they tend not to be exposed to the same range of "people problems" (and ways of solving them), such as negotiating, making a business case, making a sale, designing a business proposal, or working with clients, as those who are in directly relevant roles. It makes some sense: ICT staff need a fair bit of time to complete their technical work. They need the space to think and plan. You can't get into the zone of technical work with less than 3-4 hours of known uninterrupted time.

What we have, as I hope I have just illustrated, is in fact an economic and career-management issue. It has, in my opinion, almost nothing to do with whether enough capable people would enjoy the work. They can simply see that it's a bit of a dead end for someone with ambition.