Saturday, December 15, 2012

Please help support the lamp with a lamp stack... and here's why

I'm about to engage in blatant fanboi marketing, so if you don't want to experience that, stop reading now.

"The Light by MooresCloud" is the name of an amazing product. It's a computer, inside a lamp. The lamp is attractive, and would be worthy of the $100 price even if it just sat there like a rock making your room light up.

But what's truly amazing about it goes far deeper...

Perhaps you have heard about the "Internet of things". This refers to the idea that everyday appliances will be internet-connected. We are already starting down that path. Our phones are internet-connected, and they became computers almost overnight. Now they are channels and platforms, delivering not just phone calls, but text messages, emails, movies, web pages, notifications, shopping transactions and limitless other information exchanges. Our televisions are going the same way -- they don't just suck down sound and images from the sky any more. They give us internet TV, apps and more.

This is only the beginning.

Software freedom true believers and bleeding-heart optimists will know that the beating heart of the internet is software built by volunteers, for free, for the love of the game. People who cared sat down, figured out how to make a million computers talk to each other efficiently and at great distances, and then just gave it all away. They mostly had day jobs, because creating the internet out of nothing didn't earn them a paycheck. It was an essentially creative exercise, a solution to an out-of-context problem which nobody knew existed. They probably didn't even know what they were building.

Nobody really wants their bedside lamp to do all of these things.  At least, not exactly. But it could certainly do with some upgrades. Like, maybe it could turn on in the morning automatically when the alarm clock goes off, so you don't have to fumble for the switch... and maybe some more...

This is the internet of things. Not powerful phones, or powerful televisions, delivering the same content. Rather, it is the seamless and intelligent integration of tiny appliances, operating in concert based on our intentions. For example, it's 2am. You bump your lamp on. Its onboard computer notifies the Philips Hue LED lamp down the corridor to the bathroom. They both recognise the 2am timestamp, and light dimly rather than blazing 60 watts straight into your sleepy eyes.
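The toy version of that logic is genuinely tiny. A hypothetical sketch in Python (the API is invented for illustration; real lamps would need real network plumbing):

import datetime

def brightness_for(now):
    # Glow dimly through the night; full brightness during the day.
    if now.hour >= 22 or now.hour < 6:
        return 0.1
    return 1.0

level = brightness_for(datetime.datetime.now())
# ...then tell each lamp along the corridor to light at 'level' (0.0 to 1.0).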

The Cathedral and the Bazaar is a seminal work on the economics of open source software. It discusses the traditional, capitalist, business-based model of invention and monetary return. It accepts that by creating intellectual property, protecting it, and extracting a return, one can make invention profitable. But it also outlines another approach. Not all work is profitable. Some work is done simply to address costs. For example, if you are in the business of selling fishing lines, you don't care much about phones. You'll pay to get a better one, but you don't mind whether that improvement goes only to you, or to everyone at the same time. Imagine a world where every time you paid for something, *everyone in the world* got the benefit. That's open source. Imagine if every time you paid to get a software bug fixed, it got fixed for everyone. And imagine if, every time, anywhere in the world, someone else paid to fix a software bug, your world automatically got better, for free. That's the key. Imagine if you could concentrate on the business you were really in, while everything else just got better for free.

Moore's Cloud have done something amazing. They will sell you a light (well, reward you with one at the kickstarter stage). But they will give you everything else for free. Including instructions for building your own light. The software. Oh, and their business model. You can simply download their financial documents and business plan. Just like that. Why? Because they don't care about that. They believe they can do a better job of developing the leading edge than anyone else, and that open developments will drive out closed developments in the short and long run. Nobody can steal their ideas because everybody can have them for free.

So, how does the rubber hit the road? Open source software is still largely a volunteer exercise, although major corporations invest in it for precisely the reasons outlined in the Cathedral and the Bazaar. Google doesn't want to own your web browser and compete against Microsoft. They want to own your search results, and make browser competition irrelevant. Which they pretty much have. Many pieces of software cost money, representing substantial intellectual property and value, and kudos to their inventors. But many more are free, getting quietly and continually better, like a rising tide lifting all boats.

Moore's Cloud live at the intersection of the Open Source movement, the modern startup innovation culture, a commercial business and the obvious strategic trend toward an Internet of Things. Like the early internet pioneers, those people participating in this space are solving an out-of-context problem for the 99%. In twenty years, when the world around us is profoundly inter-connected, and this profound interconnection becomes the environment in which we live, this movement will seem every bit as profound as any other major innovation in our built environment.

Building the internet, and building open-source software takes trust, commitment and skill. It takes people to work together at a distance, with little direct obligation. It takes time and it takes money. It takes donations. It requires a business model which will allow the makers and dreamers to try, fail and succeed. It needs your help. For the price of any other piece of quality industrial design, why not also take part in the revolution?

Check out their kickstarter pitch. Let them tell you their story in their own words. Here's the trick: if they fail, backing on kickstarter costs you nothing. You can help with as little as a $1.00 contribution. For $100, one of the lights can be yours, and you can own a part of history. And get a bedside lamp to be proud of.

http://www.kickstarter.com/projects/cloudlight/light-1/ 


Footnotes:
  -- This post was made without consultation with the team behind Moore's Cloud
  -- I'm definitely not making any money out of this. I've backed them, but I have no vested interest.
  -- I've probably made lots of mistakes. This is a blog post on the internet, get over it. I did it in a rush.
  -- That said, I'll make any and all corrections required / desired

Thursday, December 13, 2012

[SOLVED] LG LM7600 Wifi Connection Password not accepted

Hi all,

Some breadcrumbs for anyone else experiencing this problem.

SYMPTOM:
   The LG LM7600 will not connect to the wireless network. It appears not to accept your wireless password, but you're sure it's correct.

PROBLEM:
   Your password may have spaces in it. The LG LM7600 is too stupid to recognise a password with a space in it.

SOLUTION:
   Change your wireless password to not have any spaces in it.

Tuesday, November 13, 2012

Career options for ICT staff in Australia

This post is a response to the article below:

http://www.theage.com.au/it-pro/business-it/a-brilliant-career--but-not-in-ict-20121112-29866.html

The article is fine. However, I think one of the main reasons that ICT careers are not fully appealing is that people have seen that when the rubber hits the road, an ICT pro will NEVER get the big promotion into management over people from other tracks within an organisation. This post is based purely on personal opinions, and has not undergone any real fact-checking. In fact, as soon as I started thinking too hard, I started poking holes in my own arguments. But, rather than sink the entire thing, I've posted it for crumbs of insight and general discussion...

Only a few people make big bucks directly out of ICT: Apple, Facebook, Google, hardware vendors, maybe a few others. By this I mean people whose core business does not extend beyond ICT -- people who aren't in business mainly as part of a value chain which leads to something else.

For example, stock-trading companies. Stock trading is ludicrously heavily automated, and involves a lot of IT. However, a software engineer is never going to grow up to run the business themselves. They know too much about systems engineering, and not enough about running the business. Other tracks, like sales, or project managers, or product developers, know far more about what it takes to stick with the trends and grow the business of taking other people's money in return for a service. And it's those people who will always run the business.

Another example: airlines. These businesses require autopilots that work, their flight routes are automatically determined, and check-in is self-serve. But the fundamental transaction -- ticket for money -- is defined, grown and managed outside the ICT branch. No ICT professional will ever know as much, or be as trusted to make business decisions, as someone who has come out of the business part of the business.

ICT is simply not at the big table in most companies. There might be a CIO or CTO who is responsible for things like enterprise architecture, or for negotiating large contracts for computing services. Frequently, said CIO or CTO will not have come from the systems engineering, software engineering or system administration areas. They will only really exist to solve a problem and efficiently manage what looks to most people like a big fat cost centre that everyone needs but nobody really wants to be friends with.

Same with lawyers.

There are big companies, full of lawyers and full of ICT people, going around plying their trade. Within those firms, ICT staff can develop into business managers. But they're still the minority. Most ICT staff are fundamentally embedded inside other people's businesses, and with that model, there is always an uphill battle to the next promotion when competing with others who are inherently more trusted by that business. Most people just don't want to deal with the boring details of a technical issue.

There is obviously a strong startup culture in ICT, especially in places like the US where it's practically the standard way of doing business. But not every country has a Silicon Valley, and even those that do still have huge numbers of ICT staff embedded in other businesses, part of a branch which might be important but is never really part of the trunk. To break free of this, ICT entrepreneurs mainly find that they have to go it alone.

I think there are a few reasons for this:
  (1) ICT is both more expensive and more valuable than most businesses can easily plan for
  (2) ICT is both harder and more technical than most people can easily accommodate
  (3) It's really hard to balance technical and business priorities at the same time in the same head
  (4) There is such a major history of ICT project failures
  (5) Most business people would rather be managing and doing business than thinking technically, and they have all the money

Is it any surprise that most capable people, when considering a career, don't pick a highly technical and difficult profession, that is generally paid at best a solid middle-class income?

One figure quoted in the article claims that people don't choose ICT as a university course because they don't understand what an ICT career is, and think it's basically just programming. I think it's true that people think that, but I think that is in large part because of how dead boring most IT in Australia is. You get paid okay, which is a good start, but not so well that it seems glamorous or important. Nobody sees ICT as the fast track to a BMW and private school fees for the kids. Doctors and lawyers spring to mind as examples of people who make the big bucks for their primary activity. ICT staff who make big bucks do so by transitioning out of doing ICT work and making the leap into another profession: managing people and running a business.

Most ICT is dead boring. Relatively few people have the chance to work on something that is even visible to a person outside the company, let alone something important. Mostly you get treated like you're not really a part of the business, which you're not. Or like you can't be trusted with business decisions, which you often can't, because you're never given a playground to learn and make mistakes in. If you want rewarding work, you either have to excel at your job, or go out and find it, deliberately and painstakingly. That's what I did.

Which is all completely stupid.

Because most ICT problems are exactly the frickin' same as everyone else's problems. ICT staff are, mainly, technically competent general problem-solvers. Sounds like the ideal manager to me. They can tell when something is worth doing and when it's not, because every day they get confronted with a general problem, loosely specified, expressing somebody's need, and are expected to turn that into something people can use to Get Stuff Done. As an ICT worker, I have seen a wider range of business problems than most. I see financial issues, legal ones, systems issues, scientific problems, and the list just goes on and on.

However, ICT staff tend not to be exposed to the same range of "people problems" (and ways of solving them) -- negotiating, making a business case, making a sale, designing a business proposal, working with clients and so on -- as those who are in directly relevant roles. It makes some sense. ICT staff need a fair bit of time to complete their technical work. They need the space to think and plan. You can't get into the zone of technical work with less than 3-4 hours of known uninterrupted time.

What we mainly have, as I hope I have just illustrated, is in fact an economic and career management issue. It has, in my opinion, almost nothing to do with whether enough capable people would enjoy the work. They can just see it's a bit of a dead end for someone with ambition.

Friday, August 24, 2012

PyCon AU write-up

Well, PyCon AU 2012 was definitely the best Olympics ever. The quality of the talks was outstanding, and the event organisation went flawlessly. Kudos all around.

I presented twice, videos available on YouTube:
  "Visualising Architecture" : http://www.youtube.com/watch?v=BGOtqXA_y1E
  "Virtual Robotic Car Racing with Python and TORCS" : http://www.youtube.com/watch?v=BGOtqXA_y1E

Visualising Architecture presents some useful tools you can easily use for examining codebases and running systems to show their internal structures, and includes some discussion on good design in context. Robotic cars is pretty much as the title indicates. Thank you to everyone who showed up and sat through them, and special thanks to those who asked some great questions at the end. Speaking is a real pleasure when the audience is happy to talk afterwards.

Other talks I attended (and would recommend) were:

"What to buid. How to build it. Python can help!" by Mark Ramm. A great piece on, essentially, good management practises utilising evidence-gathering to make decisions. Presented examples based on product management at sourceforge. People should do this more.

"The Lazy Web Dev's Guide to Testing Your Web API" by Ryan Kelly. Ryan is a great speaker, and he showed some good techniques for reducing the amount of effort in testing web APIs.

"Python Dark Corners Revisited". Definitely worth a watch for anyone working with Python. A good explanation of Python's types and data structures, presented as a bunch of surprising and challenging short questions and explorations in Python.

"Funcargs and other fun with pytest" by Brianna Laugher. Everyone should know more about testing, py.test is a great tool, and the presentation was very practical and includes applied real-world examples and problem-solving.

"Python Powered Computational Geometry" by Andrew Walker. A great exploration of tools and techniques for representing and visualising 3d objects. Cool!

I didn't see, but plan to watch later, a couple of the other presentations. I'd also queue up "Think, Create, and Critique Design" by Andy Fitzsimon (video not available, but use Google to find the slides he posted); and "An unexpected day" by Aaron Iles.

Monday, August 20, 2012

Committing to Git

I've been doing a bit more developing lately, and a bit less hanging around in meetings. As a result, it has become somewhat painfully obvious that I don't quite know all the right things any more. One of the moves I have made is to use git-svn to interact with our SVN repo. This basically gives me all the relevant advantages of git, without having to go and have a fight, er, well-reasoned discussion with the manager of said repo to bring about the change.

There are some things I've learned painfully, and some positive workflow changes I have come across. These are my notes on the topic so far.

Basic workflow (a command sketch follows the list):
  1.) git svn clone the repo
  2.) ALWAYS WORK ON A BRANCH. You do *not*, I repeat do *not* want to do a git svn fetch and discover you have introduced merge conflicts into your master branch. Masters of git kung fu can probably get themselves out of this hole, but it's fairly painful.
  3.) ALWAYS WORK ON A BRANCH.
  4.) Okay, so you're working on a branch. Switching branches is pretty easy. However, you do need to notice that creating a new branch is not the same thing as working on it. Either use git checkout -b, or checkout the branch after creating it. You'll get used to this.
  5.) Your local file changes follow you around as you switch branches. I have no idea what would happen if two branches had, say, wildly different directory structures. Probably you would have to rely on git being awesome.
  6.) Sometimes, I have had problems with the local git log not matching the remote SVN log after doing a git svn dcommit. I followed some script on the internet once to "fix" it. It worked the first time. The second time, it corrupted my entire git repository just before a deadline, and basically everything was terrible. Luckily I had an SVN repo as well, and copied my changes there manually before blowing away my whole git repository. The moral of the story: while learning, maintain a parallel checkout using plain SVN, so that you have a separate, clean system you can switch to if the going gets scary.
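For concreteness, here's roughly what that loop looks like as commands. This is a sketch only -- the repo URL and branch name are made up:

   git svn clone http://svn.example.com/project myproject
   cd myproject
   git checkout -b feature-x        # ALWAYS WORK ON A BRANCH
   # ...hack, then commit locally as often as you like...
   git commit -am "progress on feature x"
   git svn rebase                   # fetch and replay new SVN revisions
   git svn dcommit                  # push your local commits back to SVN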

Covering Your Ass

I would recommend this. I don't know about you, but my fellow devs tend to get a bit twitchy if I commit too much lint or break tests, I can't think why. I have a tendency to get frustrated about 80% of the way through complex work, and it's really useful to have automation in place to protect against stupid mistakes.

Fortunately, git comes with an inbuilt ass-covering system, called hooks. It lives inside the git repository, which makes me feel faintly suspicious about it, but it both exists and works. Inside .git/hooks you will find a collection of files *.sample. If you make copies of these without the .sample extension, you can add automated checks and processes to the system. As follows.

pre-commit:
    This runs, as said, before anything gets *really* committed. This is a good spot to add a pylint check, a pep8 check, and maybe a short run of unit tests. Before you even get the chance to type your commit message, there is a safety net. You can skip this with --no-verify.

prepare-commit-msg:
   This allows you to template your commit messages. For example, if your system requires a bug ID to go along with each commit message, you can insert template text into the message here.

These are just shell scripts. Since I basically hate everything that is not Python, my first step is to start the files with
   #!/usr/bin/python
which causes them to be interpreted by Python rather than bash. FTW!
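For example, here's a minimal pre-commit along those lines. It's a sketch under my own assumptions (pylint and pep8 available on the PATH, Python files only), so adapt it to your project:

#!/usr/bin/python
# .git/hooks/pre-commit -- abort the commit if staged .py files fail checks.
import subprocess
import sys

def staged_python_files():
    # Ask git which staged (cached) files are about to be committed.
    output = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"])
    return [f for f in output.splitlines() if f.endswith(".py")]

def main():
    failed = False
    for path in staged_python_files():
        for check in (["pep8", path], ["pylint", "-E", path]):
            if subprocess.call(check) != 0:
                failed = True
    if failed:
        print "Checks failed -- commit aborted (git commit --no-verify skips)."
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())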

Initiate debugger using signals

So, I was watching a lightning talk at PyCon AU yesterday, and I think the speaker's first name was Matt. Apologies for not giving a better reference. It all went by pretty quickly, but I think I heard something like "Why not use a signal to start the debugger in Python?"

And so I did. I can't work out if this is too trivial to upload to the cheese shop, but if you create a file called "signal_handler.py" and import it from anywhere in your code, you will magically link up SIGINT (what gets sent by Ctrl-C) to the Python debugger. For extra win, it will use "ipdb" if you have it. I haven't actually tested it on pdb, but it's hard to see how it could fail to work. Besides, you should be using ipdb (pip install ipdb).

import signal

# Prefer ipdb if it's installed; fall back to the standard pdb.
try:
    import ipdb as pdb
except ImportError:
    import pdb

def handle_signal(signal_number, frame_stack):
    # Drop straight into the debugger at the interrupted frame.
    pdb.set_trace()

# From now on, Ctrl-C starts the debugger instead of killing the process.
signal.signal(signal.SIGINT, handle_signal)

Voila! Now, next time you are watching your app run and you need to start the debugger when you least expect it, you can just do it!

A small note of caution -- it's probably wrong to override the expected SIGINT behaviour in this way. You could also wire it up to, say, SIGUSR1, but then you would have to explicitly send the signal with "kill -10". That would work perfectly well, but it's a bit less convenient than just slamming Ctrl-C wherever processing happens to be. I'm not sure what else might want/need to send the occasional SIGINT and rely on the normal behaviour, so use this at your own risk!
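If you do want the better-behaved variant, it's a tiny change (SIGUSR1 is signal number 10 on Linux, hence "kill -10"). A sketch:

import os
import signal

try:
    import ipdb as pdb
except ImportError:
    import pdb

def handle_signal(signal_number, frame_stack):
    pdb.set_trace()

# SIGUSR1 leaves Ctrl-C alone; trigger it from another shell: kill -USR1 <pid>
signal.signal(signal.SIGUSR1, handle_signal)
print "PID %d: send SIGUSR1 to break into the debugger" % os.getpid()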

When you push ctrl-d to end the debugger, you will exit the program.

Tuesday, May 29, 2012

A month with a mac

I recently took the plunge, and started using a Macbook Air. My setup at work had been a capable linux desktop, an underpowered Windows box, and an adequate dual-boot windows/linux laptop. The linux desktop remains my primary at-work development environment, and the only thing I'd like to change about it is to upgrade its monitor to something that supports very high resolutions.

The Macbook offers a potential compromise between portability, capability, corporate interoperability, and development suitability. Well, first off the bat, two things are not even compromised: portability and corporate interoperability are just 100% fine. In fact, my office experience has actually been better on the mac, as it has a more recent Office edition and a higher screen resolution. Big thumbs up. Capability is roughly the same as the last laptop: fine, but nothing particularly outstanding. The user interface is clean and responsive at all times, but I can still grind the thing to a halt if I try to do too much with it. However, that's really no different to the underpowered Windows box or laptop I used to have.

So the final, and all-important issue: development suitability. It's "okay". That's actually pretty amazing, considering that I work as a professional software engineer developing Python applications for a linux environment. There are lots of little issues, and a few big ones. But it basically works.

First of all, let's take a step back and talk about writing code on the machine. Xcode, the IDE that comes standard, is something I couldn't understand and get comfortable with, maybe because I'm neither writing OSX apps nor web apps. That's right, I'm writing old-school client/server GUI apps, in Python, for linux. Fortunately, my favourite editor, Sublime Text, is available. I've done a lot of learning how to use the mac, but for the task of writing code I just slipped into the beautiful environment of Sublime Text like putting on an old pair of slippers in winter. It's just great.

The screen resolution, for me, noticeably reduces eyestrain. It's easier to read the text when there is more resolution in each character. I'm not a micro-font-size weenie either, although I do like it small enough to see a decent chunk of code at a glance. It's the default font, which looks like a size ten fixed-width. But size ten, at high resolution, is finished off well. The backlit keyboard is nice, and the keyboard feel is also great. A decent amount of space between the keys is tactile and keeps the typo rate down. It's easy to feel when you've drifted off the center of a key and might have smooshed the wrong letter. It's great for "authoring" on.

The setup of software for me takes a little while. First up, you'll want MacPorts installed. Or brew. Or fink. I don't really know how to figure out which is better, but I just think of them like apt or yum. They install things for you. But not everything works, and there is a decent amount of stuff installed in my system now which I downloaded and compiled manually, and which will now probably drift over time. Without a whole community of developers keeping the whole system moving forward, I'll have to do it myself, and I'll probably only notice when things break. Sometimes things don't work. For example, I installed wxPython from a binary, but it doesn't run. I don't know why. I couldn't get pyCairo to install either, and apparently a lot of people have trouble with this. On a mac, you are officially off the beaten path, and you can expect to run into some trouble from time to time.

On the plus side, the terminal does provide me with a familiar environment. It does what I want. I can run most of what I want. I got a real kick out of running ipython notebook, setting up numpy and scipy, and sending my CPUs off to processing land for 5 minutes cranking through some scientific data processing.

The whole system definitely "comes together" in a way neither linux nor Windows does. Distributors could learn a lot from working on a mac about how to build a system that people can just "be" on. Be, on a mac. The dock is awesome, and the fact that the UI is basically always responsive is awesome. It's hard to describe how nice it is having the whole system integrate well. It's like when you're in the office, and the airconditioner turns off, and you suddenly hear the silence and realise how noisy it was. Having a system's parts integrate well is exactly like that.

Ultimately, I think the Mac is a beautiful machine and a beautiful OS. It is okay as a development platform for linux/open source applications, without having to bother with dual-boot or working in a VM. For testing, obviously, you will need to prove the system in the end-user's environment. But that is always true. It's also clearly possible to use this in a business setting. If you're a small startup, using a Mac will let you interface with the business world and the developer world at the same time, mostly. But you will also have to do some extra work to master your environment, and get over some hurdles yourself.

File paths and nested dictionaries

Really, I just want to boost the pagerank of this page: http://code.activestate.com/recipes/475156-using-reduce-to-access-deeply-nested-dictionaries/

I had an issue where I had a list, say [1,3,4,6,7, "result"], which I wanted to smoosh into a nested dictionary and get back easily. I found my own way to store said items (built up programmatically in the course of other logic), but I wanted an easy way to get them.

I will now reproduce the solution from the above page in full, for your convenience:

# In your hand you have a dict instance representing the database 'object'
dbo={'m':{'d':{'v':{'version':1}}}}

# You know this thing corresponds to a table whose rows follow some convention
# indicating 'depth'; say '__' happens to be the separator.

# You want to access a particular element, but you know it only by its column
# name 'm__d__v__version'

name='m__d__v__version'

version = reduce(dict.get, name.split('__'), dbo)

assert version == 1

foo = reduce(dict.get, 'm__d__v__foo'.split('__'), dbo)

assert foo == None
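For the other half of my original problem -- smooshing a list like [1, 3, 4, 6, 7, "result"] into a nested dictionary in the first place -- here's a minimal sketch, treating the last element as the value and the rest as the key path:

def nested_set(d, keys, value):
    # Walk the intermediate dicts, creating them as needed, then store value.
    for key in keys[:-1]:
        d = d.setdefault(key, {})
    d[keys[-1]] = value

def nested_get(d, keys):
    # Same trick as the recipe above.
    return reduce(dict.get, keys, d)

item = [1, 3, 4, 6, 7, "result"]
store = {}
nested_set(store, item[:-1], item[-1])
assert nested_get(store, item[:-1]) == "result"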

Friday, April 13, 2012

Robotic car racing in Python

Now this is really cool.

TORCS is a racing car game / simulator. Here's a good video (has music, so maybe mute your speakers). However, standard TORCS clients have access to a lot of information, and writing bots for them is cool, but it is, IMO, about writing great game bots.

If you're interested in the idea of autonomous robots at all, you've probably heard of Udacity's online course, "Programming a Robotic Car". Following some discussions on there, someone pointed out the Simulated Car Racing Championship (http://games.ws.dei.polimi.it/competitions/scr/), which gives clients artificial sensor readings as though fitted to a self-driving car. You have a standard set of car controls and a standard set of sensors.

One of the other students, "lanquarden", wrote a Python client which fits this. His original post in the forums describes the modules. Note, this is compatible with TORCS 1.3.1, which works just fine.


   I saw your post and started looking for more information. I ended up reading the manual they provide with a patch for torcs, and their client example. It's a lot more realistic using the setup for the SCRC: you don't have all the information available to you like any other 'torcs driver' program. The range finders can even have a noisy measurement output. The people organizing SCRC have made software examples available for both C++ and Java, but no Python. As the server-client interface is UDP messages, it's pretty straightforward to make a Python version. I'm sharing my code that implements a client for the SCRC. It's divided into 5 files:
     -- pyclient.py: main file; you can make it executable on linux, and it has the same command line parameters as the C++/Java client.
     -- driver.py: holds the driver class, with a drive method that should drive the car.
     -- carState.py: holds the car state class; the state can be updated with a message from the torcs server.
     -- carControl.py: holds the car control class; the control parameters can be set and then transformed into a message for the torcs server.
     -- msgParser.py: holds a parser class for translating the torcs server message into usable variables, and vice versa.
   Anyone wanting to toy around with the code can download it from GitHub. Install instructions for the torcs server can be found here.

So, I thought I'd share that this exists, because it's awesome. The car supplied in the standard driver.py file has a max speed set at 100 kph, which is slow enough to drive around the default track without skidding, allowing it to simply follow the "track axis", which you have sensor data for.

Seriously, how awesome is this? My next steps are going to be to try to simply map out the track, then start improving the performance of the car. It's a great problem, really open-ended, and it allows you to exercise state-of-the-art robotics algorithms for localisation, navigation and mapping.
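To give a flavour of where you'd start, here's a steering sketch in the spirit of the SCRC example drivers. The method names (getAngle, getTrackPos, setSteer) are my assumptions about lanquarden's carState/carControl classes, so check them against the code on GitHub:

import math

STEER_LOCK = math.radians(45)  # a typical steering lock from the SCRC material

def steer_to_track_axis(state, control):
    # angle: car heading relative to the track axis, in radians.
    # trackPos: position across the track, -1.0 to 1.0 (0.0 is the axis).
    angle = state.getAngle()
    dist = state.getTrackPos()
    # Turn toward the axis, corrected by how far off-centre we already are.
    control.setSteer((angle - dist * 0.5) / STEER_LOCK)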

Friday, March 23, 2012

My dream editor

I was watching an amazing video today: "Inventing on Principle" by Bret Victor. http://vimeo.com/36579366  Even if you don't read this blog post, go watch that video. Then, if you feel like it, read this :).


I was inspired to think about what features The Great Ultimate Editor of All Time would have in it. So here's the start of my ridiculous wishlist. In particular, I'm concentrating on any and all kinds of code analysis, visualisation or tools which help readers understand code as quickly as possible, and show it to them in ways that make understanding as simple as humanly possible. It's probably not very deep thinking, so don't shoot me if you think this is all a bit silly to be thinking about :)


Inbuilt:
  • nosier / test re-execution
  • sphinx docs of file
  • code inspections (pylint, pep8)
  • develop-with-example / standard test / default input view
  • advanced folding / hiding
  • embedded images and formulas in docstrings
  • "hg replay" / "instant history"
  • integrated visual diff
  • Code re-use heat-map (number of times function is referenced)
  • coverage heat-map
  • Maybe a performance heat-map (slow lines, fast lines etc)

Wednesday, March 21, 2012

Polymorphism: Beyond the Factory Method

So here's the basic idea. There is some situation, where you have several or many similar classes. Let's say we have customers. They are broken into a bunch of categories, based on buying habits, geographic location, blah blah blah.

Faced with this situation, it's a clear case for inheritance. Common attributes are set in the base class, and specific attributes and/or method overrides are set in the subclasses. A frequent approach is to build a factory which can recognise, from the initialisation arguments and maybe some context, what kind of customer to build in each situation.

That's all fine and great, but I'd like to propose a new approach: doing all that inside the base class constructor. I initialise a base class Customer with all the relevant info. I get back a customer, but maybe I get back an AustralianCustomer or a FilthyRichCustomer.

Here's an example of how to do that in Python.


class BasicClass(object):
    def __new__(cls, value, category):
        # Dispatch on the initialisation arguments: the base class hands
        # back an instance of the appropriate subclass.
        if category == 1:
            return CatOne.__new__(CatOne, value, category)
        if category == 2:
            return CatTwo.__new__(CatTwo, value, category)
        raise ValueError("Unknown category: %r" % category)

class CatOne(BasicClass):
    def __new__(cls, value, category):
        return object.__new__(cls)

    def __init__(self, value, category):
        self.value = "CatOne: %s" % value

class CatTwo(BasicClass):
    def __new__(cls, value, category):
        return object.__new__(cls)

    def __init__(self, value, category):
        self.value = "CatTwo: %s" % value

foo = BasicClass("Hello", 1)
bar = BasicClass("World", 2)
print foo
print foo.value


What's going on here is that the category classes *do* inherit from BasicClass. Any methods I put onto BasicClass will get inherited down the stack. However, when I initialise it, what I actually get back is one of the category classes. I have effectively pushed the factory pattern into the base class's constructor methods in order to simplify how I create objects.
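A quick check that the inheritance really does hold up, continuing from the code above:

foo = BasicClass("Hello", 1)
assert isinstance(foo, CatOne)
assert isinstance(foo, BasicClass)
print type(foo)   # <class '__main__.CatOne'>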

I think I like it :)


I just wanted to see what would happen...

>>> class Foo:
...    def __init__(self):
...       self.info = "I am a foo"
... 
>>> foo = Foo()
>>> foo.info
'I am a foo'
>>> class Bar(foo):
...    def __init__(self):
...       self.info2 = "This should be interesting"
... 
Traceback (most recent call last):
  File "", line 1, in
TypeError: Error when calling the metaclass bases
    __init__() takes exactly 1 argument (4 given)
>>> 

Sunday, February 19, 2012

Why I Just Unfollowed 60 People

This is just a brief note to explain why I just unfollowed 60 people on twitter. I follow people in the first place for the following reasons: I have a personal connection, I would like to 'network', someone is funny, or someone is unusual or interesting in some way. I basically want to hear about what is happening with all the people I have ever followed.

But I can't keep up. With anyone. Following additional people reduces the average attention I can give each person (my attention is not constant, but it has a maximum). I had hit the point where I did not have the quality of attention needed to make twitter valuable to me.

So I brutally unfollowed half of the people on my list. This isn't an act of rudeness; it is an attempt to restore the connection I have with at least some people. If I have unfollowed you, and you notice, and you'd explicitly like me to continue to follow you, please do let me know. I will more than happily re-follow people who would like to maintain the connection.

I'll also add: if I had better ways of managing and filtering posts, I would be very happy to follow many more people once again. I still think the internet needs an awesomeness filter :)

Wednesday, February 8, 2012

Understanding a transistor (hint: I don't)

Right, so I have set up a circuit. I'm afraid I'm not much with photoshop, so here's a worded description.

5V+ is wired into what I think is the collector
An Arduino output pin is wired through a 330 Ohm resistor into what I think is the base
The emitter is wired to a connection point
5V - is wired to another connection point
My Arduino is programmed to emit a HIGH pulse for 2 seconds every 2 seconds to the base.

Using the voltmeter to complete the circuit and measure the voltage, I saw a change every 2 seconds as expected. However, it was measuring 1.5V, then increasing to 3.4V and back again.

Can someone please explain why the voltage doesn't drop to zero?

Then, for bonus points, why does it not reach 5V? I can accept that the HIGH level might not be quite 5V for a variety of reasons such as the Arduino itself using some power from its source, some current going towards the base which is then not available to the 5V output from the Arduino etc etc.

But why on earth is there 1.5V present when the transistor's base receives no current???

Now, for the next question. If I detach the wire from the Arduino output pin to the transistor altogether, the circuit from 5V to collector to emitter measures 3.3V! Surely in this case it should be either 1.5V (as per above) or 0.

I just don't get it!

Sunday, February 5, 2012

Resources and tips

In working on this project, I've had to source my own resources for learning. As a software guy, my hardware knowledge is limited to whatever I can still remember from high school physics, which was 15 years ago now. Granted, I don't seem to need much more than high school physics here, but 15 years is a long time between study sessions.

I discovered that a great many electronics books are structured as follows: fluffy introduction, definition of all terms, history of physics, all of physics, now do it. Unfortunately, I can't assimilate knowledge that way. It's impossible (for me) to integrate an abstract set of definitions and history lessons, and come out with a working knowledge of building circuits. It's hard just to get through the introduction without falling asleep, frankly. So here is where I go:

Book: "Make: electronis: learning by discovery"
http://books.google.com.au/books/about/Make_electronics.html?id=PQzYdC3BtQkC

This video series, an intro to circuits from the absolute, total beginning; by Bucky Roberts:
http://thenewboston.org/list.php?cat=41

This video series, specifically covering the Arduino, by Jeremy Blum: http://www.youtube.com/watch?v=fCxzA9_kg6s

I'm sure I'll end up assembling some more resources as I learn and need more advanced topics, but these constitute a really great start with a gentle learning curve.

Cheers,
-Tennessee

Tuesday, January 31, 2012

Why doesn't my circuit work?

Okay, so I finally managed to build something that didn't work! This is actually neat, since it represents a problem-solving challenge to be overcome, and something that goes beyond colouring in the lines. Well, apparently I couldn't colour in the lines, which is how I got here.


It looks like a fairly simple circuit. Basically, the motor is connected to ground at one end, and via a transistor to a 5V source at the other. Pin 9 there goes to the transistor, so pulsing that slowly ought to turn the motor on and off. There is also a diode, and I don't know what the hell it's for. To be more accurate, I don't know why it's necessary here, or what it's doing in this circuit. In general, the concept seems straightforward enough. I'm also not 100% confident I have the diode, the transistor or the various resistors the right way around.

The resistors, I'm pretty sure, could get plugged in backwards and still operate normally. The diode, I realise, will simply not allow current to pass if plugged in backwards. Given that the motor spins if I skip the transistor, the diode 'must' be the right way in (assuming it's working correctly). So the fault seems to be with the transistor.

However, no matter how I connect the transistor, it doesn't work correctly. One way, the motor is dead. The other way, the motor spins constantly, totally ignoring the signal (Pin 9 there). Question: is it possible to break a transistor by sending current the wrong way through it? How about with diodes?

Now, I did check. The transistor is NPN, which as far as I can tell translates as "on by default", so there is one explanation: Pin 9 is broken. I tried Pin 6, no luck. So I tried coding up a flashing LED to match the Pin 9 signal. The LED flashes as required, so unless both Pin 9 and Pin 6 are physically damaged on the Arduino, it shouldn't be that.

So, now I'm stuck. Without a multimeter (or maybe just some LEDs and alligator clips) I'm pretty much unable to determine which parts of my circuit are receiving the expected current. Did I short something out? Is my diode in backwards? What current is coming out of Pin 9?

Answers to all these exciting newbie questions, and more, in weeks to come.

In the meantime, to all you electrical engineers out there, I hope you enjoyed this brief sojourn into electrical bafflement :)

PS: Some people who spotted these photos on G+ have pointed out that the diode is likely present to protect against a back-spike of power coming FROM the motor as it spins down.

Saturday, January 28, 2012

Assembling the chassis

This post is going to document the assembling of the chassis. I purchased the Magician Chassis as the basis for my bot. Assembly was mostly straightforward, although I did need to do just a little hackwork along the way to bring it together. The finished product looks like this:



It's not the greatest photo ever, but you get the idea. There are two motors, one attached to each large wheel. There is an additional rear "trackball" thing which bears the weight of the rear end. The construction is sturdy-ish. Certainly it doesn't seem like it's going to break during construction, but it should definitely not suffer a drop.

The kit comes with instructions, which kind of mostly cover what you need to know. In addition to the instructions and this blog post, there is a similar blog post available here: http://www.hobbytronics.co.uk/magician-chassis-build. It's probably more useful than this post, but this is my story :)

Here is a photo of the kit parts, unassembled:



The process begins by attaching the motors. This turns out to be medium-level tricky, where easy is "goes according to the instructions" and hard would be "I basically had to MacGyver it from first principles". My main issue was that there are some little red things which look like gears, which attach to a spindle on the side of the motor. THESE APPEAR TO HAVE NO FUNCTION. Even after assembly, no function.


You can see the red thing here. One of my red things needed to have the hole carefully expanded with a Stanley knife in order to accept the spindle. Bolting the struts on was not too bad, although it's a darn tight fit. I had to rotate the struts 180 degrees even though they look pretty symmetrical. They do go on. Anyone who knows what those red things are for, please let me know.

Attaching the wheels is a matter of shoving them onto the outside spindle of each motor. They don't fit very well (no satisfying push... click). So I hope they don't come off, but it seems okay.


Attaching the trackball is super easy. No worries there, although I might lubricate the socket at some point. The base is now complete! The leads running off the motors strike me as a bit of a catching hazard; I think it would be better to put some electrical tape on to help run them up to the upper level where they will attach to the Arduino later on, but I'll do that at another point in time.

The next step is to put the base right-way-up and attach the battery housing. The battery housing is terrible quality, and in a stupid place. The worst thing about it is that the screw-holes are so close to the edge that you can't actually put a screw in. I had to cut down the edges with a Stanley knife. I hope this isn't a problem later.


See the screw? You can see it through the hole I cut to allow the head through.

From here, it's a doddle to finish assembly. Screwing on the spacers and the top level is super-easy, leading to the finished product:


As you can see, the battery housing is in a near-inaccessible location. The HobbyTronics post I linked to earlier said they pulled it out and housed it on the top level, which certainly makes sense. However, I figure I should follow the rules before breaking them, so there is the finished product!

Of course, right now it doesn't DO anything, because I haven't wired up any power or attached the Arduino to control the motors. However, it was a great evening's work and took a little under two hours from fetching my equipment to a finished product and a clean workbench again.

Friday, January 20, 2012

Some inspirational material

This video does a fantastic job of expressing the sentiment and excitement around building a basic autonomous robot....

http://t.co/6ioJYoCI

Thanks, that guy! (http://www.instructables.com/member/Brandon121233/)

Monday, January 16, 2012

It's here! It's here!

Okay, so now my Arduino journey can begin in earnest. Why now? Because it arrived today! Squeee!

So, I thought a short unboxing post was in order. Obviously, I couldn't possibly wait until this evening at home to open it, so here are my on-the-desk photos. I'll follow up with some more later when I've actually had the opportunity to perform some construction or investigation.


Inside the SparkFun Inventor's Kit

The great, the only, Arduino Uno

I haz a "Magician Chassis", a "Sparkfun Inventor's Kit" and a "Soldering Iron". I presume that's enough to get in trouble with....

Thursday, January 12, 2012

Amateur robotics: getting started

So, here's the start of a blog series which I plan to use to document my tinkering with my soon-to-arrive Arduino board. For those who don't know, Arduino is an open-source electronics, um, computer board thingy with lots of i/o pins you can use for sensors and effectors. It can also be hooked up to a mobile phone (or cellphone, for any North American readers), which can interface with the Arduino board. This can be done using Java, or even Python (thanks to the SL4A interpreter). No phone hacking required!

So I intend to thoroughly document my process, and share it here. This is really for myself, but anyone else who is interested in amateur robotics might enjoy following along with my story as a complete newbie at this stuff.

So, the beginning...

I just placed my order for the following items:
 -- http://littlebirdelectronics.com/products/sparkfun-inventors-kit-for-arduino
 -- http://littlebirdelectronics.com/products/magician-chassis
 -- http://littlebirdelectronics.com/products/ioio-for-android
 -- http://littlebirdelectronics.com/products/infrared-proximity-sensor-short-range-sharp-gp2d120xj00f

My hope is to be able to build something like this:
 -- http://www.youtube.com/watch?feature=player_detailpage&v=9cVSzB8otpU#t=13s

But I'll have to start right from the very beginning:
  -- http://www.youtube.com/watch?v=fCxzA9_kg6s