Planet LUV

September 24, 2017

Dave Hall: Drupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved on to using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions: starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distro demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

September 23, 2017

etbe: Converting Mbox to Maildir

Mbox is the original and ancient format for storing mail on Unix systems. It consists of a single file per user under /var/spool/mail with all messages concatenated. Performance is obviously very poor when deleting messages from a large mail store, as the entire file has to be rewritten. Maildir was invented for Qmail by Dan Bernstein and has a single message per file, giving fast deletes among other performance benefits. An ongoing issue over the last 20 years has been converting Mbox systems to Maildir. The various ways of getting IMAP to work with Mbox only made this more complex.

The Dovecot Wiki has a good page about converting Mbox to Maildir [1]. If you want to keep the same message UIDs and the same path separation characters then it will be a complex task. But if you just want to copy a small number of Mbox accounts to an existing server then it’s a bit simpler.

Dovecot has a script to convert folders [2].

cd /var/spool/mail
mkdir -p /mailstore/
for U in * ; do
  # convert-script.sh stands in for the Dovecot conversion script;
  # the original post's link to it was lost
  ~/convert-script.sh -s $(pwd)/$U -d /mailstore/$U
done

To convert the inboxes, shell code like the above is needed. If the users don't have IMAP folders (e.g. they are just POP users or use local Unix MUAs) then that's all you need to do.

cd /home
for DIR in */mail ; do
  U=$(echo $DIR | cut -f1 -d/)
  cd /home/$DIR
  for FOLDER in * ; do
    # convert-script.sh again stands in for the Dovecot conversion script
    ~/convert-script.sh -s $(pwd)/$FOLDER -d /mailstore/$U/.$FOLDER
  done
  cp .subscriptions /mailstore/$U/subscriptions
done

Some shell code like the above will convert the IMAP folders to Maildir format. The end result is that users will have to download all their mail again, as their MUA will think that every message has been deleted and replaced. But as all servers with significant amounts of mail or important mail were probably converted to Maildir a decade ago this shouldn't be a problem.
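For a handful of accounts the same conversion can also be sketched with Python's standard-library mailbox module. This is a minimal sketch only: like the shell version it copies message contents but does not preserve IMAP UIDs or any Dovecot-specific metadata.

```python
import mailbox

def mbox_to_maildir(mbox_path, maildir_path):
    """Copy every message from an mbox file into a Maildir."""
    src = mailbox.mbox(mbox_path)
    # create=True builds the cur/new/tmp structure if it doesn't exist
    dst = mailbox.Maildir(maildir_path, create=True)
    for msg in src:
        dst.add(msg)
    dst.flush()
    return len(dst)  # number of messages now in the Maildir
```

You would run it once per user, e.g. mbox_to_maildir('/var/spool/mail/u', '/mailstore/u'); the folder layout and subscriptions file still need handling as in the shell loops above.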

LUV: LUV October 2017 Workshop

Oct 21 2017 12:30
Oct 21 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

There will also be the usual casual hands-on workshop: Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 21, 2017 - 12:30

LUV: LUV Main October 2017 Meeting

Oct 3 2017 18:30
Oct 3 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000




  • TBA


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.
October 3, 2017 - 18:30

September 16, 2017

Dave Hall: Trying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that with the "Try Drupal" experience. Below is a walk-through of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking "come on, hurry up".

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens, a bunch of items update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK, so Drupal supports multiple languages, that's nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned the process partway through. I would love to see the conversion stats for the Try Drupal service. They must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign up page. How does that let me try Drupal?

Acquia drops you onto a page where you select your role, then you're presented with some marketing stuff and a form to request a demo. That is unless you're running an ad blocker, in which case when you select your role you get an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have another service added to the Try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think it is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


September 09, 2017

etbe: Observing Reliability

Last year I wrote about how great my latest Thinkpad is [1] in response to a discussion about whether a Thinkpad is still the “Rolls Royce” of laptops.

It was a few months after writing that post that I realised that I omitted an important point. After I had that laptop for about a year the DVD drive broke and made annoying clicking sounds all the time in addition to not working. I removed the DVD drive and the result was that the laptop was lighter and used less power without missing any feature that I desired. As I had installed Debian on that laptop by copying the hard drive from my previous laptop I had never used the DVD drive for any purpose. After a while I got used to my laptop being like that and the gaping hole in the side of the laptop where the DVD drive used to be didn’t even register to me. I would prefer it if Lenovo sold Thinkpads in the T series without DVD drives, but it seems that only the laptops with tiny screens are designed to lack DVD drives.

For my use of laptops this doesn't change the conclusion of my previous post. Now the T420 has been in service for almost 4 years, which makes the cost of ownership about $75 per year. $1.50 per week as a tax deductible business expense is very cheap for such a nice laptop. About a year ago I installed an SSD in that laptop, it cost me about $250 from memory and made it significantly faster while also reducing heat problems. The depreciation on the SSD about doubles the cost of ownership of the laptop, but it's still cheaper than a mobile phone and thus not in the category of things that are expected to last for a long time – while also giving longer service than phones usually do.

One thing that’s interesting to consider is the fact that I forgot about the broken DVD drive when writing about this. I guess every review has an unspoken caveat of “this works well for me but might suck badly for your use case”. But I wonder how many other things that are noteworthy I’m forgetting to put in reviews because they just don’t impact my use. I don’t think that I am unusual in this regard, so reading multiple reviews is the sensible thing to do.

September 06, 2017

LUV: Software Freedom Day 2017 and LUV Annual General Meeting

Sep 16 2017 12:00
Sep 16 2017 18:00
Electron Workshop, 31 Arden St. North Melbourne


Software Freedom Day 2017

It's that time of the year where we celebrate our freedoms in technology and raise a ruckus about all the freedoms that have been eroded away. The time of the year we look at how we might keep our increasingly digital lives under our own control and prevent prying eyes from seeing things they shouldn't. You guessed it: It's Software Freedom Day!

LUV would like to acknowledge Electron Workshop for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 16, 2017 - 12:00


August 29, 2017

LUV: LUV Main September 2017 Meeting: Cygwin and Virtualbox

Sep 5 2017 18:30
Sep 5 2017 20:30
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053



  • Duncan Roe, Cygwin
  • Steve Roylance, Virtualbox

Cygwin is a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.  It allows easy porting of many Unix programs without the need for extensive changes to the source code.


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 5, 2017 - 18:30


August 01, 2017

etbe: QEMU for ARM Processes

I’m currently doing some embedded work on ARM systems. Having a virtual ARM environment is of course helpful. For the i586 class embedded systems that I run it’s very easy to set up a virtual environment: I just have a chroot run from systemd-nspawn with the --personality=x86 option. I run it on my laptop for my own development and on a server my client owns so that they can deal with the “hit by a bus” scenario. I also occasionally run KVM virtual machines to test the boot image of i586 embedded systems (they use GRUB etc and are just like any other 32bit Intel system).

ARM systems have a different boot setup: there is a U-Boot loader that is fairly tightly coupled with the kernel. ARM systems also tend to have more unusual hardware choices. While the i586 embedded systems I support turned out to work well with standard Debian kernels (even though the reference OS for the hardware has a custom kernel), the ARM systems need a special kernel. I spent a reasonable amount of time playing with QEMU and was unable to make it boot from a U-Boot ARM image. The Google searches I performed didn’t turn up anything that helped me. If anyone has good references for getting QEMU to work for an ARM system image on an AMD64 platform then please let me know in the comments. While I am currently surviving without that facility, it would be a handy thing to have if it was relatively easy to do (my client isn’t going to pay me to spend a week working on this and I’m not inclined to devote that much of my hobby time to it).

QEMU for Process Emulation

I’ve given up on emulating an entire system and now I’m using a chroot environment with systemd-nspawn.

The package qemu-user-static has statically linked programs for emulating various CPUs on a per-process basis. You can run one of these as “/usr/bin/qemu-arm-static ./statically-linked-arm-program“. The Debian package qemu-user-static uses the binfmt_misc support in the kernel to automatically run /usr/bin/qemu-arm-static when an ARM binary is executed. So if you have copied the image of an ARM system to /chroot/arm you can run commands like the following to enter the chroot:

cp /usr/bin/qemu-arm-static /chroot/arm/usr/bin/qemu-arm-static
chroot /chroot/arm /bin/bash

Then you can create a full virtual environment with “/usr/bin/systemd-nspawn -D /chroot/arm” if you have systemd-container installed.

Selecting the CPU Type

There is a huge range of ARM CPUs with different capabilities. How this compares to the range of x86 and AMD64 CPUs depends on how you are counting (the i5 system I’m using now has 76 CPU capability flags). The default CPU type for qemu-arm-static is armv7l and I need to emulate a system with an armv5tejl. Setting the environment variable QEMU_CPU=pxa250 gives me armv5tel emulation.

The ARM Architecture Wikipedia page [2] says that in armv5tejl the T stands for Thumb instructions (which I don’t think Debian uses), the E stands for DSP enhancements (which probably isn’t relevant for me as I’m only doing integer maths), the J stands for supporting special Java instructions (which I definitely don’t need) and I’m still trying to work out what L means (comments appreciated).

So it seems clear that the armv5tel emulation provided by QEMU_CPU=pxa250 will do everything I need for building and testing ARM embedded software. The issue is how to enable it. For a user shell I can just put export QEMU_CPU=pxa250 in .login or something, but I want to emulate an entire system (cron jobs, ssh logins, etc).

I’ve filed Debian bug #870329 requesting a configuration file for this [1]. If I put such a configuration file in the chroot everything would work as desired.

To get things working in the meantime I wrote the below wrapper for /usr/bin/qemu-arm-static that calls /usr/bin/qemu-arm-static.orig (the renamed version of the original program). It’s ugly (I would use a config file if I needed to support more than one type of CPU) but it works.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  /* Force the CPU model before handing off to the real emulator */
  if(setenv("QEMU_CPU", "pxa250", 1))
  {
    printf("Can't set $QEMU_CPU\n");
    return 1;
  }
  execv("/usr/bin/qemu-arm-static.orig", argv);
  /* execv() only returns on failure */
  printf("Can't execute \"%s\" because of qemu failure\n", argv[0]);
  return 1;
}

July 31, 2017

etbe: Running a Tor Relay

I previously wrote about running my SE Linux Play Machine over Tor [1] which involved configuring ssh to use Tor.

Since then I have installed a Tor hidden service for ssh on many systems I run for clients. The reason is that it is fairly common for them to allow a server to get a new IP address by DHCP or accidentally set their firewall to deny inbound connections. Without some sort of VPN this results in difficult phone calls talking non-technical people through the process of setting up a tunnel or discovering an IP address. While I can run my own VPN for them I don’t want their infrastructure tied to mine and they don’t want to pay for a 3rd party VPN service. Tor provides a free VPN service and works really well for this purpose.

As I believe in giving back to the community I decided to run my own Tor relay. I have no plans to ever run a Tor Exit Node because that involves more legal problems than I am willing or able to deal with. A good overview of how Tor works is the EFF page about it [2]. The main point of a “Middle Relay” (or just “Relay”) is that it only sends and receives encrypted data from other systems. As the Relay software (and the sysadmin if they choose to examine traffic) only sees encrypted data without any knowledge of the source or final destination the legal risk is negligible.

Running a Tor relay is quite easy to do. The Tor project has a document on running relays [3], which basically involves changing 4 lines in the torrc file and restarting Tor.

If you are running on Debian you should install the package tor-geoipdb to allow Tor to determine where connections come from (and to not whinge in the log files).
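For reference, the four torrc lines are typically something along these lines (the nickname and contact address here are placeholders, not from the Tor document):

```
ORPort 9001
Nickname myrelaynick
ContactInfo tor-admin@example.com
ExitPolicy reject *:*
```

The ExitPolicy line is what keeps the relay a middle relay rather than an exit.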

ORPort [IPV6ADDR]:9001

If you want to use IPv6 then you need a line like the above with IPV6ADDR replaced by the address you want to use. Currently Tor only supports IPv6 for connections between Tor servers and only for the data transfer not the directory services.

Data Transfer

I currently have 2 systems running as Tor relays, both of them are well connected in a European DC and they are each transferring about 10GB of data per day which isn’t a lot by server standards. I don’t know if there is a sufficient number of relays around the world that the share of the load is small or if there is some geographic dispersion algorithm which determined that there are too many relays in operation in that region.

May 29, 2017

Stewart Smith: Fedora 25 + Lenovo X1 Carbon 4th Gen + OneLink+ Dock

As of May 29th 2017, if you want to do something crazy like use *both* ports of the OneLink+ dock to use monitors that aren’t 640×480 (but aren’t 4k), you’re going to need a 4.11 kernel, as everything else (for example 4.10.17, which is the latest in Fedora 25 at time of writing) will end you in a world of horrible, horrible pain.

To install, run this:

sudo dnf install <the 4.11 kernel RPM URLs from Koji; the original links were lost>

This grabs a kernel that’s sitting in testing and isn’t yet in the main repositories. However, I can now see things on monitors, rather than 0 to 1 monitor (most often 0). You can also dock/undock and everything doesn’t crash in a pile of fail.

I remember a time when you could fairly reliably buy Intel hardware and have it “just work” with the latest distros. It’s unfortunate that this is no longer the case, and it’s more of a case of “wait six months and you’ll still have problems”.


(at least Wayland and X were bug for bug compatible?)

May 03, 2017

Stewart Smith: API, ABI and backwards compatibility are a hard necessity

Recently, I was reading a thread on LKML on a proposal to change the behavior of the open system call when confronted with unknown flags. The thread is worth a read as the topic of augmenting things that exist probably by accident to be “better” is always interesting, as is the definition of “better”.

Keeping API and/or ABI compatibility is something that isn’t a new problem, and it’s one that people are pretty good at sometimes messing up.

This problem does not go away just because “we have cloud now”. In any distributed system, in order to upgrade it (or “be agile” as the kids are calling it), you by definition are going to have either downtime or at least two versions running concurrently. Thus, you have to have your interfaces/RPCs/APIs/ABIs/protocols/whatever cope with changes.

You cannot instantly upgrade the world; it happens gradually. You also have to design for at least three concurrent versions running. One is the original, the second is your upgrade, and the third is the urgent fix because the upgrade is quite broken in some new way you only discover in production.

So, the way you do this? Never ever EVER design for N-1 compatibility only. Design for going back a long way, much longer than you officially support. You want to have a design and programming culture of backwards compatibility to ensure you can both do new and exciting things and experiment off to the side.
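As a small illustration of that culture (the message format and field names here are hypothetical, not from the LKML thread): a reader that ignores unrecognised fields and only rejects what it genuinely cannot interpret lets original, upgraded and urgent-fix versions interoperate.

```python
import json

def parse_job_request(raw):
    """Parse a (hypothetical) job request, tolerating newer senders."""
    msg = json.loads(raw)
    # Reject only what we cannot interpret at all, not merely what we
    # don't recognise; a far-future protocol version really is too new.
    if msg.get("proto_version", 1) > 3:
        raise ValueError("unsupported protocol version")
    return {
        "job_id": msg["job_id"],             # required since v1
        "priority": msg.get("priority", 0),  # added in v2, defaulted for v1 senders
        # Any extra fields from a newer sender are silently ignored.
    }
```

The open() discussion above is about exactly this tension: once a syscall silently ignores unknown flags, giving those flags meaning later breaks someone.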

It’s worth going and rereading Rusty’s API levels posts from 2008.

April 27, 2017

Dave Hall: Continuing the Conversation at DrupalCon and Into the Future

My blog post from last week was very well received and sparked a conversation in the Drupal community about the future of Drupal. That conversation has continued this week at DrupalCon Baltimore.

Yesterday during the opening keynote, Dries touched on some of the issues raised in my blog post. Later in the day we held an unofficial BoF. The turnout was smaller than I expected, but we had a great discussion.

Drupal moving from a hobbyist and business tool to being an enterprise CMS for creating "ambitious digital experiences" was raised in the Driesnote and in other conversations, including the BoF. We need to acknowledge that this has happened and consider it an achievement. Some people have been left behind as Drupal has grown up. There is probably more we can do to help these people. Do we need more resources to help them skill up? Should we direct them towards WordPress, Backdrop, Squarespace, Wix, etc.? Is it possible to build smaller sites that eventually grow into larger sites?

In my original blog post I talked about "peak Drupal" and used metrics that supported this assertion. One metric missing from that post is dollars spent on Drupal. It is clear that the picture is very different when measuring success using budgets. There is a general sense that a lot of money is being spent on high end Drupal sites. This has resulted in fewer sites doing more with Drupal 8.

As often happens when trying to solve problems with Drupal, the BoF descended into talking about technical solutions. Technical solutions and implementation details have a place. I think it is important for the community to move beyond this and start talking about Drupal as a product.

In my mind Drupal core should be a content management framework and content hub service for building compelling digital experiences. For the record, I am not arguing Drupal should become API only. Larger users will take this and build their digital stack on top of this platform. This same platform should support an ecosystem of Drupal "distros". These product focused projects target specific use cases. Great examples of such distros include Lightning, Thunder, Open Social, aGov and Drupal Commerce. For smaller agencies and sites a distro can provide a great starting point for building new Drupal 8 sites.

The biggest challenge I see is continuing this conversation as a community. The majority of the community toolkit is focused on facilitating technical discussions and implementations. These tools will be valuable as we move from talking to doing, but right now we need tools and processes for engaging in silver level discussions so we can build platinum level products.

April 21, 2017

Dave Hall: Many People Want To Talk

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

February 22, 2017

Julien Goodwin: Making a USB powered soldering iron that doesn't suck

Today's evil project was inspired by a suggestion after my talk on USB-C & USB-PD at this year's Open Hardware miniconf.

Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).

Building this was actually trivial: I just wired the 20v output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20v rather than 24v is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, both of which have very nice soldering stations, so I don't actually expect to ever use this).

If you were to turn this into a real product you could in fact do much better, by doing both power negotiation and temperature control in a single micro. The driver could be switched from a simple FET to a boost converter, controlling the power draw via the output voltage, and the heater could be turned off by simply disabling the regulator. By chance, the heater resistance of the Hakko 907 clone handpieces is such that, combined with USB-PD power rules, you'd always be boost converting, never needing to reduce voltage.

With such a driver you could run this from anything starting with a 5v USB-C phone charger or battery (15W for the nicer ones), 9v at up to 3A off some laptops (for ~25W), or all the way to 20V@5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D) is also at 20v@3A a common limit for chargers (and also many cables, only fixed cables, or those specially marked with an ID chip can go all the way to 5A). As higher power USB-C batteries start becoming available for laptops this becomes a real option for on-the-go use.
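The power levels above are just Ohm's law arithmetic. As a sketch (the heater resistance is an assumption derived from a nominal 24v / 50W element, not a measured value):

```python
# Standard USB-PD fixed profiles quoted above: (volts, amps)
PD_PROFILES = [(5, 3), (9, 3), (15, 3), (20, 3), (20, 5)]

def negotiated_watts(volts, amps):
    """Maximum power available from a given USB-PD profile."""
    return volts * amps

# Assumed Hakko 907-style element: nominal 24 V at 50 W
HEATER_OHMS = 24 ** 2 / 50  # R = V^2 / P, about 11.5 ohms

def direct_drive_watts(volts, ohms=HEATER_OHMS):
    """Power into the heater if the bus voltage were applied directly."""
    return volts ** 2 / ohms

# Even at 20 V, direct drive only reaches about 35 W, short of a 60 W
# target, which is why the driver would always be boost converting.
```

This also reproduces the figures in the text: 5v at 3A is 15W, and 20v at 3A is the 60W cap common to many chargers and cables.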

Here's a photo of it running from a Chromebook Pixel charger:

February 14, 2017

Stewart Smith: j-core + Numato Spartan 6 board + Fedora 25

A couple of changes made it easy for me to get going:

  • In order to stop ModemManager from trying to treat the board as a “modem”, create /etc/udev/rules.d/52-numato.rules with the following content:
    # Make ModemManager ignore Numato FPGA board
    ATTRS{idVendor}=="2a19", ATTRS{idProduct}=="1002", ENV{ID_MM_DEVICE_IGNORE}="1"
  • You will need to install python3-pyserial and minicom
  • The minicom command line I used was:
    sudo stty -F /dev/ttyACM0 -crtscts && minicom -b 115200 -D /dev/ttyACM0

and along with the instructions on the project site, I got it to load a known good build.

January 30, 2017

Stewart SmithRecording of my LCA2017 talk: Organizational Change: Challenges in shipping open source firmware

January 28, 2017

Julien GoodwinCharging my ThinkPad X1 Gen4 using USB-C

As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.

I have a rather excessive number of laptops, at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).

At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However, both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work. All the newer machines switched to the newer rectangular "slim" style power connector, and while adapters exist, I decided to go in a different direction.

One of the less-touted features of USB-C is USB-PD[1], which allows devices to be fed up to 100W of power, and can do so while using the port for data (or the other great feature of USB-C, alternate modes, such as DisplayPort, great for docks), which is starting to be used as a way to charge laptops, such as the Chromebook Pixel 2, various models of the Apple MacBook line, and more.
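During negotiation a USB-PD source advertises its power levels as 32-bit Power Data Objects (PDOs). As a rough sketch of what that looks like for the "fixed supply" case, using the field layout from the published USB-PD spec (the example value is made up, not captured from any particular charger):

```python
# Sketch: decoding a USB-PD fixed-supply source PDO, per the spec's field
# layout: bits 31:30 = 00 for a fixed supply, bits 19:10 = voltage in
# 50 mV units, bits 9:0 = maximum current in 10 mA units.
def decode_fixed_pdo(pdo: int):
    if (pdo >> 30) & 0b11 != 0b00:
        raise ValueError("not a fixed-supply PDO")
    volts = ((pdo >> 10) & 0x3FF) * 0.05  # 50 mV units
    amps = (pdo & 0x3FF) * 0.01           # 10 mA units
    return volts, amps, volts * amps

# Hypothetical example: 20 V (400 * 50 mV) at 5 A (500 * 10 mA)
pdo = (400 << 10) | 500
print(decode_fixed_pdo(pdo))  # (20.0, 5.0, 100.0)
```

A sink (such as a laptop) picks one of the advertised PDOs and requests it; that request for a 20v object is exactly what the PD sniffer later in this story was watching for.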

Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time) I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations which demand 120W or more).

After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed.

Devkits were ordered to experiment with, and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on 2-layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.

It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did there for some cases that weren't clear, then almost a month for the actual PCB layout, with much of the time being sucked up learning a tool that was brand new to me, and also fairly obtuse.

By mid-June I had a PCB which should (but, spoiler, wouldn't) work. However, as mentioned, the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small-scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, plus a slightly custom pick & place file that includes references to the exact components I want by Digikey[link] part number. Best of all the pricing was completely reasonable, with a first test run of six boards only costing me US$30 each.

Late in June I received a mail from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector, and they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for them to arrive in Sydney, courtesy of a coworker.

I tried to modify a connector by removing all the through-hole board locks, keeping just the surface mount pins, but I was unsuccessful, and that's where the project stalled until mid-October, when I was in California myself and was able to get help from a coworker who can perform miracles of surface mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.

At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I reinforced the connector, which was only really held on by the surface mount pins and was highly likely to rip off the board, by covering it in epoxy. Then I knocked up some USB A plug/socket to bare-wire test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I had any shorts in the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.

Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased I managed to get the board programmed. However, the behaviour of the board didn't change with (what I thought was) real firmware. I used an oscilloscope to verify the flash was being read, and a Twinkie to sniff the PD negotiation, which confirmed that no request for 20v was being sent. This was where I finished that day.

Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5v, not 3.3v), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.

Sure enough it worked first try. Once I confirmed 20v was coming from the output ports I attached it to my recently acquired HP 6051A DC load where it happily sank 45W for a while, then I attached the cable part of a Lenovo barrel to slim adapter and plugged it into my X1 where it started charging right away.

Last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days, while visiting my dad down in Gippsland, I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.

Of course at CES Lenovo announced that this year's ThinkPads would feature USB-C ports and allow charging through them, and due to laziness I never got around to replacing my T430, so I'm planning to order a T470 as soon as they're available, making my adapter obsolete.

Rough timeline:
  • April 21st 2016, decide to start working on the project
  • April 28th, devkits arrive
  • May 8th, schematic largely complete, work starts on PCB layout
  • June 14th, order sent to CM
  • ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
  • early August, boards arrive from California
  • Somewhere here I try, and fail, to reflow a modified connector onto a board
  • October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
  • January 6th 2017, finally got around to reinforce the connector with epoxy and started testing, try loading firmware but no dice
  • January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
  • January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.

1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.

August 01, 2013

Tim ConnorsNo trains for the corporatocracy

Sigh, look, I know we don't actually live in a democracy (but a corporatocracy instead), and I should never expect the relevant ministers to care about my meek little protestations otherwise, but I keep writing these letters to ministers for transport anyway, under the vague hope that it might remind them that they're ministers for transport, and not just roads.

Dear Transport Minister, Terry Mulder,

I encourage you and your fellow ministers to read this article
("Tracking the cost", The Age, June 13 2009) from 2009, back when the
Liberals claimed to have a very different attitude, and when
circumstances seemed to mirror the current time:

The costs of building the first extension to the Melbourne public
transport system in 80 years eventually blew out from $8M to $500M
over the short life of the South Morang project, despite it being a
much smaller project than the entire rail lines that cities such as
Perth have built more cheaply in recent years.

The increased cost is explained away as a safety requirement - it
being so important to now start building grade separated lines rather
than level crossings regardless of circumstances. Perceived safety
trumps real safety (I'd much rather be in a train than suffer one of
the 300 Victorian deaths on the roads each year), but more sinister
is that because of this inflated expense, we'll probably never see
another rail line like this built at all in Melbourne (although we'll
build at public expense, at more than 10 times the cost, a wonderful
road tunnel that no-one but Lindsay Fox will use).

I suspect the real reason for grade separation is not safety, but to
cause less inconvenience to car drivers stuck for 30 seconds at these
minor crossings. Since the delays at level crossings are a roads
problem, and collisions of errant motorists with trains at level
crossings is a roads problem, and the South Morang railway reservation
existed far before any of the roads were put in place, I'm wondering
whether you can answer why the blowout in costs of construction of
train lines comes out of the public transport budget, and not at the
expense of what causes these problems in the first place - the roads?
These train lines become harder to build because of an artificial cost
inflation caused by something that will be less of a problem if only
we could build more rail lines and actually improve the Melbourne
public transport system and make it attractive to use, for once (we've
been waiting for 80 years).

Yours sincerely,

And a little while later, the reply!

July 01, 2013

Tim ConnorsYarra trail pontoon closures

I do have to admit, I had some fun writing this one:

Dear Transport Minister, Terry Mulder (Denis Napthine, Local MP Ted Baillieu, Ryan Smith MP responsible for Parks Victoria, Parks Victoria itself, and Bicycle Victoria CCed),

I am writing about the sudden closure of the Main Yarra bicycle trail around Punt Road. The floating sections of the trail have been closed for the foreseeable future because of some over-zealous lawyer at Parks Victoria who has decided that careless riders might injure themselves on the rare occasion when the pontoon is both icy, and resting on the bottom of the Yarra at very low tides, sloping sideways at a minor angle. The trail has been closed before Parks Victoria have even planned for how they're going to rectify the problem with the pontoons. Instead, the lawyers have forced riders to take to parallel streets such as Swan St (which I took tonight in the rain, negotiating the thin strip between parked cars far enough from their doors being flung out illegally by careless drivers, and the wet tram tracks beside them). Obviously, causing riders to take these detours will be very much less safe than just keeping the trail open until a plan is developed, but I can see why Parks Victoria would want to shift the legal burden away from them.

I have no faith that the pontoon will be fixed in the foreseeable future without your intervention, because of past history -- that trail has been partially closed for about 18 months out of the past 3 years due to the very important works on the freeway above (keeping the economy going, as they say, by digging ditches and filling them immediately back up again).

Since we're already wasting $15B on an east-west freeway tunnel that will do absolutely nothing to alleviate traffic congestion because the outbound (Easterly direction) freeway is already at capacity in the afternoon without the extra induced traffic this project will add, I was wondering if you could spare a few million to duplicate the only easterly bicycle trail we have, so that these sorts of incidents don't reoccur and have so much impact on riders in the future.

I do hope that this trail will be fixed in a timely fashion, before I and the 3000-4000 other cyclists who currently use the trail every day resort to riding through any of your freeway tunnels.

Yours sincerely,


June 18, 2013

Julien GoodwinOn programming languages

One thing people might be surprised about my job is that although I'm in network operations I write a lot of code, mostly for tools we use to monitor and maintain the network, but I've also had a few features and bug-fixes get into Google-wide (well, eng-wide) internal tools. This has led me to use a bunch of languages, quite probably more than most Google software engineers.

Languages I've used in the last three months:
  • C++
  • Java
  • Go
  • Python
  • bash
  • JavaScript & HTML (including several templating languages)

In the two years since I started there's also:
  • perl
  • tcsh
  • PHP
  • SLAX (an alternate-syntax version of XSLT)

These end up being a fairly unsurprising mix of standard sysadmin, web and systems programming fare, with the real outliers being Go, the new C-ish systems language created at Google (several of the people working on the language sit just on the other side of a wall from me), and tcsh & SLAX, which come from working with Juniper's JunOS, which is built on FreeBSD with an XML-based configuration.

April 30, 2013

Julien GoodwinWhy I care about the NBN

First of all, yes I'm a general Greens / Labor supporter so it's not surprising that I like the NBN. I work in a tech field (specifically datacomms) that also generally supports the NBN. I have plenty of reasons to support the NBN from those, but here's the "how it affects me" ones.

I live in inner-city Sydney, specifically Ultimo, literally in the afternoon shadow of (probably) the most wired building in the country, the GlobalSwitch datacenter, yet on a bad day I get 1Mb or so through my Internet connection.

I do have two basic options for a home Internet connection where I live, both through Telstra's infrastructure: either their HFC cable network, or classic copper. As Telstra (last time I tried) were unable to sell a real business-class service on HFC, I use ADSL, and since I consider IPv6 support a requirement, I use Internode (there's other reasons I use them as well, but IPv6 sadly is still hard to get from other providers). Due to the location of my apartment 3G/4G services aren't an option even if they were fast enough (or affordable).

Unfortunately the copper in Ultimo is in a sad state, with high cross-talk and attenuation. I suspect this is due to it passing underground through Wentworth Park[1], which is only a metre or two above sea level, so water damage is highly likely.

Even on a good day I only barely sustain ~8Mb down, ~1Mb up, with no disconnects, on a bad day it can be as low as 1Mb down, with multiple disconnects per hour.

It's the last point I particularly care about, regular disconnects make the service unreliable and are the aspect that is most irritating.

Speed, although people often want it, is of limited value to me: once I can do ~20Mb down and ~5Mb up that's enough for me, handling file uploads fast enough and offering a nice buffer for video conferencing (yes, I really do join video conferences from home). I suspect I will subscribe to a 50/20 plan if/when the NBN is available in my area. In theory NBNco should have commenced construction in the Pyrmont/Glebe/Ultimo/Haymarket area this month (per their map), but I'm not aware of anyone who has been contacted to confirm this.

Because I'm in an apartment complex it's likely that the coalition's plan would have a node in the basement (there's already powered HFC amps in the basement car parks so it wouldn't be unprecedented), this matches some versions of the NBN plan, although the current plan (last I saw) is fibre to each apartment. Once a node is within a building cable damage due to water is unlikely and I'd probably be able to sustain a decent service, but if I ever moved to a townhouse then I'd be back to potentially dodgy underground cable.

I don't believe forcing Telstra & Optus to convert their HFC networks into wholesale-capable networks is a sensible idea, for a variety of reasons, the two major ones being whether they would still be able to send TV, and the need to split the HFC segments into much smaller sections to sustain throughput. Even with the current low take-up rate of cable Internet I know many people in Melbourne who find their usable throughput massively degrades at peak times, something that would almost certainly get worse, not better, if HFC became the sole method of high-speed Internet in a region.

I also still maintain my server and Internet service at my old place in Melbourne, and will get at least 50/20 there, should it ever be launched. When I first got ADSL2+ service there I managed to sustain 24/2, but now only 12/1, presumably due to degrading lines, which makes me wonder how fast and reliable a VDSL-based FTTN would actually be.

1: The only real alternate path would add a kilometre or more of cable to avoid that low lying area.

April 14, 2013

Tim Connors

Oh well, if The Age aren't going to publish my Thatcher rant, I will:

Jan White (Letters, 11 Apr) is heavily misguided if she believes that Thatcher was one of Britain's greatest leaders. For whom? By any metric 70% of Brits cared about, she was one of the worst. Any harmony, strength of character and respect Brits may be missing now would be due to her having nearly destroyed everything about British society with her Thatchernomics. Her funeral should be privatised and definitely not funded by the state as it is going to be. Instead, it could be funded by the long queue of people who want to dance on her grave.

March 21, 2013

Tim ConnorsRagin' on the road

Since The Age didn't publish my letter, my 3 readers ought to see it anyway:

Reynah Tang of the Law Institute of Victoria says that road rage offences shouldn't necessarily lead to loss of licence ("Offenders risk losing their licence", The Age, Mar 21). He misses the point -- a vehicle is a weapon. Road ragers demonstrably do not have enough self control to drive. They have already lost their temper when in control of such a weapon, so they must never be given a licence to use that weapon again (the weapon should also be forfeited). The same is presumably true of gun murderers after their initial jail time (which road ragers rarely are given). RACV's Brian Negus also doesn't appear to realise that a driving licence is a privilege, not an automatic right. You can still have all your necessary mobility without your car - it's not a human rights issue.

It was less than 200 words even dammit! But because the editor didn't check the basic arithmetic in a previous day's letter, they had to publish someone's correction.

November 18, 2012

Ben McGinnesFixed it

I've fixed the horrible errors that were sending my tweets here; it only took a few hours.

To do that I've had to disable cross-posting and it looks like it won't even work manually, so my updates will likely only occur on my own domain.

Details of the changes are here. They include better response times for my domain and no more Twitter posts on the main page, which should please those of you who hate that. Apparently that's a lot of people, but since I hate being inundated with FarceBook's crap I guess it evens out.

The syndicated feed for my site around here somewhere will get everything, but there's only one subscriber to that (last time I checked) and she's smart enough to decide how she wants to deal with that.

Ben McGinnesTweet Sometimes I amaze even myself; I remembered the pa…

Sometimes I amaze even myself; I remembered the passphrases to old PGP keys I thought had been lost to time. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet These are the same keys I referred to in the PPAU…

These are the same keys I referred to in the PPAU #NatSecInquiry submission as being able to be used against me. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet Now to give them their last hurrah: sign my curren…

Now to give them their last hurrah: sign my current key with them and then revoke them! #crypto

Originally published at Organised Adversary. Please leave any comments there.

October 26, 2011

Donna Benjaminheritage and hysterics

Originally published at KatteKrab. Please leave any comments there.

This gorgeous photo of The Queen in Melbourne on the Royal Tram made me smile this morning.

I've long been a proponent of an Australian Republic - but the populist hysteria of politicians, this photo, and the Kingdom of the Netherlands is actually making me rethink that position.

At least for today.  Long may she reign over us.

"Queen Elizabeth II smiles as she rides on the royal tram down St Kilda Road"
Photo from Getty Images published on

October 02, 2011

Donna BenjaminSticks and Stones and Speech

Originally published at KatteKrab. Please leave any comments there.

THE law does treat race differently: it is not unlawful to publish an article that insults, offends, humiliates or intimidates old people, for instance, or women, or disabled people. Professor Joseph, director of the Castan Centre for Human Rights Law at Monash University, said in principle ''humiliate and intimidate'' could be extended to other anti-discrimination laws. But historically, racial and religious discrimination is treated more seriously because of the perceived potential for greater public order problems and violence.

Peter Munro The Age  2 Oct 2011

Ahaaa. Now I get it! We've been doing it wrong. 

Racial vilification is against the law because it might be more likely to lead to violence than vilifying women, the elderly or the disabled.

Interesting debates and articles about free speech and discrimination are bobbing up and down in the flotsam and jetsam of the Bolt decision. Much of it seems to hinge on some kind of legal see-saw around notions of a bad law about bad words.

I've always been a proponent of the sticks and stones philosophy.  For those not familiar, it's the principle behind a children's nursery rhyme.

Sticks and Stones may break my bones
But words will never hurt me

But I'm increasingly disturbed by the hateful culture of online comment.  I am a very strong proponent of the human right to free expression, and abhor censorship, but I'm seriously sick of "My right to free speech" being used as the ultimate excuse for people using words to denigrate, humiliate, intimidate, belittle and attack others, particularly women.

We should defend a right to free speech, but condemn hate speech whenever and wherever we see it.  Maybe we actually need to get violent to make this stop? Surely not.

September 20, 2011

Donna BenjaminQantas Pilots

Originally published at KatteKrab. Please leave any comments there.

The Qantas Pilot Safety culture is something worth fighting to protect. I read Malcolm Gladwell's Outliers whilst on board a Qantas flight recently. While Qantas itself isn't mentioned in the book, a footnote listed Australia as having the 2nd lowest Pilot Power-Distance Index (PDI) in the world. New Zealand had the lowest. The entire chapter "The Ethnic Theory of Plane Crashes" is the strongest argument I've seen which explains the Qantas safety record. The experience of pilots and relationships amongst the entire air crew is a crucial differentiating factor. Other airlines work hard to develop this culture, often needing to work against their own cultural patterns to achieve it. At Qantas, and likely at other Australian airlines too, this culture is the norm.

I want Australian Qantas Pilots flying Qantas planes. I'd like an Australian in charge too.

If you too support Qantas Pilots - go to their website, sign the petition.

Do your own reading.

G.R. Braithwaite, R.E. Caves, J.P.E. Faulkner, Australian aviation safety — observations from the 'lucky' country, Journal of Air Transport Management, Volume 4, Issue 1, January 1998: 55-62.

Anthony Dennis, What it takes to become a Qantas pilot, 8 September 2011.

Ashleigh Merritt, Culture in the Cockpit: Do Hofstede's Dimensions Replicate? Journal of Cross-Cultural Psychology, May 2000 31: 283-301.

Matt Phillips, Malcolm Gladwell on Culture, Cockpit Communication and Plane Crashes, WSJ Blogs, 4 December 2008.


September 18, 2011

Donna BenjaminRegistering for LCA2012

Originally published at KatteKrab. Please leave any comments there.

I am right now, at this very minute, registering for LCA2012 in Ballarat in January. Creating my planet feed. Yep. Uhuh.

I reckon the "book a bus" feature of rego is pretty damn cool.  I won't be using it, because I'll be driving up from Melbourne. Serious kudos to the Ballarat team. Also nice to see they'll add buses from Avalon airport as well as from Tullamarine airport if there's demand.

Too cool.