Planet LUV

September 20, 2018

etbe: Words Have Meanings

As a follow-up to my post with Suggestions for Trump Supporters [1], I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try to reclaim the word for its original use or whether we should just accept that’s a lost cause. But generally it’s clear from context which meaning is intended. There is also some overlap between the definitions: some people who like to experiment with computers conduct experiments with computers they aren’t permitted to use, and some people who are career computer criminals started out experimenting with computers for fun.

But sometimes words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle; the left should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn’t add any value to any discussion apart from demonstrating that the person who makes such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist”, which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone were to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist; they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they had said “Russia has always been shit” then it would be a clear statement; people could agree or disagree with it, but everyone would know what was meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different from most of the world’s (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and that Americans should reject the Republican party because of its Russian connection, it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

September 17, 2018

LUV: LUV October 2018 Workshop

Oct 20 2018 12:30
Oct 20 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Topic To Be Announced

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV: LUV October 2018 Main Meeting

Oct 2 2018 18:30
Oct 2 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, October 2, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • To Be Announced

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


September 11, 2018

etbe: Thinkpad X1 Carbon Gen 6

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when the battery is full – maybe due to my history of buying refurbished laptops I had never had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying: there are two devices for mouse-type input, so I need to configure Xorg to read only from the TrackPoint.
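For reference, one way of doing that is an Xorg InputClass snippet that tells the server to ignore touchpad devices entirely. This is a sketch rather than my exact config; the file name is arbitrary, and it assumes the libinput/evdev stack flags the device as a touchpad:

```
# /etc/X11/xorg.conf.d/30-disable-touchpad.conf
Section "InputClass"
        Identifier "ignore touchpad"
        MatchIsTouchpad "on"
        Option "Ignore" "on"
EndSection
```

With the touchpad ignored, the TrackPoint remains the only pointer device Xorg reads from.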

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only an HDMI port for display; the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside, this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so, requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resume from suspend requires holding down the power button. I’m not sure if it’s a hardware or a software issue. But suspend on lid close works correctly, as does suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad, and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down; that said, I got a thermal warning during the Debian install process, which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling, and a laptop that has cooling problems before getting any dust inside is probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

September 09, 2018

etbe: Fail2ban

I’ve recently set up fail2ban [1] on a bunch of my servers. Its purpose is to ban IP addresses associated with password guessing – or whatever other criteria for badness you configure. It supports Linux, OpenBSD [2] and probably most Unix type OSs too. I run Debian so I’ve been using the Debian packages of fail2ban.

The first thing to note is that it is very easy to install and configure (for the common cases at least). For a long time installing it had been on my todo list, but I didn’t make the time to do it. After installing it I realised that I should have done it years ago; it was so easy.

Generally to configure it you just create a file under /etc/fail2ban/jail.d with the settings you want, any settings that are different from the defaults will override them. For example if you have a system running dovecot on the default ports and sshd on port 999 then you could put the following in /etc/fail2ban/jail.d/local.conf:

[dovecot]
enabled = true

[sshd]
port = 999

By default the Debian package of fail2ban only protects sshd.

When fail2ban is running on Linux the command “iptables -L -n -v|grep f2b” will show the rules that match inbound traffic and the names of the chains they direct traffic to. To see if fail2ban has acted to protect a service you can run a command like “iptables -L f2b-sshd -n” to see the iptables rules.

The fail2ban entries in the INPUT chain go before other rules, so it should work with any custom iptables rules you have configured as long as either fail2ban is the last thing to be started or your custom rules don’t flush old entries.

There are hooks for sending email notifications etc. That seems excessive to me, but it’s always good to have options to extend a program.

In the past I’ve tried using kernel rate limiting to minimise hostile activity. That didn’t work well as there are legitimate end users who do strange things (like a user who setup their web-cam to email them every time it took a photo).
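For reference, the sort of kernel rate limiting meant here can be done with the iptables “recent” module. This is a sketch of the general technique, not the exact rules I used; it assumes sshd on port 22 and drops an IP that opens 4 or more new connections in 60 seconds:

```
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name SSH --set
```

The problem is exactly as described: a legitimate but unusual client (like that web-cam) trips the limit just as readily as an attacker does.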

Conclusion

Fail2ban has some good features. I don’t think it will do much good at stopping account compromise as anything that is easily guessed could be guessed using many IP addresses and anything that has a good password can’t be guessed without taking many years of brute-force attacks while also causing enough noise in the logs to be noticed. What it does do is get rid of some of the noise in log files which makes it easier to find and fix problems. To me the main benefit is to improve the signal to noise ratio of my log files.

September 08, 2018

etbe: Google and Certbot (Letsencrypt)

Like most people I use Certbot AKA Letsencrypt to create SSL certificates for my sites. It’s a great service, very easy to use and it generally works well.

Recently the server running www.coker.com.au among other domains couldn’t get a certbot certificate renewed; here’s the error message:

Failed authorization procedure. mail.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "mail.gw90.de" was considered an unsafe domain by a third-party API, listen.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "listen.gw90.de" was considered an unsafe domain by a third-party API

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: mail.gw90.de
   Type:   unauthorized
   Detail: "mail.gw90.de" was considered an unsafe domain by a third-
   party API

   Domain: listen.gw90.de
   Type:   unauthorized
   Detail: "listen.gw90.de" was considered an unsafe domain by a
   third-party API

It turns out that Google Safebrowsing had listed those two sites. Visit https://listen.gw90.de/ or https://mail.gw90.de/ today (and maybe for some weeks or months in the future) using Google Chrome (or any other browser that uses the Google Safebrowsing database) and it will tell you the site is “Dangerous” and probably refuse to let you in.

One thing to note is that neither of those sites has any real content; I only set them up in Apache to get SSL certificates that are used for other purposes (like mail transfer, as the name suggests). If Google had listed my blog as a “Dangerous” site I wouldn’t be so surprised; WordPress has had more than a few security issues in the past and it’s not implausible that someone could have compromised it and made it serve up hostile content without me noticing. But the two sites in question have a DocumentRoot that is owned by root and was (until a few days ago) entirely empty; now they have an index.html that just says “This site is empty”. It’s theoretically possible that someone could have exploited a RCE bug in Apache to make it serve up content that isn’t in the DocumentRoot, but that seems unlikely (why waste an Apache 0day on one of the less important of my personal sites?). It is possible that the virtual machine in question was compromised (a VM on that server has been compromised before [1]) but it seems unlikely that the attackers would host bad things on those web sites if they did.

Now it could be that some other hostname under that domain had something inappropriate (I haven’t yet investigated all possibilities). But if so, Google’s algorithm has a couple of significant problems. Firstly, if they are blacklisting sites related to one that had an issue, it would probably make more sense to blacklist by IP address (which would mean including some coker.com.au entries on the same IP). In the case of a compromised server it seems more likely to have multiple bad sites on one IP than multiple bad subdomains on different IPs (given that none of the hostnames in question have changed IP address recently and Google of course knows this). The next issue is that extending blacklisting doesn’t make sense unless there is evidence of hostile intent. I’m pretty sure that Google won’t blacklist all of ibm.com when (not if) a server in that domain gets compromised. I guess they have different policies for sites of different scale.

Both I and a friend have reported the sites in question to Google as not being harmful, but that hasn’t changed anything yet. I’m very disappointed in Google, listing sites, not providing any reason why (it could be a hostname under that domain was compromised and if so it’s not fixed yet BECAUSE GOOGLE DIDN’T REPORT A PROBLEM), and not removing the listing when it’s totally obvious there’s no basis for it.

While it makes sense for certbot to not issue SSL certificates to bad sites, it seems that they haven’t chosen a great service for determining which sites are bad.

Anyway the end result was that some of my sites had an expired SSL certificate for a day. I decided not to renew certificates before they expired to give Google a better chance of noticing their mistake and then I was busy at the time they expired. Now presumably as the sites in question have an invalid SSL certificate it will be even harder to convince anyone that they are not hostile.

August 29, 2018

LUV: Software Freedom Day 2018 and LUV AGM

Sep 15 2018 13:00
Sep 15 2018 17:00
Location: 
Electron Workshop, 31 Arden Street North Melbourne 3051

It's time once again to get excited about all the benefits that Free and Open Source Software have given us over the past year and get together to talk about how Freedom and Openness can improve our human rights, our privacy, our security and our communities. It's Software Freedom Day!

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV: LUV September 2018 Main Meeting: New Developments in Supercomputing

Sep 4 2018 18:30
Sep 4 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, September 4, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


August 25, 2018

Dave Hall: AWS Parameter Store

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The down side of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have 2 services to choose from: Secrets Manager and Systems Manager Parameter Store. As the name suggests, Secrets Manager provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use my Parameter Store namespace migration script.

Access Controls

Like most Amazon services, Parameter Store uses IAM to control access. 

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, having a key per service per environment becomes difficult to manage. In this case share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:
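The policy itself didn't survive aggregation; a sketch of what such a read-only policy might look like (account ID, region, path and key ID are placeholders, not the author's values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:ap-southeast-2:123456789012:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:ap-southeast-2:123456789012:key/11111111-2222-3333-4444-555555555555"
    }
  ]
}
```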

To allow your developers to manage the parameters in dev you will need a policy that looks like this:
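Again the original policy was lost in aggregation; a sketch of a developer policy for the dev hierarchy might look like this (all identifiers are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:PutParameter",
        "ssm:DeleteParameter",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:ap-southeast-2:123456789012:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt"],
      "Resource": "arn:aws:kms:ap-southeast-2:123456789012:key/11111111-2222-3333-4444-555555555555"
    }
  ]
}
```

Writing a SecureString parameter needs kms:Encrypt on the key as well as ssm:PutParameter on the path.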

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in the Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies. You will find this becomes annoying to manage. Use hierarchies to group your values by application and environment. Within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters. 

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your different types. I found it easiest to encode everything as JSON. When pulling values from the store I JSON-decode them. The down side is strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.
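The encode/decode convention is just a thin wrapper over the standard JSON library. A minimal sketch (function names are my own, not from the post):

```python
import json

def encode_value(value):
    """Encode any JSON-serialisable value as a string for Parameter Store."""
    return json.dumps(value)

def decode_value(raw):
    """Decode a string pulled from Parameter Store back into a Python value."""
    return json.loads(raw)

# The down side: plain strings gain double quotes in the store.
print(encode_value("secret"))           # prints "secret" (quotes included)
print(decode_value('5'))                # prints 5 (an int, not a string)
print(decode_value('{"retries": 3}'))   # prints {'retries': 3}
```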

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straight forward. Once again it is well documented by Amazon.

Finally you can maintain parameters with a little bit of code. In this example I do it with Python.
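The post's actual script isn't reproduced here; a minimal sketch of the idea might look like the following, where put_parameters and the /my-app-dev path are hypothetical names and ssm is a boto3 SSM client (e.g. boto3.client("ssm")):

```python
import json

def put_parameters(ssm, path, values, kms_key_id):
    """Store a dict of values under `path`, JSON-encoded and encrypted.

    `ssm` is an AWS SSM client such as boto3.client("ssm").
    """
    for key, value in values.items():
        ssm.put_parameter(
            Name=f"{path}/{key}",     # e.g. /my-app-dev/DB_PASS
            Value=json.dumps(value),  # everything stored as a JSON string
            Type="SecureString",      # always encrypt
            KeyId=kms_key_id,
            Overwrite=True,
        )
```

Called as put_parameters(boto3.client("ssm"), "/my-app-dev", {"DB_PASS": "secret"}, key_id), this writes one encrypted parameter per dict entry.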

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:
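The code block itself was lost in aggregation; a sketch of what such a loader might look like (load_config is a hypothetical name; ssm is a boto3 SSM client, and pagination uses the standard NextToken mechanism):

```python
import json

def load_config(ssm, path):
    """Fetch every parameter under `path` and JSON-decode the values.

    `ssm` is an AWS SSM client such as boto3.client("ssm").
    """
    config = {}
    kwargs = {"Path": path, "Recursive": True, "WithDecryption": True}
    while True:
        resp = ssm.get_parameters_by_path(**kwargs)
        for param in resp["Parameters"]:
            # Use the final path component (the KEY) as the config name.
            key = param["Name"].rsplit("/", 1)[-1]
            config[key] = json.loads(param["Value"])
        token = resp.get("NextToken")
        if not token:
            break
        kwargs["NextToken"] = token
    return config
```

In a Lambda handler this would be something like config = load_config(boto3.client("ssm"), "/my-app-dev").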

If you want to avoid loading your config each time your Lambda function is called you can store the results in a global variable. This leverages Amazon's feature that doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:
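The one-liner itself was lost in aggregation; something along these lines works, where the /my-app-dev path is hypothetical and the last path component becomes the variable name:

```shell
# Export every parameter under /my-app-dev as an environment variable.
export $(aws ssm get-parameters-by-path --path /my-app-dev --with-decryption \
    --output json | jq -r '.Parameters[] | "\(.Name | split("/") | last)=\(.Value)"')
```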

Make sure you have jq installed and the AWS cli installed and configured.

Conclusion

Amazon's System Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.

August 23, 2018

Julien Goodwin: Custom output pods for the Stanford Research CG635 Clock Generator

As part of my previously mentioned side project the ability to replace crystal oscillators in a circuit with a higher quality frequency reference is really handy, to let me eliminate a bunch of uncertainty from some test setups.

A simple function generator is the classic way to handle this, although if you need square wave output it quickly gets hard to find options, with arbitrary waveform generators (essentially just DACs) the common option. If you can get away with just sine wave output an RF synthesizer is the other main option.

While researching these I discovered the CG635 Clock Generator from Stanford Research, and some time later picked one of these up used.

As well as being a nice square wave generator at arbitrary voltages these also have another set of outputs on the rear of the unit on an 8p8c (RJ45) connector, in both RS422 (for lower frequencies) and LVDS (full range) formats, as well as some power rails to allow a variety of less common output formats.

All I needed was 1.8v LVCMOS output, and could get that from the front panel output, but I'd then need a coax tail on my boards, as well as potentially running into voltage rail issues so I wanted to use the pod output instead. Unfortunately none of the pods available from Stanford Research do LVCMOS output, so I'd have to make my own, which I did.

The key chip in my custom pod is the TI SN65LVDS4, a 1.8v capable single channel LVDS receiver that operates at the frequencies I need. The only downside is this chip is only available in a single form factor, a 1.5mm x 2mm 10 pin UQFN, which is far too small to hand solder with an iron. The rest of the circuit is just some LED indicators to signal status.


Here's a rendering of the board from KiCad.

Normally "not hand solderable" for me has meant getting the board assembled, however my normal assembly house doesn't offer custom PCB finishes, and I wanted these to have white solder mask with black silkscreen as a nice UX when I go to use them, so instead I decided to try my hand at skillet reflow as it's a nice option given the space I've got in my tiny apartment (the classic tutorial on this from SparkFun is a good read if you're interested). Instead of just a simple plate used for cooking you can now buy hot plates with what are essentially just soldering iron temperature controllers, sold as pre-heaters making it easier to come close to a normal soldering profile.

Sadly, actually acquiring the hot plate turned into a bit of a mess, the first one I ordered in May never turned up, and it wasn't until mid-July that one arrived from a different supplier.

Because of the aforementioned lack of space instead of using stencils I simply hand-applied (leaded) paste, without even an assist tool (which I probably will acquire for next time), then hand-mounted the components, and dropped them on the plate to reflow. I had one resistor turn 90 degrees, and a few bridges from excessive paste, but for a first attempt I was really happy.


Here's a photo of the first two just after being taken off the hot plate.

Once the reflow was complete it was time to start testing, and this was where I ran into my biggest problems.

The two big problems were with the power supply I was using, and with my oscilloscope.

The power supply (a Keithley 228 Voltage/Current source) is from the 80's (Keithley's "BROWN" era), and while it has nice specs, doesn't have the most obvious UI. Somehow I'd set it to limit at 0mA output current, and if you're not looking at the segment lights it's easy to miss. At work I have an EEZ H24005 which also resets the current limit to zero on clear, however it's much more obvious when limiting, and a power supply with that level of UX is now on my "to buy" list.

The issues with my scope were much simpler. Currently I only have an old Rigol DS1052E scope, and while it works fine it is a bit of a pain to use, but ultimately I made a very simple mistake while testing. I was feeding in a trigger signal direct from the CG635's front outputs, and couldn't figure out why the generator was putting out such a high voltage (implausibly so). To cut the story short, I'd simply forgotten that the scope was set for use with 10x probes, and once I realised that everything made rather more sense. An oscilloscope with auto-detection for 10x probes, as well as a bunch of other features I want in a scope (much bigger screen for one), has now been ordered, but won't arrive for a while yet.

Ultimately the boards work fine, but until the new scope arrives I can't determine their signal quality. At least they're ready for when I'll need them, which is great for flow.

July 05, 2018

Dave Hall: Migrating AWS Systems Manager Parameter Store Secrets to a new Namespace

When starting with a new tool it is common to jump in and start doing things. Over time you learn how to do things better. Amazon's AWS Systems Manager (SSM) Parameter Store was like that for me. I started off polluting the global namespace with all my secrets. Over time I learned to use paths to create namespaces. This helps a lot when it comes to managing access.

Recently I've been using Parameter Store a lot. During this time I have been reminded that naming things is hard. This led to me needing to change some paths in SSM Parameter Store. Unfortunately AWS doesn't allow you to rename Parameter Store keys; you have to create new ones.

There was no way I was going to manually copy and paste all those secrets. Python (3.6) to the rescue! I wrote a script to copy the values to the new namespace. While I was at it I migrated them to use a new KMS key for encryption.

Grab the code from my gist, make it executable, pip install boto3 if you need to, then run it like so:

copy-ssm-ps-path.py source-tree-name target-tree-name new-kms-uuid

The script assumes all parameters are encrypted. The same key is used for all parameters. boto3 expects AWS credentials to be in ~/.aws or environment variables.
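For reference, the core of such a script might look like this. This is a hypothetical reconstruction, not the actual gist; copy_path is my own name, and ssm is a boto3 SSM client:

```python
def copy_path(ssm, source, target, kms_key_id):
    """Copy every parameter under `source` to `target`, re-encrypting
    with `kms_key_id`. `ssm` is a boto3 SSM client."""
    kwargs = {"Path": source, "Recursive": True, "WithDecryption": True}
    while True:
        resp = ssm.get_parameters_by_path(**kwargs)
        for param in resp["Parameters"]:
            # Swap the source prefix for the target prefix.
            new_name = target + param["Name"][len(source):]
            ssm.put_parameter(
                Name=new_name,
                Value=param["Value"],
                Type="SecureString",
                KeyId=kms_key_id,
                Overwrite=True,
            )
        token = resp.get("NextToken")
        if not token:
            break
        kwargs["NextToken"] = token
```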

Once everything is verified, you can use a modified version of the script that calls ssm.delete_parameter() or do it via the console.

I hope this saves someone some time.

June 12, 2018

Julien Goodwin: Custom uBlox GPSDO board

For the next part of my ongoing project I needed to test the GPS receiver I'm using, a uBlox LEA-M8F (M8 series chip, LEA form factor, and with frequency outputs). Since the native 30.72MHz oscillator is useless for me I'm using an external TCVCXO (temperature compensated, voltage controlled crystal oscillator) for now, with the DAC & reference needed to discipline the oscillator based on GPS. If uBlox would sell me the frequency version of the chip on its own that would be ideal, but they don't sell to small customers.

Here's a (rather modified) board sitting on top of an Efratom FRK rubidium standard that I'm going to mount to make a (temporary) home standard (that deserves a post of its own). To give a sense of scale the silver connector at the top of the board is a micro-USB socket.



Although a very simple board I had a mess of problems once again, both in construction and in component selection.

Unlike the PoE board from the previous post I didn't have this board manufactured. This was for two main reasons, first, the uBlox module isn't available from Digikey, so I'd still need to mount it by hand. The second, to fit all the components this board has a much greater area, and since the assembly house I use charges by board area (regardless of the number or density of components) this would have cost several hundred dollars. In the end, this might actually have been the sensible way to go.

By chance I'd picked up a new soldering iron at the same time these boards arrived, a Hakko FX-951 knock-off and gave it a try. Whilst probably an improvement over my old Hakko FX-888 it's not a great iron, especially with the knife tip it came with, and certainly nowhere near as nice to use as the JBC CD-B (I think that's the model) we have in the office lab. It is good enough that I'm probably going to buy a genuine Hakko FM-203 with an FM-2032 precision tool for the second port.

The big problem I had hand-soldering the boards was bridges on several of the components. Not just the tiny (0.65mm pitch, actually the *second largest* of eight packages for that chip) SC70 footprint of the PPS buffer, but also the much more generous 1.1mm pitch of the uBlox module. Luckily solder wick fixed most cases, plus one where I pulled the buffer and soldered a new one more carefully.

With components, once again I made several errors:
  • I ended up buying the wrong USB connectors for the footprint I chose (the same thing happened with the first run of USB-C modules I did in 2016), and while I could bodge them into use easily enough there wasn't enough mechanical retention so I ended up ripping one connector off the board. I ordered some correct ones, but because I wasn't able to wick all solder off the pads they don't attach as strongly as they should, and whilst less fragile, are hardly what I'd call solid.
  • The surface mount GPS antenna (Taoglas AP.10H.01 visible in this tweet) I used was 11dB higher gain than the antenna I'd tested with the devkit. I never managed to get it to lock while connected to the board, although once on a cable it did work OK. In the end I removed the antenna and bodged on an SMA connector for easier testing.
  • When selecting the buffer I accidentally chose one with an open-drain output, I'd meant to use one with a push-pull output. It took me an embarrassingly long time to realise what mistake I'd made. Compounding this, the buffer is on the 1PPS line, which only strobes while locked to GPS, however my apartment is a concrete box, with what GPS signal I can get inside only available in my bedroom, and my oscilloscope is in my lab, so I couldn't demonstrate the issue live, and had to inject test signals. Luckily a push-pull is available in the same footprint, and a quick hot-air aided swap later (once parts arrived from Digikey) it was fixed.

Lessons learnt:
  • Yes I can solder down to ~0.5mm pitch, but not reliably.
  • More test points on dev boards, particularly all voltage rails, and notable signals not otherwise exposed.
  • Flux is magic, you probably aren't using enough.

Although I've confirmed all basic functions of the board work, including GPS locking, PPS (quick video of the PPS signal LED), and frequency output, I've still not yet tested the native serial ports and frequency stability from the oscillator. Living in an urban canyon makes such testing a pain.

Eventually I might also test moving the oscillator, DAC & reference into a mini oven to see if a custom OCXO would be any better, if small & well insulated enough the power cost of an oven shouldn't be a problem.

Also, as you'll see if you look at the tweets, I really should have posted this almost a month ago. However, I finished fixing the board just before heading off to California for a work trip, and whilst I meant to write this post during the trip, it's not until I've been back for more than a week that I've gotten to it. I find it extremely easy to let myself be distracted from side projects, particularly since I'm in a busy period at $WORK at the moment.

April 28, 2018

Julien GoodwinPoE termination board

For my next big project I'm planning on making it run using Power over Ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already.

Because I was lazy, and the pricing was reasonable I got these boards manufactured by pcb.ng who I'd used for the USB-C termination boards I did a while back.

Here's the board running a Raspberry Pi 3B+, as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies.



One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using one of the isolated convertors that are available with integrated PoE termination.

For this series I'm going to try and also make some notes on the mistakes I've made with these boards to help others, for this board:
  • I failed to add any test pins; given this was the first try I really should have. Being able to inject power just before the switching convertor was helpful while debugging, but I had to solder wires to the input cap to do that.
  • Similarly, I should have had a 5v output pin, for now I've just been shorting the two diodes I had near the output which were intended to let me switch input power between two feeds.
  • The last, and the only actual problem with the circuit, was that when selecting exact parts I optimised by choosing the same diode for both input protection & switching. This was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped to a correctly rated one it now supplies 10W just fine.
  • While debugging the previous I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad.


Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock).

September 24, 2017

Dave HallDrupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved onto using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions: starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distro demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

September 16, 2017

Dave HallTrying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that to the "Try Drupal" experience. Below is the walk through of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking "come on, hurry up".

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens there is a bunch of items that update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK, so Drupal supports multiple languages, that's nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned somewhere along the way. I would love to see the conversion stats for the Try Drupal service. It must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign up page. How does that let me try Drupal?

Acquia drops you onto a page where you select your role, then you're presented with some marketing stuff and a form to request a demo. That is unless you're running an ad blocker, in which case when you select your role you get an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have simplytest.me added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think simplytest.me is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.

AttachmentSize
try-contentful-1.png342.82 KB
try-contentful-2.png214.5 KB
try-contentful-3.png583.02 KB
try-contentful-5.png826.13 KB
try-drupal-1.png1.19 MB
try-drupal-2.png455.11 KB
try-drupal-3.png330.45 KB
try-drupal-4.png239.5 KB
try-drupal-5.png203.46 KB
try-drupal-6.png332.93 KB
try-drupal-7.png196.75 KB
try-drupal-8.png333.46 KB
try-drupal-9.png1.74 MB
try-drupal-10.png1.77 MB
try-drupal-11.png1.12 MB
try-drupal-12.png1.1 MB
try-drupal-13.png216.49 KB

May 29, 2017

Stewart SmithFedora 25 + Lenovo X1 Carbon 4th Gen + OneLink+ Dock

As of May 29th 2017, if you want to do something crazy like use *both* ports of the OneLink+ dock to use monitors that aren’t 640×480 (but aren’t 4k), you’re going to need a 4.11 kernel, as everything else (for example 4.10.17, which is the latest in Fedora 25 at time of writing) will end you in a world of horrible, horrible pain.

To install, run this:

sudo dnf install \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-core-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-cross-headers-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-devel-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-modules-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-tools-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-tools-libs-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/perf-4.11.3-200.fc25.x86_64.rpm

This grabs a kernel that’s sitting in testing and isn’t yet in the main repositories. However, I can now see things on monitors, rather than 0 to 1 monitor (most often 0). You can also dock/undock and everything doesn’t crash in a pile of fail.

I remember a time when you could fairly reliably buy Intel hardware and have it “just work” with the latest distros. It’s unfortunate that this is no longer the case, and it’s more of a case of “wait six months and you’ll still have problems”.

Urgh.

(at least Wayland and X were bug for bug compatible?)

May 03, 2017

Stewart SmithAPI, ABI and backwards compatibility are a hard necessity

Recently, I was reading a thread on LKML on a proposal to change the behavior of the open system call when confronted with unknown flags. The thread is worth a read as the topic of augmenting things that exist probably by accident to be “better” is always interesting, as is the definition of “better”.

Keeping API and/or ABI compatibility is something that isn’t a new problem, and it’s one that people are pretty good at sometimes messing up.

This problem does not go away just because “we have cloud now”. In any distributed system, in order to upgrade it (or “be agile” as the kids are calling it), you by definition are going to have either downtime or at least two versions running concurrently. Thus, you have to have your interfaces/RPCs/APIs/ABIs/protocols/whatever cope with changes.

You cannot instantly upgrade the world; it happens gradually. You also have to design for at least three concurrent versions running: one is the original, the second is your upgrade, and the third is the urgent fix because the upgrade is quite broken in some new way you only discover in production.

So, the way you do this? Never ever EVER design for N-1 compatibility only. Design for going back a long way, much longer than you officially support. You want to have a design and programming culture of backwards compatibility to ensure you can both do new and exciting things and experiment off to the side.
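
To make the long-reach point concrete, here's a tiny invented illustration (the version numbers, field names, and renames are all made up, not from any real protocol): a decoder that accepts every message version back to a floor, not just N-1:

```python
# Invented illustration: a decoder that accepts every message version
# back to a floor, normalising old layouts instead of rejecting them.
MIN_SUPPORTED = 1   # oldest version still accepted, well behind current
CURRENT = 3         # version this build emits

def decode(msg: dict) -> dict:
    v = msg.get("version", 1)  # v1 senders predate the version field
    if not (MIN_SUPPORTED <= v <= CURRENT):
        raise ValueError(f"unsupported version {v}")
    out = dict(msg)
    if v < 2:                      # v2 renamed "addr" to "address"
        out["address"] = out.pop("addr")
    if v < 3:                      # v3 added "flags"; default is safe
        out.setdefault("flags", 0)
    out["version"] = CURRENT
    return out
```

The important property is the floor: a v1 sender keeps working through two upgrades, which is exactly the room you need for the original, the upgrade, and the urgent fix to coexist.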

It’s worth going and rereading Rusty’s API levels posts from 2008:

February 22, 2017

Julien GoodwinMaking a USB powered soldering iron that doesn't suck

Today's evil project was inspired by a suggestion after my talk on USB-C & USB-PD at this year's linux.conf.au Open Hardware miniconf.

Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).

Building this was actually trivial: I just wired the 20v output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20v not 24v is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, where both have very nice soldering stations, so I don't actually expect to ever use this).

If you were to turn this into a real product you could in fact do much better by doing both power negotiation and temperature control in a single micro: the driver could be switched from just a FET to a boost converter, controlling the power draw via the output voltage, and simply disabling the regulator to turn off the heater. By chance, the heater resistance of the Hakko 907 clone handpieces is such that combined with USB-PD power rules you'd always be boost converting, never needing to reduce voltage.

With such a driver you could run this from anything, starting with a 5v USB-C phone charger or battery (15W for the nicer ones), through 9v at up to 3A off some laptops (for ~25W), all the way to 20v@5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D), is also, at 20v@3A, a common limit for chargers (and also many cables; only fixed cables, or those specially marked with an ID chip, can go all the way to 5A). As higher power USB-C batteries start becoming available for laptops this becomes a real option for on-the-go use.
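
Those numbers are easy to sanity-check with Ohm's law (a back-of-envelope sketch; R_HEATER is an assumed round figure, not a measured value for the 907 clone handpiece):

```python
# Back-of-envelope check of the USB-PD power levels above.
# R_HEATER is an assumed round figure, not a measured 907-clone value.
R_HEATER = 10.0  # ohms (assumption for illustration)

def heater_power(volts: float, r: float = R_HEATER) -> float:
    """P = V^2 / R: power into a resistive heater driven directly."""
    return volts ** 2 / r

def volts_for_power(watts: float, r: float = R_HEATER) -> float:
    """Voltage a boost converter must produce to hit a target power."""
    return (watts * r) ** 0.5

for v in (5, 9, 20):
    print(f"{v:>2}v direct: {heater_power(v):.1f}W")
print(f"60W needs {volts_for_power(60):.1f}v, above every USB-PD rail")
```

With a heater in this resistance range even 20v driven directly only gives 40W, and hitting 60W needs roughly 24.5v, which is why such a driver would always be boosting and never bucking.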

Here's a photo of it running from a Chromebook Pixel charger:

February 14, 2017

Stewart Smithj-core + Numato Spartan 6 board + Fedora 25

A couple of changes to http://j-core.org/#download_bitstream made it easy for me to get going:

  • In order to make ModemManager not try to think it’s a “modem”, create /etc/udev/rules.d/52-numato.rules with the following content:
    # Make ModemManager ignore Numato FPGA board
    ATTRS{idVendor}=="2a19", ATTRS{idProduct}=="1002", ENV{ID_MM_DEVICE_IGNORE}="1"
  • You will need to install python3-pyserial and minicom
  • The minicom command line I used was:
    sudo stty -F /dev/ttyACM0 -crtscts && minicom -b 115200 -D /dev/ttyACM0

and along with the instructions on j-core.org, I got it to load a known good build.

January 30, 2017

Stewart SmithRecording of my LCA2017 talk: Organizational Change: Challenges in shipping open source firmware

August 01, 2013

Tim ConnorsNo trains for the corporatocracy

Sigh, look, I know we don't actually live in a democracy (but a corporatocracy instead), and I should never expect the relevant ministers to care about my meek little protestations otherwise, but I keep writing these letters to ministers for transport anyway, under the vague hope that it might remind them that they're ministers for transport, and not just roads.


Dear Transport Minister, Terry Mulder,

I encourage you and your fellow ministers to read this article
("Tracking the cost", The Age, June 13 2009) from 2009, back when the
Liberals claimed to have a very different attitude, and when
circumstances seemed to mirror the current time:
http://www.theage.com.au/national/tracking-the-cost-20090612-c67m.html

The costs of building the first extensions to the Melbourne
public transport system in 80 years eventually blew out from $8M to
$500M over the short life of the South Morang project, despite it
being a much smaller project than the entire rail lines built more
cheaply by cities such as Perth in recent years.

The increased cost is explained away as a safety requirement - it
being so important to now start building grade separated lines rather
than level crossings regardless of circumstances. Perceived safety
trumps real safety (I'd much rather be in a train than suffer from one
of the 300 Victorian deaths on the roads each year), but more sinister
is that because of this inflated expense, we'll probably never see
another rail line like this built at all in Melbourne (although we'll
build at public expense a wonderful road tunnel that no-one but
Lindsay Fox will use at more than 10 times the cost).

I suspect the real reason for grade separation is not safety, but to
cause less inconvenience to car drivers stuck for 30 seconds at these
minor crossings. Since the delays at level crossings are a roads
problem, and collisions of errant motorists with trains at level
crossings is a roads problem, and the South Morang railway reservation
existed far before any of the roads were put in place, I'm wondering
whether you can answer why the blowout in costs of construction of
train lines comes out of the public transport budget, and not at the
expense of what causes these problems in the first place - the roads?
These train lines become harder to build because of an artificial cost
inflation caused by something that will be less of a problem if only
we could built more rail lines and actually improve the Melbourne
public transport system and make it attractive to use, for once (we've
been waiting for 80 years).


Yours sincerely,


And a little while later, the reply!

July 01, 2013

Tim ConnorsYarra trail pontoon closures

I do have to admit, I had some fun writing this one:

Dear Transport Minister, Terry Mulder (Denis Napthine, Local MP Ted Baillieu, Ryan Smith MP responsible for Parks Victoria, Parks Victoria itself, and Bicycle Victoria CCed),

I am writing about the sudden closure of the Main Yarra bicycle trail around Punt Road. The floating sections of the trail have been closed for the foreseeable future because of some over-zealous lawyer at Parks Victoria who has decided that careless riders might injure themselves on the rare occasion when the pontoon is both icy, and resting on the bottom of the Yarra at very low tides, sloping sideways at a minor angle. The trail has been closed before Parks Victoria have even planned for how they're going to rectify the problem with the pontoons. Instead, the lawyers have forced riders to take to parallel streets such as Swan St (which I took tonight in the rain, negotiating the thin strip between parked cars far enough from their doors being flung out illegally by careless drivers, and the wet tram tracks beside them). Obviously, causing riders to take these detours will be very much less safe than just keeping the trail open until a plan is developed, but I can see why Parks Victoria would want to shift the legal burden away from them.

I have no faith that the pontoon will be fixed in the foreseeable future without your intervention, because of past history -- that trail has been partially closed for about 18 months out of the past 3 years due to the very important works on the freeway above (keeping the economy going, as they say, by digging ditches and filling them immediately back up again).

Since we're already wasting $15B on an east-west freeway tunnel that will do absolutely nothing to alleviate traffic congestion because the outbound (Easterly direction) freeway is already at capacity in the afternoon without the extra induced traffic this project will add, I was wondering if you could spare a few million to duplicate the only easterly bicycle trail we have, so that these sorts of incidents don't reoccur and have so much impact on riders in the future.

I do hope that this trail will be fixed in a timely fashion before myself and the 3000-4000 other cyclists who currently use the trail every day resort to riding through any of your freeway tunnels.

Yours sincerely,

Me

April 14, 2013

Tim Connors

Oh well, if The Age aren't going to publish my Thatcher rant, I will:

Jan White (Letters, 11 Apr) is heavily misguided if she believes that Thatcher was one of Britain's greatest leaders. For whom? By any metric 70% of Brits cared about, she was one of the worst. Any harmony, strength of character and respect Brits may be missing now would be due to her having nearly destroyed everything about British society with her Thatchernomics. Her funeral should be privatised and definitely not funded by the state as it is going to be. Instead, it could be funded by the long queue of people who want to dance on her grave.

March 21, 2013

Tim ConnorsRagin' on the road

Since The Age didn't publish my letter, my 3 readers ought to see it anyway:


Reynah Tang of the Law Institute of Victoria says that road rage offences shouldn't necessarily lead to loss of licence ("Offenders risk losing their licence", The Age, Mar 21). He misses the point -- a vehicle is a weapon. Road ragers demonstrably do not have enough self control to drive. They have already lost their temper when in control of such a weapon, so they must never be given a licence to use that weapon again (the weapon should also be forfeited). The same is presumably true of gun murderers after their initial jail time (which road ragers are rarely given). RACV's Brian Negus also doesn't appear to realise that a driving licence is a privilege, not an automatic right. You can still have all your necessary mobility without your car - it's not a human rights issue.


It was less than 200 words even dammit! But because the editor didn't check the basic arithmetic in a previous day's letter, they had to publish someone's correction.

November 18, 2012

Ben McGinnesFixed it

I've fixed the horrible errors that were sending my tweets here; it only took a few hours.

To do that I've had to disable cross-posting and it looks like it won't even work manually, so my updates will likely only occur on my own domain.

Details of the changes are here. They include better response times for my domain and no more Twitter posts on the main page, which should please those of you who hate that. Apparently that's a lot of people, but since I hate being inundated with FarceBook's crap I guess it evens out.

The syndicated feed for my site around here somewhere will get everything, but there's only one subscriber to that (last time I checked) and she's smart enough to decide how she wants to deal with that.

Ben McGinnesTweet Sometimes I amaze even myself; I remembered the pa…

Sometimes I amaze even myself; I remembered the passphrases to old PGP keys I thought had been lost to time. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet These are the same keys I referred to in the PPAU…

These are the same keys I referred to in the PPAU #NatSecInquiry submission as being able to be used against me. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet Now to give them their last hurrah: sign my curren…

Now to give them their last hurrah: sign my current key with them and then revoke them! #crypto

Originally published at Organised Adversary. Please leave any comments there.

October 26, 2011

Donna Benjaminheritage and hysterics

Originally published at KatteKrab. Please leave any comments there.

This gorgeous photo of The Queen in Melbourne on the Royal Tram made me smile this morning.

I've long been a proponent of an Australian Republic - but the populist hysteria of politicians, this photo, and the Kingdom of the Netherlands is actually making me rethink that position.

At least for today.  Long may she reign over us.

"Queen Elizabeth II smiles as she rides on the royal tram down St Kilda Road"
Photo from Getty Images published on theage.com.au

October 02, 2011

Donna BenjaminSticks and Stones and Speech

Originally published at KatteKrab. Please leave any comments there.

THE law does treat race differently: it is not unlawful to publish an article that insults, offends, humiliates or intimidates old people, for instance, or women, or disabled people. Professor Joseph, director of the Castan Centre for Human Rights Law at Monash University, said in principle ''humiliate and intimidate'' could be extended to other anti-discrimination laws. But historically, racial and religious discrimination is treated more seriously because of the perceived potential for greater public order problems and violence.

Peter Munro The Age  2 Oct 2011

Ahaaa. Now I get it! We've been doing it wrong. 

Racial vilification is against the law because it might be more likely to lead to violence than vilifying women, the elderly or the disabled.

Interesting debates and articles about free speech and discrimination are bobbing up and down in the flotsam and jetsam of the Bolt decision. Much of it seems to hinge on some kind of legal see-saw around notions of a bad law about bad words.

I've always been a proponent of the sticks and stones philosophy.  For those not familiar, it's the principle behind a children's nursery rhyme.

Sticks and Stones may break my bones
But words will never hurt me

But I'm increasingly disturbed by the hateful culture of online comment.  I am a very strong proponent of the human right to free expression, and abhor censorship, but I'm seriously sick of "My right to free speech" being used as the ultimate excuse for people using words to denigrate, humiliate, intimidate, belittle and attack others, particularly women.

We should defend a right to free speech, but condemn hate speech whenever and wherever we see it. Maybe we actually need to get violent to make this stop? Surely not.

September 20, 2011

Donna BenjaminQantas Pilots

Originally published at KatteKrab. Please leave any comments there.

The Qantas pilot safety culture is something worth fighting to protect. I read Malcolm Gladwell's Outliers while on board a Qantas flight recently. While Qantas itself isn't mentioned in the book, a footnote listed Australia as having the second-lowest pilot Power-Distance Index (PDI) in the world; New Zealand had the lowest. The entire chapter "The Ethnic Theory of Plane Crashes" is the strongest argument I've seen to explain the Qantas safety record. The experience of pilots and the relationships amongst the entire air crew is a crucial differentiating factor. Other airlines work hard to develop this culture, often needing to work against their own cultural patterns to achieve it. At Qantas, and likely at other Australian airlines too, this culture is the norm.

I want Australian Qantas Pilots flying Qantas planes. I'd like an Australian in charge too.

If you too support Qantas pilots, go to their website and sign the petition.

Do your own reading.

G.R. Braithwaite, R.E. Caves, J.P.E. Faulkner, Australian aviation safety — observations from the ‘lucky’ country, Journal of Air Transport Management, Volume 4, Issue 1, January 1998: 55-62.

Anthony Dennis, What it takes to become a Qantas pilot news.com.au, 8 September 2011.

Ashleigh Merritt, Culture in the Cockpit: Do Hofstede’s Dimensions Replicate? Journal of Cross-Cultural Psychology, May 2000, 31: 283-30.

Matt Phillips, Malcolm Gladwell on Culture, Cockpit Communication and Plane Crashes, WSJ Blogs, 4 December 2008.


September 18, 2011

Donna BenjaminRegistering for LCA2012

Originally published at KatteKrab. Please leave any comments there.

linux.conf.au ballarat 2012

I am right now, at this very minute, registering for linux.conf.au in Ballarat in January. Creating my planet feed. Yep. Uhuh.

I reckon the "book a bus" feature of rego is pretty damn cool. I won't be using it, because I'll be driving up from Melbourne. Serious kudos to the Ballarat team. Also nice to see they'll add buses from Avalon airport as well as from Tullamarine airport if there's demand.

Too cool.