Planet LUV

January 19, 2020

LUV: LUV February 2020 Workshop: TBA

Feb 15 2020 12:30
Feb 15 2020 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Topic to be announced

Please email the LUV committee at luv-ctte@luv.asn.au if you would like to give a talk, presentation or workshop or have a topic you would like to see covered.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 15, 2020 - 12:30

LUV: Annual Penguin Picnic, January 25, 2020

Jan 25 2020 12:00
Jan 25 2020 16:00
Location: 
Yarra Bank Reserve, Hawthorn

The Linux Users of Victoria Annual Penguin Picnic will be held on Saturday, January 25, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.  In the event of hazardous levels of smoke or other dangerous weather, we will announce an alternate indoor location.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.

January 25, 2020 - 12:00

January 01, 2020

Stewart Smith: Speeding up Blackbird boot: the SBE

The Self Boot Engine (SBE) is a small embedded PPE42 core inside the POWER9 CPU which has the unenviable job of getting a single POWER9 core ready enough to start executing instructions out of L3 cache, and poking some instructions into said cache for the core to start executing.

It’s called the “Self Boot Engine” as in generations prior to POWER8, it was the job of the FSP (Service Processor) to do all of the booting for the CPU. On POWER8, there was still an SBE, but it was a custom instruction set (this was the Power On Reset Engine – PORE), while the PPE42 is basically a 32bit powerpc core cut straight down the middle (just the way to make it awkward for toolchains).

One of the things I noted in my post on Booting temporary firmware on the Raptor Blackbird is that we got serial console output from the SBE. It turns out one of the things explicitly not enabled by Raptor in their build was this output, as “it made the SBE boot much slower”. I’d actually long suspected this, but hadn’t really had the time to delve into it.

For POWER9, the firmware for the SBE is now open source, as are the ppe42-binutils and ppe42-gcc toolchains for it. This means we can hack on it!
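
If you want to play along at home, the repositories below are where I believe everything lives under the open-power organisation on GitHub; check the op-build tree if they've moved:

git clone https://github.com/open-power/sbe.git
git clone https://github.com/open-power/ppe42-binutils.git
git clone https://github.com/open-power/ppe42-gcc.git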

WARNING: hacking on your SBE firmware can be relatively dangerous, as it’s literally the first thing that needs to work in order to boot the system, and there isn’t (AFAIK) a publicly documented easy way to re-flash your SBE firmware if you mess it up.

Seeing as we saw a regression in boot time with the UART output enabled, we need to look at the uartPutChar() function in sbeConsole.C (error paths removed for clarity):

static void uartPutChar(char c)
{
    #define SBE_FUNC "uartPutChar"
    uint32_t rc = SBE_SEC_OPERATION_SUCCESSFUL;
    do {
        static const uint64_t DELAY_NS = 100;
        static const uint64_t DELAY_LOOPS = 100000000;

        uint64_t loops = 0;
        uint8_t data = 0;
        do {
            rc = readReg(LSR, data);
...
            if(data == LSR_BAD || (data & LSR_THRE))
            {
                break;
            }
            delay(DELAY_NS, 1000000);
        } while(++loops < DELAY_LOOPS);

...
        rc = writeReg(THR, c);
...
    } while(0);

    #undef SBE_FUNC
}

One thing you may notice if you’ve spent some time around serial ports is that it’s not using the transmit FIFO! According to Wikipedia the original 16550 had a broken FIFO, but we’re certainly not going to be hooked up to an original rev of that silicon.

To compare, let’s look at the skiboot code, which is all in hw/lpc-uart.c:

static void uart_check_tx_room(void)
{
	if (uart_read(REG_LSR) & LSR_THRE) {
		/* FIFO is 16 entries */
		tx_room = 16;
		tx_full = false;
	}
}

The uart_check_tx_room() function is pretty simple: it checks if there’s room in the FIFO, knowing that the FIFO holds 16 entries. Next, we have a busy loop that waits until there’s room again in the FIFO:

static void uart_wait_tx_room(void)
{
	while (!tx_room) {
		uart_check_tx_room();
		if (!tx_room) {
			smt_lowest();
			do {
				barrier();
				uart_check_tx_room();
			} while (!tx_room);
			smt_medium();
		}
	}
}

Finally, the bit of code that writes the (internal) log buffer out to a serial port:

/*
 * Internal console driver (output only)
 */
static size_t uart_con_write(const char *buf, size_t len)
{
	size_t written = 0;

	/* If LPC bus is bad, we just swallow data */
	if (!lpc_ok() && !mmio_uart_base)
		return written;

	lock(&uart_lock);
	while(written < len) {
		if (tx_room == 0) {
			uart_wait_tx_room();
			if (tx_room == 0)
				goto bail;
		} else {
			uart_write(REG_THR, buf[written++]);
			tx_room--;
		}
	}
 bail:
	unlock(&uart_lock);
	return written;
}

The skiboot code ends up being a bit more complicated for a number of reasons, but the basic algorithm could be applied to the SBE code: rather than busy waiting for each character to be written out before sending the next one into the FIFO, we could just splat things down there and continue with life. So, I put together a patch to try out.
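
As a rough sketch of the idea (not the actual patch, with error and timeout handling omitted), assuming the same readReg()/writeReg() helpers and LSR/THR definitions used in sbeConsole.C, a FIFO-aware put-char could look something like this:

// Sketch only: track how much FIFO room we believe remains, and only
// poll the LSR when we think the FIFO is full.
static uint32_t tx_room = 0;

static void uartPutCharFifo(char c)
{
    uint8_t data = 0;

    while (tx_room == 0)
    {
        readReg(LSR, data);
        if (data & LSR_THRE)
        {
            // Transmit holding register empty: a 16550-style FIFO
            // now has 16 free slots.
            tx_room = 16;
        }
    }

    writeReg(THR, c);
    tx_room--;
}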

Before (i.e. upstream SBE code): it took about 15 seconds from “Welcome to SBE” to “Booting Hostboot”.

Now (with my patch): Around 10 seconds.

It’s a full five seconds (33%) faster to get through the SBE stage of booting. Wow.

Hopefully somebody looks at the pull request sometime soon, as it’s probably useful to a lot of people doing firmware and Operating System development.

So, Happy New Year for Blackbird owners (I’ll publish a build with this and other misc improvements “soon”).

December 17, 2019

Stewart Smith: A close-to-upstream firmware build for the Raptor Blackbird

It goes without saying that using this build is At Your Own Risk and I make zero warranty. AFAIK it can’t physically destroy your system.

My GitHub op-build branch stewart-blackbird-v1 has all the changes built into this build (the VERSION displayed in firmware will be slightly weird as I did the tagging afterwards… this is not meant to be “howto release firmware to the public”). Follow op-build pull 3341 for the state of upstreaming everything.

Binaries are over at https://www.flamingspork.com/blackbird/stewart-blackbird-v1-images/ (see the git branch of op-build for source).

To flash it (temporarily), grab blackbird.pnor, get it to /tmp on your BMC and follow the instructions I posted the other day.
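
Getting it there is a one-liner (the BMC hostname here is hypothetical, use whatever yours answers to):

scp blackbird.pnor root@blackbird-bmc:/tmp/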

I’d be interested in any feedback on what does/does not work.

December 15, 2019

Stewart Smith: Are you Fans of the Blackbird? Speak up, I can’t hear you over the fan.

So, as of yesterday, I started running a pretty-close-to-upstream op-build host firmware stack on my Blackbird. Notable yak-shaving has included:

Apart from that, I was all happy as Larry. Except then I went into the room with the Blackbird in it and went “huh, that’s loud”, and since it was bedtime, I decided it could all wait until the morning.

It is now the morning. Checking fan speeds over IPMI, one fan stood out (fan2, sitting at 4300RPM). This was a bit of a surprise, as what’s silkscreened on the board is that the rear case fan is hooked up to “fan2”, and if we had a “start from 0/1” mix up, it’d be the front case fan. I had just assumed it’d be maybe OCC firmware dying or something, but this wasn’t the case (I checked – thanks occtoolp9!)
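
(For anyone wanting to do the same check, something like this works; run it on the host, or add the usual -I lanplus -H/-U/-P options to ask the BMC directly:)

ipmitool sdr type Fan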

After a bit of digging around, I worked out this mapping:

IPMI fan0 = Rear Case Fan  (Motherboard Fan 2)
IPMI fan1 = Front Case Fan (Motherboard Fan 3)
IPMI fan2 = CPU Fan        (Motherboard Fan 1)

Which is about as surprising and confusing as you’d think.

After a bunch of digging around the Raptor ports of OpenBMC and Hostboot, it seems that the IPL Observer (which is custom to Raptor) controls whether or not the BMC decides to do fan control.

You can get its view of the world from the BMC via some (incredibly user friendly) poking at DBus:

busctl get-property org.openbmc.status.IPL /org/openbmc/status/IPL org.openbmc.status.IPL current_status; busctl get-property org.openbmc.status.IPL /org/openbmc/status/IPL org.openbmc.status.IPL current_istep

Which if you just have the Hostboot patch in (like I first did) you end up with:

s "IPL_RUNNING"
s "21,3"

Which is where Hostboot exits the IPL process (as you see on the screen) and hands over to skiboot. But if you start digging through their op-build tree, you find that there’s a signal_linux_start_complete script which calls pnv-lpc to write two values to LPC ports 0x81 and 0x82. The pnv-lpc utility is the external/lpc/ binary from skiboot, and these two ports are the “extended lpc port 80h” state.

So, to get back fan control? First, build the lpc utility:

git clone git@github.com:open-power/skiboot.git
cd skiboot/external/lpc
make

and then poke the magic values of “IPL complete and linux running”:

$ sudo ./lpc io 0x81.b=254
[io] W 0x00000081.b=0xfe
$ sudo ./lpc io 0x82.b=254
[io] W 0x00000082.b=0xfe

You get a friendly beep, and then your fans return to sanity.

Of course, for that to work you need to have debugfs mounted, as this pokes OPAL debugfs to do direct LPC operations.
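
If it isn’t already mounted, the usual incantation (run on the host) does the job:

mount -t debugfs none /sys/kernel/debug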

Next up: think of a smarter way to trigger that than “stewart runs it on the command line”. Also next up: work out the better way to determine that fan control should be on and patch the BMC.

December 14, 2019

Stewart Smith: Booting temporary firmware on the Raptor Blackbird

In a future post, I’ll detail how to build my ported-to-upstream Blackbird firmware. Here though, we’ll explore booting some firmware temporarily to experiment.

Step 1: Copy your new PNOR image over to the BMC.
Step 2: …
Step 3: Profit!

Okay, not really. Once you’ve copied over your image, ensure the computer is off, and then you can tell the daemon that provides firmware to the host to use a file backend for it rather than the PNOR chip on the motherboard (i.e. yes, you can boot your system even when the firmware chip isn’t there – although I’ve not literally tried this).

root@blackbird:~# mboxctl --backend file:/tmp/blackbird.pnor 
SetBackend: Success
root@blackbird:~# obmcutil poweron

If we look at the serial console (ssh to the BMC port 2200) we’ll see Hostboot start, realise there’s newer SBE code, flash it, and reboot:

--== Welcome to Hostboot hostboot-b284071/hbicore.bin ==--

  3.02606|secure|SecureROM valid - enabling functionality
  5.14678|Booting from SBE side 0 on master proc=00050000
  5.18537|ISTEP  6. 5 - host_init_fsi
  5.47985|ISTEP  6. 6 - host_set_ipl_parms
  5.54476|ISTEP  6. 7 - host_discover_targets
  6.56106|HWAS|PRESENT> DIMM[03]=8080000000000000
  6.56108|HWAS|PRESENT> Proc[05]=8000000000000000
  6.56109|HWAS|PRESENT> Core[07]=1511540000000000
  6.61373|ISTEP  6. 8 - host_update_master_tpm
  6.61529|SECURE|Security Access Bit> 0x0000000000000000
  6.61530|SECURE|Secure Mode Disable (via Jumper)> 0x8000000000000000
  6.61543|ISTEP  6. 9 - host_gard
  7.20987|HWAS|FUNCTIONAL> DIMM[03]=8080000000000000
  7.20988|HWAS|FUNCTIONAL> Proc[05]=8000000000000000
  7.20989|HWAS|FUNCTIONAL> Core[07]=1511540000000000
  7.21299|ISTEP  6.11 - host_start_occ_xstop_handler
  8.28965|ISTEP  6.12 - host_voltage_config
  8.47973|ISTEP  7. 1 - mss_attr_cleanup
  9.07674|ISTEP  7. 2 - mss_volt
  9.35627|ISTEP  7. 3 - mss_freq
  9.63029|ISTEP  7. 4 - mss_eff_config
 10.35189|ISTEP  7. 5 - mss_attr_update
 10.38489|ISTEP  8. 1 - host_slave_sbe_config
 10.45332|ISTEP  8. 2 - host_setup_sbe
 10.45450|ISTEP  8. 3 - host_cbs_start
 10.45574|ISTEP  8. 4 - proc_check_slave_sbe_seeprom_complete
 10.48675|ISTEP  8. 5 - host_attnlisten_proc
 10.50338|ISTEP  8. 6 - host_p9_fbc_eff_config
 10.50771|ISTEP  8. 7 - host_p9_eff_config_links
 10.53338|ISTEP  8. 8 - proc_attr_update
 10.53634|ISTEP  8. 9 - proc_chiplet_fabric_scominit
 10.55234|ISTEP  8.10 - proc_xbus_scominit
 10.56202|ISTEP  8.11 - proc_xbus_enable_ridi
 10.57788|ISTEP  8.12 - host_set_voltages
 10.59421|ISTEP  9. 1 - fabric_erepair
 10.65877|ISTEP  9. 2 - fabric_io_dccal
 10.66048|ISTEP  9. 3 - fabric_pre_trainadv
 10.66665|ISTEP  9. 4 - fabric_io_run_training
 10.66860|ISTEP  9. 5 - fabric_post_trainadv
 10.67060|ISTEP  9. 6 - proc_smp_link_layer
 10.67503|ISTEP  9. 7 - proc_fab_iovalid
 11.10386|ISTEP  9. 8 - host_fbc_eff_config_aggregate
 11.15103|ISTEP 10. 1 - proc_build_smp
 11.27537|ISTEP 10. 2 - host_slave_sbe_update
 11.68581|sbe|System Performing SBE Update for PROC 0, side 0
 34.50467|sbe|System Rebooting To Complete SBE Update Process
 34.50595|IPMI: Initiate power cycle
 34.54671|Stopping istep dispatcher
 34.68729|IPMI: shutdown complete

One of the improvements is we now get output from the SBE! This means that when we do things like mess up secure boot and non secure boot firmware (I’ll explain why/how this is a thing later), we’ll actually get something useful out of a serial port:

--== Welcome to SBE - CommitId[0x8b06b5c1] ==--
istep 3.19
istep 3.20
istep 3.21
istep 3.22
istep 4.1
istep 4.2
istep 4.3
istep 4.4
istep 4.5
istep 4.6
istep 4.7
istep 4.8
istep 4.9
istep 4.10
istep 4.11
istep 4.12
istep 4.13
istep 4.14
istep 4.15
istep 4.16
istep 4.17
istep 4.18
istep 4.19
istep 4.20
istep 4.21
istep 4.22
istep 4.23
istep 4.24
istep 4.25
istep 4.26
istep 4.27
istep 4.28
istep 4.29
istep 4.30
istep 4.31
istep 4.32
istep 4.33
istep 4.34
istep 5.1
istep 5.2
SBE starting hostboot

And then we’re back into normal Hostboot boot (which we’ve all seen before) and end up at a newer petitboot!

Petitboot 1.11 on a Raptor Blackbird

One notable absence from that screenshot is my installed Fedora is missing. This is because there appears to be a bug in the 5.3.7 kernel that’s currently upstream, and if we drop to the shell and poke at lspci and dmesg, we can work out what could be the culprit:

Exiting petitboot. Type 'exit' to return.
You may run 'pb-sos' to gather diagnostic data
No password set, running as root. You may set a password in the System Configuration screen.
# lspci
0000:00:00.0 PCI bridge: IBM Device 04c1
0001:00:00.0 PCI bridge: IBM Device 04c1
0001:01:00.0 Non-Volatile memory controller: Intel Corporation Device f1a8 (rev 03)
0002:00:00.0 PCI bridge: IBM Device 04c1
0002:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller (rev 11)
0003:00:00.0 PCI bridge: IBM Device 04c1
0003:01:00.0 USB controller: Texas Instruments TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller (rev 02)
0004:00:00.0 PCI bridge: IBM Device 04c1
0004:01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0004:01:00.1 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0004:01:00.2 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0005:00:00.0 PCI bridge: IBM Device 04c1
0005:01:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
0005:02:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
# dmesg|grep -i nvme
[    2.991038] nvme nvme0: pci function 0001:01:00.0
[    2.991088] nvme 0001:01:00.0: enabling device (0140 -> 0142)
[    3.121799] nvme nvme0: Identify Controller failed (19)
[    3.121802] nvme nvme0: Removing after probe failure status: -5
# uname -a
Linux skiroot 5.3.7-openpower1 #2 SMP Sat Dec 14 09:06:20 PST 2019 ppc64le GNU/Linux

If for some reason the device didn’t show up in lspci, then I’d look at the skiboot firmware log, which is /sys/firmware/opal/msglog.

Looking at upstream stable kernel patches, it seems like 5.3.8 has an interesting-looking patch when you realize that ppc64le uses a 64k page size:

commit efac0f186ea654e8389f5017c7f643ef48cb4b93
Author: Kevin Hao <haokexin@gmail.com>
Date:   Fri Oct 18 10:53:14 2019 +0800

    nvme-pci: Set the prp2 correctly when using more than 4k page
    
    commit a4f40484e7f1dff56bb9f286cc59ffa36e0259eb upstream.
    
    In the current code, the nvme is using a fixed 4k PRP entry size,
    but if the kernel use a page size which is more than 4k, we should
    consider the situation that the bv_offset may be larger than the
    dev->ctrl.page_size. Otherwise we may miss setting the prp2 and then
    cause the command can't be executed correctly.
    
    Fixes: dff824b2aadb ("nvme-pci: optimize mapping of small single segment requests")
    Cc: stable@vger.kernel.org
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Kevin Hao <haokexin@gmail.com>
    Signed-off-by: Keith Busch <kbusch@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

So, time to go try 5.3.8. My yaks are getting quite smooth.

Oh, and when you’re done with your temporary firmware, either fiddle with mboxctl or restart the systemd service for it, or reboot your BMC or… well, I gotta leave you something to work out on your own :)

December 09, 2019

etbe: systemd-nspawn and Private Networking

Currently there are two things I want to do with my PC at the same time: one is watching streaming services like ABC iView (which won’t run from non-Australian IP addresses) and the other is torrenting over a VPN. I had considered doing something ugly with iptables to try and get routing done on a per-UID basis but that seemed too difficult. At the time I wasn’t aware of the ip rule add uidrange [1] option. So setting up a private networking namespace with a systemd-nspawn container seemed like a good idea.
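
For the record, the uidrange approach would have looked something like this (the table number, UID and VPN interface name are hypothetical):

# send all traffic from UID 1001 to routing table 100
ip rule add uidrange 1001-1001 table 100
# table 100 routes everything via the VPN interface
ip route add default dev tun0 table 100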

Chroot Setup

For the chroot (which I use as a slang term for a copy of a Linux installation in a subdirectory) I used a btrfs subvol that’s a snapshot of the root subvol. The idea is that when I upgrade the root system I can just recreate the chroot with a new snapshot.

To get this working I created files in the root subvol which are used for the container.

I created a script like the following named /usr/local/sbin/container-sshd to launch the container. It sets up the networking and executes sshd. The systemd-nspawn program is designed to launch init, but that’s not required; I prefer to just launch sshd so there’s only one running process in a container that’s not being actively used.

#!/bin/bash

# restorecon commands only needed for SE Linux
/sbin/restorecon -R /dev
/bin/mount none -t tmpfs /run
/bin/mkdir -p /run/sshd
/sbin/restorecon -R /run /tmp
/sbin/ifconfig host0 10.3.0.2 netmask 255.255.0.0
/sbin/route add default gw 10.2.0.1
exec /usr/sbin/sshd -D -f /etc/ssh/sshd_torrent_config

How to Launch It

To setup the container I used a command like “/usr/bin/systemd-nspawn -D /subvols/torrent -M torrent --bind=/home -n /usr/local/sbin/container-sshd”.

First I had tried the --network-ipvlan option which creates a new IP address on the same MAC address. That gave me an interface iv-br0 on the container that I could use normally (br0 being the bridge used in my workstation as its primary network interface). The IP address I assigned to that was in the same subnet as br0, but for some reason that’s unknown to me (maybe an interaction between bridging and network namespaces) I couldn’t access it from the host; I could only access it from other hosts on the network. I then tried the --network-macvlan option (to create a new MAC address for virtual networking), but that had the same problem with accessing the IP address from the local host outside the container, as well as problems with MAC redirection to the primary MAC of the host (again maybe an interaction with bridging).

Then I tried just the “-n” option which gave it a private network interface. That created an interface named ve-torrent on the host side and one named host0 in the container. Using ifconfig and route to configure the interface in the container before launching sshd is easy. I haven’t yet determined a good way of configuring the host side of the private network interface automatically.
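
For manual testing, the host side can be set up with something like the following sketch (addresses hypothetical; the key point is that the address the container uses as its default gateway has to end up on the host end of the pair):

ip link set ve-torrent up
# a /15 here covers both the 10.2.0.0/16 gateway address and the
# container's 10.3.0.0/16 address from the script above
ip addr add 10.2.0.1/15 dev ve-torrent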

I had to use a bind for /home because /home is a subvol and therefore doesn’t get included in the container by default.

How it Works

Now when it’s running I can just “ssh -X” to the container and then run graphical programs that use the VPN while at the same time running graphical programs on the main host that don’t use the VPN.

Things To Do

Find out why --network-ipvlan and --network-macvlan don’t work with communication from the same host.

Find out why --network-macvlan gives errors about MAC redirection when pinging.

Determine a good way of setting up the host side after the systemd-nspawn program has run.

Find out if there are better ways of solving this problem; this way works but might not be ideal. Comments welcome.

December 02, 2019

LUV: LUV December 2019 Main Meeting: A review of Linux and Open Source in 2019

Dec 3 2019 19:00
Dec 3 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

NOTE: The library closes at 7pm so arrivals after that time will need to contact Andrew on (0421) 775 358 or any other attendee for admission.

Speaker:  Alexar Pendashteh

A review of Linux and Open Source in 2019

This is the last main meeting of LUV in 2019!
In this meeting we are going to have a look at what 2019 had for Linux and Open Source and have a peek into what's coming up.
This event will be mainly a social event, with group discussion followed by a dinner in a nearby restaurant or cafe!

Many of us like to go for dinner nearby in Lygon St. after the meeting. Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

December 3, 2019 - 19:00

November 18, 2019

etbe: 4K Monitors

A couple of years ago a relative who uses a Linux workstation I support bought a 4K (4096*2160 resolution) monitor. That meant that I had to get 4K working, which was 2 years of pain for me and probably not enough benefit for them to justify it. Recently I had the opportunity to buy some 4K monitors at a low enough price that it didn’t make sense to refuse so I got to experience it myself.

The Need for 4K

I’m getting older and my vision is decreasing as expected. I recently got new glasses and got a pair of reading glasses as a reduced ability to change focus is common as you get older. Unfortunately I made a mistake when requesting the focus distance for the reading glasses and they work well for phones, tablets, and books but not for laptops and desktop computers. Now I have the option of either spending a moderate amount of money to buy a new pair of reading glasses or just dealing with the fact that laptop/desktop use isn’t going to be as good until the next time I need new glasses (sometime 2021).

I like having lots of terminal windows on my desktop. For common tasks I might need a few terminals open at a time and if I get interrupted in a task I like to leave the terminal windows for it open so I can easily go back to it. Having more 80*25 terminal windows on screen increases my productivity. My previous monitor was 2560*1440 which for years had allowed me to have a 4*4 array of non-overlapping terminal windows as well as another 8 or 9 overlapping ones if I needed more. 16 terminals allows me to ssh to lots of systems and edit lots of files in vi. Earlier this year I had found it difficult to read the font size that previously worked well for me so I had to use a larger font that meant that only 3*3 terminals would fit on my screen. Going from 16 non-overlapping windows and an optional 8 overlapping to 9 non-overlapping and an optional 6 overlapping is a significant difference. I could get a second monitor, and I won’t rule out doing so at some future time. But it’s not ideal.

When I got a 4K monitor working properly I found that I could go back to a smaller font that allowed 16 non overlapping windows. So I got a real benefit from a 4K monitor!

Video Hardware

Version 1.0 of HDMI released in 2002 only supports 1920*1080 (FullHD) resolution. Version 1.3 released in 2006 supported 2560*1440. Most of my collection of PCIe video cards have a maximum resolution of 1920*1080 in HDMI, so it seems that they only support HDMI 1.2 or earlier. When investigating this I wondered what version of PCIe they were using; the command “dmidecode |grep PCI” gives that information. It seems that at least one PCIe video card supports PCIe 2 (released in 2007) but not HDMI 1.3 (released in 2006).

Many video cards in my collection support 2560*1440 with DVI but only 1920*1080 with HDMI. As 4K monitors don’t support DVI input, when initially using a 4K monitor I was running at 1920*1080 instead of the 2560*1440 I had with my old monitor.

I found that one of my old video cards supports 4K resolution: it has an NVidia GT630 chipset (here’s the page with specifications for that chipset [1]). It seems that because I have a video card with 2G of RAM I have the “Kepler” variant which supports 4K resolution. I got the video card in question because it uses PCIe*8 and I had a workstation that only had PCIe*8 slots and I didn’t feel like cutting a card down to size (which is apparently possible but not recommended); it is also fanless (quiet), which is handy if you don’t need a lot of GPU power.

A couple of months ago I checked the cheap video cards at my favourite computer store (MSY) and none of the cheap ones supported 4K resolution. Now it seems that all the video cards they sell could support 4K; by “could” I mean that a Google search of the chipset says that it’s possible, but of course some surrounding chips could fail to support it.

The GT630 card is great for text, but the combination of it with an i5-2500 CPU (rating 6353 according to cpubenchmark.net [3]) doesn’t allow playing Netflix full-screen, and 1920*1080 videos scaled to full-screen sometimes get mplayer messages about the CPU being too slow. I don’t know how much of this is due to the CPU and how much is due to the graphics hardware.

When trying the same system with an ATI Radeon R7 260X/360 graphics card (16* PCIe and draws enough power to need a separate connection to the PSU) the Netflix playback appears better but mplayer seems no better.

I guess I need a new PC to play 1920*1080 video scaled to full-screen on a 4K monitor. No idea what hardware will be needed to play actual 4K video. Comments offering suggestions in this regard will be appreciated.

Software Configuration

For GNOME apps (which you will probably run even if like me you use KDE for your desktop) you need to run commands like the following to scale menus etc:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "[{'Gdk/WindowScalingFactor', <2>}]"
gsettings set org.gnome.desktop.interface scaling-factor 2

For KDE run the System Settings app, go to Display and Monitor, then go to Displays and Scale Display to scale things.

The Arch Linux Wiki page on HiDPI [2] is good for information on how to make apps work with high DPI (or regular screens for people with poor vision).
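
As one example from that page, X-level font scaling can be set in ~/.Xresources (192 being double the default 96 DPI):

Xft.dpi: 192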

Conclusion

4K displays are still rather painful, both in hardware and software configuration. For serious computer use it’s worth the hassle, but it doesn’t seem to be good for general use yet. 2560*1440 is pretty good and works with much more hardware and requires hardly any software configuration.

November 16, 2019

Dave Hall: DrupalSouth Diversity Scholarship Winner Announced

A few weeks ago we announced our diversity scholarship for DrupalSouth. Before announcing the winner I want to talk a bit about our experience doing this for the first time.

DrupalSouth is the largest Drupal event held in Oceania every year. It provides a great marketing opportunity for businesses wanting to promote their products and services to the Drupal community. Dave Hall Consulting planned to sponsor DrupalSouth to promote our new training business - Getting It Live training. By the time we got organised all of the (affordable) sponsorship opportunities had gone. After considering various opportunities around the event we felt the best way of investing a similar amount of money and giving something back to the community was through a diversity scholarship.

The community provided positive feedback about the initiative. However, despite the enthusiasm and working our networks to get a range of applicants, we only ended up with 7 applicants. They were all guys. One applicant was from Australia, the rest were from overseas. About half the applicants dropped out when contacted to confirm that they could cover their own travel and visa expenses.

We are likely to offer other scholarships in the future. We will start earlier and explore other channels for promoting the program.

The scholarship has been awarded to Yogesh Ingale, from Mumbai, India. Over the last 3 years Yogesh has been employed by Tata Consultancy Services’ digital operations team as a DevOps Engineer. During this time he has worked with Drupal, Cloud Computing, Python and Web Technologies. Yogesh is interested in automating processes. When he’s not working, Yogesh likes to travel, automate things and write blog posts. Disclaimer: I know Yogesh through my work with one of my clients. Sometimes the Drupal community feels pretty small.

Congratulations Yogesh! I am looking forward to seeing you in Hobart.

If you want to meet Yogesh before DrupalSouth, we still have some seats available for our 2 day git training course that’s running on 25-26 November. If you won’t be in Hobart, contact us to discuss your training needs.

November 10, 2019

Julien Goodwin: Some thoughts on Storytelling as an engineering teaching tool

Every week at work on Wednesday afternoons we have the SRE ops review, a relaxed two hour affair where SREs (& friends of, not all of whom are engineers) share interesting tidbits that have happened over the last week or so; this might be a great success, an outage, a weird case, or even a thorny unsolved problem. Usually these relate to a service the speaker is oncall for, or perhaps a dependency or customer service, but we also discuss major incidents both internal & external. Sometimes a recent issue will remind one of the old-guard (of which I am very much now a part) of a grand old story and we share those too.

Often the discussion continues well into the evening as we decant to one of the local pubs for dinner & beer, sometimes chatting away until closing time (probably quite regularly actually, but I'm normally long gone).

It was at one of these nights at the pub two months ago (sorry!), that we ended up chatting about storytelling as a teaching tool, and a colleague asked an excellent question, that at the time I didn't have a ready answer for, but I've been slowly pondering, and decided to focus on over an upcoming trip.

As I start to write the first draft of this post I've just settled in for cruise on my first international trip in over six months[1], popping over to Singapore for the Melbourne Cup weekend, and whilst I'd intended this to be a holiday, I'm so terrible at actually having a holiday[2] that I've ended up booking two sessions of storytelling time, where I present the history of Google's production networks (for those of you reading this who are current or former engineering Googlers, similar to Traffic 101). It's with this perspective of planning, and having run those sessions, that I'm going to try and answer the question that I was asked.

Or at least, I'm going to split up the question I was asked and answer each part.

"What makes storytelling good"

On its own this is hard to answer; there are aspects that can help, such as good presentation skills (ideally keeping to spoken word, but simple graphs, diagrams & possibly photos can help), but a good story can be told in a dry technical monotone and still be a good story. That said, as with the rest of these items, charisma helps.

"What makes storytelling interesting"

In short, a hook or connection to the audience, for a lot of my infrastructure related outage stories I have enough context with the audience to be able to tie the impact back in a way that resonates with a person. For larger disparate groups shared languages & context help ensure that I'm not just explaining to one person.

In these recent sessions one was with a group of people who work in our Singapore data centre, in that session I focused primarily on the history & evolution of our data centre fabrics, giving them context to understand why some of the (at face level) stranger design decisions have been made that way.

The second session was primarily people involved in the deployment side of our backbone networks, and so I focused more on the backbones, again linking with knowledge the group already had.

"What makes storytelling entertaining"

Entertaining storytelling is a matter of style, skills and charisma, and while many people can prepare (possibly with help) an entertaining talk, the ability to tell an entertaining story off the cuff is more of a skill, luckily for me, one I seem to do ok with. Two things that can work well are dropping in surprises, and where relevant some level of self-deprecation, however both need to be done very carefully.

Surprises can work very well when telling a story chronologically "I assumed X because Y, <five minutes of waffling>, so it turned out I hadn't proved Y like I thought, so it wasn't X, it was Z", they can help the audience to understand why a problem wasn't solved so easily, and explaining "traps for young players" as Dave Jones (of the EEVblog) likes to say can themselves be really helpful learning elements. Dropping surprises that weren't surprises to the story's protagonist generally only works if it's as a punchline of a joke, and even then it often doesn't.

Self-deprecation is an element that I've often used in the past, however more recently I've called others out on using it, and have been trying to reduce it myself, depending on the audience you might appear as a bumbling success or stupid, when the reality may be that nobody understood the situation properly, even if someone should have. In the ops review style of storytelling, it can also lead to a less experienced audience feeling much less confident in general than they should, which itself can harm productivity and careers.

If the audience already had relevant experience (presenting a classic SRE issue to other SREs for example, a network issue to network engineers, etc.) then audience interaction can work very well for engagement. "So the latency graph for database queries was going up and to the right, what would you look at?" This is also similar to one of the ways to run a "wheel of misfortune" outage simulation.

"What makes storytelling useful & informative at the same time"

In the same way as interest, to make storytelling useful & informative for the audience involves consideration for the audience, as a presenter if you know the audience, at least in broad strokes this helps. As I mentioned above, when I presented my talk to a group of datacenter-focused people I focused on the DC elements, connecting history to the current incarnations; when I presented to a group of more general networking folk a few days later, I focused more on the backbones and other elements they'd encountered.

Don't assume that a story will stick wholesale; just leaving a few keywords, or even just a vague memory with a few key words they can go digging for, can make all the difference in the world. Repetition works too: sharing many interesting stories that share the same moral (for an example, one of the ops review classics is demonstrations about how lack of exponential backoff can make recovery from outages hard) means that hearing this over dozens of different stories over weeks (or months, or years...) eventually seeps in as something to not even question, having been demonstrated as such an obvious foundation of good systems.

When I'm speaking to an internal audience I'm happy if they simply remember that I (or my team) exist and might be worth reaching out to in future if they have questions.

Lastly, storytelling is a skill you need to practice: whether it's a keynote presentation in front of a few thousand people, or just telling tall tales to some mates at the pub, practice helps, and eventually many of the elements I've mentioned above become almost automatic. As can probably be seen from this post I could do with some more practice on the written side.

1: As I write these words I'm aboard a Qantas A380 (QF1) flying towards Singapore, the book I'm currently reading, of all things about mechanical precision ("Exactly: How Precision Engineers Created the Modern World" or as it has been retitled for paperback "The Perfectionists"), has a chapter themed around QF32, the Qantas A380 that notoriously had to return to Singapore after an uncontained engine failure. Both the ATSB report on the incident and the captain Richard de Crespigny's book QF32 are worth reading. I remember I burned though QF32 one (very early) morning when I was stuck in GlobalSwitch Sydney waiting for approval to repatch a fibre, one of the few times I've actually dealt with the physical side of Google's production networks, and to date the only time the fact I live just a block from that facility has been used at all sensibly.

2: To date, I don't think I've ever actually had a holiday that wasn't organised by family, or attached to some conference, event or work travel I'm attending. This trip is probably the closest I've ever managed (roughly equal to my burnout trip to Hawaii in 2014), and even then I've ruined it by turning two of the three weekdays into work. I'm much better at taking breaks that simply involve not leaving home or popping back to stay with family in Melbourne.

November 03, 2019

etbe: KMail Crashing and LIBGL

One problem I’ve had recently on two systems with NVidia video cards is KMail crashing (SEGV) while reading mail. Sometimes it goes for months without having problems, and then it gets into a state where reading a few messages (or sometimes reading one particular message) causes a crash. The crash happens somewhere in the Mesa library stack.

In an attempt to investigate this I tried running KMail via ssh (as that precludes a lot of the GL stuff), but that crashed in a different way (I filed an upstream bug report [1]).

I have discovered a workaround for this issue: I set the environment variable LIBGL_ALWAYS_SOFTWARE=1 and then things work. At this stage I can’t be sure exactly where the problems are. As it’s certain KMail operations that trigger it I think that’s evidence of problems originating in KMail, but the end result when it happens often includes a kernel error log so there’s probably a problem in the Nouveau driver. I spent quite a lot of time investigating this, including recompiling most of the library stack with debugging mode, and didn’t get much of a positive result. Hopefully putting it out there will help the next person who has such issues.

Here is a list of environment variables that can be set to debug LIBGL issues (strangely I couldn’t find documentation on this when Googling it). If you are stuck with a problem related to LIBGL you can try setting each of these to “1” in turn and see if it makes a difference. That can either be for the purpose of debugging a problem or creating a workaround that allows you to run the programs you need to run. I don’t know why GL is required to read email.

LIBGL_DIAGNOSTIC
LIBGL_ALWAYS_INDIRECT
LIBGL_ALWAYS_SOFTWARE
LIBGL_DRI3_DISABLE
LIBGL_NO_DRAWARRAYS
LIBGL_DEBUG
LIBGL_DRIVERS_PATH
LIBGL_DRIVERS_DIR
LIBGL_SHOW_FPS
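
For example, the workaround above can be applied to a single program rather than globally:

LIBGL_ALWAYS_SOFTWARE=1 kmail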

November 01, 2019

LUV: LUV November 2019 Workshop: Replacing Windows 7 with Linux

Nov 16 2019 12:30
Nov 16 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Replacing Windows 7 with Linux

What to do with your Windows 7 PC when its EOL arrives in January next year?  Install Linux of course!  Wen Lin will lead this talk with an intro, then get everyone to join in for a Q&A - let's share all the great ideas (and personal experience) on how to install a variety of Linux Distros to replace one's obsolete Win7 - and breathe new life into one's PC.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

November 16, 2019 - 12:30

October 29, 2019

Dave Hall: Buying an Apple Watch for 7USD

For DrupalCon Amsterdam, Srijan ran a competition with the prize being an Apple Watch 5. It was a fun idea. Try to get a screenshot of an animated GIF slot machine showing 3 matching logos and tweet it.

Try your luck at @DrupalConEur Catch 3 in a row and win an #AppleWatchSeries5. To participate, get 3 of the same logos in a series, grab a screenshot and share it with us in the comment section below. See you in Amsterdam! #SrijanJackpot #ContestAlert #DrupalCon

I entered the competition.

I managed to score 3 of the no logo logos. That's gotta be worth something, right? #srijanJackpot

The competition had a flaw. The winner was selected based on likes.

After a week I realised that I wasn’t going to win. Others were able to garner more likes than I could. Then my hacker mindset kicked in.

I thought I’d find out how much 100 likes would cost. A quick search revealed likes cost pennies apiece. At this point I decided that instead of buying an easy win, I’d buy a ridiculous number of likes. 500 likes only cost 7USD. Having a blog post about gaming the system was a good enough prize for me.

Receipt: 500 likes for 7USD

I was unsure how things would go. I was supposed to get my 500 likes across 10 days. For the first 12 hours I got nothing. I thought I’d lost my money on a scam. Then the trickle of likes started. Every hour I’d get 2-3 likes, mostly from Eastern Europe. Every so often I’d get a retweet or a bonus like on a follow up comment. All up I got over 600 fake likes. Great value for money.

Today Srijan awarded me the watch. I waited until after they’d finished taking photos before coming clean. Pics or it didn’t happen and all that. They insisted that I still won the competition without the bought likes.

The prize being handed over

Think very carefully before launching a competition that involves social media engagement. There’s a whole fake engagement economy.

October 04, 2019

Dave Hall: Announcing the DrupalSouth Diversity Scholarship

Over the years I have benefited greatly from the generosity of the Drupal Community. In 2011 people sponsored me to write lines of code to get me to DrupalCon Chicago.

Today Dave Hall Consulting is a very successful small business. We have contributed code, time and content to Drupal. It is time for us to give back in more concrete terms.

We want to help someone from an under represented group take their career to the next level. This year we will provide a Diversity Scholarship for one person to attend DrupalSouth, our 2 day Gettin’ Git training course and 5 nights at the conference hotel. This will allow this person to attend the premier Drupal event in the region while also learning everything there is to know about git.

To apply for the scholarship, fill out the form by 23:59 AEST 19 October 2019 to be considered. (Extended from 12 October)

The winner has been announced.

June 29, 2019

etbe: Long-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs, the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today, apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it) there’s nothing more than I need from a tablet. Strangely I even get good battery life out of it, I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as a SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. The overall expense is about $1.66 per day as I’m on a $20 per month pre-paid plan with Aldi Mobile.

Saving Money

A laptop is very important to me, the amounts of money that I’m spending don’t reflect that. But it seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet, 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

April 27, 2019

Julien Goodwin: Building new pods for the Spectracom 8140 using modern components

I've mentioned a bunch of times on the time-nuts list that I'm quite fond of the Spectracom 8140 system for frequency distribution. For those not familiar with it, it's simply running a 10MHz signal against a 12v DC power feed so that line-powered pods can tap off the reference frequency and use it as an input to either a buffer (10MHz output pods), decimation logic (1MHz, 100kHz etc.), or a full synthesizer (Versa-pods).

It was only in October last year that I got a house frequency standard going using an old Efratom FRK-LN which now provides the reference; I'd use a GPSDO, but I live in a ground floor apartment without a usable sky view, which of course makes it hard to test some of the GPS projects I'm doing. Despite living in a tiny apartment I have test equipment in two main places, so the 8140 is a great solution to allow me to lock all of them to the house standard.


(The rubidium is in the chunky aluminium chassis underneath the 8140)

Another benefit of the 8140 is that many modern pieces of equipment (such as my [HP/Agilent/]Keysight oscilloscope) have a single connector for reference frequency in/out, and should the external frequency ever go away it will switch back to its internal reference, but also send that back out the connector, which could lead to other devices sharing the same signal switching to it. The easy way to avoid that is to use a dedicated port from a distribution amplifier for each device like this, which works well enough until you have this situation in multiple locations.

As previously mentioned the 8140 system uses pods to add outputs, and while these pods are still available quite cheaply used on eBay (as of this writing, for as low as US$8, but ~US$25/pod has been common for a while), recently the cost of shipping to Australia has gone up to the point where I started to plan making my own.

By making my own pods I also get to add features that the original pods didn't have[1]. I started with a quad-output pod with optional internal line termination. This allows me to have feeds for multiple devices with the annoying behaviour I mentioned earlier. The enclosure is a Pomona model 4656, with the board designed to slot in, and offer pads for the BNC pins to solder to for easy assembly.



This pod uses a Linear Technology (now Analog Devices) LTC6957 buffer for the input stage, replacing a discrete transistor & logic gate combined input stage in the original devices. The most notable change is that this stage works reliably down to -30dBm input (possibly further, I couldn't test beyond that), whereas the original pods stop working right around -20dBm.

As it turns out, although it can handle lower input signal levels, in other ways including power usage it seems very similar. One notable downside is the chip tops out at 4v absolute maximum input, so a separate regulator is used just to feed this chip. The main regulator has also been changed from a 7805 to an LD1117 variant.

On this version the output stage is the same TI 74S140 dual 4-input NAND gate as was used on the original pods, just in SOIC form factor.

As with the next board there is one error on the board, the wire loop that forms the ground connection was intended to fit a U-type pin header, however the footprint I used on the boards was just too tight to allow the pins through, so I've used some thin bus wire instead.



The second major variant I designed was a combo version, allowing sine & square outputs by just switching a jumper, or isolated[2] or line-regenerator (8040TA from Spectracom) versions with a simple sub-board containing just an inductor (TA) or 1:1 transformer (isolated).



This is the second revision of that board, where the 74S140 has been replaced by a modern TI 74LVC1G17 buffer. This version of the pod, set for sine output, uses almost exactly 30mA of current (since both the old & new pods use linear supplies that's the most sensible unit), whereas the original pods are right around 33mA. The empty pads at the bottom-left are simply placeholders for 2 100 ohm resistors to add 50 ohm line termination if desired.

The board fits into the Pomona 2390 "Size A" enclosures, or for the isolated version the Pomona 3239 "Size B". This is the reason the BNC connectors have to be extended to reach the board, on the isolated boxes the BNC pins reach much deeper into the enclosure.

If the jumpers were removed, plus the smaller buffer it should be easy to fit a pod into the Pomona "Miniature" boxes too.



I was also due to create some new personal businesscards, so I arranged the circuit down to a single layer (the only jumper is the requirement to connect both ground pins on the connectors) and merged it with some text converted to KiCad footprints to make a nice card on some 0.6mm PCBs. The paper on that photo is covering the link to the build instructions, which weren't written at the time (they're *mostly* done now, I may update this post with the link later).

Finally, while I was out travelling at the start of April my new (to me) HP 4395A arrived so I've finally got some spectrum output. The output is very similar between the original and my version, with the major notable difference being that my version is 10dB worse at the third harmonic. I lack the equipment (and understanding) to properly measure phase noise, but if anyone in AU/NZ wants to volunteer their time & equipment for an afternoon I'd love an excuse for a field trip.



Spectrum with input sourced from my house rubidium (natively a 5MHz unit) via my 8140 line. Note that despite saying "ExtRef" the analyzer is synced to its internal 10811 (which is an optional unit, and uses an external jumper, hence the display note).



Spectrum with input sourced from the analyzer's own 10811, and power from the DC bias generator also from the analyzer.


1: Or at least I didn't think they had, I've since found out that there was a multi output pod, and one is currently in the post heading to me.
2: An option on the standard Spectracom pods, albeit a rare one.

January 13, 2019

Julien Goodwin: Transport security for BGP, AKA BGP-STARTTLS, a proposal

Several days ago, inspired in part by an old work mail thread being resurrected I sent this image as a tweet captioned "The state of BGP transport security.":



The context of the image for those not familiar with it is this image about noSQL databases.

This triggered a bunch of discussion, with multiple people saying "so how would *you* do it", and we'll get to that (or for the TL;DR skip to the bottom), but first some background.

The tweet is a reference to the BGP protocol the Internet uses to exchange routing data between (and within) networks. This protocol (unless inside a separate container) is never encrypted, and can only be authenticated (in practice) by a TCP option known as TCP-MD5 (standardised in RFC2385). The BGP protocol itself has no native encryption or authentication support. Since routing data can often be inferred by the packets going across a link anyway, this has led to fixing it not being a priority.

Transport authentication & encryption is a distinct issue from validation of the routing data transported by BGP, an area already being worked on by the various RPKI projects, eventually transport authentication may be able to benefit from some of the work done by those groups.

TCP-MD5 is quite limited, and while generally supported by all major BGP implementations it has one major limitation that makes it particularly annoying, in that it takes a single key, making key migration difficult (and in many otherwise sensible topologies, impossible without impact). Being a TCP option is also a pain, increasing fragility.

At the time of its introduction TCP-MD5 gave two main benefits. The first was to have some basic authentication beyond the basic protocol (for which the closest element in the protocol is the validation of peer-as in the OPEN message, and a mismatch will helpfully tell you who the far side was looking for), plus making it harder to interfere with the TCP session, which on many of the TCP implementations of the day was easier than it should have been. Time, however, has marched on: protection against session interference from non-MITM attackers is no longer needed, and the major silent MITM case of Internet Exchanges using hubs is long obsolete. In addition, partly due to the pain associated with changing keys, many networks have a "default" key they will use when turning up a peering session; these keys are often so well known for major networks that they've been shared on public mailing lists, eliminating what little security benefit TCP-MD5 still brings.

This has been known to be a problem for many years, and the proposed replacement, TCP-AO (The TCP Authentication Option), was standardised in 2010 as RFC5925. However, to the best of my knowledge, eight years later no mainstream BGP implementation supports it. As it too is a TCP option, not only does it still have many of the downsides of MD5, but major OS kernels are much less likely to implement new TCP options (indeed, an OS TCP maintainer commenting as much on the thread I mentioned above is what kicked off my thinking).

TCP, the wire format, is in practice unchangeable. This is one of the major reasons for QUIC, the TCP replacement over which the soon-to-be-standardised HTTP/3 will run, so for universal deployment any proposal that requires changes to TCP is a non-starter.

Any solution must be implementable & deployable.
  • Implementable - BGP implementations must be able to implement it, and do so correctly, ideally with a minimum of effort.
  • Deployable - Networks need to be able to deploy it, and when authentication issues occur, error messages should be no worse than with standard BGP (this is an area most TCP-MD5 implementations fail at; of those I've used, JunOS is a notable exception, and Linux required kernel changes for it to even be *possible* to debug)


Ideally any security-critical code should already exist in a standardised form, with multiple widely-used implementations.

Fortunately for us, that exists in the form of TLS. IPSEC, while it exists, fails the deployability test: as almost anyone who's ever had the misfortune of trying to get IPSEC working between different organisations using different implementations can attest, it can usually be made to work, but nowhere near as easily as TLS.

Discussions about the use of TLS for this purpose have happened before, but always quickly run into the problem of how certificates for this should be managed, and that is still an open problem; potentially the work on RPKI may eventually provide a solution here, but until that time we have a workable alternative in the form of TLS-PSK (first standardised in RFC4279), a set of TLS modes that allow the use of pre-shared keys instead of certificates (for those wondering, not only does this still exist in TLS 1.3, it's in a better form). For a variety of reasons, not least the lack of real-time clocks in many routers that may not be able to reach an NTP server until BGP is established, PSK modes are still more deployable than certificate verification today. One key benefit of TLS-PSK is that it supports multiple keys, allowing migration to a new key in a significantly less impactful manner.

The most obvious way to support BGP-in-TLS would simply be to listen on a new port (as is done for HTTPS, for example), however there are a few reasons why I don't think such a method is deployable for BGP, primarily the need to update control-plane ACLs, a task that in large networks is often distant from the folk working with BGP, and in small networks may not be understood by any current staff (a situation not dissimilar to the state of TCP). Another option would be protocol multiplexing: do a TLS negotiation if a TLS hello is received, or unencrypted BGP for a BGP OPEN. This would violate the general principle of least astonishment, and would be harder for operators to debug.

Instead I propose a design similar to that used by SMTP (where it is known as STARTTLS): during early protocol negotiation, support for TLS is signalled using a zero-length capability in the BGP OPEN, the endpoints do a TLS negotiation, and the base protocol then continues inside the new TLS tunnel. Since this negotiation happens during the BGP OPEN, other data included in the OPEN does leak: primarily the ASN, but also the various other capabilities supported by the implementation (which could identify the implementation). I suggest that if TLS is required, information in the initial OPEN not be validated, a standard reserved ASN be sent instead, and any other capabilities not strictly required not be sent, with a fresh OPEN containing all the normal information sent inside the TLS session.

Migration from TCP-MD5 is a key point, however not one I can find any good answer for. Some implementations already allow TCP-MD5 to be optional, which would allow an easy upgrade, however such support is rare, and unlikely to become more widespread.

On that topic, allowing TLS to be optional in a consistent manner would be particularly helpful, and is something that I believe SHOULD be supported, to allow cases like currently unauthenticated public peering sessions to be migrated to TLS with minimal impact. Allowing this does open the possibility of a downgrade attack, and makes attacks causing state machine confusion more plausible (an implementation believing it's talking on a TLS-secured session when it isn't).

What do we lose versus TCP-MD5? Some performance; whilst this is not likely to be an issue in most situations, it is likely not an option for anyone still trying to run IPv4 full tables on a Cisco Catalyst 6500 with a Sup720. We also lose the TCP manipulation prevention aspects, however these days those are (IMO) of minimal value in practice. There's also the cost of implementations needing to include a TLS implementation, and whilst nearly every system will have one (at the very least for SSH), it may not already be linked into the routing protocol implementation.

Lastly, my apologies to anyone who has proposed this before, but neither I nor my early reviewers were aware of such a proposal. Should such a proposal already exist, meeting the goals of implementable & deployable, it may be sensible to pick that up instead.

The IETF is often said to work on "rough consensus and running code"; for this proposal, here's what I believe a minimal *actual* demonstration of consensus with code would be:
  • Two BGP implementations, not derived from the same source.
  • Using two TLS implementations, not derived from the same source.
  • Running on two kernels (at the very least, Linux & FreeBSD).


The TL;DR version:
  • Using a zero-length BGP capability in the BGP OPEN message, implementations advertise their support for TLS
    • TLS version MUST be at least 1.3
    • If TLS is required, the AS field in the OPEN MAY be set to a well-known value to prevent information leakage, and other capabilities MAY be removed, however implementations MUST NOT require the TLS capability be the first, last or only capability in the OPEN
    • If TLS is optional (which MUST NOT be the default behaviour), the OPEN MUST (other than the capability) be the same as a session configured for no encryption
  • After the TCP client receives confirmation of TLS support from the TCP server's OPEN message, a TLS handshake begins
    • To make this deployable, TLS-PSK MUST be supported, although the exact configuration is TBD.
    • Authentication-only variants of TLS (e.g. RFC4785) REALLY SHOULD NOT be supported.
    • Standard certificate-based verification MAY be supported, and if supported MUST use client certificates, validating both sides. However, how roots of trust would work for this has not been investigated.
  • Once the TLS handshake completes the BGP state starts over with the client sending a new OPEN
    • Signalling the TLS capability in this OPEN is invalid and MUST be rejected
  • (From here, everything is unchanged from normal BGP)


Magic numbers for development (used in the sketch below):
  • Capability: (to be referred to as EXPERIMENTAL-STARTTLS) 219
  • ASN (for avoiding data leaks in OPEN messages): 123456
    • Yes, this means also sending the 4-octet AS capability. Every implementation that might possibly implement this already supports 4-byte ASNs.
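
To make the wire details concrete, here's a rough sketch (my own illustration, not any existing implementation) of building the initial leak-minimised OPEN using the magic numbers above, with field layouts per RFC 4271, RFC 5492 and RFC 6793:

import struct

STARTTLS_CAP = 219      # EXPERIMENTAL-STARTTLS capability code from above
AS4_CAP = 65            # 4-octet AS number capability (RFC 6793)
AS_TRANS = 23456        # 2-byte placeholder ASN (RFC 6793)
ANON_ASN = 123456       # well-known ASN from above, avoiding leaking the real one

def build_open(hold_time=90, bgp_id=b'\xc0\x00\x02\x01'):
    """Build a BGP OPEN advertising STARTTLS while leaking as little as possible."""
    caps = struct.pack('!BB', STARTTLS_CAP, 0)           # zero-length capability
    caps += struct.pack('!BBI', AS4_CAP, 4, ANON_ASN)    # real ASN waits for the in-TLS OPEN
    params = struct.pack('!BB', 2, len(caps)) + caps     # optional parameter type 2 = Capabilities
    body = (struct.pack('!BHH', 4, AS_TRANS, hold_time)  # version 4, 2-byte ASN field, hold time
            + bgp_id                                     # 4-byte BGP identifier
            + struct.pack('!B', len(params)) + params)
    header = b'\xff' * 16 + struct.pack('!HB', 19 + len(body), 1)  # marker, length, type 1 = OPEN
    return header + body

# Once both OPENs carry the capability, the TCP client starts a TLS handshake
# (TLS-PSK setup elided here, as library support varies), and a fresh,
# fully-populated OPEN is then sent inside the tunnel.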


The key words "MUST (BUT WE KNOW YOU WON'T)", "SHOULD CONSIDER", "REALLY SHOULD NOT", "OUGHT TO", "WOULD PROBABLY", "MAY WISH TO", "COULD", "POSSIBLE", and "MIGHT" in this document are to be interpreted as described in RFC 6919.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC2119.

August 25, 2018

Dave HallAWS Parameter Store

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The downside of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have two services to choose from. As the name suggests, Secrets Manager provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. The service is free for the first 30 days; then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

Like most Amazon services, access to Parameter Store is controlled by IAM.

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, having a key per service per environment becomes difficult to manage. In this case, share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:
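
(A sketch, rather than a canonical policy: the region, account ID and KMS key ID are placeholders to replace with your own.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:ap-southeast-2:111111111111:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:ap-southeast-2:111111111111:key/11111111-1111-1111-1111-111111111111"
    }
  ]
}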

To allow your developers to manage the parameters in dev, you will need a policy that looks like this:
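
(Again a sketch, adding write access on the same hierarchy, with the same placeholder ARNs as above.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:PutParameter",
        "ssm:DeleteParameter",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:ap-southeast-2:111111111111:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:ap-southeast-2:111111111111:key/11111111-1111-1111-1111-111111111111"
    }
  ]
}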

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies. You will find this becomes annoying to manage. Use hierarchies to group your values by application and environment. Within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters.

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your different types, but I found it easiest to encode everything as JSON. When pulling values from the store I JSON-decode them. The downside is that strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "SecureString" to encrypt your values.

Adding parameters via boto3 is straightforward. Once again it is well documented by Amazon.

Finally, you can maintain parameters with a little bit of code. In this example I do it with Python.
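
A minimal sketch of such a script (the parameter values and the alias/my-app-dev KMS alias are made up for illustration; in practice the desired state might come from a file):

import json

import boto3

ssm = boto3.client('ssm')

# Hypothetical desired state for the my-app-dev hierarchy.
PARAMETERS = {
    'DB_HOST': 'db.example.com',
    'DB_PASSWORD': 's3cret',
}

for key, value in PARAMETERS.items():
    ssm.put_parameter(
        Name='/my-app-dev/' + key,
        Value=json.dumps(value),    # everything JSON-encoded, as described above
        Type='SecureString',        # always encrypt
        KeyId='alias/my-app-dev',   # hypothetical KMS key alias
        Overwrite=True,
    )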

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:
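
(A sketch of the pattern; note that GetParametersByPath is paginated, so use the paginator.)

import json

import boto3

ssm = boto3.client('ssm')

def get_config(path='/my-app-dev/'):
    """Fetch every parameter under the path and JSON-decode the values."""
    config = {}
    paginator = ssm.get_paginator('get_parameters_by_path')
    for page in paginator.paginate(Path=path, Recursive=True, WithDecryption=True):
        for parameter in page['Parameters']:
            key = parameter['Name'].split('/')[-1]   # '/my-app-dev/DB_HOST' -> 'DB_HOST'
            config[key] = json.loads(parameter['Value'])
    return config

def handler(event, context):
    config = get_config()
    # real work goes here; config['DB_HOST'] etc are now available
    return 'OK'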

If you want to avoid loading your config each time your Lambda function is called, you can store the results in a global variable. This leverages the fact that Amazon doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.
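
The cached variant is a small change to the handler sketched above:

CONFIG = None

def handler(event, context):
    global CONFIG
    if CONFIG is None:
        CONFIG = get_config()   # only hits Parameter Store on a cold start
    # real work goes here
    return 'OK'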

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:
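
(A sketch, assuming the /my-app-dev/ hierarchy and JSON-encoded values from earlier; naive word-splitting means it only suits values without spaces.)

export $(aws ssm get-parameters-by-path --path "/my-app-dev/" --with-decryption \
    | jq -r '.Parameters[] | (.Name | split("/") | last) + "=" + (.Value | fromjson | tostring)')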

Make sure you have jq and the AWS CLI installed and configured.

Conclusion

Amazon's Systems Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager, you don't have to pay for them. For most users Parameter Store will be adequate.

August 23, 2018

Julien GoodwinCustom output pods for the Stanford Research CG635 Clock Generator

As part of my previously mentioned side project, the ability to replace crystal oscillators in a circuit with a higher quality frequency reference is really handy, letting me eliminate a bunch of uncertainty from some test setups.

A simple function generator is the classic way to handle this, although if you need square wave output it quickly gets hard to find options, with arbitrary waveform generators (essentially just DACs) being the common choice. If you can get away with just sine wave output, an RF synthesizer is the other main option.

While researching these I discovered the CG635 Clock Generator from Stanford Research, and some time later picked one of these up used.

As well as being a nice square wave generator at arbitrary voltages these also have another set of outputs on the rear of the unit on an 8p8c (RJ45) connector, in both RS422 (for lower frequencies) and LVDS (full range) formats, as well as some power rails to allow a variety of less common output formats.

All I needed was 1.8v LVCMOS output, and could get that from the front panel output, but I'd then need a coax tail on my boards, as well as potentially running into voltage rail issues so I wanted to use the pod output instead. Unfortunately none of the pods available from Stanford Research do LVCMOS output, so I'd have to make my own, which I did.

The key chip in my custom pod is the TI SN65LVDS4, a 1.8v capable single channel LVDS receiver that operates at the frequencies I need. The only downside is that this chip is only available in a single form factor, a 1.5mm x 2mm 10 pin UQFN, which is far too small to hand solder with an iron. The rest of the circuit is just some LED indicators to signal status.


Here's a rendering of the board from KiCad.

Normally "not hand solderable" for me has meant getting the board assembled, however my normal assembly house doesn't offer custom PCB finishes, and I wanted these to have white solder mask with black silkscreen as a nice UX when I go to use them, so instead I decided to try my hand at skillet reflow as it's a nice option given the space I've got in my tiny apartment (the classic tutorial on this from SparkFun is a good read if you're interested). Instead of just a simple plate used for cooking you can now buy hot plates with what are essentially just soldering iron temperature controllers, sold as pre-heaters making it easier to come close to a normal soldering profile.

Sadly, actually acquiring the hot plate turned into a bit of a mess: the first one I ordered in May never turned up, and it wasn't until mid-July that one arrived from a different supplier.

Because of the aforementioned lack of space, instead of using stencils I simply hand-applied (leaded) paste, without even an assist tool (which I probably will acquire for next time), then hand-mounted the components, and dropped them on the plate to reflow. I had one resistor turn 90 degrees, and a few bridges from excessive paste, but for a first attempt I was really happy.


Here's a photo of the first two just after being taken off the hot plate.

Once the reflow was complete it was time to start testing, and this was where I ran into my biggest problems.

The two big problems were with the power supply I was using, and with my oscilloscope.

The power supply (a Keithley 228 Voltage/Current source) is from the 80's (Keithley's "BROWN" era), and while it has nice specs, it doesn't have the most obvious UI. Somehow I'd set it to limit at 0mA output current, and if you're not looking at the segment lights it's easy to miss. At work I have an EEZ H24005, which also resets the current limit to zero on clear, however it's much more obvious when limiting, and a power supply with that level of UX is now on my "to buy" list.

The issues with my scope were much simpler. Currently I only have an old Rigol DS1052E scope, and while it works fine it is a bit of a pain to use, and ultimately I made a very simple mistake while testing. I was feeding in a trigger signal direct from the CG635's front outputs, and couldn't figure out why the generator was putting out such a high voltage (implausibly so). To cut the story short, I'd simply forgotten that the scope was set for use with 10x probes, and once I realised that, everything made rather more sense. An oscilloscope with auto-detection for 10x probes, as well as a bunch of other features I want in a scope (a much bigger screen for one), has now been ordered, but won't arrive for a while yet.

Ultimately the boards work fine, though until the new scope arrives I can't determine their signal quality. At least they're ready for when I'll need them, which is great for flow.

August 01, 2013

Tim ConnorsNo trains for the corporatocracy

Sigh, look, I know we don't actually live in a democracy (but a corporatocracy instead), and I should never expect the relevant ministers to care about my meek little protestations otherwise, but I keep writing these letters to ministers for transport anyway, under the vague hope that it might remind them that they're ministers for transport, and not just roads.


Dear Transport Minister, Terry Mulder,

I encourage you and your fellow ministers to read this article
("Tracking the cost", The Age, June 13 2009) from 2009, back when the
Liberals claimed to have a very different attitude, and when
circumstances seemed to mirror the current time:
http://www.theage.com.au/national/tracking-the-cost-20090612-c67m.html

The cost of building the first extension to the Melbourne
public transport system in 80 years eventually blew out from $8M to
$500M over the short life of the South Morang project, despite it
being a much smaller project than the entire rail lines built more
cheaply by cities such as Perth in recent years.

The increased cost is explained away as a safety requirement - it
being so important to now start building grade separated lines rather
than level crossings regardless of circumstances. Perceived safety
trumps real safety (I'd much rather be in a train than suffer from one
of the 300 Victorian deaths on the roads each year), but more sinister
is that because of this inflated expense, we'll probably never see
another rail line like this built at all in Melbourne (although we'll
build at public expense a wonderful road tunnel that no-one but
Lindsay Fox will use, at more than 10 times the cost).

I suspect the real reason for grade separation is not safety, but to
cause less inconvenience to car drivers stuck for 30 seconds at these
minor crossings. Since the delays at level crossings are a roads
problem, and collisions of errant motorists with trains at level
crossings is a roads problem, and the South Morang railway reservation
existed far before any of the roads were put in place, I'm wondering
whether you can answer why the blowout in costs of construction of
train lines comes out of the public transport budget, and not at the
expense of what causes these problems in the first place - the roads?
These train lines become harder to build because of an artificial cost
inflation caused by something that will be less of a problem if only
we could build more rail lines and actually improve the Melbourne
public transport system and make it attractive to use, for once (we've
been waiting for 80 years).


Yours sincerely,


And a little while later, the reply!

July 01, 2013

Tim ConnorsYarra trail pontoon closures

I do have to admit, I had some fun writing this one:

Dear Transport Minister, Terry Mulder (Denis Napthine, Local MP Ted Baillieu, Ryan Smith MP responsible for Parks Victoria, Parks Victoria itself, and Bicycle Victoria CCed),

I am writing about the sudden closure of the Main Yarra bicycle trail around Punt Road. The floating sections of the trail have been closed for the foreseeable future because some over-zealous lawyer at Parks Victoria has decided that careless riders might injure themselves on the rare occasion when the pontoon is both icy and resting on the bottom of the Yarra at very low tides, sloping sideways at a minor angle. The trail has been closed before Parks Victoria have even planned how they're going to rectify the problem with the pontoons. Instead, the lawyers have forced riders onto parallel streets such as Swan St (which I took tonight in the rain, negotiating the thin strip between parked cars far enough from their doors being flung out illegally by careless drivers, and the wet tram tracks beside them). Obviously, causing riders to take these detours will be very much less safe than just keeping the trail open until a plan is developed, but I can see why Parks Victoria would want to shift the legal burden away from them.

I have no faith that the pontoon will be fixed in the foreseeable future without your intervention, because of past history -- that trail has been partially closed for about 18 months out of the past 3 years due to the very important works on the freeway above (keeping the economy going, as they say, by digging ditches and filling them immediately back up again).

Since we're already wasting $15B on an east-west freeway tunnel that will do absolutely nothing to alleviate traffic congestion because the outbound (Easterly direction) freeway is already at capacity in the afternoon without the extra induced traffic this project will add, I was wondering if you could spare a few million to duplicate the only easterly bicycle trail we have, so that these sorts of incidents don't reoccur and have so much impact on riders in the future.

I do hope that this trail will be fixed in a timely fashion before I and the 3000-4000 other cyclists who currently use the trail every day resort to riding through any of your freeway tunnels.

Yours sincerely,

Me

April 14, 2013

Tim Connors

Oh well, if The Age aren't going to publish my Thatcher rant, I will:

Jan White (Letters, 11 Apr) is heavily misguided if she believes that Thatcher was one of Britain's greatest leaders. For whom? By any metric 70% of Brits cared about, she was one of the worst. Any harmony, strength of character and respect Brits may be missing now would be due to her having nearly destroyed everything about British society with her Thatchernomics. Her funeral should be privatised and definitely not funded by the state as it is going to be. Instead, it could be funded by the long queue of people who want to dance on her grave.

March 21, 2013

Tim ConnorsRagin' on the road

Since The Age didn't publish my letter, my 3 readers ought to see it anyway:


Reynah Tang of the Law Institute of Victoria says that road rage offences shouldn't necessarily lead to loss of licence ("Offenders risk losing their licence", The Age, Mar 21). He misses the point -- a vehicle is a weapon. Road ragers demonstrably do not have enough self control to drive. They have already lost their temper when in control of such a weapon, so they must never be given a licence to use that weapon again (the weapon should also be forfeited). The same is presumably true of gun murderers after their initial jail time (which road ragers are rarely given). RACV's Brian Negus also doesn't appear to realise that a driving licence is a privilege, not an automatic right. You can still have all your necessary mobility without your car - it's not a human rights issue.


It was less than 200 words even, dammit! But because the editor didn't check the basic arithmetic in a previous day's letter, they had to publish someone's correction.

November 18, 2012

Ben McGinnesFixed it

I've fixed the horrible errors that were sending my tweets here; it only took a few hours.

To do that I've had to disable cross-posting and it looks like it won't even work manually, so my updates will likely only occur on my own domain.

Details of the changes are here. They include better response times for my domain and no more Twitter posts on the main page, which should please those of you who hate that. Apparently that's a lot of people, but since I hate being inundated with FarceBook's crap I guess it evens out.

The syndicated feed for my site around here somewhere will get everything, but there's only one subscriber to that (last time I checked) and she's smart enough to decide how she wants to deal with that.

Ben McGinnesTweet Sometimes I amaze even myself; I remembered the pa…

Sometimes I amaze even myself; I remembered the passphrases to old PGP keys I thought had been lost to time. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet These are the same keys I referred to in the PPAU…

These are the same keys I referred to in the PPAU #NatSecInquiry submission as being able to be used against me. #crypto

Originally published at Organised Adversary. Please leave any comments there.

Ben McGinnesTweet Now to give them their last hurrah: sign my curren…

Now to give them their last hurrah: sign my current key with them and then revoke them! #crypto

Originally published at Organised Adversary. Please leave any comments there.

October 26, 2011

Donna Benjaminheritage and hysterics

Originally published at KatteKrab. Please leave any comments there.

This gorgeous photo of The Queen in Melbourne on the Royal Tram made me smile this morning.

I've long been a proponent of an Australian Republic - but the populist hysteria of politicians, this photo, and the Kingdom of the Netherlands are actually making me rethink that position.

At least for today.  Long may she reign over us.

"Queen Elizabeth II smiles as she rides on the royal tram down St Kilda Road"
Photo from Getty Images published on theage.com.au

October 02, 2011

Donna BenjaminSticks and Stones and Speech

Originally published at KatteKrab. Please leave any comments there.

THE law does treat race differently: it is not unlawful to publish an article that insults, offends, humiliates or intimidates old people, for instance, or women, or disabled people. Professor Joseph, director of the Castan Centre for Human Rights Law at Monash University, said in principle ''humiliate and intimidate'' could be extended to other anti-discrimination laws. But historically, racial and religious discrimination is treated more seriously because of the perceived potential for greater public order problems and violence.

Peter Munro, The Age, 2 Oct 2011

Ahaaa. Now I get it! We've been doing it wrong. 

Racial vilification is against the law because it might be more likely to lead to violence than vilifying women, the elderly or the disabled.

Interesting debates and articles about free speech and discrimination are bobbing up and down in the flotsam and jetsam of the Bolt decision. Much of it seems to hinge on some kind of legal see-saw around notions of a bad law about bad words.

I've always been a proponent of the sticks and stones philosophy.  For those not familiar, it's the principle behind a children's nursery rhyme.

Sticks and Stones may break my bones
But words will never hurt me

But I'm increasingly disturbed by the hateful culture of online comment.  I am a very strong proponent of the human right to free expression, and abhor censorship, but I'm seriously sick of "My right to free speech" being used as the ultimate excuse for people using words to denigrate, humiliate, intimidate, belittle and attack others, particularly women.

We should defend a right to free speech, but condemn hate speech whenever and wherever we see it. Maybe we actually need to get violent to make this stop? Surely not.

September 20, 2011

Donna BenjaminQantas Pilots

Originally published at KatteKrab. Please leave any comments there.

The Qantas Pilot Safety culture is something worth fighting to protect. I read Malcolm Gladwell's Outliers whilst on board a Qantas flight recently. While Qantas itself isn't mentioned in the book, a footnote listed Australia as having the 2nd lowest Pilot Power-Distance Index (PDI) in the world. New Zealand had the lowest. The entire chapter "The Ethnic Theory of Plane Crashes" is the strongest argument I've seen which explains the Qantas safety record. The experience of pilots and relationships amongst the entire air crew is a crucial differentiating factor. Other airlines work hard to develop this culture, often needing to work against their own cultural patterns to achieve it. At Qantas, and likely at other Australian airlines too, this culture is the norm.

I want Australian Qantas Pilots flying Qantas planes. I'd like an Australian in charge too.

If you too support Qantas Pilots - go to their website, sign the petition.

Do your own reading.

G.R. Braithwaite, R.E. Caves, J.P.E. Faulkner, Australian aviation safety — observations from the ‘lucky’ country, Journal of Air Transport Management, Volume 4, Issue 1, January 1998: 55-62.

Anthony Dennis, What it takes to become a Qantas pilot, news.com.au, 8 September 2011.

Ashleigh Merritt, Culture in the Cockpit: Do Hofstede’s Dimensions Replicate? Journal of Cross-Cultural Psychology, May 2000, 31: 283-30.

Matt Phillips, Malcolm Gladwell on Culture, Cockpit Communication and Plane Crashes, WSJ Blogs, 4 December 2008.

 

September 18, 2011

Donna BenjaminRegistering for LCA2012

Originally published at KatteKrab. Please leave any comments there.

linux.conf.au ballarat 2012

I am right now, at this very minute, registering for linux.conf.au in Ballarat in January. Creating my planet feed. Yep. Uhuh.

I reckon the "book a bus" feature of rego is pretty damn cool.  I won't be using it, because I'll be driving up from Melbourne. Serious kudos to the Ballarat team. Also nice to see they'll add busses from Avalon airport as well as from Tullamarine airport if there's demand.

Too cool.