Planet Russell


Planet Linux Australia: Chris Smart: Building and Booting Upstream Linux and U-Boot for Raspberry Pi 2/3 ARM Boards

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria. Previously I looked at the Orange Pi One, now I’m looking at the Raspberry Pi 2 (which is compatible with the 3).

The most important factor for me is that the device must be supported in upstream Linux (preferably stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Raspberry Pi needs little introduction. It’s a small ARM device, created for education, that’s taken the world by storm and is used in lots of projects.

Raspberry Pi 2, powered by USB with 3v UART connected

The Raspberry Pi actually has native support for booting a kernel, you don’t have to use U-Boot. However, one of the neat things about U-Boot is that it can provide netboot capabilities, so that you can boot your device from images across the network (we’re just going to use it to boot a kernel and initramfs, however).

One of the other interesting things about the Raspberry Pi is that there are lots of ways to tweak the device using a config.txt file.
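For example, a config.txt on the boot partition might contain entries like these (illustrative values only; consult the config.txt documentation for the full list of options):

gpu_mem=16            # give the GPU the minimum amount of RAM
enable_uart=1         # keep the serial console usable (needed on the Pi 3)
hdmi_force_hotplug=1  # pretend a monitor is attached even if none is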

The Raspberry Pi 3 has a 64bit CPU; however, it is probably best run in 32bit mode (as a Raspberry Pi 2), as 64bit userland is not particularly advanced in the ARM world yet.

Fedora 25 will finally support Raspberry Pi 2 and 3 (although not all peripherals will be supported right away).

Connecting UART

The UART on the Raspberry Pi uses the GND, TX and RX connections, which are on the GPIO pins (see above). Connect the corresponding wires of a 3.3V UART cable to these pins, then plug the cable into a USB port on your machine.

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

The card needs to have an msdos partition table with a smallish boot partition (formatted FAT32). The binary U-Boot file will sit there, called kernel.img, along with some other bootloader files. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos \
mkpart primary fat32 1M 30M \
mkpart primary ext4 30M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.vfat /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Next, mount the boot partition at /mnt; this is where we will copy everything to.
sudo mount /dev/sdx1 /mnt

Leave your SD card plugged in, we will need to copy the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone the upstream U-Boot Git tree. Note that I'm checking out the release directly (-b v2016.09.01), but you could leave this off to get master, or change it to a different tag if you want.
cd "${HOME}"
git clone --depth 1 -b v2016.09.01 git://
cd u-boot

There are default configs for both Raspberry Pi 2 and 3, so select the one you want.
# Run this for the Pi 2
CROSS_COMPILE=arm-linux-gnu- make rpi_2_defconfig
# Run this for the Pi 3
CROSS_COMPILE=arm-linux-gnu- make rpi_3_defconfig

Now, compile it.
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Now, copy the u-boot.bin file onto the SD card, and call it kernel.img (this is what the bootloader looks for).

sudo cp -iv u-boot.bin /mnt/kernel.img

Proprietary bootloader files

Sadly, the Raspberry Pi cannot boot entirely on open source software; we also need to get the proprietary firmware files from Broadcom and place them on the SD card.

Clone the Raspberry Pi Foundation’s GitHub repository.
cd "${HOME}"
git clone --depth 1

Copy the minimum set of required files to the SD card.
sudo cp -iv firmware/boot/{bootcode.bin,fixup.dat,start.elf} /mnt/

Finally, unmount the SD card.
sync && sudo umount /mnt

OK, now our bootloader should be ready to go.

Testing our bootloader

Now we can remove the SD card from the computer and plug it into the powered off Raspberry Pi to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Pi. Note that the device will try to netboot by default, so you’ll need to hit the enter key when you see U-Boot’s autoboot countdown (the line that says “Hit any key to stop autoboot”).

(Or you can just repeatedly hit enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

Raspberry Pi running U-Boot

Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Raspberry Pi.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone
cd custom-initramfs
./ --arch arm --dir "${PWD}" --tty ttyAMA0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though; we need to convert this to the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.

Upstream Linux Kernel

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git:// linux

Now go into the linux directory.
cd linux

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the Raspberry Pi (Broadcom BCM2835/BCM2836) boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make bcm2835_defconfig

If you want, you could modify the kernel config here, but it’s not necessary.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Copy the kernel and device tree file onto the boot partition (which should still be mounted at /mnt).
sudo cp -iv arch/arm/boot/zImage /mnt/
sudo cp -iv arch/arm/boot/dts/bcm2836-rpi-2-b.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << EOF
fatload mmc 0 \${kernel_addr_r} zImage
fatload mmc 0 \${fdt_addr_r} bcm2836-rpi-2-b.dtb
fatload mmc 0 \${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyAMA0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz \${kernel_addr_r} \${ramdisk_addr_r} \${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Raspberry Pi and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt

That’s it! You’ve built your own Linux system for the Raspberry Pi!


Log in as root and give the Ethernet device (eth0) an IP address on your network.

Now test it with a tool, like ping, and see if your network is working.

Here’s an example:

Networking on Raspberry Pi
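In text form, the commands in that screenshot amount to something like the following (a sketch using the BusyBox tools in the initramfs; the interface name and addresses are assumptions, so adjust them for your network):

ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up  # bring eth0 up with a static address
ping -c 3 192.168.1.1                                # check that the gateway answers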

Memory usage

There is clearly lots more you can do with this device…

Raspberry Pi Memory Usage


Cryptogram: How Powell's and Podesta's E-mail Accounts Were Hacked

Worse Than Failure: Deep Fried Offshore

Stephen worked for an Initech that sold specialized hardware: high-performance, high-throughput systems for complex data processing tasks in the enterprise world, sold at exorbitant enterprise prices. Once deployed, these systems were configured via a management app that exposed an HTTP interface, just like any consumer-grade router or Wi-Fi access point that is configurable through a website.

Stephen worked with a diverse team of American engineers who were finishing up the management application for a new model. The product was basically done but needed a little bit of testing and polish before the official release. They expected several months of post-release work and then the project would go into maintenance mode.

Then disaster struck. A pointy-haired boss somewhere up in a fuzzy area of the organization chart simply labeled “Here be VPs” discovered the large salary difference between American engineers and off-shore workers, and decided American engineers were far too expensive for software “maintenance”. The company decided to lay off 300 software engineers and hire 300 fresh-out-of-college replacements in a foreign country with much lower labor rates. The announcement was overshadowed by the fanfare of the product’s release and proudly billed as a major “win” for the company.

Stephen was lucky enough to stay on and shift to other projects, the first of which was to assist with the transition by documenting everything he could for the new team. Which he did, in hundreds of pages of explicit detail, explaining how to use the source control repository for the project, execute and log unit tests, and who to contact when they had questions regarding the hardware itself. After that, he devoted himself to other projects. Months turned into years and Stephen assumed by the silence that the handover was successful and he’d never see the management app again.

Of course that isn’t what happened.

There’s an old joke on the Internet called “How to shoot yourself in the foot” that lampoons the complicated process of shooting yourself in the foot in various programming languages. Here is one such entry:

370 JCL: You send your foot down to MIS and include a 300-page document explaining exactly how you want it to be shot. Three years later, your foot returns deep-fried.

Three years after the management app was passed off to the new offshore engineer team, a new panic was boiling through Initech. The management application was as slow as molasses, buggier than flies in soup, with a user interface that was battered and deep-fried in a nonsense language that vaguely resembled English. It also crashed a lot, each time requiring a power reset to bring the system back up.

Initech had simmered along by only shipping the original version, but now they had this expensive, powerful product with three years’ worth of hardware and firmware updates that the management app could not configure and thus were not available to customers. Several important customers canceled their support contracts rather than pay for half-baked updates that rarely worked, and many prospective customers passed them right by when basic features printed in the product brochure were “unavailable for demo”. They were falling behind in the industry.

It was bad enough that management finally decided to do something.

That something was to toss the offshore team and see how many of their old laid-off engineers they could scoop up, which was, not surprisingly, few. As one of the few original engineers who still worked there, Stephen was whisked away from his current projects to assist.

With a bite-sized portion of the old team re-assembled, they set to work to unscramble the situation. Stephen noticed that the last source control check-in was from three years ago. Upon further inspection, he realized that not one of the offshore engineers had ever committed.

He called up the offshore office to see if they had their own repository. The last few remaining offshore workers didn’t know, and when told to search every system they had, they replied they couldn’t because their local manager had sold all of their computer equipment on an auction site upon learning of their impending layoffs.

He then contacted the hardware team in hopes that maybe somebody had a clone of the offshore team’s repository, but the hardware guys claimed the software team was laid off three years ago and they were not aware of an offshore replacement team.

With no current source code, and an old repository that hadn’t been committed to in three years, Stephen and the team were left with no choice but to pick up exactly where they left off three years ago…


Planet Debian: Russ Allbery: DocKnot 1.00

I'm a bit of a perfectionist about package documentation, and I'm also a huge fan of consistency. As I've slowly accumulated more open source software packages (alas, fewer new ones these days since I have less day-job time to work on them), I've developed a standard format for package documentation files, particularly the README in the package and the web pages I publish. I've iterated on these, tweaking them and messing with them, trying to incorporate all my accumulated wisdom about what information people need.

Unfortunately, this gets very tedious, since I have dozens of packages at this point and rarely want to go through the effort of changing every one of them every time I come up with a better way of phrasing something or change some aspect of my standard package build configuration. I also have the same information duplicated in multiple locations (the README and the web page for the package). And there's a lot of boilerplate that's common for all of my packages that I don't want to keep copying (or changing when I do things like change all URLs to HTTPS).

About three years ago, I started seriously brainstorming ways of automating this process. I made a start on it during one self-directed day at my old job at Stanford, but only got it far enough to generate a few basic files. Since then, I keep thinking about it, keep wishing I had it, and keep not putting the work into finishing it.

During this vacation, after a week and a half of relaxing and reading, I finally felt like doing a larger development project and finally started working on this for long enough to build up some momentum. Two days later, and this is finally ready for an initial release.

DocKnot uses metadata (which I'm putting in docs/metadata) that's mostly JSON plus some documentation fragments and generates README, the web page for the package (in thread, the macro language I use for all my web pages), and (the other thing I've wanted to do and didn't want to tackle without this tool), a Markdown version of README that will look nice on GitHub.

The templates that come with the package are all rather specific to me, particularly the thread template which would be unusable by anyone else. I have no idea if anyone else will want to use this package (and right now the metadata format is entirely undocumented). But since it's a shame to not release things as free software, and since I suspect I may need to upload it to Debian since, technically, this tool is required to "build" the README file distributed with my packages, here it is. I've also uploaded it to CPAN (it's my first experiment with the App::* namespace for things that aren't really meant to be used as a standalone Perl module).

You can get the latest version from the DocKnot distribution page (which is indeed generated with DocKnot). Also now generated with DocKnot are the rra-c-util and C TAP Harness distribution pages. Let me know if you see anything weird; there are doubtless still a few bugs.


Cryptogram: OPM Attack

Good long article on the 2015 attack against the US Office of Personnel Management.

Google Adsense: Do more with Ads on AMP

Cross-posted from the Accelerated Mobile Pages (AMP) Blog

Over a year has passed since the AMP Project first launched with the vision of making mobile web experiences faster and better for everybody. From the very beginning, we’ve maintained that the AMP project would support publishers’ existing business models while creating new monetization opportunities. With regards to advertising, this meant giving publishers the flexibility to use the current technology and systems they’re used to, and evolving user-first mobile web initiatives like AMP for Ads (A4A).

With a growing number of publishers embracing the speed of AMP, today we’re addressing some of the ways in which we’re helping you do more with ads on AMP.

Serve ads from more than 70 ad tech providers

Keeping with the open source nature of the project, more than 70 advertising technology providers have already integrated with AMP. And that list is only growing. Existing tags that are delivered via a supported ad server also work in AMP. So, you can serve ads from both directly-sold campaigns as well as third-party ad networks and exchanges, so long as they have support for AMP.

Keep 100% of the ad revenue

AMP is an open source project. It does not take a revenue share. AMP is not an advertising service provider or intermediary, and publishers can monetize AMP pages the same way you monetize HTML pages, keeping 100% of the revenue you earn based on negotiated rates with ad providers.

Choose the advertising experience on your pages

You can choose to serve any number of ads per page, in the locations that work best for your content, including the first viewport. Just remember that regular ads in AMP load after the primary content. So, unless you’re loading the lightning fast A4A ads, we recommend placing the first ad below the first viewport to optimize for viewability and user experience.

Take advantage of video ad support

AMP currently supports 13 different video players, ranging from Brightcove to Teads, all of which can serve video ads. If you want to use a video player that is not currently supported in AMP, place the video player inside amp-iframe. Learn more.

Differentiate yourself with rich and custom ad formats

AMP accommodates a large variety of ad formats by default, ranging from publisher custom ad units to IAB standard outstream video and in-feed native ads. We value publisher choice and support efforts to create proprietary ad formats. For example, with responsive layouts in AMP, you can offer advertisers custom ads that can dynamically span the entire width of the mobile device. Learn more about how you can adapt your ads strategy for AMP.

Maximize revenue with interchangeable ad slots

In September 2016, both YieldMo and DoubleClick announced support for multi-size ad requests on AMP pages. With this launch, you can optimize yield by allowing multiple ad creative sizes to compete for each ad slot, capturing the most advertiser demand possible on AMP pages while still protecting the user’s experience.

Plan ahead with a view into AMP’s roadmap

Transparency is important to the success of any open source project and is a key value for AMP. Accordingly, we started publishing the AMP roadmap publicly nearly 6 months ago, including milestones for ads. These roadmaps are accompanied with bi-quarterly status updates and you can also see all AMP releases here.

Over 700,000 domains have published AMP pages and a good many are monetizing them with ads. Early studies suggest that ads on AMP are more viewable and engaging than ads on non-AMP mobile pages. That’s because with AMP, you don’t have to choose between good user experiences and monetization opportunities. When balanced and optimized, you can have both.

Reach out -- we’re eager to hear your suggestions and feedback to make sure that AMP pays off for everyone.

Posted by Vamsee Jasti, Product Manager, AMP Project

Sociological Images: Atheists Still America’s Most Disliked Group, Now Along with Muslims

Originally posted at The Society Pages’ Discoveries.

Ten years ago, sociologist Penny Edgell and her colleagues published a surprising finding: atheists were the most disliked minority group in the United States. Americans said atheists were less likely to share their vision of American society than were Muslims, gays and lesbians, African Americans, and a host of other groups — and that they wouldn’t like their child marrying one.

But that was a decade ago. Today, fewer Americans report a religious affiliation and, in the intervening years, many non-religious groups have made efforts to improve their public image.

So, have things gotten better for atheists? The authors recently published the findings from a ten-year follow up to answer these questions, and found that not much has changed. Atheists are now statistically tied with Muslims for the most disliked group in the United States. Despite an increased awareness of atheists and other non-religious people over the last decade, Americans still distance themselves from the non-religious.

Flickr photo from David Riggs.

This time around, the authors asked some additional questions to get at why so many people dislike atheists. They asked if respondents think atheists are immoral, criminal, or elitist, and whether or not the increase in non-religious people is a good or bad thing. They found that one of the strongest predictors of disliking atheists is assuming that they are immoral. People are less likely to think atheists are criminals and those who think they are elitist actually see it as a good thing. However, 40% of Americans also say that the increase of people with “no religion” is a bad thing.

These findings highlight the ways that many people in the United States still use religion as a sign of morality, of who is a good citizen, a good neighbor, and a good American. And the fact that Muslims are just as disliked as atheists shows that it is not only the non-religious that get cast as different and bad. Religion can be a basis for both inclusion and exclusion, and the authors conclude that it is important to continue interrogating when and why it excludes.

Amber Joy Powell is a PhD student in sociology at the University of Minnesota. Her current research interests include crime, punishment, victimization, and the intersectionalities of race and gender. She is currently working on an ethnographic study involving the criminal justice response to child sexual assault victims.


Google Adsense: Did You Receive A Policy Violation Warning?

Have you received an email warning that you’ve violated the AdSense policies? These warnings are usually issued in instances of mild violations that we believe can be fixed quickly.

In addition to an email, you’ll receive a notification in your AdSense account under the “Status” tab. Both the email and notification will explain where your violation occurred and how to fix it and by clicking the link provided, you’ll be sent to the page where the violation has occurred. To resolve the issue, you can either fix the content that violates AdSense policies across your site or remove the AdSense code.

Remember, your site must be compliant in order to participate in the AdSense program. When you’ve made all the necessary changes to your site, check “Resolved” on the site level violation notification in the “Status” tab of your AdSense account. You don’t need to notify us when you’ve fixed the violation; however, you do need to resolve it in a timely manner. 

There are cases where ads stop appearing on your site altogether. This can happen when a publisher fails to respond to policy violation warnings, receives multiple warnings, or displays egregious violations across their site(s). Violations are categorized as egregious when we believe they can cause significant harm to advertisers, Google, users, or the overall ads ecosystem. 

In these cases you’ll receive an email and a notification in your AdSense account under the “Status” tab to notify you of this change. A link will also be included to show you where the violation appears. You can resolve it by either removing the content in question or by removing the AdSense code from the affected page. It’s important to note that a very small percentage of sites have their ads disabled after receiving a policy violation warning. 

Once you’ve corrected the violations across your entire site, you can submit an appeal from the “Status” tab in your AdSense account or by using the AdSense policy troubleshooter. Please bear in mind that we can only review appeals from sites that have AdSense code enabled.

Stay tuned for some best practices to help you avoid a policy violation.

Posted by: Anastasia Almiasheva from the AdSense team

Cryptogram: Malicious AI

It's not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream -- and the future isn't too far off.

Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn't to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

Planet Debian: Christoph Egger: Installing a python systemd service?

As web search engines and IRC seem to be of no help, maybe someone here has a helpful idea. I have some service written in python that comes with a .service file for systemd. I now want to build & install a working service file from the software's setup.py. I can override the build/build_py commands of setuptools; however, that way I still lack knowledge wrt. the bindir/prefix where my service script will be installed.


Turns out, if you override the install command (not the install_data!), you will have self.root and self.install_scripts (and lots of other self.install_*). As a result, you can read the template and write the desired output file after calling super's run method. The fix was inspired by GateOne (which, however, doesn't get the --root parameter right; you need to strip self.root from the beginning of the path to actually make that work as intended).

import os
from setuptools.command.install import install

class myinstall(install):
    # .service templates shipped with the source; the path below is a
    # placeholder -- adjust it to wherever your template actually lives.
    _servicefiles = [
        'myproject/myproject.service',
    ]

    def run(self):
        install.run(self)

        if not self.dry_run:
            bindir = self.install_scripts
            if bindir.startswith(self.root):
                bindir = bindir[len(self.root):]

            systemddir = os.path.join(self.root, "lib/systemd/system")

            for servicefile in self._servicefiles:
                service = os.path.split(servicefile)[1]
                self.announce("Creating %s" % os.path.join(systemddir, service),
                              level=2)  # 2 == distutils.log.INFO
                with open(servicefile) as servicefd:
                    servicedata = servicefd.read()

                with open(os.path.join(systemddir, service), "w") as servicefd:
                    servicefd.write(servicedata.replace("%BINDIR%", bindir))
Comments, suggestions and improvements, of course, welcome!

Worse Than Failure: CodeSOD: data-wheel="reinvented"

In HTML5, the data-* attributes were codified, and this is a really nice thing for building applications. They are an app-defined namespace to attach any sorts of custom data to your HTML attributes. For example, a div responsible for displaying a User object might have an attribute like <div data-user-id="5123">…</div>, which allows us to better bind our DOM to our application model. They can even be used in stylesheet selectors, so I could make a styling rule for div[data-user-id].

I’m not the only one who thinks they’re a nice feature. Eric W has a co-worker who’s come up with a very… unique way of using them. First, he has the following Django template:

{% for cat in categories %}
        <a href="#" {% if shopType == cat.name|slugify %}class="navOn" {% endif %}data-type="{{ cat.name|slugify }}" data-id="{{ cat.id }}">{{ cat.name }}</a>
{% endfor %}

Which generates links like: <a href="#" data-type="housewares" data-id="54">Housewares</a>

Obviously, since the href points nowhere, there must be a JavaScript event handler. I wonder what it does…

$('.shop-nav a').unbind('click').click(function(e){
    var page = $(this).data('type');
    window.location.href = window.location.origin+'/shop/'+ page;
    return false;
});

This is one of my favorite classes of bad code- the one where a developer uses a new feature to completely reinvent an old feature, badly. This JavaScript uses the data-type attribute to hold what should actually be in the href. Instead of having real navigation, we hijack the click event to force navigation. The entire JavaScript could be replaced in favor of a link like this: <a href="/shop/housewares">Housewares</a>.

Maybe they need data-type because there’s a CSS rule that styles it?


Planet Debian: Steinar H. Gunderson: Why does software development take so long?

Nageru 1.4.0 is out (and on its way through the Debian upload process right now), so now you can do live video mixing with multichannel audio to your heart's content. I've already blogged about most of the interesting new features, so instead, I'm trying to answer a question: What took so long?

To be clear, I'm not saying 1.4.0 took more time than I really anticipated (on the contrary, I pretty much understood the scope from the beginning, and there was a reason why I didn't go for building this stuff into 1.0.0); but if you just look at the changelog from the outside, it's not immediately obvious why “multichannel audio support” should take the better part of three months of development. What I'm going to say is of course going to be obvious to most software developers, but not everyone is one, and perhaps my experiences will be illuminating.

Let's first look at some obvious things that aren't the case: First of all, development is not primarily limited by typing speed. There are about 9,000 lines of new code in 1.4.0 (depending a bit on how you count), and if it were just about typing them in, I would be done in a day or two. On a good keyboard, I can type plain text at more than 800 characters per minute—but you hardly ever write code for even a single minute at that speed. Just as when writing a novel, most time is spent thinking, not typing.

I also didn't spend a lot of time backtracking; most code I wrote actually ended up in the finished product as opposed to being thrown away. (I'm not as lucky in all of my projects.) It's pretty common to do so if you're in an exploratory phase, but in this case, I had a pretty good idea of what I wanted to do right from the start, and that plan seemed to work. This wasn't a difficult project per se; it just needed to be done (which, in a sense, just increases the mystery).

However, even if this isn't at the forefront of science in any way (most code in the world is pretty pedestrian, after all), there's still a lot of decisions to make, on several levels of abstraction. And a lot of those decisions depend on information gathering beforehand. Let's take a look at an example from late in the development cycle, namely support for using MIDI controllers instead of the mouse to control the various widgets.

I've kept a pretty meticulous TODO list; it's just a text file on my laptop, but it serves the purpose of a ghetto bugtracker. For 1.4.0, it contains 83 work items (a single-digit number is not ticked off, mostly because I decided not to do those things), which corresponds roughly 1:2 to the number of commits. So let's have a look at what the ~20 MIDI controller items went into.

First of all, to allow MIDI controllers to influence the UI, we need a way of getting to it. Since Nageru is single-platform on Linux, ALSA is the obvious choice (if not, I'd probably have to look for a library to put in-between), but seemingly, ALSA has two interfaces (raw MIDI and sequencer). Which one do you want? It sounds like raw MIDI is what we want, but actually, it's the sequencer interface (it does more of the MIDI parsing for you, and generally is friendlier).

The first question is where to start picking events from. I went the simplest path and just said I wanted all events—anything else would necessitate a UI, a command-line flag, figuring out if we wanted to distinguish between different devices with the same name (and not all devices potentially even have names), and so on. But how do you enumerate devices? (Relatively simple, thankfully.) What do you do if the user inserts a new one while Nageru is running? (Turns out there's a special device you can subscribe to that will tell you about new devices.) What if you get an error on subscription? (Just print a warning and ignore it; it's legitimate not to have access to all devices on the system. By the way, for PCM devices, all of these answers are different.)

So now we have a sequencer device, how do we get events from it? Can we do it in the main loop? Turns out it probably doesn't integrate too well with Qt, but it's easy enough to put it in a thread. The class dealing with the MIDI handling now needs locking; what mutex granularity do we want? (Experience will tell you that you nearly always just want one mutex. Two mutexes give you all sorts of headaches with ordering them, and nearly never gives any gain.) ALSA expects us to poll() a given set of descriptors for data, but on shutdown, how do you break out of that poll to tell the thread to go away? (The simplest way on Linux is using an eventfd.)
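To illustrate that last point (this is not Nageru's actual code, just a minimal sketch of the eventfd trick): you add the eventfd to the descriptor set you poll(), and any other thread can wake the poller up by writing to it.

#include <poll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>

int shutdown_fd = eventfd(0, 0);  // created once at startup

// MIDI thread: wait for ALSA data or a shutdown request. alsa_fds has
// room for one extra slot, which we use for the eventfd.
bool should_quit(pollfd *alsa_fds, int num_alsa_fds)
{
        alsa_fds[num_alsa_fds] = pollfd{shutdown_fd, POLLIN, 0};
        poll(alsa_fds, num_alsa_fds + 1, /*timeout=*/-1);
        return (alsa_fds[num_alsa_fds].revents & POLLIN) != 0;
}

// Any other thread: wake the MIDI thread up so it can exit.
void request_shutdown()
{
        uint64_t one = 1;
        write(shutdown_fd, &one, sizeof(one));
}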

There's a quirk where if you get two or more MIDI messages right after each other and only read one, poll() won't trigger to alert you there are more left. Did you know that? (I didn't. I also can't find it documented. Perhaps it's a bug?) It took me some looking into sample code to find it. Oh, and ALSA uses POSIX error codes to signal errors (like “nothing more is available”), but it doesn't use errno.

OK, so you have events (like “controller 3 was set to value 47”); what do you do about them? The meaning of the controller numbers is different from device to device, and there's no open format for describing them. So I had to make a format describing the mapping; I used protobuf (I have lots of experience with it) to make a simple text-based format, but it's obviously a nightmare to set up 50+ controllers by hand in a text file, so I had to make an UI for this. My initial thought was making a grid of spinners (similar to how the input mapping dialog already worked), but then I realized that there isn't an easy way to make headlines in Qt's grid. (You can substitute a label widget for a single cell, but not for an entire row. Who knew?) So after some searching, I found out that it would be better to have a tree view (Qt Creator does this), and then you can treat that more-or-less as a table for the rows that should be editable.

Of course, guessing controller numbers is impossible even in an editor, so I wanted it to respond to MIDI events. This means the editor needs to take over the role as MIDI receiver from the main UI. How you do that in a thread-safe way? (Reuse the existing mutex; you don't generally want to use atomics for complicated things.) Thinking about it, shouldn't the MIDI mapper just support multiple receivers at a time? (Doubtful; you don't want your random controller fiddling during setup to actually influence the audio on a running stream. And would you use the old or the new mapping?)

And do you really need to set up every single controller for each bus, given that the mapping is pretty much guaranteed to be similar for them? Making a “guess bus” button doesn't seem too difficult, where if you have one correctly set up controller on the bus, it can guess from a neighboring bus (assuming a static offset). But what if there's conflicting information? OK; then you should disable the button. So now the enable/disable status of that button depends on which cell in your grid has the focus; how do you get at those events? (Install an event filter, or subclass the spinner.) And so on, and so on, and so on.

You could argue that most of these questions go away with experience; if you're an expert in a given API, you can answer most of these questions in a minute or two even if you haven't heard the exact question before. But you can't expect even experienced developers to be an expert in all possible libraries; if you know everything there is to know about Qt, ALSA, x264, ffmpeg, OpenGL, VA-API, libusb, microhttpd and Lua (in addition to C++11, of course), I'm sure you'd be a great fit for Nageru, but I'd wager that pretty few developers fit that bill. I've written C++ for almost 20 years now (almost ten of them professionally), and that experience certainly helps boosting productivity, but I can't say I expect a 10x reduction in my own development time at any point.

You could also argue, of course, that spending so much time on the editor is wasted, since most users will only ever see it once. But here's the point; it's not actually a lot of time. The only reason why it seems like so much is that I bothered to write two paragraphs about it; it's not a particular pain point, it just adds to the total. Also, the first impression matters a lot—if the user can't get the editor to work, they also can't get the MIDI controller to work, and is likely to just go do something else.

A common misconception is that just switching languages or using libraries will help you a lot. (Witness the never-ending stream of software that advertises “written in Foo” or “uses Bar” as if it were a feature.) For the former, note that nothing I've said so far is specific to my choice of language (C++), and I've certainly avoided a bunch of battles by making that specific choice over, say, Python. For the latter, note that most of these problems are actually related to library use—libraries are great, and they solve a bunch of problems I'm really glad I didn't have to worry about (how should each button look?), but they still give their own interaction problems. And even when you're a master of your chosen programming environment, things still take time, because you have all those decisions to make on top of your libraries.

Of course, there are cases where libraries really solve your entire problem and your code gets reduced to 100 trivial lines, but that's really only when you're solving a problem that's been solved a million times before. Congrats on making that blog in Rails; I'm sure you're advancing the world. (To make things worse, usually this breaks down when you want to stray ever so slightly from what was intended by the library or framework author. What seems like a perfect match can suddenly become a development trap where you spend more of your time trying to become an expert in working around the given library than actually doing any development.)

The entire thing reminds me of the famous essay No Silver Bullet by Fred Brooks, but perhaps even more so, this quote from John Carmack's .plan has stuck with me (incidentally about mobile game development in 2006, but the basic story still rings true):

To some degree this is already the case on high end BREW phones today. I have a pretty clear idea what a maxed out software renderer would look like for that class of phones, and it wouldn't be the PlayStation-esq 3D graphics that seems to be the standard direction. When I was doing the graphics engine upgrades for BREW, I started along those lines, but after putting in a couple days at it I realized that I just couldn't afford to spend the time to finish the work. "A clear vision" doesn't mean I can necessarily implement it in a very small integral number of days.

In a sense, programming is all about what your program should do in the first place. The “how” question is just the “what”, moved down the chain of abstractions until it ends up where a computer can understand it, and at that point, the three words “multichannel audio support” have become those 9,000 lines that describe in perfect detail what's going on.

Planet Debian: Daniel Pocock: FOSDEM 2017 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet
  • XMPP: Planet Jabber
  • SIP: Planet SIP
  • SIP (Español): Planet SIP-es

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, please contact us directly, and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Planet Debian: Joachim Breitner: Showcasing Applicative

My plan for this week’s lecture of the CIS 194 Haskell course at the University of Pennsylvania is to dwell a bit on the concept of Functor, Applicative and Monad, and to highlight the value of the Applicative abstraction.

I quite like the example that I came up with, so I want to share it here. In the interest of long-term archival and stand-alone presentation, I include all the material in this post.1


In case you want to follow along, start with these imports:

import Data.Char
import Data.Maybe
import Data.List

import System.Environment
import System.IO
import System.Exit

The parser

The starting point for this exercise is a fairly standard parser-combinator monad, which happens to be the result of the students’ homework from last week:

newtype Parser a = P (String -> Maybe (a, String))

runParser :: Parser t -> String -> Maybe (t, String)
runParser (P p) = p

parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _ -> Nothing -- handles both no result and leftover input

noParserP :: Parser a
noParserP = P (\_ -> Nothing)

pureParserP :: a -> Parser a
pureParserP x = P (\input -> Just (x,input))

instance Functor Parser where
    fmap f p = P $ \input -> do
	(x, rest) <- runParser p input
	return (f x, rest)

instance Applicative Parser where
    pure = pureParserP
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)

instance Monad Parser where
    return = pure
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1

anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    []       -> Nothing

charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return ()
               else noParserP

anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c'
               else noParserP

letterOrDigitP :: Parser Char
letterOrDigitP = do
    c <- anyCharP
    if isAlphaNum c then return c else noParserP

orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r -> Just r
    Nothing -> runParser p2 input

manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []

many1P :: Parser a -> Parser [a]
many1P p = pure (:) <*> p <*> manyP p

sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> (manyP (p2 *> p1))) `orElseP` pure []

A parser using this library for, for example, CSV files could take this form:

parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = do
        charP '"'
        content <- manyP (anyCharButP '"')
        charP '"'
        return content
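For a quick sanity check, here is how it behaves in GHCi (my own check based on the definitions above, not output from the original post):

*Main> parse parseCSVP "\"ab\",\"cd\"\n\"e\"\n"
Just [["ab","cd"],["e"]]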

We want EBNF

Often when we write a parser for a file format, we might also want to have a formal specification of the format. A common form for such a specification is EBNF. This might look as follows, for a CSV file:

cell = '"', {not-quote}, '"';
line = (cell, {',', cell} | ''), newline;
csv  = {line};

It is straightforward to create a Haskell data type to represent an EBNF syntax description. Here is a simple EBNF library (data type and pretty-printer) for your convenience:

data RHS
  = Terminal String
  | NonTerminal String
  | Choice RHS RHS
  | Sequence RHS RHS
  | Optional RHS
  | Repetition RHS
  deriving (Show, Eq)

ppRHS :: RHS -> String
ppRHS = go 0
  where
    go _ (Terminal s)     = surround "'" "'" $ concatMap quote s
    go _ (NonTerminal s)  = s
    go a (Choice x1 x2)   = p a 1 $ go 1 x1 ++ " | " ++ go 1 x2
    go a (Sequence x1 x2) = p a 2 $ go 2 x1 ++ ", "  ++ go 2 x2
    go _ (Optional x)     = surround "[" "]" $ go 0 x
    go _ (Repetition x)   = surround "{" "}" $ go 0 x

    surround c1 c2 x = c1 ++ x ++ c2

    p a n | a > n     = surround "(" ")"
          | otherwise = id

    quote '\'' = "\\'"
    quote '\\' = "\\\\"
    quote c    = [c]

type Production = (String, RHS)
type BNF = [Production]

ppBNF :: BNF -> String
ppBNF = unlines . map (\(i,rhs) -> i ++ " = " ++ ppRHS rhs ++ ";")
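As a quick test of the pretty-printer (again my own GHCi check, not from the original post), the cell production from the EBNF above can be reproduced like this:

*Main> putStr $ ppBNF [("cell", Sequence (Terminal "\"") (Sequence (Repetition (NonTerminal "not-quote")) (Terminal "\"")))]
cell = '"', {not-quote}, '"';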

Code to produce EBNF

We had a good time writing combinators that create complex parsers from primitive pieces. Let us do the same for EBNF grammars. We could simply work on the RHS type directly, but we can do something more nifty: We create a data type that keeps track, via a phantom type parameter, of what Haskell type the given EBNF syntax is the specification for:

newtype Grammar a = G RHS

ppGrammar :: Grammar a -> String
ppGrammar (G rhs) = ppRHS rhs

So a value of type Grammar t is a description of the textual representation of the Haskell type t.

Here is one simple example:

anyCharG :: Grammar Char
anyCharG = G (NonTerminal "char")

Here is another one. This one does not describe any interesting Haskell type, but is useful when spelling out the special characters in the syntax described by the grammar:

charG :: Char -> Grammar ()
charG c = G (Terminal [c])

A combinator that creates new grammar from two existing grammars:

orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G rhs1) (G rhs2) = G (Choice rhs1 rhs2)

We want the convenience of our well-known type classes in order to combine these values some more:

instance Functor Grammar where
    fmap _ (G rhs) = G rhs

instance Applicative Grammar where
    pure x = G (Terminal "")
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)

Note how the Functor instance does not actually use the function. How should it? There are no values inside a Grammar!

We cannot define a Monad instance for Grammar: We would start with (G rhs1) >>= k = …, but there is simply no way of getting a value of type a that we can feed to k. So we will do without a Monad instance. This is interesting, and we will come back to that later.

Like with the parser, we can now begin to build on the primitive example to build more complicated combinators:

manyG :: Grammar a -> Grammar [a]
manyG p = (pure (:) <*> p <*> manyG p) `orElseG` pure []

many1G :: Grammar a -> Grammar [a]
many1G p = pure (:) <*> p <*> manyG p

sepByG :: Grammar a -> Grammar () -> Grammar [a]
sepByG p1 p2 = ((:) <$> p1 <*> (manyG (p2 *> p1))) `orElseG` pure []

Let us run a small example:

dottedWordsG :: Grammar [String]
dottedWordsG = many1G (manyG anyCharG <* charG '.')
*Main> putStrLn $ ppGrammar dottedWordsG
'', ('', char, ('', char, ('', char, ('', char, ('', char, ('', …

Oh my, that is not good. Looks like the recursion in manyG does not work well, so we need to avoid that. But anyways we want to be explicit in the EBNF grammars about where something can be repeated, so let us just make many a primitive:

manyG :: Grammar a -> Grammar [a]
manyG (G rhs) = G (Repetition rhs)

With this definition, we already get a simple grammar for dottedWordsG:

*Main> putStrLn $ ppGrammar dottedWordsG
'', {char}, '.', {{char}, '.'}

This already looks like a proper EBNF grammar. One thing that is not nice about it is that there is an empty string ('') in a sequence (…,…). We do not want that.

Why is it there in the first place? Because our Applicative instance is not lawful! Remember that pure id <*> g == g should hold. One way to achieve that is to improve the Applicative instance to optimize this case away:

instance Applicative Grammar where
    pure x = G (Terminal "")
    G (Terminal "") <*> G rhs2 = G rhs2
    G rhs1 <*> G (Terminal "") = G rhs1
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)
Now we get what we want:
*Main> putStrLn $ ppGrammar dottedWordsG
{char}, '.', {{char}, '.'}

Remember our parser for CSV files above? Let me repeat it here, this time using only Applicative combinators, i.e. avoiding (>>=), (>>), return and do-notation:

parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = charP '"' *> manyP (anyCharButP '"') <* charP '"'

And now we try to rewrite the code to produce Grammar instead of Parser. This is straightforward with the exception of anyCharButP. The parser code for that is inherently monadic, and we just do not have a monad instance. So we work around the issue by making that a “primitive” grammar, i.e. introducing a non-terminal in the EBNF without a production rule – pretty much like we did for anyCharG:

primitiveG :: String -> Grammar a
primitiveG s = G (NonTerminal s)

parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = parseCell `sepByG` charG ',' <* charG '\n'
    parseCell = charG '"' *> manyG (primitiveG "not-quote") <* charG '"'

Of course the names parse… are not quite right any more, but let us just leave that for now.

Here is the result:

*Main> putStrLn $ ppGrammar parseCSVG
{('"', {not-quote}, '"', {',', '"', {not-quote}, '"'} | ''), '

The line break is weird. We do not really want newlines in the grammar. So let us make that primitive as well, and replace charG '\n' with newlineG:

newlineG :: Grammar ()
newlineG = primitiveG "newline"

Now we get

*Main> putStrLn $ ppGrammar parseCSVG
{('"', {not-quote}, '"', {',', '"', {not-quote}, '"'} | ''), newline}

which is nice and correct, but still not quite the easily readable EBNF that we saw further up.

Code to produce EBNF, with productions

We currently let our grammars produce only the right-hand side of one EBNF production, but really, we want to produce an RHS that may refer to other productions. So let us change the type accordingly:

newtype Grammar a = G (BNF, RHS)

runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]

ppGrammar :: String -> Grammar a -> String
ppGrammar main g = ppBNF $ runGrammer main g

Now we have to adjust all our primitive combinators (but not the derived ones!):

charG :: Char -> Grammar ()
charG c = G ([], Terminal [c])

anyCharG :: Grammar Char
anyCharG = G ([], NonTerminal "char")

manyG :: Grammar a -> Grammar [a]
manyG (G (prods, rhs)) = G (prods, Repetition rhs)

mergeProds :: [Production] -> [Production] -> [Production]
mergeProds prods1 prods2 = nub $ prods1 ++ prods2

orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G (prods1, rhs1)) (G (prods2, rhs2))
    = G (mergeProds prods1 prods2, Choice rhs1 rhs2)

instance Functor Grammar where
    fmap _ (G bnf) = G bnf

instance Applicative Grammar where
    pure x = G ([], Terminal "")
    G (prods1, Terminal "") <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, rhs2)
    G (prods1, rhs1) <*> G (prods2, Terminal "")
        = G (mergeProds prods1 prods2, rhs1)
    G (prods1, rhs1) <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, Sequence rhs1 rhs2)

primitiveG :: String -> Grammar a
primitiveG s = G ([], NonTerminal s)

The use of nub when combining productions removes the duplicates that arise when the same production is used in different parts of the grammar. Not efficient, but good enough for now.
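
If the quadratic nub ever became a bottleneck, one could deduplicate with a set instead. A minimal sketch, assuming RHS (and hence Production) is additionally given an Ord instance:

import qualified Data.Set as Set

mergeProds' :: [Production] -> [Production] -> [Production]
mergeProds' prods1 prods2 = go Set.empty (prods1 ++ prods2)
  where
    -- keep the first occurrence of each production, in input order
    go _    []     = []
    go seen (p:ps)
      | p `Set.member` seen = go seen ps
      | otherwise           = p : go (Set.insert p seen) ps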

Did we gain anything? Not yet:

*Main> putStr $ ppGrammar "csv" (parseCSVG)
csv = {('"', {not-quote}, '"', {',', '"', {not-quote}, '"'} | ''), newline};

But we can now introduce a function that lets us tell the system where to give names to a piece of grammar:

nonTerminal :: String -> Grammar a -> Grammar a
nonTerminal name (G (prods, rhs))
  = G (prods ++ [(name, rhs)], NonTerminal name)

Ample use of this in parseCSVG yields the desired result:

parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = nonTerminal "line" $
        parseCell `sepByG` charG ',' <* newlineG
    parseCell = nonTerminal "cell" $
        charG '"' *> manyG (primitiveG "not-quote") <* charG '"'

*Main> putStr $ ppGrammar "csv" (parseCSVG)
cell = '"', {not-quote}, '"';
line = (cell, {',', cell} | ''), newline;
csv = {line};

This is great!

Unifying parsing and grammar-generating

Note how similar parseCSVG and parseCSVP are! Would it not be great if we could implement that functionality only once, and get both a parser and a grammar description out of it? This way, the two would never be out of sync!

And surely this must be possible. The tool to reach for is of course to define a type class that abstracts over the parts where Parser and Grammar differ. So we have to identify all functions that are primitive in one of the two worlds, and turn them into type class methods. This includes char and orElse. It includes many, too: Although manyP is not primitive, manyG is. It also includes nonTerminal, which does not exist in the world of parsers (yet), but we need it for the grammars.

The primitiveG function is tricky. We use it in grammars when the code that we might use while parsing is not expressible as a grammar. So the solution is to let it take two arguments: A String, when used as a descriptive non-terminal in a grammar, and a Parser a, used in the parsing code.

Finally, the type classes that we expect, Applicative (and thus Functor), are added as constraints on our type class:

class Applicative f => Descr f where
    char :: Char -> f ()
    many :: f a -> f [a]
    orElse :: f a -> f a -> f a
    primitive :: String -> Parser a -> f a
    nonTerminal :: String -> f a -> f a

The instances are easily written. For the Grammar instance, we rename the grammar-specific nonTerminal from the previous section to nonTerminalG, so that it does not clash with the class method:

instance Descr Parser where
    char = charP
    many = manyP
    orElse = orElseP
    primitive _ p = p
    nonTerminal _ p = p

instance Descr Grammar where
    char = charG
    many = manyG
    orElse = orElseG
    primitive s _ = primitiveG s
    nonTerminal s g = nonTerminalG s g

And we can now take the derived definitions, of which so far we had two copies, and define them once and for all:

many1 :: Descr f => f a -> f [a]
many1 p = pure (:) <*> p <*> many p

anyChar :: Descr f => f Char
anyChar = primitive "char" anyCharP

dottedWords :: Descr f => f [String]
dottedWords = many1 (many anyChar <* char '.')

sepBy :: Descr f => f a -> f () -> f [a]
sepBy p1 p2 = ((:) <$> p1 <*> (many (p2 *> p1))) `orElse` pure []

newline :: Descr f => f ()
newline = primitive "newline" (charP '\n')
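
(The expression example in the recursion sections below also uses a digit combinator whose definition is not part of this excerpt; it presumably follows the same pattern. The name digitP below is only my placeholder for whatever digit-accepting Parser the full code provides.)

digit :: Descr f => f Char
digit = primitive "digit" digitP   -- digitP: assumed Parser Char for a single digit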

And thus we now have our CSV parser/grammar generator:

parseCSV :: Descr f => f [[String]]
parseCSV = many parseLine
  where
    parseLine = nonTerminal "line" $
        parseCell `sepBy` char ',' <* newline
    parseCell = nonTerminal "cell" $
        char '"' *> many (primitive "not-quote" (anyCharButP '"')) <* char '"'

We can now use this definition both to parse and to generate grammars:

*Main> putStr $ ppGrammar "csv" (parseCSV)
cell = '"', {not-quote}, '"';
line = (cell, {',', cell} | ''), newline;
csv = {line};
*Main> parse parseCSV "\"ab\",\"cd\"\n\"\",\"de\"\n\n"
Just [["ab","cd"],["","de"],[]]

The INI file parser and grammar

As a final exercise, let us transform the INI file parser into a combined thing. Here is the parser (another artifact of last week’s homework) again using applicative style2:

parseINIP :: Parser INIFile
parseINIP = many1P parseSection
  where
    parseSection =
        (,) <$  charP '['
            <*> parseIdent
            <*  charP ']'
            <*  charP '\n'
            <*> (catMaybes <$> manyP parseLine)
    parseIdent = many1P letterOrDigitP
    parseLine = parseDecl `orElseP` parseComment `orElseP` parseEmpty

    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  manyP (charP ' ')
            <*  charP '='
            <*  manyP (charP ' ')
            <*> many1P (anyCharButP '\n')
            <*  charP '\n')

    parseComment =
        Nothing <$ charP '#'
                <* many1P (anyCharButP '\n')
                <* charP '\n'

    parseEmpty = Nothing <$ charP '\n'

Transforming that to a generic description is quite straightforward. We use primitive again to wrap letterOrDigitP:

descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection =
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = many1 (primitive "alphanum" letterOrDigitP)
    parseLine = parseDecl `orElse` parseComment `orElse` parseEmpty

    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  many (char ' ')
            <*  char '='
            <*  many (char ' ')
            <*> many1 (primitive "non-newline" (anyCharButP '\n'))
            <*  newline)

    parseComment =
        Nothing <$ char '#'
                <* many1 (primitive "non-newline" (anyCharButP '\n'))
                <* newline

    parseEmpty = Nothing <$ newline

This yields this not very helpful grammar (abbreviated here):

*Main> putStr $ ppGrammar "ini" descrINI
ini = '[', alphanum, {alphanum}, ']', newline, {alphanum, {alphanum}, {' '}…

But with a few uses of nonTerminal, we get something really nice:

descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection = nonTerminal "section" $
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = nonTerminal "identifier" $
        many1 (primitive "alphanum" letterOrDigitP)
    parseLine = nonTerminal "line" $
        parseDecl `orElse` parseComment `orElse` parseEmpty

    parseDecl = nonTerminal "declaration" $ Just <$> (
        (,) <$> parseIdent
            <*  spaces
            <*  char '='
            <*  spaces
            <*> remainder)

    parseComment = nonTerminal "comment" $
        Nothing <$ char '#' <* remainder

    remainder = nonTerminal "line-remainder" $
        many1 (primitive "non-newline" (anyCharButP '\n')) <* newline

    parseEmpty = Nothing <$ newline

    spaces = nonTerminal "spaces" $ many (char ' ')

*Main> putStr $ ppGrammar "ini" descrINI
identifier = alphanum, {alphanum};
spaces = {' '};
line-remainder = non-newline, {non-newline}, newline;
declaration = identifier, spaces, '=', spaces, line-remainder;
comment = '#', line-remainder;
line = declaration | comment | newline;
section = '[', identifier, ']', newline, {line};
ini = section, {section};

Recursion (variant 1)

What if we want to write a parser/grammar-generator that is able to generate the following grammar, which describes terms that are additions and multiplications of natural numbers:

const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;

The production of expr is recursive (via plus, mult, atom). We have seen above that simply defining a Grammar a recursively does not go well.

One solution is to add a new combinator for explicit recursion, which replaces nonTerminal as a method of the Descr class:

class Applicative f => Descr f where
    recNonTerminal :: String -> (f a -> f a) -> f a

instance Descr Parser where
    recNonTerminal _ p = let r = p r in r

instance Descr Grammar where
    recNonTerminal = recNonTerminalG

recNonTerminalG :: String -> (Grammar a -> Grammar a) -> Grammar a
recNonTerminalG name f =
    let G (prods, rhs) = f (G ([], NonTerminal name))
    in G (prods ++ [(name, rhs)], NonTerminal name)

nonTerminal :: Descr f => String -> f a -> f a
nonTerminal name p = recNonTerminal name (const p)

runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, NonTerminal nt)) | main == nt = prods
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]

The change in runGrammer avoids adding a pointless expr = expr production to the output.

This lets us define a parser/grammar-generator for the arithmetic expressions given above:

data Expr = Plus Expr Expr | Mult Expr Expr | Const Integer
    deriving Show

mkPlus :: Expr -> [Expr] -> Expr
mkPlus = foldl Plus

mkMult :: Expr -> [Expr] -> Expr
mkMult = foldl Mult

parseExpr :: Descr f => f Expr
parseExpr = recNonTerminal "expr" $ \ exp ->
    ePlus exp

ePlus :: Descr f => f Expr -> f Expr
ePlus exp = nonTerminal "plus" $
    mkPlus <$> eMult exp
           <*> many (spaces *> char '+' *> spaces *> eMult exp)
           <*  spaces

eMult :: Descr f => f Expr -> f Expr
eMult exp = nonTerminal "mult" $
    mkMult <$> eAtom exp
           <*> many (spaces *> char '*' *> spaces *> eAtom exp)
           <*  spaces

eAtom :: Descr f => f Expr -> f Expr
eAtom exp = nonTerminal "atom" $
    aConst `orElse` eParens exp

aConst :: Descr f => f Expr
aConst = nonTerminal "const" $ Const . read <$> many1 digit

eParens :: Descr f => f a -> f a
eParens inner =
    id <$  char '('
       <*  spaces
       <*> inner
       <*  spaces
       <*  char ')'
       <*  spaces

And indeed, this works:

*Main> putStr $ ppGrammar "expr" parseExpr
const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;

Recursion (variant 2)

Interestingly, there is another solution to this problem, which avoids introducing recNonTerminal and explicitly passing around the recursive call (i.e. the exp in the example). To implement that we have to adjust our Grammar type as follows:

newtype Grammar a = G ([String] -> (BNF, RHS))

The idea is that the list of strings is those non-terminals that we are currently defining. So in nonTerminal, we check if the non-terminal to be introduced is currently in the process of being defined, and then simply ignore the body. This way, the recursion is stopped automatically:

nonTerminalG :: String -> (Grammar a) -> Grammar a
nonTerminalG name (G g) = G $ \seen ->
    if name `elem` seen
    then ([], NonTerminal name)
    else let (prods, rhs) = g (name : seen)
         in (prods ++ [(name, rhs)], NonTerminal name)
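
The post does not show how the remaining primitives are adjusted to the new Grammar type. The following is one possible reconstruction (my sketch, not the author's code): every combinator simply threads the list of non-terminals currently being defined through, the Applicative instance keeps the optimisation that drops empty terminals, and nonTerminal is once again an ordinary class method, backed by nonTerminalG:

charG :: Char -> Grammar ()
charG c = G $ \_seen -> ([], Terminal [c])

primitiveG :: String -> Grammar a
primitiveG s = G $ \_seen -> ([], NonTerminal s)

manyG :: Grammar a -> Grammar [a]
manyG (G g) = G $ \seen ->
    let (prods, rhs) = g seen in (prods, Repetition rhs)

orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G g1) (G g2) = G $ \seen ->
    let (prods1, rhs1) = g1 seen
        (prods2, rhs2) = g2 seen
    in  (mergeProds prods1 prods2, Choice rhs1 rhs2)

instance Functor Grammar where
    fmap _ (G g) = G g

instance Applicative Grammar where
    pure _ = G $ \_seen -> ([], Terminal "")
    G g1 <*> G g2 = G $ \seen ->
        let (prods1, rhs1) = g1 seen
            (prods2, rhs2) = g2 seen
        in  (mergeProds prods1 prods2, mkSeq rhs1 rhs2)
      where
        -- keep the optimisation that drops empty terminals
        mkSeq (Terminal "") r = r
        mkSeq r (Terminal "") = r
        mkSeq r1 r2           = Sequence r1 r2

runGrammer :: String -> Grammar a -> BNF
runGrammer main (G g) = case g [] of
    (prods, NonTerminal nt) | main == nt -> prods
    (prods, rhs)                         -> prods ++ [(main, rhs)]

-- in this variant, nonTerminal is again a plain class method
instance Descr Grammar where
    char = charG
    many = manyG
    orElse = orElseG
    primitive s _ = primitiveG s
    nonTerminal = nonTerminalG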

After adjusting the other primitives of Grammar (including the Functor and Applicative instances, which now have to thread the list of non-terminals being defined through) so that everything type-checks again, we observe that this parser/grammar generator for expressions, with genuine recursion, works:

parseExp :: Descr f => f Expr
parseExp = nonTerminal "expr" $
    ePlus

ePlus :: Descr f => f Expr
ePlus = nonTerminal "plus" $
    mkPlus <$> eMult
           <*> many (spaces *> char '+' *> spaces *> eMult)
           <*  spaces

eMult :: Descr f => f Expr
eMult = nonTerminal "mult" $
    mkMult <$> eAtom
           <*> many (spaces *> char '*' *> spaces *> eAtom)
           <*  spaces

eAtom :: Descr f => f Expr
eAtom = nonTerminal "atom" $
    aConst `orElse` eParens parseExp

Note that the recursion is only going to work if there is at least one call to nonTerminal somewhere around the recursive calls. We still cannot implement many as naively as above.


If you want to play more with this: The homework is to define a parser/grammar-generator for EBNF itself, as specified in this variant:

identifier = letter, {letter | digit | '-'};
spaces = {' ' | newline};
quoted-char = non-quote-or-backslash | '\\', '\\' | '\\', '\'';
terminal = '\'', {quoted-char}, '\'', spaces;
non-terminal = identifier, spaces;
option = '[', spaces, rhs, spaces, ']', spaces;
repetition = '{', spaces, rhs, spaces, '}', spaces;
group = '(', spaces, rhs, spaces, ')', spaces;
atom = terminal | non-terminal | option | repetition | group;
sequence = atom, {spaces, ',', spaces, atom}, spaces;
choice = sequence, {spaces, '|', spaces, sequence}, spaces;
rhs = choice;
production = identifier, spaces, '=', spaces, rhs, ';', spaces;
bnf = production, {production};

This grammar is set up so that the precedence of , and | is correctly implemented: a , b | c will parse as (a, b) | c.

In this syntax for BNF, terminal characters are quoted, i.e. inside '…', a ' is replaced by \' and a \ is replaced by \\ – this is done by the function quote in ppRHS.
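
The quote function itself is not shown in this excerpt; from the description above it is presumably something along these lines (a sketch; details may differ from the original):

quote :: String -> String
quote = concatMap quoteChar
  where
    quoteChar '\'' = "\\'"
    quoteChar '\\' = "\\\\"
    quoteChar c    = [c]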

If you do this, you should be able to round-trip with the pretty-printer, i.e. parse back what it wrote:

*Main> let bnf1 = runGrammer "expr" parseExpr
*Main> let bnf2 = runGrammer "expr" parseBNF
*Main> let f = Data.Maybe.fromJust . parse parseBNF . ppBNF
*Main> f bnf1 == bnf1
*Main> f bnf2 == bnf2

The last line is quite meta: We are using parseBNF as a parser on the pretty-printed grammar produced from interpreting parseBNF as a grammar.


We have again seen an example of the excellent support for abstraction in Haskell: Being able to define so very different things such as a parser and a grammar description with the same code is great. Type classes helped us here.

Note that it was crucial that our combined parser/grammars are only able to use the methods of Applicative, and not Monad. Applicative is less powerful, so by giving less power to the user of our Descr interface, the other side, i.e. the implementation, can be more powerful.

The reason why Applicative is ok, but Monad is not, is that in Applicative, the results do not affect the shape of the computation, whereas in Monad, the whole point of the bind operator (>>=) is that the result of the computation is used to decide the next computation. And while this is perfectly fine for a parser, it just makes no sense for a grammar generator, where there simply are no values around!
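
A tiny illustration of the difference (my example, built only from the parser combinators used earlier in the post): the following is easy to write for Parser, because the rest of the parse can depend on the character just read, but precisely because of that dependency there is no fixed grammar shape it could be translated to:

dependent :: Parser String
dependent = anyCharP >>= \c ->
    if c == '"' then manyP (anyCharButP '"')
                else pure [c]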

We have also seen that a phantom type, namely the parameter of Grammar, can be useful, as it lets the type system make sure we do not write nonsense. For example, the type of orElseG ensures that both grammars that are combined here indeed describe something of the same type.

  1. It seems to be the week of applicative-appraising blog posts: Brent has posted a nice piece about enumerations using Applicative yesterday.

  2. I like how in this alignment of <*> and <* the > characters point out where the arguments are that are being passed to the function on the left.

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.5

A new release of Rblpapi is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the sixth release since the package first appeared on CRAN last year. This release brings new functionality via a new function (getPortfolio()) and an extended one (getTicks()), as well as several fixes:

Changes in Rblpapi version 0.3.5 (2016-10-25)

  • Add new function getPortfolio to retrieve portfolio data via bds (John in #176)

  • Extend getTicks() to (optionally) return non-numeric data as part of data.frame or data.table (Dirk in #200)

  • Similarly extend getMultipleTicks (Dirk in #202)

  • Correct statement on timestamp for getBars (Closes issue #192)

  • Minor edits to a few files in order to either please R(-devel) CMD check --as-cran, or update documentation

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaMaxim Zakharov: dpsearch-4.54-2016-10-26

A new snapshot version of DataparkSearch Engine has been released. You can get it on Google Drive or on GitHub.

Changes made since previous one:

  • Corrected SQL schema for MySQL5
  • Fixed crash on URLs with no schema specified
  • New detection for Apache version
  • Fixed crossword section construction
  • p, option and input HTML tags can now be a section
  • More fine sleeping on mutex lock failure
  • Fixed compilation on FreeBSD 10
  • Added 'Robots collect' command
  • Fixed crash if dt:minute limit specified
  • Do not process sitemaps for Server/Realm/Subnet commands with nofollow option specified
  • Some other minor fixes

I am planning to retire support for Apache 1.3 in the future. Let me know if you are still using it.


Planet DebianLaura Arjona Reina: Rankings, Condorcet and free software: Calculating the results for the Stretch Artwork Survey

We had 12 candidates for the Debian Stretch Artwork and a survey was set up for allowing people to vote which one they prefer.

The survey was run in my LimeSurvey instance; LimeSurvey is a nice piece of free software with a lot of features. It provides a “Ranking” question type, and it made it very easy to allow people to “vote” in the Debian style (Debian uses the Condorcet method in its elections).

However, although LimeSurvey offers statistics and even graphics to show the results of many types of questions, its output for the Ranking type is not useful, so I had to export the data and use another tool to find the winner.

Export the data from LimeSurvey

I’ve created a read-only user to visit the survey site. With this visitor you can explore the survey questionnaire, its results, and export the data.
Username: stretch
Password: artwork

First attempt, the quick and easy (and nonfree, I guess)

There is an online tool to calculate the Condorcet winner.
The steps I followed to feed the tool with the data from LimeSurvey were these:
1.- Went to the admin interface of LimeSurvey, selected the stretch artwork survey, then responses and statistics, then export results to application
2.- Selected “Completed responses only”, “Question codes”, “Answer codes”, and exported to CSV. (results_stretch1.csv)
3.- Opened the CSV with LibreOffice Calc, and removed these columns:
id    submitdate    lastpage    startlanguage
4.- Removed the first row containing the headers and saved the result (results_stretch2.csv)
5.- In commandline:
sort results_stretch2.csv | uniq -c > results_stretch3.csv
6.- Opened results_stretch3.csv with LibreOffice Calc, choosing “merge delimiters” when importing.
7.- Removed the first column (blank), added a column between the numbers and the first ranked option, and filled that column with the “:” value. Saved (results_stretch4.csv)
8.- Opened results_stretch4.csv with my preferred editor, searched and replaced “,:,” with “:”, and after that searched and replaced “,” with “>”. Saved the result (results_stretch5.csv)
9.- Went to the online tool, selected Condorcet basic, “tell me some things”, and pasted the contents of results_stretch5.csv there.
The results are in results_stretch1.html

But where is the source code of this Condorcet tool?

I couldn’t find the source code (nor the license) of the solver by Eric Gorr.
The tool is mentioned in a page where other voting tools are listed; when a listed tool is libre software, that is noted, but not in this case.
There, I found another tool, VoteEngine, which is open source, so I tried that.

Second attempt: VoteEngine, a Free Open Source Software tool made with Python

I used a modification of voteengine-0.99, with a few changes (basically, Numeric -> numpy and Int -> int, so that it works in Debian stable).
Steps 1 to 4 are the same as in the first attempt.
5.- Sorted the 12 different options to vote alphabetically, and assigned a letter to each one (saved the assignments in a file called stretch_key.txt).
6.- Opened results_stretch2.csv with my favorite editor, and searched and replaced the names of the different options with their corresponding letters from the stretch_key.txt file.
Searched and replaced “,” with “ ” (space). Then saved the results into results_stretch3_voteengine.csv.
7.- Copied the input.txt file from voteengine-0.99 into stretch.txt and edited the options to our needs. Pasted the contents of results_stretch3_voteengine.csv at the end of stretch.txt.
8.- In the command line:
./ <stretch.txt > winner.txt
(winner.txt contains the results for the Condorcet method).
9.- I edited stretch.txt again to change the method to Schulze and calculated the results, and again with the Smith method. The winner is the same with all 3 methods. I pasted the summary of these 3 methods (Schulze and Smith provide a ranked list) in stretch_results.txt.

If it can be done, it can be done with R…

I found the algstat R package, which includes a “condorcet” function, but I couldn’t make it work with the data.
I’m not sure how the data needs to be shaped. I’m sure that this can be done in R and that the problem is me, in this case. Comments are welcome, and I’ll ask a friend whose R skills are better than mine!

And another SaaS

I found another SaaS voting tool along with its source code. It would be interesting to deploy a local instance to drive future surveys, but this time I didn’t want to fight with PHP in order to use only the “solver” part, nor install another SaaS on my home server just to find that I need some other dependency or whatever.
I’ll keep an eye on this, though, because it looks like a modern and active project.

Finally, devotee

Well, and which software does Debian use for its elections?
There is a git repository with devotee that you can clone.
I found that although the tool is quite modular, it’s written specifically for the Debian case (votes received by mail, GPG signed, there is a quorum, and other particularities) and I was not sure if I could use it with my data. It is written in Perl, and so I understood it less well than the Python of VoteEngine.
Maybe I’ll return to it, though, when I have more time, to try to put our data in the shape of a typical tally.txt file and then see if the module solving the Condorcet winner can work for me.
That’s all, folks! (for now…)


You can comment on this blog post in this thread.

Filed under: Tools Tagged: data mining, Debian, English, SaaS, statistics

Planet DebianJose M. Calhariz: New packages for Amanda on the works

Because of the Perl upgrade, amanda is currently broken on Debian testing and unstable. The problem is known and I am working with my sponsor to create new packages to solve the problem. Please hang on a little longer.

Planet DebianBits from Debian: "softWaves" will be the default theme for Debian 9

The theme "softWaves" by Juliette Taka Belin has been selected as default theme for Debian 9 'stretch'.

softWaves login screen, from the whole theme proposal

After the Debian Desktop Team made the call for proposing themes, a total of twelve choices have been submitted, and any Debian contributor has received the opportunity to vote on them in a survey. We received 3,479 responses ranking the different choices, and softWaves has been the winner among them.

We'd like to thank all the designers who have participated, providing nice wallpapers and artwork for Debian 9, and encourage everybody interested in this area of Debian to join the Design Team. We are considering packaging all of the proposals so they are easily available in Debian. If you want to help in this effort, or package any other artwork (for example, artwork particularly designed to be accessibility-friendly), please contact the Debian Desktop Team, but hurry up, because the freeze for new packages in the next release of Debian starts on January 5th, 2017.

This is the second time that Debian ships a theme by Juliette Belin, who also created the theme "Lines" that enhances our current stable release, Debian 8. Congratulations, Juliette, and thank you very much for your continued commitment to Debian!

Google AdsenseThe AdSense Guide to Audience Engagement is Now Available in More Languages!

Thank you for your feedback on our recently launched The AdSense Guide to Audience Engagement and letting us know how it has helped you grow your online business. Now you can download the guide in 2 additional languages: Portuguese and Spanish.

Download the guide today, and like thousands of other AdSense publishers, learn how to engage with your users like never before. The guide contains useful advice and best practices that will help you drive engagement on your site, including:
  1. Tips to help your audience become familiar with your brand 
  2. Best practices to design delightful user journeys 
  3. Ideas on how to develop content that resonates with your audience 
  4. Ways to make your content easy to consume 
  5. Reasons why you should share the love with other sites by referring to good sources.
Ready? Download your free copy of the #AdSenseGuide now in any of the following languages:

Enjoy the guide and we’d love to hear your feedback on Google+ and Twitter using #AdSenseGuide.

Posted by: Jay Castro from the AdSense team

Krebs on SecuritySenator Prods Federal Agencies on IoT Mess

The co-founder of the newly launched Senate Cybersecurity Caucus is pushing federal agencies for possible solutions and responses to the security threat from insecure “Internet of Things” (IoT) devices, such as the network of hacked security cameras and digital video recorders that were reportedly used to help bring about last Friday’s major Internet outages.

In letters to the Federal Communications Commission (FCC), the Federal Trade Commission (FTC) and the Department of Homeland Security (DHS), Virginia Senator Mark Warner (D) called the proliferation of insecure IoT devices a threat to the resiliency of the Internet.

“Manufacturers today are flooding the market with cheap, insecure devices, with few market incentives to design the products with security in mind, or to provide ongoing support,” Warner wrote to the agencies. “And buyers seem unable to make informed decisions between products based on their competing security features, in part because there are no clear metrics.”

The letter continues:

“Because the producers of these insecure IoT devices currently are insulated from any standards requirements, market feedback, or liability concerns, I am deeply concerned that we are witnessing a ‘tragedy of the commons’ threat to the continued functioning of the internet, as the security so vital to all internet users remains the responsibility of none. Further, buyers have little recourse when, despite their best efforts, security failures occur” [link added].

As Warner’s letter notes, last week’s attack on online infrastructure provider Dyn was launched at least in part by Mirai, a now open-source malware strain that scans the Internet for routers, cameras, digital video recorders and other Internet of Things “IoT” devices protected only by the factory-default passwords.

Once infected with Mirai, the IoT systems can be used to flood a target with so much junk Web traffic that the target site can no longer accommodate legitimate users or visitors. The attack on Dyn was slightly different because it resulted in prolonged outages for many other networks and Web sites, including Netflix, PayPal, Reddit and Twitter.

As a result of that attack, one of the most-read stories on KrebsOnSecurity so far this year is “Who Makes the IoT Things Under Attack?“, in which I tried to match default passwords sought out by the Mirai malware with IoT hardware devices for sale on the commercial market today.

In a follow-up to that story, I interviewed researchers at Flashpoint who discovered that one of the default passwords sought by machines infected with Mirai — username: root and password: xc3511 — is embedded in a broad array of white-labeled DVR and IP camera electronics boards made by a Chinese company called XiongMai Technologies. These components are sold downstream to vendors who then use them in their own products (for a look at XiongMai’s response to all this, see Monday’s story, IoT Device Maker Vows Product Recall, Legal Action Against Western Accusers).

In his inquiry to the federal agencies, Warner asked whether there was more the government could be doing to vet the security of IoT devices before or after they are plugged into networks.

“In the FCC’s Open Internet Order, the Commission suggested that ISPs could take such steps only when addressing ‘traffic that constitutes a denial-of-service attack on specific network infrastructure elements,'” Warner wrote in his missive to the FCC.  “Is it your agency’s opinion that the Mirai attack has targeted ‘specific network infrastructure elements’ to warrant a response from ISPs?”

In another line of questioning, Warner also asked whether it would be a reasonable network management practice for ISPs to designate insecure network devices as “insecure” and thereby deny them connections to their networks, including by refraining from assigning devices IP addresses.

It’s good to see lawmakers asking questions about whether there is a market failure here that requires government intervention or regulation. Judging from the comments on my story earlier this month — Europe to Push New Security Rules Amid IoT Mess — KrebsOnSecurity readers remain fairly divided on the role of government in addressing the IoT problem.

I have been asked by several reporters over the past few days whether I think government has a role to play in fixing the IoT mess. Personally, I do not believe there has ever been a technology challenge that was best served by additional government regulation.

However, I do believe that the credible threat of government regulation is very often what’s needed to spur the hi-tech industry into meaningful action and self-regulation. And that process usually starts with inquiries like these. So, here’s hoping more lawmakers in Congress can get up to speed quickly on this vitally important issue.

Sen. Warner’s letter to the FCC looks very similar to those sent to the other two agencies. A copy of it is available here.

Planet DebianJulian Andres Klode: Introducing DNS66, a host blocker for Android


I’m proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It’s been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device, and diverts all DNS traffic to it, possibly adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66’s core logic is based on another project,  dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java, and cleaned up the implementation a bit:

All work is done in a single thread by using poll() to detect when to read/write stuff. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a Device Socket (for the VPN’s tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).

We literally redirect your DNS servers. Meaning that if your device uses a given DNS server, all traffic to that server’s address is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we received no answer for: DNS66 stores the query into a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.


Filed under: Android, Uncategorized

Planet DebianMichal Čihař: New features on Hosted Weblate

Today, a new version has been deployed on Hosted Weblate. It brings many long-requested features and enhancements.

Adding a project to your watched projects got way simpler; you can now do it on the project page using the watch button:

Watch project

Another feature that project admins will like is that they can now change project metadata without contacting me. This works at both the project and the component level:

Project settings

And to add some fancy things, there is a new badge showing the status of translations into all languages. This is how it looks for Weblate itself:

Translation status

As you can see, it can get pretty big for projects with many translations, but it gives you a complete picture of the translation status.

You can find all these features in the upcoming Weblate 2.9, which should be released next week. The complete list of changes in Weblate 2.9 is described in our documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

Worse Than FailureTest Overflow

WidCo was a victim of its own success. It had started as a small purveyor of widgets: assembling, storing, transporting, and shipping the widgets to their small client base in their podunk state. They'd once had the staff to fill orders placed by phone. As they'd begun to make a name for themselves in the surrounding tri-state region, however, their CEO had caught wise to the value of "this Internet thing."

Within a decade, they were not only the country's foremost producer of widgets, but their entire staff makeup had changed. They now had more employees in IT than the other departments combined, and relied on in-house software to manage inventory, take orders, and fulfill them in a timely fashion. If the software went down, they could no longer fall back on a manual process.

And—as the IT manager was fond of pointing out in budget meetings—they had a QA department of 0 and no automated tests.

Bug reports piled up. Who had time to fix bugs? Monday was for scrambling to remedy production incidents after the weekend's jobs ran, Tuesday was for slapping together something sensible out of requirements documents, Wednesday for coding, Thursday for manual testing, and Friday for shoving half-tested code into production. Agile!

Finally, over the post-Christmas slump, the IT manager managed to convince the CTO to bring in a trainer to teach developers about "improving our unit tests". All 0 of them.

Adding tests to existing code was deemed a waste of time. It compiles, therefore it works. Begrudgingly, the CTO admitted that unit tests might be a good idea for new applications. Sometime in the next decade, they were bound to build a new application. They'd do this testing thing then.

Desperate, the IT manager put in place a code review policy: before anything could be deployed, someone had to look at it. But they were checking only the changes, the CTO pointed out. It was a waste of time to examine working code in production. Standards were seen as just documentation, and documentation was waste. Look at the existing code and do more of that.

"Think lean," the CTO said.

The IT manager sighed and hung his head.

And then a lucrative new contract was signed: WidCo would be selling widgets in Canada as well. When their distribution partner expressed concern about the lack of a QA department, the CTO loudly proclaimed that all their developers had QA training, and they were bringing in an engineer to streamline the testing process. A position promptly opened under the IT manager, who was seen with an actual, honest-to-God smile on his face for the first time all year.

Interview after interview was conducted. The first engineer was a bright young chap, a brunet with an easy smile and big round glasses. He started on a Tuesday, bringing in donuts to share with the team and promising to have things cleaned up within the week.

Two days later, he handed in his resignation, crawled into a bottle, and refused to answer his phone, muttering about raw pointers and RAII. Later, the team found him waiting tables at the local pub, his bright smile turned into a sullen sneer.

The second QA engineer was made of sterner stuff. She had the benefit of already being familiar with the code: she'd been brought in as a development contractor to save a project that was running over deadline earlier that year, and while she hadn't managed to work a miracle, she did impress the IT manager.

By now, the CTO was so over the whole QA engineer thing. He was onto this newfangled "DTAP" standard, declaring that he'd stand up a Development server, a Test server, an Acceptance server, and a Production server, and all code would be promoted between them instead of going right from the developers' machines to prod.

And so the QA Engineer rolled up her sleeves and tried to develop a sane promotions process. She set up an instance of Subversion and stuffed all the code into it. In order to comply with the standard, she made four branches: Dev, QA, Acceptance, and Trunk. Code would be done in dev, merged into QA for testing, merged into Acceptance for acceptance testing, and merged into Production to deploy. A daily cron job would push the code onto each server automatically. Continuous Integration!

The build system complete, she could get to her main job: testing code on the Testing server. She sat the project managers in a room and explained how to test on the Acceptance server. Within a month, however, she was doing it for them instead.

At least we're testing, she told herself, resigned to switching hats between "Try to break it" mode and "Is it actually nice to use?" mode when she changed servers.

Of course, there were 30 developers and only 1 QA engineer, so she didn't have time to go through changes one by one. Instead, she'd test a batch of them on a weekly schedule. But by the time they were merged together and pushed to the servers, she had no idea which change was responsible for a test failing. So all the tickets in that batch would have to be failed, and the whole thing reverted back to the dev branch. Sometimes multiple times per batch.

It was getting better, though. Every batch had fewer issues. Every code review had fewer comments. The QA engineer was beginning to see the light at the end of the tunnel. Maybe this would get to be so low-key she could handle it. Maybe she'd even be rewarded for her herculean efforts. Maybe she'd get a second helper. Maybe things would be okay.

Then the CTO read an article about how some big-name companies had "staging servers," and declared that they ought to have one to improve quality. After code was accepted, code would then be deployed to the staging server for a round of regression testing before it could go live. Of course, since they had a dedicated tester, that'd simply be her responsibility.

On the way out of that meeting, her heart sinking, the QA engineer was stopped in the hall by a project manager. "We've noticed a lot of tickets are failing," he began with a stern look. "That looks bad on our reports, so we're going to be running an extra round of testing on the dev server before it gets to you."

"Oh, good, so maybe there'll be less tickets I have to send back," she said, raking a hand through her hair.

"Exactly," he said with a hint of a smile. "I'll forward the invite to you so you can sit in and give your feedback. It'll be 4 hours on Monday afternoon."

Her efforts to weasel out of it were to no avail. She was the QA expert, after all, and her job was to train anyone who needed training. Henceforth, code would be checked into dev, tested by the project management group and the QA Engineer, moved to QA, checked by the QA Engineer, moved to Acceptance, checked by the QA Engineer, moved to Staging, checked by the QA Engineer, and then finally moved into production.

After a few failure cycles in QA (somehow, the project managers weren't very effective at finding bugs), she was staying late into the evening on Friday nights, testing code as fast as possible so it could go to production before it was technically Saturday and thus overdue.

Of course, because only she knew when the code was ready to move, the QA Engineer found herself in charge of merging between branches. And she had no idea what the code even did anymore. She never got to work with it, only seeing things from the UI level. And she was usually exhausted when she had to merge things. So there began to be a new class of error: merge errors, introduced by sloppy testers.

The CTO had a brillant solution to that as well. He split up the shared libraries, making multiple copies of the repository. Each team would have their own stack of DTASP servers and their own version of source code. That way, they could deploy without having to merge code belonging to different teams. In addition to their existing separate dev instances, all 10 3-man teams would get 8 servers each: test, acceptance, staging, and prod, plus a database for each. And the QA engineer would have to test all of them.

These efforts failed to make a dent in WidCo's bug backlog. However, the QA engineer made a killing running a betting pool in the breakroom. The odds were calculated fresh every Monday morning, and the payout went out when metrics were pulled on Friday afternoon. There was only one question to gamble on.

Which was larger this week: the average time required to deploy a fix, or the average time for a developer to leave for a new job?


Planet DebianJaldhar Vyas: Aaargh gcc 5.x You Suck

Aaargh gcc 5.x You Suck

I had to write a quick program today which is going to be run many thousands of times a day, so it has to run fast. I decided to do it in c++ instead of the usual perl or javascript because it seemed appropriate, and I've been playing around a lot with c++ lately, trying to update my knowledge of its modern features. So 200 LOC later I was almost done, and I ran the program through valgrind, a good habit I've been trying to instill. That's when I got a reminder of why I avoid c++.

==37698== HEAP SUMMARY:
==37698==     in use at exit: 72,704 bytes in 1 blocks
==37698==   total heap usage: 5 allocs, 4 frees, 84,655 bytes allocated
==37698== LEAK SUMMARY:
==37698==    definitely lost: 0 bytes in 0 blocks
==37698==    indirectly lost: 0 bytes in 0 blocks
==37698==      possibly lost: 0 bytes in 0 blocks
==37698==    still reachable: 72,704 bytes in 1 blocks
==37698==         suppressed: 0 bytes in 0 blocks

One of the things I've learnt, and which I've been trying to apply more rigorously, is to avoid manual memory management (new/delete) as much as possible in favor of modern c++ features such as std::unique_ptr etc. By my estimation there should only be three places in my code where memory is allocated, and none of them should leak. Where do the others come from? And why is there a missing free (or delete)? Now the good news is that valgrind is saying that the memory is not technically leaking. It is still reachable at exit, but that's ok because the OS will reclaim it. But this program will run a lot, and I think it could still lead to problems over time such as memory fragmentation, so I wanted to understand what was going on. Not to mention the bad aesthetics of it.

My first assumption (one which has served me well over the years) was that I had screwed up somewhere. Or perhaps it could be some behind-the-scenes compiler magic. It turned out to be the latter -- sort of, as I found out only after two hours of jiggling code in different ways and googling for clues. That's when I found this Stack Overflow question which suggests that it is either a valgrind or a compiler bug. The answer specifically mentions gcc 5.1. I was using Ubuntu LTS, which has gcc 5.4, so I have just gone ahead and assumed all 5.x versions of gcc have this problem. Sure enough, compiling the same program on Debian stable, which has gcc 4.9, gave this...

==6045== HEAP SUMMARY:
==6045==     in use at exit: 0 bytes in 0 blocks
==6045==   total heap usage: 3 allocs, 3 frees, 10,967 bytes allocated
==6045== All heap blocks were freed -- no leaks are possible

...Much better. The executable was substantially smaller too. The time was not a total loss, however. I learned that valgrind is pronounced val-grinned (it's from Norse mythology), not val-grind as I had thought. So I have that going for me, which is nice.

Planet DebianRuss Allbery: Review: Lord of Emperors

Review: Lord of Emperors, by Guy Gavriel Kay

Series: Sarantine Mosaic #2
Publisher: Eos
Copyright: 2000
Printing: February 2001
ISBN: 0-06-102002-8
Format: Mass market
Pages: 560

Lord of Emperors is the second half of a work that began with Sailing to Sarantium and is best thought of as a single book split for publishing reasons. You want to read the two together and in order.

As is typical for this sort of two-part work, it's difficult to review the second half without spoilers. I'll be more vague about the plot and the characters than normal, and will mark one bit that's arguably a bit of a spoiler (although I don't think it would affect the enjoyment of the book).

At the end of Sailing to Sarantium, we left Crispin in the great city, oddly and surprisingly entangled with some frighteningly powerful people and some more mundane ones (insofar as anyone is mundane in a Guy Gavriel Kay novel, but more on that in a bit). The opening of Lord of Emperors takes a break from the city to introduce a new people, the Bassanids, and a new character, Rustem of Karakek. While Crispin is still the heart of this story, the thread that binds the entirety of the Sarantine Mosaic together, Rustem is the primary protagonist for much of this book. I had somehow forgotten him completely since my first read of this series many years ago. I have no idea how.

I mentioned in my review of the previous book that one of the joys of reading this series is competence porn: watching the work of someone who is extremely good at what they do, and experiencing vicariously some of the passion and satisfaction they have for their work. Kay's handling of Crispin's mosaics is still the highlight of the series for me, but Rustem's medical practice (and Strumosus, and the chariot races) comes close. Rustem is a brilliant doctor by the standards of the time, utterly frustrated with the incompetence of the Sarantine doctors, but also weaving his own culture's belief in omens and portents into his actions. He's more reserved, more laconic than Crispin, but is another character with focused expertise and a deep internal sense of honor, swept unexpectedly into broader affairs and attempting to navigate them by doing the right thing in each moment. Kay fills this book with people like that, and it's compelling reading.

Rustem's entrance into the city accidentally sets off a complex chain of events that draws together all of the major characters of Sailing to Sarantium and adds a few more. The stakes are no less than war and control of major empires, and here Kay departs firmly from recorded history into his own creation. I had mentioned in the previous review that Justinian and Theodora are the clear inspirations for this story; that remains true, and many other characters are easy to map, but don't expect history to go here the way that it did in our world. Kay's version diverges significantly, and dramatically.

But one of the things I love the most about this book is its focus on the individual acts of courage, empathy, and ethics of each of the characters, even when those acts do not change the course of empires. The palace intrigue happens, and is important, but the individual acts of Kay's large cast get just as much epic narrative attention even if they would never appear in a history book. The most globally significant moment of the book is not the most stirring; that happens slightly earlier, in a chariot race destined to be forgotten by history. And the most touching moment of the book is a moment of connection between two people who would never appear in history, over the life of a third, that matters so much to the reader only because of the careful attention to individual lives and personalities Kay has shown over the course of a hundreds of pages.

A minor spoiler follows in the next paragraph, although I don't think it affects the reading of the book.

One brilliant part of Kay's fiction is that he doesn't have many villains, and goes to some lengths to humanize the actions of nearly everyone in the book. But sometimes the author's deep dislike of one particular character shows through, and here it's Pertennius (the clear analogue of Procopius). In a way, one could say the entirety of the Sarantine Mosaic is a rebuttal of the Secret History. But I think Kay's contrast between Crispin's art (and Scortius's, and Strumosus's) and Pertennius's history has a deeper thematic goal. I came away from this book feeling like the Sarantine Mosaic as a whole stands in contrast to a traditional history, stands against a reduction of people to dates and wars and buildings and governments. Crispin's greatest work attempts to capture emotion, awe, and an inner life. The endlessly complex human relationships shown in this book running beneath the political events occasionally surface in dramatic upheavals, but in Kay's telling the ones that stay below the surface are just as important. And while much of the other art shown in this book differs from Crispin's in being inherently ephemeral, it shares that quality of being the art of life, of complexity, of people in dynamic, changing, situational understanding of the world, exercising competence in some area that may or may not be remembered.

Kay raises to the level of epic the bits of history that don't get recorded, and, in his grand and self-conscious fantasy epic style, encourages the reader to feel those just as deeply as the ones that will have later historical significance. The measure of people, their true inner selves, is often shown in moments that Pertennius would dismiss and consider unworthy of recording in his history.

End minor spoiler.

I think Lord of Emperors is the best part of the Sarantine Mosaic duology. It keeps the same deeply enjoyable view of people doing things they are extremely good at while correcting some of the structural issues in the previous book. Kay continues to use a large cast, and continues to cut between viewpoint characters to show each event from multiple angles, but he has a better grasp of timing and order here than in Sailing to Sarantium. I never got confused about the timeline, thanks in part to more frequent and more linear scene cuts. And Lord of Emperors passes, with flying colors, the hardest test of a novel with a huge number of viewpoint characters: when Kay cuts to a new viewpoint, my reaction is almost always "yes, I wanted to see what they were thinking!" and almost never "wait, no, go back!".

My other main complaint about Sailing to Sarantium was the treatment of women, specifically the irresistibility of female sexual allure. Kay thankfully tones that down a lot here. His treatment of women is still a bit odd — one notices that five women seem to all touch the lives of the same men, and little room is left for Platonic friendship between the genders — but they're somewhat less persistently sexualized. And the women get a great deal of agency in this book, and a great deal of narrative respect.

That said, Lord of Emperors is also emotionally brutal. It's beautifully done, and entirely appropriate to the story, and Kay does provide a denouement that takes away a bit of the sting. But it's still very hard to read in spots if you become as invested in the characters and in the world as I do. Kay is writing epic that borders on tragedy, and uses his full capabilities as a writer to make the reader feel it. I love it, but it's not a book that I want to read too often.

As with nearly all Kay, the Sarantine Mosaic as a whole is intentional, deliberate epic writing, wearing its technique on its sleeve and making no apologies. There is constant foreshadowing, constant attempts to draw larger conclusions or reveal great principles of human nature, and a very open, repeated stress on the greatness and importance of events while they're being described. This works for me, but it doesn't work for everyone. If it doesn't work for you, the Sarantine Mosaic is unlikely to change your mind. But if you're in the mood for that type of story, I think this is one of Kay's best, and Lord of Emperors is the best half of the book.

Rating: 10 out of 10

Valerie AuroraWhen is naming abuse itself abusive?

Thanks to everyone who read my previous post about why I’m not attending Systems We Love, and especially to all those who shared their own experiences that led them to the same decision. I’m going to follow Charles’ Rules of Argument and reply one time, and then I’m going back to doing things I enjoy.

People asked me a lot of specific questions about this post: Why did you name Bryan Cantrill when many people in the systems community are abusive? Why didn’t you talk to Bryan privately first? Aren’t you insulting Bryan when you criticize him for being insulting? In my opinion, all of these questions boil down to the same basic question: Even if everything you said in your post was true, was your post also a form of abuse?

My answer is simple: No. The rest of this post is a general discussion about when you should name specific people and describe their abusive behavior in public, with this specific case as the example.

Maybe in some cases a post saying “some people are behaving badly in our community, please stop” works. It captures an important point, which is that bad behavior doesn’t happen in isolation – it takes a community of people to enable it. I’ve never personally seen the “some people” kind of post work, and I have several times seen it backfire: the very people who were being called out sometimes latch on to the post and say, “Yeah! This sucks! All you other people doing this need to stop!” Then they use this call to action as a weapon against people they disagree with for other reasons.

In this specific case, Bryan has done exactly this in the past, once vowing to fire any employee rejecting a patch on the principle that pronouns should be gendered. I agree with the argument that this vow was more about establishing Bryan’s dominance over others than demonstrating his devotion to supporting women in the workplace. In this case, the potential downside of vagueposting was much greater than any potential upside.

In some cases, talking to someone privately about their abusive behavior will work. It depends on what their values are, how close your relationship is, and how willing they are to engage in self-reflection. In this specific case, I did approach Bryan privately about his behavior as a co-worker about a month ago, and he completely dismissed my experience. Based on that and my prior years of experience as his co-worker, I did not think that approaching him privately would have any positive effect.

Sometimes talking privately to someone’s peers or colleagues or management will work. In this specific case, Bryan’s behavior is so public and striking that his colleagues and management at Joyent are already fully aware of his behavior; anything I had to say would have no effect. Since this is a conference, I considered talking to the program committee. Unfortunately, I don’t know anyone on the Systems We Love program committee well enough to expect them to work with me against the wishes of the person who created the conference, is a VP at the company hosting the event, and has significant influence over their future career. I warned one committee member and they told me I was the second person to warn them about working with Bryan. Their plan was to just avoid working closely with Bryan. In this case, there was no one with influence over Bryan that I could talk to privately.

Sometimes calling someone out for abusive behavior can be done in a way that is itself abusive. For example, if the response is out of proportion to the original offense, that can be abusive (see again Bryan’s vow to fire a person over one relatively minor act and the discussion on proportionality in “Is Shame Necessary?“). Sometimes we shame an abusive person not for their actual behavior, but for unrelated things that reinforce inequality. For example, body-shaming Donald Trump reinforces the idea that it’s okay to body-shame a wide variety of people (trans men, people who aren’t the “right” size or shape, older folks, all women, etc.). It’s really important to think carefully about exactly how you are calling someone out and whether it will reinforce existing structures of oppression.

In this specific case, my goal with the original post was to clearly and honestly describe Bryan’s actual behavior (insults, humiliation, dominance, all wrapped in beautiful language) and the effect it had on me and others. I did so without calling him names, speculating on his motivations, or diagnosing him with any disorders. I was equally straightforward about Bryan’s positive qualities and the admiration many people have for him, including myself. If describing someone’s behavior clearly, accurately, and in good faith comes across as an insult, it’s because that behavior is not admirable. In general, I agree with Jennifer Jacquet’s argument in the book “Is Shame Necessary?” that, used properly, public shaming can be an act of nonviolent resistance in pursuit of justice.

Naming and accurately describing abusive behavior is necessary and powerful at the same time that it makes many people feel uncomfortable. Here’s a quote (by permission) from a message sent to me about a different but similar situation:

[…] Your post was like a shining light, suddenly offering a gasp of hope. It clearly articulated exactly the trouble with these elite programmers that seem to thrive off of burying and insulting the people around them either directly or by proxy through peoples’ [sic] work. I’ve long wanted to paint and share a portrait of this problematic behavior, but could never figure out how to articulate this. Your post puts into words what I have been struggling with for some time now.”

Being uncomfortable is not in and of itself a sign that you are doing something wrong. I encourage people to think about what makes you uncomfortable about naming and describing abusive behavior, or seeing other people do it. Is it compassion for the person engaging in abusive behavior? Then I ask you to apply that compassion to the targets of abuse. Is it fear of further abuse by the person being called out? Then I urge you to support people taking action to end that abuse. Is it desire for a lack of overt conflict – a “negative peace“? Then I suggest you raise your sights and aim for a positive peace that includes justice and consideration for all. Is it fear that the wrong person will be accidentally targeted? Then I invite you to reflect on the enormous risk and backlash faced by people who do this kind of naming and describing. And then I invite you to worry more about the people who are remaining silent when speaking up would benefit us all.

I appreciate everyone who spoke up about their own similar experiences with Bryan Cantrill and the wider culture of systems programming, whether they did it publicly under their own name, publicly but anonymously, or privately. Whichever way you chose to share your experiences, it was brave. I hope it makes it easier for you to speak up the next time you see injustice.

I am personally ending my commentary on this issue (unless some major change is announced, but I don’t expect that). I will keep comments open on this post and approve anything that isn’t outright abusive, but I won’t be replying to them. Thank you for reading and commenting!

Planet DebianGunnar Wolf: On the results of vote "gr_private2"

Given that I started the GR process, and that I called for discussion and votes, I feel it is somehow my duty to also put a simple wrap-up to this process. Of course, I'll say many things already well known to my fellow Debian people, but non-Debianers read this too.

So, for further context, if you need to, please read my previous blog post, where I was about to send a call for votes. It summarizes the situation and proposals; you will find we had a nice set of messages during September; I have to thank all the involved parties, much specially Ian Jackson, who spent a lot of energy summing up the situation and clarifying the different bits to everyone involved.

So, we held the vote; you can be interested in looking at the detailed vote statistics for the 235 correctly received votes, and most importantly, the results:

Results for gr_private2

First of all, I'll say I'm actually surprised at the results, as I expected Ian's proposal (acknowledge difficulty; I actually voted this proposal as my top option) to win and mine (repeal previous GR) to be last; turns out, the winner option was Iain's (remain private). But all in all, I am happy with the results: As I said during the discussion, I was much disappointed with the results of the previous GR on this topic — and, yes, it seems the breaking point was when many people thought the privacy status of posted messages was in jeopardy; we cannot really compare with what I would have liked to see in said vote if we had followed the strategy of leaving the original resolution text instead of replacing it, but I believe it would have passed. In fact, one more surprise of this iteration was that I expected Further Discussion to be ranked higher, somewhere among the three explicit options. I am happy, of course, that we got such overwhelming clarity about what the project as a whole prefers.

And what was gained or lost with this whole exercise? Well, if nothing else, we get to stop lying. For over ten years, we have had an accepted resolution binding us to release the messages sent to debian-private given such-and-such conditions... but we never got around to implementing it. We now know that debian-private will remain private... but we should keep reminding ourselves to use the list as little as possible.

For a project such as Debian, which is often seen as a beacon of doing the right thing no matter what, I feel being explicit about not lying to ourselves is of great importance. Yes, we have the principle of not hiding our problems, but it has long been argued that the use of this list is not hiding our problems. Private communication can happen whenever you have humans involved, even if administratively we tried to avoid it.

Any of the three running options could have won, and I'd be happy. My #1 didn't win, but my #2 did. And, I am sure, it's for the best of the project as a whole.


CryptogramUK Admitting "Offensive Cyber" Against ISIS/Daesh

I think this might be the first time it has been openly acknowledged:

Sir Michael Fallon, the defence secretary, has said Britain is using cyber warfare in the bid to retake Mosul from Islamic State. Speaking at an international conference on waging war through advanced technology, Fallon made it clear Britain was unleashing its cyber capability on IS, also known as Daesh. Asked if the UK was launching cyber attacks in the bid to take the northern Iraqi city from IS, he replied:

I'm not going into operational specifics, but yes, you know we are conducting military operations against Daesh as part of the international coalition, and I can confirm that we are using offensive cyber for the first time in this campaign.

Planet DebianChris Lamb: Concorde

Today marks the 13th anniversary of the day the last passenger flight from New York arrived in the UK. Every seat was filled, a feat that had become increasingly rare for a plane that was a technological marvel but a commercial flop…

  • Only 20 aircraft were ever built despite 100 orders, most of them cancelled in the early 1970s.
  • Taxiing to the runway consumed 2 tons of fuel.
  • The white colour scheme was specified to reduce the outer temperature by about 10°C.
  • In a promotional deal with Pepsi, F-BTSD was temporarily painted blue. Due to the change of colour, Air France were advised to remain at Mach 2 for no more than 20 minutes at a time.
  • At supersonic speed the fuselage would heat up and expand by as much as 30cm. The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft conducting a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, permanently wedging them in place as the gap shrank again.
  • At Concorde's altitude a breach of cabin integrity would result in a loss of pressure so severe that passengers would quickly suffer from hypoxia despite application of emergency oxygen. Concorde was thus built with smaller windows to reduce the rate of loss in such a breach.
  • The high cruising altitude meant passengers received almost twice the radiation of a conventional long-haul flight. To prevent excessive exposure, the flight deck was fitted with a radiometer; if the radiation level became too high, pilots would descend below 45,000 feet.
  • BA's service had a greater number of passengers who booked a flight and then failed to appear than any other aircraft in their fleet.
  • Market research later in Concorde's life revealed that customers thought Concorde was more expensive than it actually was. Ticket prices were progressively raised to match these perceptions.
  • The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by British Airways' G-BOAD in 2 hours, 52 minutes, 59 seconds from takeoff to touchdown. It was aided by a 175 mph tailwind.

See also: A Rocket to Nowhere.

Krebs on SecurityIoT Device Maker Vows Product Recall, Legal Action Against Western Accusers

A Chinese electronics firm pegged by experts as responsible for making many of the components leveraged in last week’s massive attack that disrupted Twitter and dozens of popular Web sites has vowed to recall some of its vulnerable products, even as it threatened legal action against this publication and others for allegedly tarnishing the company’s brand.


Last week’s attack on online infrastructure provider Dyn was launched at least in part by Mirai, a now open-source malware strain that scans the Internet for routers, cameras, digital video recorders and other Internet of Things “IoT” devices protected only by the factory-default passwords. Once infected with Mirai, the IoT systems can be used to flood a target with so much junk Web traffic that the target site can no longer accommodate legitimate users or visitors.

In an interim report on the attack, Dyn said: “We can confirm, with the help of analysis from Flashpoint and Akamai, that one source of the traffic for the attacks were devices infected by the Mirai botnet. We observed 10s of millions of discrete IP addresses associated with the Mirai botnet that were part of the attack.”

As a result of that attack, one of the most-read stories on KrebsOnSecurity so far this year is “Who Makes the IoT Things Under Attack?“, in which I tried to match default passwords sought out by the Mirai malware with IoT hardware devices for sale on the commercial market today.

In a follow-up to that story, I interviewed researchers at Flashpoint who discovered that one of the default passwords sought by machines infected with Mirai — username: root and password: xc3511 — is embedded in a broad array of white-labeled DVR and IP camera electronics boards made by a Chinese company called XiongMai Technologies. These components are sold downstream to vendors who then use them in their own products.

The scary part about IoT products that include XiongMai’s various electronics components, Flashpoint found, was that while users could change the default credentials in the devices’ Web-based administration panel, the password is hardcoded into the device firmware and the tools needed to disable it aren’t present.

In a statement issued on social media Monday, XiongMai (referring to itself as “XM”) said it would be issuing a recall on millions of devices — mainly network cameras.

“Mirai is a huge disaster for the Internet of Things,” the company said in a separate statement emailed to journalists. “XM have to admit that our products also suffered from hacker’s break-in and illegal use.”

At the same time, the Chinese electronics firm said that in September 2015 it issued a firmware fix for vulnerable devices, and that XiongMai hardware shipped after that date should not by default be vulnerable.

“Since then, XM has set the device default Telnet off to avoid the hackers to connect,” the company said. “In other words, this problem is absent at the moment for our devices after Sep 2015, as Hacker cannot use the Telnet to access our devices.”

Regarding the default user name/password that ships with XM, “our devices are asking customers to change the default password when they first time to login,” the electronics maker wrote. “When customer power on the devices, the first step, is change the default password.”

I’m working with some researchers who are testing XM’s claims, and will post an update here if and when that research is available. In the meantime, XM is threatening legal action against media outlets that it says are issuing “false statements” against the company.

Google’s translation of their statement reads, in part: “Organizations or individuals false statements, defame our goodwill behavior … through legal channels to pursue full legal responsibility for all violations of people, to pursue our legal rights are reserved.”

XiongMai’s electrical components, which are white-labeled and embedded in countless IoT products sold under different brand names.

The statement by XM’s lawyers doesn’t name KrebsOnSecurity per se, but instead links to a Chinese media story referencing this site under the heading, “untrue reports link.”

Brian Karas, a business analyst with IPVM — a subscription-based news, testing and training site for the video surveillance industry which first reported the news of potential litigation by XM — said that over the past five years China’s market share in the video surveillance industry has surged, due to the efforts of companies like XiongMai and Dahua to expand globally, and from the growth of government-controlled security company Hikvision.

Karas said the recent Mirai botnet attacks have created “extreme concerns about the impact of Chinese video surveillance products.” Nevertheless,  he said, the threats against those the company accuses of issuing false statements are more about saving face.

“We believe Xiongmai has issued this announcement as a PR effort within China, to help counter criticisms they are facing,” Karas wrote. “We do not believe that Xiongmai or the Ministry of Justice is seriously going to sue any Western companies as this is a typical tactic to save face.”

Update, Oct. 25, 8:47 a.m.: Updated the story to reflect an oddity of Google Translate, which translated the statement from XM’s legal department as Justice Ministry. The threats of litigation come from XM, not the Chinese government. Also made clear that the threat was first written about by IPVM.

Planet DebianReproducible builds folks: Reproducible Builds: week 78 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 16 and Saturday October 22 2016:

Media coverage

Upcoming events

In order to build packages reproducibly, you not only need identical sources but also some external definition of the environment used for a particular build. This definition includes the inputs and the outputs and, in the Debian case, is available in a $package_$architecture_$version.buildinfo file.

We anticipate the next dpkg upload to sid will create .buildinfo files by default. Whilst it's clear that we also need to teach dak to deal with them (#763822), it's not actually clear how to handle .buildinfo files after dak has processed them and how to make them available to the world.

To this end, Chris Lamb has started development on a proof-of-concept .buildinfo server to see what issues arise. Source

Reproducible work in other projects

  • Ximin Luo submitted a patch to GCC as a prerequisite for future patches to make debugging symbols reproducible.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

99 package reviews have been added, 3 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues.

6 issue types have been added:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (23)
  • Daniel Reichelt (2)
  • Lucas Nussbaum (1)
  • Santiago Vila (18)

diffoscope development

  • h01ger increased the diskspace for reproducible content on Jenkins. Thanks to ProfitBricks.
  • Valerie Young supplied a patch to make the Python SQL interface more SQLite/PostgreSQL agnostic.
  • lynxis worked hard to make LEDE and OpenWrt builds happen on two hosts.


Our poll to find a good time for an IRC meeting is still running until Tuesday, October 25th; please reply as soon as possible.

We need a logo! Some ideas and requirements for a Reproducible Builds logo have been documented in the wiki. Contributions very welcome, even if simply by forwarding this information.

This week's edition was written by Chris Lamb & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Sociological ImagesReligious Trump Supporters Have Changed Their Mind about Morality

Originally posted at Montclair SocioBlog.

Most people agree that when this election is over, Trump will have changed American politics. Bigly, perhaps. But one of the more ironic changes may be that he caused the most conservative sectors of the electorate to relax their views on the connection between a politician’s private life and his fitness for public office. (Yes, “his.” Their ideas about the importance of a woman’s private sexual life may not have evolved in a similar way.)

Call it “motivated morality.” That sounds much better than hypocrisy. It’s like “motivated perception” – unconsciously adjusting your perceptions so that the facts fit with your ideology. But with motivated morality, you change your moral judgments.

For religious conservatives, Donald Trump presents quite a challenge. It’s the sex. One of the things that conservatives are conservative about is sex, and Trump’s sexual language and behavior clearly fall on the side of sin. What to do? Conservatives might try for motivated cognition and refuse to believe the women who were the recipients of Trump’s kissing, groping, and voyeurism. That’s difficult when Trump himself is on the record claiming to have done all these things, and making those claims using decidedly unChristian language.

Instead, they have changed their judgment about the link between groping and governing. Previously, they had espoused “moral clarity” – a single principle applied unbendingly to all situations. Good is good, evil is evil. If a man is immoral in his private life, he will be immoral or worse as a public official.

Now they favor “situational morality,” the situation in this case being the prospect of a Clinton victory. So rather than condemn Trump absolutely, they say that, although he is out of line, they will vote for him and encourage others to do likewise in order to keep Hillary out of the White House. For example, in a USA Today op-ed, Diann Catlin, a “Bible-thumping etiquette teacher” says:

I like God’s ways. … I also know that he wants discerning believers to take part in government. … God has always used imperfect people for his glory.

God uses people like Trump and like me who are sinners but whose specific issues, such as the life of the unborn child, align with his word.

She includes the “we’re all sinners” trope that’s so popular now among the Trump’s Christian supporters (funny how they never mention that when the topic is Bill Clinton’s infidelities or Hillary’s e-mails). More important is the implication that even a sinner can make good governmental decisions. That’s an idea that US conservatives used to dismiss as European amorality. In government, they would insist, “character” is everything.

It’s not just professional conservatives who have crossed over to the view that sex and politics are separate spheres and that a person can be sinful in one and yet virtuous in the other. Ordinary conservatives and Evangelicals have also (to use the word of the hour) pivoted.

Five years ago, the Public Religion Research Institute at Brookings asked people whether someone who had committed immoral acts in their private life could still be effective in their political or professional life. Nationwide, 44% said Yes. PRRI asked the same question this year. The Yes vote had risen to 61%. But the move to compartmentalize sin was most pronounced among those who were most conservative.


The unchurched or “unaffiliated” didn’t change much in five years. But White Catholics and mainline Protestants both became more tolerant of private immorality. And among the most religiously conservative, the White evangelical Protestants, that percentage more than doubled. They went from being the least accepting to being the most accepting.

As with religion, so with political views.

People of all political stripes became more accepting, but when it came to judging a privately immoral person in public life, Republicans, like White evangelicals, went from least tolerant to most tolerant.

What could have happened?

Flickr photo by Darron Birgenheier.

There’s no absolute proof that it was the Donald that made the difference. But those White evangelicals support him over Hillary by better than four to one. Those who identify as Republicans favor Trump by an even greater margin. There may be some other explanation, but for now, I’ll settle for the idea that in order to vote for Trump, they had to keep their judgment of him as a politician separate from their judgment of his sexual behavior – a separation they would not have made five years ago.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.


CryptogramHow Different Stakeholders Frame Security

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society -- ­and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions­ -- such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed­ -- require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the "going dark" debate: e.g. the FBI vs. Apple and their iPhone security.

Worse Than FailureCodeSOD: Keeping it Regular

Regular expressions are like one of those multi-tools: they're a knife, they're a screwdriver, they're pliers, and there's a pair of tweezers stuck in the handle. We can use them to do anything.

For example, Linda inherited a site that counted up and down votes, like Reddit, implemented in CoffeeScript. Instead of using variables or extracting the text from the DOM, this code brought regular expressions to bear.

updateBookmarkCounter = (upOrDown) ->
    counterSpan = $('.bookmark .counter')
    spanHtml = counterSpan.html()
    count = spanHtml.match(/\d/).first().toInteger()
    newCount = if (upOrDown == 'down') then (count - 1) else (count + 1)
    newCount = 0 if newCount < 1
    counterSpan.html(spanHtml.replace(/\d/,  newCount))
    updateUserBookmarkCount upOrDown

There’s a glitch in this code, and no, it’s not that this code exists in the first place. Think about what happens once the vote count reaches double digits.
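
To see the failure mode concretely, here is a minimal Python sketch (an illustration only, not the site's CoffeeScript) of what matching a single digit does once the count hits double digits:

import re

span_html = '<span class="counter">12</span>'

# The original approach: match and replace a single digit only.
count = int(re.search(r"\d", span_html).group())          # -> 1, not 12
print(re.sub(r"\d", str(count + 1), span_html, count=1))  # '<span class="counter">22</span>'

# Matching the whole run of digits behaves as intended.
count = int(re.search(r"\d+", span_html).group())          # -> 12
print(re.sub(r"\d+", str(count + 1), span_html, count=1))  # '<span class="counter">13</span>'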

Okay, maybe not the best use of regexes. What about sanitizing inputs? That seems like a textbook use case. Alexander T’s co-worker found a very clever way to convert any input into a floating point number. Any input.

function convertToFloat(value) {
  if(typeof value == "number") return value;
  return parseFloat(value.replace(/\D/g, ''));
}

Values like “D12.3” convert seamlessly to “123”.
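
A safer conversion keeps the characters that are actually meaningful in a number and fails loudly on garbage; a hedged sketch in Python rather than the original JavaScript:

import re

def convert_to_float(value):
    # Already numeric? Just coerce it.
    if isinstance(value, (int, float)):
        return float(value)
    # Keep digits, the decimal point and a minus sign; drop the rest.
    cleaned = re.sub(r"[^0-9.\-]", "", str(value))
    try:
        return float(cleaned)
    except ValueError:
        raise ValueError("not a number: {!r}".format(value))

print(convert_to_float("D12.3"))   # 12.3 rather than 123.0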

You know what else regexes can do? Parse things! Not just HTML and XML, but anything. Like, for example, parsing out an INI file. Kate found this.

$ini = array();
preg_match_all('/\[(?<sections>[^\]\r\n]+)\][\s\r\n]*(?<values>([^\r\n\[=]+(=[^\r\n]+)?[\s\r\n]*)+)?/i', file_get_contents($iniFile), $ini);
foreach($ini['sections'] as $i=>$section) {
    $ini[$section] = array();
    $values = $sections['values'][$i];
    foreach($ini['names'] as $j=>$name) {
        $name = trim($name);
        if($name && !preg_match("/[#;]/", $name)){
            $value = trim($ini['values'][$j]);
            if(!preg_match("/[#;]/", $value))
                $ini[$section][$name] = $value;
        }
    }
}

She was able to replace that entire block with $ini = parse_ini_file($iniFile, true);. parse_ini_file is a built-in library function in PHP.
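
For comparison, the same job in Python is just as short with the standard library; a minimal sketch (the file name is made up):

import configparser

config = configparser.ConfigParser()
config.read("settings.ini")   # hypothetical INI file

for section in config.sections():
    for name, value in config[section].items():
        print(section, name, value)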

Speaking of parsing, R.J. L. works for a company that runs printed documents through optical character recognition, and then uses regular expressions to identify interesting parts of the document to store in the database. These regular expressions are written in a custom, in-house regex language that is almost, but not quite, PCRE. By using their own regular expression language, tasks that might be inelegant or complicated in traditional languages become simple. For example, this regex finds the document identifier on any page.

([:-.,;/\\(]{0,2}(( [C|c][P|p][K,<|k,<][0-9]{11}
)||([:#.$",'#-/|][C|c][P|p][K,<|k,<][0-9]{11} )||( [C|c][P|p][K,<|k,<][0-9]{11}[
01[A|a|C|c|D|d|E|e|R|r][0-9]{7} )||([:#.$",'#-/|]01[A|a|C|c|D|d|E|e|R|r][0-9]{7}
|d|E|e|R|r][0-9]{7}[:#.$",'#-/|l\\])||( 02[A|a|B|b|C|c|D|d|E|e|F|f][0-9]{7}
)||([:#.$",'#-/|]02[A|a|B|b|C|c|D|d|E|e|F|f][0-9]{7} )||( 02[A|a|B|b|C|c|D|d|E|e
",'#-/|l\\])||( 04[C|c|D|d|F|f|V|v][0-9]{7}
)||([:#.$",'#-/|]04[C|c|D|d|F|f|V|v][0-9]{7} )||( 04[C|c|D|d|F|f|V|v][0-9]{7}[:#
05[M|m|A|a][0-9]{7} )||([:#.$",'#-/|]05[M|m|A|a][0-9]{7} )||( 05[M|m|A|a][0-9]{7
)||([:#.$",'#-/|]06[B|b|C|c|G|g|H|h|J|j|K|k|L|l|M|m|S|s|U|u|Y|y][0-9]{7} )||( 06
( 07[U|u][0-9]{7} )||([:#.$",'#-/|]07[U|u][0-9]{7} )||( 07[U|u][0-9]{7}[:#.$",'#
-/|l\\])||([:#.$",'#-/|]07[U|u][0-9]{7}[:#.$",'#-/|l\\])||( 08[A|a][0-9]{7}
)||([:#.$",'#-/|]08[A|a][0-9]{7} )||( 08[A|a][0-9]{7}[:#.$",'#-/|l\\])||([:#.$",
'#-/|]08[A|a][0-9]{7}[:#.$",'#-/|l\\])||( 09[A|a|B|b|C|c|D|d|F|f][0-9]{7}
)||([:#.$",'#-/|]09[A|a|B|b|C|c|D|d|F|f][0-9]{7} )||( 09[A|a|B|b|C|c|D|d|F|f][0-
|l\\])||( 10[M|m|F|f][0-9]{7} )||([:#.$",'#-/|]10[M|m|F|f][0-9]{7} )||( 10[M|m|F
||( 13[A|a][0-9]{7} )||([:#.$",'#-/|]13[A|a][0-9]{7} )||( 13[A|a][0-9]{7}[:#.$",
'#-/|l\\])||([:#.$",'#-/|]13[A|a][0-9]{7}[:#.$",'#-/|l\\])||( 14[A|a][0-9]{7}
)||([:#.$",'#-/|]14[A|a][0-9]{7} )||(
15[D|d|E|e|R|r|T|t][0-9]{7} )||([:#.$",'#-/|]15[D|d|E|e|R|r|T|t][0-9]{7} )||( 15
9]{7}[:#.$",'#-/|l\\])||( 17[A|a|E|e|L|l|M|m|P|p|S|s|U|u|W|w][0-9]{7}
)||([:#.$",'#-/|]17[A|a|E|e|L|l|M|m|P|p|S|s|U|u|W|w][0-9]{7} )||( 17[A|a|E|e|L|l
|P|p|S|s|U|u|W|w][0-9]{7}[:#.$",'#-/|l\\])||( 18[A|a][0-9]{7}
)||([:#.$",'#-/|]18[A|a][0-9]{7} )||( 18[A|a][0-9]{7}[:#.$",'#-/|l\\])||([:#.$",
'#-/|]18[A|a][0-9]{7}[:#.$",'#-/|l\\])||( 21[A|a|C|c|D|d][0-9]{7}
)||([:#.$",'#-/|]21[A|a|C|c|D|d][0-9]{7} )||( 21[A|a|C|c|D|d][0-9]{7}[:#.$",'#-/
)||([:#.$",'#-/|]23[A|a|B|b|C|c|D|d|L|l|M|m][0-9]{7} )||(23[A|a|B|b|C|c|D|d|L|l|
[:#.$",'#-/|l\\]) ||( 24[A|a|B|b|C|c|F|f|K|k|M|m|T|t][0-9]{7}
)||([:#.$",'#-/|]24[A|a|B|b|C|c|F|f|K|k|M|m|T|t][0-9]{7} )||( 24[A|a|B|b|C|c|F|f
|T|t][0-9]{7}[:#.$",'#-/|l\\]) ||( 25[A|a][0-9]{7}
)||([:#.$",'#-/|]25[A|a][0-9]{7} )||( 25[A|a][0-9]{7}[:#.$",'#-/|l\\])||([:#.$",
'#-/|]25[A|a][0-9]{7}[:#.$",'#-/|l\\])||( 32[A|a|F|f|H|h|X|x|Y|y|Z|z][0-9]{7}
)||([:#.$",'#-/|]32[A|a|F|f|H|h|X|x|Y|y|Z|z][0-9]{7} )||( 32[A|a|F|f|H|h|X|x|Y|y
}[:#.$",'#-/|l\\])||( 34[A|a][0-9]{7} )||([:#.$",'#-/|]34[A|a][0-9]{7} )||(
||( 35[A|a|B|R|r|S|s|T|t|U|u][0-9]{7}
)||([:#.$",'#-/|]35[A|a|B|R|r|S|s|T|t|U|u][0-9]{7} )||( 35[A|a|B|R|r|S|s|T|t|U|u
",'#-/|l\\])||( 39[C|c|P|p][0-9]{7} )||([:#.$",'#-/|]39[C|c|P|p][0-9]{7} )||( 39
|l\\])||( 40[A|a|C|c|D|d|S|s][0-9]{7}
)||([:#.$",'#-/|]40[A|a|C|c|D|d|S|s][0-9]{7} )||( 40[A|a|C|c|D|d|S|s][0-9]{7}[:#
46[A|a|B|b][0-9]{7} )||([:#.$",'#-/|]46[A|a|B|b][0-9]{7} )||( 46[A|a|B|b][0-9]{7
}[:#.$",'#-/|l\\])||([:#.$",'#-/|]46[A|a|B|b][0-9]{7}[:#.$",'#-/|l\\]) ||(
01[A|a|C|c|D|d|E|e|R|r][0-9]{9} )||([:#.$",'#-/|]01[A|a|C|c|D|d|E|e|R|r][0-9]{9}
|d|E|e|R|r][0-9]{9}[:#.$",'#-/|l\\])||( 02[A|a|B|b|C|c|D|d|E|e|F|f][0-9]{9} )
||([:#.$",'#-/|]02[A|a|B|b|C|c|D|d|E|e|F|f][0-9]{9} )||( 02[A|a|B|b|C|c|D|d|E|e|
[:#.$",'#-/|l\\]) ||( 04[C|c|D|d|F|f|V|v][0-9]{9}
)||([:#.$",'#-/|]04[C|c|D|d|F|f|V|v][0-9]{9} )||( 04[C|c|D|d|F|f|V|v][0-9]{9}[:#
05[M|m|A|a][0-9]{9} )||([:#.$",'#-/|]05[M|m|A|a][0-9]{9} )||( 05[M|m|A|a][0-9]{9
)||([:#.$",'#-/|]06[B|b|C|c|G|g|H|h|J|j|K|k|L|l|M|m|S|s|U|u|Y|y][0-9]{9} )||( 06
( 07[U|u][0-9]{9} )||([:#.$",'#-/|]07[U|u][0-9]{9} )||( 07[U|u][0-9]{9}[:#.$",'#
-/|l\\])||([:#.$",'#-/|]07[U|u][0-9]{9}[:#.$",'#-/|l\\])||( 08[A|a][0-9]{9}
)||([:#.$",'#-/|]08[A|a][0-9]{9} )||( 08[A|a][0-9]{9}[:#.$",'#-/|l\\])||([:#.$",
'#-/|]08[A|a][0-9]{9}[:#.$",'#-/|l\\])||( 09[A|a|B|b|C|c|D|d|F|f][0-9]{9} )||
([:#.$",'#-/|]09[A|a|B|b|C|c|D|d|F|f][0-9]{9} )||( 09[A|a|B|b|C|c|D|d|F|f][0-9]{
\])||( 10[M|m|F|f][0-9]{9} )||([:#.$",'#-/|]10[M|m|F|f][0-9]{9} )||( 10[M|m|F|f]
13[A|a][0-9]{9} )||([:#.$",'#-/|]13[A|a][0-9]{9} )||( 13[A|a][0-9]{9}[:#.$",'#-/
|l\\])||([:#.$",'#-/|]13[A|a][0-9]{9}[:#.$",'#-/|l\\])||( 14[A|a][0-9]{9} )||
([:#.$",'#-/|]14[A|a][0-9]{9} )||( 14[A|a][0-9]{9}[:#.$",'#-/|l\\])||([:#.$",'#-
/|]14[A|a][0-9]{9}[:#.$",'#-/|l\\])|| ( 15[D|d|E|e|R|r|T|t][0-9]{9}
)||([:#.$",'#-/|]15[D|d|E|e|R|r|T|t][0-9]{9} )||( 15[D|d|E|e|R|r|T|t][0-9]{9}[:#
)||([:#.$",'#-/|]17[A|a|E|e|L|l|M|m|P|p|S|s|U|u|W|w][0-9]{9} )||( 17[A|a|E|e|L|l
|P|p|S|s|U|u|W|w][0-9]{9}[:#.$",'#-/|l\\])||( 18[A|a][0-9]{9}
)||([:#.$",'#-/|]18[A|a][0-9]{9} )||(
||( 21[A|a|C|c|D|d][0-9]{9} )||([:#.$",'#-/|]21[A|a|C|c|D|d][0-9]{9} )||( 21[A|a
'#-/|l\\])||( 23[A|a|B|b|C|c|D|d|L|l|M|m][0-9]{9}
)||([:#.$",'#-/|]23[A|a|B|b|C|c|D|d|L|l|M|m][0-9]{9} )||( 23[A|a|B|b|C|c|D|d|L|l
}[:#.$",'#-/|l\\])||( 24[A|a|B|b|C|c|F|f|K|k|M|m|T|t][0-9]{9}
)||([:#.$",'#-/|]24[A|a|B|b|C|c|F|f|K|k|M|m|T|t][0-9]{9} )||( 24[A|a|B|b|C|c|F|f
|T|t][0-9]{9}[:#.$",'#-/|l\\])||( 25[A|a][0-9]{9}
)||([:#.$",'#-/|]25[A|a][0-9]{9} )||(
||( 32[A|a|F|f|H|h|X|x|Y|y|Z|z][0-9]{9}
)||([:#.$",'#-/|]32[A|a|F|f|H|h|X|x|Y|y|Z|z][0-9]{9} )||( 32[A|a|F|f|H|h|X|x|Y|y
}[:#.$",'#-/|l\\])||( 34[A|a][0-9]{9} )||([:#.$",'#-/|]34[A|a][0-9]{9} )||( 34[A
)||([:#.$",'#-/|]35[A|a|B|b|R|r|S|s|T|t|U|u][0-9]{9} )||( 35[A|a|B|b|R|r|S|s|T|t
}[:#.$",'#-/|l\\])||( 39[C|c|P|p][0-9]{9} )||([:#.$",'#-/|]39[C|c|P|p][0-9]{9}
)||( 39[C|c|P|p][0-9]{9}[:#.$",'#-/|l\\])||([:#.$",'#-/|]39[C|c|P|p][0-9]{9}[:#.
$",'#-/|l\\]) ||( 40[A|a|C|c|D|d|S|s][0-9]{9}
)||([:#.$",'#-/|]40[A|a|C|c|D|d|S|s][0-9]{9} )||( 40[A|a|C|c|D|d|S|s][0-9]{9}[:#
46[A|a|B|b][0-9]{9} )||([:#.$",'#-/|]46[A|a|B|b][0-9]{9} )||( 46[A|a|B|b][0-9]{9
/\\]{0,2} )

Simplicity itself.


Planet DebianRuss Allbery: Review: The Design of Everyday Things

Review: The Design of Everyday Things, by Don Norman

Publisher: Basic Books
Copyright: 2013
ISBN: 0-465-05065-4
Format: Trade paperback
Pages: 298

There are several editions of this book (the first under a different title, The Psychology of Everyday Things). This review is for the Revised and Expanded Edition, first published in 2013 and quite significantly revised compared to the original. I probably read at least some of the original for a class in human-computer interaction around 1994, but that was long enough ago that I didn't remember any of the details.

I'm not sure how much impact this book has had outside of the computer field, but The Design of Everyday Things is a foundational text of HCI (human-computer interaction) despite the fact that many of its examples and much of its analysis is not specific to computers. Norman's goal is clearly to write a book that's fundamental to the entire field of design; not having studied the field, I don't know if he succeeded, but the impact on computing was certainly immense. This is the sort of book that everyone ends up hearing about, if not necessarily reading, in college. I was looking forward to filling a gap in my general knowledge.

Having now read it cover-to-cover, would I recommend others invest the time? Maybe. But probably not.

There are several things this book does well. One of the most significant is that it builds a lexicon and a set of general principles that provide a way of talking about design issues. Lexicons are not the most compelling reading material (see also Design Patterns), but having a common language is useful. I still remember affordances from college (probably from this book or something else based on it). Norman also adds, and defines, signifiers, constraints, mappings, and feedback, and talks about the human process of building a conceptual model of the objects with which one is interacting.

Even more useful, at least in my opinion, is the discussion of human task-oriented behavior. The seven stages of action is a great systematic way of analyzing how humans perform tasks, where those actions can fail, and how designers can help minimize failure. One thing I particularly like about Norman's presentation here is the emphasis on the feedback cycle after performing a task, or a step in a task. That feedback, and what makes good or poor feedback, is (I think) an underappreciated part of design and something that too often goes missing. I thought Norman was a bit too dismissive of simple beeps as feedback (he thinks they don't carry enough information; while that's not wrong, I think they're far superior to no feedback at all), but the emphasis on this point was much appreciated.

Beyond these dry but useful intellectual frameworks, though, Norman seems to have a larger purpose in The Design of Everyday Things: making a passionate argument for the importance of design and for not tolerating poor design. This is where I think his book goes a bit off the rails.

I can appreciate the boosterism of someone who feels an aspect of creating products is underappreciated and underfunded. But Norman hammers on the unacceptability of bad design to the point of tedium, and seems remarkably intolerant of, and unwilling to confront, the reasons why products may be released with poor designs for their eventual users. Norman clearly wishes that we would all boycott products with poor designs and prize usability above most (all?) other factors in our decisions. Equally clearly, this is not happening, and Norman knows it. He even describes some of the reasons why not, most notably (and most difficultly) the fact that the purchasers of many products are not the eventual users. Stoves are largely sold to builders, not kitchen cooks. Light switches are laid out for the convenience of the electrician; here too, the motive for the builder to spend additional money on better lighting controls is unclear. So much business software is purchased by people who will never use it directly, and may have little or no contact with the people who do. These layers of economic separation result in deep disconnects of incentive structure between product manufacturers and eventual consumers.

Norman acknowledges this, writes about it at some length, and then seems to ignore the point entirely, returning to ranting about the deficiencies of obviously poor design and encouraging people to care more about design. This seems weirdly superficial in this foundational of a book. I came away half-convinced that these disconnects of incentive (and some related problems, such as the unwillingness to invest in proper field research or the elaborate, expensive, and lengthy design process Norman lays out as ideal) are the primary obstacle in the way of better-designed consumer goods. If that's the case, then this is one of the largest, if not the largest, obstacle in the way of doing good design, and I would have expected this foundational of a book to tackle it head-on and provide some guidance for how to fight back against this problem. But Norman largely doesn't.

There is some mention of this in the introduction. Apparently much of the discussion of the practical constraints on product design in the business world was added in this revised edition, and perhaps what I'm seeing is the limitations of attempting to revise an existing text. But that also implies that the original took an even harder line against poor design. Throughout, Norman is remarkably high-handed in his dismissal of bad design, focusing more on condemnation than on an investigation of why bad design might happen and what we, as readers, can learn from that process to avoid repeating it. Norman does provide extensive analysis of the design process and the psychology of human interaction, but still left me with the impression that he believes most design failures stem from laziness and stupidity. The negativity and frustration got a bit tedious by the middle of the book.

There's quite a lot here that someone working in design, particularly interface design, should be at least somewhat familiar with: affordances, signifiers, the importance of feedback, the psychological model of tasks and actions, and the classification of errors, just to name a few. However, I'm not sure this book is the best medium for learning those things. I found it a bit tedious, a bit too arrogant, and weirdly unconcerned with feasible solutions to the challenge of mismatched incentives. I also didn't learn that much from it; while the concepts here are quite important, most of them I'd picked up by osmosis from working in the computing field for twenty years.

In that way, The Design of Everyday Things reminded me a great deal of the Gang of Four's Design Patterns, even though it's a more readable book and less of an exercise in academic classification. The concepts presented are useful and important, but I'm not sure I can recommend the book as a book. It may be better to pick up the same concepts as you go, with the help of Internet searches and shorter essays.

Rating: 6 out of 10

Planet DebianDirk Eddelbuettel: World Marathon Majors: Five Star Finisher!

A little over eight years ago, I wrote a short blog post which somewhat dryly noted that I had completed the five marathons constituting the World Marathon Majors. I had completed Boston, Chicago and New York during 2007, adding London and then Berlin (with a personal best) in 2008. The World Marathon Majors existed then, but I was not aware of a website. The organisation was aiming to raise the profile of the professional and very high-end aspect of the sport. But marathoning is funny as they let somewhat regular folks like you and me into the same race. And I always wondered if someone kept track of regular folks completing the suite...

I have been running a little less the last few years, though I did get around to completing the Illinois Marathon earlier this year (I only tweeted about it and still have not added anything to the running section of my blog). But two weeks ago, I was once again handing out water cups at the Chicago Marathon, sending along two tweets when the elite wheelchair and elite male runners flew by. To the first, the World Marathon Majors account replied, which led me to their website. Which in turn led me to the Five Star Finisher page, and the newer / larger Six Star Finisher page now that Tokyo has been added.

And in short, one can now request one's record to be added (if they check out). So I did. And now I am on the Five Star Finisher page!

I don't think I'll ever surpass that as a runner. The table header and my row look like this:

[Images: the Five Star Finisher table header, and the row for Dirk Eddelbuettel]

If only my fifth / sixth grade physical education teacher could see that---he was one of those early running nuts from the 1970s and made us run towards / around this (by now enlarged) pond and boy did I hate that :) Guess it did have some long lasting effects. And I casually circled the lake a few years ago, starting much further away from my parents place. Once you are in the groove for distance...

But leaving that aside, running has been fun and with some luck I may have another one or two marathons or Ragnar Relays left. The only really bad part about this is that I may have to get myself to Tokyo after all (for something that is not an ISM workshop) ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDaniel Silverstone: Gitano - Approaching Release - Deprecated commands

As mentioned previously I am working toward getting Gitano into Stretch. Last time we spoke about lace, on which a colleague and friend of mine (Richard Maw) did a large pile of work. This time I'm going to discuss deprecation approaches and building more capability out of fewer features.

First, a little background -- Gitano is written in Lua, which is a deliberately small language whose authors spend more time thinking about what they can remove from the language spec than they do about what they could add. I first came to Lua in the 3.2 days, a little before 4.0 came out. (The authors provide a lovely timeline in case you're interested.) With each of the releases of Lua which came after 3.2, I was struck by how the authors looked to take a number of features which the language had, and collapse them into more generic, more powerful, smaller, fewer features.

This approach to design stuck with me over the subsequent decade, and when I began Gitano I tried to have the smallest number of core features/behaviours, from which could grow the power and complexity I desired. Gitano is, at its core, a set of files in a single format (clod) stored in a consistent manner (Git) which mediate access to a resource (Git repositories). Some of those files result in emergent properties such as the concept of the 'owner' of a repository (though that can simply be considered the value of the project.owner property for the repository). Indeed the concept of the owner of a repository is a fiction generated by the ACL system with a very small amount of collusion from the core of Gitano. Yet until recently Gitano had a first class command set-owner which would alter that one configuration value.

[gitano]  set-description ---- Set the repo's short description (Takes a repo)
[gitano]         set-head ---- Set the repo's HEAD symbolic reference (Takes a repo)
[gitano]        set-owner ---- Sets the owner of a repository (Takes a repo)

Those of you with Gitano installations may see the above if you ask it for help. Yet you'll also likely see:

[gitano]           config ---- View and change configuration for a repository (Takes a repo)

The config command gives you access to the repository configuration file (which, yes, you could access over git instead, but the config command can be delegated in a more fine-grained fashion without having to write hooks). Given the config command has all the functionality of the three specific set-* commands shown above, it was time to remove the specific commands.


If you have automation which uses the set-description, set-head, or set-owner commands then you will want to switch to the config command before you migrate your server to the current or any future version of Gitano.

In brief, where you had:

ssh git@gitserver set-FOO repo something

You now need:

ssh git@gitserver config repo set project.FOO something

It looks a little more wordy but it is consistent with the other features that are keyed from the project configuration, such as:

ssh git@gitserver config repo set cgitrc.section Fooble Section Name

And, of course, you can see what configuration is present with:

ssh git@gitserver config repo show

Or look at a specific value with:

ssh git@gitserver config repo show specific.key

As always, you can get more detailed (if somewhat cryptic) help with:

ssh git@gitserver help config

Next time I'll try and touch on the new PGP/GPG integration support.

Planet DebianFrancois Marier: Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.
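
For instance, a page can ask the browser to trim every outgoing referrer down to the origin, or suppress it for a single link; a small sketch (see the Referrer Policy specification for the full list of values):

<!-- Page-wide policy: send only scheme, host and port. -->
<meta name="referrer" content="origin">

<!-- Per-element override: send no referrer at all for this link. -->
<a href="https://example.com/" referrerpolicy="no-referrer">example</a>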

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also lead to exposing private personally-identifiable information when they are part of the query string. One of the most high-profile examples is the accidental leakage of user searches by

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) it entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. same ETLD/public suffix) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
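
Either configuration can be made persistent by putting it in a user.js file in your Firefox profile directory, which Firefox reads at startup and applies on top of about:config; a minimal sketch:

// Stricter option: only send referrers within the same host.
user_pref("network.http.referer.XOriginPolicy", 2);

// Weaker but more compatible alternative (use instead of the line above):
// trim cross-origin referrers down to scheme, host and port.
// user_pref("network.http.referer.XOriginTrimmingPolicy", 2);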


Planet DebianCarl Chenet: PyMoneroWallet: the Python library for the Monero wallet

Do you know the Monero cryptocurrency? It’s a cryptocurrency, like Bitcoin, focused on security, privacy and untraceability. It’s a great project launched in 2014, known today as XMR on all cryptocurrency exchange platforms (like Kraken or Poloniex).

So what’s new? In order to work with a Monero wallet from some Python applications, I just wrote a Python library to use the Monero wallet: PyMoneroWallet


Using PyMoneroWallet is as easy as:

$ python3
>>> from monerowallet import MoneroWallet
>>> mw = MoneroWallet()
>>> mw.getbalance()
{'unlocked_balance': 2262265030000, 'balance': 2262265030000}

Lots of features are included, you should have a look at the documentation of the monerowallet module to know them all, but quickly here are some of them:

And so on. Have a look at the complete documentation for extensive available functions.
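
Building on the getbalance() call from the example above, here is a small hedged sketch that converts the wallet's atomic units into XMR (Monero uses 10^12 atomic units per coin; everything apart from getbalance() is my own assumption, including that the wallet backend the library talks to is already running):

from monerowallet import MoneroWallet

ATOMIC_UNITS_PER_XMR = 10**12  # piconero per XMR

wallet = MoneroWallet()
balance = wallet.getbalance()

print("total    : {:.6f} XMR".format(balance["balance"] / ATOMIC_UNITS_PER_XMR))
print("unlocked : {:.6f} XMR".format(balance["unlocked_balance"] / ATOMIC_UNITS_PER_XMR))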

UPDATE: I’m trying to launch a crowdfunding of the PyMoneroWallet project. Feel free to comment in this thread of the official Monero forum to let them know you think that PyMoneroWallet is a great idea 😉

Feel free to contribute to this starting project to help spreading the Monero use by using the PyMoneroWallet project with your Python applications 🙂

Planet DebianVincent Sanders: Rabbit of Caerbannog

Subsequent to my previous use of American Fuzzy Lop (AFL) on the NetSurf bitmap image library I applied it to the gif library which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus, improving coverage above 90%.

I then turned my attention to the SVG processing library. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation.

The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an imagemagick mvg file.

The libsvg processing uses the NetSurf DOM library which in turn uses an expat binding to parse the SVG XML text. To process this with AFL required instrumenting not only the SVG library but also the DOM library. I did not initially understand this and my first run resulted in a "map coverage" indicating an issue. Helpfully the AFL docs do cover this so it was straightforward to rectify.

Once the test program was written and environment set up an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response, basically I needed to RTFM.

I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking  dumb questions.

After reading the fine manual I understood I needed to ensure all my test cases were as small as possible and further that the fuzzer needed a dictionary as a hint to the file format because the text file was of such low data density compared to binary formats.

Rabbit of Caerbannog. Death awaits you with pointy teeth
I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes, nothing like being savaged by a rabbit to cause a surprise.
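For anyone curious what such a dictionary looks like, AFL dictionaries are plain text files of keyword="token" pairs passed to afl-fuzz with the -x option; a hand-rolled SVG one might start out something like this (a sketch, not the actual dictionary used here):

tag_svg="<svg"
tag_g="<g"
tag_path="<path"
tag_rect="<rect"
attr_d="d="
attr_fill="fill="
attr_viewbox="viewBox="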

Not being in possession of the appropriate holy hand grenade I resorted instead to GDB and electric fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature in these crashes. Instead they mainly centered around actual logic errors when constructing and traversing the data structures.

For example Daniel Silverstone fixed an interesting bug where the XML parser binding would try and go "above" the root node in the tree if the source closed more tags than it opened which resulted in wild pointers and NULL references.

I found and squashed several others, including dealing with SVG documents which have no valid root element and division-by-zero errors when things like colour gradients have no points.

I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of the textual formats that causes this, although it might be due to the techniques used to parse the formats.

Once all the immediately reproducible crashes were dealt with I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.

Summary stats

Fuzzers alive : 10
Total run time : 68 days, 7 hours
Total execs : 9268 million
Cumulative speed : 15698 execs/sec
Pending paths : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found : 9 locally unique

After burning almost seventy days of processor time AFL found me another nine crashes and possibly more importantly a test corpus that generates over 90% coverage.

A useful tool that AFL provides is afl-cmin. This reduces the number of test files in a corpus to only those that are required to exercise all the code paths reached by the test set. In this case it reduced the number of files from 8242 to 2612.

afl-cmin -i queue_all/ -o queue_cmin -- test_decode_svg @@ 1.0 /dev/null
corpus minimization tool for afl-fuzz by

[+] OK, 1447 tuples recorded.
[*] Obtaining traces for input files in 'queue_all/'...
Processing file 8242/8242...
[*] Sorting trace sets (this may take a while)...
[+] Found 23812 unique tuples across 8242 files.
[*] Finding best candidates for each tuple...
Processing file 8242/8242...
[*] Sorting candidate list (be patient)...
[*] Processing candidates and writing output files...
Processing tuple 23812/23812...
[+] Narrowed down to 2612 files, saved in 'queue_cmin'.

Additionally the actual information within the test files can be minimised with the afl-tmin tool. This must be run on each file individually and can take a relatively long time. Fortunately with GNU parallel one can run many of these jobs simultaneously which merely required another three days of CPU time to process. The resulting test corpus weighs in at a svelte 15 Megabytes or so against the 25 Megabytes before minimisation.
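For illustration, using the same test harness invocation as the afl-cmin run above, the per-file minimisation can be farmed out with GNU parallel along these lines (a sketch; the directory names are whatever you used for your own corpus):

mkdir -p queue_tmin
ls queue_cmin/ | parallel afl-tmin -i queue_cmin/{} -o queue_tmin/{} -- ./test_decode_svg @@ 1.0 /dev/null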

The result is yet another NetSurf library significantly improved by the use of AFL both from finding and squashing crashing bugs and from having a greatly improved test corpus to allow future library changes with a high confidence there will not be any regressions.

Planet Linux AustraliaTridge on UAVs: APM:Plane 3.7.1 released and 3.8.0 beta nearly ready for testing

New plane releases from

Development is really hotting up for fixed wing and quadplanes! I've just released 3.7.1 and I plan on releasing the first beta of the major 3.8.0 release in the next week.
The 3.7.1 release fixes some significant bugs reported by users since the 3.7.0 release. Many thanks for all the great feedback on the release from users!
The 3.8.0 release includes a lot more new stuff, including a new servo mapping backend and a new persistent auto-trim system that makes getting the servo trim just right a breeze.
Happy flying!

Planet Linux AustraliaChris Smart: Building and Booting Upstream Linux and U-Boot for Orange Pi One ARM Board (with Ethernet)

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria.

The most important factor for me is that the device must be supported in upstream Linux (preferably stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Orange Pi One is a small, cheap ARM board based on the AllWinner H3 (sun8iw7p1) SOC with a quad-Core Cortex-A7 ARM CPU and 512MB RAM. It has no wifi, but does have an onboard 10/100 Ethernet provided by the SOC (Linux patches incoming). It has no NAND flash (not supported upstream yet anyway), but does support SD. There is lots of information available at

Orange Pi One


Note that while Fedora 25 does not yet support this board specifically it does support both the Orange Pi PC (which is effectively a more full-featured version of this device) and the Orange Pi Lite (which is the same but swaps Ethernet for WiFi). Using either of those configurations should at least boot on the Orange Pi One.

Connecting UART

The UART on the Pi One uses the GND, TX and RX connections which are next to the Ethernet jack. Plug the corresponding cables from a 3.3V UART cable onto these pins and then into a USB port on your machine.

Orange Pi One UART Pin Connections

UART Pin Connections (RX yellow, TX orange, GND black)

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

While U-Boot itself is embedded in the card and doesn't need it to be partitioned, a partition will be required later to read the boot files.

U-Boot needs the card to have an msdos partition table with a small boot partition (ext now supported) that starts at 1MB. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos \
mkpart primary ext3 1M 10M \
mkpart primary ext4 10M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.ext3 /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Leave your SD card plugged in, we will need to write the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01) but you could leave this off to get master, or change it to a different tag if you want.
cd ${HOME}
git clone --depth 1 -b v2016.09.01 git://
cd u-boot

There is a defconfig already for this board, so simply make this and build the bootloader binary.
CROSS_COMPILE=arm-linux-gnu- make orangepi_one_defconfig
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Write the bootloader to the SD card (replace /dev/sdx, like before).
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/sdx bs=1024 seek=8

Wait until your device has stopped writing (if you have an LED you can see this) or run the sync command before ejecting.

Testing our bootloader

Now we can remove the SD card and plug it into the powered off Orange Pi One to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Orange Pi One. Note that the device will try to netboot by default, so you’ll need to hit the enter key when you see a line that says the following.

(Or you can just repeatedly hit the enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

U-Boot version


Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Orange Pi One.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone
cd custom-initramfs
./ --arch arm --dir "${PWD}" --tty ttyS0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though, we need to convert this to the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.

Upstream Linux Kernel

The Ethernet driver has been submitted to the arm-linux mailing list (it’s up to its 4th iteration) and will hopefully land in 4.10 (it’s too late for 4.9 with RC1 already out).

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git:// linux

Now go into the linux directory.
cd linux

Patching in EMAC support for SOC

If you don’t need the onboard Ethernet, you can skip this step.

We can get the patches from the Linux kernel’s Patchwork instance, just make sure you’re in the directory for your Linux Git repository.

Note that these will probably only apply cleanly on top of mainline v4.9 Linux tree, not stable v4.8.

# [v4,01/10] ethernet: add sun8i-emac driver
wget \
-O sun8i-emac-patch-1.patch
# [v4,04/10] ARM: dts: sun8i-h3: Add dt node for the syscon
wget \
-O sun8i-emac-patch-4.patch
# [v4,05/10] ARM: dts: sun8i-h3: add sun8i-emac ethernet driver
wget \
-O sun8i-emac-patch-5.patch
# [v4,07/10] ARM: dts: sun8i: Enable sun8i-emac on the Orange PI One
wget \
-O sun8i-emac-patch-7.patch
# [v4,09/10] ARM: sunxi: Enable sun8i-emac driver on sunxi_defconfig
wget \
-O sun8i-emac-patch-9.patch

We will apply these patches (you could also use git apply, or grab the mbox if you want and use git am).

for patch in 1 4 5 7 9 ; do
    patch -p1 < sun8i-emac-patch-${patch}.patch
done

Hopefully that will apply cleanly.

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the sunxi boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make sunxi_defconfig

If you want, you could modify the kernel config here, for example remove support for other AllWinner SOCs.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Mount the boot partition and copy on the kernel and device tree file.
sudo cp arch/arm/boot/zImage /mnt/
sudo cp arch/arm/boot/dts/sun8i-h3-orangepi-one.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << EOF
ext2load mmc 0 \${kernel_addr_r} zImage
ext2load mmc 0 \${fdt_addr_r} sun8i-h3-orangepi-one.dtb
ext2load mmc 0 \${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz \${kernel_addr_r} \${ramdisk_addr_r} \${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Orange Pi One and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt


That’s it! You’ve built your own Linux system for the Orange Pi One!


Log in as root and give the Ethernet device (eth0) an IP address on your network.

Now test it with a tool, like ping, and see if your network is working.
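With the busybox userspace this is just a couple of commands, something along these lines (the addresses are only placeholders; substitute ones that suit your own network):

ifconfig eth0 netmask up
route add default gw
ping -c 3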

Here’s an example:

Networking on Orange Pi One


Memory usage

There is clearly lots more you can do with this device…

Memory usage



Planet DebianJaldhar Vyas: What I Did During My Summer Vacation

That's So Raven

If I could sum up the past year in one word, that word would be distraction. There have been so many strange, confusing or simply unforeseen things going on that I have had trouble focusing like never before.

For instance, on the opposite side of the street from me is one of Jersey City's old reservoirs. It's not used for drinking water anymore and the city eventually plans on merging it into the park on the other side. In the meantime it has become something of a wildlife refuge. Which is nice except one of the newly settled critters was a bird of prey -- the consensus is possibly some kind of hawk or raven. Starting your morning commute under the eyes of a harbinger of death is very goth and I even learned to deal with the occasional piece of deconstructed rodent on my doorstep but nighttime was a big problem. For contrary to popular belief, ravens do not quoth "nevermore" but "KRRAAAA". Very loudly. Just as soon as you have drifted off to sleep. Eventually my sleep-deprived neighbors and I appealed to the NJ division of environmental protection to get it removed but by the time they were ready to swing into action the bird had left for somewhere more congenial like Transylvania or Newark.

Or here are some more complete wastes of time: I go to the doctor for my annual physical. The insurance company codes it as Adult Onset Diabetes by accident. One day I opened the lid of my laptop and there's a "ping" sound and a piece of the hinge flies off. Apparently that also severed the connection to the screen and naturally the warranty had just expired so I had to spend the next month tethered to an external monitor until I could afford to buy a new one. Mix in all the usual social, political, family and work drama and you can see that it has been a very trying time for me.


I have managed to get some Debian work done. On Dovecot, my principal package, I have gotten tremendous support from Apollon Oikonomopolous who I belatedly welcome as a member of the Dovecot maintainer team. He has been particularly helpful in fixing our systemd support and cleaning out a lot of the old and invalid bugs. We're in pretty good shape for the freeze. Upstream has released an RC of 2.2.26 and hopefully the final version will be out in the next couple of days so we can include it in Stretch. We can always use more help with the package so let me know if you're interested.


Most of the action has been going on without me but I've been lending support and sponsoring whenever I can. We have several new DDs and DMs but still no one north of the Vindhyas I'm afraid.

Debian Perl Group

gregoa did a ping of inactive maintainers and I regretfully had to admit to myself that I wasn't going to be of use anytime soon so I resigned. Perl remains my favorite language and I've actually been more involved in the meetings of my local Perlmongers group so hopefully I will be back again one day. And I still maintain the Perl modules I wrote myself.


May have gained a recruit.

*Strictly speaking it should be called Debian-People-Who-Dont-Think-Faults-in-One-Moral-Domain-Such-As-For-Example-Axe-Murdering-Should-Leak-Into-Another-Moral-Domain-Such-As-For-Example-Debian but come on, that's just silly.


Valerie AuroraWhy I won’t be attending Systems We Love

Systems We Love is a one day event in San Francisco to talk excitedly about systems computing. When I first heard about it, I was thrilled! I love systems so much that I moved from New Mexico to the Bay Area when I was 23 years old purely so that I could talk to more people about them. I’m the author of the Kernel Hacker’s Bookshelf series, in which I enthusiastically described operating systems research papers I loved in the hopes that systems programmers would implement them. The program committee of Systems We Love includes many people I respect and enjoy being around. And the event is so close to me that I could walk to it.

So why am I not going to Systems We Love? Why am I warning my friends to think twice before attending? And why am I writing a blog post warning other people about attending Systems We Love?

The answer is that I am afraid that Bryan Cantrill, the lead organizer of Systems We Love, will say cruel and humiliating things to people who attend. Here’s why I’m worried about that.

I worked with Bryan in the Solaris operating systems group at Sun from 2002 to 2004. We didn’t work on the same projects, but I often talked to him at the weekly Monday night Solaris kernel dinner at Osteria in Palo Alto, participated in the same mailing lists as him, and stopped by his office to ask him questions every week or two. Even 14 years ago, Bryan was one of the best systems programmers, writers, and speakers I have ever met. I admired him and learned a lot from him. At the same time, I was relieved when I left Sun because I knew I’d never have to work with Bryan again.

Here’s one way to put it: to me, Bryan Cantrill is the opposite of another person I admire in operating systems (whom I will leave unnamed). This person makes me feel excited and welcome and safe to talk about and explore operating systems. I’ve never seen them shame or insult or put down anyone. They enthusiastically and openly talk about learning new systems concepts, even when other people think they should already know them. By doing this, they show others that it’s safe to admit that they don’t know something, which is the first step to learning new things. They are helping create the kind of culture I want in systems programming – the kind of culture promoted by Papers We Love, which Bryan cites as the inspiration for Systems We Love.

By contrast, when I’m talking to Bryan I feel afraid, cautious, and fearful. Over the years I worked with Bryan, I watched him shame and insult hundreds of people, in public and in private, over email and in person, in papers and talks. Bryan is no Linus Torvalds – Bryan’s insults are usually subtle, insinuating, and beautifully phrased, whereas Linus’ insults tend towards the crude and direct. Even as you are blushing in shame from what Bryan just said about you, you are also admiring his vocabulary, cadence, and command of classical allusion. When I talked to Bryan about any topic, I felt like I was engaging in combat with a much stronger foe who only wanted to win, not help me learn. I always had the nagging fear that I probably wouldn’t even know how cleverly he had insulted me until hours later. I’m sure other people had more positive experiences with Bryan, but my experience matches that of many others. In summary, Bryan is supporting the status quo of the existing culture of systems programming, which is a culture of combat, humiliation, and domination.

People admire and sometimes hero-worship Bryan because he’s a brilliant technologist, an excellent communicator, and a consummate entertainer. But all that brilliance, sparkle, and wit are often used in the service of mocking and humiliating other people. We often laugh and are entertained by what Bryan says, but most of the time we are laughing at another person, or at a person by proxy through their work. I think we rationalize taking part in this kind of cruelty by saying that the target “deserves” it because they made a short-sighted design decision, or wrote buggy code, or accidentally made themselves appear ridiculous. I argue that no one deserves to be humiliated or laughed at for making an honest mistake, or learning in public, or doing the best they could with the resources they had. And if that means that people like Bryan have to learn how to be entertaining without humiliating people, I’m totally fine with that.

I stopped working with Bryan in 2004, which was 12 years ago. It’s fair to wonder if Bryan has had a change of heart since then. As far as I can tell, the answer is no. I remember speaking to Bryan in 2010 and 2011 and it was déjà vu all over again. The first time, I had just co-founded a non-profit for women in open technology and culture, and I was astonished when Bryan delivered a monologue to me on the “right” way to get more women involved in computing. The second time I was trying to catch up with a colleague I hadn’t seen in a while and Bryan was invited along. Bryan dominated the conversation and the two of us the entire evening, despite my best efforts. I tried one more time about a month ago: I sent Bryan a private message on Twitter telling him honestly and truthfully what my experience of working with him was like, and asking if he’d had a change of heart since then. His reply: “I don’t know what you’re referring to, and I don’t feel my position on this has meaningfully changed — though I am certainly older and wiser.” Then he told me to google something he’d written about women in computing.

But you don’t have to trust my word on what Bryan is like today. The blog post Bryan wrote announcing Systems We Love sounds exactly like the Bryan I knew: erudite, witty, self-praising, and full of elegant insults directed at a broad swathe of people. He gaily recounts the time he gave a highly critical keynote speech at USENIX, bashfully links to a video praising him at a Papers We Love event, elegantly puts down most of the existing operating systems research community, and does it all while using the words “ancillary,” “verve,” and “quadrennial.” Once you know the underlying structure – a layer cake of vituperation and braggadocio, frosted with eloquence – you can see the same pattern in most of his writing and talks.

So when I heard about Systems We Love, my first thought was, “Maybe I can go but just avoid talking to Bryan and leave the room when he is speaking.” Then I thought, “I should warn my friends who are going.” Then I realized that my friends are relatively confident and successful in this field, but the people I should be worried about are the ones just getting started. Based on the reputation of Papers We Love and the members of the Systems We Love program committee, they probably fully expect to be treated respectfully and kindly. I’m old and scarred and know what to expect when Bryan talks, and my stomach roils at the thought of attending this event. How much worse would it be for someone new and open and totally unprepared?

Bryan is a better programmer than I am. Bryan is a better systems architect than I am. Bryan is a better writer and speaker than I am. The one area I feel confident that I know more about than Bryan is increasing diversity in computing. And I am certain that the environment that Bryan creates and fosters is more likely to discourage and drive off women of all races, people of color, queer and trans folks, and other people from underrepresented groups. We’re already standing closer to the exit; for many of us, it doesn’t take much to make us slip quietly out the door and never return.

I’m guessing that Bryan will respond to me saying that he humiliates, dominates, and insults people by trying to humiliate, dominate, and insult me. I’m not sure if he’ll criticize my programming ability, my taste in operating systems, or my work on increasing diversity in tech. Maybe he’ll criticize me for humiliating, dominating, and insulting people myself – and I’ll admit, I did my fair share of that when I was trying to emulate leaders in my field such as Bryan Cantrill and Linus Torvalds. It’s gone now, but for years there was a quote from me on a friend’s web site, something like: “I’m an elitist jerk, I fit right in at Sun.” It took me years to detox and unlearn those habits and I hope I’m a kinder, more considerate person now.

Even if Bryan doesn’t attack me, people who like the current unpleasant culture of systems programming will. I thought long and hard about the friendships, business opportunities, and social capital I would lose over this blog post. I thought about getting harassed and threatened on social media. I thought about a week of cringing whenever I check my email. Then I thought about the people who might attend Systems We Love: young folks, new developers, a trans woman at her first computing event since coming out – people who are looking for a friendly and supportive place to talk about systems at the beginning of their careers. I thought about them being deeply hurt and possibly discouraged for life from a field that gave me so much joy.

Come at me, Bryan.

Note: comments are now closed on this post. You can read and possibly comment on the follow-up post, When is naming abuse itself abusive?

Tagged: conferences, feminism, kernel

Planet DebianIngo Juergensmann: Automatically update TLSA records on new Letsencrypt Certs

I've been using DNSSEC for quite some time now and it is working quite well. When LetsEncrypt went public beta I jumped on the train and migrated many services to LE-based TLS. However there was still one small problem with LE certs: 

When there is a new cert, all of the old TLSA resource records are not valid anymore and might give problems to strict DNSSEC checking clients. It took some while until my pain was big enough to finally fix it by some scripts.

There are at least two scripts involved:

This script does all of my DNSSEC handling. You can just do a " enable-dnssec domain.tld" and everything is configured so that you only need to copy the appropriate keys into the webinterface of your DNS registry.

No parameter given.

MODE can be one of the following:
enable-dnssec : perform all steps to enable DNSSEC for your domain
edit-zone     : safely edit your zone after enabling DNSSEC
create-dnskey : create new dnskey only
load-dnskey   : loads new dnskeys and signs the zone with them
show-ds       : shows DS records of zone
zoneadd-ds    : adds DS records to the zone file
show-dnskey   : extract DNSKEY record that needs to uploaded to your registrar
update-tlsa   : update TLSA records with new TLSA hash, needs old and new TLSA hashes as additional parameters

For updating zone-files just do a " edit-zone domain.tld" to add new records and such and the script will take care e.g. of increasing the serial of the zone file. I find this very convenient, so I often use this script for non-DNSSEC-enabled domains as well.

However you can spot the command line option "update-tlsa". This option needs the old and the new TLSA hashes beside the domain.tld parameter. However, this option is used from the second script: 

This is a quite simple Bash script that parses the domains.txt from script, looking up the old TLSA hash in the zone files (structured in TLD/domain.tld directories), compare the old with the new hash (by invoking and if there is a difference in hashes, call with the proper parameters: 

set -e
for i in `cat /etc/ | awk '{print $1}'` ; do
        domain=`echo $i | awk 'BEGIN {FS="."} ; {print $(NF-1)"."$NF}'`
        #echo -n "Domain: $domain"
        TLD=`echo $i | awk 'BEGIN {FS="."}; {print $NF}'`
        #echo ", TLD: $TLD"
        OLDTLSA=`grep -i "in.*tlsa" /etc/bind/${TLD}/${domain} | grep ${i} | head -n 1 | awk '{print $NF}'`
        if [ -n "${OLDTLSA}" ] ; then
                #echo "--> ${OLDTLSA}"
                # Usage: cert.pem host[:port] usage selector mtype
                NEWTLSA=`/path/to/ $LEPATH/certs/${i}/fullchain.pem ${i} 3 1 1 | awk '{print $NF}'`
                #echo "==> $NEWTLSA"
                if [ "${OLDTLSA}" != "${NEWTLSA}" ] ; then
                        /path/to/ update-tlsa ${domain} ${OLDTLSA} ${NEWTLSA} > /dev/null
                        echo "TLSA RR update for ${i}"

So, quite simple and obviously a quick hack. For sure someone else can write a cleaner and more sophisticated implementation to do the same stuff, but at least it works for me™. Use it at your own risk and do whatever you want with these scripts (licensed under public domain).
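For reference, a TLSA "3 1 1" hash (SHA-256 over the certificate's SubjectPublicKeyInfo) can also be generated directly with openssl, which is handy for sanity-checking what the scripts produce (a sketch; adjust the path to point at your Let's Encrypt fullchain.pem):

openssl x509 -in /path/to/fullchain.pem -noout -pubkey | \
  openssl pkey -pubin -outform DER | \
  openssl dgst -sha256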

You can invoke the check script right after your crontab call for the renewal script. In a more sophisticated way it should be fairly easy to invoke these scripts from post hooks as well.
Please find the files attached to this page (remove the .txt extension after saving, of course).



Planet DebianMatthieu Caneill: Debugging 101

While teaching a class on concurrent programming this semester, I realized during the labs that most of the students couldn't properly debug their code. They are at the end of a 2-year curriculum and know many different programming languages and frameworks, but when it comes to tracking down a bug in their own code, they often lack the basics. Instead of debugging for them I tried to give them general directions that they could apply to the next bugs. I will try here to summarize the very first basic things to know about debugging. Because, remember, writing software is 90% debugging, and 10% introducing new bugs (that is not from me, but I could not find the original quote).

So here is my take at Debugging 101.

Use the right tools

Many good tools exist to assist you in writing correct software, and it would put you behind in terms of productivity not to use them. Editors which catch syntax errors while you write them, for example, will help you a lot. And there are many features out there in editors, compilers, debuggers, which will prevent you from introducing trivial bugs. Your editor should be your friend; explore its features and customization options, and find an efficient workflow with them, that you like and can improve over time. The best way to fix bugs is not to have them in the first place, obviously.

Test early, test often

I've seen students write code for an hour before running make, which would then fail so hard that hundreds of lines of errors and warnings were output. There are two main reasons doing this is a bad idea:

  • You have to debug all the errors at once, and the complexity of solving many bugs, some dependent on others, is way higher than the complexity of solving a single bug. Moreover, it's discouraging.
  • Wrong assumptions you made at the beginning will make the following lines of code wrong. For example if you chose the wrong data structure for storing some information, you will have to fix all the code using that structure. It's less painful to realize earlier it was the wrong one to choose, and you have more chances of knowing that if you compile and execute often.

I recommend to test your code (compilation and execution) every few lines of code you write. When something breaks, chances are it will come from the last line(s) you wrote. Compiler errors will be shorter, and will point you to the same place in the code. Once you get more confident using a particular language or framework, you can write more lines at once without testing. That's a slow process, but it's ok. If you set up the right keybinding for compiling and executing from within your editor, it shouldn't be painful to test early and often.

Read the logs

Spot the places where your program/compiler/debugger writes text, and read it carefully. It can be your terminal (quite often), a file in your current directory, a file in /var/log/, a web page on a local server, anything. Learn where different software write logs on your system, and integrate reading them in your workflow. Often, it will be your only information about the bug. Often, it will tell you where the bug lies. Sometimes, it will even give you hints on how to fix it.

You may have to filter out a lot of garbage to find relevant information about your bug. Learn to spot some keywords like error or warning. In long stacktraces, spot the lines concerning your files; because more often, your code is to be blamed, rather than deeper library code. grep the logs with relevant keywords. If you have the option, colorize the output. Use tail -f to follow a file getting updated. There are so many ways to grasp logs, so find what works best with you and never forget to use it!
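As a trivial example of that kind of filtering (log locations vary between systems, so adjust the path):

tail -f /var/log/syslog | grep -iE --color=auto 'error|warn'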

Print foobar

That one doesn't concern compilation errors (unless it's a Makefile error, in that case this file is your code anyway).

When the program logs and output failed to tell you where an error occurred (oh hi Segmentation fault!), and before having to dive into a memory debugger or system trace tool, spot the portion of your program that causes the bug and add in there some print statements. You can either print("foo") and print("bar"), just to know whether or not your program reaches a certain place in your code, or print(some_faulty_var) to get more insights on your program state. It will give you precious information.

stderr >> "foo" >> endl;
my_db.connect(); // is this broken?
stderr >> "bar" >> endl;

In the example above, you can be sure it is the connection to the database my_db that is broken if you get foo and not bar on your standard error.

(That is a hypothetical example. If you know something can break, such as a database connection, then you should always enclose it in a try/catch structure).

Isolate and reproduce the bug

This point is linked to the previous one. You may or may not have isolated the line(s) causing the bug, but maybe the issue is not always raised. It can depend on many other things: the program or function parameters, the network status, the amount of memory available, the decisions of the OS scheduler, the user rights on the system or on some files, etc. More generally, any assumption you made on any external dependency can appear to be wrong (even if it's right 99% of the time). According to the context, try to isolate the set of conditions that trigger the bug. It can be as simple as "when there is no internet connection", or as complicated as "when the CPU load of some external machine is too high, it's a leap year, and the input contains illegal utf-8 characters" (ok, that one is fucked up; but it surely happens!). But you need to reliably be able to reproduce the bug, in order to be sure later that you indeed fixed it.

Of course when the bug is triggered at every run, it can be frustrating that your program never works but it will in general be easier to fix.


Always read the documentation before reaching out for help. Be it man, a book, a website or a wiki, you will find precious information there to assist you in using a language or a specific library. It can be quite intimidating at first, but it's often organized the same way. You're likely to find a search tool, an API reference, a tutorial, and many examples. Compare your code against them. Check in the FAQ, maybe your bug and its solution are already referenced there.

You'll rapidly find yourself getting used to the way documentation is organized, and you'll be more and more efficient at finding instantly what you need. Always keep the doc window open!

Google and Stack Overflow are your friends

Let's be honest: many of the bugs you'll encounter have been encountered before. Learn to write efficient queries on search engines, and use the knowledge you can find on questions&answers forums like Stack Overflow. Read the answers and comments. Be wise though, and never blindly copy and paste code from there. It can be as bad as introducing malicious security issues into your code, and you won't learn anything. Oh, and don't copy and paste anyway. You have to be sure you understand every single line, so better write them by hand; it's also better for memorizing the issue.

Take notes

Once you have identified and solved a particular bug, I advise to write about it. No need for shiny interfaces: keep a list of your bugs along with their solutions in one or many text files, organized by language or framework, that you can easily grep.

It can seem slightly cumbersome to do so, but it proved (at least to me) to be very valuable. I can often recall I have encountered some buggy situation in the past, but don't always remember the solution. Instead of losing all the debugging time again, I search in my bug/solution list first, and when it's a hit I'm more than happy I kept it.

Further debugging

Remember this was only Debugging 101, that is, the very first steps on how to debug code on your own, instead of getting frustrated and helplessly stare at your screen without knowing where to begin. When you'll write more software, you'll get used to more efficient workflows, and you'll discover tools that are here to assist you in writing bug-free code and spotting complex bugs efficiently. Listed below are some of the tools or general ideas used to debug more complex software. They belong more to a software engineering course than a Debugging 101 blog post. But it's good to know as soon as possible these exist, and if you read the manuals there's no reason you can't rock with them!

  • Loggers. To make the "foobar" debugging more efficient, some libraries are especially designed for the task of logging out information about a running program. They often have way more features than a simple print statement (at the price of being over-engineered for simple programs): severity levels (info, warning, error, fatal, etc), output in rotating files, and many more.

  • Version control. Following the evolution of a program in time, over multiple versions, contributors and forks, is a hard task. That's where version control comes into play: it allows you to keep the entire history of your program, and switch to any previous version. This way you can identify more easily when a bug was introduced (and by whom), along with the patch (a set of changes to a code base) that introduced it. Then you know where to apply your fix. Famous version control tools include Git, Subversion, and Mercurial; a short git bisect sketch follows this list.

  • Debuggers. Last but not least, it wouldn't make sense to talk about debugging without mentioning debuggers. They are tools to inspect the state of a program (for example the type and value of variables) while it is running. You can pause the program, and execute it line by line, while watching the state evolve. Sometimes you can also manually change the value of variables to see what happens. Even though some of them are hard to use, they are very valuable tools, totally worth diving into!
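To make the version control point above concrete, here is roughly what hunting a regression with git bisect looks like (a sketch; v1.0 stands in for whatever revision you last knew to be good):

git bisect start
git bisect bad            # the current revision is broken
git bisect good v1.0      # a hypothetical known-good tag
# git now checks out a revision in between: build, test, then mark it
git bisect good           # or "git bisect bad"; repeat until done
git bisect reset          # once git has reported the first bad commit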

Don't hesitate to comment on this, and provide your debugging 101 tips! I'll be happy to update the article with valuable feedback.

Happy debugging!

Planet DebianIain R. Learmonth: The Domain Name System

As I posted yesterday, we released PATHspider 1.0.0. What I didn’t talk about in that post was an event that occurred only a few hours before the release.

Everything was going fine, proofreading of the documentation was in progress, a quick git push with the documentation updates and… CI FAILED!?! Our CI doesn’t build the documentation, only tests the core code. I’m planning to release real soon and something has broken.

Starting to panic.

irl@orbiter# ./
Ran 16 tests in 0.984s


This makes no sense. Maybe I forgot to add a dependency and it’s been broken for a while? I scrutinise the dependencies list and it all looks fine.

In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I’ve never had a failure that I couldn’t reproduce locally before.

It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there’s not a catastrophic test failure but now it looks like maybe there’s a problem with the University research group network, which is arguably worse.

Being focussed on getting the release ready, I didn’t realise that the Internet was falling apart. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line into /etc/hosts, still believing it to be a localised issue.
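That sort of workaround is just a single line, something like the following (the address and hostname here are purely illustrative placeholders, not the real ones):

echo ' git.example.org' | sudo tee -a /etc/hosts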

I’ve just removed this line as the problem seems to have resolved itself for now. There are two main points I’ve taken away from this:

  • CI failure doesn’t necessarily mean that your code is broken, it can also indicate that your CI infrastructure is broken.
  • Decentralised internetwork routing is pretty worthless when the centralised name system goes down.

This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I’d like to add some more to that list:

Only the first of these is really a distributed solution.

My idea with ICMP Domain Name Messages is that you send an ICMP message to a webserver. Somewhere along the path, you’ll hit either a surveillance or censorship middlebox. These middleboxes can provide value by caching any DNS replies that are seen so that an ICMP DNS request message will cause the message to not be forwarded but a reply is generated to provide the answer to the query. If the middlebox cannot generate a reply, it can still forward it to other surveillance and censorship boxes.

I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet, clearly fits within the scope of “defending national security” as if DNS is down the Internet is kinda dead, plus it’d make it nice and easy to find the boxes with PATHspider.

CryptogramDDoS Attacks against Dyn

Yesterday's DDoS attacks against Dyn are being reported everywhere.

I have received a gazillion press requests, but I am traveling in Australia and Asia and have had to decline most of them. That's okay, really, because we don't know much of anything about the attacks.

If I had to guess, though, I don't think it's China. I think it's more likely related to the DDoS attacks against Brian Krebs than the probing attacks against the Internet infrastructure, despite how prescient that essay seems right now. And, no, I don't think China is going to launch a preemptive attack on the Internet.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.7.500.0.0

armadillo image

A few days ago, Conrad released Armadillo 7.500.0. The corresponding RcppArmadillo release 0.7.500.0.0 is now on CRAN (and will get into Debian shortly).

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to a Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 274 other packages on CRAN.

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.500.0.0 (2016-10-20)

  • Upgraded to Armadillo release 7.500.0 (Coup d'Etat)

    • Expanded qz() to optionally specify ordering of the Schur form

    • Expanded each_slice() to support matrix multiplication

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sociological ImagesFrom the Archives: Halloween

It’s 9 days to Halloween and 17 days to election day. Here at SocImages, I’ve decided to continue to focus on election analysis and current events until Election Day. In the meantime, for your holiday pleasure, please enjoy our collection of Halloween posts from years past or visit our Halloween-themed Pinterest page. And feel free to follow me on Instagram for pics from tonight’s Krewe de Boo parade in New Orleans! Wish you were here!

Just for fun


Social psychology

Politics and culture

Race and ethnicity

Sexual orientation


Gender and kids


Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at

Planet DebianChristoph Egger: Running Debian on the ClearFog

Back in August, I was looking for a Homeserver replacement. During FrOSCon I was then reminded of the Turris Omnia project by The basic SoC (Marvell Armada 38x) seemed to be nice and have decent mainline support (and, with the turris, users interested in keeping it working). Only I don't want any WIFI and I wasn't sure the standard case would be all that useful. Fortunately, there's also a simple board available with the same SoC called ClearFog and so I got one of these (the Base version). With shipping and the SSD (the only 2242 M.2 SSD with 250 GiB I could find, an ADATA SP600) it slightly exceeds the budget but well.

ClearFog with SSD

When installing the machine, the obvious goal was to use mainline FOSS components only if possible. Fortunately there's mainline kernel support for the device as well as mainline U-Boot. First attempts to boot from a micro SD card did not work out at all, both with mainline U-Boot and the vendor version though. Turns out the eMMC version of the board does not support any micro SD cards at all, a fact that is documented but others failed to notice as well.


As the board does not come with any loader on eMMC and booting directly from M.2 requires removing some resistors from the board, the easiest way is using UART for booting. The vendor wiki has some shell script wrapping an included C fragment to feed U-Boot to the device but all that is really needed is U-Boot's kwboot utility. For some reason the SPL didn't properly detect UART booting on my device (wrong magic number) but patching the if (in arch-mvebu's spl.c) and always assume UART boot is an easy way around.

The plan then was to boot a Debian armhf rootfs with a defconfig kernel from USB stick and install U-Boot and the rootfs to eMMC from within that system. Unfortunately U-Boot seems to be unable to talk to the USB3 port so no kernel loading from there. One could probably make UART loading work but switching between screen for serial console and xmodem seemed somewhat fragile and I never got it working. However ethernet can be made to work, though you need to set eth1addr to eth3addr (or just the right one of these) in U-Boot, saveenv and reboot. After that TFTP works (but is somewhat slow).
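From the U-Boot prompt that dance is only a handful of commands, roughly like this (a sketch -- which of the ethaddr variables actually needs copying, and all addresses, depend on your setup):

setenv eth3addr ${eth1addr}
saveenv
reset

After the reset, setting ipaddr and serverip and then running tftpboot ${kernel_addr_r} zImage pulls the kernel over the network.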


There's one last step required to allow U-Boot and Linux to access the eMMC. eMMC is wired to the same PINs as the SD card would be. However the SD card has an additional indicator pin showing whether a card is present. You might be lucky inserting a dummy card into the slot or go the clean route and remove the pin specification from the device tree.

--- a/arch/arm/dts/armada-388-clearfog.dts
+++ b/arch/arm/dts/armada-388-clearfog.dts
@@ -306,7 +307,6 @@

                        sdhci@d8000 {
                                bus-width = <4>;
-                               cd-gpios = <&gpio0 20 GPIO_ACTIVE_LOW>;
                                pinctrl-0 = <&clearfog_sdhci_pins

Next Up is flashing the U-Boot to eMMC. This seems to work with the vendor U-Boot but proves to be tricky with mainline. The fun part boils down to the fact that the boot firmware reads the first block from eMMC, but the second from SD card. If you write the mainline U-Boot, which was written and tested for SD card, to eMMC the SPL will try to load the main U-Boot starting from it's second sector from flash -- obviously resulting in garbage. This one took me several tries to figure out and made me read most of the SPL code for the device. The fix however is trivial (apart from the question on how to support all different variants from one codebase, which I'll leave to the U-Boot developers):

--- a/include/configs/clearfog.h
+++ b/include/configs/clearfog.h
@@ -143,8 +143,7 @@
 #define CONFIG_SYS_MMC_U_BOOT_OFFS             (160 << 10)
-                                                + 1)
 #define CONFIG_SYS_U_BOOT_MAX_SIZE_SECTORS     ((512 << 10) / 512) /* 512KiB */
 #define CONFIG_FIXED_SDHCI_ALIGNED_BUFFER      0x00180000      /* in SDRAM */


Now we have a system booting from eMMC with mainline U-Boot (which is a most welcome speedup compared to the UART and TFTP combination from the beginning). Getting to fine-tune Linux on the device -- we want to install the armmp Debian kernel and have it work. As all the drivers are built as modules for that kernel this also means initrd support. Funnily U-Boot's bootz allows booting a plain vmlinux kernel but I couldn't get it to boot a plain initrd. Passing a uImage initrd and a normal kernel however works pretty well. Back when I first tried there were some modules missing and ethernet didn't work with the PHY driver built as a module. In the meantime the PHY problem was fixed in the Debian kernel and almost all modules already added. Ben then only added the USB3 module on my suggestion and as a result, unstable's armhf armmp kernel should work perfectly well on the device (you still need to patch the device tree similar to the patch above). Still missing is an updated flash-kernel to automatically generate the initrd uImage which is work in progress but got stalled until I fixed the U-Boot on eMMC problem and everything should be fine -- maybe get Debian U-Boot builds for that board.

Pro versus Base

The main difference so far between the Pro and the Base version of the ClearFog is the switch chip which is included on the Pro. The Base instead "just" has two gigabit ethernet ports and an SFP. Both Linux's and U-Boot's device trees are intended for the Pro version, which makes one of the ethernet ports unusable (it tries to find the switch behind the ethernet port which isn't there). To get both ports working (or the one you settled on earlier) there's a second patch to the U-Boot device tree (my version might be sub-optimal but works) -- the Linux kernel version is a trivial adaptation:

--- a/arch/arm/dts/armada-388-clearfog.dts
+++ b/arch/arm/dts/armada-388-clearfog.dts
@@ -89,13 +89,10 @@
        internal-regs {
            ethernet@30000 {
                mac-address = [00 50 43 02 02 02];
+              managed = "in-band-status";
+              phy = <&phy1>;
                phy-mode = "sgmii";
                status = "okay";
-              fixed-link {
-                  speed = <1000>;
-                  full-duplex;
-              };

            ethernet@34000 {
@@ -227,6 +224,10 @@
                pinctrl-0 = <&mdio_pins>;
                pinctrl-names = "default";

+              phy1: ethernet-phy@1 { /* Marvell 88E1512 */
+                   reg = <1>;
+              };
                phy_dedicated: ethernet-phy@0 {
                     * Annoyingly, the marvell phy driver
@@ -386,62 +386,6 @@
        tx-fault-gpio = <&expander0 13 GPIO_ACTIVE_HIGH>;

-  dsa@0 {
-      compatible = "marvell,dsa";
-      dsa,ethernet = <&eth1>;
-      dsa,mii-bus = <&mdio>;
-      pinctrl-0 = <&clearfog_dsa0_clk_pins &clearfog_dsa0_pins>;
-      pinctrl-names = "default";
-      #address-cells = <2>;
-      #size-cells = <0>;
-      switch@0 {
-          #address-cells = <1>;
-          #size-cells = <0>;
-          reg = <4 0>;
-          port@0 {
-              reg = <0>;
-              label = "lan1";
-          };
-          port@1 {
-              reg = <1>;
-              label = "lan2";
-          };
-          port@2 {
-              reg = <2>;
-              label = "lan3";
-          };
-          port@3 {
-              reg = <3>;
-              label = "lan4";
-          };
-          port@4 {
-              reg = <4>;
-              label = "lan5";
-          };
-          port@5 {
-              reg = <5>;
-              label = "cpu";
-          };
-          port@6 {
-              /* 88E1512 external phy */
-              reg = <6>;
-              label = "lan6";
-              fixed-link {
-                  speed = <1000>;
-                  full-duplex;
-              };
-          };
-      };
-  };
    gpio-keys {
        compatible = "gpio-keys";
        pinctrl-0 = <&rear_button_pins>;


Apart from the mess with eMMC this seems to be a pretty nice device. It's now happily running with an M.2 SSD providing enough storage for now and still has an mSATA/mPCIe plug left for future journeys. It seems to be drawing around 5.5 Watts with SSD and one Ethernet connected while mostly idle and can feed around 500 Mb/s from disk over an encrypted ethernet connection which is, I guess, not too bad. My plans now include helping to finish flash-kernel support, creating a nice case and probably getting it deployed. I might bring it to FOSDEM first though.

Working on it was really quite some fun (apart from the frustrating part of finding the one-block offset) and people were really helpful. Big thanks here to Debian's arm folks, Ben Hutchings the kernel maintainer and U-Boot upstream (especially Tom Rini and Stefan Roese).

Planet Linux AustraliaChris Smart: My Custom Open Source Home Automation Project – Part 3, Roll Out

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. Part 2 discussed the prototype and other experiments.

Because we were having to fit in with the builder, I didn’t have enough time to finalise the smart system, so I needed a dumb mode. This Part 3 is about rolling out dumb mode in Smart house!

Operation “dumb mode, Smart house” begins

We had a proven prototype, now we had to translate that into a house-wide implementation.

First we designed and mapped out the cabling.

Cabling Plans



  • Cat5e (sometimes multiple runs) for room Arduinos
  • Cat5e to windows for future curtain motors
  • Reed switch cables to light switch
  • Regular Cat6 data cabling too, of course!
  • Whatever else we thought we might need down the track

Time was tight (fitting in with the builder) but we got there (mostly).

  • Ran almost 2 km of cable in total
  • This was a LOT of work and took a LOT of time


Cabled Wall Plates

Cabled Wireless Access Point

Cable Run

Electrical cable

This is the electrician’s job.

  • Electrician ran each bank of lights on their own circuit
  • Multiple additional electrical circuits
    • HA on its own electrical circuit, UPS backed
    • Study/computers on own circuit, UPS backed
    • Various others like dryer, ironing board, entertainment unit, alarm, ceiling fans, ovens, etc
  • Can leave the house and turn everything off (except essentials)


The relays had to be reliable, but also available off the shelf, as I didn’t want anything custom or hard to replace. Again, for devices that draw too much current for the relay, the relay throws a contactor instead so that the device can still be controlled.

  • Went with Finder 39 series relays, specifically
  • Very thin profile
  • Built in fuses
  • Common bus bars
  • Single Pole Double Throw (SPDT)
Finder Relays with Din Mount Module

These are triggered by 24V DC which switches the 240V AC for the circuit. The light switches are running 24V and when you press the button it completes the circuit, providing 24V input to the relay (which turns on the 240V and therefore the light).

There are newer relays now which accept a range of input voltages (e.g. 0-24V); I would probably use those instead if I were doing it again today, so that they could be fired more easily from a wider array of outputs (not just a 24V relay driver shield).

The fact that they are SPDT means that I can set the relay to be normally open (NO) in which case the relay is off by default, or normally closed (NC) in which case the relay is on by default. This means that if the smart system goes down and can’t provide voltage to the input side of the relay, certain banks of lights (such as the hallway, stairs, kitchen, bathroom and garage lights) will turn on (so that the house is safe while I fix it).

Bank of Relays

In the photo above you can see the Cat5e 24V lines from the light switch circuits coming into the grey terminal block at the bottom. They are then cabled to the input side of the relays. This means that we don’t touch any AC and I can easily change what’s providing the input to the relay (to a relay board on an Arduino, for example).

Rack (excuse the messy data cabling)

There are two racks, one upstairs and one downstairs, that provide the infrastructure for the HA and data networks.

PSU running at 24V

Each rack has a power supply unit (PSU) running at 24V which provides the power for the light switches in dumb mode. These are running in parallel to provide redundancy for the dumb network in case one dies.

You can see that there is very little voltage drop.

Relay Timers

The Finder relays also support timer modules, which is very handy for something that you want to automatically turn off after a certain (configurable) amount of time.

  • Heated towel racks are on bell-press switches
  • A timer relay turns them off automatically
  • Modes and delay are configurable via DIP switches on the relay

UPS backed GPO Circuits

UPS in-lined GPO Circuits

Two GPO circuits are backed by UPS which I can simply plug in-line and feed back to the circuit. These are the HA network as well as the computer network. If the power is cut to the house, my HA will still have power (for a while) and the Internet connection will remain up.

Clearly I’ll have no lights if the power is cut, but I could power some emergency LED lighting strips from the UPS lines – that hasn’t been done yet though.


The switches for dumb mode are also regular, off-the-shelf light switches.

  • Playing with light switches (yay DC!)
  • Cabling up the switches using standard Clipsal gear
  • Single Cat5e cable can power up to 4 switches
  • Support one, two, three and four way switches
  • Bedroom switches have a built-in LED (so you can see where the switch is at night)
Bathroom Light Switch (Dumb Mode)

We have up to 4-way switches so you can turn a single light on or off from 4 locations. The entrance light is wired up this way; you can control it from:

  • Front door
  • Hallway near internal garage door
  • Bottom of the stairs
  • Top of the stairs

A single Cat5e cable can run up to 4 switches.

Cat5e Cabling for Light Switch

  • Blue and orange +ve
  • White-blue and white-orange -ve
  • Green switch 1
  • White-green switch 2
  • Brown switch 3
  • White-brown switch 4

Note that we’re using two wires together for +ve and -ve, which helps increase current-carrying capacity and reduce voltage drop (following the Clipsal standard wiring of blue and orange pairs).

Later in Smart Mode, this Cat5e cable will be re-purposed as Ethernet for an Arduino or embedded Linux device.

Hallway Passive Infrared (PIR) Motion Sensors

I wanted the lights in the hallway to automatically turn on at night if someone was walking to the bathroom or what not.

  • Upstairs hallway uses two 24V PIRs in parallel
  • Either one turns on lights
  • Connected to the dumb mode network, so they fire the relay just like everything else
  • Can be overridden by switch on the wall
  • Adjustable for sensitivity, light level and length of time
Hallway PIRs

Tunable settings for PIR

Tweaking the PIR means I have it only turning on the lights at night.

Dumb mode results

We have been using dumb mode for over a year now and it has never skipped a beat.

Now I just need to find the time to start working on the Smart Mode…

Planet Linux AustraliaChris Smart: My Custom Open Source Home Automation Project – Part 2, Design and Prototype

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. In this Part 2 I discuss the design and the prototype that proved the design.

Wired Design

Although there are options like 1-Wire, I decided that I wanted more flexibility at the light switches.

  • Inspired by Jon Oxer’s awesome
  • Individual circuits for lights and some General Purpose Outlets (GPO)
  • Bank of relays control the circuits
  • Arduinos and/or embedded Linux devices control the relays

How would it work?

  • One Arduino or embedded Linux device per room
  • Run C-Bus Cat5e cable to the light switches to power the Arduino and provide access to the HA network
  • Room Arduino takes buttons (lights, fans, etc) and sensors (temp, humidity, reed switch, PIR, etc) as inputs
  • Room Arduino sends network message to relay Arduino
  • Arduino with relay shield fires relay to enable/disable power to device (such as a light, fan, etc)
Basic design

Of course this doesn’t just control lights, but also towel racks, fans, etc. Running C-Bus cable means that I can more easily switch to their proprietary system if I fail (or perhaps sell the house).

A relay is fine for devices such as LED downlights which don’t draw much current, but for larger devices (such as ovens and airconditioning) I will use the relay to throw a contactor.

Network Messaging

For the network messaging I chose MQTT (as many others have).

  • Lightweight
  • Supports encryption
  • Uses publish/subscribe model with a broker
  • Very easy to set up and well supported by Arduino and Linux

The way it works would be for the relay driver to subscribe to topics from devices around the house. Those devices publish messages to those topics, such as when buttons are pressed or sensors are triggered. The relay driver parses the messages and reacts accordingly (turning a device on or off).
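To make that flow concrete, here is a minimal sketch of the relay-driver side using the Python paho-mqtt library (1.x-style API), which could just as easily run on one of the embedded Linux devices as on an Arduino. The broker address, topic layout and set_relay() helper are all hypothetical, not the actual names used in my system.

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"  # hypothetical address of the MQTT broker (the Raspberry Pi)

def set_relay(relay_id, state):
    # Placeholder: a real driver would toggle a pin on the relay shield here.
    print("relay %s -> %s" % (relay_id, "ON" if state else "OFF"))

def on_connect(client, userdata, flags, rc):
    # Subscribe to all light topics, e.g. house/light/kitchen, once connected.
    client.subscribe("house/light/#")

def on_message(client, userdata, msg):
    relay_id = msg.topic.split("/")[-1]
    set_relay(relay_id, msg.payload.decode() == "ON")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()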

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂


  • Got some Freeduinos and relay boards (from Freetronics)
  • Hacked around with some ideas, was able to control the relays
  • Basic concept seems doable
  • More on that later..

However, I realised I wasn’t going to get this working in time. I needed a “dumb” mode that wouldn’t require any computers to turn on the lights at least.

Dumb Mode Prototype

So, the dumb mode looks like this.

  • Use the same Cat5e cabling so that Arduino devices can be installed later for smart mode
  • Use standard off the shelf Clipsal light switches
    • Support one, two, three and four way switching
  • Run 24 volts over the Cat5e
  • Light switch completes the circuit which feeds 24 volts into the relay
  • Relay fires the circuit and light turns on!
Dumb mode design

We created a demo board that supported both dumb mode and smart mode, which proved that the concept worked well.

HA Prototype Board

The board has:

  • Six LEDs representing the lights
  • Networking switch for the smart network
  • One Arduino as the input (representing the light switch)
  • One Arduino as the relay driver
  • One Raspberry Pi (running Fedora) as the MQTT broker
  • Several dumb mode multi-way light switches
  • Smart inputs such as:
    • Reed switch
    • Temperature and humidity sensor
    • Light sensor
    • PIR

The dumb lights work without input from the smart network.

In smart mode, the Arduinos and Pi are on the same HA network and connect to the broker running on the Pi.

The input Arduino publishes MQTT messages from inputs such as sensors and buttons. The relay Arduino is subscribed to those topics and responds accordingly (e.g. controlling the relay when appropriate).
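The input side can be sketched in the same way. This hypothetical publisher (again Python with paho-mqtt, and again with made-up broker address, topic and payload values) sends a message whenever an input changes state, and a relay driver subscribed to house/light/# reacts to it.

import time
import paho.mqtt.client as mqtt

def read_button():
    # Placeholder for reading a real input (button, reed switch, PIR, ...).
    return False

client = mqtt.Client()
client.connect("192.168.1.10", 1883, 60)  # hypothetical broker address
client.loop_start()

last = read_button()
while True:
    state = read_button()
    if state != last:
        # Publish the new state so the relay driver can act on it.
        client.publish("house/light/kitchen", "ON" if state else "OFF")
        last = state
    time.sleep(0.05)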

Dimming Lights

Also played with pulse width modulation (PWM) for LED downlights.

  • Most LEDs come with smart dimmable drivers (power packs) that use leading or trailing edge on AC
  • Wanted to control brightness via DC
  • Used an Arduino to program a small ATTiny for PWM
  • Worked, but only with a non-smart driver
  • Got electrician to install manual dimmers for now where needed, such as family room


Given the smart power packs, I cannot dim on the DC side, unless I replace the power packs (which is expensive).

In future, I plan to put some leading/trailing edge dimmers inline on the AC side of my relays (which will need an electrician) which I can control from an Arduino or embedded Linux device via a terminal block. This should be more convenient than replacing the power packs in the ceiling space and running lines to control the ATTiny devices.
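The experiment itself used an ATTiny, but the underlying idea is just varying a PWM duty cycle on the DC side. As a rough sketch of the same idea from an embedded Linux device rather than an ATTiny, here is software PWM with the Python RPi.GPIO library; the pin number and frequency are examples only, and software PWM can flicker compared to a dedicated microcontroller.

import time
import RPi.GPIO as GPIO

PWM_PIN = 18  # example GPIO pin driving the dimming input of the LED driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(PWM_PIN, GPIO.OUT)

pwm = GPIO.PWM(PWM_PIN, 1000)  # 1 kHz software PWM
pwm.start(0)                   # start fully off

# Ramp the brightness up and back down by changing the duty cycle.
for duty in list(range(0, 101, 5)) + list(range(100, -1, -5)):
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.1)

pwm.stop()
GPIO.cleanup()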


Doors are complicated and have quite a few requirements.

  • Need to also work with physical key
  • Once in, door should be unlocked from inside
  • Need to be fire-able from an Arduino
  • Work with multiple smart inputs, e.g. RFID, pin pad
  • Played with wireless rolling-code remotes; an Arduino can fire the remote (Jon Oxer has done this)
  • Maybe pair this with deadlock and electric strike
  • Perhaps use electronic door closers

I’m not sure what route I will take here yet, but it’s been cabled up and electronic strike plates are in place.

At this point I had a successful prototype which was ready to be rolled out across the house. Stay tuned for Part 3!

Planet Linux AustraliaChris Smart: My Custom Open Source Home Automation Project – Part 1, Motivation and Research

In January 2016 I gave a presentation at the Canberra Linux Users Group about my journey developing my own Open Source home automation system. This is an adaptation of that presentation for my blog. Big thanks to my brother, Tim, for all his help with this project!

Comments and feedback welcome.

Why home automation?

  • It’s cool
  • Good way to learn something new
  • Leverage modern technology to make things easier in the home

At the same time, it’s kinda scary. There is a lack of standards and lack of decent security applied to most Internet of Things (IoT) solutions.

Motivation and opportunity

  • Building a new house
  • Chance to do things more easily at frame stage while there are no walls
Frame stage

Some things that I want to do with HA

  • Respond to the environment and people in the home
  • Alert me when there’s a problem (fridge left open, oven left on!)
  • Gather information about the home, e.g.
    • Temperature, humidity, CO2, light level
    • Open doors and windows and whether the house is locked
    • Electricity usage
  • Manage lighting automatically, switches, PIR, mood, sunset, etc
  • Control power circuits
  • Manage access to the house via pin pad, proximity card, voice activation, retina scans
  • Control gadgets, door bell/intercom, hot water, AC heating/cooling, exhaust fans, blinds and curtains, garage door
  • Automate security system
  • Integrate media around the house (movie starts, dim the lights!)
  • Water my garden, and more..

My requirements for HA

  • Open
  • Secure
  • Extensible
  • Prefer DC only, not AC
  • High Wife Acceptance Factor (important!)

There’s no existing open source IoT framework that I could simply install, sit back and enjoy. Where’s the fun in that, anyway?

Research time!

Three main options:

  • Wireless
  • Wired
  • Combination of both

Wireless Solutions

  • Dominated by proprietary Z-Wave (although it has since become more open)
  • Open-standards-based options also exist, like ZigBee and 6LoWPAN
Z-Wave modules

Wireless Pros

  • Lots of different gadgets available
  • Gadgets are pretty cheap and easy to find
  • Easy to get up and running
  • Widely supported by all kinds of systems

Wireless Cons

  • Wireless gadgets are pretty cheap and nasty
  • Most are not open
  • Often not updateable, potentially insecure
  • Connect to AC
  • Replacing or installing a unit requires an electrician
  • Often talk to the “cloud”

So yeah, I could whack those up around my house, install a bridge and move on with my life, but…

  • Not as much fun!
  • Don’t want to rely on wireless
  • Don’t want to rely on an electrician
  • Don’t really want to touch AC
  • Cheap gadgets that are never updated
  • Security vulnerabilities make it high risk

Wired Solutions

  • Proprietary systems like Clipsal’s C-Bus
  • Open standards based systems like KNX
  • Custom hardware
  • Expensive
  • 🙁
Clipsal C-Bus light switch

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂

Technology Choice Overview

So it comes down to this.

  • Z-Wave = OUT
  • ZigBee/6LoWPAN = MAYBE IN
  • C-Bus = OUT (unless I screw up)
  • KNX = OUT
  • Arduino, Raspberry Pi = IN

I went with a custom wired system; after all, it seems like a lot more fun…

Stay tuned for Part 2!

Planet DebianMatthew Garrett: Fixing the IoT isn't going to be easy

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I'm off to portscan another smart socket.

[1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.

[2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign

comment count unavailable comments

Planet DebianRussell Coker: Another Broken Nexus 5

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was ok, the call-back was good but unfortunately there was some confusion which delayed replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones, the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.


Planet DebianIain R. Learmonth: PATHspider 1.0.0 released!

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized. Late on the 21st October, we released version 1.0.0 of PATHspider and it’s ready for “production” use (whatever that means for Internet research software).

Our first real release of PATHspider was version 0.9.0 just in time for the presentation of PATHspider at the 2016 Applied Networking Research Workshop co-located with IETF 96 in Berlin earlier this year. Since this release we have made a lot of changes and I’ll talk about some of the highlights here (in no particular order):

Switch from twisted.plugin to straight.plugin

While we anticipate that some plugins may wish to use some features of Twisted, we didn’t want to have Twisted as a core dependency for PATHspider. We found that straight.plugin was not just a drop-in replacement but it simplified the way in which 3rd-party plugins could be developed and it was worth the effort for that alone.

Library functions for the Observer

PATHspider has an embedded flow-meter (think something like NetFlow but highly customisable). We found that even with the small selection of plugins that we had we were duplicating code across plugins for these customisations of the flow-meter. In this release we now provide library functions for common needs such as identifying TCP 3-way handshake completions or identifying ICMP Unreachable messages for flows.

New plugin: DSCP

We’ve added a new plugin for this release to detect breakage when using DiffServ code points to achieve differentiated services within a network.

Plugins are now subcommands

Using the subparsers feature of argparse, all plugins including 3rd-party plugins will now appear as subcommands to the PATHspider command. This makes every plugin a first-class citizen and makes PATHspider truly generalised.

We have an added benefit from this that plugins can also ask for extra arguments that are specific to the needs of the plugin, for example the DSCP plugin allows the user to select which code point to send for the experimental test.
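For those who haven't used subparsers before, the mechanism looks roughly like the sketch below. This is a generic illustration of argparse subcommands in Python, not PATHspider's actual plugin interface, and the plugin and option names are invented.

import argparse

parser = argparse.ArgumentParser(prog="spider")
subparsers = parser.add_subparsers(dest="plugin", help="measurement plugin to run")

# Each plugin registers itself as a subcommand and can declare its own options.
dscp = subparsers.add_parser("dscp", help="test DiffServ code point treatment")
dscp.add_argument("--codepoint", type=int, default=46,
                  help="DSCP value to set on the experimental flow")

ecn = subparsers.add_parser("ecn", help="test ECN negotiation")

args = parser.parse_args(["dscp", "--codepoint", "46"])
print(args.plugin, args.codepoint)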

Test Suite

PATHspider now has a test suite! As the size of the PATHspider code base grows we need to be able to make changes and have confidence that we are not breaking code that another module relies on. We have so far only achieved 54% coverage of the codebase but we hope to improve this for the next release. We have focussed on the critical portions of data collection to ensure that all the results collected by PATHspider during experiments is valid.

DNS Resolver Utility

Back when PATHspider was known as ECNSpider, it had a utility for resolving IP addresses from the Alexa top 1 million list. This utility has now been fully integrated into PATHspider and appears as a plugin to allow for repeated experiments against the same IP addresses even if the DNS resolver would have returned a different address.


Documentation

Documentation is definitely not my favourite activity, but it has to be done. PATHspider 1.0.0 now ships with documentation covering commandline usage, input/output formats and development of new plugins.

If you’d like to check out PATHspider, you can find the website at

Debian packages will be appearing shortly and will find their way into stable-backports within the next 2 weeks (hopefully).

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Krebs on SecurityHacked Cameras, DVRs Powered Today’s Massive Internet Outage

A massive and sustained Internet attack that has caused outages and network congestion today for a large number of Web sites was launched with the help of hacked “Internet of Things” (IoT) devices, such as CCTV video cameras and digital video recorders, new data suggests.

Earlier today cyber criminals began training their attack cannons on Dyn, an Internet infrastructure company that provides critical technology services to some of the Internet’s top destinations. The attack began creating problems for Internet users reaching an array of sites, including Twitter, Amazon, Tumblr, Reddit, Spotify and Netflix.


A depiction of the outages caused by today’s attacks on Dyn, an Internet infrastructure company. Source:

At first, it was unclear who or what was behind the attack on Dyn. But over the past few hours, at least one computer security firm has come out saying the attack involved Mirai, the same malware strain that was used in the record 620 Gbps attack on my site last month. At the end of September 2016, the hacker responsible for creating the Mirai malware released the source code for it, effectively letting anyone build their own attack army using Mirai.

Mirai scours the Web for IoT devices protected by little more than factory-default usernames and passwords, and then enlists the devices in attacks that hurl junk traffic at an online target until it can no longer accommodate legitimate visitors or users.

According to researchers at security firm Flashpoint, today’s attack was launched at least in part by a Mirai-based botnet. Allison Nixon, director of research at Flashpoint, said the botnet used in today’s ongoing attack is built on the backs of hacked IoT devices — mainly compromised digital video recorders (DVRs) and IP cameras made by a Chinese hi-tech company called XiongMai Technologies. The components that XiongMai makes are sold downstream to vendors who then use them in their own products.

“It’s remarkable that virtually an entire company’s product line has just been turned into a botnet that is now attacking the United States,” Nixon said, noting that Flashpoint hasn’t ruled out the possibility of multiple botnets being involved in the attack on Dyn.

“At least one Mirai [control server] issued an attack command to hit Dyn,” Nixon said. “Some people are theorizing that there were multiple botnets involved here. What we can say is that we’ve seen a Mirai botnet participating in the attack.”

As I noted earlier this month in Europe to Push New Security Rules Amid IoT Mess, many of these products from XiongMai and other makers of inexpensive, mass-produced IoT devices are essentially unfixable, and will remain a danger to others unless and until they are completely unplugged from the Internet.

That’s because while many of these devices allow users to change the default usernames and passwords on a Web-based administration panel that ships with the products, those machines can still be reached via more obscure, less user-friendly communications services called “Telnet” and “SSH.”

Telnet and SSH are command-line, text-based interfaces that are typically accessed via a command prompt (e.g., in Microsoft Windows, a user could click Start, and in the search box type “cmd.exe” to launch a command prompt, and then type “telnet” to reach a username and password prompt at the target host).

“The issue with these particular devices is that a user cannot feasibly change this password,” Flashpoint’s Zach Wikholm told KrebsOnSecurity. “The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist.”

Flashpoint’s researchers said they scanned the Internet on Oct. 6 for systems that showed signs of running the vulnerable hardware, and found more than 515,000 of them were vulnerable to the flaws they discovered.

“I truly think this IoT infrastructure is very dangerous on the whole and does deserve attention from anyone who can take action,” Flashpoint’s Nixon said.

It’s unclear what it will take to get a handle on the security problems introduced by millions of insecure IoT devices that are ripe for being abused in these sorts of assaults.

As I noted in The Democratization of Censorship, to address the threat from the mass-proliferation of hardware devices such as Internet routers, DVRs and IP cameras that ship with default-insecure settings, we probably need an industry security association, with published standards that all members adhere to and are audited against periodically.

The wholesalers and retailers of these devices might then be encouraged to shift their focus toward buying and promoting connected devices which have this industry security association seal of approval. Consumers also would need to be educated to look for that seal of approval. Something like Underwriters Laboratories (UL), but for the Internet, perhaps.

Until then, these insecure IoT devices are going to stick around like a bad rash — unless and until there is a major, global effort to recall and remove vulnerable systems from the Internet. In my humble opinion, this global cleanup effort should be funded mainly by the companies that are dumping these cheap, poorly-secured hardware devices onto the market in an apparent bid to own the market. Well, they should be made to own the cleanup efforts as well.

Devices infected with Mirai are instructed to scour the Internet for IoT devices protected by more than 60 default usernames and passwords. The entire list of those passwords — and my best approximation of which firms are responsible for producing those hardware devices — can be found at my story, Who Makes the IoT Things Under Attack.

Update 10:30 a.m., Oct. 22: Corrected attribution on outage graphic.

CryptogramFriday Squid Blogging: Which Squid Can I Eat?

Interesting article listing the squid species that can still be ethically eaten.

The problem, of course, is that on a restaurant menu it's just labeled "squid."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

EDITED TO ADD: By "ethically," I meant that the article discusses which species can be sustainably caught. The article does not address the moral issues of eating squid -- and other cephalopods -- in the first place.

Chaotic IdealismMartin, Elisa, Maria

In Sydney, Australia lived two children, a brother and sister, ten and eleven years old. Their family came from Colombia. Their mother loved them. They went to school together, to a special school called St. Lucy's, because they were both autistic. They didn't speak, but they found ways to communicate all the same through their art.

I love this photo of Elisa. It shows just how she interacts with her world. I've done the same thing, feeling objects with my lips rather than just my fingertips. If this picture is typical of her, then Elisa must indeed have had an artist's mind. She didn't see just the feather; she saw all its small parts. And she didn't only see it; she wanted to know that feather, every part of it, taste and smell and feel of feather. At my age, I know too much about how many germs can be on a feather; but at her age, I wouldn't have known. I would just have wanted to know that feather in its entirety. I never met Elisa, but if this picture does show how she experienced the world, I think we might have been friends.

Ten-year-old Martin was artistically precocious--not just good at art for an autistic child, but good at art for any child his age. Here's some of his artwork:

Remember--this is a kid who is only ten years old. This is a kid who already knows more about color than I do, and I'm thirty-three years old. Look at the placement of those fish. They're moving just like real fish, and you can see them doing it. Give that child some time to gain technical skill, and he could have become a wonderful artist as his style and skills matured.

Big sister Elisa loved art too, but she was known more for her outgoing personality. When they were interviewed, her teachers always mentioned her smile. She would look up at her teachers, smile, and lead them by the hand. She loved music and dancing.

Here's Elisa using a computer:

Interesting, isn't it, how she always seems to have a thoughtful look on her face? It's as though she doesn't really bother with facial expressions when she's thinking deeply, because she's too involved in her world. Maybe her teachers remember her smile so much because when she smiled, she did it because she *meant* to smile. She wanted to connect with them, and her desire was so obvious they loved her for it.

The family adopted a puppy:

He grew into a big dog, all wrinkly face and slobbery tongue. He should have grown old as they grew up.

Their mom was called Maria Claudia, and she loved them. She was one of the parents who could always be counted on to come to school and participate in her children's education. For autism awareness day, Maria posted about her children: "...people with autism have the [same] needs and desires as you and me, but they just see life in a different way. They think freely." She was considering returning to Colombia with her children, where she hoped to find better support for them.

The family: Martin, Elisa, and Maria.

Those of you reading this probably already know what happened to Maria, Elisa, Martin, and their dog. When I read about them, I wasn't surprised, and I was saddened by how little I was surprised. All that potential, all that childlike enjoyment of life, all that charm, all that love, was lost to us forever.

They died on Monday, October 17, 2016. They were murdered by their father, her husband. In the weekend before, he had installed pipes throughout the house to release deadly carbon monoxide, turning the house into a gas chamber that would pipe poison into their rooms from the ceiling. Maria, Elisa, Martin, and their dog were all gassed to death; the killer died in the house as well. Elisa and Maria died together, mother and daughter sharing a bed. Maybe she had had nightmares, or needed her mother nearby to help her sleep.

The usual media response, "pity them, for they had to raise autistic children", and the excuses about being overwhelmed, followed. It came out that the husband had looked into euthanasia.

I need to emphasize here that this is not an isolated incident. Every day I monitor the news and document the deaths of disabled people; every day there are more to add to the list. Sometimes there's half a paragraph and one mention. Sometimes, like for these children, there's wider coverage, but for every death that's widely covered, there are dozens that you don't hear about. For every death that's mentioned even in a local paper, there are even more that go unrecognized and unreported.

Martin, Elisa, and their mother Maria are representatives of a large population. And every single murder victim, however unknown, is just as real a person as these children were.

For the disabled, gas chambers are still a daily reality of life. This is our Holocaust--a slow, hidden thing that takes place in back rooms and shoddy nursing homes and jails and on the street. Our elders die from starvation with bedsores down to the bone. Our children are beaten to death by those they should have been able to trust. Our teenagers are labeled "behaviorally challenged" and suffocated by staff who restrain them so that they can't breathe. We commit suicide after years of harassment by bullies no one will protect us from. We are shot on the streets by police when we can't comply immediately with shouted commands we may not understand. We are poisoned through medical malpractice and by alternative medicine. We die from child abuse; we're beaten to death by muggers who think we look like an easy target.

When I first started researching for my Autism Memorial, I had few enough names that I knew them all by heart. Eventually I began making notes of all disabilities, not just of autism; I now have so many names in the Memorial Annex that I have to cross-reference them to figure out whether I am reading about an old tragedy or a new one. I'm up to number 2,097, and when I do my research today, I'll probably find another half-dozen or so. These are just the ones that make it to the news. These are the people I'll never meet, whose contributions our world will never have. We are missing a part of the disability community, and we can never get it back.

I'm working for ASAN's Disability Day of Mourning now, as a volunteer researcher. I monitor the news; I find names. The day of mourning is for those disabled who were killed by caregivers--not by paid caregivers, but by family members who should have treated them with kindness. Even these specific criteria leave us with a list hundreds of names long.

I debated with myself about whether to use the word "Holocaust" to describe what is happening to disabled people worldwide. I've studied the Holocaust that happened before and during World War 2, and I know just how serious and horrible a thing it was. I worried that to compare modern hate crimes to the Nazi Holocaust would minimize the Holocaust. But... I kept thinking of autopsy photos I had come across, those of a disabled woman who had been starved to death. There was no difference between that body and one you might find in pictures of prisoners starved to death at Auschwitz. And there are many, many more like her. I will never forget those pictures, and I never want to.

I came to the conclusion that to call the widespread murders of the disabled a "holocaust" is not an exaggeration. For us, it's not organized and systematic with entire towns purged all at once; it's hidden, almost casual, deaths isolated and scattered. We die alone, rather than in crowds. Though we did lose tens of thousands to the Nazi Holocaust, it wasn't six million; rather, we have been targets of hate crimes for all of recorded history. We die of starvation, of preventable disease; we die from shooting, and yes, we die in gas chambers. We too have gaps in our communities that can never be filled because the person that should have been there is dead--dead because someone hated them, or thought they were a burden, or simply didn't care.

I have no answers and no solutions, other than to tell you to remember the dead. Remember them, and keep your eyes open. Your disabled neighbors, your family members, or even a total stranger you meet on the street may need you to defend them. What you see in public will be the mildest, least offensive of what happens behind closed doors. Don't ignore it. And when you get a chance to help, however slim it is, take it.

Sociological ImagesTrump Is Not Stoking a Crisis in Government’s Legitimacy

Originally posted at Montclair SocioBlog.

Is Donald Trump undermining the legitimacy of the office of the presidency? He has been at it a while. His “birther” campaign – begun in 2008 and still alive – was aimed specifically at the legitimacy of the Obama presidency. Most recently, he has been questioning the legitimacy of the upcoming presidential election and, by implication, all presidential elections.

If he is successful, if the US will soon face a crisis of legitimacy, that’s a serious problem. Legitimacy requires the consent of the governed. We agree that the government has the right to levy taxes, punish criminals, enforce contracts, regulate all sorts of activities…  The list is potentially endless.

Legitimacy is to the government what authority is to the police officer – the agreement of those being policed that the officer has the right to enforce the law. So when the cop says, “Move to the other side of the street,” we move. Without that agreement, without the authority of the badge, the cop has only the power of the gun. Similarly, a government that does not have legitimacy must rule by sheer power. Such governments, even if they are democratically elected, use the power of the state to lock up their political opponents, to harass or imprison journalists, and generally to ensure the compliance.

Trump is obviously not alone in his views about legitimacy.  When I see the posters and websites claiming that Obama is a “tyrant” – one who rules by power rather than by legitimate authority; when I see the Trump supporters chanting “Lock Her Up,” I wonder whether it’s all just good political fun and hyperbole or whether the legitimacy of the US government is really at risk.

This morning, I saw this headline at the Washington Post:


Scary. But the content of the story tells a story that is completely the opposite. The first sentence of the story quotes the Post’s own editorial, which says that Trump, with his claims of rigged elections, “poses an unprecedented threat to the peaceful transition of power.” The second sentence evaluates this threat.

Trump’s October antics may be unprecedented, but his wild allegations about the integrity of the elections might not be having much effect on voter attitudes.

Here’s the key evidence. Surveys of voters in 2012 and 2016 show no increase in fears of a rigged election. In fact, on the whole people in 2016 were more confident that their vote would be fairly counted.


The graph on the left shows that even among Republicans, the percent who were “very confident” that their vote would be counted was about the same in 2016 as in 2012. (Technically, one point lower, a difference well within the margin of error.)

However, two findings from the research suggest a qualification to the idea that legitimacy has not been threatened. First, only 45% of the voters are “very confident” that their votes will be counted. That’s less than half. The Post does not say what percent were “somewhat confident” (or whatever the other choices were), and surely these would have pushed the confident tally well above 50%.

Second, fears about rigged elections conform to the “elsewhere effect” – the perception that things may be OK where I am, but in the nation at large, things are bad and getting worse. Perceptions of Congressional representatives, race relations, and marriage follow this pattern (see this post). The graph on the left shows that 45% were very confident that their own vote would be counted. In the graph on the right, only 28% were very confident that votes nationwide would get a similarly fair treatment.

These numbers do not seem like a strong vote of confidence (or a strong confidence in voting). Perhaps the best we can say is that if there is any change in the last four years, it is in the direction of legitimacy.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at

Google AdsenseShare your feedback on AdSense and other Google publisher solutions

It’s time to share your feedback! To improve our product and services, we send out a survey to a group of publishers every six months.

Your feedback and comments are important to us, and we really do read and consider everything you write. Thanks to previous suggestions, we’ve launched a number of new features to improve our services and help you grow your earnings. These include a new Optimization tab that provides tips on increasing your revenue, an improved AdSense interface for easier user navigation, and more transparency on our policies.

You may have received a survey by email over the last few weeks; if so, please take the time to respond to it, as we value your input.

To make sure that you're eligible to receive the next survey email, please:

Whether you’ve completed this survey before or you’re providing feedback for the first time, we’d like to thank you for sharing your valuable thoughts. We’re looking forward to your feedback!

Posted by Susie Reinecke - AdSense Publisher Happiness Team

Krebs on SecurityDDoS on Dyn Impacts Twitter, Spotify, Reddit

Criminals this morning massively attacked Dyn, a company that provides core Internet services for Twitter, SoundCloud, Spotify, Reddit and a host of other sites, causing outages and slowness for many of Dyn’s customers.

Twitter is experiencing problems, as seen through the social media platform Hootsuite.

In a statement, Dyn said that this morning, October 21, it received a global distributed denial of service (DDoS) attack on its DNS infrastructure on the east coast starting at around 7:10 a.m. ET (11:10 UTC).

“DNS traffic resolved from east coast name server locations are experiencing a service interruption during this time. Updates will be posted as information becomes available,” the company wrote.

Dyn encouraged customers with concerns to check the company’s status page for updates and to reach out to its technical support team.

A DDoS is when crooks use a large number of hacked or ill-configured systems to flood a target site with so much junk traffic that it can no longer serve legitimate visitors.

DNS refers to Domain Name System services. DNS is an essential component of all Web sites, responsible for translating human-friendly Web site names like “” into numeric, machine-readable Internet addresses. Anytime you send an e-mail or browse a Web site, your machine is sending a DNS look-up request to your Internet service provider to help route the traffic.
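For readers who want to see what that lookup looks like in practice, this small Python snippet (hostname chosen purely as an example) asks the system resolver for the addresses behind a name; this is the kind of query that DNS providers like Dyn ultimately answer.

import socket

# Resolve a hostname to its IP addresses, the job DNS performs for every web request.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])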


The attack on Dyn comes just hours after Dyn researcher Doug Madory presented a talk on DDoS attacks in Dallas, Texas at a meeting of the North American Network Operators Group (NANOG). Madory’s talk — available here on — delved deeper into research that he and I teamed up on to produce the data behind the story DDoS Mitigation Firm Has History of Hijacks.

That story (as well as one published earlier this week, Spreading the DDoS Disease and Selling the Cure) examined the sometimes blurry lines between certain DDoS mitigation firms and the cybercriminals apparently involved in launching some of the largest DDoS attacks the Internet has ever seen. Indeed, the record 620 Gbps DDoS against came just hours after I published the story on which Madory and I collaborated.

The record-sized attack that hit my site last month was quickly superseded by a DDoS against OVH, a French hosting firm that reported being targeted by a DDoS that was roughly twice the size of the assault on KrebsOnSecurity. As I noted in The Democratization of Censorship — the first story published after bringing my site back up under the protection of Google’s Project Shield — DDoS mitigation firms simply did not count on the size of these attacks increasing so quickly overnight, and are now scrambling to secure far greater capacity to handle much larger attacks concurrently.

The size of these DDoS attacks has increased so much lately thanks largely to the broad availability of tools for compromising and leveraging the collective firepower of so-called Internet of Things devices — poorly secured Internet-based security cameras, digital video recorders (DVRs) and Internet routers. Last month, a hacker by the name of Anna_Senpai released the source code for Mirai, a crime machine that enslaves IoT devices for use in large DDoS attacks. The 620 Gbps attack that hit my site last month was launched by a botnet built on Mirai, for example.

Interestingly, someone is now targeting infrastructure providers with extortion attacks and invoking the name Anna_senpai. According to a discussion thread started Wednesday on Web Hosting Talk, criminals are now invoking the Mirai author’s nickname in a bid to extort Bitcoins from targeted hosting providers.

“If you will not pay in time, DDoS attack will start, your web-services will
go down permanently. After that, price to stop will be increased to 5 BTC
with further increment of 5 BTC for every day of attack.

NOTE, i?m not joking.

My attack are extremely powerful now – now average 700-800Gbps, sometimes over 1 Tbps per second. It will pass any remote protections, no current protection systems can help.”

Let me be clear: I have no data to indicate that the attack on Dyn is related to extortion, to Mirai or to any of the companies or individuals Madory referenced in his talk this week in Dallas. But Dyn is known for publishing detailed writeups on outages at other major Internet service providers. Here’s hoping the company does not deviate from that practice and soon publishes a postmortem on its own attack.

Update, 3:50 p.m. ET: Security firm Flashpoint is now reporting that they have seen indications that a Mirai-based botnet is indeed involved in the attack on Dyn today. Separately, I have heard from a trusted source who’s been tracking this activity and saw chatter in the cybercrime underground yesterday discussing a plan to attack Dyn.

Update, 10:22 a.m. ET: Dyn’s status page reports that all services are back to normal as of 13:20 UTC (9:20 a.m. ET). Fixed the link to Doug Madory’s talk on Youtube, to remove the URL shortener (which isn’t working because of this attack).

Update, 1:01 p.m. ET: Looks like the attacks on Dyn have resumed and this event is ongoing. This, from the Dyn status page:

This DDoS attack may also be impacting Dyn Managed DNS advanced services with possible delays in monitoring. Our Engineers are continuing to work on mitigating this issue.
Oct 21, 16:48 UTC
As of 15:52 UTC, we have begun monitoring and mitigating a DDoS attack against our Dyn Managed DNS infrastructure. Our Engineers are continuing to work on mitigating this issue.
Oct 21, 16:06 UTC

Worse Than FailureError'd: The Duke of Error'd

"Ok, so what happens if I'm a duke? Then what?" wrote Adam S.


"I got this mysterious prompt on the long-haul flight from Auckland to London," wrote Ben, "I'm sad to say I chickened out and chose 'No Thanks'"


"Incorporated in May 2015, and it's now 2016, been in business for 46 years, so... they must be selling time machines!" wrote James.


Frank G. writes, "To be fair, he did tell us that outsiders are trying to sneak in and attack US citizens."


"Bruxelles-Midi introduced their new 41:10 info screens, just the resolution of 320x78 could be a tad higher," Alexander writes.


"Given that I'm not supposed to see this, should I close my eyes?" writes Craig W.


"So, if unboxing SQL Server 2016 completed successfully, where's the box?!" wrote Maxwell.



Planet DebianDirk Eddelbuettel: anytime 0.0.4: New features and fixes

A brand-new release of anytime is now on CRAN following the three earlier releases since mid-September. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects -- and does so without requiring a format string. See the anytime page for a few examples.

With release 0.0.4, we add two nice new features. First, NA, NaN and Inf are now simply skipped (similar to what the corresponding Base R functions do). Second, we now also accept large numeric values so that, e.g., anytime(as.numeric(Sys.time())) also works, effectively adding another input type. We have also squashed an issue reported by the 'undefined behaviour' sanitizer, and widened the test for when we try to deploy the gettz package to get missing timezone information.

A quick example of the new features:

anydate(c(NA, NaN, Inf, as.numeric(as.POSIXct("2016-09-01 10:11:12"))))
[1] NA           NA           NA           "2016-09-01"

The NEWS file summarises the release:

Changes in anytime version 0.0.4 (2016-10-20)

  • Before converting via lexical_cast, assign to atomic type via template logic to avoid an UBSAN issue (PR #15 closing issue #14)

  • More robust initialization and timezone information gathering.

  • More robust processing of non-finite input also coping with non-finite values such as NA, NaN and Inf which all return NA

  • Allow numeric POSIXt representation on input, also creating proper POSIXct (or, if requested, Date)

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianKees Cook: CVE-2016-5195

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

CryptogramPresident Obama Talks About AI Risk, Cybersecurity, and More

Interesting interview:

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we've got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can't build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we've got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

Worse Than FailureThe Contractor

As developers, we often find ourselves working in stupid ways because the folks who were hired above or before us think that what they set up is ideal. While this happens in numerous industries, finance, especially at huge conglomerates, takes the IT/software WTF to a whole new level. As contractors, we often get the "we need your help in an emergency, even though everything is unicorns and rainbows" speech that precedes some meltdown for which they want you to take the blame.


After taking a contract position at a large financial company, Bryan T. expected to see some amazing things. In the interview, they talked a big game and had even bigger budgets. It didn't take long to see some amazing things, but not the kind of amazing you'd think.

To begin with, the managers and developers were still trying to roll their own time zones and caching. They didn't understand any of these terms: object graph, business intelligence services, concurrency, message pump, domain model, and well-defined. Bryan even needed to explain to them why JavaScript on random web pages doesn't have natural mechanisms to attach to .NET event handlers in other applications.

Their head DBA explained that the difference between a uniqueness constraint and a primary key was semantics, and that audit records and current records should always be stored in the same table so as to keep related data together. They even used a simple text column to store City, State and Country, which led to obvious issues like three different values for the US ("US", "USA", "US_TOTAL").

We all need to conform to some semblance of coding practices. These folks decided to use anti-coding-practices. For example, IEntity was a class but the "I" prefix was used because it was returned from an API.

Shared common libraries were not allowed; if you needed to re-use a chunk of code, copy and paste it to where you need it; that's why they implemented cut-n-paste across applications!

This also explains why SLOC is their primary productivity metric.

There were no planned releases or scheduled iterations; whenever someone barked, a snapshot was manually copied from local builds.

Perhaps most interesting of all, they had an awesome approach to branching. Instead of actually branching, they copied the whole code base into a new repository and ran that forward. Of course, this left a trail of repository droppings that you had to navigate.

It took Bryan quite a while to acclimate to all of this. Then the team received a massive product request. Unfortunately, nobody understood the concept of scalability, let alone why it had to be considered. Instead, they decided that Cowboy Coding would be the M.O. of the project.

At this point, Bryan decided he didn't want the job all that much. That very day, he had to work with another developer with whom he'd not yet had the pleasure. Their task was to return some JSON from a web service call. After more than a month of work, the other developer proudly showed Bryan what he had come up with to return specific data from the web service:

      ["Morning Workflow Status", 
      ["Scrape Status", 
      ["UK Nominations Storage", 

For Bryan, it was the last straw.

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Planet Linux AustraliaStewart Smith: Workaround for opal-prd using 100% CPU

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that on OpenPower systems is responsible for some of the runtime diagnostics. Although a userspace process, it memory maps (as in mmap) in some code loaded by early firmware (Hostboot) called the HostBoot RunTime (HBRT) and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in user space gives us benefits such as being able to attach gdb, recover from segfaults etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. There’s a fix for this.

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. Primarily, pflash is meant only for developers and those who know what they’re doing. You can turn your computer into a brick using it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone https://github.com/open-power/skiboot.git
cd skiboot/external/pflash
make
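
Before touching the flash, it is worth confirming that the HBEL partition is actually present. pflash can print the partition table (a quick check, assuming your pflash build has the -i/--info option):

pflash -i   # the partition listing should include an HBEL entry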

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

# create a 147456-byte zero-filled file to program into the HBEL partition
dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
# erase the HBEL partition, then program it back with (ECC) zeroes
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel

Note: you cannot simply erase the partition or use the pflash option to do an ECC erase; you may render your system unbootable if you get it wrong.

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.

Planet DebianHéctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

In the previous post the Open Build Service software architecture was overviewed. In this blog post, a tutorial on setting up a package build with OBS from Debian packages is presented.


  • Generate a test environment by creating Stretch/SID VM
  • Enable experimental repository
  • Install OBS server, api, worker and osc CLI packages
  • Ensure all OBS services are running
  • Create an OBS project for Download on Demand (DoD)
  • Create an OBS project linked to DoD
  • Adding a package to the project
  • Troubleshooting OBS

Generate a test environment by creating Stretch/SID VM

Really, use whatever suits you best, but please create an untrusted test environment for this one.

This tutorial assumes “$hostname” is “stretch”; the VM should be running the stretch or sid suite.

Be aware that copying & pasting configuration files from this post might leave you with broken characters (e.g. curly quotes such as “).

Install from a Debian Stretch weekly netinst CD image.

Enable experimental repository

# echo "deb http://deb.debian.org/debian experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update

Install and set up the OBS server, api, worker and osc CLI packages

# apt-get install obs-server obs-api obs-worker osc

  • The install process needs a MySQL database; if no MySQL server is set up yet, a password needs to be provided.
  • When the OBS API database ‘obs-api’ is created, you will be asked to pick a password for it; provide “opensuse”. The ‘obs-api’ package will configure the apache2 HTTPS webserver (creating a dummy certificate for “stretch”) to serve the OBS webui.
  • Add “stretch” and “obs” aliases to the “localhost” entry in your /etc/hosts file.
  • Enable the worker by setting ENABLED=1 in /etc/default/obsworker.
  • Try to connect to the web UI at https://stretch/
  • Log into the OBS webui (default credentials: Admin/opensuse).
  • From the osc command line tool, try to list the projects in OBS:

 $ osc -A https://stretch ls

Accept the dummy certificate and provide the credentials (defaults: Admin/opensuse).
If the install proceeded as expected, continue to the next step.

Ensure all OBS services are running

# backend services
obsrun     813  0.0  0.9 104960 20448 ?        Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5 157512 31940 ?        Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6 157644 32960 ?        S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8 167972 38600 ?        Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8 168100 38864 ?        S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6 346964 12872 ?        Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l
obsrun     818  0.1  0.5  78548 11884 ?        Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3  77516  7196 ?        Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0   4284  1324 ?        Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?      Sl   08:33   1:31  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/ --socket=/var/run/mysqld/mysqld.sock --port=3306

# web services
root      1452  0.0  0.1 110020  3968 ?        Ss   08:34   0:01 /usr/sbin/apache2 -k start
root      1454  0.0  0.1 435992  3496 ?        Ssl  08:34   0:00  \_ Passenger watchdog
root      1460  0.3  0.2 651044  5188 ?        Sl   08:34   1:46  |   \_ Passenger core
nobody    1465  0.0  0.1 444572  3312 ?        Sl   08:34   0:00  |   \_ Passenger ust-router
www-data  1476  0.0  0.1 855892  2608 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1477  0.0  0.1 856068  2880 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1761  0.0  4.9 426868 102040 ?       Sl   08:34   0:29 delayed_job.0
www-data  1767  0.0  4.8 425624 99888 ?        Sl   08:34   0:30 delayed_job.1
www-data  1775  0.0  4.9 426516 101708 ?       Sl   08:34   0:28 delayed_job.2
nobody    1788  0.0  5.7 496092 117480 ?       Sl   08:34   0:03 Passenger RubyApp: /usr/share/obs/api
nobody    1796  0.0  4.9 488888 102176 ?       Sl   08:34   0:00 Passenger RubyApp: /usr/share/obs/api
www-data  1814  0.0  4.5 282576 92376 ?        Sl   08:34   0:22 delayed_job.1000
www-data  1829  0.0  4.4 282684 92228 ?        Sl   08:34   0:22 delayed_job.1010
www-data  1841  0.0  4.5 282932 92536 ?        Sl   08:34   0:22 delayed_job.1020
www-data  1855  0.0  4.9 427988 101492 ?       Sl   08:34   0:29 delayed_job.1030
www-data  1865  0.2  5.0 492500 102964 ?       Sl   08:34   1:09 clockworkd.clock
www-data  1899  0.0  0.0  87100  1400 ?        S    08:34   0:00 /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
www-data  1900  0.1  0.4 161620  8276 ?        Sl   08:34   0:51  \_ /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf

# OBS worker
root      1604  0.0  0.0  28116  1492 ?        Ss   08:34   0:00 SCREEN -m -d -c /srv/obs/run/worker/boot/screenrc
root      1605  0.0  0.9  75424 18764 pts/0    Ss+  08:34   0:06  \_ /usr/bin/perl -w ./bs_worker --hardstatus --root /srv/obs/worker/root_1 --statedir /srv/obs/run/worker/1 --id stretch:1 --reposerver http://obs:5252 --jobs 1
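
If you prefer querying systemd directly instead of scanning ps output, the same daemons can be checked by unit name (the unit names below are the ones used with journalctl later in this tutorial; your installation may have additional OBS units):

# systemctl status obsdispatcher.service obsdodup.service obsscheduler@x86_64.service obspublisher.service obsworker.service
# systemctl status apache2.service mysql.service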

Create an OBS project for Download on Demand (DoD)

Create a meta project file:

$ osc -A https://stretch:443 meta prj Debian:8 -e

<project name="Debian:8">
  <title>Debian 8 DoD</title>
  <description>Debian 8 DoD</description>
  <person userid="Admin" role="maintainer"/>
  <repository name="main">
    <download arch="x86_64" url="" repotype="deb"/>  <!-- fill url with a Debian 8 repository mirror -->
  </repository>
</project>

Visit webUI to check project configuration

Create a meta project configuration file:

$ osc -A https://stretch:443 meta prjconf Debian:8 -e

Add the following project configuration:

Repotype: debian

# create initial user
Preinstall: base-passwd
Preinstall: user-setup

# required for preinstall images
Preinstall: perl

# preinstall essentials + dependencies
Preinstall: base-files base-passwd bash bsdutils coreutils dash debconf
Preinstall: debianutils diffutils dpkg e2fslibs e2fsprogs findutils gawk
Preinstall: gcc-4.9-base grep gzip hostname initscripts insserv libacl1
Preinstall: libattr1 libblkid1 libbz2-1.0 libc-bin libc6 libcomerr2 libdb5.3
Preinstall: libgcc1 liblzma5 libmount1 libncurses5 libpam-modules
Preinstall: libpcre3 libsmartcols1
Preinstall: libpam-modules-bin libpam-runtime libpam0g libreadline6
Preinstall: libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2
Preinstall: libslang2 libss2 libtinfo5 libustr-1.0-1 libuuid1 login lsb-base
Preinstall: mount multiarch-support ncurses-base ncurses-bin passwd perl-base
Preinstall: readline-common sed sensible-utils sysv-rc sysvinit sysvinit-utils
Preinstall: tar tzdata util-linux zlib1g

Runscripts: base-passwd user-setup base-files gawk

VMinstall: libdevmapper1.02.1

Order: user-setup:base-files

# Essential packages (this should also pull the dependencies)
Support: base-files base-passwd bash bsdutils coreutils dash debianutils
Support: diffutils dpkg e2fsprogs findutils grep gzip hostname libc-bin 
Support: login mount ncurses-base ncurses-bin perl-base sed sysvinit 
Support: sysvinit-utils tar util-linux

# Build-essentials
Required: build-essential
Prefer: build-essential:make

# build script needs fakeroot
Support: fakeroot
# lintian support would be nice, but breaks too much atm
#Support: lintian

# helper tools in the chroot
Support: less kmod net-tools procps psmisc strace vim

# everything below same as for Debian:6.0 (apart from the version macros ofc)

# circular dependendencies in openjdk stack
Order: openjdk-6-jre-lib:openjdk-6-jre-headless
Order: openjdk-6-jre-headless:ca-certificates-java

Keep: binutils cpp cracklib file findutils gawk gcc gcc-ada gcc-c++
Keep: gzip libada libstdc++ libunwind
Keep: libunwind-devel libzio make mktemp pam-devel pam-modules
Keep: patch perl rcs timezone

Prefer: cvs libesd0 libfam0 libfam-dev expect

Prefer: gawk locales default-jdk
Prefer: xorg-x11-libs libpng fam mozilla mozilla-nss xorg-x11-Mesa
Prefer: unixODBC libsoup glitz java-1_4_2-sun gnome-panel
Prefer: desktop-data-SuSE gnome2-SuSE mono-nunit gecko-sharp2
Prefer: apache2-prefork openmotif-libs ghostscript-mini gtk-sharp
Prefer: glib-sharp libzypp-zmd-backend mDNSResponder

Prefer: -libgcc-mainline -libstdc++-mainline -gcc-mainline-c++
Prefer: -libgcj-mainline -viewperf -compat -compat-openssl097g
Prefer: -zmd -OpenOffice_org -pam-laus -libgcc-tree-ssa -busybox-links
Prefer: -crossover-office -libgnutls11-dev

# alternative pkg-config implementation
Prefer: -pkgconf
Prefer: -openrc
Prefer: -file-rc

Conflict: ghostscript-library:ghostscript-mini

Ignore: sysvinit:initscripts

Ignore: aaa_base:aaa_skel,suse-release,logrotate,ash,mingetty,distribution-release
Ignore: gettext-devel:libgcj,libstdc++-devel
Ignore: pwdutils:openslp
Ignore: pam-modules:resmgr
Ignore: rpm:suse-build-key,build-key
Ignore: bind-utils:bind-libs
Ignore: alsa:dialog,pciutils
Ignore: portmap:syslogd
Ignore: fontconfig:freetype2
Ignore: fontconfig-devel:freetype2-devel
Ignore: xorg-x11-libs:freetype2
Ignore: xorg-x11:x11-tools,resmgr,xkeyboard-config,xorg-x11-Mesa,libusb,freetype2,libjpeg,libpng
Ignore: apache2:logrotate
Ignore: arts:alsa,audiofile,resmgr,libogg,libvorbis
Ignore: kdelibs3:alsa,arts,pcre,OpenEXR,aspell,cups-libs,mDNSResponder,krb5,libjasper
Ignore: kdelibs3-devel:libvorbis-devel
Ignore: kdebase3:kdebase3-ksysguardd,OpenEXR,dbus-1,dbus-1-qt,hal,powersave,openslp,libusb
Ignore: kdebase3-SuSE:release-notes
Ignore: jack:alsa,libsndfile
Ignore: libxml2-devel:readline-devel
Ignore: gnome-vfs2:gnome-mime-data,desktop-file-utils,cdparanoia,dbus-1,dbus-1-glib,krb5,hal,libsmbclient,fam,file_alteration
Ignore: libgda:file_alteration
Ignore: gnutls:lzo,libopencdk
Ignore: gnutls-devel:lzo-devel,libopencdk-devel
Ignore: pango:cairo,glitz,libpixman,libpng
Ignore: pango-devel:cairo-devel
Ignore: cairo-devel:libpixman-devel
Ignore: libgnomeprint:libgnomecups
Ignore: libgnomeprintui:libgnomecups
Ignore: orbit2:libidl
Ignore: orbit2-devel:libidl,libidl-devel,indent
Ignore: qt3:libmng
Ignore: qt-sql:qt_database_plugin
Ignore: gtk2:libpng,libtiff
Ignore: libgnomecanvas-devel:glib-devel
Ignore: libgnomeui:gnome-icon-theme,shared-mime-info
Ignore: scrollkeeper:docbook_4,sgml-skel
Ignore: gnome-desktop:libgnomesu,startup-notification
Ignore: python-devel:python-tk
Ignore: gnome-pilot:gnome-panel
Ignore: gnome-panel:control-center2
Ignore: gnome-menus:kdebase3
Ignore: gnome-main-menu:rug
Ignore: libbonoboui:gnome-desktop
Ignore: postfix:pcre
Ignore: docbook_4:iso_ent,sgml-skel,xmlcharent
Ignore: control-center2:nautilus,evolution-data-server,gnome-menus,gstreamer-plugins,gstreamer,metacity,mozilla-nspr,mozilla,libxklavier,gnome-desktop,startup-notification
Ignore: docbook-xsl-stylesheets:xmlcharent
Ignore: liby2util-devel:libstdc++-devel,openssl-devel
Ignore: yast2:yast2-ncurses,yast2-theme-SuSELinux,perl-Config-Crontab,yast2-xml,SuSEfirewall2
Ignore: yast2-core:netcat,hwinfo,wireless-tools,sysfsutils
Ignore: yast2-core-devel:libxcrypt-devel,hwinfo-devel,blocxx-devel,sysfsutils,libstdc++-devel
Ignore: yast2-packagemanager-devel:rpm-devel,curl-devel,openssl-devel
Ignore: yast2-devtools:perl-XML-Writer,libxslt,pkgconfig
Ignore: yast2-installation:yast2-update,yast2-mouse,yast2-country,yast2-bootloader,yast2-packager,yast2-network,yast2-online-update,yast2-users,release-notes,autoyast2-installation
Ignore: yast2-bootloader:bootloader-theme
Ignore: yast2-packager:yast2-x11
Ignore: yast2-x11:sax2-libsax-perl
Ignore: openslp-devel:openssl-devel
Ignore: java-1_4_2-sun:xorg-x11-libs
Ignore: java-1_4_2-sun-devel:xorg-x11-libs
Ignore: kernel-um:xorg-x11-libs
Ignore: tetex:xorg-x11-libs,expat,fontconfig,freetype2,libjpeg,libpng,ghostscript-x11,xaw3d,gd,dialog,ed
Ignore: yast2-country:yast2-trans-stats
Ignore: susehelp:susehelp_lang,suse_help_viewer
Ignore: mailx:smtp_daemon
Ignore: cron:smtp_daemon
Ignore: hotplug:syslog
Ignore: pcmcia:syslog
Ignore: avalon-logkit:servlet
Ignore: jython:servlet
Ignore: ispell:ispell_dictionary,ispell_english_dictionary
Ignore: aspell:aspel_dictionary,aspell_dictionary
Ignore: smartlink-softmodem:kernel,kernel-nongpl
Ignore: OpenOffice_org-de:myspell-german-dictionary
Ignore: mediawiki:php-session,php-gettext,php-zlib,php-mysql,mod_php_any
Ignore: squirrelmail:mod_php_any,php-session,php-gettext,php-iconv,php-mbstring,php-openssl

Ignore: simias:mono(log4net)
Ignore: zmd:mono(log4net)
Ignore: horde:mod_php_any,php-gettext,php-mcrypt,php-imap,php-pear-log,php-pear,php-session,php
Ignore: xerces-j2:xml-commons-apis,xml-commons-resolver
Ignore: xdg-menu:desktop-data
Ignore: nessus-libraries:nessus-core
Ignore: evolution:yelp
Ignore: mono-tools:mono(gconf-sharp),mono(glade-sharp),mono(gnome-sharp),mono(gtkhtml-sharp),mono(atk-sharp),mono(gdk-sharp),mono(glib-sharp),mono(gtk-sharp),mono(pango-sharp)
Ignore: gecko-sharp2:mono(glib-sharp),mono(gtk-sharp)
Ignore: gnome-libs:libgnomeui
Ignore: nautilus:gnome-themes
Ignore: gnome-panel:gnome-themes
Ignore: gnome-panel:tomboy

Substitute: utempter

%ifnarch s390 s390x ppc ia64
Substitute: java2-devel-packages java-1_4_2-sun-devel
 %ifnarch s390x
Substitute: java2-devel-packages java-1_4_2-ibm-devel
Substitute: java2-devel-packages java-1_4_2-ibm-devel xorg-x11-libs-32bit

Substitute: yast2-devel-packages docbook-xsl-stylesheets doxygen libxslt perl-XML-Writer popt-devel sgml-skel update-desktop-files yast2 yast2-devtools yast2-packagemanager-devel yast2-perl-bindings yast2-testsuite

# SUSE compat mappings
Substitute: gcc-c++ gcc
Substitute: libsigc++2-devel libsigc++-2.0-dev
Substitute: glibc-devel-32bit
Substitute: pkgconfig pkg-config

%ifarch %ix86
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-bigsmp kernel-debug kernel-um kernel-xen kernel-kdump
%ifarch ia64
Substitute: kernel-binary-packages kernel-default kernel-debug
%ifarch x86_64
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-xen kernel-kdump
%ifarch ppc
Substitute: kernel-binary-packages kernel-default kernel-kdump kernel-ppc64 kernel-iseries64
%ifarch ppc64
Substitute: kernel-binary-packages kernel-ppc64 kernel-iseries64
%ifarch s390
Substitute: kernel-binary-packages kernel-s390
%ifarch s390x
Substitute: kernel-binary-packages kernel-default

%define debian_version 800

%debian_version 800

Visit webUI to check project configuration

Create an OBS project linked to DoD

$ osc -A https://stretch:443 meta prj test -e

<project name="test">
  <person userid="Admin" role="maintainer"/>
  <repository name="Debian_8.0">
    <path project="Debian:8" repository="main"/>
  </repository>
</project>

Visit webUI to check project configuration

Adding a package to the project

$ osc -A https://stretch:443 co test ; cd test
$ mkdir hello ; cd hello ; apt-get source -d hello ; cd - ; 
$ osc add hello 
$ osc ci -m "New import" hello

The package should go to the “dispatched” state, then to “blocked” while it downloads build dependencies via the DoD link; eventually it should start building. Please check the journal logs if something goes wrong or gets stuck.
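
The build can also be followed from the command line with osc (a small sketch, using the Debian_8.0 repository and x86_64 architecture defined in the project meta above):

$ cd hello
$ osc results                      # build state per repository/architecture
$ osc buildlog Debian_8.0 x86_64   # follow the build log once the build starts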

Visit webUI to check hello package build state

OBS logging to the journal

Check in the journal logs that everything went fine:

$ sudo journalctl -u obsdispatcher.service -u obsdodup.service -u obsscheduler@x86_64.service -u obsworker.service -u obspublisher.service


Currently we are facing a few issues with the web UI, and there are more issues that have not yet been reported; please file them with ‘reportbug obs-api’.

Planet DebianDaniel Pocock: Choosing smartcards, readers and hardware for the Outreachy project

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users), need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained with a table:

                                       On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad
Software:                              Free/open | Mostly free/open, Proprietary firmware in reader
Key extraction:                        Possible | Not generally possible
Passphrase compromise attack vectors:  Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers)
Other factors:                         No hardware | Small, USB key form-factor | Largest form factor

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.
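
Whichever reader is chosen, it is worth verifying that it is actually detected before relying on it. A minimal check on a Debian-based system (package names are an assumption and may differ elsewhere):

$ sudo apt-get install pcscd pcsc-tools scdaemon
$ pcsc_scan            # should list the connected reader and any inserted card
$ gpg --card-status    # GnuPG's view of the OpenPGP card (use gpg2 on older releases)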

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.

For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.
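
As a rough sanity check of how much entropy a machine has available (with or without such a device), the kernel exposes its current estimate:

$ cat /proc/sys/kernel/random/entropy_avail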

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

LongNowDouglas Coupland Seminar Tickets


The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Douglas Coupland presents “The Extreme Present”


Tuesday November 1, 02016 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats, join today! General Tickets $15


About this Seminar:

Douglas Coupland has done so much more than name a generation (“Generation X”—post-Boomer, pre-Millennial, from his novel of that name). He is a prolific writer (22 books, including nonfiction such as his biography of Marshall McLuhan) and a brilliant visual artist with installations at a variety of museums and public sites. His 1995 novel Microserfs nailed the contrast between corporate and startup cultures in software and Web design.

Coupland is fascinated by time. For Long Now he plans to deploy ideas and graphics “all dealing on some level with time and how we perceive it, how we used to perceive it, and where our perception of it may be going.” A time series about time.


Planet DebianPau Garcia i Quiles: FOSDEM Desktops DevRoom 2017 Call for Participation

FOSDEM is one of the largest (5,000+ hackers!) gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium, Europe).

Once again, one of the tracks will be the Desktops DevRoom (formerly known as “CrossDesktop DevRoom”), which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to:

  • Open Desktops: Gnome, KDE, Unity, Enlightenment, XFCE, Razor, MATE, Cinnamon, ReactOS, CDE etc
  • Closed desktops: Windows, Mac OS X, MorphOS, etc (when talking about a FLOSS topic)
  • Software development for the desktop
  • Development tools
  • Applications that enhance desktops
  • General desktop matters
  • Cross-platform software development
  • Web
  • Thin clients, desktop virtualization, etc

Talks can be very specific, such as the advantages/disadvantages of distributing a desktop application with snap vs flatpak, or as general as using HTML5 technologies to develop native applications.

Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2016 schedule might give you some inspiration.


Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed with around 400 from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)
  • Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified. You may be assigned LESS time than you request.

How to submit

All submissions are made in the Pentabarf event planning tool:

To submit your talk, click on “Create Event”, then make sure to select the “Desktops” devroom as the “Track”. Otherwise your talk will not be even considered for any devroom at all.

If you already have a Pentabarf account from a previous year, even if your talk was not accepted, please reuse it. Create an account if, and only if, you don’t have one from a previous year. If you have any issues with Pentabarf, please contact


The deadline for submissions is December 5th 2016.

FOSDEM will be held on the weekend of 4 & 5 February 2017 and the Desktops DevRoom will take place on Sunday, February 5th 2017.

We will contact every submitter with a “yes” or “no” before December 11th 2016.

Recording permission

The talks in the Desktops DevRoom will be audio and video recorded, and possibly streamed live too.

In the “Submission notes” field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example:

“If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, <NAME>.”

If you want us to stop the recording in the Q & A part (should you have one), please tell us. We can do that but only for the Q & A part.

More information

The official communication channel for the Desktops DevRoom is its mailing list

Use this page to manage your subscription:


The Desktops DevRoom 2017 is managed by a team representing the most notable open desktops:

  • Pau Garcia i Quiles, KDE
  • Christophe Fergeau, Gnome
  • Michael Zanetti, Unity
  • Philippe Caseiro, Enlightenment
  • Jérome Leclanche, Razor

If you want to join the team, please contact

Krebs on SecuritySpreading the DDoS Disease and Selling the Cure

Earlier this month a hacker released the source code for Mirai, a malware strain that was used to launch a historically large 620 Gbps denial-of-service attack against this site in September. That attack came in apparent retribution for a story here which directly preceded the arrest of two Israeli men for allegedly running an online attack for hire service called vDOS. Turns out, the site where the Mirai source code was leaked had some very interesting things in common with the place vDOS called home.

The domain name where the Mirai source code was originally placed for download — santasbigcandycane[dot]cx — is registered at the same domain name registrar that was used to register the now-defunct DDoS-for-hire service vdos-s[dot]com.

Normally, this would not be remarkable, since most domain registrars have thousands or millions of domains in their stable. But in this case it is interesting mainly because the registrar used by both domains — a company called namecentral.com — has apparently been used to register just 38 domains since its inception by its current owner in 2012, according to historic WHOIS records (for the full list see this PDF).

What’s more, a cursory look at the other domains registered via namecentral.com since then reveals a number of other DDoS-for-hire services, also known as “booter” or “stresser” services.

It’s extremely odd that someone would take on the considerable cost and trouble of creating a domain name registrar just to register a few dozen domains. It costs $3,500 to apply to the Internet Corporation for Assigned Names and Numbers (ICANN) for a new registrar authority. The annual fee for being an ICANN-approved registrar is $4,000, and then there’s a $800 quarterly fee for smaller registrars. In short, domain name registrars generally need to register many thousands of new domains each year just to turn a profit.

Many of the remaining three dozen or so domains registered via Namecentral over the past few years are tied to vDOS. Before vDOS was taken offline it was massively hacked, and a copy of the user and attack database was shared with KrebsOnSecurity. From those records it was easy to tell which third-party booter services were using vDOS’s application programming interface (API), a software function that allowed them to essentially resell access to vDOS with their own white-labeled stresser.

And a number of those vDOS resellers were registered through Namecentral, including 83144692[dot]com — a DDoS-for-hire service marketed at Chinese customers. Another Namecentral domain also was a vDOS reseller.

Other DDoS-for-hire domains registered through Namecentral include xboot[dot]net, xr8edstresser[dot]com, snowstresser[dot]com, ezstress[dot]com, exilestress[dot]com, diamondstresser[dot]net, dd0s[dot]pw, rebelsecurity[dot]net, and beststressers[dot]com.


Namecentral’s current owner is a 19-year-old California man by the name of Jesse Wu. Responding to questions emailed from KrebsOnSecurity, Wu said Namecentral’s policy on abuse was inspired by Cloudflare, the DDoS protection company that guards Namecentral and most of the above-mentioned DDoS-for-hire sites from attacks of the very kind they sell.

“I’m not sure (since registrations are automated) but I’m going to guess that the reason you’re interested in us is because some stories you’ve written in the past had domains registered on our service or otherwise used one of our services,” Wu wrote.

“We have a policy inspired by Cloudflare’s similar policy that we ourselves will remain content-neutral and in the support of an open Internet, we will almost never remove a registration or stop providing services, and furthermore we’ll take any effort to ensure that registrations cannot be influenced by anyone besides the actual registrant making a change themselves – even if such website makes us uncomfortable,” Wu said. “However, as a US based company, we are held to US laws, and so if we receive a valid court issued order to stop providing services to a client, or to turn over/disable a domain, we would happily comply with such order.”

Wu’s message continued:

“As of this email, we have never received such an order, we have never been contacted by any law enforcement agency, and we have never even received a legitimate, credible abuse report. We realize this policy might make us popular with ‘dangerous’ websites but even then, if we denied them services, simply not providing them services would not make such website stop existing, they would just have to find some other service provider/registrar or change domains more often. Our services themselves cannot be used for anything harmful – a domain is just a string of letters, and the rest of our services involve website/ddos protection/ecommerce security services designed to protect websites.”

Taking a page from Cloudflare, indeed. I’ve long taken Cloudflare to task for granting DDoS protection for countless DDoS-for-hire services, to no avail. I’ve maintained that Cloudflare has a blatant conflict of interest here, and that the DDoS-for-hire industry would quickly blast itself into oblivion because the proprietors of these attack services like nothing more than to turn their attack cannons on each other. Cloudflare has steadfastly maintained that picking and choosing who gets to use their network is a slippery slope that it will not venture toward.

Although Mr. Wu says he had nothing to do with the domains registered through Namecentral, public records filed elsewhere raise serious unanswered questions about that claim.

In my Sept. 8 story, Israeli Online Attack Service Earned $600,000 in Two Years, I explained that the hacked vDOS database indicated the service was run by two 18-year-old Israeli men. At some point, vDOS decided to protect all customer logins to the service with an extended validation (EV) SSL certificate. And for that, it needed to show it was tied to an actual corporate entity.

My investigation into those responsible for supporting vDOS began after I took a closer look at the SSL certificate that vDOS-S[dot]com used to encrypt customer logins. On May 12, 2015, an EV SSL certificate was issued for vDOS, according to this record.

As we can see, whoever registered that EV cert did so using the business name VS NETWORK SERVICES LTD, and giving an address in the United Kingdom of 217 Blossomfield Rd., Solihull, West Midlands.

Who owns VS NETWORK SERVICES LTD? According to this record from Companies House UK — an official ledger of corporations located in the United Kingdom — the director of the company was listed as one Thomas McGonagall.

Records from Companies House UK on the firm responsible for registering vDOS’s SSL certificate.

This individual gave the same West Midlands address, stating that he was appointed to VS Network Services on May 12, 2015, and that his birthday was in May 1988. A search in Companies House for Thomas McGonagall shows that a person by that same name and address also was listed that very same day as a director for a company called REBELSECURITY LTD.

If we go back even further into the corporate history of this mysterious Mr. McGonagall, we find that he was appointed director of NAMECENTRAL LTD on August 18, 2014. Mr. McGonagall’s birthday is listed as December 1995 in this record, and his address is given as 29 Wigorn Road, Smethwick, West Midlands, United Kingdom, B67 5HL. Also on that same day, he was appointed to run EZSTRESS LTD, a company at the same Smethwick, West Midlands address.

Strangely enough, those company names correspond to other domains registered through Namecentral around the same time the companies were created, including rebelsecurity[dot]net, ezstress[dot]net.

Asked to explain the odd apparent corporate connections between Namecentral, vDOS, EZStress and Rebelsecurity, Wu chalked that up to an imposter or potential phishing attack.

“I’m not sure who that is, and we are not affiliated with Namecentral Ltd.,” he wrote. “I looked it up though and it seems like it is either closed or has never been active. From what you described it could be possible someone set up shell companies to try and get/resell EV certs (and someone’s failed attempt to set up a phishing site for us – thanks for the heads up).”

Interestingly, among the three dozen or so domains registered through Namecentral is a site that until recently included nearly identical content to Namecentral’s home page and appears to be aimed at selling EV certs. It was responding as of early October, but it is no longer online.

I also asked Wu why he chose to become a domain registrar when it appeared he had very few domains to justify the substantial annual costs of maintaining a registrar business.

“Like most other registrars, we register domains only as a value added service,” he replied via email. “We have more domains than that (not willing to say exactly how many) but primarily we make our money on our website/ddos protection/ecommerce protection.”

Now we were getting somewhere. Turns out, Wu isn’t really in the domain registrar business — not for the money, anyway. The real money, as his response suggests, is in selling DDoS protection against the very DDoS-for-hire services he is courting with his domain registration service.

Asked to reconcile his claim for having a 100 percent hands-off, automated domain registration system with the fact that Namecentral’s home page says the company doesn’t actually have a way to accept automated domain name registrations (like most normal domain registrars), Wu again had an answer.

“Our site says we only take referred registrations, meaning that at the moment we’re asking that another prior customer referred you to open a new account for our services, including if you’d like a reseller account,” he wrote.


I was willing to entertain the notion that perhaps Mr. Wu was in fact the target of a rather elaborate scam of some sort. That is, until I stumbled upon another company that was registered in the U.K. to Mr. McGonagall.

That other company — SIMPLIFYNT LTD — was registered by Mr. McGonagall on October 29, 2014. Turns out, almost the exact same information included in the original Web site registration records for Jesse Wu’s purchase of Namecentral was used for that company’s domain, which also was registered on Oct. 29, 2014. I initially missed this domain because it was not registered through Namecentral. If someone had phished Mr. Wu in this case, they had been very quick to the punch indeed.

In the domain registration records, Jesse Wu gave an email address at a domain that is no longer online, but a cached copy of that site shows it was once a Web development business. That cached page lists yet another contact email address.

I ordered a reverse WHOIS lookup on all historic Web site registration records that included that domain anywhere in the records. The search returned 15 other domains, including several more apparent DDoS-for-hire domains.

Among the oldest and most innocuous of those 15 domains was a fan site for a massively multiplayer online role-playing game (MMORPG) called Maple Story. Another historic record lookup shows that the fan site was originally registered in 2009 to a “Denny Ng.” As it happens, Denny Ng is listed as the co-owner of the $1.6 million Walnut, Calif. home where Jesse until very recently lived with his mom Cindy Wu (Jesse is now a student at the University of California, San Diego).


Another domain of interest that was secured via Namecentral belongs to Datawagon. Registered by 19-year-old Christopher J. “CJ” Sculti Jr., Datawagon also bills itself as a DDoS mitigation firm. It appears Mr. Sculti built his DDoS protection empire out of his parents’ $2.6 million home in Rye, NY. He’s now a student at Clemson University, according to his Facebook page.

CJ Sculti Jr.’s Facebook profile photo. Sculti is pictured on the right.

As I noted in my story DDoS Mitigation Firm Has a History of Hijacks, Sculti earned his 15 minutes of fame in 2015 when he lost a cybersquatting suit with Dominos Pizza after registering a Dominos-related domain (another domain registered via Namecentral).

Around that time, Sculti contacted KrebsOnSecurity via Skype, asking if I’d be interested in writing about this cybersquatting dispute with Dominos. In that conversation, Sculti — apropos of nothing — admits to having just scanned the Internet for routers that were known to be protected by little more than the factory-default usernames and passwords.

Sculti goes on to brag that his scan revealed a quarter-million routers that were vulnerable, and that he then proceeded to upload some kind software to each vulnerable system. Here’s a snippet of that chat conversation, which is virtually one-sided.

July 7, 2015:

21:37 CJ

21:37 CJ
vulnerable routers are a HUGE issue

21:37 CJ
a few months ago

21:37 CJ
I scanned the internet with a few sets of defualt logins

21:37 CJ
for telnet

21:37 CJ
and I was able to upload and execute a binary

21:38 CJ
on 250k devices

21:38 CJ
most of which were routers

21:38 Brian Krebs

21:38 CJ

21:38 CJ
i’m surprised no one has looked into that yet

21:38 CJ

21:39 CJ
it’s a huge issue lol

21:39 CJ
that’s tons of bandwidth

21:39 CJ
that could be potentially used

21:39 CJ
in the wrong way

21:39 CJ

Tons of bandwidth, indeed. The very next time I heard from Sculti was the same day I published the above-mentioned story about Datawagon’s relationship to BackConnect Inc., a company that admitted to hijacking 256 Internet addresses from vDOS’s hosting provider in Bulgaria — allegedly to defend itself against a monster attack allegedly launched by vDOS’s proprietors.

Sculti took issue with how he was portrayed in that report, and after a few terse words were exchanged, I blocked his Skype account from further communicating with mine. Less than an hour after that exchange, my Skype inbox was flooded with thousands of bogus contact requests from hacked or auto-created Skype accounts.

Less than six hours after that conversation, my site came under the biggest DDoS attack the Internet had ever witnessed at the time, an attack that experts have since traced back to a large botnet of IoT devices infected with Mirai.

As I wrote in the story that apparently incurred Sculti’s ire, Datawagon — like BackConnect — also has a history of hijacking broad swaths of Internet address space that do not belong to it. That listing came not long after Datawagon announced that it was the rightful owner of some 256 Internet addresses that had long been dormant.

The Web address in question currently does not respond to browser requests, but it previously routed to a page listing the core members of a hacker group calling itself the Money Team. Other sites also previously tied to that Internet address include numerous DDoS-for-hire services, such as nazistresser[dot]biz, exostress[dot]in, scriptkiddie[dot]eu, packeting[dot]eu, leet[dot]hu, booter[dot]in, vivostresser[dot]com, shockingbooter[dot]com and xboot[dot]info, among others.

Datawagon has earned a reputation on hacker forums as a “bulletproof” hosting provider — one that will essentially ignore abuse complaints from other providers and turn a blind eye to malicious activity perpetrated by its customers. In the screenshot below — taken from a thread on Hackforums where Datawagon was suggested as a reliable bulletproof hoster — the company is mentioned in the same vein as HostSailor, another bulletproof provider that has been the source of much badness (as well as legal threats against this author).


In yet another Hackforums discussion thread from June 2016 titled “VPS [virtual private servers] that allow DDoS scripts,” one user recommends Datawagon: “I use [Datawagon]. They allow anything.”

Last year, Sculti formed a company in Florida along with a self-avowed spammer. Perhaps unsurprisingly, anti-spam group Spamhaus soon listed virtually all of Datawagon’s Internet address space as sources of spam.

Are either Mr. Wu or Mr. Sculti behind the Mirai botnet attacks? I cannot say. But I’d be willing to bet money that one or both of them knows who is. In any case, it would appear that both men may have hit upon a very lucrative business model. More to come.

Cory DoctorowInterview with IEEE-USA Insight Podcast

I was interviewed for the IEEE-USA Insight Podcast last summer in New Orleans, during their Future Leaders Summit, where I was privileged to give the keynote (MP3)

TEDTED Fellows in the Field: Exploring Nairobi with “Blinky” Bill Sellanga

Nairobi, Kenya is one of the undisputed hubs of creativity on the African continent — and TED Fellows are at the center of the action. They’re building global technology companies like Ushahidi and BRCK, making genre-busting music that draws on wide-ranging cultural influences and working with marginalized communities in Kenya to make sure their voices are heard.

Meet Kenyan musician, DJ and TED Fellow “Blinky” Bill Sellanga in the latest installment of the Fellows in the Field video series. Explore Nairobi with Bill as he works on his new solo album in his studio, wanders the bustling streets in search of inspiration and DJs at The Alchemist, one of his favorite spots in the city.

“If you’re looking at Africa, you take a look at Nairobi,” Sellanga says. “We’re just discovering ourselves and figuring out how to express ourselves in a way that makes sense to us.”

Interested in becoming a TED Fellow yourself? The search is on for our next class. Learn more about becoming a TEDGlobal 2017 Fellow in Arusha, Tanzania. We encourage all talented innovators in their fields — science, art, technology, entrepreneurship, film and beyond — to apply to become a TED Fellow, especially those working across the African continent.

Apply now to become a TEDGlobal 2017 Fellow.

CryptogramBypassing Intel's ASLR

Researchers discover a clever attack that bypasses the address space layout randomization (ASLR) on Intel's CPUs.

Here's the paper. It discusses several possible mitigation techniques.

Planet DebianHéctor Orón Martínez: Open Build Service in Debian needs YOU! ☞

“Open Build Service is a generic system to build and distribute packages from sources in an automatic, consistent and reproducible way.”


openSUSE distributions’ build system is based on a generic framework named Open Build Service (OBS). I have been using these tools in my work environment and, as a Debian developer, I have to say it is a great tool. In this blog post I plan to walk you through the very basics of the tool and provide a tutorial to get, at the very least, a Debian package building.


Fig 1 – Open Build Service Architecture

The figure above shows the Open Build Service, from now on OBS, software architecture. There are several parts which we should differentiate:

  • Web UI / API (obs-api)
  • Backend (obs-server)
  • Build daemon / worker (obs-worker)
  • CLI tool to manage API (osc)

Each one of the above packages can be installed on a separate machine as a distributed architecture; it is very easy to split the system into several machines running the services. However, in the tutorial below everything is installed on a single machine.


The backend is composed of several scripts written either in shell or Perl. There are several services running in the backend:

  • Source service
  • Repository service
  • Scheduler service
  • Dispatcher service
  • Warden service
  • Publisher service
  • Signer service
  • DoD service

The backend manages source packages (in any format, such as RPM, DEB, …) and schedules them for a build on a worker. Once the package is built it can be published in a repository for the wider audience or kept unpublished and used by other builds.


The system can have several worker machines, which are in charge of performing the package builds. There are different options that can be configured (see /etc/default/obsworker), such as the enabling switch, the number of worker instances, and the jobs per instance. This part of the system is written in shell and/or Perl.


The frontend provides a clickable way to reach most of the options OBS offers: setting up projects, uploading/branching/deleting packages, submitting review requests, etc. As an example, you can see a live instance running at https://build.opensuse.org.

The frontend parts are really a Ruby-on-Rails web application; we (mainly thanks to Andrew Lee, with help from the Ruby team) have tried to get it running nicely, however we have had lots of issues due to JavaScript or rubygems malfunctioning. The current webui is visible and provides some package status, but most actions do not work properly: configurations cannot be applied because the editor does not save changes, and projects or packages within a project are not listed either. If you are a Ruby-on-Rails expert, or if you are able to help us out with some of the webui issues we see in Debian, that would be really appreciated.


OSC is a managing command-line tool, written in Python, that interfaces with the OBS API to perform actions, edit configurations, do package reviews, etc.
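
As a small taste of the workflow (the API URL and project/package names below are placeholders, not a real instance):

$ osc -A https://api.example.org ls                  # list projects on the server
$ osc -A https://api.example.org co home:user hello  # check out the "hello" package
$ osc ci -m "Update packaging"                       # commit changes, triggering a rebuild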


Now that we have done a general overview of the system, let me introduce you to OBS with a practical tutorial.

TUTORIAL: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service.

Sociological ImagesCan Cognitive Sociology Explain Why People Are Voting for a Different Candidate Than You?

Who among us this election — except perhaps that elusive undecided voter — has not turned to a politically aligned friend and said, from their heart of hearts, “I just can’t understand how anyone could vote for Clinton/Trump”? The sheer mindbogglingness of it, the utter failure of so many Americans to even begin to fathom voting for the other candidate, is one of the most disturbing features of this election. We all seem to be asking: What could the other side be thinking!?

left: flickr photo by Sarah Hina; right: flickr photo by Darron Bergenheier.

Perhaps what we need is a “sociology of thinking.” And we’ve got one; it’s called cognitive sociology.

One of the foundational texts in the subfield is called Social Mindscapes. In it, the sociologist Eviatar Zerubavel argues that we think as individuals (we are all alone in our brains) and we think as human beings (with the cognitive processes that humans have inherited from evolution), but we also think as members of social groups. Our thinking, then, is not only idiosyncratic (i.e., “individual”), nor universal (i.e., “human”) — though it is both those things — it’s also social. Our thinking is influenced by the groups to which we belong, what Zerubavel called “thought communities.” These are the people with whom we enjoy a meeting of the minds.

By this, Zerubavel doesn’t simply mean that our social groups shape what information we get and what arguments resonate, though that’s true. He and other cognitive sociologists argue that our thought communities shape cognition itself, that the brains of people in strongly divergent thought communities literally work differently. To Zerubavel, the idea that many Democrats can’t begin to understand Republican thinking — and vice versa — isn’t a surprise, it’s a hypothesis.

Research on sensory perception is fun evidence for their claims. Researchers have shown, for example, that our language categories influence not just how we describe the world we see, but how we see it. The Himba in Namibia, for example — who have one word for blue and some greens and another word for other greens, reds, and browns — are better than English speakers at differentiating one shade of green from another, but worse at differentiating green and blue from each other. Likewise, Russian speakers are better than English speakers at differentiating shades of blue because they have more than one word for the color and English speakers, in turn, are better than Japanese speakers at recognizing the gradations between blue and green, because the Japanese have traditionally used only one word to describe them both.

If our membership in thought communities is powerful enough to shift our very perception of color, then it must be able to influence our thinking in many other ways, too. In Social Mindscapes, Zerubavel shows that what we pay attention to, the categories we use, what we remember, and even our perception of time are all shaped by our thought communities.

Accordingly, cognitive sociology would predict that the rising polarization in politics and the fragmentation of media will make it harder and harder to understand each other, not because we don’t agree on the facts or because we have different political interests, but because our brains are actually working in divergent ways. That is, what we’re experiencing with this election is not just political disagreement, it’s a total breakdown in functional communication, which sounds about right.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


Google AdsenseLearn the top triggers of policy violation warnings

Today we’ll highlight some of the top triggers of policy violation warnings to help you avoid common pitfalls. If you haven’t already, download the All-In-One Policy Compliance Guide to help you understand the what's and why's of our policy processes so you can always stay one step ahead.

As a general guideline to building a strong policy compliant foundation, ensure that the pages within your site offer a unique value for users and comply with AdSense policies. Let’s get started.

Google ads cannot be placed on pages that infringe on copyrighted materials. Don’t try to monetize content that isn’t yours or you don’t have permission to use. 
Because users come to your site for the content, it should then be easy for users of your site to distinguish ads from content at all times. Ads that blend in or that are situated too close to content and navigational icons can cause invalid clicks. AdSense will deduct clicks that are determined to be invalid and, where possible, reimburse advertisers.

Text descriptions that include excessive use of profanity or erotic stories, jokes, or discussions are violations of AdSense policies.
Placing ads under misleading headings like “Resources” is a policy violation. Users should not be misled or asked to click on ads. Acceptable headers are “Advertisements” or “Sponsored Links”.
Content that’s sexually explicit – or suggestive without being explicit, such as lingerie – isn’t allowed. If you wouldn't show it in polite company, we don’t want AdSense advertisements appearing there.
Drawing unnatural attention to ads by using visuals, call-outs or placements that call too much attention to ads aren’t permitted either.

Content that features bloodshed, fight scenes, and gruesome or freak accidents is not permitted by the AdSense policy.
Webmaster Guidelines require publishers to make sure their content is original, adds value, and is intended primarily for users, not for search engines. Failure to adhere to these adds up to a violation of AdSense policy.

There you have it: eight common triggers for a potential policy violation warning. We recommend that you refer back to this blog post and use Google Search to identify if you have any violations in your content as you review your site and upcoming content. 

Coming up next – what to do if you receive a policy violation warning.

Posted by: Anastasia Almiasheva from the AdSense team

CryptogramSecurity Lessons from a Power Saw

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:

By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice in all three of these the human does not have to do anything special, just use the device. This is how we need to think from a security perspective.

  • Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is actually exposed is when you lower the saw to actually cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means to hurt yourself you have to manually lower the blade with one hand then insert your hand into the cutting blade zone.

  • Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, to activate the saw you have to depress a lever. Let the lever go and saw stops. This means if you fall, slip, blackout, have a heart attack or any other type of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.

  • Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.

Safety is like security: you cannot eliminate risk. But I feel this is a great example of how security can learn from others about how to take people into account.

Worse Than FailureCodeSOD: Work Items Incomplete

Owen J picked up a ticket complaining that users were not seeing all of their work items. Now, these particular “work items” weren’t merely project tasks, but vital for regulatory compliance. We’re talking the kinds of regulations that have the words “criminal penalties” attached to them.

What made it even more odd was that only one user was complaining. The user knew it was odd, their ticket even said, “Other people in my department aren’t having this issue, so maybe it’s something with my account?” Owen quickly eliminated their account as a likely source of the problem, but Owen also couldn’t duplicate the bug in test.

A quick check showed him that the big difference between test and production was the number of work items. With a few experiments, Owen was able to trigger an infinite loop when the number of work items was very low. He dove into the code.

    var fullList = _complianceBL.GetRegulatoryNotificationWorkItems(_sessionGuid).ToList().ToObservableCollection();
    Workitems = new ObservableCollection<RegulatoryNotificationWorkItem>();
    for (int i = 0; i < fullList.Count; i += fullList.Count/35) // Get the FIRST 30 items for TEST
    {
        Workitems.Add(fullList[i]);
    }
    Workitems.ToList().ForEach(wi => wi.Customer = new Customer(wi.CustomerID, this));

One of the senior devs had wanted to check something in the app, and wanted only a small pile of items for the test, so they wrote this block. Then, they checked it in. Then, when organizing the next code review, they saw they were responsible for the change, so they assumed it must be good.


Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October).
  • Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 9.75 extra hours for October).
  • Brian May did 12.25 hours.
  • Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining).
  • Emilio Pozuelo Monfort did 1 hour (out of 12.3 hours allocated + 2.95 remaining) and gave back the unused hours.
  • Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October).
  • Hugo Lefeuvre did 12 hours.
  • Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October).
  • Markus Koschany did 12.25 hours.
  • Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours).
  • Raphaël Hertzog did 12.25 hours.
  • Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours).
  • Thorsten Alteholz did 12.25 hours.

Evolution of the situation

The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor.

We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianKees Cook: Security bug lifetime

In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.

Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:
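
If you want to reproduce the per-SHA lookup, grep and plain git are enough. Here is a rough sketch of the idea (the git commands must be run inside a Linux kernel checkout; the v4.4 tag in the last command is only an example):

 # collect the break-fix lines (with their CVE filenames) from the Ubuntu CVE tracker checkout
 grep -r 'break-fix:' active/ retired/ ignored/ > breakfix.txt
 # find the first release tag that contains a given commit
 git describe --contains 3a50597de8635cd05133bd12c95681c82fe7b878
 # look up the date of a release tag
 git log -1 --format=%ai v4.4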

CVE lifetimes 2011-2016

And here it is zoomed in to just Critical and High:

Critical and High CVE lifetimes 2011-2016

The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

  • Critical: 2 @ 3.3 years
  • High: 34 @ 6.4 years
  • Medium: 334 @ 5.2 years
  • Low: 186 @ 5.0 years

This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.

While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or sometimes any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

(Edit: see my updated graphs that include CVE-2016-5195.)

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Planet DebianMichal Čihař: Gammu 1.37.90

Yesterday Gammu 1.37.90 was released. This release brings quite a lot of changes and is intended for testing purposes. Hopefully the stable 1.38.0 will follow soon, provided I don't get negative feedback on the changes.

Besides code changes, there is one piece of news for Windows users - a Windows binary comes with the release. This was possible to automate thanks to AppVeyor, which provides a CI service where you can download built artifacts. Without it, I would not be able to do this, as I don't have a single Windows computer :-).

Full list of changes:

  • Improved support for Huawei K3770.
  • API changes in some parameter types.
  • Fixed various Windows compilation issues.
  • Fixed several resource leaks.
  • Create outbox SMS atomically in FILES backend.
  • Removed getlocation command as we no longer fit into their usage policy.
  • Fixed call diverts on TP-LINK MA260.
  • Initial support for Oracle database.
  • Removed unused daemons, pbk and pbk_groups tables from the SMSD schema.
  • SMSD outbox entries now can have priority set in the database.
  • Added SIM IMSI to the SMSD status table.
  • Added CheckNetwork directive.
  • SMSD attempts to power on radio if disabled.
  • Fixed processing of AT unsolicited responses in some cases.
  • Fixed parsing USSD responses from some devices.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.


Planet DebianReproducible builds folks: Reproducible Builds: week 77 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 9 and Saturday October 15 2016:

Media coverage

  • despinosa wrote a blog post on Vala and reproducibility
  • h01ger and lynxis gave a talk called "From Reproducible Debian builds to Reproducible OpenWrt, LEDE" (video, slides) at the OpenWrt Summit 2016 held in Berlin, together with ELCE, held by the Linux Foundation.
  • A discussion on debian-devel@ resulted in a nice quotable comment from Paul Wise: "(Reproducible) builds from source (with continuous rechecking) is the only way to have enough confidence that a Debian user has the freedoms promised to them by the Debian social contract."
  • Chris Lamb will present a talk at Software Freedom Kosovo on reproducible builds on Saturday 22nd October.

Documentation update

After discussions with HW42, Steven Chamberlain, Vagrant Cascadian, Daniel Shahaf, Christopher Berg, Daniel Kahn Gillmor and others, Ximin Luo has started writing up more concrete and detailed design plans for setting SOURCE_ROOT_DIR for reproducible debugging symbols, buildinfo security semantics and buildinfo security infrastructure.

Toolchain development and fixes

Dmitry Shachnev noted that our patch for #831779 has been temporarily rejected by docutils upstream; we are trying to persuade them again.

Tony Mancill uploaded javatools/0.59 to unstable containing original patch by Chris Lamb. This fixed an issue where documentation Recommends: substvars would not be reproducible.

Ximin Luo filed bug 77985 to GCC as a pre-requisite for future patches to make debugging symbols reproducible.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • aodh/3.0.0-2 by Thomas Goirand.
  • eog-plugins/3.16.5-1 by Michael Biebl.
  • flam3/3.0.1-5 by Daniele Adriana Goulart Lopes.
  • hyphy/2.2.7+dfsg-1 by Andreas Tille.
  • libbson/1.4.1-1 by A. Jesse Jiryu Davis.
  • libmongoc/1.4.1-1 by A. Jesse Jiryu Davis.
  • lxc/1:2.0.5-1 by Evgeni Golov.
  • spice-gtk/0.33-1 by Liang Guo.
  • spice-vdagent/0.17.0-1 by Liang Guo.
  • tnef/1.4.12-1 by Kevin Coyner.

Some uploads have addressed some reproducibility issues, but not all of them:

Some uploads have addressed nearly all reproducibility issues, except for build path issues:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

101 package reviews have been added, 49 have been updated and 4 have been removed in this week, adding to our knowledge about identified issues.

3 issue types have been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Anders Kaseorg (1)
  • Chris Lamb (18)


  • h01ger has turned off the "Scheduled in testing+unstable+experimental" regular IRC notifications and turned them into emails to those running jenkins.d.n.
  • Re-add opi2a armhf node and 3 new builder jobs for a total of 60 build jobs for armhf. (h01ger and vagrant)
  • vagrant suggested adding a variation of init systems affecting the build, and h01ger added it to the TODO list.
  • Steven Chamberlain submitted a patch so that now all buildinfo files are collected (unsigned yet) at
  • Holger enabled CPU type variation (Intel Haswell or AMD Opteron 62xx) for i386. Thanks to for their great and continued support!


  • Increase memory on the 2 build nodes from 12 to 16gb, thanks to


We are running a poll to find a good time for an IRC meeting.

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.


CryptogramIntelligence Oversight and How It Can Fail

Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between the NSA and the FISA court meaning of the word "archived." (For the record, I would have defaulted to the NSA's interpretation, which feels more accurate technically.) But while the story is worth reading, what's especially interesting are the broader issues about how a nontechnical judiciary can provide oversight over a very technical data collection-and-analysis organization -- especially if the oversight must largely be conducted in secret.

From the article:

Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that "all collected data must be deleted after two years." NSA engineers must then make a list for the NSA attorneys:

  1. What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?

  2. What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it's largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?

  3. When does the timer start?

  4. What's the legally-relevant unit of measurement for timestamp computation -- a day, an hour, a second, a millisecond?

  5. If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? ....

  6. What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?

  7. What about hard copy printouts?

And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions -- in particular the answers -- require significant interpretations to be coordinated with DOJ and which determinations can be made internally?

Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?

In many places I have separated different kinds of oversight: are we doing things right versus are we doing the right things? This is very much about the first kind: is the NSA complying with the rules the courts impose on it? I believe that the NSA tries very hard to follow the rules it's given, while at the same time being very aggressive about how it interprets any kind of ambiguities and using its nonadversarial relationship with its overseers to its advantage.

The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.

TEDTEDWomen update: Diana Nyad and EverWalk Nation

At the age of 64, Diana Nyad became the first person to make the 110-mile swim from Havana to Key West without a shark cage. The swim took her 52 hours and 54 minutes to complete, a lifetime goal she had begun dreaming of in the late 1970s. She made her first attempt in 1978 at the age of 28. Then after decades of not swimming, she decided to try again in her 60s. After three unsuccessful attempts, she made it in 2013.

She talked about her epic swim at TEDWomen 2013. So far, more than 3.3 million people have watched her talk, titled “Never, Ever Give Up.”

Now, Diana is working on a new goal. Along with her Cuba swim expedition leader, Bonnie Stoll, Diana has founded EverWalk Nation, a bold movement to get people to pledge to walk three times a week. Research shows that Westerners’ sedentary lifestyle can be as damaging to our health as smoking, so Diana and Bonnie want to get us moving!

“We intend to amass over the next calendar year a million people pledging to walk three times a week.  We are going to spark a tsunami of walking in this country, turning America into a rabid nation of walkers,” she writes in a recent email.

And as you might expect from someone like Diana, the launch of the initiative is no small affair. This Sunday, on the third anniversary of Diana’s historic Cuba swim, she will lead a gang of walkers on a 145-mile walk down the California coast from Los Angeles to San Diego. EverWalk will sponsor more long walks around the country over the next several years, including from Boston to New York City and from Chicago to St. Louis. Participants can sign on for different distances depending on their abilities.

Registration is closed for this weekend’s walk, but if you’re in LA, cheer the group on at their kick-off event from 7 to 8:30 AM at the Santa Monica Civic Auditorium on Oct. 23.

Get Involved: To take the pledge, join the community and to find out about future walks, visit and be a leader in the biggest walking initiative in American history.

We’ll be checking in on Diana and Bonnie’s progress with a live update from their LA-to–San Diego walk at this year’s TEDWomen, held next week, Oct. 26–28, in San Francisco.

Worse Than FailureThe Case of the Missing Signal

Satellite dish in Austria

"My satellite connection is down," reported the user on the phone. "Can you help me?"

"Sure!" Omar, a support tech for a firm that supplied broadband by satellite, mentally suited up for his latest troubleshooting battle. The service he supported provided decent connection speeds to some remote geographic locations, but was far from perfect.

After collecting some basic data from the user, Omar proceeded to explain the usual suspects. "Most likely, your dish is either blocked or not pointed correctly. High winds and bad weather are enough to push it out of alignment. Heck, I've even heard of toys, lawn furniture, all kinds of stuff knocking into dishes," he said with a chuckle. "How's your weather been lately, sir?"

"The weather's been good," the user replied. "Everything was working fine last night."

"All right," Omar replied. "Well, the good news is that we have a portal we can use to see how the dish is doing and repoint it if necessary. Are you near your computer, sir? I'll help you open it."

He walked the user through the process. The portal reported that the dish was healthy, but completely misaligned.

"No problem," Omar said. "We can use this portal to repoint the dish."

Omar taught the user how to nudge the dish around with the portal's controls. Normally, this was a very fiddly process. In this case, no matter what the user did, the portal kept saying the dish was way off base.

While chewing on a pencil, Omar began to wonder whether the dish was looking for the wrong beam. There were 83 of them, after all. Sometimes, customers moved their dishes into another beam area by mistake.

But, no. According to the portal, neither the beam nor the hardware were problematic. The dish just wasn't getting a signal.

Omar frowned. "Is there any chance you could go look at the satellite and make sure it's where it's supposed to be, with nothing obstructing it?"

"Well, OK," the customer replied reluctantly.

A long, scratchy pause followed as the user moved around without muting his phone first. While Omar waited, he tried to think of further troubleshooting ideas, but he was getting desperate.

"Hey, I think there may be something in the line of sight after all!" the user reported with the breathless wonder of discovery.

"Oh?" Omar perked up, hopeful.

"We put the satellite on the quayside of the Clyde River. Looks like a ship has parked up overnight!"

"Oh." For a moment, simultaneous bafflement and relief stunned Omar. "Well, I ... think we're gonna have to wait for the ship to leave, then."

"Yeah, guess so!" the user replied cheerfully.

"If it still doesn't work afterward, call us back, OK?"

"Will do! Thanks!"

Fortunately, the problem never recurred. When he had some downtime, Omar quietly updated his team's Tech Support troubleshooting guide to include a new bullet point: CHECK FOR BLOCKING SHIPS.


Planet DebianEnrico Zini: debtags and aptitude forget-new

I like to regularly go through the new packages section in aptitude to see what interesting new packages entered testing, but recently that joyful moment got less joyful for me because of a barrage of obscurely named packages.

I have just realised that aptitude forget-new supports search patterns, and that brought back the joy.

I put this in a script that I run before looking for new packages in aptitude:

aptitude forget-new '?tag(field::biology)
                   | ?tag(devel::lang:ruby)
                   | ?tag(devel::lang:perl)
                   | ?tag(role::shared-lib)
                   | ?tag(suite::openstack)
                   | ?tag(implemented-in::php)
                   | ~n^node-'

The actual content of the search pattern is purely a matter of taste.

I'm happy to see how debtags becomes quite useful here, to keep my own user experience manageable as the size of Debian keeps growing.

Update: pabs suggested to use apt post-invoke hooks. For example:

        $ cat /etc/apt/apt.conf.d/99forget-new
        APT::Update::Post-Invoke { "aptitude forget-new '~sdebug'"; };

Planet DebianMJ Ray: Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

Planet DebianDirk Eddelbuettel: gettz 0.0.2

Release 0.0.2 of gettz is now on CRAN.

gettz provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo.

Windows is now no longer excluded, though it doesn't do anything useful yet. The main use of the package is still for Linux.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianRuss Allbery: pgpcontrol 2.5

pgpcontrol is the collection of the original signing and verification scripts that David Lawrence wrote (in Perl) for verification of Usenet control messages. I took over maintenance of it, with a few other things, but haven't really done much with it. It would benefit a lot from an overhaul of both the documentation and the code, and turning it into a more normal Perl module and supporting scripts.

This release is none of those things. It's just pure housekeeping, picking up changes made by other people (mostly Julien ÉLIE) to the copies of the scripts in INN and making a few minor URL tweaks. But I figured I may as well, rather than distribute old versions of the scripts.

You can tell how little I've done with this stuff by noting that they don't even have a distribution page on my web site. The canonical distribution site is, although I'm not sure if that site will pick up the new release. (This relies on a chain of rsync commands that have been moved multiple times since the last time I pushed the release button, and I suspect that has broken.) I'll ping someone about possibly fixing that; in the meantime, you can find the files on

Planet Linux AustraliaBinh Nguyen: Common Russian Media Themes, Has Western Liberal Capitalist Democracy Failed?, and More

After watching international media for a while (particularly those who aren't part of the standard 'Western Alliance') you'll realise that there are common themes: they are clearly against the current international order, believe that things will be better if changed, and want the rules changed (especially as they seem to have favoured some countries who went through the World Wars relatively

Sociological ImagesHow the Bureau of Justice Statistics Launched a White Supremacist Meme

TW: racism  and sexual violence; originally posted at Family Inequality.

I’ve been putting off writing this post because I wanted to do more justice both to the history of the Black-men-raping-White-women charge and the survey methods questions. Instead I’m just going to lay this here and hope it helps someone who is more engaged than I am at the moment. I’m sorry this post isn’t higher quality.

Obviously, this post includes extremely racist and misogynist content, which I am showing you to explain why it’s bad.

This is about this very racist meme, which is extremely popular among extreme racists.


The modern racist uses statistics, data, and even math. They use citations. And I think it takes actually engaging with this stuff to stop it (this is untested, though, as I have no real evidence that facts help). That means anti-racists need to learn some demography and survey methods, and practice them in public. I was prompted to finally write on this by a David Duke video streamed on Facebook, in which he used exaggerated versions of these numbers, and the good Samaritans arguing with him did not really know how to respond.

For completely inadequate context: For a very long time, Black men raping White women has been White supremacists’ single favorite thing. This was the most common justification for lynching, and for many of the legal executions of Black men throughout the 20th century. From 1930 to 1994 there were 455 people executed for rape in the U.S., and 89% of them were Black (from the 1996 Statistical Abstract):


For some people, this is all they need to know about how bad the problem of Blacks raping Whites is. For better informed people, it’s the basis for a great lesson in how the actions of the justice system are not good measures of the crimes it’s supposed to address.

Good data gone wrong

Which is one reason the government collects the National Crime Victimization Survey (NCVS), a large sample survey of about 90,000 households with 160,000 people. In it they ask about crimes against the people surveyed, and the answers the survey yields are usually pretty different from what’s in the crime report statistics – and even further from the statistics on things like convictions and incarceration. It’s supposed to be a survey of crime as experienced, not as reported or punished.

It’s an important survey that yields a lot of good information. But in this case the Bureau of Justice Statistics is doing a serious disservice in the way they are reporting the results, and they should do something about it. I hope they will consider it.

Like many surveys, the NCVS is weighted to produce estimates that are supposed to reflect the general population. In a nutshell, that means, for example, that they treat each of the 158,000 people (over age 12) covered in 2014 as about 1,700 people. So if one person said, “I was raped,” they would say, “1700 people in the US say they were raped.” This is how sampling works. In fact, they tweak it much more than that, to make the numbers add up according to population distributions of variables like age, sex, race, and region – and non-response, so that if a certain group (say Black women) has a low response rate, their responses get goosed even more. This is reasonable and good, but it requires care in reporting to the general public.

So, how is the Bureau of Justice Statistics’ (BJS) reporting method contributing to the racist meme above? The racists love to cite Table 42 of this report, which last came out for the 2008 survey. This is the source for David Duke’s rant, and the many, many memes about this. The results of Google image search gives you a sense of how many websites are distributing this:


Here is Table 42, with my explanation below:


What this shows is that, based on their sample, BJS extrapolates an estimate of 117,640 White women who say they were sexually assaulted, or threatened with sexual assault, in 2008 (in the red box). Of those, 16.4% described their assailant as Black (the blue highlight). That works out to 19,293 White women sexually assaulted or threatened by Black men in one year – White supremacists do math. In the 2005 version of the table these numbers were 111,490 and 33.6%, for 37,460 White women sexually assaulted or threatened by Black men, or:


Now, go back to the structure of the survey. If each respondent in the survey counts for about 1,700 people, then the survey in 2008 would have found 69 White women who were sexually assaulted or threatened, 11 of whom said their assailant was Black (117,640/1,700). Actually, though, we know it was less than 11, because the asterisk on the table takes you to the footnote below which says it was based on 10 or fewer sample cases. In comparison, the survey may have found 27 Black women who said they were sexually assaulted or threatened (46,580/1,700), none of whom said their attacker was White, which is why the second blue box shows 0.0. However, it actually looks like the weights are bigger for Black women, because the figure for the percentage assaulted or threatened by Black attackers, 74.8%, has the asterisk that indicates 10 or fewer cases. If there were 27 Black women in this category, then 74.8% of them would be 20. So this whole Black women victim sample might be as little as 13, with bigger weights applied (because, say, Black women had a lower response rate). If in fact Black women are just as likely to be attacked or assaulted by White men as the reverse, 16%, you might only expect 2 of those 13 to be White, and so finding a sample 0 is not very surprising. The actual weighting scheme is clearly much more complicated, and I don’t know the unweighted counts, as they are not reported here (and I didn’t analyze the individual-level data).

I can’t believe we’re talking about this. The most important bottom line is that the BJS should not report extrapolations to the whole population from samples this small. These population numbers should not be on this table. At best these numbers are estimated with very large standard errors. (Using a standard confident interval calculator, that 16% of White women, based on a sample of 69, yields a confidence interval of +/- 9%.) It’s irresponsible, and it’s inadvertently (I assume) feeding White supremacist propaganda.
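
(For anyone who wants to check that margin of error: using the usual normal approximation for a proportion, the standard error is √(.164 × .836 / 69) ≈ .045, and 1.96 × .045 ≈ .09, which is about 9 percentage points either side of the estimate.)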

Rape and sexual assault are very disturbingly common, although not as common as they were a few decades ago, by conventional measures. But it’s a big country, and I don’t doubt lots of Black men sexual assault or threaten White women, and that White men sexually assault or threaten Black women a lot, too – certainly more than never. If we knew the true numbers, they would be bad. But we don’t.

A couple more issues to consider. Most sexual assault happens within relationships, and Black women have interracial relationships at very low rates. In round numbers (based on marriages), 2% of White women are with Black men, and 5% of Black women are with White men, which – because of population sizes – means there are more than twice as many couples with Black-man/White-woman than the reverse. At very small sample sizes, this matters a lot. But we would expect there to be more Black-White rape than the reverse based on this pattern alone. Consider further that the NCVS is a household sample, which means that if any Black women are sexually assaulted by White men in prison, it wouldn’t be included. Based on a 2011-2012 survey of prison and jail inmates, 3,500 women per year are the victim of staff sexual misconduct, and Black women inmates were about 50% more likely to report this than White women. So I’m guessing the true number of Black women sexually assaulted by White men is somewhat greater than zero, and that’s just in prisons and jails.

The BJS seems to have stopped releasing this form of the report, with Table 42, maybe because of this kind of problem, which would be great. In that case they just need to put out a statement clarifying and correcting the old reports – which they should still do, because they are out there. (The more recent reports are skimpier, and don’t get into this much detail [e.g., 2014] – and their custom table tool doesn’t allow you to specify the perceived race of the offender).

So, next time you’re arguing with David Duke, the simplest response to this is that the numbers he’s talking about are based on very small samples, and the asterisk means he shouldn’t use the number. The racists won’t take your advice, but it’s good for everyone else to know.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park. He writes the blog Family Inequality and is the author of The Family: Diversity, Inequality, and Social Change. You can follow him on Twitter or Facebook.


Krebs on SecurityHackers Hit U.S. Senate GOP Committee

The national news media has been consumed of late with reports of Russian hackers breaking into networks of the Democratic National Committee. Lest the Republicans feel left out of all the excitement, a report this past week out of The Netherlands suggests Russian hackers have for the past six months been siphoning credit card data from visitors to the Web storefront of the National Republican Senatorial Committee (NRSC).

That’s right: If you purchased a “Never Hillary” poster or donated funds to the NRSC through its Web site between March 2016 and the first week of this month, there’s an excellent chance that your payment card data was siphoned by malware and is now for sale in the cybercrime underground.

News of the break-in comes from Dutch researcher Willem De Groot, co-founder and head of security at a Dutch e-commerce site. De Groot said the NRSC was one of more than 5,900 e-commerce sites apparently hacked by the same actors, and that the purloined card data was sent to a network of servers operated by a Russian-language Internet service provider incorporated in Belize.

De Groot said he dissected the malware planted on the NRSC’s site and other servers (his analysis of the malware is available here) and found that the hackers used security vulnerabilities or weak passwords to break in to the various e-commerce sites.

The researcher found the malware called home to specific Web destinations made to look like legitimate sites associated with e-commerce activity, such as jquery-cloud[dot]net, visa-cdn[dot]com, and magento-connection[dot]com.

“[The attackers] really went out of their way to pick domain names that look legitimate,” De Groot said.

The NRSC did not respond to multiple requests for comment, but a cached copy of the site’s source code from October 5, 2016 indicates the malicious code was on the site at the time (load this link, click “view source” and then Ctrl-F for “”).

A majority of the malicious domains inserted into the hacked sites by the malware map back to a few hundred Internet addresses assigned to a company called dataflow[dot]su.

Dataflow markets itself as an “offshore” hosting provider with presences in Belize and The Seychelles. Dataflow has long been advertised on Russian-language cybercrime forums as an offshore haven that offers so-called “bulletproof hosting,” a phrase used to describe hosting firms that court all manner of sites that most legitimate hosting firms shun, including those that knowingly host spam and phishing sites as well as malicious software.

De Groot published a list of the sites currently present at Dataflow. The list speaks for itself as a collection of badness, including quite a number of Russian-language sites selling synthetic drugs and stolen credit card data.

According to De Groot, other sites that were retrofitted with the malware included e-commerce sites for the shoe maker Converse as well as the automaker Audi, although he says those sites and the NRSC’s have been scrubbed of the malicious software since his report was published.

But De Groot said the hackers behind this scheme are continuing to find new sites to compromise.

“Last Monday my scans found about 5,900 hacked sites,” he said. “When I did another scan two days later, I found about 340 of those had been fixed, but that another 170 were newly compromised.”

According to the researcher’s analysis, many of the hacked sites are running outdated e-commerce software or content management software. In other cases, it appears the attackers simply brute-forced or guessed passwords needed to administer the sites.

Further, he said, the attackers appear to have inserted their malware into the e-commerce sites’ databases, rather than into the portion of the Web server used to store HTML and other components that make up how the site looks to visitors.

“That’s why I think this has remained under the radar for a while now,” De Groot said. “Because some companies use filesystem checkers so that if some file changes on the system they will get a notice that alerts them something is wrong.”

Unfortunately, those same checking systems generally aren’t configured to look for changes in the site’s database files, he explained, since those are expected to change constantly — such as when a new customer order for merchandise is added.

De Groot said he was amazed at how many e-commerce merchants he approached about the hack dismissed the intrusion, reasoning that they employed secure sockets layer (SSL) technology that encrypted the customers’ information end-to-end.

What many Webmasters fail to realize is that just as PC-based trojan horse programs can steal data from Web browsers of infected victims, Web-based keylogging programs can do the same, except they’re designed to steal data from Web server applications.

PC Trojans siphon information using two major techniques: snarfing passwords stored in the browser, and conducting “form grabbing” — capturing any data entered into a form field in the browser before it can be encrypted in the Web session and sent to whatever site the victim is visiting.

Web-based keyloggers also can do form grabbing, ripping out form data submitted by visitors — including names, addresses, phone numbers, credit card numbers and card verification code — as customers are submitting the data during the online checkout process.

These attacks drive home one immutable point about malware’s role in subverting secure connections: Whether resident on a Web server or on an end-user computer, if either endpoint is compromised, it’s ‘game over’ for the security of that Web session.

With PC banking trojans, it’s all about surveillance on the client side pre-encryption, whereas what the bad guys are doing with these Web site attacks involves sucking down customer data post- or pre-encryption (depending on whether the data was incoming or outgoing).

Planet DebianArturo Borrero González: nftables in Debian Stretch

Debian - Netfilter

The next Debian stable release is codenamed Stretch, which I would expect to be released in less than a year.

The Netfilter Project has been developing nftables for years now, and the status of the framework has been improved to a good point: it’s ready for wide usage and adoption, even in high-demand production environments.

The last released version of nft was 0.6, and the Debian package was updated just a day after Netfilter announced it.

With the 0.6 version the software framework reached a good state of maturity, and I myself encourage users to migrate from iptables to nftables.

In case you don’t know about nftables yet, here is a quick reference:

  • it’s the tool/framework meant to replace iptables (also ip6tables, arptables and ebtables)
  • it integrates advanced structures which allow to arrange your ruleset for optimal performance
  • all the system is more configurable than in iptables
  • the syntax is much better than in iptables
  • several actions in a single rule
  • simplified IPv4/IPv6 dual stack
  • less kernel updates required
  • great support for incremental, dynamic and atomic ruleset updates

To run nftables in Debian Stretch you need several components:

  1. nft: the command line interface
  2. libnftnl: the nftables-netlink library
  3. linux kernel: at least 4.7 is recommended

A simple aptitude run will put your system ready to go, out of the box, with nftables:

root@debian:~# aptitude install nftables

Once installed, you can start using the nft command:

root@debian:~# nft list ruleset

A good starting point is to copy a simple workstation firewall configuration:

root@debian:~# cp /usr/share/doc/nftables/examples/syntax/workstation /etc/nftables.conf

And load it:

root@debian:~# nft -f /etc/nftables.conf

Your nftables ruleset is now firewalling your network:

root@debian:~# nft list ruleset
table inet filter {
        chain input {
                type filter hook input priority 0;
                iif lo accept
                ct state established,related accept
                ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept
                counter drop
        }
}

Several examples can be found at /usr/share/doc/nftables/examples/.
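
You can also build and modify rulesets incrementally from the shell, which is handy while experimenting. A small sketch, not taken from the packaged examples (the table name and port list are just an illustration):

root@debian:~# nft 'add table inet demo'
root@debian:~# nft 'add chain inet demo input { type filter hook input priority 0; }'
root@debian:~# nft 'add rule inet demo input tcp dport { 22, 80, 443 } ct state new counter accept'

Each command is applied atomically, and the last one shows several things (a set match, conntrack state, a counter and the accept verdict) combined in a single rule.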

A simple systemd service is included to load your ruleset at boot time, which is disabled by default.
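
To enable it (assuming the unit keeps the name nftables.service used by the Debian package):

root@debian:~# systemctl enable nftables.service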

If you are running Debian Jessie and want to give it a try, you can use nftables from jessie-backports.

If you want to migrate your ruleset from iptables to nftables, good news. There are some tools in place to help in the task of translating from iptables to nftables, but that doesn’t belong to this post :-)


The nano editor includes nft syntax highlighting. What are you waiting for to use nftables?

Google AdsenseHave you experienced an unauthorized access issue with your AdSense Account?

Unfortunately, even for AdSense publishers, there’s always the risk of an unauthorized source compromising your secure login credentials. In these instances, you might be locked out of your AdSense account. Here’s what you can do to recover your account and avoid the same issue in the future:

For starters, these triggers can help you identify if your account has been compromised.  
  • You’ve noticed suspicious account activity (for example: there are new users that you haven’t granted access to; the payment details have changed without your permission; your security settings have been updated; and your email notification settings have changed).
  • You cannot login to your AdSense account.
 If you’ve found that your account has been compromised, follow these steps to resolve the issue:
  1. Run a malware scan on your devices
  2. Visit our Login Troubleshooter. If you’re locked out of your account, the troubleshooter will help you recover your Google Account. Once you’ve recovered your Google Account: 
  3. Then, the troubleshooter will take you to the Account login issues form, which will direct you to an AdSense account issues specialist.
    • The specialist will start an investigation and communicate next steps (including the investigation report and reimbursement options). 
    • For a speedy resolution, be sure to include the following with the form as accurately as possible: 
      • Proof of your identity and address using acceptable documents.
      • Proof of a recent AdSense payment, such as a copy of a check, Western Union receipt or bank statement clearly showing a recent AdSense EFT deposit.
      • The URL of a test page of your website. This is to prove that you own and manage the domain listed in your account.
It’s important to note that the Account login issues form cannot be used to report disputes between authorized users. The administrator of the account is responsible for all user permissions. In general, it’s a best practice to remove former employees and inactive users from your account to help prevent unauthorized changes.  

Unfortunately, in some cases with ongoing security concerns, account reinstatement may not be possible. The AdSense specialist team will let you know if this is the case. 

We understand that compromised login credentials may be a huge problem for you and your business. Bookmark these valuable help resources to help retrieve a hijacked account:

If you have questions about this topic, join us on Twitter or Google+ for one of our #AskAdSense office hour sessions. 

Posted by: Hievda Ugur, from the AdSense Team

Mark ShuttleworthThe mouse that jumped

The naming of Ubuntu releases is, of course, purely metaphorical. We are a diverse community of communities – we are an assembly of people interested in widely different things (desktops, devices, clouds and servers) from widely different backgrounds (hello, world) and with widely different skills (from docs to design to development, and those are just the d’s).

As we come to the end of the alphabet, I want to thank everyone who makes this fun. Your passion and focus and intellect, and occasionally your sharp differences, all make it a privilege to be part of this body incorporate.

Right now, Ubuntu is moving even faster to the centre of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. From the launch of our kubernetes charms which make it very easy to operate k8s everywhere, to the fun people seem to be having with snaps at for shipping bits from cloud to top of rack to distant devices, we love the pace of change and we change the face of love.

We are a tiny band in a market of giants, but our focus on delivering free software freely together with enterprise support, services and solutions appears to be opening doors, and minds, everywhere. So, in honour of the valiantly tiny leaping long-tailed over the obstacles of life, our next release which will be Ubuntu 17.04, is hereby code named the ‘Zesty Zapus’.


Planet DebianThomas Lange: FAI 5.2 is going to the cloud

The newest version of FAI, the Fully Automatic Installation tool set, now supports creating disk images for virtual machines or for your cloud environment.

The new command fai-diskimage uses the normal FAI process for building disk images of different formats. An image with a small set of packages can be created in less than 50 seconds, a Debian XFCE desktop in nearly two minutes, and a complete Ubuntu 16.04 desktop image in about four minutes.

New FAI installation images for CD and USB stick are also available.

Update: Add link to announcement

FAI cloud

CryptogramVirtual Kidnapping

This is a harrowing story of a scam artist that convinced a mother that her daughter had been kidnapped. More stories are here. It's unclear if these virtual kidnappers use data about their victims, or just call people at random and hope to get lucky. Still, it's a new criminal use of smartphones and ubiquitous information.

Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.

Worse Than FailureCodeSOD: Excellent Test

These days, you aren’t just doing development. Your development has to be driven. Business driven. Domain driven. Test driven.

TDD is generally held up as a tool for improving developer efficiency and code quality. As it turns out, scholarly research doesn’t support that: there’s no indication that TDD has any impact at all.

I have to wonder, though, if maybe that’s because people are still writing tests like this one, which Tomas received from an offshore contractor.

    public void CompareValues(string con, int pf, int pt, string tableVal, int len, string Ledpos1)
    {
        string databasevalue1 = (con.Substring(pf-1,len)).Trim();
        string tableval = tableVal.Trim();
        switch (pf)
        {
            case 61:
                Assert.AreEqual(databasevalue1, "00000" + Ledpos1, " Values are not equal ");
                break;
            case 76:
                Console.WriteLine("Not check as it is random values");
                break;
            case 164:
            case 160:
                string todayDate = DateTime.Now.ToString("yyyyMMdd");
                Assert.AreEqual(todayDate, databasevalue1, "Both dates are not equal");
                break;
            case 270:
                string tOnedate = DateTime.Now.AddDays(1).ToString("yyyyMMdd");
                Assert.AreEqual(tOnedate, databasevalue1, "Both dates are not equal");
                break;
            default:
                Console.WriteLine(pf + pt + len + tableval + databasevalue1);
                Assert.AreEqual(tableval, databasevalue1, "Both values are not equal");
                break;
        }

        Console.WriteLine(pf + pt + len + tableval + databasevalue1);
    }

“Excellent”, indeed.


Planet Linux AustraliaTridge on UAVs: CanberraUAV Outback Challenge 2016 Debrief

I have finally written up an article on our successful Outback Challenge 2016 entry

The members of CanberraUAV are home from the Outback Challenge and life is starting to return to normal after an extremely hectic (and fun!) time preparing our aircraft for this year's challenge. It is time to write up our usual debrief article to give those of you who weren't able to be there some idea of what happened.

For reference here are the articles from the 2012 and 2014 challenges:

Medical Express

The Outback Challenge is held every two years in Queensland, Australia. As the challenge was completed by multiple teams in 2014 the organisers needed to come up with a new challenge. The new challenge for 2016 was called "Medical Express" and the challenge was to retrieve a blood sample from Joe at a remote landing site.

The back-story is that poor Outback Joe is trapped behind flood waters on his remote property in Queensland. Unfortunately he is ill, and doctors at a nearby hospital need a blood sample to diagnose his illness. A UAV is called in to fly a 23km path to a place where Joe is waiting. We only know Joe's approximate position (within 100 meters) so first off the UAV needs to find Joe using an on-board camera. After finding Joe the aircraft needs to find a good landing site in an area littered with obstacles. The landing site needs to be more than 30 meters from Joe (to meet CASA safety requirements) but less than 80 meters (so Joe doesn't have to walk too far).

The aircraft then needs to perform an automatic landing, and then wait for Joe to load the blood sample into an easily accessible carrier. Joe then presses a button to indicate he is done loading the blood sample. The aircraft needs to wait for one minute for Joe to get clear, and then perform an automatic takeoff and flight back to the home location to deliver the blood sample to waiting hospital staff.

That story hides a lot of very challenging detail. For example, the UAV must maintain continuous telemetry contact with the operators back at the base. That needs to be done despite not knowing exactly where the landing site will be until the day before the challenge starts.

Also, the landing area has trees around it and no landing strip, so a normal fixed wing landing and takeoff is very problematic. The organisers wanted teams to come up with a VTOL solution, and in this they were very successful, kickstarting a huge effort to develop the VTOL capabilities of multiple open source autopilot systems.

The organisers also provided a strict flight path that the teams have to follow to reach the search area where Joe is located. The winding path over the rural terrain of Dalby is strictly enforced, with any aircraft breaching the geofence required to immediately and automatically terminate by crashing into the ground.

The organisers also gave quite a wide range of flight distance and weather conditions that the teams had to be able to cope with. The distance to the search area could be up to 30km, meaning a round trip distance of 60km without taking into account all the time spent above the search area trying to find Joe. The teams had to be able to fly in up to 25 knots average wind on the ground, which could mean well over 30 knots in the air.

The mission also needed to be completed in one hour, including the time spent loading the blood sample and circling above Joe.

Planet DebianJaldhar Vyas: Something Else Will Be Posted Soon Also.

Yikes, today was Sharad Purnima, which means there are about two weeks to go before Diwali and I haven't written anything here all year.

OK new challenge: write 7 substantive blog posts before Diwali. Can I manage to do it? Let's see...

Planet DebianRussell Coker: Improving Memory

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory, and I think it’s good to teach kids a variety of skills, many of which they won’t need until they are older, as you never know which kids will need which skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful, but it seems to me that the way navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The fact that Google Maps tends to give more confusing routes (i.e. routes that vary by the day and routes which take every shortcut) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.
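
For what it’s worth, generating an XKCD-style passphrase to memorise takes one line on most Linux systems. The sketch below assumes a word list at /usr/share/dict/words (usually provided by a “words” package); it is an illustration, not a recommendation tuned to any particular threat model.

    # Pick four random dictionary words as a passphrase candidate (illustrative only).
    shuf -n 4 /usr/share/dict/words | paste -sd ' ' -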

One interesting corner-case of passwords is ATM PINs. The Wikipedia page about PINs states that 4-12 digits can be used [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?


Planet DebianThomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid

OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn’t disrupt Sid users too much, but 38 packages wouldn’t build without it. Thanks to Santiago Vila for pointing at the issue here.

As of writing, a lot of the Newton packages haven’t migrated to Testing yet. The migration has been happening in a very messy way. I’d love to improve this process, but I’m not sure how, other than filing RC bugs against 250 packages (which would be painful to do) so that they would migrate at once. Suggestions welcome.

Bye bye Jenkins

For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure and to use CI/CD the way OpenStack upstream does. This is now done, and Jenkins is gone as of OpenStack Newton.

Current status

As of August, almost all of the packages’ Git repositories had been uploaded to OpenStack Gerrit, and the builds now happen in the OpenStack infrastructure. We’ve been able to build all of the OpenStack Newton Debian packages using this system. The resulting non-official jessie backports repository has also been validated using Tempest.

Goodies from Gerrit and upstream CI/CD

It is very nice to have it built this way, as we will be able to maintain a full CI/CD in the upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from third-party plugin vendors (networking driver vendors, for instance) or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

The upstream infra: nodepool, zuul and friends

The OpenStack infrastructure has already been described by Ian Wienand, so I won’t describe it again; he did a better job than I ever would.

How it works

All source packages are stored in Gerrit with the “deb-” prefix. This is in order to avoid conflicts with upstream code, and to make packaging repositories easy to locate. For example, you’ll find the Nova packaging under the deb-nova repository. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of each repository exists on every cloud where we have compute resources): one for the actual deb-* builds, under “jessie-newton”, and one for the automatic backports, maintained in the deb-auto-backports Gerrit repository.

We’re using a “git tag” based workflow. Every Gerrit repository contains all of the upstream branches, plus a “debian/newton” branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using “git archive”, then used by sbuild to produce binaries. To package a new upstream release, one simply needs to “git merge -X theirs FOO” (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then “git commit -a --amend”, and finally “git review”. At this point, the OpenStack CI will build the package. If it builds correctly, a core reviewer can approve the “merge commit”, the patch is merged, and the package is built and the binary package published on the OpenStack Debian package repository.
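
For illustration, here is roughly what importing a new upstream release might look like from a packager’s shell. This is a sketch, not project documentation: the repository name (deb-nova), the tag (14.0.1), the Debian version and the use of dch to edit debian/changelog are assumptions made for the example.

    # Hypothetical walk-through of the "git tag" based workflow described above.
    cd deb-nova                               # an existing checkout of the Nova packaging repository
    git checkout debian/newton
    git merge -X theirs 14.0.1                # merge the upstream tag, keeping our debian/ folder
    dch -v 14.0.1-1 "New upstream release."   # one way to make debian/changelog match the tag
    git commit -a --amend                     # fold the changelog update into the merge commit
    git review                                # send the change to Gerrit; the CI then builds the package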

Maintaining backports automatically

The automatic backports are maintained through a Gerrit repository called “deb-auto-backports”, which contains a “packages-list” file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and “dpkg --compare-versions” magic, the packages-list is used to compare what’s in the Debian archive with what we have in the jessie-newton-backports repository. If the version in our repository is lower, or if the package doesn’t exist there, a build is triggered. It is possible to backport from any Debian release (using the -d flag in the “packages-list” file), and we can even use jessie-backports to just rebuild a package. I also had to write a hack to download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we’ll try to never use it, and rebuild packages when possible).
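
The comparison logic can be sketched roughly as below. This is not the actual deb-auto-backports implementation: the use of rmadison (in place of the madison-lite setup described above), the reprepro invocation and the repository path are all assumptions made for the example.

    # Sketch: for each package in packages-list, decide whether a backport build is needed.
    while read -r pkg; do
        # Version currently in the Debian archive (rmadison used here purely for illustration).
        archive_ver=$(rmadison -s sid "$pkg" | awk -F'|' 'NR==1 {gsub(/ /, "", $2); print $2}')
        # Version currently in our jessie-newton-backports repository (path is made up).
        local_ver=$(reprepro -b /srv/debian-repo list jessie-newton-backports "$pkg" | awk 'NR==1 {print $NF}')
        if [ -z "$local_ver" ] || dpkg --compare-versions "$local_ver" lt "$archive_ver"; then
            echo "backport needed: $pkg ($local_ver -> $archive_ver)"
            # ... trigger the backport build job here ...
        fi
    done < packages-list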

The nice thing about this system is that we don’t need to worry much about keeping packages up to date: the script does that for us.

Upstream Debian repositories are NOT for production

The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. That is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience download failures (hash or file size mismatches and such). Also, the functional tests aren’t yet wired into the CI/CD in the OpenStack infra, and therefore we cannot yet guarantee that the packages are usable.

Improving the build infrastructure

There are a number of things we could do to improve the build process. Here is a list of what we want to do:

  • Get sbuild pre-set-up in the Jessie VM images, so we can save 3 minutes per build. This means writing a diskimage-builder element for sbuild.
  • Have the infrastructure use a state-of-the-art Debian ftp-sync mirror, instead of the current reprepro mirroring, which produces an unsigned repository that we can’t use for sbuild-createchroot. This will improve things a lot, as currently there are lots of build failures because of mirror inconsistencies (and these are a very frustrating loss of time).
  • For each packaging change, there are 3 builds: the check job, the gate job, and the POST job. This is a waste of time and resources, as we only need to build a package once. It will hopefully be possible to fix this when the OpenStack infra team deploys Zuul v3.

Generalizing to Debian

During Debconf 16, I had very interesting talks with the DSA (Debian System Administrators) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC now exists, within the OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should first read how to contribute to OpenStack, and then simply send your patch with “git review”.

This system, however, currently only fits the “git tag” based packaging workflow. We’d have to do a little bit more work to make it possible to use pristine-tar (basically, allowing pushes to the upstream and pristine-tar branches without any CI job connected to the push).

Dear DSA team, now that we have a nice PoC that is working well, and on which the OpenStack packaging team is maintaining hundreds of packages, shall we try to generalize it and provide such an infrastructure for every packaging team and DD?

CryptogramSecurity Economics of the Internet of Things

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the Internet -- sometimes in the millions -- are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it's a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender's capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can't get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

Even worse, most of these devices don't have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can't update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that's a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

Slashdot thread.

Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

CryptogramMurder Is a Relatively Recent Evolutionary Strategy

Interesting research in Nature.

The article is behind a paywall, but here are five summaries of the research.

EDITED TO ADD (10/16): Non-paywalled copy of the paper.

Planet Linux AustraliaClinton Roy: In Memory of Gary Curtis

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One  of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand, we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.




Sam VargheseDonald Trump has sane supporters too

I have a friend who has been living in the US for the last 30 years. He is an intelligent, rational person who is widely read. We have been close friends for the last 37 years.

He is one of the people who will be voting for Donald Trump on November 8. He went to the US on an H1-B visa.

He wrote what follows, well before Trump’s comments on women came to light. Read, judge if you wish, but ponder: if reasonable, sensible, middle-class people come to these conclusions, there must be something terribly wrong with the social system in the US.


Religion by itself is not the problem – all religions are, by nature, esoteric and mysterious and, ultimately, a matter of faith. It is the clash of cultures originating from these religions that causes all strife and conflict in our society and all societies on planet earth.

Donald Trump is bringing some strong medicine to this country, and the party elites are scared – that their gravy train is coming to an end, that better trade deals will result in good jobs for minorities, that lobbyists will have to downsize, that waste, fraud and abuse in no-bid military contracts and pharmaceutical purchases are going to stop, and that the immigration rules which determine who, and how many people, can come into this country will undergo some major changes.

“America will be the country we all believe in, we all dream of….” said Chef Andres. The only way to keep that intact is to stop a million new immigrants coming into this nation every year. Already our cities are getting crowded, our roads clogged, and our countryside (what is left of it) is fast disappearing. This has a huge impact on our jobs, our culture, our values, and our standard of living. That is why Trump wants all immigration halted until we come up with a sane immigration policy.

There was a time when this country could use a lot of immigrants. That time has passed. Allowing a million immigrants each year is mass migration. That is like taking over a country without a war. If you are supporting Hillary Clinton and open borders, then we no longer need a military. What is there to protect?

By the same token, Clinton and all her supporters should get rid of the doors in their houses, so that anyone can come in and stay and help themselves to anything they want. You have to get real.

This election is about the choice between open borders and controlled borders, between having a country and not having one. It is also about the choice between insider career politicians who are beholden to big moneyed special interest groups, and Trump, the outsider, who cannot be bought. Godspeed Trump!

It is time for change. Time for a non career politician to step in and stop the 1 million immigrants flooding into this country every year; eliminate ISIS, and build safe havens so like-minded people can live in their own countries; and work the room with our elected representatives and develop consensus around the public agenda and not around what Wall Street wants.

In 2014, 1.3 million foreign-born individuals moved to the United States, an 11 per cent increase from 1.2 million in 2013. India was the leading country of origin, with 147,500 arriving in 2014, followed by China with 131,800, and Mexico with 130,000.

Trump is the only one standing between us and the millions of new immigrants who are coming in each year and having an impact on our jobs, our values, and our culture. Vote for change!

Automation is one reason, and immigration (H1-B abuse) and outsourcing are the other main reasons for job losses in the US.

Clinton has no clue on how to address the former, and — considering her donor class — she has no interest in addressing the latter.

Good negotiators are not necessarily good debaters. How else would you explain Barack Obama who is a gifted public speaker, but could not for the life of him work the room with his opposition in it? If debating skills were the basis for selecting a president, the people would have voted for Ted Cruz who is a better debater than Hillary. No, what this country needs is a good negotiator, not a debate champion.

Thousands of people are ripping into Trump for not speaking like a politician. And yet deep down inside they know that he is the only one standing between them and the millions of new immigrants who are coming in each year and taking over their jobs, their values, and their culture.  Vote for change.

Trump’s favourite Bible verse is 1 John 4:12: “No one has ever seen God; but if we love one another, God remains in us, and His love is perfected in us.”

Notable takeaways from a recent Trump speech:

“Hillary Clinton is the last line of defence for a failed political establishment and she does not have you at heart. The Clinton campaign exists for one reason, and that is to continue rigging the system. We will break up the industrial-media complex. That is why I am running – I will be your greatest voice ever.

“They go to the same restaurants, they go to the same conferences, they have the same friends and connections, they write cheques to the same thinktanks, and produce exactly the same reports. It’s a gravy train and it just keeps on flowing. On November 8 that special interest gravy train is coming to a very abrupt end.

“The insiders in Washington and Wall Street look down at the hardworking people like you, like so many people in this state, like so many people in this county. But you are the backbone, you are the heart and soul of the nation. Don’t ever forget it folks.”

The reasons why I am voting for Trump, shown in the order in which things need to be fixed in this country:

Issue 1. Our rigged economic and political systems – Issues 2-6 have remained unsolved for decades because the special interests who control our rigged system proactively stop any solution from being adopted. Only Trump can fix this.

Issue 2. The flagging economy – caused by both automation and globalisation. Prosperity can be more widely spread if we negotiate good deals both within and outside this country. Trump will bring good negotiators into our government.

Issue 3. Unfettered immigration – bringing in a million immigrants a year is transforming this nation into a Third World country. Everything from birthright citizenship to safe zones for war-torn countries must be examined.

Issue 4. Minorities – we need to provide a hand up to minority communities. Wasted and under-utilised human resources are one of the primary reasons/sources for crime in many of our cities.

Issue 5. Healthcare – costs need to be brought down and basic services made available to every citizen. There is enough money in the system, it just needs to be managed better.

Issue 6. Debt overhang – we need to write down the debts — both credit card and college loans — held by ordinary citizens. This will also help the economy (Issue 2).

The elimination of waste, fraud and abuse will cut across many of the above issues. That, and bringing competent people into our government who can deliver on the public agenda, will surely make our country great again. I cannot wait for Trump to arrive in our capital.


Google Adsense4 steps to build a strong brand experience

Exposing your audience to a rock solid brand leaves a lasting impression on your site’s visitors, and helps separate you from your competitors. To establish brand consistency across multiple touch points, it’s important to create and stick to guidelines unique to your brand.

Building a strong brand experience comes down to four things:

1. Find your voice

A brand’s voice means more than just the tone you use in your content and communications. It also applies to style, colors, and graphics. Is your brand bubbly, bright, and fun? Or is it straight to the point with clean lines and a matter-of-fact tone? Often times, the type of product or services you're selling as well as your company philosophy can help you determine an appropriate tone. There’s no secret for determining what an audience will respond best to, as all styles can be effective in their own way. So choose what works for you and your creative vision.

2. Be consistent

Once you’ve laid the groundwork for what defines your brand, it’s important to stick to these principles. This applies to your website, emails, social media posts, and any other place users come into contact with your brand. Taking the time to stick to an easy to read font, finding a color scheme that draws the eye and guides your readers, or having consistent verbiage can do wonders to further cement your brand’s presence and make it memorable.

3. Know your audience

While it’s important to decide what your brand is, it’s also important to know your audience, their interests, and how they prefer to communicate. For example, if you’re targeting busy, high-level decision makers, they may prefer something short and sweet—perhaps bullet points are the way to go. If you’re targeting creative individuals, it may be worth investing in a personalized logo and site. Highly visual assets such as videos would also be a great way to go. The more you know and cater to your intended audience, the more successful your brand will be.

To understand your users’ interests, use Google Analytics to view your bounce rates, time on pages, and pageviews—three indicators of user engagement. Understand where you stand in comparison to other sites and, if needed, improve on these rates by creating a stronger connection between your site and your audience, i.e. creating content relevant to your audience’s interests.

4. Prove your worth

Having a particular value that you provide to your customers (not to be mistaken for price) can help separate your brand from competitors. For instance, what do you provide to your customers that is different or special? This can include everything from innovative products to great customer service and can also be an emotional value (think Kleenex being associated with comfort and support). Just make sure to deliver on any and all promises made on your site.

To learn more about how to develop your user experience, check out the AdSense Guide to Audience Engagement.

Posted by Jay Castro
From the AdSense team

CryptogramFriday Squid Blogging: Barramundi with Squid Ink Risotto

Squid ink risotto is a good accompaniment for any mild fish.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Krebs on SecuritySelf-Checkout Skimmers Go Bluetooth

This blog has featured several stories about payment card skimming devices designed to be placed over top of credit card terminals in self-checkout lanes at grocery stores and other retailers. Many readers have asked for more details about the electronics that power these so-called “overlay” skimmers. Here’s a look at one overlay skimmer equipped with Bluetooth technology that allows thieves to snarf swiped card data and PINs wirelessly using nothing more than a mobile phone.

The rather crude video below shows a Bluetooth-enabled overlay skimmer crafted to be slipped directly over top of Ingenico iSC250 credit card terminals. These Ingenico terminals are widely used at countless U.S.-based merchants; earlier this year I wrote about Ingenico overlay skimmers being found in self-checkout lanes at some WalMart locations.

The demo video briefly shows the electronics hidden on the back side of the overlay skimmer, but most of the sales video demonstrates the Bluetooth functionality built into the device. The video appears to show the skimmer seller connecting his mobile phone to the Bluetooth elements embedded in the skimmer. The demo continues on to show the phone intercepting PIN pad presses and card swipe data.

Your basic Bluetooth signal has a range of approximately 100 meters (328 feet), so theoretically skimmer scammers who placed one of these devices over top of a card terminal in a store’s self-checkout lane could simply sit in a vehicle parked outside the storefront and suck down card data wirelessly in real-time. However, that kind of continuous communication likely would place undue strain on the skimmer’s internal battery, thus dramatically decreasing the length of time the skimmer could collect card and PIN data before needing a new battery.

Rather, such a skimmer would most likely be configured to store the stolen PIN and card data until such time as its owner skulks within range of the device and instructs it to transmit the stored card data.

Concerned about whether the Ingenico terminals at your favorite store may be compromised by one of these overlay skimmers? Turns out, payment terminals retrofitted with overlay skimmers have quite a few giveaways if you know what to look for. Learn how to identify one, by checking out my tutorial, How to Spot Ingenico Self-Checkout Skimmers.

If you liked this piece, have a look at the other skimmer stories in my series, All About Skimmers. And if you’re curious about how card data stolen through skimmers like these are typically sold, take a peek inside a professional carding shop.

The red calipers in the image above show the size differences in various noticeable areas of the case overlay on the left compared to the actual ISC250 on the right. Source: Ingenico.


Thanks to Alex Holden of Hold Security LLC for sharing the above video footage.

TEDOne TED speaker becomes a college president, another designs a beautiful new perfume bottle …


The TED community has been very busy over the past few weeks. Below, some newsy highlights.

Wellesley has a new president. Dr. Paula Johnson, a longtime champion of women’s health and health policy, is Wellesley College’s 14th president. The celebrations surrounding her inauguration focused on the theme of Intersections; in her inauguration address, she reflected on how she will incorporate the theme into her administration. “How do we unleash the riches embedded in crucial intersections—among people, among ideas, across communities and cultures, through time and space?” she asked.  (Watch Paula’s TED Talk)

The fight for gender balance in tech. Melinda Gates has her eyes set on a pervasive problem in tech: the underrepresentation of women. Working outside the Bill and Melinda Gates Foundation for this particular initiative, Melinda is building up a personal office that will dedicate resources and attention to the issue. She’ll begin by going into learning mode, up to two years of “learning, collecting information, talking to lots of experts, and looking at what research is out there,” she told Backchannel. As part of the interview, readers were asked to comment in response to the question, “How should Gates spend her money and her time?” Their comments were then assembled into an open letter. (Watch Melinda’s TED Talk)

How to separate fact from fiction. Our digital lives are a constant onslaught of information, but do you know how to distinguish between the reliable and the misleading? In A Field Guide to Lies, Daniel Levitin explains how faulty arguments and information permeate the sources that we rely on for information, everything from Wikipedia to news organizations to the government. The solution? Become more infoliterate, he says, and learn how to think more critically about the information that you receive rather than passively accepting it as true. (Watch Daniel’s TED Talk)

America’s favorite poet returns. Two-time US poet laureate Billy Collins released his twelfth collection of poetry on October 4. With close to 50 new poems, The Rain in Portugal looks at everything from beauty and death, to cats and dogs, with the poet’s usual wit and accessibility.  (Watch Billy’s TED Talk)

A tribute to his roots. John Legend helped pay for the renovation of a historic high school theater in Springfield, Ohio, where he performed as a child. “I knew how important that was for me when I was in high school and middle school and throughout my time in Springfield, and how important the arts were and performance spaces were to me,” John told the Springfield News-Sun. He joined the city on October 9 for the ribbon-cutting ceremony of what is now called the John Legend Theater. (Watch John’s TED Talk)

The intricate design of perfume bottles. While designer Thomas Heatherwick is known for large-scale projects like the UK pavilion at the Shanghai Expo in 2010 and the proposed Pier55 park in Manhattan, this time he’s dialed it down to the size of a perfume bottle. His intricate bottles, for French shoe designer Christian Louboutin, are beautiful pieces of twisted glass. The bottles were an expansion of Heatherwick’s interest in creating folded surfaces with different materials, seen in such work as Twisted Cabinet and Paternoster Vents, and he was excited to give it a try with twisted glass. (Watch Thomas’ TED Talk)

The new how-to. Babble co-founder Alisa Volkman is launching a new company called Knowsy. The company makes short how-to videos on everything from Microsoft Word shortcuts to table setting. The videos are short, visually appealing, and free. Rather than making money from advertisers, the company may at some point charge a subscription fee for some of the videos, or sell bundles to corporate clients to help them teach specific skills to their employees, reports Recode. (Watch Alisa’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

Chaotic IdealismQ&A: Who would win if all the Disney princesses fought, Hunger Games style?

Q: Who would win if all the Disney princesses fought, Hunger Games style?

Putting these girls into a Hunger Games-style arena simply wouldn’t end with a Hunger Games-style brawl.

They’re all protagonists. They’re all strongly Good, philosophically and in practical terms.

Mulan is a soldier, so she’s capable of killing, but not murder. Merida can hold her own in battle, but won’t kill an innocent. Elsa could kill everyone, but she won’t. Many of the others—Snow White, Cinderella, Rapunzel—have primarily social strengths, as in charisma.

They wouldn’t fight each other. They would fight the Arena itself. And it’s likely that they would win. Why? Just think about their abilities, and then imagine the many ways those abilities could be used to break the Hunger Games.

Snow White: She’s an innocent, and probably the youngest. She’s the Rue of the group—so vulnerable that nobody can help but like her. Even small animals love her; in fact, she has the power to get small animals to do her chores for her. Never underestimate the power of a squirrel, a bird, or a badger in the right place.

Cinderella: Another innocent, and another girl with the power to communicate with animals. Her powers aren’t as extreme as Snow White’s, but she’s capable enough. She’s also an abuse survivor—which means she knows how to deal with trauma, because she’s done it before. And she has allies on the outside—a fairy godmother with transfiguration magic, who won’t be happy that her charge has been imprisoned in the Arena.

Aurora: She’s not much use in a fight, and she doesn’t have any particular powers (she’s animal-friendly, but not an actual animal-charmer the way Cinderella and Snow White are); but she’s kind and introverted, which means she won’t upset anybody. She’s also spent her entire childhood living in a cabin in the woods—she knows how to survive in the woods. With her around, they aren’t going to starve.

Ariel: She’s a specialist, the literal Aquaman of the group. In the water, she’s faster than anybody else, absolutely unbeatable, and she can talk to creatures that live in the water. As far as animal whisperers go, Ariel’s the only one who can actually talk in two-way abstract language with her animal friends, so that means that the group knows anything that’s happening anywhere near water once Ariel gets her fish spy network going. Unfortunately, out of the water, she’s very slow and probably has to be carried.

Belle: Is a nerd. That’s her strength and her weakness. She’s read every book she can get her hands on, and she has an inventor for a father. She’s got an extremely good brain in her head, and she’ll probably be the group’s source of information, though not their chief strategist.

Jasmine: Born and bred nobility, her primary power is her charisma and ability to communicate. Totally naive outside her native environment, she nevertheless has the wherewithal to think on her feet. She’s been on quite a few adventures with Aladdin, so she doesn’t scare easily. Jasmine won’t be a major asset, but she also won’t hold them back.

Pocahontas: It goes without saying that Pocahontas, along with Aurora, will be one of the group’s major food providers. Her biggest strength is her diplomatic ability; when the group threatens to fall apart when things get rough, she can keep them together. She has some minor animal-communication ability, but nowhere near some of the others’.

Mulan: The soldier. She’s a military girl, and she’s good at what she does. When someone needs to think fast, she can. She’s good at using the environment to her advantage, and she’s good at disguising herself and others. She’s likely to become their de facto leader. She can also train others in the basics of fighting, which is very useful because many of the other girls have no experience; while they won’t be fighting each other, the Arena itself is very dangerous.

Tiana: She hasn’t got any exciting powers, but she’s sure going to come in handy. She knows how to cook, naturally. She’s lived as a commoner for a lot of her life, which means she’s more independent than most of the other girls. She also has experience with magic, and has had to survive in extreme situations before (can you get more extreme than being turned into a frog?). She won’t crack, and she’ll be useful. Tiana will be just fine.

Rapunzel: If she’s still got her hair, she’s a serious asset. Rapunzel’s hair can heal anything, even old age. She can keep the entire group healthy. And the hair isn’t just a whole lot of dead weight, either; she uses it as an aid to do some pretty cool parkour-style moves. Rapunzel is also very strong, as shown by her ability to pull people up by her hair without breaking a sweat. If she hasn’t got her hair, she’s less of an asset, but her physical strength can still see her through (and quite possibly give Ariel a way to get around out of water). Don’t underestimate her charisma, either; if she can convince a bar full of ruffians to join her team, she can talk anybody into anything. Like Cinderella, Rapunzel has survived an abusive childhood. In her case, it was mainly psychological and emotional abuse, so she’ll be the one to see through the mind games the gamemakers try to play on the girls; she’s seen them all before from Mother Gothel.

Merida: Archery, obviously. She’s as good as Katniss, if not better, and she’s one of the few girls with the ability to attack at range. She can ride, survive in the wilderness, and keep her head in a crisis. No problems here.

Anna and Elsa: Have to be taken as a set. Each is willing to die for the other, and Elsa’s ice powers are prodigious. Anna and Elsa may be the main reason why the whole group is likely to try to defeat the Arena itself rather than fighting each other: Each sister knows she wants her sister to live, and each knows that her sister wants her to live. The only option for these two is defeating the Hunger Games together. Elsa has leadership experience, having been a queen; Anna may be younger, but she’s proven herself in crisis situations already.

There’s really only two ways this can end: The girls join together to fight the Arena, and win; or the girls join together to fight the Arena, and they lose. A win may still mean the deaths of several of the girls, and almost certainly at least one such death; but they’ll be deaths inflicted by the Arena itself. The first death will affect them deeply, of course; they may actually have to go through the entire ordeal with one or more of their number essentially disabled by emotional shock. But these girls have learned too much about the power of love, friendship, courage, and hope to give up in the Arena.

That says a lot about the Hunger Games, doesn’t it? A big part of the reason why the children in the Hunger Games kill each other is that they have lost hope. Their whole world says you can’t escape the Arena; their whole culture says you can’t defy the Gamemakers. There have been plenty of people as skilled as Katniss in the Arena, who didn’t manage to defy the games as she did, because they weren’t able to defy the basic concept of the Games: “You must kill each other. There is no other option.”

Ironically, I see a lot of Hunger Games fans looking at the Arena in the same way: A lot of people go in; only one leaves. They don’t challenge that; they focus on the strategy of survival. But as Katniss knows, and as the Disney princesses would understand, the other tributes were never the enemy.

Take a life lesson from this: When the power structure around you seems to be trying to pit you against other people, ask why. Challenge the paradigm. Chances are, the people you're being encouraged to fight aren't the enemy, and chances are, you'll be stronger together.

Sociological ImagesWestern Corporations and the Cultivation of Colorism in India

Flashback Friday.

Previously marketed to women, skin lightening, bleaching, and “fairness” creams are being newly marketed to men.  The introduction of a Facebook application has triggered a wave of commentary among American journalists and bloggers.  The application, launched by Vaseline and aimed at men in India, smoothes out blotches and lightens the overall skin color of your profile photo, allowing men to present a more “radiant” face to their friends.

The U.S. commentary involves a great deal of hand-wringing over Indian preference for light skin and the lengths to which even men will go to get a few shades lighter.  Indians, it is claimed, have a preference for light skin because skin color and caste are connected in the Indian imagination.  Dating and career success, they say further, are linked to skin color.  Perhaps, these sources admit, colorism in India is related to British colonialism and the importation of a color-based hierarchy; but that was then and, today, India embraces prejudice against dark-skinned people, thereby creating a market for these unsavory products.

The obsession with light skin, however, cannot be solely blamed on insecure individuals or a now internalized colorism imported from elsewhere a long time ago.  Instead, a preference for white skin is being cultivated, today, by corporations seeking profit.  Sociologist Evelyn Nakano Glenn documents the global business of skin lightening in her article, Yearning for Lightness.  She argues that interest in the products is rising, especially in places where “…the influence of Western capitalism and culture are most prominent.”  The success of these products, then, “cannot be seen as simply a legacy of colonialism.”  Instead, it is being actively produced by giant multinational companies today.

The Facebook application is one example of this phenomenon.  It does not simply reflect an interest in lighter skin; it very deliberately tells users that they need to “be prepared” to make a first impression and makes it very clear that skin blotches and overall darkness is undesirable and smooth, light-colored skin is ideal.  Marketing for skin lightening products not only suggests that light skin is more attractive, it also links light skin to career success, overall upward mobility, and Westernization.  Some advertising, for example, overtly links dark skin with saris and unemployment for women, while linking light skin with Western clothes and a career.

The desire for light skin, then, isn’t an “Indian problem” for which they should be entirely blamed. It is being encouraged by corporations who stand to profit from color-based anxieties that are overtly tied to the supposed superiority of Western culture.  These corporations, it stands to be noted, are not Indian.  They are largely Western: L’Oreal and Unilever are two of the biggest companies.  The supposedly Indian preference for light skin, then, is being stoked and manufactured by companies based in countries populated primarily by light-skinned people.  As Glenn explains, “Such advertisements can be seen as not simply responding to a preexisting need but actually creating a need by depicting having dark skin as a painful and depressing experience.”

Before pitying Indian seekers of light-skin, condemning the nation for colorism, or gently shaking our heads over the legacies of colonialism, we should consider how ongoing Western cultural dominance (that is, racism and colorism in the West today) and capitalist economic penetration (that is, profit through the cultivation of insecurities around the world) contributes to the global market in skin lightening products.

Originally posted in 2010; crossposted at BlogHer.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


CryptogramCybersecurity Issues for the Next Administration

On today's Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.

First, less surveillance. Surveillance has become the business model of the Internet, and an aspect that is appealing to governments worldwide. While computers make it easier to collect data, and networks to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated and targeted. It's a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.

Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.

Third, less propaganda. One of the side-effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all forms, the better we all are.

And fourth, less use control. Governments need to ensure that our Internet systems are open and not closed, that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on what apps you can run on your smartphone, or what you can do with the digital files you purchase or are collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.

Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they'll result in more power and control to the individual and less to powerful institutions. That's how we built an Internet that enshrined the best of our societies, and that's how we'll keep it that way for future generations.

This essay previously appeared on, in a section about issues for the next president. It was supposed to appear in the print magazine, but was preempted by Donald Trump coverage.

Worse Than FailureError'd: Breaking News! (or Just a Test?)

"Hey, Angela! Helio is working!" writes Lawrence R.


"While setting up a new IIS site on Server 2008 within a tightly controlled corporate intranet, I had to remember where the JavaScript setting was in IE 8 in order to test properly," Brian R. wrote, "Thanks Bing for taking me to a page that was very relevant to my issue, and had no other content to distract me from my goal."


"Yes, I think I'll join... wait, what!?" writes Simon.


Yeah, we may need to check into our SEO.


"I was trying to lodge a support ticket to my ISP and, well, it's good to see they're doing their best to prevent spammers," wrote H.


Connor O. writes, "Well, I suppose my email address could serve as my city in a futuristic world."


"Well, looks like I have a long way to go before I ever see 'inbox zero'," wrote Eric




Google AdsenseIncrease the speed of your mobile site with this toolkit

Cross-Posted from the DoubleClick for Publishers blog

Last month we released a new study, "The need for mobile speed", highlighting the impact of mobile latency on publisher revenue. Simply having your site load on a mobile device is no longer enough: Mobile sites have to be fast and relevant. The study analyzed 10,000+ mobile web domains, and from the results we gained several insights about the impact of mobile latency on user experience.

Critically, the study also revealed strong correlations between page speed and the following key performance indicators:
  1. Revenue
  2. Bounce rate
  3. Session duration
  4. Viewability

It’s clear mobile speed matters to the success of publisher sites, but making mobile load times a priority doesn’t always make achieving speed easy. To help you build a faster mobile web experience, we’ve created a mobile web speed toolkit. It outlines a 4-step process to diagnose and fix mobile speed issues:
  1. Measure your site’s performance.
  2. Assess the different components impacting speed.
  3. Prioritize the order your site loads.
  4. Test, remeasure and repeat to improve your site speed.
The mobile web speed toolkit offers tactical recommendations to begin achieving mobile speed. 

The relationship between page speed and publisher revenue is clearer than ever before. Small improvements to your mobile site may yield big gains for your mobile revenue, so get your copy of the mobile web speed toolkit and start making changes today. 


Posted by: Jay Castro
AdSense team

Google AdsenseMeet the new AdSense user interface

The new AdSense user interface (UI) is here. Over the last year, our product team has been hard at work bringing Material Design principles to AdSense. This new UI highlights the information that’s relevant to you on a personalized homepage and streamlines navigation.

Over the next few weeks we’ll be offering the new UI to AdSense publishers.  All you’ll need to do is opt in when you log in to AdSense: 
What’s new?
  • A fresh new look & feel. We're adopting Material Design principles with a completely redesigned homepage and menu. We’ll roll out further improvements throughout the product soon.
  • A great new homepage. All the information you need, right where you need it. We've organized your homepage into a stream of interactive cards. You can pin your favorites to the top of the stream, and arrange your homepage just the way you’d like.
  • A streamlined new menu. We’ve brought everything together in a new left hand menu.
We’ll continue to improve and refine AdSense over the coming months. While we’re making these improvements, you’ll still be able to find all the content and features that you’re used to–right where you expect them.

Opt in through the AdSense interface to try it for yourself, and let us know what you think in the feedback tool.

Posted by: Andrew Gildfind, Daniel White & Louis Collard
From the AdSense Product Team

Cory DoctorowTalking about Allan Sherman on the Comedy on Vinyl podcast

Jason Klamm stopped by my office to interview me for his Comedy on Vinyl podcast, where I talked about the first comedy album I ever loved: Allan Sherman’s My Son, the Nut.

I inherited my mom’s copy of the album when I was six years old, and listened to it over and over until I discovered — the hard way — that you can’t leave vinyl records on the dashboard of a car on a hot day.

Our discussion ranged far and wide, over the golden age of novelty flexidiscs, Thomas Piketty, Hamilton, corporate anthems and many other subjects.

Episode 199 – Cory Doctorow on Allan Sherman – My Son, The Nut
[Comedy on Vinyl]

CryptogramThe Psychological Impact of Doing Classified Intelligence Work

Richard Thieme gave a talk on the psychological impact of doing classified intelligence work. Summary here.

Krebs on SecurityIoT Devices as Proxies for Cybercrime

Multiple stories published here over the past few weeks have examined the disruptive power of hacked “Internet of Things” (IoT) devices such as routers, IP cameras and digital video recorders. This post looks at how crooks are using hacked IoT devices as proxies to hide their true location online as they engage in a variety of other types of cybercriminal activity — from frequenting underground forums to credit card and tax refund fraud.

Recently, I heard from a cybersecurity researcher who’d created a virtual “honeypot” environment designed to simulate hackable IoT devices. The source, who asked to remain anonymous, said his honeypot soon began seeing traffic destined for Asus and Linksys routers running default credentials. When he examined what that traffic was designed to do, he found his honeypot systems were being told to download a piece of malware from a destination on the Web.

My source grabbed a copy of the malware, analyzed it, and discovered it had two basic functions: To announce to a set of Internet addresses hard-coded in the malware a registration “I’m here” beacon; and to listen for incoming commands, such as scanning for new vulnerable hosts or running additional malware. He then wrote a script to simulate the hourly “I’m here” beacons, interpret any “download” commands, and then execute the download and “run” commands.

The researcher found that the malware being pushed to his honeypot system was designed to turn his faux infected router into a “SOCKS proxy server,” essentially a host designed to route traffic between a client and a server. Most often, SOCKS proxies are used to anonymize communications because they can help obfuscate the true origin of the client that is using the SOCKS server.


When he realized how his system was being used, my source fired up several more virtual honeypots, and repeated the process. Employing a custom tool that allows the user to intercept (a.k.a. “man-in-the-middle”) encrypted SSL traffic, the researcher was able to collect the underlying encrypted data passing through his SOCKS servers and decrypt it.

What he observed was that all of the systems were being used for a variety of badness, from proxying Web traffic destined for cybercrime forums to testing stolen credit cards at merchant Web sites. Further study of the malware files and the traffic beacons emanating from the honeypot systems indicated his honeypots were being marketed on a Web-based criminal service that sells access to SOCKS proxies in exchange for Bitcoin.

Unfortunately, this type of criminal proxying is hardly new. Crooks have been using hacked PCs to proxy their traffic for eons. KrebsOnSecurity has featured numerous stories about cybercrime services that sell access to hacked computers as a means of helping thieves anonymize their nefarious activities online.

And while the activity that my source witnessed with his honeypot project targeted ill-secured Internet routers, there is no reason the same type of proxying could not be done via other default-insecure IoT devices, such as Internet-based security cameras and digital video recorders.

Indeed, my guess is that this is exactly how these other types of hacked IoT devices are being used right now (in addition to being forced to participate in launching huge denial-of-service attacks against targets that criminals wish to knock offline).

“In a way, this feels like 1995-2000 with computers,” my source told me. “Devices were getting online, antivirus wasn’t as prevalent, and people didn’t know an average person’s computer could be enslaved to do something else. The difference now is, the number of vendors and devices has proliferated, and there is an underground ecosystem with the expertise to fuzz, exploit, write the custom software. Plus, what one person does can be easily shared to a small group or to the whole world.”


As I wrote last week on the lingering and coming IoT security mess, a great many IoT devices are equipped with little or no security protections. On a large number of Internet-connected DVRs and IP cameras, changing the default passwords on the device’s Web-based administration panel does little to actually change the credentials hard-coded into the devices.

Routers, on the other hand, generally have a bit more security built in, but users still need to take several steps to harden these devices out-of-the-box.

For starters, make sure to change the default credentials on the router. This is the username and password pair that was factory installed by the router maker. The administrative page of most commercial routers can be accessed by typing 192.168.1.1 or 192.168.0.1 into a Web browser address bar. If neither of those work, try looking up the documentation at the router maker’s site, or checking to see if the address is listed here. If you still can’t find it, open the command prompt (Start > Run/or Search for “cmd”) and then enter ipconfig. The address you need should be next to Default Gateway under your Local Area Connection.
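For instance, on a Windows machine the relevant portion of the ipconfig output looks roughly like the abridged sketch below (the gateway address here is only illustrative; use whatever your own machine reports):

C:\> ipconfig

Ethernet adapter Local Area Connection:
   Default Gateway . . . . . . . . . : 192.168.1.1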

If you don’t know your router’s default username and password, you can look it up here. Leaving these as-is out-of-the-box is a very bad idea. Most modern routers will let you change both the default user name and password, so do both if you can. But it’s most important to pick a strong password.

When you’ve changed the default password, you’ll want to encrypt your connection if you’re using a wireless router (one that broadcasts your modem’s Internet connection so that it can be accessed via wireless devices, like tablets and smart phones). There are video how-tos available online that walk through enabling wireless encryption on your router. WPA2 is the strongest encryption technology available in most modern routers, followed by WPA and WEP (the latter is fairly trivial to crack with open source tools, so don’t use it unless it’s your only option).

But even users who have a strong router password and have protected their wireless Internet connection with a strong WPA2 passphrase may have the security of their routers undermined by security flaws built into these routers. At issue is a technology called “Wi-Fi Protected Setup” (WPS) that ships with many routers marketed to consumers and small businesses. According to the Wi-Fi Alliance, an industry group, WPS is “designed to ease the task of setting up and configuring security on wireless local area networks. WPS enables typical users who possess little understanding of traditional Wi-Fi configuration and security settings to automatically configure new wireless networks, add new devices and enable security.”

However, WPS also may expose routers to easy compromise. Read more about this vulnerability here. If your router is among those listed as vulnerable, see if you can disable WPS from the router’s administration page. If you’re not sure whether it can be, or if you’d like to see whether your router maker has shipped an update to fix the WPS problem on their hardware, check this spreadsheet.

Finally, the hardware inside consumer routers is controlled by software known as “firmware,” and occasionally the companies that make these products ship updates for their firmware to correct security and stability issues. When you’re logged in to the administrative panel, if your router prompts you to update the firmware, it’s a good idea to take care of that at some point. If and when you decide to take this step, please be sure to follow the manufacturer’s instructions to the letter: Failing to do so could leave you with an oversized and expensive paperweight.

Personally, I never run the stock firmware that ships with these devices. Over the years, I’ve replaced the firmware in various routers I purchased with an open source alternative, such as DD-WRT (my favorite) or Tomato. These flavors generally are more secure and offer a much broader array of options and configurations. Again, though, before you embark on swapping out your router’s stock firmware with an open source alternative, take the time to research whether your router model is compatible, and that you understand and carefully observe all of the instructions involved in updating the firmware.

Since October is officially National Cybersecurity Awareness Month, it probably makes sense to note that the above tips on router security come directly from a piece I wrote a while back called Tools for a Safer PC, which includes a number of other suggestions to help beef up your personal and network security.

Worse Than FailureGuaranteed LOC PITA


The task Tama set out to accomplish was rather straightforward. One of the clients had a legacy inventory management application, and they needed a simple text field added to an entry form.

Though he'd never seen the code, and the word "legacy" sent chills through his spine, he was confident he could get it done quickly and painlessly. Without hesitation, he downloaded the sources and dug in to acquaint himself with the codebase.

The database migration went easily, but the actual application—a WebForms solution spanning multiple projects and hundreds of files—turned out to be the strangest code Tama had ever seen. The first thing that caught his attention was an abundance of redundant and pointless code. Gems like

bool refreshView = false;
if (refreshView)


if (val == null || val.IsNull)
    return false;
return true;

popped up in nearly every method, bloating them and making Resharper wince in pain as it rendered thousands of warnings. Then there were the methods themselves. Instead of generics, each table column had multiple overloads of various methods with word-for-word identical contents.

The more Tama kept digging, the more he kept cursing at the obviously novice programmer who'd written the application. The code was atrociously unmaintainable, and even the slightest changes required modifying tens of code files across different projects. Instance methods doing nothing but calling out to static ones, multi-line calculations returning a constant value every time—it was as if the previous developer's sole goal was to write as much code as they possibly could, without any regard for proper coding standards or common sense.

Finally, Tama realized what was really going on.

public static string GetXMLString()
    if(_DefinitionString == "")
        System.Text.StringBuilder tbf = new System.Text.StringBuilder();
        tbf.Append(@"<XMLDefinition Generator=""SpeedyWare Table Designer"" Version=""3.4"" Type=""GENERIC"">");
        _DefinitionString = tbf.ToString();
        tbf.Append(  @"<ColumnDefinition>");
        _DefinitionString = tbf.ToString();
        tbf.Append(    @"<Column ColNumber=""0"">");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnName>StateId</columnName>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnUIName>State</columnUIName>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnType>Number</columnType>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnDBType>int</columnDBType>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnLengthSet>10.0</columnLengthSet>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnDefault></columnDefault>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnDBDefault></columnDBDefault>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnIndex>Y</columnIndex>");
        _DefinitionString = tbf.ToString();
        tbf.Append(      @"<columnUnique>Y</columnUnique>");

SpeedyWare Table Designer, Tama thought to himself.

It was all generated code, or at least used to be at some point, since neither the actual generator nor the sources were anywhere to be found. But why was it so terrible? What possible motivation could lie behind utterly defeating the purpose of a StringBuilder by calling ToString() after each line, among all the other sins and indecencies?

The answer to that question was a quick Google search away. Looking past the various warez sites, Tama managed to find SpeedyWare's webpage, where a large heading proudly boasted:

Write 10,000 Lines of Code in 10 minutes!

It seemed that on some fateful day in the past, someone had had the great idea to make a simple app generator. Then Marketing had come along to milk the idea for all it was worth: This tool will write your business apps for you! Marketing promised that it would generate thousands of lines of code, since lines of code were obviously the best metric of effort and quality. Perhaps their management took this claim literally. Perhaps a customer complained that they only received 9,995 lines of code. Whatever the reason, the developers had been forced to bend and twist the generator until it extruded the magical 10,000 lines at any cost.

How well did it work out for the company? Tama scrolled down the webpage, expecting to find testimonials from happy customers announcing how much time the tool saved them. Instead, he found this little blurb:

Technical support for SpeedyWare Table Designer was discontinued on December 31, 2015 as part of our company ceasing operations.

As it turns out, even 10,000 lines of code will not save you if there's not a single good one among them.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Sky CroeserOverlapping edges

Over the last few days, I’ve been thinking more about the idea of ‘belonging’ in academia, following on from my reflections post-AoIR. The converse of not having a single place that feels, unproblematically and fully, like my academic home, and the place where I belong, is that I get to have many spaces where I get energy and inspiration, where I connect well with a few people, and where I find ideas and frameworks that stretch me to think about my research in new ways.

I think about the activism and academia pre-AoIR satellite event, where people were crossing different approaches (the gaps between ‘activism’ and ‘civil society’; between anti-capitalist and more reformist perspectives; between different ways of seeing governance). About conversations I had this year at AoIR about content moderation, feminist research methods, teaching, and finding different ways to fit within academia. About the first time I went to AoIR, and my excitement at finding so much space for critical methodologies, and for women’s voices, and for connecting the personal and the political. About last year’s AoIR, and the attention paid there to how we engage with the broader politics of the world (also a theme this year).

Every conference and symposium I’ve been to has had these kinds of moments. Sometimes it’s only a few talks that shift my understanding in a key way, sometimes I meet people who are working on radically different areas but still offer me a new way to think about research, or about my negotiations with academia. Collaborations that help me link my work with others.

And then there are interviews and protests, where I get to learn more about how activism works in practice. Or workshops where my research intermingles with people’s daily experiences, and always changes. Talking at a huge event in Athens, and dancing with friends there afterwards, because that’s important too. Adacamp and Barcamp unconferences, World Social Forums, and other events. And threaded through them all, conversations with people who are changing the world in so many ways.

And, when I go home, my department, and my gradual exploration since returning to Perth of the other researchers at Curtin who are working on overlapping areas. Because Internet studies is a jumble of areas, I’m often working on very different issues to my colleagues, but I’m learning so much from starting to read more of their work. More importantly, it’s been a space within academia where I feel like I can be honest about who I am and what I care about, and where I can find support.

I may not have a clear academic home, but I’m grateful for all these overlapping spaces.

Sociological ImagesTrump’s Brilliant Manipulation of the Science of Group Conflict

Why do so many Americans continue to support Donald Trump with such fervor?

Hillary Clinton now leads Donald Trump in presidential polls by double-digits, but Trump’s hardiest supporters have not only stood by him, many have actually increased their commitment. It seems clear that in a little less than a month’s time, tens of millions of Americans will cast a vote for a man who overtly seeks to overthrow basic institutions that preserve the American ideal, such as a free press, freedom of religion, universal suffrage, the right of the accused to legal counsel, and the right of habeas corpus. This is over-and-above his loudly proclaimed bigotry, sexism, boasts of sexual assault, ableism, history of racial and anti-Muslim bias, and other execrable personal characteristics that would have completely destroyed the electoral prospects of past presidential candidates.

Trump is a uniquely odious candidate who is quite likely going to lose, but more than 40% of Americans plan to vote for him. The science of group conflict might help us understand why.

Photograph by Gage Skidmore via Flickr.

In a powerful 2003 article in the journal American Psychologist, Roy Eidelson and Judy Eidelson foreshadowed Trump’s popularity. Drawing on a close reading of both history and social science literature, they identified five beliefs that — if successfully inculcated in people by a leader — motivate people to initiate group conflict. Trump’s campaign rhetoric deftly mobilizes all five.

  • Confidence in one’s superiority: Trump constantly broadcasts a message that he and his followers are superior to other Americans, whereas those who oppose him are “stupid” and deserve to be punched in the face. His own followers’ violent acts are excused as emanating from “tremendous love and passion for the country.”
  • Claims of unjust treatment: Trump is obsessed with the concept of fairness, but only when it goes his way. Given his presumed superiority, it naturally follows that the only way he and his supporters could fail is if injustice occurs.
  • Fears of vulnerability: Accordingly, Trump has overtly stated that he believes the presidential election will be rigged. His supporters believe him. In one recent poll, only 16 percent of North Carolina Trump supporters agreed that if Clinton wins it would be because she got more votes.
  • Distrust of the other: Trump and his supporters routinely claim that the media, government, educational institutions, and other established entities are overtly undermining Trump, his supporters, and their values. To many Trump supporters, merely being published or broadcast by a major news outlet is evidence that a fact is not credible, given the certainty they have that media professionals are conspiring against Trump.
  • A sense of helplessness: When Trump allows that it’s possible that he might lose the election because of fraud, conspiracy, or disloyalty, he taps into his followers’ sense of helplessness. No matter how superior he and his followers truly are, no matter how unjustly they are treated, there is little that they can do in the face of a nation-wide plot against him. Accordingly, many of Trump’s most ardent supporters will see the impending rejection of their candidate not as a corrective experience to lead them to reconsider their beliefs, but as further evidence that they are helpless in the face of a larger, untrustworthy outgroup.

By ably nurturing these five beliefs, Trump has gained power far beyond the level most could have dreamed prior to the present election cycle.

It seems clear that, if and when Trump loses, he won’t be going anywhere. He has a constituency, stoked by effective rhetorics shown to propel people to group conflict, one some of his supporters are already preparing for. And, since he has convinced so many of his supporters that he alone can bring the changes they desire, it is a surety that he will use their mandate for his own future purposes.

Sean Ransom, PhD is an assistant clinical professor in the Department of Psychiatry and Behavioral Sciences at Tulane University School of Medicine and founder of the Cognitive Behavioral Therapy Center of New Orleans. He received his PhD in clinical psychology at the University of South Florida.


Planet Linux AustraliaGlen Turner: Activating IPv6 stable privacy addressing from RFC7217

Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217.* This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme in a Linux which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:
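A minimal sketch of that file, assuming NetworkManager 1.2 or later (which understands addr-gen-mode as a connection default), would be:

[connection]
# Generate RFC7217 stable privacy Interface Identifiers instead of EUI-64
ipv6.addr-gen-mode=stable-privacy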


Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic 
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the lower 64 bits, b03a:86e8:e163:2714 in the example above) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address — the Network Prefix, Subnet Identifier and Prefix Length — should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.

Planet Linux AustraliaMaxim Zakharov: One more fix for AMP WordPress plugin

With the recent AMP update at Google you may notice an increased number of AMP parsing errors in your search console. They look like:

The mandatory tag 'html ⚡ for top-level html' is missing or incorrect.

Some plugins, e.g. Add Meta Tags, may alter language_attributes() using the 'language_attributes' filter, adding XML-related attributes which are disallowed in AMP, and that causes the error mentioned above.

I have made a fix solving this problem and opened a pull request against the WordPress AMP plugin.
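Purely as an illustration of the underlying idea (and not the plugin's actual patch), a site-level workaround could hook the same filter late and strip the disallowed attributes:

// Hypothetical workaround, not the actual AMP plugin fix: run late on the
// 'language_attributes' filter and drop any xmlns attributes other plugins
// added, since the AMP validator rejects them on the top-level <html> tag.
add_filter( 'language_attributes', function ( $output ) {
    return preg_replace( '/\sxmlns(:[\w-]+)?="[^"]*"/i', '', $output );
}, 99 );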


Worse Than FailureCodeSOD: A Rusty Link

Kevin did the freelance thing, developing websites for small businesses. Sometimes, they had their own IT teams that would own the site afterwards, and perform routine maintenance. In those cases, they often dictated their technical standards, like “use VB.Net with WebForms”.

Kevin took a job, delivered the site, and moved onto the next job. Years later, that company needed some new features added. They called him in to do the work. He saw some surprises in the way the code base had changed.

It was the “Contact Us” link that drew his attention. The link had a simple job: cause the browser to navigate to a contact form screen. A simple <a href=""> could handle the job. But that tech-savvy boss used this anti-pattern, instead.

First, in the aspx file (the template of the view in WebForms), he added this button:

<asp:LinkButton ID="lnkContactUs" runat="server">Contact Us</asp:LinkButton>

Then, in the click event handler, he could do this:

    Protected Sub lnkContactUs_Click(sender As Object, e As EventArgs) Handles lnkContactUs.Click
      Dim strFullURL As String = String.Format("{0}{1}", Config.PublicWebsiteURL, "/?page_id=38")
      ClientScript.RegisterStartupScript(Me.GetType(), "Load", String.Format("<script type='text/javascript'>window.parent.location.href = '{0}';</script>", strFullURL))
   End Sub

This method runs whenever a click of that button in the browser triggers an HTTP request. In the response sent back, it injects a JavaScript snippet that forces the parent of this window to navigate to the clearly named URL for the contact page, page_id=38. Now, you might think, “well, if this link is visible due to a call, this kind of makes sense…”, but that doesn’t apply here. Instead, it’s almost certain this code was copy/pasted from StackOverflow without any understanding.
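For comparison, the plain link the markup could have used needs nothing more than a static anchor (a sketch; the target path comes from the handler above, optionally prefixed with the site's base URL):

<a href="/?page_id=38">Contact Us</a>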

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!


LongNowLong Now Member Discount for “The Next Billion conference” Thursday October 13th

On Thursday October 13th at the SFJAZZ Center, the digital news outlet Quartz is producing a one-day conference called “The Next Billion”, and has offered Long Now Members a 40% discount.

The Next Billion is a metaphor for the future of the internet — mobile, global, exponential growth in emerging markets, as well as the growth of next level tech in more mature markets. At The Next Billion conference, they’ll explore how networked innovation in every sector is transforming business, society and opportunity across the globe.

If you are interested in purchasing a ticket and would like the discount code, please write in with your member number and we’ll be happy to help you.





CryptogramIndiana's Voter Registration Data Is Frighteningly Insecure

You can edit anyone's information you want:

The question, boiled down, was haunting: Want to see how easy it would be to get into someone's voter registration and make changes to it? The offer from Steve Klink -- a Lafayette-based public consultant who works mainly with Indiana public school districts -- was to use my voter registration record as a case study.

Only with my permission, of course.

"I will not require any information from you," he texted. "Which is the problem."

Turns out he didn't need anything from me. He sent screenshots of every step along the way, as he navigated from the "Update My Voter Registration" tab at the Indiana Statewide Voter Registration System, maintained since 2010, to the blank screen that cleared the way for changes to my name, address, age and more.

The only magic involved was my driver's license number, one of two log-in options to make changes online. And that was contained in a copy of every county's voter database, a public record already in the hands of political parties, campaigns, media and, according to Indiana open access laws, just about anyone who wants the beefy spreadsheet.

Krebs on SecurityMicrosoft: No More Pick-and-Choose Patching

Adobe and Microsoft today each issued updates to fix critical security flaws in their products. Adobe’s got fixes for Acrobat and Flash Player ready. Microsoft’s patch bundle for October includes fixes for at least five separate “zero-day” vulnerabilities — dangerous flaws that attackers were already exploiting prior to today’s patch release. Also notable this month is that Microsoft is changing how it deploys security updates, removing the ability for Windows users to pick and choose which individual patches to install.

Zero-day vulnerabilities describe flaws that even the makers of the targeted software don’t know about before they start seeing the flaws exploited in the wild, meaning the vendor has “zero days” to fix the bugs.

According to security vendor Qualys, Patch Tuesday updates fix zero-day bugs in Internet Explorer and Edge — the default browsers on different versions of Windows. MS16-121 addresses a zero-day in Microsoft Office. Another zero-day flaw affects GDI+ — a graphics component built into Windows that can be exploited through the browser. The final zero-day is present in the Internet Messaging component of Windows.

Starting this month, home and business Windows users will no longer be able to pick and choose which updates to install and which to leave for another time. For example, I’ve often advised home users to hold off on installing .NET updates until all other patches for the month are applied — reasoning that .NET updates are very large and in my experience have frequently been found to be the source of problems when applying huge numbers of patches simultaneously.

But that cafeteria-style patching goes out the…err…Windows with this month’s release. Microsoft made the announcement in May of this year and revisited the subject again in August to add more detail behind its decision:

“Historically, we have released individual patches for these platforms, which allowed you to be selective with the updates you deployed,” wrote Nathan Mercer, a senior product marketing manager at Microsoft. “This resulted in fragmentation where different PCs could have a different set of updates installed leading to multiple potential problems:

  • Various combinations caused sync and dependency errors and lower update quality
  • Testing complexity increased for enterprises
  • Scan times increased
  • Finding and applying the right patches became challenging
  • Customers encountered issues where a patch was already released, but because it was in limited distribution it was hard to find and apply proactively

By moving to a rollup model, we bring a more consistent and simplified servicing experience to Windows 7 SP1 and 8.1, so that all supported versions of Windows follow a similar update servicing model. The new rollup model gives you fewer updates to manage, greater predictability, and higher quality updates. The outcome increases Windows operating system reliability, by eliminating update fragmentation and providing more proactive patches for known issues. Getting and staying current will also be easier with only one rollup update required. Rollups enable you to bring your systems up to date with fewer updates, and will minimize administrative overhead to install a large number of updates.”

Microsoft’s patch policy changes are slightly different for home versus business customers. Consumers on Windows 7 Service Pack 1 and Windows 8.1 will henceforth receive what Redmond is calling a “Monthly Rollup,” which addresses both security issues and reliability issues in a single update. The “Security-only updates” option — intended for enterprises and not available via Windows Update —  will only include new security patches that are released for that month. 

What this means is that if any part of the patch bundle breaks, the only option is to remove the entire bundle (instead of the offending patch, as was previously possible). I have no doubt this simplifies things for Microsoft and likely saves them a ton of money, but my concern is this will leave end-users unable to apply critical patches simply due to a single patch breaking something.

It’s important to note that several update types won’t be included in a rollup, including those for Adobe Flash Player. As it happens, Adobe today issued an update for its Flash Player browser plugin that fixes a dozen security vulnerabilities in the program. The company said it is currently not aware of any attempts to exploit these flaws in the wild (i.e., no zero-days in this month’s Flash patch).

The latest update brings Flash to a new version for Windows and Mac users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.). Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates and/or restart the browser to get the latest Flash version).

Finally, Adobe released security updates that correct a whopping 71 flaws in its PDF Reader and Acrobat products. If you use either of these software packages, please take a moment to update them.

TEDLarry Brilliant’s new book shows how pandemics can be eradicated

In 1996, a potential pandemic could stay hidden for 167 days before being detected — but by 2009, that number was down to 23 days. Our pandemic detection technology has gotten much more sophisticated, as Larry Brilliant told us at TED2013, but there is still work to do. Photo: Ryan Lash/TED


Epidemiologist Larry Brilliant remembers the day in 1974 when, while working for the United Nations in India, a mother handed him her young son, who had died only moments earlier from smallpox. Brilliant also remembers the day, about a year later, when he traveled by speedboat to an island in Bangladesh and met a 3-year-old girl who had survived the disease. Hers was the last case of killer smallpox in the world.

These two memories bookend the new autobiography, Sometimes Brilliant. In the book, Brilliant tells the story of how killer smallpox — a 10,000-year-old disease that killed half a billion people in the 20th century alone — was eradicated, through tireless groundwork and an effort to understand the cultural dynamics that allowed the disease to spread. Brilliant’s work ending smallpox, and later polio, earned him the 2006 TED Prize. His wish at the time: to harness the power of technology and build a global detection system for pandemics. He hammered on the mantra, “Early detection, early response.”

With the TED Prize, Brilliant launched InSTEDD, a worldwide surveillance system that monitors the web and social media for patterns that may signal a pandemic. While it’s not the topic of his book, InSTEDD has grown a lot in 10 years, and morphed from a single system to a web of approaches. InSTEDD now connects more than 100 digital disease-detection partners and provides tools that help the UN, WHO and CDC track potential pandemics. InSTEDD has also opened two iLabs in regions considered pandemic hotspots, one in Cambodia and the other in Argentina.

“It’s the best of all possible worlds,” said Brilliant in a phone call last week. “Instead of one major top-down system, where my vision was flawed, we have this proliferation of hundreds of systems working on early detection. Some look at parking lots at ERs, and whether there’s more cars than expected for the season. Others hold hackathons to create epidemiological tools.”

“A whole new science has emerged called ‘participatory surveillance,’” he continued. He applauded opt-in systems in Australia, Brazil, the US and many other countries, where — say, once a week — participants get a text message or email that asks them how they feel. “Not everyone responds, but enough do that you can make a map,” said Brilliant. “Those systems are faster at detecting pandemic potential than reports made by governments.”

Still, we can do better, said Brilliant. In the case of Ebola, for example, it took months before the WHO declared an outbreak in West Africa — and the delay cost thousands of lives, he said. The movement of MERS further underscored the importance of early response. The disease originated in Saudi Arabia, and when a case exported to Korea in 2015, it led to 186 cases. When a case exported to Thailand months later, health officials dodged an outbreak. “Thailand has one of the world’s best detection systems,” said Brilliant, pointing to the participatory surveillance app DoctorMe. “They found that case of MERS immediately.”

In the epilogue of Sometimes Brilliant, Brilliant calls winning the TED Prize “a turning point in my life.” It led to increased public attention on early pandemic detection, inspiring the 2011 film Contagion and energizing foundations to invest in pandemic control. It connected Brilliant with Google, where he became the director of Google.org, and introduced him to Contagion producer Jeff Skoll. Brilliant now serves as Chair of the Skoll Global Threats Fund, where he has his eye on pandemics — as well as on climate change, water security, nuclear proliferation and Middle East conflict.

Brilliant said he will always look back on the day he saw the last case of smallpox as proof that serious threats can be neutralized. He said, “The image of the last case of smallpox is what I offer as an antidote to all the pessimism and to the feeling that we’re a hopeless mob, and the best we can do is find our own bunker.”

Google AdsenseInfographic: Get the free All-In-One Policy Compliance Guide

We’ve shared that high quality content and consistency are key ingredients to earning and maintaining the trust of online users. What about maintaining the trust of your ad networks so that you can continue to earn revenue? For AdSense, it’s important to protect the interests of everyone in the online ecosystem, including our users, our advertisers, and our publishers. This focus on maintaining a healthy balance is the reason we set strict policies about AdSense for everyone in the ecosystem to follow.

Your feedback helped us realize that some publishers may be confused by some of our policies, which is why we’re launching a series of blog posts, infographics, new notifications, access to customer support, and #AskAdSense office hours to help increase transparency about AdSense policy processes. We hope that these insights can help turn your #PassionIntoProfit and grow your business as you focus on your users and provide unique content.

We have found that there are two types of publishers who receive AdSense policy violations. The first are publishers who unintentionally violate AdSense policies. For those, we hope that increased transparency into our policy processes can decrease these unintentional violations and help our publishers play by the rules. The second are publishers who intentionally bypass our rules, ending up with a variety of violations. That’s why we work hard to maintain a policy compliant ecosystem for our publishers, advertisers, and users. In short, if you play by the rules, AdSense is here to help you grow your business.

Policy compliant sites with unique content attract advertisers who are willing to spend more money and allow users to enjoy friendly web experiences. So without further ado, here’s your All-In-One Policy Compliance Guide. Download it, print it out, and hang it at your desk for reference. In the coming weeks, we’ll dive into policy topics to provide additional context and insights.

  • Part 1: Top triggers of policy violation warnings
  • Part 2: Did you receive a policy violation warning?
  • Part 3: Best practices for keeping your account in good standing

As always, we’re looking forward to hearing your feedback and invite you to join the conversation with us on Twitter and Google+.

Posted by: Anastasia Almiasheva from the AdSense team

Worse Than FailureAll Zipped Up


Moving to version control is hard. It's a necessary step as a company grows into developing more complex software, with more developers working on the various products, but that doesn't make it any easier. Like all change, it's often delayed far too long, half-assed, and generally resented until everyone's forgotten about the indignity and moved on to complaining about the next improvement.

For Elle's company, the days before Subversion consisted of a few dedicated PCs holding the source code for various customers, to ensure that none of them got mixed up with code for the others. By the time she joined, the company had long since moved to version control, but the source-controlled PCs remained, a curiosity to be laughed over.

Then the budget cuts came, and the team continued to grow. In an effort to reuse the PCs, Hiro, the head of IT, decided to repurpose them as developer workstations. "This is all in version control, right?" he asked nervously. "I can wipe the box?"

"I'm not sure, but I can guess where the repo is if you want me to take a look." Elle knew most of the development was happening in newer codebases, the ones that'd been redone since the bad old days, and she wasn't sure if she even had access rights. Some of the older repositories were locked down weirdly, during a time when paranoia reigned and "security concerns" loomed.

"No, no, it's fine, I'll check myself. Just to be safe." He didn't tell Elle what he found, but the PC was missing the next day when the new guy started, so she figured it must have been there.

Six months later, The Incident happened: their main competitor poached the five most senior staff, offering them better pay and benefits. Elle was jealous; apparently she was ranked number six on the team, and didn't get such a juicy offer herself. Still, she wished them the best of luck before they were frog-marched out to the parking lot by a furious Hiro. Security Concerns. According to the rumor mill, lawyers were brought in to prosecute the other firm for violating their non-compete. Life moved along, now with Elle training the new juniors hired to get the headcount back up.

Two weeks after The Incident, Hiro stopped by Elle's desk. "Hey, remember PegasusCorp?" he asked, naming one of their older clients—one that hadn't required anything from them since before Elle had joined.

"What about them?" asked Elle suspiciously, smelling unpleasant work coming her way.

Sure enough, they wanted some changes. The software needed a facelift. It seemed PegasusCorp's CEO had gotten a copy of Windows 8 and was loving the new "Metro" style. Elle rolled her eyes, but figured, whatever, a few visual changes shouldn't be too hard. She requested access rights to the old repo, checked out the trunk branch, and popped open the folder.

She was faced with a single file: a neat, packaged zip. She blinked. What?

She double-clicked on the zip, and it popped open a password entry field. What?!

She tried the obvious things: password, admin, p@$$w0rd, the name of the company, the name of the product, even Pegasus. No dice. Frowning, she got up and went to ask Hiro.

Hiro's eyes bulged and his face paled. "There's a what?!"

"A password on the zip file. Hey, if you don't know, I'll just ask ..." She trailed off. She was the most senior dev now, and she had no idea what the password could be. If the head of the department didn't know, that meant ...

"Just call? Please? I'm sure they'll be reasonable," begged Hiro.

Elle groaned as she trudged back to her desk, digging out the company off-hours cellphone directory. Whose bright idea was it to password-protect the damn source code?!

She called the first of the five, and only got as far as, "I'm calling from CompanyName" before she was met with a furious expletive followed by a dial tone. She stared at the phone in horror. What had Hiro done to the guy?!

The second person was more forthcoming, if not more helpful. "No way! Not unless you call off your lawyers. I had a great thing lined up, you ruined it, and now you come begging for help?"

Elle had a sinking feeling about her remaining prospects, but had to keep calling. The third person had changed their number without updating the roster. The fourth was apologetic, but simply didn't know what the password was. The fifth just laughed and laughed until an unnerved Elle hung up the phone herself.

God, I need a beer, she thought, lowering her head into her hands. How hard could it be to crack?

Elle grabbed a dictionary file of English words and threw together a batch file with a handful of basic fuzz techniques: backwards, with digits appended, in l33t, etc. She let it run overnight, confident that she'd have an answer—and source code—in the morning.

Beer time.

But she didn't have the password in the morning. Or the next morning, or the next. By the time she left Friday night, she was seriously worried for her job. Hiro looked miserable and frantic, and her script was almost out of words to try.

Monday morning, as she was rubbing the sleep from her eyes, Elle was startled by a loud thud. She lowered her hands, staring at the center-most table where ...

Is that Hiro? she wondered. With a ... keg?!

"Whoever can crack the PegasusCorp password gets this keg!" Hiro announced to the sleepy techs. "Have at ye!"

By the end of the day, the source was revealed: resigning technician number five had been a huge Star Wars fan, and the password had merely been JengoFett. The keg was shared around the office, and come Tuesday morning, Elle was able to do the facelifts required.

Meanwhile, the lawsuit backfired. The resigning staff got their jobs, and Elle's company had to pay for the suit, compensation for impounding their company vehicles, and salary up until their official resignation date.

And life moved on, as it always does.

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!


TEDTEDWomen update: Wellesley inaugurates Dr. Paula Johnson

Congratulations to TEDWomen 2013 speaker Dr. Paula Johnson who, earlier this month, was sworn in as the 14th president of Wellesley College. She is the first African-American president of the institution.

President Paula Johnson received the charter, seal, and keys to the College. Photo: Richard Howard


Dr. Johnson is a pioneer in looking at health from a woman’s perspective. Before taking the helm at Wellesley, she was the chief of the Division of Women’s Health at Harvard Medical School and Boston’s Brigham and Women’s Hospital, where she founded and was executive director of the Connors Center for Women’s Health and Gender Biology.

In her career, she has looked at how sex and gender impact health and health outcomes. Because of her work, we now know that every cell has a sex, and women and men are different down to the cellular level. In her TED Talk, she shared her research on the differences in the ways that men and women experience disease, and what that means in terms of clinical care and treatment.

Dr. Johnson’s inaugural ceremony featured a number of greeters who welcomed her to her new post, including Senator Elizabeth Warren, Harvard President Drew Gilpin Faust, Smith College President Kathleen McCartney and National Institutes of Health Senior Scientist Emerita Dr. Vivian Pinn.

In her speech, Dr. Johnson talked about all the women before her who had carried her to this moment.

I am here today because of a 30-year career in women’s health, and my deep commitment to women’s education. I stand before you on the shoulders and hard-won wisdom of so many women who laid the groundwork and pointed the way: my Brooklyn-born mother, Grayce Adina Johnson’s fierce belief in the power of education; my grandmother, Louise Young, who struggled with depression, which inspired me to enter medicine, with the ultimate mission of discovering how women’s and men’s biology differ in ways that go far beyond our reproductive functions.

I stand on the shoulders of my most important mentors and role models: Ruth Hubbard, Harvard University’s first tenured woman biology professor—a scholar who broke with tradition to explore the deep connections between women’s biology and social inequities. Women such as Shirley Chisholm, my “unbought and unbossed” Brooklyn congresswoman who burst on the scene at the crossroads of the civil rights and women’s movements in the 1970s.

In these women, I see the power of education to change women’s lives and create a better world. I see the power of shared experience, shared ideas, shared commitments, across time and space, across cultures and identities. I give gratitude to them and for them. I give gratitude to be here and now, looking at our future, together.

Watch Dr. Johnson’s entire acceptance speech.

Sociological ImagesStrict Voter Identification Laws Advantage Whites—And Skew American Democracy to the Right

Originally posted at Scholars Strategy Network.

Strict voter identification laws are proliferating all around the country. In 2006, only one U.S. state required identification to vote on Election Day. By now, 11 states have this requirement, and 34 states with more than half the nation’s population have some version of voter identification rules. With many states considering stricter laws and the courts actively evaluating the merits of voter identification requirements in a series of landmark cases, the actual consequences of these laws need to be pinned down. Do they distort election outcomes?

Ongoing Arguments – and a More Precise Study

Arguments rage about these laws. Proponents claim that voter identification rules are necessary to reduce fraud and restore trust in the democratic system – and they point out that identification rules are popular and do not preclude legitimate voters from participating. In the view of supporters, no new barriers are raised for the vast majority of American voters who already have the necessary forms of identification – and for those who don’t, the new hurdles are small and easily surmounted.

But critics argue that voter identification laws limit election participation by racial and ethnic minorities and other disadvantaged groups. There is no good reason to enact these impediments, critics claim, given little documented evidence of fraud by individual voters. Opponents believe that GOP legislatures and governors are instituting these laws to discourage Democratic voters and bias election outcomes in their party’s favor.

Who is right? Researchers have shown that racial and ethnic minorities, the poor, and younger Americans are disproportionately likely to lack legally specified kinds of identification – which means they must take extra steps to qualify as voters. Other studies have found that poll workers apply these rules unevenly across the population, disproportionately burdening minorities.

Nevertheless, the key question is not whether there could be worrisome effects from these laws, but whether clear-cut shifts in election participation and outcomes have actually occurred. Do voter identification laws reduce participation among specific segments of the population? Do they skew the electorate in favor of one set of interests over others? By focusing on U.S. elections from 2006 to 2014 and using validated voting data from the Cooperative Congressional Election Study, our research team has found more definitive answers. Because our data include large samples from every state in each election cycle, we can analyze voter turnout for various sub-groups – to see if states with strict voter identification rules exhibit different patterns than those without such rules.

Clear and Disturbing Findings

Our findings are clear: strict voter identification laws double or triple existing U.S. racial voting gaps, because they have a negative impact on the turnout of Hispanics, blacks, and Asian Americans, but do not discourage white voters. Hispanic turnout is 7.1 points lower in general elections and 5.3 points lower in primaries in states with strict identification laws, compared to turnout in other states. For blacks, the drop is negligible in general elections but a full 4.6 points in primaries. Finally, in states with strict rules, Asian American turnout falls by 5.4 points in general elections and by 6.2 points in primaries. Whites are little affected, except for a slight boost in their turnout for primaries.


These findings persist even when we take many other factors into account – including partisanship, demographic characteristics, election contexts, and other laws that encourage or discourage participation. Racial gaps persist even when we limit our analysis to Democrats or track shifts in turnout in the first year after strict rules are implemented.

Do these laws advantage one party over the other? We found little consistent impact in general elections, but clear effects in primaries. In states that institute strict identification laws, the primary turnout gap favoring Republicans more than doubles from 4.3 points to 9.8 points. Likewise, the turnout gap favoring conservatives over liberals goes from 7.7 to 20.4 points.

Distorting American Democracy

In U.S. states with strict voter identification rules, the voices of Latinos, blacks, and Asian American voters become more muted as white voter influence grows. U.S. elections have long had a racial skew in favor of whites – and these recently proliferating laws make the imbalance worse. Furthermore, when the new rules go into effect, the influence of Democrats and liberals wanes compared to the clout of Republicans and conservatives. If courts considering the fate of voter identification laws want to understand their actual impact, the evidence that they distort American democracy is clear and convincing.

Read more in Zoltan Hajnal, Nazita Lajevardi, and Lindsay Nielson, “Voter Identification Laws and the Suppression of Minority Votes,” University of California, San Diego, 2016.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main November 2016 Meeting: The Internet of Toys / Special General Meeting / Functional Programming

Nov 2 2016, 18:30 to 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Nick Moore, The Internet of Toys: ESP8266 and MicroPython
• Special General Meeting
• Les Kitchen, Functional Programming

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 627 326.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Worse Than FailureCodeSOD: Grumpy Cat

At the end of the lecture session, students immediately started packing up their laptops to race across campus for their next class. Andrew’s professor droned on, anyway, describing their assignment. “I’ve provided parser code,” he said, “so you can just download the datafile and use it.”

He had more to say, but no one was paying attention. Perhaps he even had a correction to the assignment- because when Andrew went to download the data file for the assignment, the link 404ed.

Andrew emailed the professor, but figured- “Hey, I can get started. I’ll just read the parser code to understand the file format.”

String line = br.readLine();
for (int i=0; i<result.length; i++) {
    line = br.readLine();
    index = line.indexOf(" ");
    int number = Integer.parseInt(line.substring(0,index));
    result[i] = number;
}

The parsing code itself was an overly complicated reinvention of String.split with absolutely no error handling. That was enough to make Andrew cringe, but it’s hardly the worst code you could have. No, the real bad code was the code that read from the file.

You might be thinking to yourself, “Well, Java does make file I/O a bit complicated, but you just link a FileInputStream with a BufferedReader, or even save yourself all that trouble and use the very convenient FileReader.”

Now, I don’t want to put words in this professor’s mouth, but I think they would point out that a FileReader reinvents a wheel that’s already on your OS. Why go through all this trouble to open a file and read from it if you already have a program on your OS that can open and read files?

List<String> command = new ArrayList<String>();
// (command is presumably populated elsewhere with "cat" and the data file path; see the discussion below)
ProcessBuilder builder = new ProcessBuilder(command);
final Process process = builder.start();
InputStream is = process.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
// discard the first line
String line = br.readLine();
for (int i=0; i<result.length; i++) {
    line = br.readLine();
    index = line.indexOf(" ");
    int number = Integer.parseInt(line.substring(0,index));
    result[i] = number;
}
if (debugPrint) System.out.println("Program terminated?");
int rc = process.waitFor();
if (debugPrint) System.out.println("Program terminated!");

Why not open the cat command? I mean, sure, now you’re tied to a system with cat installed, losing the cross-platform benefits of Java. I hope none of these students are using Windows. Then there’s the overhead of launching a new process. Also, process.waitFor is a blocking call that doesn’t terminate until the called program terminates, which means a bad input can hang the application.

It’s worth noting that this code wasn’t merely written for this assignment, either. In fact, this code started life as part of a research project conducted by multiple members of the CS department’s faculty. Using this parser was not an optional part of the project, either- use of this code was required.



Sky CroeserAoIR2016: on not finding home

A lot of people attending talk about having found their academic ‘home’, or about having found their ‘people’. This is understandable: AoIR is an eclectic space, full of amazing, interesting people who are tackling important new problems (and often having to create new methodologies in order to do so).

It’s not my home, though. Except insofar, perhaps, as there’s often a significant gap between idealised images of home and many places in which I’ve actually lived. I’ve gone to quite a few different conferences, across a number of different fields, and I’ve never found ‘home’. I’ve always felt a little out of place, a little unsure about where my work fits in, a little like everyone else seems to know each other and I’m the one standing awkwardly at the edge of groups at social events wondering if I should just give up and go home.

Conferences are a challenge because of this, but still valuable. I’ve met a lot of good people, heard about interesting work that is mostly a few steps away from my own, and occasionally prodded other people’s ideas in the direction of new approaches that I think are important. I think a lot about cross-pollination.

AoIR felt especially hard this year. Part of that was the fact that I’m at a low ebb in terms of energy. In the last few months I’ve moved house, had a rather intense teaching semester, and had some health issues that left me feeling more exhausted than I can ever remember being before. The couple of years before that already depleted my reserves: they’ve been tremendously difficult on a personal level. But part of it was also the strangeness of feeling out of place amidst this narrative of AoIR as home, as ‘our people’. It’s especially jarring to feel my usual awkwardness as so many other people are talking about their sense of belonging.

And in academia, belonging is important. I’ve helped build some collaborations with other early-career scholars that I’ve found tremendously valuable, but I haven’t found mentors to help with some of the tougher aspects of navigating academia. I do okay at publishing, I think, but grants are hard to navigate when you don’t have more established scholars to include you on their projects so you can get your own track record.

I’ve had some very generous advice provided by older academics, but because they’re not quite in my area, it’s not always easy to implement: ‘Believe in your work!’ (I do!) ‘Try applying for grants x and y, they’re low-hanging fruit!’ (Their terms specifically prohibit the kind of research I’m most excited about.) I’m still trying to make these connections, but it’s hard to continually approach more senior academics with a lot of demands on their time and ask: can you help me?

I don’t know if there’s a place in academia that would fit me in the way that AoIR seems to fit others. Much of the work at AoIR is very close: there’s a significant concern with critiquing power structures and creating change in the world, albeit often coming with a different set of assumptions to my own. I met many wonderful people who I hope to stay in touch with, even if I often felt like I was getting in the way of them talking to people more important for their work. I also missed a few chances to meet and talk with others working in similar areas – perhaps in other, less exhausted years, I will be better at finding these connections.

But for others who might have felt the same way as I did at AoIR, worried that we weren’t belonging in the ways that others seem to, I wanted to write this. To remind myself, too, that it’s okay if there isn’t already a perfect home for me in academia. To remind myself that it’s okay to be at the margins. That sometimes even though it would be good to have a place already waiting to accept me, I just have to keep working at building communities where I can fit. Finding people, stitching networks, helping others who also feel out of place, questioning the assumptions that other people are working within. Sometimes, perhaps even often, remaining awkward, and doing my best to make the most of that space on the edge.

CryptogramNSA Contractor Arrested for Stealing Classified Information

The NSA has another contractor who stole classified documents. It's a weird story: "But more than a month later, the authorities cannot say with certainty whether Mr. Martin leaked the information, passed them on to a third party or whether he simply downloaded them." So maybe a potential leaker. Or a spy. Or just a document collector.

My guess is that there are many leakers inside the US government, even more than what's on this list from last year.

EDITED TO ADD (10/7): More information.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners October Meeting: Build a Simple RC Bot!

Oct 15 2016, 12:30 to 16:30

Infoxchange, 33 Elizabeth St. Richmond

Build a Simple RC Bot! Getting started with Arduino and Android

In this introductory talk, Ivan Lim Siu Kee will take you through the process of building a simple remote controlled bot. Find out how you can get started on building simple remote controlled bots of your own. While effort has been made to keep the presentation as beginner friendly as possible, some programming experience is still recommended to get the most out of this talk.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Planet Linux AustraliaCraig Sanders: Converting to a ZFS rootfs

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1GB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3
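
For reference, here’s a rough scripted equivalent using sgdisk, built from the sector boundaries in the table above. This is an untested sketch rather than part of the original procedure; double-check it against your own disk before running it.

#! /bin/bash
# Hedged sketch: recreate the partition layout above non-interactively.
# Sector numbers are taken straight from the gdisk output; adjust for your disk.
dev=/dev/sdp

sgdisk --clear --set-alignment=8 \
    --new=1:40:2047             --typecode=1:EF02 --change-name=1:"BIOS boot partition" \
    --new=2:2048:2099199        --typecode=2:EF00 --change-name=2:"EFI System" \
    --new=3:2099200:6293503     --typecode=3:8300 --change-name=3:"Linux filesystem" \
    --new=4:6293504:14682111    --typecode=4:8200 --change-name=4:"Linux swap" \
    --new=5:14682112:455084031  --typecode=5:BF07 --change-name=5:"Solaris Reserved 1" \
    --new=6:455084032:459278335 --typecode=6:BF08 --change-name=6:"Solaris Reserved 2" \
    --new=7:459278336:537234734 --typecode=7:BF09 --change-name=7:"Solaris Reserved 3" \
    "$dev"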

I then cloned the partition table to the other three SSDs with this little script:

#! /bin/bash

# NOTE: the source disk definition was missing from the post as published;
# /dev/sdp is the disk partitioned above.
src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" /dev/"$src"
  sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm array for /boot, the zpool, and the root filesystem.

Most rootfs-on-ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs. i.e. ganesh/root

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"

# NOTE: these four definitions were missing from the post as published and are
# reconstructed from the partition layout and fstab shown elsewhere in it.
base='ata-Crucial_CT275MX300SSD1_'   # /dev/disk/by-id/ prefix of the four SSDs
md_part=3                            # partition 3 = /boot (mdadm RAID-1)
zfs_part=5                           # partition 5 = ZFS rootfs
md=/dev/md0                          # the /boot array device

md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
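
As an illustration of the bit-torrent case (a sketch, not from the original post; the dataset name and mountpoint are made up), that might look like:

zfs create -o recordsize=16K -o mountpoint=/srv/torrents ganesh/torrents

The small recordsize limits COW fragmentation from the client’s many small random writes; moving finished downloads into another dataset then rewrites them with that dataset’s normal (default 128K) recordsize.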

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).
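
A minimal sketch of that shutdown step (an addition of mine, not from the original post; the service names are just the examples mentioned above, and on Debian the Apache unit is apache2, so adjust to whatever actually runs on your machine):

#! /bin/sh
# Hedged sketch: stop write-heavy services before the rsync.
for svc in postfix postgresql mysql apache2 asterisk docker ; do
    systemctl stop "$svc"
done

Then run the rsync itself: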


hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to be missing from the new setup) or something similar.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s, but that’s good enough: the Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-RAID-1 /boot:

/ganesh/root    /         zfs     defaults                                         0  0
/dev/md0        /boot     ext4    defaults,relatime,nodiratime,errors=remount-ro   0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
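
For completeness, here’s an untested sketch of what that swap setup might look like (my own addition: partition 4 on each SSD per the layout above, activated with equal priority so the kernel stripes across them; zswap itself is enabled separately, e.g. with zswap.enabled=1 on the kernel command line):

#! /bin/sh
# Hedged sketch: initialise and activate the four 4GB swap partitions.
for d in sdp sdq sdr sds ; do
    mkswap "/dev/${d}4"
    echo "/dev/${d}4 none swap sw,pri=10 0 0" >> /etc/fstab
    swapon "/dev/${d}4"
done
# Using UUID= or /dev/disk/by-id/ paths in fstab would be more robust than /dev/sd?.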

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest copying them to /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type EF02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/.

I fixed that problem with this script:

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot; otherwise you’ll get that error every time you run update-grub in future.
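
One untested, low-tech way to do the run-on-every-boot option is a cron @reboot entry (the script path here is hypothetical; it assumes you saved the symlink snippet above there):

# save the symlink snippet above as /usr/local/sbin/grub-byid-symlinks.sh, then:
cat > /etc/cron.d/grub-byid-symlinks <<'EOF'
@reboot root /usr/local/sbin/grub-byid-symlinks.sh
EOF
chmod 644 /etc/cron.d/grub-byid-symlinks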

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /.

#! /bin/sh

hn="$(hostname -s)"
md=/dev/md0    # the /boot array device (definition missing from the post as published)

for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes

  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
      mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk

10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.

Converting to a ZFS rootfs is a post from: Errata


Krebs on SecurityEurope to Push New Security Rules Amid IoT Mess

The European Commission is drafting new cybersecurity requirements to beef up security around so-called Internet of Things (IoT) devices such as Web-connected security cameras, routers and digital video recorders (DVRs). News of the expected proposal comes as security firms are warning that a great many IoT devices are equipped with little or no security protections.

According to a report at EurActiv, the Commission is planning the new IoT rules as part of a new plan to overhaul the European Union’s telecommunications laws. “The Commission would encourage companies to come up with a labeling system for internet-connected devices that are approved and secure,” wrote Catherine Stupp. “The EU labelling system that rates appliances based on how much energy they consume could be a template for the cybersecurity ratings.”

In last week’s piece, “Who Makes the IoT Things Under Attack?,” I looked at which companies are responsible for IoT products being sought out by Mirai — malware that scans the Internet for devices running default usernames and passwords and then forces vulnerable devices to participate in extremely powerful attacks designed to knock Web sites offline.

One of those default passwords — username: root and password: xc3511 — is in a broad array of white-labeled DVR and IP camera electronics boards made by a Chinese company called XiongMai Technologies. These components are sold downstream to vendors who then use them in their own products.

That information comes in an analysis published this week by Flashpoint Intel, whose security analysts discovered that the Web-based administration page for devices made by this Chinese company (http://ipaddress/Login.htm) can be trivially bypassed without even supplying a username or password, just by navigating to a page called “DVR.htm” prior to login.

Worse still, even if owners of these IoT devices change the default credentials via the device’s Web interface, those machines can still be reached over the Internet via communications services called “Telnet” and “SSH.” These are command-line, text-based interfaces that are typically accessed via a command prompt (e.g., in Microsoft Windows, a user could click Start, and in the search box type “cmd.exe” to launch a command prompt, and then type “telnet” to reach a username and password prompt at the target host).

“The issue with these particular devices is that a user cannot feasibly change this password,” said Flashpoint’s Zach Wikholm. “The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist.”

Flashpoint’s researchers said they scanned the Internet on Oct. 6 for systems that showed signs of running the vulnerable hardware, and found more than 515,000 of them were vulnerable to the flaws they discovered.

Flashpoint says the majority of media coverage surrounding the Mirai attacks on KrebsOnSecurity and other targets has outed products made by Chinese hi-tech vendor Dahua as a primary source of compromised devices. Indeed, Dahua’s products were heavily represented in the analysis I published last week.

For its part, Dahua appears to be downplaying the problem. On Thursday, Dahua published a carefully-worded statement that took issue with a Wall Street Journal story about the role of Dahua’s products in the Mirai botnet attacks.

“To clarify, Dahua Technology has maintained a B2B business model and sells its products through the channel,” the company said. “Currently in the North America market, we don’t sell our products directly to consumers and businesses through [our] website or retailers like Amazon. Amazon is not an approved Dahua distributor and we proactively conduct research to identify and take action against the unauthorized sale of our products. A list of authorized distributors is available here.”

Dahua said the company’s investigation determined the devices that became part of the DDoS attack had one or more of these characteristics:

-The devices were using firmware dating prior to January 2015.
-The devices were using the default user name and password.
-The devices were exposed to the internet without the protection of an effective network firewall.

The default login page of Xiongmai Technologies “Netsurveillance” and “CMS” software. Image: Flashpoint.

The default login page of Xiongmai Technologies “Netsurveillance” and “CMS” software. Image: Flashpoint.

Dahua also said that, to the best of the company’s knowledge, DDoS [distributed denial-of-service] threats have not affected any Dahua-branded devices deployed or sold in North America.

Flashpoint’s Wikholm said his analysis of the Mirai-infected nodes found differently: in the United States, Dahua makes up about 65% of the attacking sources (~3,000 Internet addresses in the US out of approximately 400,000 addresses total).


Dahua’s statement that devices which were enslaved as part of the DDoS botnet were likely operating under the default password is duplicitous, given that threats like Mirai spread via Telnet and because the default password can’t effectively be changed.

Dahua and other IoT makers who have gotten a free pass on security for years are about to discover that building virtually no security into their products is going to have consequences. It’s a fair bet that the European Commission’s promised IoT regulations will cost a handful of IoT hardware vendors plenty.

Also, in the past week I’ve heard from two different attorneys who are weighing whether to launch class-action lawsuits against IoT vendors who have been paying lip service to security over the years and have now created a massive security headache for the rest of the Internet.

I don’t normally think class-action lawsuits move the needle much, but in this case they seem justified because these companies are effectively dumping toxic waste onto the Internet. And make no mistake, these IoT things have quite a long half-life: A majority of them probably will remain in operation (i.e., connected to the Internet and insecure) for many years to come — unless and until their owners take them offline or manufacturers issue product recalls.

Perhaps Dahua is seeing the writing on the wall as well. In its statement this week, the company confirmed rumors reported by KrebsOnSecurity earlier, stating that it would be offering replacement discounts as “a gesture of goodwill to customers who wish to replace pre-January 2015 models.” But it’s not clear yet whether and/or how end-users can take advantage of this offer, as the company maintains it does not sell to consumers directly. “Dealers can bring such products to an authorized Dahua dealer, where a technical evaluation will be performed to determine eligibility,” the IoT maker said.

In a post on Motherboard this week, security expert Bruce Schneier argued that the universe of IoT things will largely remain insecure and open to compromise unless and until government steps in and fixes the problem.

“When we have market failures, government is the only solution,” Schneier wrote. “The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.”

I’m not planning on suing anyone related to these attacks, but I wonder what you think, dear reader? Are lawsuits and government regulations going to help mitigate the security threat from the 20 billion IoT devices that Gartner estimates will be plugged into the Internet by 2020? Sound off in the comments below.

Sky CroeserAoIR2016: Bending

This was a fascinating session run by The Fourchettes collective, with a focus on un-black-boxing, thinking about it through axes of power, and recognising spaces of invisibility (and the importance of preparing them). It was facilitated by Alison Harvey (University of Leicester, United Kingdom), Mary Elizabeth Luka (York University, Canada), Jessalynn Keller (University of Calgary, Canada), Tamara Shepherd (University of Calgary, Canada), and Mélanie Millette (Université de Québec à Montréal, Canada).

I’m afraid my notes are rather partial here, as I’m not always good at balancing participation with note-taking in the enlarged-fishbowl-more-of-a-pond format.

Tamara Shepherd’s introduction was very effective in setting the context for the discussion, noting the need for critical approaches to methodology, and the complications and ambiguities in developing those approaches.

Mary Luka talked about some of the ways in which an ethics of care is useful for thinking more deeply about methods. The ethics of care approach allows us to think about how ethics protocols obfuscate and manage different bodies of research. Building communities and networks is important, but this is often actually used with reference to ‘impact’ (in the ways it’s measured for academic work).

Mélanie Millette spoke on the paradoxes of balancing her personal experiences of research and the limitations of the formal ethics approach.

Jessalynn Keller talked about research pollination. Her research looks at how girls and women use digital technologies to challenge rape culture online, including their experiences and feelings around this kind of practice. This involves collecting a wide variety of materials across platforms. In constructing an archive of this material, it’s challenging to balance different priorities (including requirements as a junior researcher, and the desire to centre young women’s voices).

Alison Harvey talked about the obsession that develops in academia with typologies, and the benefits of taking individual words and thinking more deeply about them. Words that are very normalised in everyday practice, like ‘data’, are beginning to feel uncomfortable. Her participants are experts sharing their stories, and talking to them doesn’t feel like ‘data collection’. But we can’t just change these words, because we’re working within particular contexts. We also need to remember that no methodology is necessarily feminist. Feminist research approaches need to engage critically with the epistemological underpinnings of the process of research.

To give a very rough overview of some of the discussions that came up (perhaps more as a reminder to myself than anything else), with apologies for not being able to keep track of speakers:

  • Citation practices: who do we cite? Do we try to take texts that aren’t overtly feminist and try to read them against themselves? When we’re citing important contributions, including conference papers, how do we also protect people who may have been obfuscating their arguments for reasons like safety?
  • How do we support alternate citing practices as journal reviewers?
  • How do we find sources beyond the canon? Especially when most of the tools we use (like Google Scholar and our internal library research) embed the existing status quo?
  • Open Humanities Press is a useful place to look for resources, in particular Photomediations.
  • How do we escape marginal spaces within academia? (For example, not getting stuck within ‘work on queer issues’, ‘work on country x’.) How do we as readers help in this (remembering that an article or conference presentation on India or Poland may still be relevant)?
  • Emma Lawson? ‘Publish and perish’ talks about the challenges of making research more open.
  • We need to think about how industries surrounding academia (like publishing) can also be engaged in this work.
  • How do we use our privilege, including our privilege as researchers, to create change?
  • When we think about what communities we work with want, we need to keep asking what will be useful. The answers are surprising: sometimes it is to publish academic articles. How do we ask what communities want at scale? Or when we’re bringing communities into being through our research?
  • When we’re working with ‘unlikeable’ movements, often we don’t want to point the ethics of care in their direction: we might be researching movements that we know don’t want to be researched, but their desires aren’t the most ethically pressing.
  • How do we use a feminist ethics of care when doing larger-scale research?
  • How do we use teaching to create change? Whose texts do we foreground? How do we make students pay attention to the authors of texts (many students assume that authors are white men)? What teaching practices create change?



Sky CroeserAoIR2016: Forced migration and digital connectivity in(to) Europe – communicative infrastructures, regulations and media discourses

Mark Latonero (USC Annenberg School) spoke on the ways in which data is being collected around forced migration flows. Latonero is interested in the technologies that are being used to track and manage refugees’ movements across borders. People were stopping at the short border between Serbia and Croatia for a variety of reasons, including to get medical treatment, food, money transfers, and wireless access.

As we research these infrastructures, we also need to examine which actors are inserting themselves into these flows (or being drawn into them). Platforms like Facebook, Whatsapp, and Viber are being used to organise travel, while others, including Google and IBM, are developing responses aimed at supporting refugees or managing refugee flows. Coursera is offering online study for refugees, and there are also other edutech responses.

Aid agencies like UNHCR are teaming up with technology companies to try to develop support infrastructures: the World Food Program, for example, is coordinating with Mastercard. The ‘tech for good’ area, including techfugees, is also getting involved. Latonero is deeply doubtful that a lot of the hackathons in the West are going to produce systems that can help in meaningful ways.

We need to think about the social, political, and ethical consequences of the ways in which these technological structures of support, management, and surveillance are emerging.

Paula H. Kift (New York University, NY) In search of safe harbors: privacy and surveillance of refugees at the borders of Europe

There are two important EU regulations: Eurosur (drone and satellite surveillance of the Mediterranean sea), and Eurodac (which governs biometric data).

At the moment, the EU engages in drone and satellite surveillance of boats arriving, arguing that this doesn’t impinge on privacy because it tracks boats, not individuals. However, Kift argues that the right to privacy should impact on non-identifiability as well, and the data currently being gathered does have the ability to identify individuals in aggregate.

There are claims that data on boats may be used for humanitarian reasons, to respond to boats in distress, but the actual regulations don’t specify anything about how this might happen, or who would be responsible, which suggests that humanitarian claims are tacked on, rather than central to Eurosur.

Similarly, biometric data is being collected for indefinite storage, and this is justified with claims that it will be used to help deal with crime. This is clearly discriminatory, as refugees are no more likely to be involved in crime than citizens. Extensive biometric data is now being collected on children as young as six. This is particularly worrying for people who are fleeing government persecution.

The right to privacy should apply to refugees: blanket surveillance is discriminatory, has the potential to create serious threats to refugee safety, and is frequently being used for surveillance and control rather than any humanitarian purposes.

Kift suggests that the refusal to collect personally identifiable information can also be seen as problematic: states are refusing to process refugee claims, which creates further flow-on effects in terms of a lack of support and access to services.

Emerging coordination with tech firms creates further concerns: one organisation suggested creating an app that offered to give information on crossing borders and resettlement, but actually tracked refugee flows.

Çiğdem Bozdağ (Kadir Has University, Turkey) and Kevin Smets (Vrije Universiteit Brussel, Belgium and Universiteit Antwerpen, Belgium). Discourses about refugees and #AylanKurdi on Social Media

After the image of Aylan Kurdi was shared, research showed huge peaks in online discussions of refugees, and searches for information on refugees and Syria. However, these findings also raise further questions. Did this actually alter the debate on refugees? How did different actors use the impact of the image? And how did this take shape in different local and national contexts?

This research focused on Turkey and Belgium (and specifically on Flanders). Belgium has taken much fewer refugees than Turkey, but nevertheless there are significant debates about refugee issues in Belgium. In Maximiliaanpark, a refugee camp was set up outside the immigration offices in response to slow processing times.

In the tweets studied, there were a lot of ironic/cynical/sarcastic tweets, which would be hard to code quantitatively: qualitative methods were more appropriate to understanding these practices.

Among the citizen tweets studied, the two dominant narratives were refugees as victims, or refugees as threats. In Turkey, anti-government tweeters blame the government for victimising refugees, pro-government tweets blame the opposition, Assad, or humanity as a whole. In Belgium, refugees were mostly seen as victims of a lack of political action, or as the victims of instrumentalisation (by politicians, media, and NGOs). When refugees were seen as a threat, in Turkey this focused on Aylan’s Kurdish ethnicity, whereas in Belgium this drew on far-right frames.

Research also looked at reasons given for the refugee ‘crisis’: those who are against migration tended to focus on economic pull factors, those in favour tended to give more vague reasons (‘failure of humanity’). When solutions were provided, those employing a victim representation called for action and solidarity, whereas those seeing refugees as threats called for measures like closing borders.

When the image of Aylan emerged, it was usually incorporated into existing narratives, rather than changing them. The exception was ‘one-time tweeters’: people who had affective responses (a single tweet about their sadness about Aylan, before returning to their non-refugee tweets). Both Belgian and Turkish users tended to see Gulf countries as bad ‘others’ who do not take refugees. There was little focus on Daesh.

Twitter users who were opposed to immigration tended to employ the clearest vocabulary and framework: they were very forthright in expressing what they saw as the problem, and its solutions.

Unfortunately, the conclusion is pessimistic: the power of this image (on Twitter) is limited: it didn’t disrupt existing discourses, and there were also great similarities with how refugees and refugee issues are portrayed in the mainstream media.

Eugenia Siapera (Dublin City University, Ireland) and Moses Boudourides’ (University of Patras, Greece) work looks at the representation of refugee issues on Twitter.

There are two important theoretical frameworks: digital storytelling (Nick Couldry) and affective publics (Zizi Papacharissi). Affective publics both reflect and reorganise structures of feeling: the culture, mood, and feel of given historical moments. The refugee issue is a polymedia event, but this research focuses specifically on Twitter.

What are the affective publics around the ‘refugee issue’? There wasn’t one debate, but overlapping rhythms. Here, there were four key events: the Paris attacks, the Cologne station sexual assaults, the Idomeni crisis, and the Brussels bombing.

This research used large-scale capture of relevant tweets across many different languages. The overall story is about crisis, about European countries and their response, about children and human rights, told in many languages. It concerns political institutions and politicians, as well as terrorist attacks and US right-wing politics. Canada and Australia are also very much involved.

Incidents in particular countries rapidly become entangled with narratives elsewhere, as they were incorporated into national debates. There’s a tendency for discussions on Twitter to fit into existing narratives and discourses.

Kaarina Nikunen, University of Tampere. Embodied solidarities: online participation of refugees and migrants as a political struggle

By drawing together the public and private, campaigns build affective engagement that can be thought of as media solidarities. This research looks at ‘Once I was a refugee’, where refugees use their own voice and bodies to embody solidarity.

In Finland, the refugee population is very low: since 1973, the country has accepted only 42,000 people with refugee status. In 2015, 30,000 refugees came, which was a significant change. The refugee presence in the public debate is very small. Debates are really between politicians and some NGOs. Refugees are silent in the mainstream media.

‘Once I was a refugee’ was initiated by two journalists, following from examples in other European countries. It began in June 2015, which was crucial timing: August and September saw attacks on several reception centres, and anti-refugee rallies calling for borders to be closed. Public debates focused on the economic cost on Finland’s welfare state. The campaign tried to build a counter-narrative to these claims.

Within a few days, many young Finns shared their photos on the site: there are now 172 stories on the Facebook site. The format for stories is the same: “Once I was a refugee, now I’m a …” The site gained national attention, including in the mainstream press. It provided alternative images of labour, education, and value. The narratives are united by optimism: while they may have a sense of struggle, they highlight successful integration.

Most end with gratitude: “thank you, Finland”. This highlights the sense that refugees had (and have) of having to earn their citizenship. Uniforms are used to signal order and belonging. In particular, there are many images of people wearing army uniforms – these also gain the most shares. This can be seen as an attempt to counter claims of ‘dangerous’ refugee bodies.

Responses sometimes drew divisions between these ‘acceptable’ refugees and the need to refuse others. We should also recognise that the campaign requires former refugees to become vulnerable and visible: this is clear from the ways in which images become the focus of discussion for those against immigration. The campaign didn’t disrupt the narrative of refugees as primarily an economic burden which needed to be dealt with.

However, ‘Once I was a refugee’ did open space where refugees spoke up in their defence (when others weren’t), emphasising their value and agency, and engaging in the national political debate.