Planet Russell


Planet Debian: Niels Thykier: Introducing the debhelper buildlabel prototype for multi-building packages

For most packages, the “dh” short-hand rules (possibly with a few overrides) work great.  It can often auto-detect the buildsystem and handle all the trivial parts.

With one notable exception: What if you need to compile the upstream code twice (or more) with different flags?  This is the case for all source packages building both regular debs and udebs.

In that case, you would previously need to override about 5-6 helpers for this to work at all: the five dh_auto_* helpers and usually also dh_install (to call it with a different --sourcedir for different packages).  This gets even more complex if you want to support Build-Profiles such as “noudeb” and “nodoc”.

The best way to support “nodoc” in debhelper is to move documentation out of dh_install’s config files and use dh_installman, dh_installdocs, and dh_installexamples instead (NB: wait for compat 11 before doing this).  This in turn will mean more overrides with --sourcedir and -p/-N.
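For instance, documentation that would otherwise be listed in dh_install's config file can move into a debian/foo.docs file read by dh_installdocs (the file contents here are purely illustrative, not from the post):

```
README
NEWS
doc/manual.html
```

With the documentation handled by the dedicated helpers, a "nodoc" build is honoured by those helpers themselves rather than via conditionals in debian/rules.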

And then there is “noudeb”, which currently requires manual handling in debian/rules.  Basically, you need to use make or shell if-statements to conditionally skip the udeb part of the builds.
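Such manual handling usually ends up as profile checks in debian/rules, along these lines (a hedged sketch using the flags of the running example below; recipe lines must be indented with tabs):

```make
# Manual "noudeb" support without buildlabels: every udeb-related
# helper call has to be wrapped in a build-profile check.
override_dh_auto_configure:
	dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
ifeq (,$(filter noudeb,$(DEB_BUILD_PROFILES)))
	dh_auto_configure -B build-udeb -- --without-feature1 --without-feature2
endif
```

Multiply that by every dh_auto_* and dh_install* call involved and the rules file quickly fills up with boilerplate.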

All of this is needlessly complex.

Improving the situation

In an attempt to make things better, I have made a new prototype feature in debhelper called “buildlabels” in experimental.  The current prototype is designed to deal with part (but not all) of the above problems:

  • It will remove the need for littering your rules file with conditionals just to support “noudeb” (and in some cases also other “noX” profiles).
  • It will remove the need for overriding the dh_install* tools just to juggle --sourcedir and -p/-N.

However, it currently does not solve the need for overriding the dh_auto_* tools, and I am not sure when/if it will.

The feature relies on being able to relate packages to a given series of calls to dh_auto_*.  In the following example, I will use udebs for the secondary build.  However, this feature is not tied to udebs in any way and can be used by any source package that needs to do two or more upstream builds for different packages.

Assume our example source builds the following binary packages:

  • foo
  • libfoo1
  • libfoo-dev
  • foo-udeb
  • libfoo1-udeb

And in the rules file, we would have something like:


    dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
    dh_auto_configure -B build-udeb -- --without-feature1 --without-feature2


What is somewhat obvious to a human is that the first configure line is related to the regular debs and the second configure line is for the udebs.  However, debhelper does not know how to infer this, and this is where buildlabels come in.  With buildlabels, you can let debhelper know which packages and builds belong together.

How to use buildlabels

To use buildlabels, you have to do three things:

  1. Pick a reasonable label name for the secondary build.  In the example, I will use “udeb”.
  2. Add “--buildlabel=$LABEL” to all dh_auto_* calls related to your secondary build.
  3. Tag all packages related to the secondary build with “X-DH-Buildlabel: $LABEL” in debian/control.  (For udeb packages, you may want to add “Build-Profiles: <!noudeb>” while you are at it.)
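For the running example, the resulting debian/control stanzas for the two udeb packages would look something like this (a sketch; all other fields are elided for brevity):

```
Package: foo-udeb
Package-Type: udeb
X-DH-Buildlabel: udeb
Build-Profiles: <!noudeb>

Package: libfoo1-udeb
Package-Type: udeb
X-DH-Buildlabel: udeb
Build-Profiles: <!noudeb>
```
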

For the example package, we would change the debian/rules snippet to:


    dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
    dh_auto_configure --buildlabel=udeb -B build-udeb -- --without-feature1 --without-feature2


(Remember to update *all* calls to dh_auto_* helpers; the above only lists dh_auto_configure to keep the example short.)  And then add “X-DH-Buildlabel: udeb” in the stanzas for foo-udeb + libfoo1-udeb.

With those two minor changes:

  • debhelper will skip the calls to dh_auto_* with --buildlabel=udeb if the udeb packages are skipped.
  • dh_auto_install will automatically pick a separate destination directory by default for the udeb build (assuming you do not explicitly override it with --destdir).
  • dh_install will now automatically pick up files from the destination directory that dh_auto_install used for the given package (even if you overrode it with --destdir).  Note that you have to remove any use of “--sourcedir” first, as this disables the auto-detection.  This also works for other dh_install* tools supporting --sourcedir in compat 11 or later.
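As a concrete before/after: an override of the following shape, juggling --sourcedir and -p/-N by hand, should no longer be needed once labels are in place (the directory names below are hypothetical, not debhelper's actual defaults):

```make
# Pre-buildlabel override that buildlabels make unnecessary:
# dh_install now locates each package's staging directory itself.
override_dh_install:
	dh_install -Nfoo-udeb -Nlibfoo1-udeb --sourcedir=debian/tmp
	dh_install -pfoo-udeb -plibfoo1-udeb --sourcedir=debian/tmp-udeb
```
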

Real example

Thanks to Michael Biebl, I was able to make a branch in the systemd git repository to play with this feature.  Therefore I have a real example to use as a showcase.  The gist of it is in the following three commits:

The full branch can be seen at:

Request for comments / call for testing

This prototype is now in experimental (debhelper/10.7+exp.buildlabels) and you are very welcome to take it for a spin.  Please let me know if you find the idea useful and feel free to file bugs or feature requests.  If deemed useful, I will merge it into master and include it in a future release.

If you have any questions or comments about the feature or need help with trying it out, you are also very welcome to mail the debhelper-devel mailing list.

Known issues / the fine print:

  • It is an experimental feature and may change without notice.
  • The current prototype may break existing packages, as it is not guarded by a compat bump; this is deliberate, to ease your testing.  I am still very curious to hear about any issues you may experience.
  • The default build directory is re-used even with different buildlabels, so you still need to use explicit build dirs for buildsystems that prefer building in a separate directory (e.g. meson).
  • udebs are not automatically tagged with a “udeb” buildlabel.  This is partly by design, as some source packages only build udebs (and no regular debs); if udebs were automatically tagged, those existing packages would break.
  • Label names are used in path names, so you may want to refrain from using “too exciting” label names.
  • It is an experimental feature and may change without notice.  (Yes, I thought that deserved repeating.)

Filed under: Debhelper, Debian


Planet Linux Australia: David Rowe: QSO Today Podcast

Eric, 4Z1UG, has kindly interviewed me for his fine QSO Today Podcast.

Planet Debian: Antoine Beaupré: My free software activities, July 2017

Debian Long Term Support (LTS)

This is my monthly report of working on Debian LTS. This time I worked on various hairy issues surrounding ca-certificates, unattended-upgrades, apache2 regressions, libmtp, tcpdump and ipsec-tools.

ca-certificates updates

I've been working on the removal of the WoSign and StartCom certificates (Debian bug #858539) and, in general, the synchronisation of ca-certificates across suites (Debian bug #867461) since at least last March. I have made an attempt at summarizing the issue which led to a productive discussion and it seems that, in the end, the maintainer will take care of synchronizing information across suites.

Guido was right in again raising the question of synchronizing NSS across all suites (Debian bug #824872), which itself raised the other question of how to test reverse dependencies. This brings me back to Debian bug #817286, which basically proposed the idea of having "proposed updates" for security issues. The problem is that while we can upload test packages to stable proposed-updates, we can't do the same in LTS because the suite is closed and we operate only on security packages. This issue came up before in other security uploads and we need to think better about how to solve this.


Speaking of security upgrades brings me to the question of a bug (Debian bug #867169) that was filed against the wheezy version of unattended-upgrades, which showed that the package simply stopped working since the latest stable release, because wheezy became "oldoldstable". I first suggested using the "codename" but that appears to have been introduced only after wheezy.

In the end, I proposed a simple update that would fix the configuration files and uploaded this as DLA-1032-1. This is thankfully fixed in later releases and will not require such hackery when jessie becomes LTS as well.


Next up is the work on the libmtp vulnerabilities (CVE-2017-9831 and CVE-2017-9832). As I described in my announcement, the work to backport the patch was huge, as upstream basically backported a whole library from the gphoto2 package to fix those issues (and probably many more). The lack of a test suite made it difficult to trust my own work, but given that I had no (negative) feedback, I figured it was okay to simply upload the result and that became DLA-1029-1.


I then looked at reproducing CVE-2017-11108, a heap overflow triggered when tcpdump parses specially crafted STP packets. In Debian bug #867718, I described how to reproduce the issue across all suites and opened an issue upstream, given that the upstream maintainers hadn't responded in weeks according to notes in the RedHat Bugzilla issue. I eventually worked on a patch which I shared upstream, but that was rejected as they were already working on it in their embargoed repository.

I can explain this confusion and duplication of work with:

  1. the original submitter didn't really contact upstream
  2. he did and they didn't reply, being just too busy
  3. they replied and he didn't relay that information back

I think #2 is most likely: the upstream folks are probably very busy with tons of reports like this. Still, I should probably have contacted them directly before starting my work, even though no harm was done because I didn't divulge issues that were already public.

Since then, tcpdump has released 4.9.1 which fixes the issue, but then new CVEs came out that will require more work and probably another release. People looking into this issue must be certain to coordinate with the tcpdump security team before fixing the actual issues.


Another package that didn't quite have a working solution is the ipsec-tools suite, in which the racoon daemon was vulnerable to a remotely-triggered DoS attack (CVE-2016-10396). I reviewed the upstream patch, which had introduced a regression, and fixed it. Unfortunately, there is no test suite or proof of concept to control the results.

The reality is that ipsec-tools is really old, and should maybe simply be removed from Debian, in favor of strongswan. Upstream hasn't done a release in years and various distributions have patched up forks of those to keep it alive... I was happy, however, to know that a maintainer will take care of updating the various suites, including LTS, with my improved patch. So this fixes the issue for now, but I would strongly encourage users to switch away from ipsec-tools in the future.


Finally, I was bitten by the old DLA-841-1 upload I did all the way back in February, as it introduced a regression (Debian bug #858373). It turns out it was possible to segfault Apache workers with a trivial HTTP request, in certain (rather exotic, I might add) configurations (ErrorDocument 400 directive pointing to a cgid script in worker mode).

Still, it was a serious regression and I found a part of the nasty long patch we worked on back then that was faulty, and introduced a small fix to correct that. The proposed package unfortunately didn't yield any feedback, and I can only assume it will work okay for people. The result is the DLA-841-2 upload which fixes the regression. I unfortunately didn't have time to work on the remaining CVEs affecting apache2 in LTS at the time of writing.


I also did some miscellaneous triage by filing Debian bug #867477 for poppler in an effort to document better the pending issue.

Next up was some minor work on eglibc issues. CVE-2017-8804 has a patch, but it's been disputed. Since the main victim of this and the core of the vulnerability (rpcbind) has already been fixed, I am not sure this vulnerability is still a thing in LTS at all.

I also looked at CVE-2014-9984, but the code is so different in wheezy that I wonder if LTS is affected at all. Unfortunately, the eglibc gymnastics are a little beyond me and I do not feel confident enough to work on them, so I pushed those issues aside for now and left them open for others to look at.

Other free software work

And of course, there's my usual monthly volunteer work. My ratio is a little better this time, having reached a roughly even split between paid and volunteer work, whereas it was 60% volunteer work in March.

Announcing ecdysis

I recently published ecdysis, a set of templates and code samples that I frequently reuse across projects. This is probably the least pronounceable project name I have ever chosen, but this is somewhat on purpose. The goal of this project is not collaboration or to become a library: it's just a personal project which I share with the world as a curiosity.

To quote the README file:

The name comes from what snakes and other animals do to "create a new snake": they shed their skin. This is not so appropriate for snakes, as it's just a way to rejuvenate their skin, but it is especially relevant for arthropods, for which "ecdysis" may be associated with a metamorphosis:

Ecdysis is the moulting of the cuticle in many invertebrates of the clade Ecdysozoa. Since the cuticle of these animals typically forms a largely inelastic exoskeleton, it is shed during growth and a new, larger covering is formed. The remnants of the old, empty exoskeleton are called exuviae. — Wikipedia

So this project is metamorphosed into others when the documentation templates, code examples and so on are reused elsewhere. For that reason, the license is an unusually liberal (for me) MIT/Expat license.

The name also has the nice property of being absolutely unpronounceable, which makes it unlikely to be copied but easy to search online.

It was an interesting exercise to go back into older projects and factor out interesting code. The process is not complete yet, as there are older projects I'm still curious about reviewing. A bunch of that code could also be factored into upstream projects and maybe even the Python standard library.

In short, this is stuff I keep on forgetting how to do: a proper config, some fancy argparse extensions and so on. Instead of having to remember where I had written that clever piece of code, I now shove it in the crazy chaotic project where I can find it again in the future.

Beets experiments

Since I started using Subsonic (or Libresonic) to manage the music on my phone, album covers are suddenly way more interesting. But my collection so far has had limited album covers: my other media player (gmpc) would download those on the fly on its own and store them in its own database - not on the filesystem. I guess this could be considered to be a limitation of Subsonic, but I actually appreciate the separation of duty here. Garbage in, garbage out: the quality of Subsonic's rendering depends largely on how well set up your library and tags are.

It turns out there is an amazing tool called beets to do exactly that kind of stuff. I originally discarded that "media library management system for obsessive-compulsive [OC] music geeks", trying to convince myself I was not an "OC music geek". Turns out I am. Oh well.

Thanks to beets, I was able to download album covers for a lot of the albums in my collection. The only covers that are missing now are albums that are not correctly tagged and that beets couldn't automatically fix up. I still need to go through those and fix all those tags, but the first run did an impressive job at getting album covers.

Then I got the next crazy idea: after a camping trip where we forgot (again) the lyrics to Georges Brassens, I figured I could start putting some lyrics on my ebook reader. "How hard can that be?" of course, being the start of another crazy project. A pull request and 3 days later, I had something that could turn a beets lyrics database into a Sphinx document which, in turn, can be turned into an ePUB. In the process, I probably got blocked from MusixMatch a hundred times, but it's done. Phew!

The resulting e-book is about 8000 pages long, but is still surprisingly responsive. In the process, I also happened to do a partial benchmark of Python's bloom filter libraries. The biggest surprise there was the performance of the set builtin: for small items, it is basically as fast as a bloom filter. Of course, when the item size grows larger, its memory usage explodes, but in this case it turned out to be sufficient, and a bloom filter completely overkill and confusing.
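The set-versus-bloom-filter observation is easy to reproduce. Here is a self-contained sketch using a toy Bloom filter (an illustrative reimplementation, not one of the libraries actually benchmarked in the post):

```python
import hashlib
import timeit

class TinyBloom:
    """A deliberately simple Bloom filter, for illustration only."""
    def __init__(self, size=2**20, hashes=4):
        self.size = size        # number of bits
        self.hashes = hashes    # number of hash positions per item
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # Derive `hashes` bit positions from one SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.hashes):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Populate both structures with the same small string keys.
words = [f"word{i}" for i in range(10000)]
bloom = TinyBloom()
s = set()
for w in words:
    bloom.add(w)
    s.add(w)

# Time membership checks: the builtin set is hard to beat for small items.
t_set = timeit.timeit(lambda: "word42" in s, number=10000)
t_bloom = timeit.timeit(lambda: "word42" in bloom, number=10000)
print(f"set: {t_set:.4f}s  bloom: {t_bloom:.4f}s")
```

For a few thousand short strings the set wins on both simplicity and speed; a Bloom filter only pays off once the key set is too large to hold in memory and a small false-positive rate is acceptable.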

Oh, and thanks to those efforts, I got admitted in the beetbox organization on GitHub! I am not sure what I will do with that newfound power: I was just scratching an itch, really. But hopefully I'll be able to help here and there in the future as well.

Debian package maintenance

I did some normal upkeep on a bunch of my packages this month, that were long overdue:

  • uploaded slop 6.3.47-1: major new upstream release
  • uploaded an NMU for maim 5.4.64-1.1: maim was broken by the slop release
  • uploaded pv 1.6.6-1: new upstream release
  • uploaded kedpm 1.0+deb8u1 to jessie (oldstable): one last security fix (Debian bug #860817, CVE-2017-8296) for that derelict password manager
  • uploaded charybdis 3.5.5-1: new minor upstream release, with optional support for mbedtls
  • filed Debian bug #866786 against cryptsetup to make the remote initramfs SSH-based unlocking support multiple devices: thanks to the maintainer, this now works flawlessly in buster and may be backported to stretch
  • expanded on Debian bug #805414 against gdm3 and Debian bug #845938 against pulseaudio, because I had trouble connecting my computer to this new Bluetooth speaker. turns out this is a known issue in Pulseaudio: whereas it releases ALSA devices, it doesn't release Bluetooth devices properly. Documented this more clearly in the wiki page
  • filed Debian bug #866790 regarding old stray Apparmor profiles that were lying around my system after an upgrade, which got me interested in Debian bug #830502 in turn
  • filed Debian bug #868728 against cups regarding a weird behavior I had interacting with a network printer. turns out the other workstation was misconfigured... why are printers still so hard?
  • filed Debian bug #870102 to automate sbuild schroots upgrades
  • after playing around with rash, I tried to complete the packaging (Debian bug #754972) of percol with this pull request upstream. this ended up being way too much overhead and I reverted to my old history habits.

Planet Debian: Dirk Eddelbuettel: Updated overbought/oversold plot function

A good six years ago I blogged about plotOBOS() which charts a moving average (from one of several available variants) along with shaded standard deviation bands. That post has a bit more background on the why/how and motivation, but as a teaser here is the resulting chart of the SP500 index (with ticker ^GSPC):

Example chart of overbought/oversold levels from plotOBOS() function 

The code uses a few standard finance packages for R (with most of them maintained by Joshua Ulrich given that Jeff Ryan, who co-wrote chunks of these, is effectively retired from public life). Among these, xts had a recent release reflecting changes which occurred during the four (!!) years since the previous release, and covering at least two GSoC projects. With that came subtle API changes: something we all generally try to avoid but which is at times the only way forward. In this case, the shading code I used (via polygon() from base R) no longer cooperated with the beefed-up functionality of plot.xts(). Luckily, Ross Bennett incorporated that same functionality into a new function addPolygon --- which even credits this same post of mine.

With that, the updated code becomes

## plotOBOS -- displaying overbought/oversold as eg in Bespoke's plots
## Copyright (C) 2010 - 2017  Dirk Eddelbuettel
## This is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 2 of the License, or
## (at your option) any later version.

suppressMessages(library(quantmod))     # for getSymbols(), brings in xts too
suppressMessages(library(TTR))          # for various moving averages

plotOBOS <- function(symbol, n=50, type=c("sma", "ema", "zlema"),
                     years=1, blue=TRUE, current=TRUE, title=symbol,
                     ticks=TRUE, axes=TRUE) {

    today <- Sys.Date()
    if (class(symbol) == "character") {
        X <- getSymbols(symbol, from=format(today-365*years-2*n), auto.assign=FALSE)
        x <- X[,6]                          # use Adjusted
    } else if (inherits(symbol, "zoo")) {
        x <- X <- as.xts(symbol)
        current <- FALSE                # don't expand the supplied data
    }

    n <- min(nrow(x)/3, 50)             # as we may not have 50 days

    sub <- ""
    if (current) {
        xx <- getQuote(symbol)
        xt <- xts(xx$Last, order.by=as.POSIXct(xx$`Trade Time`))
        colnames(xt) <- paste(symbol, "Adjusted", sep=".")
        x <- rbind(x, xt)
        sub <- paste("Last price: ", xx$Last, " at ",
                     format(as.POSIXct(xx$`Trade Time`), "%H:%M"), sep="")
    }

    type <- match.arg(type)
    xd <- switch(type,                  # compute xd as the central location via selected MA smoother
                 sma = SMA(x,n),
                 ema = EMA(x,n),
                 zlema = ZLEMA(x,n))
    xv <- runSD(x, n)                   # compute xv as the rolling volatility

    strt <- paste(format(today-365*years), "::", sep="")
    x  <- x[strt]                       # subset plotting range using xts' nice functionality
    xd <- xd[strt]
    xv <- xv[strt]

    xyd <- xy.coords(.index(xd),xd[,1]) # xy coordinates for direct plot commands
    xyv <- xy.coords(.index(xv),xv[,1])

    n <- length(xyd$x)
    xx <- xyd$x[c(1,1:n,n:1)]           # for polygon(): from first point to last and back

    if (blue) {
        blues5 <- c("#EFF3FF", "#BDD7E7", "#6BAED6", "#3182BD", "#08519C") # cf brewer.pal(5, "Blues")
        fairlylight <<- rgb(189/255, 215/255, 231/255, alpha=0.625) # aka blues5[2]
        verylight <<- rgb(239/255, 243/255, 255/255, alpha=0.625) # aka blues5[1]
        dark <<- rgb(8/255, 81/255, 156/255, alpha=0.625) # aka blues5[5]
        ## buglet in xts 0.10-0 requires the <<- here
    } else {
        fairlylight <<- rgb(204/255, 204/255, 204/255, alpha=0.5)  # two suitable grays, alpha-blending at 50%
        verylight <<- rgb(242/255, 242/255, 242/255, alpha=0.5)
        dark <<- 'black'
    }

    plot(x, ylim=range(range(x, xd+2*xv, xd-2*xv, na.rm=TRUE)), main=title, sub=sub,
         major.ticks=ticks, minor.ticks=ticks, axes=axes) # basic xts plot setup
    addPolygon(xts(cbind(xyd$y+xyv$y, xyd$y+2*xyv$y), order.by=index(xd)), on=1, col=fairlylight)  # upper
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y+1*xyv$y), order.by=index(xd)), on=1, col=verylight)    # center
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y-2*xyv$y), order.by=index(xd)), on=1, col=fairlylight)  # lower
    lines(xd, lwd=2, col=fairlylight)   # central smoothed location
    lines(x, lwd=3, col=dark)           # actual price, thicker
}

and the main changes are the three calls to addPolygon.  To illustrate, we call plotOBOS("SPY", years=2), yielding an updated plot of the ETF representing the SP500 over the last two years:

Updated example chart of overbought/oversold levels from plotOBOS() function 

Comments and further enhancements welcome!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Robert McQueen: Welcome, Flathub!

Alex Larsson talks about Flathub at GUADEC 2017

At the Gtk+ hackfest in London earlier this year, we stole an afternoon from the toolkit folks (sorry!) to talk about Flatpak, and how we could establish a “critical mass” behind the Flatpak format. Bringing Linux container and sandboxing technology together with ostree, we’ve got a technology which solves real world distribution, technical and security problems which have arguably held back the Linux desktop space and frustrated ISVs and app developers for nearly 20 years. The problem we need to solve, like any ecosystem, is one of users and developers – without stuff you can easily get in Flatpak format, there won’t be many users, and without many users, we won’t have a strong or compelling incentive for developers to take their precious time to understand a new format and a new technology.

As Alex Larsson said in his GUADEC talk yesterday: Decentralisation is good. Flatpak is a tool that is totally agnostic of who is publishing the software and where it comes from. For software freedom, that’s an important thing because we want technology to empower users, rather than tell them what they can or can’t do. Unfortunately, decentralisation makes for a terrible user experience. At present, the Flatpak webpage has a manually curated list of links to 10s of places where you can find different Flatpaks and add them to your system. You can’t easily search and browse to find apps to try out – so it’s clear that if the current situation remains we’re not going to be able to get a critical mass of users and developers around Flatpak.

Enter Flathub. The idea is that by creating an obvious “center of gravity” for the Flatpak community to contribute and build their apps, users will have one place to go and find the best that the Linux app ecosystem has to offer. We can take care of the boring stuff like running a build service and empower Linux application developers to choose how and when their app gets out to their users. After the London hackfest we sketched out a minimum viable system – Github, Buildbot and a few workers – and got it going over the past few months, culminating in a mini-fundraiser to pay for the hosting of a production-ready setup. Thanks to the 20 individuals who supported our fundraiser, to Mythic Beasts who provided a server along with management, monitoring and heaps of bandwidth, and to Codethink and Scaleway who provide our ARM and Intel workers respectively.

We inherit our core principles from the Flatpak project – we want the Flatpak technology to succeed at alleviating the issues faced by app developers in targeting a diverse set of Linux platforms. None of this stops you from building and hosting your own Flatpak repos and we look forward to this being a wide and open playing field. We care about the success of the Linux desktop as a platform, so we are open to proprietary applications through Flatpak’s “extra data” feature where the client machine downloads 3rd party binaries. They are correctly labeled as such in the AppStream, so will only be shown if you or your OS has configured GNOME Software to show you apps with proprietary licenses, respecting the user’s preference.

The new infrastructure is up and running and I put it into production on Thursday. We rebuilt the whole repository on the new system over the course of the week, signing everything with our new 4096-bit key stored on a Yubikey smartcard USB key. We have 66 apps at the moment, although Alex is working on bringing in the GNOME apps at present – we hope those will be joined soon by the KDE apps, and Endless is planning to move over as many of our 3rd party Flatpaks as possible over the coming months.

So, thanks again to Alex and the whole Flatpak community, and the individuals and the companies who supported making this a reality. You can add the repository and get downloading right away. Welcome to Flathub! Go forth and flatten… 🙂

Flathub logo

Cory Doctorow: A Hopeful Look At The Apocalypse: interview with Innovation Hub

I’m on the latest episode of Innovation Hub (MP3):

Science-fiction is a genre that imagines the future. It doesn’t necessarily predict the future (after all, where are flying cars?), but it grapples with the technological and societal changes happening today to better understand our world and where it’s heading.

So, what does it mean when so much of our most popular science-fiction – The Handmaid’s Tale, The Walking Dead, and The Hunger Games – present bleak, depressing futures? Cory Doctorow might just have an answer. He’s a blogger, writer, activist, and author of the new book Walkaway, an optimistic disaster novel.

Three Takeaways

* Doctorow thinks that science-fiction can give people “ideas for what to do if the future turns out in different ways.” Like how William Gibson’s Neuromancer didn’t just predict the internet, it predicted the intermingling of corporations and the state.

* When you have story after story about how people turn on each other after a disaster, Doctorow believes it gives us the largely false impression that people act like jerks in crises when, in fact, people usually rise to the occasion.

* With Walkaway, his “optimistic” disaster novel, Doctorow wanted to present a new narrative about resolving differences between people who are mostly on the same side.

Cryptogram: Roombas will Spy on You

The company that sells the Roomba autonomous vacuum wants to sell the data about your home that it collects.

Some questions:

What happens if a Roomba user consents to the data collection and later sells his or her home -- especially furnished -- and now the buyers of the data have a map of a home that belongs to someone who didn't consent, Mr. Gidari asked. How long is the data kept? If the house burns down, can the insurance company obtain the data and use it to identify possible causes? Can the police use it after a robbery?

EDITED TO ADD (6/29): Roomba is backtracking -- for now.

Planet Debian: Robert McQueen: Hello again!

Like all good blog posts, this one starts with an apology about not blogging for ages – in my case it looks like it’s been about 7 years which is definitely a new personal best (perhaps the equally or more remarkable thing is that I have diligently kept WordPress running in the meantime). In that time, as you might expect, a few things have happened, like I met a wonderful woman and fell in love and we have two wonderful children. I also decided to part ways with my “first baby” and leave my role as CTO & co-founder of Collabora. This was obviously a very tough decision – it’s a fantastic team where I met and made many life-long friends, and they are still going strong and doing awesome things with Open Source. However, shortly after that, in February last year, I was lucky enough to land what is basically a dream job working at Endless Computers as the VP of Deployment.

As I’m sure most readers know, Endless is building an OS to bring personal computing to millions of new users across the world. It’s based on Free Software like GNOME, Debian, ostree and Flatpak, and the more successful Endless is, the more people who get access to education, technology and opportunity – and the more FOSS users and developers there will be in the world. But in my role, I get to help define the product, understand our users and how technology might help them, take Open Source out to new people, solve commercial problems to get Endless OS out into the world, manage a fantastic team and work with bright people, learn from great managers and mentors, and still find time to squash bugs, review and write more code than I used to. Like any startup, we have a lot to do and not enough time to do it, so although there aren’t quite enough days in the week, I’m really happy!

In any case, the main point of this blog post is that I’m at GUADEC in Manchester right now, and I’d like to blog about Flathub, but I thought it would be weird to just show up and say that after 7 years of silence without saying hello again. 🙂

Planet Debian: Russell Coker: Apache Mesos on Debian

I decided to try packaging Mesos for Debian/Stretch. I had a spare system with a i7-930 CPU, 48G of RAM, and SSDs to use for building. The i7-930 isn’t really fast by today’s standards, but 48G of RAM and SSD storage mean that overall it’s a decent build system – faster than most systems I run (for myself and for clients) and probably faster than most systems used by Debian Developers for build purposes.

There’s a github issue about the lack of an upstream package for Debian/Stretch [1]. That upstream issue could probably be worked around by adding Jessie sources to the APT sources.list file, but a package for Stretch is what is needed anyway.

Here is the documentation on building for Debian [2]. The list of packages it gives as build dependencies is incomplete; it also needs zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev. So BUILDING this software requires Java + Maven, Ruby, and Python along with autoconf, libtool, and all the usual Unix build tools. It also requires the FPM (Fucking Package Management) tool; I take the choice of name as an indication of the professionalism of the author.

Building the software on my i7 system took 79 minutes, which includes 76 minutes of CPU time (I didn’t use the -j option to make). At the end of the build it turned out that I had mistakenly failed to install the Fucking Package Management “gem” and it aborted. At this stage I gave up on Mesos; the pain involved exceeded my interest in trying it out.

How to do it Better

One of the aims of Free Software is that bugs are more likely to get solved if many people look at them. There aren’t many people who will devote 76 minutes of CPU time on a moderately fast system to investigate a single bug. To deal with this, software should be prepared as components. An example of this is the SE Linux project which has 13 source modules in the latest release [3]. Of those 13 only 5 are really required. So anyone who wants to start on SE Linux from source (without considering a distribution like Debian or Fedora that has it packaged) can build the 5 most important ones. Also anyone who has an issue with SE Linux on their system can find the one source package that is relevant and study it with a short compile time. As an aside I’ve been working on SE Linux since long before it was split into so many separate source packages and know the code well, but I still find the separation convenient – I rarely need to work on more than a small subset of the code at one time.

The requirement of Java, Ruby, and Python to build Mesos could be partly due to language interfaces to call Mesos interfaces from Ruby and Python. One solution to that is to have the C libraries and header files to call Mesos and have separate packages that depend on those libraries and headers to provide the bindings for other languages. Another solution is to have autoconf detect that some languages aren’t installed and just not try to compile bindings for them (this is one of the purposes of autoconf).

The use of a tool like Fucking Package Management means that you don’t get help from experts in the various distributions in making better packages. When there is a FOSS project with a debian subdirectory that makes barely functional packages then you will be likely to have an experienced Debian Developer offer a patch to improve it (I’ve offered patches for such things on many occasions). When there is a FOSS project that uses a tool that is never used by Debian developers (or developers of Fedora and other distributions) then the only patches you will get will be from inexperienced people.

A software build process should not download anything from the Internet. The source archive should contain everything that is needed and there should be dependencies for external software. Any downloads from the Internet need to be protected from MITM attacks, which means that a responsible software developer has to read through the build system and make sure that appropriate PGP signature checks etc are performed. It could be that the files that the Mesos build downloaded from the Apache site had appropriate PGP checks performed – but it would take me extra time and effort to verify this and I can’t distribute software without being sure of this. Also, reproducible builds are one of the latest things we aim for in the Debian project; this means we can’t just download files from web sites because the next build might get a different version.

Finally the fpm (Fucking Package Management) tool is a Ruby Gem that has to be installed with the “gem install” command. Any time you specify a gem install command you should include the -v option to ensure that everyone is using the same version of that gem; otherwise there is no guarantee that people who follow your documentation will get the same results. Also a quick Google search didn’t indicate whether gem install checks PGP keys or verifies data integrity in other ways. If I’m going to compile software for other people to use I’m concerned about getting unexpected results with such things. A Google search indicates that Ruby people were worried about such things in 2013 but doesn’t indicate whether they solved the problem properly.

Planet DebianChris Lamb: More Lintian hacking

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and quality-assurance issues to the maintainer.

I seem to have found myself hacking on it a bit more recently (see my previous installment). In particular, here's the code of mine — which made for a total of 20 bugs closed — that made it into the recent 2.5.52 release:

New tags

  • Check for the presence of an .asc signature in a .changes file if an upstream signing key is present. (#833585, tag)
  • Warn when dpkg-statoverride --add is called without a corresponding --list. (#652963, tag)
  • Check for years in debian/copyright that are later than the top entry in debian/changelog. (#807461, tag)
  • Trigger a warning when DEB_BUILD_OPTIONS is used instead of DEB_BUILD_MAINT_OPTIONS. (#833691, tag)
  • Look for "FIXME" and similar placeholders in various files in the debian directory. (#846009, tag)
  • Check for useless build-dependencies on dh-autoreconf or autotools-dev under Debhelper compatibility levels 10 or higher. (#844191, tag)
  • Emit a warning if GObject Introspection packages are missing dependencies on ${gir:Depends}. (#860801, tag)
  • Check packages do not contain upstart configuration under /etc/init. (#825348, tag)
  • Emit a classification tag if maintainer scripts such as debian/postinst is an ELF binary. (tag)
  • Check for overly-generic manual pages such as README.3pm.gz. (#792846, tag)
  • Ensure that (non-ELF) maintainer scripts begin with #!. (#843428, tag)

Regression fixes

  • Ensure r-data-without-readme-source checks the source package, not the binary; README.source files are not installed in the latter. (#866322, tag)
  • Don't emit source-contains-prebuilt-ms-help-file for files generated by Halibut. (#867673, tag)
  • Add .yml to the list of file extensions to avoid false positives when emitting extra-license-file. (#856137, tag)
  • Append a regression test for enumerated lists in the "a) b) c) …" style, which would previously trigger a "duplicate word" warning if the following paragraph began with an "a." (#844166, tag)

Documentation updates

  • Rename copyright-contains-dh-make-perl-boilerplate to copyright-contains-automatically-extracted-boilerplate as it can be generated by other tools such as dh-make-elpa. (#841832, tag)
  • Changes to new-package-should-not-package-python2-module (tag):
    • Upgrade from I: to W:. (#829744)
    • Clarify wording in description to make the justification clearer.
  • Clarify justification in debian-rules-parses-dpkg-parsechangelog. (#865882, tag)
  • Expand the rationale for the latest-debian-changelog-entry-without-new-date tag to mention possible implications for reproducible builds. (tag)
  • Update the source-contains-prebuilt-ms-help-file description; there exists free software to generate .chm files. (tag)
  • Append an example shell snippet to explain how to prevent init.d-script-sourcing-without-test. (tag)
  • Add a missing "contains" verb to the description of the debhelper-autoscript-in-maintainer-scripts tag. (tag)
  • Consistently use the same "Debian style" RFC822 date format for both "Mirror timestamp" and "Last updated" on the Lintian index page. (#828720)


  • Allow the use of suppress-tags=<tag>[,<tag>[,<tag>]] in ~/.lintianrc. (#764486)
  • Improve the support for "3.0 (git)" packages. However, they remain marked as unsupported-source-format as they are not accepted by the Debian archive. (#605999)
  • Apply patch from Dylan Aïssi to also check for .RData files (not just .Rdata) when checking for the copyright status of R Project data files. (#868178, tag)
  • Match more Lena Söderberg images. (#827941, tag)
  • Refactor a hard-coded list of possible upstream key locations to the common/signing-key-filenames Lintian::Data resource.

Don MartiExtracting just the audio from big video files

Got a big video, and want a copy of just the audio for listening on a device with limited storage? Use Soundconverter.

soundconverter -b -m mp3 -s .mp3 long-video.webm

(MP3 patents are expired now, hooray! I'm just using MP3 here because if I get a rental car that lets me plug in a USB stick for listening, the MP3 format is most likely to be supported.)

Soundconverter has a GUI but you can use -b for batch mode from the shell. soundconverter --help for help. You do need to set both the MIME type, with -m, and the file suffix, with -s.

Planet DebianNorbert Preining: Gaming: The Long Dark

I normally don’t play survival games or walking simulators, but The Long Dark by Hinterland Games, which I purchased back when it was still in early access on Steam, took me into new realms. You are tossed out into the Canadian wilderness with hardly anything, and your only aim is to survive: find shelter and food, craft tools, hunt, explore. And while the game has so far only offered Sandbox mode, on August 1st the first episode of Story mode is released. The best time to get the game!

You will be greeted with some icy nights, but also with great vistas and relaxed evenings at a fireplace; you will try to survive on moldy food and rotten energy bars, but also feast by the fireside while reading a good book. A real treat, this game!

Sandbox mode features five different areas to explore. Each one is large enough to spend weeks (in game time) wandering around. The easiest area to start with is Mystery Lake, with plenty of shelter (several huts) and an abundance of resources. And just in case you are getting bored, all the areas are connected via tunnels or caves, and one can wander off into the neighboring places. My home in Mystery Lake was always the Camp Office, the usual suspect. Nice views, fishing huts nearby to get fresh fish, lots of space.

After managing to get from your starting place (which is arbitrary as far as I can see) to one of the shelters, one starts collecting food and wood, scavenging every accessible place for tools, weapons, and burning material. And soon the backpack becomes too heavy, and one needs to store stuff and decide what to take.

This is a very well done part of the game. The backpack is not limited by the number of items; instead you are limited in the weight you can carry. That includes clothes (which can get quite heavy) and all the items in your backpack. In addition, the longer the day and the more tired you become, the less weight you can carry. And if the backpack gets too heavy, you slow to a crawl.

There are many influences of the outside world on the player’s condition: temperature, the wetness of your clothes, hunger, thirst, exhaustion, but also infections and bruises. All need to be taken care of, otherwise the end comes faster than one wishes.

I have only two things to complain about: First, if one walks or runs outside, one’s own body temperature does not rise. This is unrealistic and should have been taken into account. The other thing is the difficulty: I have played weeks of game time on the easiest level without any problem. But the moment I switched to the second level of difficulty (of 5!), I could not even manage 2(!) days. Wolves, starvation, thirst, any of those kills me in an instant. I don’t want to know how the hardest level feels, but there is a certain steep step in difficulty here.

The game takes a very realistic view of the weather: every day is different (sunny, foggy, blizzard, windy), often changing very quickly. It is wise to plan one’s activities according to the weather, as it is very unforgiving.

With beautifully crafted landscapes, loads of areas to explore, your own pride at surviving for at least a few weeks, and lots of tools to find, craft, and try out, this game, even while it is still in Sandbox mode, is a real treat. My absolute favorite since I finished the Talos Principle and the Portal series, absolutely recommendable!


Krebs on SecuritySuspended Sentence for Mirai Botmaster Daniel Kaye

Last month, KrebsOnSecurity identified U.K. citizen Daniel Kaye as the likely real-life identity behind a hacker responsible for clumsily wielding a powerful botnet built on Mirai, a malware strain that enslaves poorly secured Internet of Things (IoT) devices for use in large-scale online attacks. Today, a German court issued a suspended sentence for Kaye, who now faces cybercrime charges in the United Kingdom.

Daniel Kaye's Facebook profile page.

Daniel Kaye’s Facebook profile page.

In February 2017, authorities in the United Kingdom arrested a 29-year-old U.K. man on suspicion of knocking more than 900,000 Germans offline in a Mirai attack in November 2016. Shortly after that 2016 attack, a hacker using the nickname “Bestbuy” told reporters he was responsible for the outage, apologizing for the incident.

Prosecutors in Europe had withheld Kaye’s name from the media throughout the trial. But a court in Germany today confirmed Kaye’s identity as it handed down a suspended sentence on charges stemming from several failed attacks from his Mirai botnet — which nevertheless caused extensive internet outages for ISPs in the U.K., Germany and Liberia last year.

On July 5, KrebsOnSecurity published Who is the GovRAT Author and Mirai Botmaster BestBuy. The story followed clues from reports produced by a half-dozen security firms that traced common clues between this BestBuy nickname and an alter-ego, “Spiderman.”

Both identities were connected to the sale of an espionage tool called GovRAT, which is documented to have been used in numerous cyber espionage campaigns against governments, financial institutions, defense contractors and more than 100 corporations.

That July 5 story traced a trail of digital clues left over 10 years back to Daniel Kaye, a 29-year-old man who had dual U.K. and Israeli citizenship and who was engaged to be married to a U.K. woman.

A “mind map” tracing some of the research mentioned in this post.

Last week, a 29-year-old identified by media only as “Daniel K” pleaded guilty in a German court for launching the attacks that knocked 900,000 Deutsche Telekom customers offline. Prosecutors said Daniel K sold access to his Mirai botnet as an attack-for-hire service.

The defendant reportedly told the court that the incident was the biggest mistake of his life, and that he took money in exchange for launching attacks in order to help start a new life with his fiancee.

Today, the regional court in the western city of Cologne said it would suspend the sentence of one year and eight months against Kaye, according to a report from Agence France Presse.

While it may seem that Kaye was given a pass by the German court, he is still facing criminal charges in Britain, where authorities have already requested his extradition.

As loyal readers here no doubt know, KrebsOnSecurity last year was massively attacked by the first-ever Mirai botnet — an attack which knocked this site offline for almost four days before it came back online under the protection of Google’s Project Shield service.

In January 2017, this blog published the results of a four-month investigation into who was likely responsible for not only for writing Mirai, but for leaking the source code for the malware — spawning dozens of competing Mirai botnets like the one that Kaye built. To my knowledge, no charges have yet been filed against any of the individuals named in that story.

CryptogramFriday Squid Blogging: Giant Squids Have Small Brains

New research:

In this study, the optic lobe of a giant squid (Architeuthis dux, male, mantle length 89 cm), which was caught by local fishermen off the northeastern coast of Taiwan, was scanned using high-resolution magnetic resonance imaging in order to examine its internal structure. It was evident that the volume ratio of the optic lobe to the eye in the giant squid is much smaller than that in the oval squid (Sepioteuthis lessoniana) and the cuttlefish (Sepia pharaonis). Furthermore, the cell density in the cortex of the optic lobe is significantly higher in the giant squid than in oval squids and cuttlefish, with the relative thickness of the cortex being much larger in Architeuthis optic lobe than in cuttlefish. This indicates that the relative size of the medulla of the optic lobe in the giant squid is disproportionally smaller compared with these two cephalopod species.

From the New York Times:

A recent, lucky opportunity to study part of a giant squid brain up close in Taiwan suggests that, compared with cephalopods that live in shallow waters, giant squids have a small optic lobe relative to their eye size.

Furthermore, the region in their optic lobes that integrates visual information with motor tasks is reduced, implying that giant squids don't rely on visually guided behavior like camouflage and body patterning to communicate with one another, as other cephalopods do.

Planet DebianSteve Kemp: So I'm considering a new project

In the past there used to be a puppet-labs project called puppet-dashboard, which would let you see the state of your managed-nodes. Having even a very basic and simple "report user-interface" is pretty neat when you're pushing out a change, and you want to see it be applied across your fleet of hosts.

There are some other neat features, such as allowing you to identify failures easily, and see nodes that haven't reported-in recently.

This was spun out into a community-supported project which is largely stale:

Having a dashboard is nice, but the current state of the software is less good. It turns out that the implementation is pretty simple though:

  • Puppet runs on a node.
  • The node reports back to the puppet-master what happened.
  • The puppet-master can optionally HTTP-post that report to the reporting node.

The reporting node can thus receive real-time updates, and do what it wants with them. You can even sidestep the extra server if you wish:

  • The puppet-master can archive the reports locally.

For example on my puppet-master I have this:

  root@master /var/lib/puppet/reports # ls | tail -n4

Inside each directory is a bunch of YAML files which describe the state of the host, and the recipes that were applied. Parsing those is pretty simple, the hardest part would be making a useful/attractive GUI. But happily we have the existing one to "inspire" us.
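
As a rough illustration of how simple that parsing can be, here is a hedged Python sketch (my own, not the author's code). It assumes the per-host directory layout under /var/lib/puppet/reports and that each report carries a "status" field; both the field name and the use of PyYAML are assumptions. Puppet's Ruby-specific YAML tags are stripped first so a plain `safe_load` can cope:

```python
# Sketch only: walk a puppet report tree and keep the newest report per
# host. Field names ("status") and the PyYAML dependency are assumptions.
import os
import re

import yaml  # PyYAML


def load_report(path):
    """Parse one puppet report, ignoring Ruby-specific YAML tags."""
    with open(path) as fh:
        # Drop tags like "!ruby/object:Puppet::Transaction::Report"
        # so yaml.safe_load can parse the rest as plain data.
        text = re.sub(r"!ruby/\S+", "", fh.read())
    return yaml.safe_load(text)


def latest_runs(report_dir):
    """Map hostname -> parsed data for the newest report of each host."""
    runs = {}
    for host in os.listdir(report_dir):
        host_dir = os.path.join(report_dir, host)
        files = sorted(os.listdir(host_dir))  # report names sort by timestamp
        if files:
            runs[host] = load_report(os.path.join(host_dir, files[-1]))
    return runs
```

From there, rendering a per-host summary table is mostly presentation work, which matches the observation above that the GUI is the hard part.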

I think I just need to write down a list of assumptions and see if they make sense. After all, the existing installation(s) won’t break; it’s just a matter of deciding whether it is a useful/worthwhile way to spend some time.

  • Assume you have 100+ hosts running puppet 4.x
  • Assume you want a broad overview:
    • All the nodes you're managing.
    • Whether their last run triggered a change, resulted in an error, or logged anything.
    • If so what changed/failed/was output?
  • For each individual run you want to see:
    • Rough overview.
  • Assume you don't want to keep history indefinitely, just the last 50 runs or so of each host.

Beyond that you might want to export data about the managed nodes themselves. For example you might want a list of all the hosts which have bash installed on them, or all nodes with the local user "steve". I've written that stuff already, as it is very useful for auditing etc.

The hard part about that is that to get the extra data you'll need to include a puppet module to collect it. I suspect a new dashboard would be broadly interesting/useful but unless you have that extra detail it might not be so useful. You can't point to a slightly more modern installation and say "Yes this is worth migrating to". But if you have extra meta-data you can say:

  • Give me a list of all hosts running wheezy.
  • Give me a list of all hosts running exim4 version 4.84.2-2+deb8u4.

And that facility is very useful when you have shellshock, or similar knocking at your door.

Anyway as a hacky start I wrote some code to parse reports, avoiding the magic object-fu that the YAML would usually invoke. The end result is this:

 root@master ~# dump-run
    Puppet Version: 4.8.2
    Runtime: 2.16
    Time:2017-07-29 18:13:04 +0000
            total -> 176
            skipped -> 2
            failed -> 0
            changed -> 3
            out_of_sync -> 3
            scheduled -> 0
            corrective_change -> 3
    Changed Resources
            Ssh_authorized_key[skx@shelob-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:17
            Ssh_authorized_key[skx@deagol-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:22
            Ssh_authorized_key[] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:27
    Skipped Resources
            Exec[clone sysadmin utils]
            Exec[update sysadmin utils]

CryptogramMe on Restaurant Surveillance Technology

I attended the National Restaurant Association exposition in Chicago earlier this year, and looked at all the ways modern restaurant IT is spying on people.

But there's also a fundamentally creepy aspect to much of this. One of the prime ways to increase value for your brand is to use the Internet to practice surveillance of both your customers and employees. The customer side feels less invasive: Loyalty apps are pretty nice, if in fact you generally go to the same place, as is the ability to place orders electronically or make reservations with a click. The question, Schneier asks, is "who owns the data?" There's value to collecting data on spending habits, as we've seen across e-commerce. Are restaurants fully aware of what they are giving away? Schneier, a critic of data mining, points out that it becomes especially invasive through "secondary uses," when the "data is correlated with other data and sold to third parties." For example, perhaps you've entered your name, gender, and age into a taco loyalty app (12th taco free!). Later, the vendors of that app sell your data to other merchants who know where and when you eat, whether you are a vegetarian, and lots of other data that you have accidentally shed. Is that what customers really want?

Planet DebianOsamu Aoki: exim4 configuration for Desktop (better gmail support)

For most desktop PCs running stock exim4 and mutt, sending out mail is becoming a bit rough, since using a random smarthost causes lots of trouble due to the measures taken to prevent spam.

As mentioned in the Exim4 user FAQ, /etc/hosts should list the FQDN with an externally DNS-resolvable domain name instead of localdomain, to get the correct EHLO/HELO line. That's the first step.

The stock configuration of exim4 only allows you to use a single smarthost for all your mail. I use one address for my personal use, which is checked by my smartphone too. The other account is for subscribing to mailing lists. So I needed to tweak ...

Usually, mutt is smart enough to set the From address since my .muttrc has

# Set default for From: for replies for alternates.
set reverse_name

So how can I teach exim4 to send mails depending on the mail account listed in the From: header?

For my gmail accounts, each mail should be sent through the account-specific SMTP connection matching the From: header, to get all the modern SPAM protection data (DKIM, SPF, DMARC, ...) in the right state. (Besides, they overwrite the From: header anyway if you use the wrong connection.)

For my other mails, mails should be sent from my shell account, so they are very unlikely to be blocked. Sometimes, I wasn't sure some of the mails sent through my ISP's smarthost were really getting to the intended person.

To these ends, I have created small patches to the /etc/exim4/conf.d files and reported it to Debian BTS: #869480 Support multiple smarthosts (gmail support).  These patches are for the source package.

To use my configuration tweak idea, there is an easier route no matter which exim version you are using. Please copy and read the pertinent edited files from my github site into your installed /etc/exim4/conf.d directory and get the benefits.
If you really wish to keep the envelope address etc. matching the From: header, please rewrite aggressively using the From: header with the edited rewrite/31_exim4-config_rewriting as follows:

*@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
# identical rewriting rule for /etc/mailname
*@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
* "$h_from:" Frs

So far it's working fine for me, but if you find a bug, let me know.


CryptogramZero-Day Vulnerabilities against Windows in the NSA Tools Released by the Shadow Brokers

In April, the Shadow Brokers -- presumably Russia -- released a batch of Windows exploits from what is presumably the NSA. Included in that release were eight different Windows vulnerabilities. Given a presumed theft date of the data as sometime between 2012 and 2013 -- based on timestamps of the documents and the limited Windows 8 support of the tools:

  • Three were already patched by Microsoft. That is, they were not zero days, and could only be used against unpatched targets. They are EMERALDTHREAD, EDUCATEDSCHOLAR, and ECLIPSEDWING.

  • One was discovered to have been used in the wild and patched in 2014: ESKIMOROLL.

  • Four were only patched when the NSA informed Microsoft about them in early 2017: ETERNALBLUE, ETERNALSYNERGY, ETERNALROMANCE, and ETERNALCHAMPION.

So of the five serious zero-day vulnerabilities against Windows in the NSA's pocket, four were never independently discovered. This isn't new news, but I haven't seen this summary before.

Worse Than FailureError'd: The Things That Should Not Be

"I tried to export my game to HTML5, but I guess it just wasn't meant to be," Edward W. writes.


Tom H. wrote, "I guess the build server never saw that memo."


"I love going out to dinner with my friend null null," writes Adam R., "She never steals any of my food!"


Mike C. wrote, "Sorry JIRA, all the keys on my keyboard are defined."


"You guys! I caught an error! 🎣 🎣" writes Nick.


Hamakei asks, "Never mind who's watching the Watchmen...who helps the helpers?"


[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Don MartiOnline ads don't matter to P&G

In the news: P&G Cuts More Than $100 Million in ‘Largely Ineffective’ Digital Ads

Not surprising.

Proctor & Gamble makes products that help you comply with widely held cleanliness norms.

Digital ads are micro-targeted to you as an individual.

That's the worst possible brand/medium fit. If you don't know that the people who expect you to keep your house or body clean are going to be aware of the same product, how do you know whether to buy it?

Bonus link from Bob Hoffman last year: Will The P&G Story Bring Down Ad Tech? Please?

Planet Linux AustraliaPia Waugh: RegTech – a primer for the uninitiated

Whilst working at AUSTRAC I wrote a brief about RegTech which was quite helpful. I was given permission to blog the generically useful parts of it for general consumption :) Thanks Leanne!

Overview – This brief is the most important thing you will read in planning transformation! Government can’t regulate the way it has traditionally done. Traditional approaches are too small, too slow and too ineffective. We need to explore new ways to regulate and achieve the goal of a financial sector that is more resistant to abuse, leveraging data, automation, machine learning, technology and collaboration. We are here to help!

The key here is to put technology at the heart of the business strategy, rather than as simply an implementation mechanism. By embracing technology thinking, which means getting geeks into the strategy and policy rooms, we can build the foundation of a modern, responsive, agile, proactive and interactive regulator that can properly scale.

The automation of compliance with RegTech has the potential to overcome individual foibles and human error in a way that provides the quantum leap in culture and compliance that our regulators, customers, policy makers and the community are increasingly demanding… The Holy Grail is when we start to actually write regulation and legislation in code. Imagine the productivity gains and compliance savings of instantaneous certified compliance… We are now in one of the most exciting phases in the development of FinTech since the inception of e-banking.Treasurer Morrison, FinTech Australia Summit, Nov 2016

On the back of the FinTech boom, there is a growth in companies focused on “RegTech” solutions and services to merge technology and regulation/compliance needs for a more 21st century approach to the problem space. It is seen as a logical next step to the FinTech boom, given the high costs and complexity of regulation in the financial sector, but the implications for the broader regulatory sector are significant. The term only started being widely used in 2015. Other governments have started exploring this space, with the UK Government investing significantly.

Core themes of RegTech can be summarised as: data; automation; security; disruption; and enabling collaboration. There is also an overall drive towards everything being closer to real-time, with new data or information informing models, responses and risk in an ongoing self-adjusting fashion.

  • Data driven regulation – better monitoring, better use of available big and small data holdings to inform modelling and analysis (rather than always asking a human to give new information), assessment on the fly, shared data and modelling, trends and forecasting, data analytics for forward looking projections rather than just retrospective analysis, data driven risk and adaptive modelling, programmatic delivery of regulations (regulation as a platform).
  • Automation – reporting, compliance, risk modelling of transactions to determine what should be reported as “suspicious”, system to system registration and escalation, use of machine learning and AI, a more blended approach to work combining humans and machines.
  • Security – biometrics, customer checks, new approaches to KYC, digital identification and assurance, sharing of identity information for greater validation and integrity checking.
  • Disruptive technologies – blockchain, cloud, machine learning, APIs, cryptography, augmented reality and crypto-currencies just to start!
  • Enabling collaboration – for-profit regulation activities, regulation/compliance services and products built on the back of government rules/systems/data, access to distributed ledgers, distributed risk models and shared data/systems, broader private sector innovation on the back of regulator open data and systems.

Some useful references for the more curious:

Planet DebianJoachim Breitner: How is coinduction the dual of induction?

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction based on what I observe in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

codatatype 'a llist = lnil | lcons (hd : 'a)  (tl : "'a llist")

we get the following coinduction principle:

(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                hd xs = hd ys ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → xs = ys)

This is less scary than it looks at first. It tells you “if you give me a relation R between lists which implies that either both lists are empty or both lists are nonempty, and furthermore if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.”

If you think of the infinite list as a series of states of a computer program, then this is nothing else than a bisimulation.
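Since a program can only ever observe finitely many of those states, a bisimulation check on streams can at best be approximated by comparing finite prefixes. A minimal Python sketch of that idea (the function names are mine, not from any library):

```python
from itertools import islice

def ones_a():
    """An infinite stream of 1s."""
    while True:
        yield 1

def ones_b():
    """A syntactically different program computing the same infinite stream."""
    while True:
        yield 2 - 1

def bisimilar_upto(xs, ys, n):
    """Compare the first n observations (heads) of two streams — a finite
    approximation of bisimilarity, not a proof of it."""
    return list(islice(xs, n)) == list(islice(ys, n))

assert bisimilar_upto(ones_a(), ones_b(), 1000)
```

Of course, no finite prefix check can ever *prove* two infinite lists equal; that is exactly what the coinduction principle above buys you.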

So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.

The datatype declaration automatically creates a relator:

rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

Q nil nil →
(∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) →
(∀xs ys. rel_list R xs ys → Q xs ys)

Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.
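For finite lists the relator is easy to state in executable form. A small Python sketch (names are mine, not Isabelle's), which also checks that instantiating R with equality turns rel_list into plain list equality:

```python
def rel_list(r, xs, ys):
    """Lift an element relation r to lists: same shape (length) and
    corresponding elements pairwise related by r."""
    return len(xs) == len(ys) and all(r(x, y) for x, y in zip(xs, ys))

eq = lambda x, y: x == y

# With R as equality, rel_list R is exactly equality of lists,
# matching the instantiation R x y <-> (x = y) in the text.
assert rel_list(eq, [1, 2, 3], [1, 2, 3])
assert not rel_list(eq, [1, 2], [1, 2, 3])

# An arbitrary relation also works, e.g. pointwise "less than":
assert rel_list(lambda x, y: x < y, [1, 2], [2, 3])
```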

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

(∀xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → rel_llist Q xs ys)

It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate Q with equality, which turns rel_llist Q into equality on the lists, and you have the theorem above.

The duality

With our induction and coinduction principle generalized to relations, suddenly a duality emerges: If you turn around the implication in the conclusion of one you get the conclusion of the other one. This is an example of “cosomething is something with arrows reversed”.

But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication and then show that to be equivalent to the two premises of the induction rule:

(∀xs ys.
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
  (∀xs ys.
    xs = lnil → ys = lnil →
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
∧ (∀xs ys.
    xs ≠ lnil ⟶ ys ≠ lnil ⟶
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { simplification }
  (∀xs ys.  xs = lnil → ys = lnil → R xs ys)
∧ (∀x xs y ys.  R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
  R nil nil
∧ (∀x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))


The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.

More generally, this little excursion shows that it is often beneficial to think of types less as sets, and more as relations – this way of thinking is surprisingly fruitful, and led to proofs of parametricity and free theorems and other nice things.

Planet DebianNOKUBI Takatsugu: uim package integration is a work in progress

Now I am helping to maintain the uim package. It is split into too many packages (over 40, including dbgsym).

The working status is documented in the Debian wiki entry.

Currently it is not finished; when I’ve done all of them, I’ll upload them to experimental.


Cory DoctorowHey, Little Rock, AR: there’s a special stage performance of Little Brother coming your way for Banned Books Week!

Adapted by Josh Costello from the novel by Cory Doctorow
September 15, 16, 22, 23, 24, 28, 29, 30, 2017
Directed by Ryan Whitfield and Jason Green

While skipping school and playing an alternate reality game, San Francisco teenager Marcus Yallow ends up in the middle of a terrorist attack and on the wrong side of the Department of Homeland Security. This play asks “What is the right thing to do when authorities become oppressors?”

Marcus – Jeffrey Oakley
Ange – Kayley Shettles
Jolu – Yusuf Richardson
Daryl – Jack Clay

Severe Haircut – Madison McMichael
Benson/Sutherland – Robert Gatlin
Guard – Essence Robinson
Mom – Isabelle Marchese
Dad – Max Green
Turk/CHP Officer – Braden Hammock
Ms. Galvez – Anais Moore
Charles – Elijah White
Police Officer 1 – Kyndall Jackson
Police Officer 2 – Mia Simone Parker
Trudy Doo – Emily Shull
NPR Announcer – Allison Boggs
Concertgoer – Rachel Worthington
Reporter – Hannah Livingston
Fox Commentator – Katie Rasure
BBC Reporter – Olivia Ward
Pirate Queen – Abigail Harris
On stage light/sound/projection tech – Trenton Gorman, Claire Green

$16— Adults
$12— Students & Seniors
Thursday, Friday and Saturday night curtain time is 7:30 pm.
Sunday afternoon curtain time is 2:30 pm.

The Box Office and the theater open one (1) hour prior to curtain.
The House opens 30 minutes prior to curtain.
Please arrive promptly. There will be no late admission.

Planet DebianJoachim Breitner: Coinduction in Coq and Isabelle

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: What is it, how does it differ from induction, and how do you actually prove something. In the course of the discussion, I came up with a very simple coinductive exercise, and solved it both in Coq and Isabelle.

The task

Define the extended natural numbers coinductively. Define the min function and the ≤ relation. Show that min(n, m) ≤ n holds.


The definitions are straight forward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

CoInductive ENat :=
  | N : ENat
  | S : ENat -> ENat.

CoFixpoint min (n : ENat) (m : ENat)
  := match n, m with | S n', S m' => S (min n' m')
                    | _, _       => N end.

CoInductive le : ENat -> ENat -> Prop :=
  | leN : forall m, le N m
  | leS : forall n m, le n m -> le (S n) (S m).

The lemma is specified as

Lemma min_le: forall n m, le (min n m) n.

and the proof method of choice to show that some coinductive relation holds, is cofix. One would wish that the following proof would work:

Lemma min_le: forall n m, le (min n m) n.
  destruct n, m.
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.

but we get the error message

In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N"

Effectively, as Coq is trying to figure out whether our proof is correct, i.e. type-checks, it stumbled on the equation min N N = N, and like a kid scared of coinduction, it did not dare to “run” the min function. The reason it does not just “run” a CoFixpoint is that doing so too daringly might simply not terminate. So, as Adam explains in a chapter of his book, Coq reduces a cofixpoint only when it is the scrutinee of a match statement.

So we need to get a match statement in place. We can do so with a helper function:

Definition evalN (n : ENat) :=
  match n with | N => N
               | S n => S n end.

Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (N or S _). We can use it in the proof to guide Coq, and the following goes through:

Lemma min_le: forall n m, le (min n m) n.
  destruct n, m; rewrite <- evalN_eq with (n := min _ _).
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.


In Isabelle, definitions and types are very different things, so we use different commands to define ENat and le:

theory ENat imports  Main begin

codatatype ENat =  N | S  ENat

primcorec min where
   "min n m = (case n of
       N ⇒ N
     | S n' ⇒ (case m of
        N ⇒ N
      | S m' ⇒ S (min n' m')))"

coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"

There are actually many ways of defining min; I chose the one most similar to the one above. For more details, see the corec tutorial.

Now to the proof:

lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
  case le
  show ?case
  proof(cases n)
    case N then show ?thesis by simp
  next
    case (S n') then show ?thesis
    proof(cases m)
      case N then show ?thesis by simp
    next
      case (S m')  with ‹n = _› show ?thesis
        unfolding min.code[where n = n and m = m]
        by auto
    qed
  qed
qed

The coinduction proof method produces this goal:

proof (state)
goal (1 subgoal):
 1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
          (∃n' m'.
               min n m = S n' ∧
               n = S m' ∧
	       ((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))

I chose to spell the proof out in the Isar proof language, where the outermost proof structure is done relatively explicitly, and I proceed by case analysis mimicking the min function definition.

In the cases where one argument of min is N, Isabelle’s simplifier (a term rewriting tactic, so to say), can solve the goal automatically. This is because the primcorec command produces a bunch of lemmas, one of which states n = N ∨ m = N ⟹ min n m = N.

In the other case, we need to help Isabelle a bit to reduce the call to min (S n) (S m) using the unfolding method, where min.code contains exactly the equation that we used to specify min. Using just unfolding min.code would send this method into a loop, so we restrict it to the concrete arguments n and m. Then auto can solve the remaining goal (despite all the existential quantifiers).


Both theorem provers are able to prove the desired result. To me it seems slightly more convenient in Isabelle: a lot of Coq infrastructure relies on the type checker being able to effectively evaluate expressions, which is tricky with cofixpoints, whereas in Isabelle evaluation plays a much less central role and rewriting is the crucial technique. While one still cannot simply throw min.code into the simpset, working with objects that do not evaluate easily or completely is less strange.


I was challenged to do it in Agda. Here it is:

module ENat where

open import Coinduction

data ENat : Set where
  N : ENat
  S : ∞ ENat → ENat

min : ENat → ENat → ENat
min (S n') (S m') = S (♯ (min (♭ n') (♭ m')))
min _ _ = N

data le : ENat → ENat → Set where
  leN : ∀ {m} → le N m
  leS : ∀ {n m} → ∞ (le (♭ n) (♭ m)) → le (S n) (S m)

min_le : ∀ {n m} → le (min n m) n
min_le {S n'} {S m'} = leS (♯ min_le)
min_le {N}    {S m'} = leN
min_le {S n'} {N} = leN
min_le {N}    {N} = leN

I will refrain from commenting it, because I do not really know what I have been doing here, but it typechecks, and refer you to the official documentation on coinduction in Agda. But let me note that I wrote this using plain inductive types and recursion, and added ∞, ♯ and ♭ until it worked.

TEDAnonymous ideas worth spreading — and the surprising discoveries behind their curation

The intimacy of listening: Producer Cloe Shasha shares what she and her team learned while producing TED and Audible’s original audio series “Sincerely, X.”

In the spring of 2016, we put out a call for submissions for anonymous talks from around the world for the first season of our new podcast, Sincerely, X. We received hundreds of ideas — stories touching on a broad range of topics. As we read through them, we found ourselves flooded by tragedy, comedy, intrigue and surprise. Stories of victims of abuse, struggles with mental health, lessons from prison, insider secrets within companies and governmental organizations, and so much more.

>> Sincerely, X was co-produced with Audible. Episode 1, “Dr. Burnout,” is available now on Apple Podcasts and the TED Android app. <<

The premise of the podcast Sincerely, X felt simple at first: sharing important ideas, anonymously. The episodes would include speakers who need to separate their professional ideas from their personal lives; those who want to share an idea, but fear it would hurt someone in their family if they did so publicly; and quiet idealists whose solutions could transform lives. Why anonymous? Our theory was that inviting people to share ideas without having to reveal their identity might allow for an entirely new category of talks.

We dove into this pool of submissions to figure out who would make a great speaker for the show, and started interviewing people by phone. We were looking for compelling stories that had a strong need for anonymity while also considering them through the lens that we use for TED Talk submissions. In other words, did each story have an idea worth spreading?

Throughout the process of creating Sincerely, X season 1, we realized that we had to think about these talks quite differently from TED Talks on a stage, and we adapted along the way.

Signposting in an audio talk

When you’re watching a speaker on a stage, context and sentiment are communicated through the speaker’s body language, facial expressions and images (if they have slides). In audio, with only one of our senses engaged, a lot more information has to be transmitted through a speaker’s voice alone.

This came up when we worked with the speaker in episode 2, “Pepper Spray.” It’s the story of a woman who lived a normal-seeming life — until one day she lashed out in a department store and began pepper-spraying strangers. There are a lot of details that she shares about her life in that episode — both before and after the pepper spray incident. If she were telling this story on a stage, the audience would experience visual cues that would indicate whether she were reflecting on the far past versus the recent past, or whether she felt ashamed or justified in her actions. (Watch a TED Talk with the sound off sometime, and you’ll be surprised at how much context you can pick up!) But when we shared the audio with colleagues for their feedback, they were at times confused by the sequence of events in the story. So we worked with the speaker to help her find places to include signposting sentences such as, “But I want to come back to the hero of the story.” In other words, phrases that could ground the listener in what’s about to come.

The intimacy of listening

In the same way that hearing a ghost story around a campfire conjures up scary visualizations, hearing a difficult story on a podcast can build intense images in your mind. Drawing the line between deeply moving content and manipulative content can be tricky and nuanced.

In the case of some Sincerely, X episodes, a few of the early drafts of talks contained details that felt disturbingly intimate — details that might have packed an emotional punch from the distance of a stage, but that felt too intimate coming out of earbuds. We had to learn how to mitigate that intensity by listening to the content and getting feedback from early screeners who shared honest reactions.

This was a relevant dynamic for several of our speakers, including our speaker in episode 6, “Rescued by Ritual.” This speaker talks about a private ritual she invented in order to cope with the horror of her abusive marriage before she left her ex-husband. In the earliest draft, in order to provide context for the purpose of her ritual, the leadup to the reenactment of the ritual involved details that were difficult to hear for some early listeners. So we worked with the speaker to figure out which details she felt were most needed in order to paint an accurate picture of that time in her life.

To read or to memorize?

When it comes to our TED speakers on the stage, we typically encourage two ways of preparing for a talk: either memorizing their content so thoroughly that they can recite it seamlessly while standing on one foot with the television blaring, or memorizing an outline and riffing off that rehearsed structure once onstage. As Chris Anderson says, partially memorizing a talk produces an “uncanny valley” effect — a seemingly robotic or artificial performance. It’s hard to appear authentic while devoting a fair amount of energy to the process of recall. So if someone is not a great memorizer, we encourage improvising the sentences based on a solid outline of the concepts. Both of these forms of preparation are aimed at fostering an authentic delivery from the speaker, which cultivates a powerful connection between the speaker and the audience.

In the context of Sincerely, X, we thought about how to foster that authentic delivery, and considered that preparing speakers to read their talks might be a lower-stress way to record speakers in the studio. But it soon became clear that unless a speaker had acting experience, reading a talk sounded like… reading. So we experimented with having speakers memorize their talks extremely thoroughly before coming into the studio. And this worked for some speakers; when we recorded the speaker in episode 1, “Dr. Burnout,” she delivered her talk beautifully once she had fully committed it to memory.

Sincerely, X was co-produced by TED and Audible. The team was led by executive producers Collin Campbell, Deron Triff and June Cohen (who is also the host). Episode 1, “Dr. Burnout,” is available now on Apple Podcasts and the TED Android app. We’ll be releasing new episodes every Thursday for the next ten weeks.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners August Meeting: TBD

Aug 19 2017 12:30
Aug 19 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

Workshop to be announced.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

August 19, 2017 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main August 2017 Meeting

Aug 1 2017 18:30
Aug 1 2017 20:30
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Tuesday, August 1, 2017

6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053


  • Tony Cree, CEO Aboriginal Literacy Foundation (to be confirmed)
  • Russell Coker, QEMU and ARM on AMD64

Russell Coker will demonstrate how to use QEMU to run software for ARM CPUs on an x86 family CPU.


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

August 1, 2017 - 18:30

CryptogramFiring a Locked Smart Gun

The Armatix IP1 "smart gun" can only be fired by someone who is wearing a special watch. Unfortunately, this security measure is easily hackable.

Krebs on SecurityGas Pump Skimmer Sends Card Data Via Text

Skimming devices that crooks install inside fuel station gas pumps frequently rely on an embedded Bluetooth component allowing thieves to collect stolen credit card data from the pumps wirelessly with any mobile device. The downside of this approach is that Bluetooth-based skimmers can be detected by anyone else with a mobile device. Now, investigators in New York say they are starting to see pump skimmers that use cannibalized cell phone components to send stolen card data via text message.

Skimmers that transmit stolen card data wirelessly via GSM text messages and other mobile-based communications methods are not new; they have been present — if not prevalent — in ATM skimming devices for ages.

But this is the first instance KrebsOnSecurity is aware of in which such SMS skimmers have been found inside gas pumps, and that matches the experience of several states hardest hit by pump skimming activity.

The beauty of the GSM-based skimmer is that it can transmit stolen card data wirelessly via text message, meaning thieves can receive real-time transmissions of the card data anywhere in the world — never needing to return to the scene of the crime. That data can then be turned into counterfeit physical copies of the cards.

Here’s a look at a new skimmer pulled from compromised gas pumps at three different filling stations in New York this month. Like other pump skimmers, this device was hooked up to the pump’s internal power, allowing it to operate indefinitely without relying on batteries.

A GSM-based card skimmer found embedded in a gas pump in the northeastern United States.


It may be difficult to see from the picture above, but the skimmer includes a GSM-based device with a SIM card produced by cellular operator T-Mobile. The image below shows the other side of the pump skimmer, with the SIM card visible in the upper right corner of the circuitboard:

The reverse side of this GSM-based pump skimmer shows a SIM card from T-Mobile.


It’s not clear what type of mobile device was used in this skimmer, and the police officer who shared these images with KrebsOnSecurity said the forensic analysis of the device was ongoing.

Here’s a close-up of the area around the SIM card:


The officer, who shared these photos on condition of anonymity, said this was thought to be the first time fraud investigators in New York had ever encountered a GSM-based pump skimmer.

Skimmers used at all three New York filling stations impacted by the scheme included T-Mobile SIM cards, but the investigator said analysis so far showed the cards held no data other than the SIM card’s unique serial number (ICCID).

KrebsOnSecurity reached out to weights and measures officials in several states most heavily hit by pump skimming activity, including Arizona, California and Florida.

Officials in all three states said they’ve yet to find a GSM-based skimmer attached to any of their pumps.

Skimmers at the pump are most often the work of organized crime rings that traffic in everything from stolen credit and debit cards to the wholesale theft and commercial resale of fuel — in some cases from (and back to) the very fuel stations that have been compromised with the gang’s skimming devices.

Investigators say skimming gangs typically gain access to station pumps by using a handful of master keys that still open a great many pumps in use today. In a common scenario, one person will distract the station attendant as fuel thieves pull up alongside the pump in a van with doors that obscure the machine on both sides. For an in-depth look at the work on one fuel-theft gang working out of San Diego, check out this piece.

There are generally no outward signs when a pump has been compromised by a skimmer, but a study KrebsOnSecurity published last year about a surge in pump skimming activity in Arizona suggests that skimmer gangs can spot the signs of a good mark.

Fraud patterns show fuel theft gangs tend to target stations that are close to major highway arteries; those with older pumps; and those without security cameras, and/or a regular schedule for inspecting security tape placed on the pumps.

Many filling stations are upgrading their pumps to include more physical security — such as custom locks and security cameras. In addition, newer pumps can accommodate more secure chip-based payment cards that are already in use by all other G20 nations.

But these upgrades are disruptive and expensive, and some stations are taking advantage of recent moves by Visa to delay adding much-needed security improvements, such as chip-capable readers.

Until late 2016, fuel station owners in the United States had until October 1, 2017 to install chip-capable readers at their pumps. Under previous Visa rules, station owners that didn’t have chip-ready readers in place by then would have been on the hook to absorb 100 percent of the costs of fraud associated with transactions in which the customer presented a chip-based card yet was not asked or able to dip the chip (currently, card-issuing banks and consumers eat most of the fraud costs from fuel skimming).

But in December 2016, Visa delayed the requirements, saying fuel station owners would now have until October 1, 2020 to meet the liability shift deadline.

The best advice one can give to avoid pump skimmers is to frequent stations that appear to place an emphasis on physical security. More importantly, some pump skimming devices are capable of stealing debit card PINs as well, so it’s a good idea to avoid paying with a debit card at the pump.

Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

Worse Than FailureTable 12

We've all encountered database tables that look like this:

  ID    Data
  ----- --------------------------------------------
  00002 MALE|FEMALE|TRANS|EUNUCH|OTHER|M|Q|female|Female|male|Male|$
  00003 <?xml version="1.0" encoding="UTF-8"?><item id="1234"><name "Widget"/>...</item>
  00004 1234|Fred,Lena,Dana||||||||||||1.3DEp42|

Oh the joy of figuring out what each field of each row represents. The fun of deciphering the code that writes and reads/parses each row of data. In a moment, you will fondly look back on that experience as the Good-Old-Days.

People waving the Canadian Flag

The task of administering elections in the Great White North is handled by the appropriately-named agency Elections Canada. As part of their mandate, they provide the results of past elections in granular detail, both as nicely formatted web pages and as downloadable raw files. The latter are meant to be used by researchers for studying how turnout varies across provinces, ages, races, etc., as well as arguing about the merits of proportional representation versus single transferable votes; and so forth.

One of the more comprehensive data files is descriptively known as Table-Twelve, and it contains a record for every candidate who ran in the election. Each record contains how many votes they got, the riding (electoral district) in which they competed, their affiliated party, home town, occupation, and hundreds of other details about the candidate. This file has been published for every election since the 38th general in 2004. Vicki was charged with creating a new parser for this data.

Table-Twelve is a CSV file in the same way that managers describe their new agile process as <details of waterfall here>. While parsing a CSV file in general is no big deal, writing a function to parse this data was far harder than she expected. For one thing, the column titles change from year to year. One might think Who cares, as long as the data is in the same sequence. One would be wrong. As an example, depending upon the year, the identifier for the electoral district might be in a column named "Electoral District Name", "Electoral District" or "District", and might contain a string representing the district name, or a numeric district identifier, either of which may or may not be enclosed in single or double quotes. Just to make it interesting, some of the quoted strings have commas, and some of the numbers are commafied as well.

Further inspection revealed that the columns are not only inconsistently named, but named so as to be completely misleading. There's a column labeled "Majority". If you're thinking that it contains a boolean to indicate whether the candidate got a majority, or 50%+1 of the number of cast votes (i.e.: "How many votes do you need for a majority?"), you'd be mistaken. Nor is it even a slight misuse (where it should have been "Plurality"). Instead, it's the delta between the winning candidate and the second-place candidate in that riding. They also helpfully give you the quotient of this delta to the total cast votes as the "Majority Percentage".

Canada has a parliamentary system; it's also important to know how many candidates of each party won, so the party designation is obviously going to be easy to access, right? Or maybe you'd like to sort by surname? Well, it turns out that the party is appended to the field containing the candidate's name, delimited with a single space (and possibly an asterisk if they were incumbent). But the candidate's name and the party are already each a variable number of words (some have middle names or two surnames) delimited by single spaces. The party name, however, must be given in both English and French, separated by a forward slash. Of course, some parties already have a slash in their name! Oh, and if the candidate didn't run as a member of a party, they might be listed as "Independent" or as "No affiliation"; both are used in any given file.
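A parser for that combined name/party field essentially has to match known party names as suffixes. The following Python sketch is a simplified illustration only: the party list, the asterisk placement and the field layout here are my assumptions, not the actual Table-Twelve format.

```python
# Hypothetical, abbreviated party list; real files carry full
# "English/French" party names, some containing their own slashes.
KNOWN_PARTIES = [
    "Liberal/Libéral",
    "Conservative/Conservateur",
    "Independent",
    "No affiliation",
]

def split_candidate(field):
    """Split 'Name [*] Party' by matching a known party name as a suffix,
    since both the name and the party are space-delimited word runs."""
    for party in KNOWN_PARTIES:
        suffix = " " + party
        if field.endswith(suffix):
            name = field[: -len(suffix)]
            incumbent = name.endswith("*")
            name = name.rstrip("* ")
            # Normalize the two spellings of "no party" found in the files.
            if party in ("Independent", "No affiliation"):
                party = "Independent"
            return name, party, incumbent
    raise ValueError("unknown party in: " + field)

assert split_candidate("Lena Dana Fred Liberal/Libéral") == \
    ("Lena Dana Fred", "Liberal/Libéral", False)
```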

Above and beyond the call of making something difficult to parse, the files are full of French accented text, so the encoding changes from file to file, here ISO-8859, there UTF-8, over there a BOM or two.
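One way to cope with the shifting encodings is to try them in a fixed order; a minimal sketch (the ordering is an assumption based on the encodings named above — utf-8-sig strips a BOM, and latin-1 accepts any byte sequence, so it serves as the last resort):

```python
import os
import tempfile

def read_table(path):
    """Try the encodings seen in these files, most specific first."""
    for enc in ("utf-8-sig", "utf-8", "iso-8859-1"):
        try:
            with open(path, encoding=enc) as f:
                return f.read()
        except UnicodeDecodeError:
            continue  # iso-8859-1 never raises, so we always return

# Demo: a latin-1 encoded field with an accented place name.
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as f:
    f.write("Trois-Rivières".encode("iso-8859-1"))
text = read_table(f.name)
os.unlink(f.name)
assert text == "Trois-Rivières"
```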

Don't get me wrong, I've written parsers for this sort of garbage by creating a bunch of routines to do trivial parsing and using them for larger logical parsers, and so on until you can parse all of the fields in an entire row, and all the special cases that spew forth. But the files they were supposed to parse were consistent from one day to the next.

Vicki is considering pulling out all of her hair, braiding it together and using it to hang the person who designed Table-Twelve.


Planet DebianMichal Čihař: Weblate 2.16: Call for translations

Weblate 2.16 is almost ready (I expect no further code changes), so it's really a great time to contribute to its translations! Weblate 2.16 will probably be released during my presence at DebConf 17.

As you might expect, Weblate is translated using Weblate, so the contributions should be really easy. In case there is something unclear, you can look into Weblate documentation.

I'd especially like to see improvements in the Italian translation, which was one of the first in Weblate's beginnings, but hasn't received much love in recent years.

Filed under: Debian English SUSE Weblate


Planet DebianNorbert Preining: Software Development as mathematician in academia – everyone bites the dust

Is it possible to do software development, mathematical or not, as a mathematician in academia? This is a question I have been asking myself a lot recently, seeing my own development from logician at a state university that is getting rid of foreigners to software developer. And then a friend pointed me to this very depressing document: The origins of SageMath by William Stein, the main developer of SageMath. And I realized that it seems to be a global phenomenon that mathematicians who are interested in software development have to leave academia. What a sad affair.

SageMath has a clear mission:

Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.

All the “Ma” software packages are commercial, and expensive. On the other hand, they often have very good algorithms implemented. The Sage developers invested lots of time, energy, and brain power to develop excellent algorithms in an open source project for the mathematical researcher, but this investment wasn’t honored in academic life. To quote from the presentation:

Issues with software dev in academia

  • Hard money for software development is virtually nonexistent: I can’t think of anyone I know who got tenured based on his or her software.
  • Researchers on soft money are systematically discriminated against in favor of tenure-track and tenured faculty.
  • Researchers are increasingly evaluated solely on bibliometric counts rather than an informed assessment of their overall portfolio of papers, code, software, industry engagement, or student supervision.

The origins of SageMath, p.31

I can fully agree with this, both from my own experience and from that of those around me. The presentation slides are full of other examples, from the developers of NumPy and Jupyter, as well as statements by Stephen Wolfram of Mathematica about this issue. A textbook example of how not to set up academia.

My assumption was that this hits only non-tenured staff, the academic precariat. It is shocking to see that even William Stein, with a tenured position, is leaving academia. It seems the times are not ready 🙁

Every great open source math library is built on the ashes of someone’s academic career.
The origins of SageMath, p.32

Rondam RamblingsThe definition of dishonorable

Donald Trump during the campaign: Donald Trump in office: I wonder if he even knows what the T in LGBT stands for. The bigotry and ignorance behind this decision are truly staggering.  The implication that a transgender person imposes "tremendous medical costs and disruption" which impedes "decisive and overwhelming victory" when they serve "in any capacity" (emphasis mine) is

TEDTEDGlobal 2017: Announcing the speaker lineup for our Arusha conference

TEDGlobal 2017 kicks off August 27–30, 2017, in Arusha, Tanzania. Ten years after the last TEDGlobal in Arusha, we’ll again gather a community from across the continent and around the world to explore ideas that may propel Africa’s next leap — in business, politics and justice, creativity and entrepreneurship, science and tech.

Today, we’re thrilled to announce our speaker lineup for TEDGlobal 2017! It’s a powerful list you can skim here — to dive into speaker bios and learn about the 8 themed sessions of TEDGlobal 2017, visit our full Program Guide.

OluTimehin Adegbeye, Writer and activist: Writing on gender justice, sexual and reproductive rights, urban poverty and media OluTimehin Adegbeye shares her (often very strong) opinions on Twitter and in long-form work. @OhTimehin

Oshiorenoya Agabi, Neurotechnology entrepreneur: Oshiorenoya Agabi is engineering neurons to express synthetic receptors which give them an unprecedented ability to become aware of surroundings.

Nabila Alibhai, Place-maker: Nabila Alibhai leads inCOMMONS, a new organization focused on civic engagement, public spaces, and building collective responsibility for our shared places. @NabilaAlibhai

Bibi Bakare-Yusuf, Publisher: Bibi Bakare-Yusuf is co-founder and publishing director of one of Africa’s leading publishing houses, Cassava Republic Press.

Christian Benimana, Architect: Christian Benimana is co-founder of the African Design Center, a training program for young architects.

Gus Casely-Hayford, Cultural historian: Gus Casely-Hayford writes, lectures, curates and broadcasts widely about African culture.

In Session 5, Repatterning, speakers will talk about the worlds we create — in fiction, fashion, design, music.

Natsai Audrey Chieza, Designer: Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. @natsaiaudrey

Tania Douglas, Biomedical engineer: Tania Douglas imagines how biomedical engineering can help address some of Africa’s health challenges. @tania_douglas

Touria El Glaoui, Art fair curator: To showcase vital new art from African nations and the diaspora, Touria El Glaoui founded the powerhouse 1:54 Contemporary African Art Fair. @154artfair

Meron Estefanos, Refugee activist: Meron Estefanos is the executive director of the Eritrean Initiative on Refugee Rights, advocating for refugees and victims of trafficking and torture. @meronina

Chika Ezeanya-Esiobu, Indigenous knowledge expert: Working across disciplines, Chika Ezeanya-Esiobu explores indigenous knowledge, homegrown and grassroots approaches to the sustainable advancement of Sub-Saharan Africa.

Kamau Gachigi, Technologist: At Gearbox, Kamau Gachigi empowers Kenya’s next generation of creators to prototype and fabricate their visions. @kamaufablab

Ameenah Gurib-Fakim, President of Mauritius: Ameenah Gurib-Fakim is the 6th president of the island of Mauritius. As a biodiversity scientist as well, she explores the medical and nutrition secrets of her home. @aguribfakim

Leo Igwe, Human rights activist: Leo Igwe works to end a variety of human rights violations that are rooted in superstition, including witchcraft accusations, anti-gay hate, caste discrimination and ritual killing. @leoigwe

Joel Jackson, Transport entrepreneur: Joel Jackson is the founder and CEO of Mobius Motors, set to launch a durable, low-cost SUV made in Africa.

Tunde Jegede, Composer, cellist, kora virtuoso: TED Fellow Tunde Jegede combines musical traditions to preserve classical forms and create new ones.

Paul Kagame, President of the Republic of Rwanda: As president of Rwanda, Paul Kagame has received recognition for his leadership in peace-building, development, good governance, promotion of human rights and women’s empowerment, and advancement of education and ICT. @PaulKagame

Zachariah Mampilly, Political scientist: Zachariah Mampilly is an expert on the politics of both violent and non-violent resistance. He is the author of “Rebel Rulers: Insurgent Governance and Civilian Life during War” and “Africa Uprising: Popular Protest and Political Change.” @Ras_Karya

Vivek Maru, Legal empowerment advocate: Vivek Maru is the founder of Namati, a movement for legal empowerment around the world powered by cadres of grassroots legal advocates. Global Legal Empowerment Network

In Session 6: A Hard Look, these speakers will confront myths and hard facts about the continent, from the lens of politics and human rights as well as the reality of life as a small farmer.

Kola Masha, Agricultural leader: Kola Masha is the managing director of Babban Gona, an award-winning, high-impact, financially sustainable and highly scalable social enterprise, part-owned by the farmers they serve. @BabbanGona

Clapperton Chakanetsa Mavhunga, MIT professor, grassroots thinker-doer, author: Clapperton Chakanetsa Mavhunga studies the history, theory, and practice of science, technology, innovation, and entrepreneurship in the international context, with a focus on Africa.

Thandiswa Mazwai, Singer: Thandiswa is one of the most influential South African musicians of this generation. @thandiswamazwai

Yvonne Chioma Mbanefo, Digital learning advocate: After searching for an Igbo language learning tool for her kids, digital strategist Yvonne Mbanefo helped create the first illustrated Igbo dictionary for children. Now she’s working on Yoruba, Hausa, Gikuyu and more. @yvonnembanefo

Sara Menker, Technology entrepreneur: Sara Menker is founder and CEO of Gro Intelligence, a tech company that marries the application of machine learning with domain expertise and enables users to understand and predict global food and agriculture markets. @SaraMenker

Eric Mibuari, Computer scientist: Eric Mibuari studies the blockchain at IBM Research, and is the founder of the Laare Community Technology Centre in Meru, Kenya.

Kingsley Moghalu, Political economist: Kingsley Moghalu is a global leader who has made contributions to the stability, progress and wealth of nations, societies and individuals across such domains as academia, economic policy, banking and finance, entrepreneurship, law and diplomacy.

Sethembile Msezane, Artist: Sethembile Msezane explores the act of public commemoration — how it creates myths, constructs histories, includes some and excludes others. @sthemse

Kisilu Musya, Farmer and filmmaker: For six years, Kisilu Musya has filmed his life on a small farm in South East Kenya, to make the documentary “Thank You for the Rain.”

Robert Neuwirth, Author: To research his book “Stealth of Nations,” Robert Neuwirth spent four years among street vendors, smugglers and “informal” import/export firms. @RobertNeuwirth

Kevin Njabo, Biodiversity scientist: Kevin Njabo is coordinating the development of UCLA’s newly established Congo Basin Institute (CBI) in Yaoundé, Cameroon.

Alsarah and the Nubatones, East African retro-popsters: Inspired by both the golden age of Sudanese pop music of the ’70s and the New York effervescence, Alsarah & the Nubatones have built a repertoire where an exhilarating oud plays electric melodies on beautiful jazz-soul bass lines, and where sharp and modern percussions breathe new life to age-old rhythms.

Ndidi Nwuneli, Social innovation expert: Through her work in food and agriculture, and as a leadership development mentor, Ndidi Okonkwo Nwuneli commits to building economies in West Africa. @ndidiNwuneli

Dayo Ogunyemi, Cultural media builder: Dayo Ogunyemi is the founder of 234 Media, which makes principal investments in the media, entertainment and technology sectors. @AfricaMET

Nnedi Okorafor, Science fiction writer: Nnedi Okorafor weaves African cultures into the evocative settings and memorable characters of her science fiction work for kids and adults. @Nnedi

Fredros Okumu, Mosquito scientist: Fredros Okumu studies human-mosquito interactions, hoping to understand how to keep people from getting malaria.

Qudus Onikeku, Dancer, choreographer: With a background as an acrobat and dancer, Qudus Onikeku is one of the preeminent Nigerian choreographers working today. @qudusonikeku

DK Osseo-Asare, Designer: DK Osseo-Asare is a designer who makes buildings, landscapes, cities, objects and digital tools. @dkoa

Keller Rinaudo, Robotics entrepreneur: Keller Rinaudo is CEO and co-founder of Zipline, building drone delivery for global public health customers. @kellerrinaudo

Reeta Roy, President and CEO, The Mastercard Foundation: A thoughtful leader and an advocate for the world’s most vulnerable, Reeta Roy has worked tirelessly to build a foundation that is collaborative and known for its lasting impact.

Chris Sheldrick, Co-founder & CEO, what3words: With what3words, Chris Sheldrick is providing a precise and simple way to talk about location, by dividing the world into a grid of 3m x 3m squares and assigning each one a unique 3 word address.

George Steinmetz, Aerial photographer: Best known f­or his exploration photography, George Steinmetz has a restless curiosity for the unknown: remote deserts, obscure cultures, the ­mysteries of science and technology.

Olúfẹ́mi Táíwò, Historian and philosopher: Drawing on a rich cultural and personal history, Olúfẹ́mi Táíwò studies philosophy of law, social and political philosophy, Marxism, and African and Africana philosophy.

Pierre Thiam, Chef: Pierre Thiam shares the cuisine of his home in Senegal through global restaurants and highly praised cookbooks.

Iké Udé, Artist: The work of Nigerian-born Iké Udé explores a world of dualities: photographer/performance artist, artist/spectator, African/postnationalist, mainstream/marginal, individual/everyman and fashion/art.

Washington Wachira, Wildlife ecologist and nature photographer: Birder and ecologist Washington Wachira started the Youth Conservation Awareness Programme (YCAP) to nurture young environmental enthusiasts in Kenya.

Ghada Wali, Designer: A pioneering graphic designer in Egypt, Ghada Wali has designed fonts, brands and design-driven art projects.

Planet DebianThomas Lange: Building a Debian Live CD with FAI

In this wiki entry, I describe how to extend a FAI nfsroot so it can be used as the file system for a diskless client or a Live CD. A host can mount it via NFS when booting via PXE. You can easily create a Live CD using the command fai-cd.

This also works nicely with an Xfce desktop, and I've prepared an ISO image for easy testing.

You can log in as user demo, and the password is fai.

The next thing to check is whether we can use FAI's dirinstall or install method for creating the same environment, so it will be easy to create customized Live images.


LongNowWhy Do Some Forms of Knowledge Go Extinct?

The History of Art and Architecture slide library at Trinity College, Dublin. Via the Department of Ultimology.

Fiona Hallinan is an artist and researcher based at Trinity College, Dublin. Together with curator Kate Strain, she co-founded a project called the Department of Ultimology. Ultimology is the study of that which is dead or dying in a series or process. When applied to academic disciplines, it becomes the study of extinct or endangered subjects, theories, and tools of learning. Long Now recently spoke with Hallinan when she visited The Interval. What follows is a transcript of our conversation, edited for length and clarity.

LONG NOW: What was the inspiration for a department studying extinct or endangered subjects and theories?

Fiona Hallinan: It began back when Kate and I were both alumni of the History of Art and Architecture Department at Trinity College, Dublin. We learned everything we studied from a rather limited slide library. And we were speculating that in the last ten years those slides had probably been digitized, and that students now probably had access to an infinite number of images compared to our limited selection. We wondered how that had impacted how people learned the discipline, and therefore how that had actually evolved the discipline of art history itself. So we came up with an idea for a department within the university that would examine all the other disciplines and departments from that perspective.

Via the Department of Ultimology.

We had encountered the term “ultimology” in the context of the study of endangered languages and thought that that could be expanded to become a general discipline across the university that looked at that which was dead or dying. In 02014 we applied for and won the Trinity Creative Challenge, which was a provost’s award for artistic projects that would explore the university and present the knowledge being produced there to the general public. We spent the next year conducting interviews with different heads of departments and disciplines about what was ultimological in their disciplines. Based off of our findings, we organized the First International Conference of Ultimology, a public event that presented a mix of artistic commissions, presentations and real academic papers. Through that we were invited to be hosted as the Department of Ultimology in residence at CONNECT, which is the center for future networks at Trinity.

LN: What is your methodology when approaching a given academic discipline? Are you reaching out to specific fields and subjects that you suspect as having ultimological potential?

FH: At the beginning we just wanted to get as wide a scope as possible; we had a particular narrative that we expected to encounter, namely, that there was an increasing commercialization of the university, because certain disciplines could receive funding that perhaps other modes of knowledge production could not, on account of a phasing out of interest and activity. We thought that a subject like, say, medieval architecture might be virtually impossible to get funding for nowadays versus something like computational linguistics. And as a result, this was causing a shift or change in the structure of the university.

“The Illusion of Infinite Resources,” by the Department of Ultimology.

While we did find that that was true to an extent, we also found that as a term, “ultimology” was really exciting for lots of the academics that we spoke to, and there was a sense of relief that finally there was somewhere they could put all of this endangered or extinct knowledge. Often, we would go into a meeting and people would be prepared with heaps of examples, whereas other times people would be interested but say that ultimology wasn’t really that relevant to their discipline, only to realize through inquiry that it was.

One example of that was in Trinity’s Department of Psychology, where the department head, Dr. Jean Quigley, said that psychology didn’t really have anything ultimological because ideas and tools were added all the time instead of being taken away. We asked her for an example of something that had been recently added, and she described the concept of personality. From that, we asked what the set of qualities we call “personality” would have been described as before. And she said that people would have spoken about the soul. So from that conversation we started to think about different methodologies, and we described that methodology as negative space—the space that the concept would have occupied before.

A second methodology we developed was the idea of ultimology as a service. We hold clinics where academics come to us and speak to us, and the ultimological becomes a service akin to therapy where people can get things off their chest or they can talk about their research papers that didn’t go anywhere. It becomes a repository for the burden of the recent past.

Another methodology we began to utilize was the idea of embodiment, where we embody the Department of Ultimology through commissioning artists to make us the accessories or trappings of a real department, like bureaucratic forms.

Lanyards designed by Dennis McNulty for the First International Conference of Ultimology. Via the Department of Ultimology.

For our conference, we found a company in Dublin that had a hundred remaining lanyards with mobile phone loops on them, which would have been used in the pre-smartphone age. We commissioned an artist, Dennis McNulty, to riff on these lanyards with a poetic piece of text on them about the designer of the iPhone. The lanyard itself looked like an iPhone. And so there was this potential in an object like a lanyard that connoted a certain context and space of knowledge production, and I think there’s scope there to work with artists to consider those objects and what they mean and what their associations are for us. The bureaucratic questionnaire fulfills a similar function: it asks what research is, and talks about the idea of a person’s practice. While it looks very bureaucratic, its purpose is to get people to go deeply into reflecting on what they actually do.

The performativity of being a “department”  is essential. By doing it, it becomes real. While the Department of Ultimology is technically an art project, it’s not about just a specific outcome or a specific object coming out of it;  it’s more about using an artistic process to re-evaluate everything critically.

LN: What role does nostalgia play in the Department of Ultimology? Do the academics you interview bemoan a lost discipline or practice?  

FH: We try to be careful to avoid nostalgia, to avoid people being sad for something just because of a kind of fondness for it. While I’m not against nostalgia personally, I think it’s less interesting to fetishize the past, and more interesting to look at how these things actually affect the future.

Glassware blown by Trinity’s resident glassblower John Kelly.

For example, we met with Dr. Sylvia Draper, Head of the School of Chemistry at Trinity, and asked her what had changed in the discipline of Chemistry. She spoke about how glassware used to be an essential part of research. If you were a student of chemistry, you might actually design a piece of glassware that goes with your research. Draper told us that Trinity College had a glassblowing workshop on site with a glassblower named John Kelly, but that he was going to retire in two years and would not be replaced. It ties back to the commercialization of the university: the reason he’s not being replaced is because he’s salaried and a salaried employee is a high cost for the university. And so he and his work become expendable because in theory the department can just bring in cheaper, standard glassware from abroad.

However, if you’re a student and you’re planning your experiment and it requires an intricate, strange, unique piece of glass, it might now be much more expensive for you to get it, which might impact how you look at your research. You might be less willing or able to do something weirder, essentially. I picture it like these tiny little cracks that maybe can’t be explored in a discipline as people are funnelled down into a more particular standard route.

John Kelly at work in his lab at Trinity College, Dublin. Via the Department of Ultimology.

So while there’s a sense of nostalgia thinking about John Kelly in his lab and his beautiful glassware, it’s less about trying to preserve what he’s doing for its own sake; there’s an actual reason behind it that’s important to know about. It’s also very short-term thinking. Say his salary is 50,000 Euro a year, and a piece of special glassware costs 1,000 Euro to ship in. The savings quickly stop adding up; it’s a short-sighted view of saving money now without much thought to the future.

LN: Looking to the future, what’s next for the Department of Ultimology?

Kate Strain and Fiona Hallinan, founders of the Department of Ultimology.

FH: We’re hoping to publish a journal in December. We’re treating the journey of making it as part of the project as well. So it won’t be a roll-out of a finished product, and I think that we might treat the process of peer review as the potential basis for a public event.

Ultimately, we would like to start a Department of Ultimology in every time zone. We say “time zones” because  it’s a way of dividing the world that is perhaps more timeless than countries or nation-states. There’s an instability to those, particularly at the moment, whereas time zones have a celestial, larger-than-us quality.

Keep up with the Department of Ultimology by heading to its website or following it on Twitter.

Worse Than FailureCodeSOD: The Nuclear Option

About a decade ago, Gerald worked at a European nuclear plant. There was a “minor” issue where a controller connected to a high-voltage power supply would start missing out on status messages. “Minor”, because it didn’t really pose a risk to life and limb- but still, any malfunction with a controller attached to a high-voltage power supply in a nuclear power plant needs to be addressed.

So Gerald went off and got the code. It was on a file share, in a file called Or, wait, was it in the file called Or Or,

It took a few tries, but eventually he picked out the correct one. To his surprise, in addition to the .c and .h files he expected to see, there was also a mysterious .xls. And that’s where things went bad.

Pause for a moment to consider a problem: you receive a byte containing a set of flags to represent an error code. So, you need to check each individual bit to understand what the exact error is. At this point, you’re probably reaching for a bitshift operator, because that’s the easiest way to do it.

I want you to imagine, for a moment, however, that you don’t really know C, or bitwise operations, or even what a bit is. Instead, you know two things: that there are 255 possible error codes, and how to use Excel. With those gaps in knowledge, you might, perhaps, just manually write an Excel spreadsheet with every possible option, using Excel's range-drag operation to fill in the columns with easily predictable values. You might do this for 254 rows of data. With 255 possible values, that leaves one case unhandled; guess what was causing the error?

if (variable==   0       ) {     a=      0       ; b=    0       ; c=    0       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   1       ) {     a=      1       ; b=    0       ; c=    0       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   2       ) {     a=      0       ; b=    1       ; c=    0       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   3       ) {     a=      1       ; b=    1       ; c=    0       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   4       ) {     a=      0       ; b=    0       ; c=    1       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   5       ) {     a=      1       ; b=    0       ; c=    1       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   6       ) {     a=      0       ; b=    1       ; c=    1       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   7       ) {     a=      1       ; b=    1       ; c=    1       ; d=    0       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   8       ) {     a=      0       ; b=    0       ; c=    0       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   9       ) {     a=      1       ; b=    0       ; c=    0       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   10      ) {     a=      0       ; b=    1       ; c=    0       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   11      ) {     a=      1       ; b=    1       ; c=    0       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   12      ) {     a=      0       ; b=    0       ; c=    1       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   13      ) {     a=      1       ; b=    0       ; c=    1       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   14      ) {     a=      0       ; b=    1       ; c=    1       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   15      ) {     a=      1       ; b=    1       ; c=    1       ; d=    1       ;e=      0       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   16      ) {     a=      0       ; b=    0       ; c=    0       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   17      ) {     a=      1       ; b=    0       ; c=    0       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   18      ) {     a=      0       ; b=    1       ; c=    0       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   19      ) {     a=      1       ; b=    1       ; c=    0       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   20      ) {     a=      0       ; b=    0       ; c=    1       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   21      ) {     a=      1       ; b=    0       ; c=    1       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
if (variable==   22      ) {     a=      0       ; b=    1       ; c=    1       ; d=    0       ;e=      1       ;f=     0       ;g=     0       ;h=     0       ;}
/* a..h receive bits 0..7 of `variable` (a = least significant bit). */
if (variable >= 23 && variable <= 242) {   /* range handled by this block */
        a = (variable >> 0) & 1;
        b = (variable >> 1) & 1;
        c = (variable >> 2) & 1;
        d = (variable >> 3) & 1;
        e = (variable >> 4) & 1;
        f = (variable >> 5) & 1;
        g = (variable >> 6) & 1;
        h = (variable >> 7) & 1;
}
if (variable==   243     ) {     a=      1       ; b=    1       ; c=    0       ; d=    0       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   244     ) {     a=      0       ; b=    0       ; c=    1       ; d=    0       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   245     ) {     a=      1       ; b=    0       ; c=    1       ; d=    0       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   246     ) {     a=      0       ; b=    1       ; c=    1       ; d=    0       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   247     ) {     a=      1       ; b=    1       ; c=    1       ; d=    0       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   248     ) {     a=      0       ; b=    0       ; c=    0       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   249     ) {     a=      1       ; b=    0       ; c=    0       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   250     ) {     a=      0       ; b=    1       ; c=    0       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   251     ) {     a=      1       ; b=    1       ; c=    0       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   252     ) {     a=      0       ; b=    0       ; c=    1       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   253     ) {     a=      1       ; b=    0       ; c=    1       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
if (variable==   254     ) {     a=      0       ; b=    1       ; c=    1       ; d=    1       ;e=      1       ;f=     1       ;g=     1       ;h=     1       ;}
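
The branch chain above (this excerpt covers 217 through 254) does nothing more than decode `variable` into its eight binary digits, with `a` as the least significant bit: check any row, e.g. 217 is 11011001 in binary. A minimal Go sketch of the equivalent logic, reusing the snippet's variable names:

```go
package main

import "fmt"

// bitsOf returns the eight flags a..h for the given value, with a as the
// least significant bit -- the same mapping the if-chain hard-codes,
// one branch per possible value.
func bitsOf(variable int) (a, b, c, d, e, f, g, h int) {
	a = (variable >> 0) & 1
	b = (variable >> 1) & 1
	c = (variable >> 2) & 1
	d = (variable >> 3) & 1
	e = (variable >> 4) & 1
	f = (variable >> 5) & 1
	g = (variable >> 6) & 1
	h = (variable >> 7) & 1
	return
}

func main() {
	// Matches the variable==217 branch: a=1 b=0 c=0 d=1 e=1 f=0 g=1 h=1.
	fmt.Println(bitsOf(217))
}
```

Eight shift-and-mask lines replace all 256 branches.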

Don MartiIncentivizing production of information goods

Just thinking about approaches to incentivizing production of information goods, and where futures markets might fit in.

Artificial property

Article 1, Section 8, of the US Constitution still covers this one best.

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

We know about the problems with this one. It encourages all kinds of rent-seeking and freedom-menacing behavior by the holders of property interests in information. And the transaction costs are too high to incentivize the production of some useful kinds of information.

Commoditize the complement

Joel Spolsky explained it best, in Strategy Letter V. Smart companies try to commoditize their products’ complements. (See also: the list of business models in the Some Easily Rebutted Objections to GNU's Goals section of the GNU Manifesto)

This one has been shown to work for some categories of information goods but not others. (We have Free world-class browsers and OS kernels because search engines and hardware are complements. We don't have free world-class software in categories such as CAD.)


Signaling

Release a free information good as a way to signal competence in performing a service, or at least to signal a large investment by the author in persuading others of that competence. This works at the level of the individual labor market and in consulting. Don't know if it works in other areas.

Game and market mechanisms

With "gamified crowdsourcing" you can earn play rewards for very low transaction costs, and contribute very small tasks.

Common Voice

Higher transaction costs are associated with "crowdfunding" which sounds similar but requires more collaboration and administration.

In the middle, between crowdsourcing and crowdfunding, is a niche for a mechanism with lower transaction costs than crowdfunding but more rewards than crowdsourcing.

By using the existing bug tracker to resolve contracts, a bug futures market keeps transaction costs low. By connecting to an existing cryptocurrency, a bug futures market enables a kind of reward that is more liquid and transferable among projects.

We don't know how wide the bug futures niche is. Is it a tiny space between increasingly complex tasks that can be resolved by crowdsourcing and increasingly finer-grained crowdfunding campaigns?

Or are bug futures capable of achieving low enough transaction costs to be an attractive incentivization mechanism for a lot of tasks that go into a variety of information goods?

Don MartiGot a reply from Twitter

I thought it would be fun to try Twitter ads, and, not surprisingly, I started getting fake followers pretty quickly after I started a Twitter follower campaign.

Since I'm paying nine cents a head for these followers, I don't want to get ripped off. So naturally I put in a support ticket to Twitter, and just heard back.

Thanks for writing in about the quality of followers and engagements. One of the advantages of the Twitter Ads platform is that any RTs of your promoted ads are sent to the retweeting account's followers as an organic tweet. Any engagements that result are not charged, however followers gained may not align with the original campaign's targeting criteria. These earned followers or engagements do show in the campaign dashboard and are used to calculate cost per engagement, however you are not charged for them directly.

Twitter also passes all promoted engagements through a filtering mechanism to avoid charging advertisers for any low-quality or invalid engagements. These filters run on a set schedule so the engagements may show in the campaign dashboard, but will be deducted from the amount outstanding and will not be charged to your credit card.

If you have any further questions, please don't hesitate to reply.

That's pretty dense San Francisco speak, so let me see if I can translate to the equivalent for a normal product.

Hey, what are these rat turds doing in my raisin bran?

Thanks for writing in about the quality of your raisin bran eating experience. One of the advantages of the raisin bran platform is that during the production process, your raisin bran is made available to our rodent partners as an organic asset.

I paid for raisin bran, so why are you selling me raisin-plus-rat-turds bran?

Any ingredients that result from rodent engagement are not charged, however ingredients gained may not align with your original raisin-eating criteria.

Can I have my money back?

We pass all raisin bran sales through a filtering mechanism to avoid charging you for invalid ingredients. The total weight of the product, as printed on the box, includes these ingredients, but the weight of invalid ingredients will be deducted from the amount charged to your credit card.

So how can I tell which rat turds are "organic" so I'm not paying for them, and which are the ones that you just didn't catch and are charging me for?


Buying Twitter followers: Fiverr or Twitter?

On Fiverr, Twitter followers are about half a cent each ($5/1000). On Twitter, I'm getting followers for about 9 cents each. The Twitter price is about 18x the Fiverr price.

But every follower that someone else buys on Fiverr has to be "aged" and disguised in order to look realistic enough not to get banned. The bot-herders have to follow legit follower campaigns such as mine and not just their paying customers.

If Twitter is selling those "follow" actions to me for nine cents each, and the bot-herder is only making half a cent, how is Twitter not making more from bogus Twitter followers than the bot-herders are?

If you're verified on Twitter, you may not be seeing how much of a shitshow their ad business is. Maybe they're going to have to sell Twitter to me sooner than I thought.


Planet DebianNorbert Preining: Debian/TeX Live 2017.20170724-1

Yesterday I uploaded the first update of the TeX Live packages in Debian after TeX Live 2017 has entered Debian/unstable. The packages should by now have reached most mirrors. Nothing spectacular here besides a lot of updates and new packages.

If I have to pick one update it would be the one of algorithm2e, a package that has seen lots of use and some bugs due to two years of inactivity. Good to see a new release.


New packages

algolrevived, invoice2, jfmutil, maker, marginfit, pst-geometrictools, pst-rputover, pxufont, shobhika, tikzcodeblocks, zebra-goodies.

Updated packages

acmart, adobemapping, algorithm2e, arabluatex, archaeologie, babel, babel-french, bangorexam, beamer, beebe, biblatex-gb7714-2015, bibleref, br-lex, bxjscls, combofont, computational-complexity, dozenal, draftfigure, elzcards, embrac, esami, factura, fancyhdr, fei, fithesis, fmtcount, fontspec, fonttable, forest, fvextra, genealogytree, gotoh, GS1, l3build, l3experimental, l3kernel, l3packages, latexindent, limap, luapackageloader, lwarp, mcf2graph, microtype, minted, mptopdf, pdfpages, polynom, powerdot, probsoln, pxbase, pxchfon, pythontex, reledmac, siunitx, struktex, tcolorbox, tetex, texdirflatten, uowthesistitlepage, uptex-fonts, xcharter.

Planet DebianPetter Reinholdtsen: Norwegian Bokmål edition of Debian Administrator's Handbook is now available

I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is now available for purchase; if you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.

This is the second book I publish (the first was the book "Free Culture" by Lawrence Lessig in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.

Planet DebianReproducible builds folks: Reproducible Builds: week 117 in Buster cycle

Here's what happened in the Reproducible Builds effort between Sunday July 16 and Saturday July 22 2017:

Toolchain development

Bernhard M. Wiedemann wrote a tool to automatically run through different sources of non-determinism, and report which of these caused irreproducibility.

Dan Kegel's patches to fpm were merged.

Bugs filed

Patches submitted upstream:

Patches filed in Debian:

Reviews of unreproducible packages

73 package reviews have been added, 44 have been updated and 50 have been removed this week, adding to our knowledge about identified issues.

No issue types were updated.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (106)
  • Daniel Stender (1)
  • Drew Parsons (1)
  • Félix Sipma (1)
  • Lucas Nussbaum (25)

diffoscope development

  • Juliana Rodrigues:
    • Add new XML comparator. (Closes: #866120)
  • Guangyuan Yang:
    • Fix 2 cases in test_device on FreeBSD
  • Chris Lamb:
    • comparators.xml: Fix EPUB "missing file" tests; they ship a META-INF/container.xml file.
    • comparators.sqlite: Simplify file detection in Sqlite3Database.RE_FILE_TYPE
    • Style and attribution fixes to the XML comparator.
  • Ximin Luo:
    • main, logging: restore old logger settings to avoid pytest vomiting in certain situations
    • comparators/directory: Fix #868534 by expecting less strict test output

reprotest development

  • Ximin Luo:
    • Use autopkgtest upstream paths, makes things easier to import
    • Add script for importing autopkgtest code, and import autopkgtest 4.4

Ximin also restarted the discussion with autopkgtest-devel about code reuse for reprotest.

Santiago Torres began a series of patches to make reprotest more distro-agnostic, with the aim of making it usable on Arch Linux. Ximin reviewed these patches.


This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityHow a Citadel Trojan Developer Got Busted

A U.S. District Court judge in Atlanta last week handed a five year prison sentence to Mark Vartanyan, a Russian hacker who helped develop and sell the once infamous and widespread Citadel banking trojan. This fact has been reported by countless media outlets, but far less well known is the fascinating backstory about how Vartanyan got caught.

For several years, Citadel ruled the malware scene for criminals engaged in stealing online banking passwords and emptying bank accounts. U.S. prosecutors say Citadel infected more than 11 million computers worldwide, causing financial losses of at least a half billion dollars.

Like most complex banking trojans, Citadel was marketed and sold in secluded, underground cybercrime markets. Often the most time-consuming and costly aspect of malware sales and development is helping customers with any tech support problems they may have in using the crimeware.

In light of that, one innovation that Citadel brought to the table was to crowdsource some of this support work, easing the burden on the malware’s developers and freeing them up to spend more time improving their creations and adding new features.

Citadel users discuss the merits of including a module to remove other parasites from host PCs.


Citadel boasted an online tech support system for customers designed to let them file bug reports, suggest and vote on new features in upcoming malware versions, and track trouble tickets that could be worked on by the malware developers and fellow Citadel users alike. Citadel customers also could use the system to chat and compare notes with fellow users of the malware.

It was this very interactive nature of Citadel’s support infrastructure that FBI agents would ultimately use to locate and identify Vartanyan, who went by the nickname “Kolypto.” The nickname of the core seller of Citadel was “Aquabox,” and the FBI was keen to identify Aquabox and any programmers he’d hired to help develop Citadel.

In June 2012, FBI agents bought several licenses of Citadel from Aquabox, and soon the agents were suggesting tweaks to the malware that they could use to their advantage. Posing as an active user of the malware, FBI agents informed the Citadel developers that they’d discovered a security vulnerability in the Web-based interface that Citadel customers used to keep track of and collect passwords from infected systems (see screenshot below).


A screenshot of the Web-based Citadel botnet control panel.

Aquabox took the bait, and asked the FBI agents to upload a screen shot of the bug they’d found. As noted in this September 2015 story, the FBI agents uploaded the image to file-sharing giant Sendspace and then subpoenaed the logs from Sendspace to learn the Internet address of the user that later viewed and downloaded the file.

The IP address came back as the same one they had previously tied to Aquabox. The other address that accessed the file was in Ukraine and tied to Vartanyan. Prosecutors said Vartanyan’s address soon after was seen uploading to Sendspace a patched version of Citadel that supposedly fixed the vulnerability identified by the agents posing as Citadel users.

Mark Vartanyan. Source: Twitter.


“In the period August 2012 to January 2013, there were in total 48 files uploaded from Marks IP to Sendspace,” reads a story in the Norwegian daily VG that KrebsOnSecurity had translated into English here (PDF). “Those files were downloaded by ‘Aquabox’ with 2 IPs ( and”

Investigators would learn that Vartanyan was a Russian citizen who’d grown up in Ukraine. At the time of his arrest, Mark was living in Norway, which later extradited him to the United States for prosecution. In March 2017, Vartanyan pleaded guilty to one count of computer fraud, and was sentenced on July 19 to five years in federal prison.

Another Citadel developer, Dimitry Belorossov (a.k.a. “Rainerfox”), was arrested and sentenced in 2015 to four years and six months in prison after pleading guilty to distributing Citadel.

Early in its heyday, some text strings were added to the Citadel Trojan naming Yours Truly as the real author of Citadel (see screenshot below). While I obviously had no involvement in writing the trojan, I have written a great deal about its core victims — mainly dozens of small businesses here in the United States who saw their bank accounts drained of hundreds of thousands or millions of dollars after a Citadel infection.

A text string inside of the Citadel trojan. Source: AhnLab


Planet DebianRussell Coker: Forking Mon and DKIM with Mailing Lists

I have forked the “Mon” network/server monitoring system. Here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and use other best practices.

The first release of etbe-mon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not common practice for someone to fork the upstream of a package soon after becoming a co-maintainer of the Debian package. But I believe that this is in the best interests of the users. I presume that there are other collections of patches out there, and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been kept separate due to a lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. This explains how to setup Mailman for correct operation with DKIM and also why that seems to be the only viable option.

CryptogramAlternatives to Government-Mandated Encryption Backdoors

Policy essay: "Encryption Substitutes," by Andrew Keane Woods:

In this short essay, I make a few simple assumptions that bear mentioning at the outset. First, I assume that governments have good and legitimate reasons for getting access to personal data. These include things like controlling crime, fighting terrorism, and regulating territorial borders. Second, I assume that people have a right to expect privacy in their personal data. Therefore, policymakers should seek to satisfy both law enforcement and privacy concerns without unduly burdening one or the other. Of course, much of the debate over government access to data is about how to respect both of these assumptions. Different actors will make different trade-offs. My aim in this short essay is merely to show that regardless of where one draws this line -- whether one is more concerned with ensuring privacy of personal information or ensuring that the government has access to crucial evidence -- it would be shortsighted and counterproductive to draw that line with regard to one particular privacy technique and without regard to possible substitutes. The first part of the paper briefly characterizes the encryption debate two ways: first, as it is typically discussed, in stark, uncompromising terms; and second, as a subset of a broader problem. The second part summarizes several avenues available to law enforcement and intelligence agencies seeking access to data. The third part outlines the alternative avenues available to privacy-seekers. The availability of substitutes is relevant to the regulators but also to the regulated. If the encryption debate is one tool in a game of cat and mouse, the cat has other tools at his disposal to catch the mouse -- and the mouse has other tools to evade the cat. The fourth part offers some initial thoughts on implications for the privacy debate.

Blog post.

Planet DebianSatyam Zode: Maya - the OpenEBS Go Kit Project



I attended GopherCon India 2017, where William Kennedy gave a talk on “Package Oriented Design In Go”. In that talk, William explained some really important and thoughtful design principles that we can apply in our day-to-day Go work. I wanted to apply these design philosophies to the Go projects I have been working on as part of the OpenEBS project, and from William’s talk I learnt the good practice of having a Go Kit project at the organization level.

What is the Kit Project?

The Kit project in Go is a common project containing all the standard libraries or packages used across all the Go projects in an organization. Packages in the Kit project should follow sound design philosophies.

Need for a Kit project

Sometimes we write the same Go packages again and again to do the same task at different levels in different Go projects under the same organization. For example, we write a custom logger package in different Go projects. If the custom logger package is the same across the organization and can be reused by simply importing it, then it is a perfect fit for the Kit project. You can imagine how much time and cost having a Kit project saves us.
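
As a sketch of that logger example (the package path and function names here are hypothetical, not the actual OpenEBS layout), a kit-worthy package is small, generic and importable by every application project:

```go
package main

// A minimal sketch of a shared logger for a kit project. In a real kit
// project this would live in its own package (e.g. openebs/maya/pkg/logger
// -- path hypothetical) and be imported everywhere instead of being
// rewritten in each application project.

import (
	"log"
	"os"
)

// New returns a *log.Logger tagged with the calling service's name, so
// every project in the organization logs in one consistent format.
func New(service string) *log.Logger {
	return log.New(os.Stderr, "["+service+"] ", log.LstdFlags)
}

func main() {
	l := New("maya-apiserver")
	l.Println("started") // writes "[maya-apiserver] <timestamp> started" to stderr
}
```

Any application project then gets consistent logging with a single import and one call to `New`.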

How to convert existing projects to have “kit”

Maya is a kit project in progress. I will walk through our journey of creating a Kit project called maya for the OpenEBS organization from existing Go projects. At OpenEBS, as an open source and growing Go project, we value Go principles and try hard to leverage Go’s offerings. Maya is the Kit project for application projects like maya-apiserver, maya-storage-bot etc. Maya contains all the Kubernetes and Nomad APIs, common utilities etc. needed for the development of maya-apiserver and maya-storage-bot. In the near future we will move more of our custom libraries into maya, so that it becomes a promising Go kit project for the OpenEBS community.

We have specifically followed the package oriented design principles in Go to create maya as a kit project.

  • Usability

    We moved common packages such as orchprovider, types and pkg from maya-apiserver to maya. These packages are very generic and can be used in most of the Go projects in the OpenEBS organization. Brief details about the new packages in Maya:
    • orchprovider: contains packages for different orchestrators, such as Kubernetes and Nomad.

    • types: provides all the generic types related to orchestrators.

    • pkg: contains helper packages like nethelper, util etc.

    • volumes: contains packages related to volume provisioners and profiles.

  • Purpose

    Packages in the Kit project should be purposeful. These packages must provide, not contain. In maya we have packages like types, orchprovider, volumes etc.; the names of these packages suggest the functionality they provide.

  • Portability

    Portability is an important factor for packages in a kit project. Hence, we are building maya in such a way that it is easy to import and use in any Go project. Packages in maya are not a single point of dependency, and all the packages are independent of each other.

Project structure for the Maya as Kit project

  • Without having Maya as a kit project:

    • Project structure for the Maya

      ├── buildscripts
      │   └── docker
      ├── command
      ├── docs
      ├── example
      ├── scripts
      │   └── config
      └── templates
    • Project structure for the Maya-apiserver

      ├── buildscripts
      │   └── docker
      ├── cmd
      ├── docs
      ├── lib
      │   ├── api
      │   │   └── v1
      │   ├── artifacts
      │   │   ├── docker
      │   │   ├── docker-compose
      │   │   └── k8s
      │   ├── config
      │   ├── flaghelper
      │   ├── loghelper
      │   ├── mockit
      │   │   └── etcmayaserver
      │   ├── nethelper
      │   ├── orchprovider
      │   │   ├── k8s
      │   │   └── nomad
      │   ├── profile
      │   │   ├── orchprovider
      │   │   └── volumeprovisioner
      │   ├── server
      │   ├── util
      │   └── volumeprovisioner
      │       └── jiva
      └── proposals
  • Maya as a kit project

    • Project structure for the Maya (Kit Project)

      ├── buildscripts
      │   └── docker
      ├── command
      ├── docs
      ├── example
      ├── orchprovider
      │   ├── k8s
      │   │   └── v1
      │   └── nomad
      │       └── v1
      ├── pkg
      │   ├── nethelper
      │   └── util
      ├── scripts
      │   └── config
      ├── templates
      ├── types
      │   └── v1
      │       └── profile
      │           └── orchestrator
      └── volumes
          ├── profile
          │   └── volumeprovisioner
          └── provisioner
              └── jiva
    • Project structure for the Maya-apiserver (Application Project)

      ├── buildscripts
      │   └── docker
      ├── cmd
      ├── docs
      ├── lib
      │   ├── artifacts
      │   │   ├── docker
      │   │   ├── docker-compose
      │   │   └── k8s
      │   ├── config
      │   ├── flaghelper
      │   ├── loghelper
      │   └── server
      └── proposals

Example usage of maya kit project in maya-apiserver

Maya-apiserver uses maya as a Kit project. Maya-apiserver exposes OpenEBS operations in the form of REST APIs. This allows multiple clients, e.g. volume-related plugins, to consume the OpenEBS storage operations exposed by Maya API server.

  • Need for orchestration providers in maya-apiserver

    Maya-apiserver has been designed with storage inside containers in mind; in other words, containerized storage. This leads to maya-apiserver using best-of-breed container orchestrators. Hence maya-apiserver abstracts its container orchestration needs into the orchprovider namespace. Therefore, to achieve the above, maya-apiserver uses the orchprovider package provided by Maya.

  • Need for volumes in the maya-apiserver

    There can be multiple persistent volume provisioners, e.g. jiva and cstor, each of which knows the workings of its own volumes. Maya-apiserver abstracts these into volume operations whose specifics reside in individual namespaces, i.e. jiva, cstor, etc. Hence, maya-apiserver makes use of the volumes package from Maya to satisfy the above requirements.
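
As an illustrative sketch only (the interface and type names below are hypothetical, not the real OpenEBS APIs), abstracting multiple provisioners behind one kit-provided interface might look like this:

```go
package main

import "fmt"

// Provisioner is the kind of small, provider-agnostic interface a kit
// project's volumes package might expose to application projects.
// This interface is hypothetical and for illustration only.
type Provisioner interface {
	Provision(name string, sizeGB int) (string, error)
}

// jivaProvisioner stands in for a concrete provisioner (e.g. jiva) whose
// specifics would live in its own namespace inside the kit project.
type jivaProvisioner struct{}

func (jivaProvisioner) Provision(name string, sizeGB int) (string, error) {
	return fmt.Sprintf("%s (%dGB, jiva)", name, sizeGB), nil
}

func main() {
	// The application project (e.g. maya-apiserver) works against the
	// interface; swapping jiva for cstor would not change this code.
	var p Provisioner = jivaProvisioner{}
	vol, _ := p.Provision("demo-vol", 5)
	fmt.Println(vol)
}
```

The application project depends only on the interface, which is what keeps the kit packages portable and independent of each other.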


A Go Kit project should contain packages that are usable, purposeful and portable. Go Kit projects improve the efficiency of the organization at both the human and the code level.

Worse Than FailureThe Logs Don't Lie

She'd resisted the call for years. As a senior developer, Makoto knew how the story ended: one day, she'd be drafted into the ranks of the manager, forswearing her true love webdev. When her boss was sacked unexpectedly, mere weeks after the most senior dev quit, she looked around and realized she was holding the short straw. She was the most senior. This is her story.

As she settled into her new responsibilities, Makoto started coming in earlier and earlier in the hopes of getting some development work done. As such, she started to get accustomed to the rhythm of the morning shift, before most devs had rolled out of bed, but after the night shift ops guys had gone home.

Bad sign number 1: the CEO wandering past, looking a bit lost and vaguely concerned.

"Can I help you?" Makoto asked, putting down her breakfast pastry.

Bad sign number 2 was his reply: "Does the Internet look down to you?"

Makoto quickly pulled up her favorite Internet test site, /r/aww, to verify that she still had connectivity. "Seems all right to me."

"Well, I can't get online."

Webdev-Makoto would've shrugged and thought, Not my circus. Manager-Makoto forced a grin onto her face and said, "I'll get my guys on that."

"Thanks, you're a real champ." Satisfied, the CEO wandered back to whatever it was he did all day, leaving Makoto to explain a problem she wasn't experiencing to guys way more qualified to work on this than she was.

Hoping to explain the discrepancy, she unplugged her laptop. This time, the adorable kittens failed to load.

"Success!" she told the empty office. "This is officially some weird wi-fi problem."

She drafted up a notice to that effect, sent it to the office mailing list, and assigned her teammate Sven to find and fix the problem. By 9:00 AM, all was well, and her team had sent out an update to that effect.

Now well into her daily routine, Makoto put the incident behind her. After all, it was resolved, wasn't it?

4:00 PM rolled around, and Makoto was somehow the recipient for an angry email from Greg in Sales. Is the internet still out? I need to close out my sales!!! Why hasn't your team fixed this yet! We could lose $300,000 if I can't close out my sales by 5PM!!!!!

Makoto rolled her eyes at the unnecessary number of exclamation points and checked the sales pipeline. Sure enough, there was nothing preventing her from accessing Greg's queue and verifying that all $100 worth of sales were present and accounted for.

Makoto cracked her knuckles and crafted the most polite response she could muster: As per my update at 9am, the Internet is back online and you should be able to perform any and all job duties at this time.

The reply came 2 minutes later: I cannot close my opportunities!!!

Makoto forwarded the email chain to Sven before rolling over to his desk. "Greg's being a drama llama again. Can you pull the firewall logs and prove he's got Internet?"


10 minutes and 4 raised eyebrows later, Sven replied to the ticket, copying Greg's boss and attaching a screenshot of the logs. As Makoto stated, we are online at this time. Is it possible your computer received a virus from browsing PornHub since 9:30 this morning?

Greg spent the next day in meetings with HR, and the next week on unpaid leave to think about what he'd done. To this day, he cannot look Sven or Makoto in the eye as they pass each other in the hallway. Makoto suspects he won't suffer long—only as long as it takes him to find another job. Maybe one with IT people who don't know what search keywords he uses.


Planet DebianGunnar Wolf: Getting ready for DebConf17 in Montreal!

(image shamelessly copied from Noodles' Emptiness)

This year I will only make it to DebConf, not to DebCamp. But, still, I am very very happy and excited as the travel date looms nearer! I have ordered some of the delicacies for the Cheese and Wine party, signed up for the public bicycle system of Montreal, and done a fair share of work with the Content Team; finally today we sent out the announcement for the schedule of talks. Of course, there are several issues yet to fix, and a lot of things to do before traveling... But, no doubt about this: It will be an intense week!

Oh, one more thing while we are at it: The schedule as it was published today does not really look like we have organized stuff into tracks — But we have! This will be soon fixed, adding some color-coding to make tracks clearer on the schedule.

This year, I pushed for the Content Team to recover the notion of tracks as an organizational measure, and as something that delivers value to DebConf as a whole. Several months ago, I created a Wiki page for the DebConf tracks, asking interested people to sign up for them. We currently have the following tracks registered:

Andreas Tille
Debian Science
Michael Banck
Cloud and containers
Luca Filipozzi
Systems administration, automation and orchestration
Gunnar Wolf

We have two tracks still needing a track coordinator. Do note that most of the tasks mentioned by the Wiki have already been carried out; what a track coordinator will now do is serve as some sort of moderator, maybe a recurring talkmeister, ensuring continuity and probably providing some commentary, giving some unity to the track's sessions. So, the responsibilities for a track coordinator right now are quite similar to what is expected of video team volunteers — but applied to a set of contiguous sessions.

If you are interested in being the track coordinator/moderator for Embedded or for Systems administration, automation and orchestration, or even in sharing the job with any of the other registered coordinators, please speak up! Mail and update the table in the Wiki page.

See you very soon in Montreal!


Planet DebianBits from Debian: DebConf17 Schedule Published!

The DebConf17 orga team is proud to announce that over 120 activities have been scheduled so far, including 45- and 20-minute talks, team meetings, and workshops, as well as a variety of other events.

Most of the talks and BoFs will be streamed and recorded, thanks to our amazing video team!

We'd like to remind you that Saturday August 5th is also our much anticipated Open Day! This means a program for a wider audience, including special activities for newcomers, such as an AMA session about Debian, a beginners workshop on packaging, a thoughtful talk about freedom with regard to today's popular gadgets and more.

In addition to the published schedule, we'll provide rooms for ad-hoc sessions where attendees will be able to schedule activities at any time during the whole conference.

The current schedule is available at

This is also available through an XML feed. You can use ConfClerk in Debian to consume this, or Giggity on Android devices:

We look forward to seeing you in Montreal!

DebConf17 logo

Planet DebianNorbert Preining: Garmin fenix 5x – broken by design

Some months ago I upgraded my (European) fenix3 to a (Japanese) fenix 5x, looking forward to the built-in maps as well as support for Japanese. I was about to write a great review of how content I have been with the fenix 3 and how much better the 5x is. Well, until I realized that Garmin’s engineers seem to be brain-damaged, shipping broken-by-design devices. Just one word: set an alarm on the watch, and wonder …

… because you will never wake up the next day, since the alarm was deleted by an unattended (so-called) sync operation. This happened to me several times; the worst was when I was with clients, working as a guide.

So the problem is well known, see this link and this link and this link, and it is not restricted to the fenix. The origin of the problem is that Garmin is incapable of implementing a synchronization protocol. They have three sources of data: the Connect website, the Connect application on the smartphone, and the watch itself. Interestingly, they only push from web to application to device, overwriting settings on the lower end. Which means that any alarm I create on the watch will be overwritten, i.e. deleted, on every sync – sorry, not sync, on every forced push from the mobile.
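
The one-way flow is easy to model. Here is a minimal sketch (a hypothetical data model, not Garmin's actual code) of why a forced push deletes watch-created alarms, next to the union merge a real sync would at least need:

```javascript
// Hypothetical model of the flaw: three alarm stores, synced one way only.
const web   = new Set(["07:00"]);   // alarm created on the Connect website
const app   = new Set(web);         // the phone app mirrors the web
const watch = new Set(["05:30"]);   // alarm created directly on the watch

// What Garmin reportedly does: overwrite the lower layer wholesale.
function forcedPush(source) {
    return new Set(source);         // any device-local alarm is lost
}

// What a real synchronization would do: reconcile both sides.
// (Naive union merge; a real protocol would also have to track deletions.)
function merge(a, b) {
    return new Set([...a, ...b]);
}

console.log([...forcedPush(app)]);          // the 05:30 wake-up alarm is gone
console.log([...merge(app, watch)].sort()); // both alarms survive
```

Even this toy version shows why exempting alarms from the push, as suggested below, is the only safe stopgap: the push has no concept of device-side state at all.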

It is a bit depressing, but I think the urgent solution is to exempt alarms from the synchronization completely, and remove them from both the Connect web page and the application, until a better, real synchronization is implemented.

I found a work-around in one of the threads on the Garmin forum: create the alarms on the web or in the application – better, one alarm for each of the times you might need. These will be synced to the device, and can be activated/deactivated on the device without problems. But to be honest – this is not a solution I can really trust when I need to get up in the morning.

Besides this issue, which is unfortunately rather severe a restriction for me as mountaineer and mountain guide, I really love the watch and will write a more detailed review soon.

Planet DebianJonathan McDowell: Learning to love Ansible

This post attempts to chart my journey towards getting usefully started with Ansible to manage my system configurations. It’s a high level discussion of how I went about doing so and what I got out of it, rather than including any actual config snippets - there are plenty of great resources out there that handle the actual practicalities of getting started much better than I could.

I’ve been convinced about the merits of configuration management for machines for a while now; I remember conversations about producing an appropriate set of recipes to reproduce our haphazard development environment reliably over 4 years ago. That never really got dealt with before I left, and as managing systems hasn’t been part of my day job since then I never got around to doing more than working my way through the Puppet Learning VM. I do, however, continue to run a number of different Linux machines - a few VMs, a hosted dedicated server and a few physical machines at home and my parents’. In particular I have a VM which handles my parents’ email, and I thought that was a good candidate for trying to properly manage. It’s backed up, but it would be nice to be able to redeploy that setup easily if I wanted to move provider, or do hosting for other domains in their own VMs.

I picked Ansible, largely because I wanted something lightweight and the agentless design appealed to me. All I really need to do is ensure Python is on the host I want to manage and everything else I can bootstrap using Ansible itself. Plus it meant I could use the version from Debian testing on my laptop and not require backports on the stable machines I wanted to manage.

My first attempt was to write a single Ansible YAML file which did all the appropriate things for the email VM; installed Exim/Apache/Roundcube, created users, made sure the appropriate SSH keys were in place, installed configuration files, etc, etc. This did the job, but I found myself thinking it was no better than writing a shell script to do the same things.

Things got a lot better when instead of concentrating on a single host I looked at what commonality was shared between hosts. I started with simple things; Debian is my default distro so I created an Ansible role debian-system which configured up APT and ensured package updates were installed. Then I added a task to setup my own account and install my SSH keys. I was then able to deploy those 2 basic steps across a dozen different machine instances. At one point I got an ARM64 VM from Scaleway to play with, and it was great to be able to just add it to my Ansible hosts file and run the playbook against it to get my basic system setup.
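
A sketch of what that common baseline can look like as a playbook plus roles (the file layout and names below are my illustration, not McDowell's actual configuration):

```yaml
# site.yml -- apply the shared baseline to every host in the inventory
---
- hosts: all
  become: true
  roles:
    - debian-system   # APT configuration and pending package updates
    - my-account      # personal user account plus authorized SSH keys

# roles/debian-system/tasks/main.yml
---
- name: Ensure all packages are up to date
  apt:
    update_cache: true
    upgrade: dist

# roles/my-account/tasks/main.yml
---
- name: Create personal account
  user:
    name: jonathan
    shell: /bin/bash
- name: Install SSH public key
  authorized_key:
    user: jonathan
    key: "{{ lookup('file', 'files/id_ed25519.pub') }}"
```

With a layout like this, bringing a new VM under management really is just a new line in the inventory followed by `ansible-playbook -i hosts site.yml`.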

Adding email configuration got trickier. In addition to my parents’ email VM I have my own email hosted elsewhere (along with a whole bunch of other users) and the needs of both systems are different. Sitting down and trying to manage both configurations sensibly forced me to do some rationalisation of the systems, pulling out the commonality and then templating the differences. Additionally I ended up using the lineinfile module to edit the Debian supplied configurations, rather than rolling out my own config files. This helped ensure more common components between systems. There were also a bunch of differences that had grown out of the fact each system was maintained by hand - I had about 4 copies of each Let’s Encrypt certificate rather than just putting one copy in /etc/ssl and pointing everything at that. They weren’t even in the same places on different systems. I unified these sorts of things as I came across them.
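
As an illustration of the lineinfile pattern (the task below is my guess at the shape of such an edit, not his actual config): rather than shipping a full replacement file, a single line in the Debian-supplied Exim macros file can point at the one shared certificate location:

```yaml
- name: Point Exim at the shared Let's Encrypt certificate
  lineinfile:
    path: /etc/exim4/exim4.conf.localmacros
    regexp: '^MAIN_TLS_CERTIFICATE'
    line: "MAIN_TLS_CERTIFICATE = /etc/ssl/{{ mail_domain }}/fullchain.pem"
    create: true
  notify: restart exim   # handler assumed to be defined elsewhere
```

Because only one line is owned by Ansible, the rest of the distribution's configuration keeps receiving upstream updates, which is exactly the commonality-between-systems benefit described above.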

Throughout the process of this rationalisation I was able to easily test using containers. I wrote an Ansible role to create systemd-nspawn based containers, doing all of the LVM + debootstrap work required to produce a system which could then be managed by Ansible. I then pointed the same configuration as I was using for the email VM at this container, and could verify at each step along the way that the results were what I expected. It was still a little nerve-racking when I switched over the live email config to be managed by Ansible, but it went without a hitch as hoped.

I still have a lot more configuration to switch to being managed by Ansible, especially on the machines which handle a greater number of services, but it’s already proved extremely useful. To prepare for a jessie to stretch upgrade I fired up a stretch container and pointed the Ansible config at it. Most things just worked and the minor issues I was able to fix up in that instance leaving me confident that the live system could be upgraded smoothly. Or when I want to roll out a new SSH key I can just add it to the Ansible setup, and then kick off an update. No need to worry about whether I’ve updated it everywhere, or correctly removed the old one.

So I’m a convert; things were a bit more difficult by starting with existing machines that I didn’t want too much disruption on, but going forward I’ll be using Ansible to roll out any new machines or services I need, and expect that I’ll find that new deployment to be much easier now I have a firm grasp on the tools available.

TEDWhat if? … and other questions that lead to big ideas: The talks of TED@UPS

Hosts Bryn Freedman and Kelly Stoetzel welcome us to the show at TED@UPS, July 20, 2017, at SCADshow in Atlanta, Georgia. (Photo: Mary Anne Morgan / TED)

What if one person could change the world? What if we could harness our collective talent, insight and wisdom? And what if, together, we could spark a movement with positive impact far into the future?

For a third year, UPS has partnered with TED to bring experts in business, logistics, design and technology to the stage to share ideas from the forefront of innovation. At this year’s TED@UPS — held on July 20, 2017, at SCADShow in Atlanta, Georgia — 18 speakers and performers showed how daring human imagination can solve our most difficult problems. 

After opening remarks from Juan Perez, UPS’s chief information and engineering officer, the talks in Session 1 got underway.

Why protectionism isn’t a good deal. We’ve heard a lot of rhetoric lately suggesting that importers, like the US, are losing valuable manufacturing jobs to exporters like China, Mexico and Vietnam. In reality, those manufacturing jobs haven’t disappeared for the reasons you may think, says border and logistics specialist Augie Picado. Automation, not offshoring, is really to blame, he says; in fact, of the 5.7 million manufacturing jobs lost in the US between 2000 and 2010, 87 percent of them were lost to automation. If that trend continues, it means that future protectionist policies would save 1 in 10 manufacturing jobs, at best — but, more likely, they’d lead to tariffs and trade wars. And with the nature of modern manufacturing inexorably trending toward shared production, in which individual products are manufactured using materials produced in many different countries, protectionist policies make even less sense. Shared production allows us to manufacture higher-quality products at prices we can afford, but it’s impossible without efficient cross-border movement of materials and products. As Picado asks: “Does it make more sense to drive up prices to the point where we can’t afford basic goods, for the sake of protecting a job that might be eliminated by automation in a few years anyway?” 

Christine Thach shares her experience growing up in a refugee community — and the lessons it taught her about life and business — at TED@UPS. (Photo: Mary Anne Morgan / TED)

Capitalism for the collective. Christine Thach was raised within a tight-knit community of Cambodian refugees in the United States. Time after time, she witnessed the triumphs of community-first thinking through her own family’s hardships, steadfast relationships and continuous investment in refugee-owned businesses. “This collective-success mindset we’ve seen in refugees can actually improve the way we do business,” she says. “The self-interested foundations of capitalism, and the refugee collectivist mindset, are not in direct conflict with each other. They’re actually complementary.” Thach thinks an all-for-one, one-for-all mentality may just be able to shake up capitalism in a way that benefits everyone — if companies shift away from the individual and rally for group prosperity.

In defense of perfectionism. Some people think perfectionism is a bad thing, that it only leaves us disappointed. Jon Bowers disagrees; he sees perfectionism as “a willingness to do what is difficult to achieve what is right.” Bowers manages a facility where he trains professional delivery drivers. The stakes are high — 100 people in the US die every day in car accidents. So he’s a fan of striving to get as close to perfect as possible. We shouldn’t lower our standards because we’re afraid to fail, Bowers says. “We need to fail … failure is a natural stepping stone toward perfection.”

Uma Adwani shares the joys of teaching math at TED@UPS. (Photo: Mary Anne Morgan / TED)

Math’s hidden messages. “I hated math until it saved my life,” says Uma Adwani. As a young woman, Adwani left her small hometown of Akola, India, to start a career and life for herself in an unfamiliar city on her own. For months, she scraped by on three dollars a day — until a primary school hired her to teach the subject she loathed the most: math. But as Uma worked to prepare her lessons (and keep her job!), she started to discover “the magic of even and odd numbers, the poetry, the symmetry.” She shares the secret wisdom she found in the multiplication tables, like this one: if I am even to myself, no matter what I am multiplied with or what I go through in life, the result will always be even!

Truck driver turned activist John McKown tells sobering stories of human trafficking at TED@UPS. (Photo: Mary Anne Morgan / TED)

Activism on the road. As a small-town police officer, John McKown dealt with his share of prostitution cases. But after he left the force and became a truck driver, he faced prostitution in a new light — at truck stops. After first brushing them off as an annoyance, McKown came to realize that the many prostitutes who go from truck to truck offering “dates” at truck stops weren’t just stuck, they were enslaved. According to the FBI, 293,000 American children are at risk of enslavement, McKown says, and now he sees it as a moral imperative to help. When he pulls into a truck stop, he’s not just looking for a parking spot; he’s looking for a way to help — and he encourages others not to turn a blind eye to this problem.

A life of awe. For artist Jennifer Allison, getting dressed can feel like rubbing against a cactus, the lights at the grocery store seem more like strobes at a disco, and the number four is always royal blue. It wasn’t until Allison was an adult that she was given a name for the strange, and often painful, way her brain processes information — Sensory Processing Disorder (SPD). Allison shares the many ways she tried to cope with her condition — from stealing cars (and returning them) to self-medication and eventually an overdose — before returning to her childhood love: art. In an intimate talk, Allison shares how art saved her life, transforming her world “from pain and chaos to mesmerizing awe and wonder.” She urges us to find what transforms our own worlds, “whether it’s through art or science, nature or religion.” Because, she explains, it’s this sense of awe that connects us to the bigger picture and each other, grounding us and making life worth living.

Johnny Staats grew up singing gospel in church and his family band. Now a UPS driver and bluegrass virtuoso, he plays music with people along his route and at Carnegie Hall. Joined by multi-instrumentalist Davey Vaughn, Staats closes out Session 1 of TED@UPS with a performance of his original song, “His Love Has Got a Hold on Me.”

Singer Stella Stevenson and pianist Danny Bauer open Session 2 by transforming the TED@UPS stage into a jazz lounge with a bold, smoky cover of “Our Day Will Come.”

What’s the point of living in the city? Leading organizations predict that by 2050, 66 percent of the population will live in cities with worsening crime, congestion and inequality. Julio Gil believes the opposite. Trends come and go, he says, and city living will eventually go too, as people realize they can now get the same benefits of the city while living in the countryside. With the delivery innovations and ubiquitous technology of modern life, there’s no reason not to settle outside the city for a bigger piece of land. Soon enough, he says, “city life” will be something you can live anywhere, with the help of drones, social media and augmented reality. Gil challenges the TED@UPS audience to think outside big-city walls and consider the advantages of greener pastures.

Sebastian Guo heralds the arrival of the Chinese millennials — the biggest emerging consumer demographic in the world — at TED@UPS. (Photo: Mary Anne Morgan / TED)

Pay attention to Chinese millennials. The business world is obsessed with American millennials, but Sebastian Guo suggests that a different group is about to take over the world: Chinese millennials. If they were their own country, Chinese millennials would be the world’s third largest. They’re well-educated and super motivated — 57 percent have a bachelor’s degree and 23 percent have a master’s, and they’re choosing majors that give them a competitive edge, specifically STEM and business management. As the biggest emerging consumer demographic on the planet, Chinese millennials spend four times more on mobile purchases than their American counterparts. And then there are the intangibles. The Chinese are big-picture people whose thinking starts from the overview and makes its way to the specific, Guo says, which means they focus on growth and the future in the workplace. And 10 years of smartphones hasn’t erased thousands of years of Confucian ideals, which emphasize a sense of hierarchy in social relations and suggest that a Chinese millennial might be more deferential to their managers at work. The world is tilted towards China now, Guo says, and Chinese millennials are ready to be explorers in this new adventure.

Robot-proof our jobs. “Driver” is the most common job in 29 of the 50 states — and with self-driving cars on the horizon, this could quickly turn into a big problem. To keep robots from taking our jobs, innovation architect David Lee says that we should stop asking people to work like robots and let work feel like … the weekend! “Human beings are amazing on weekends,” Lee says. They’re artists, carpenters, chefs and athletes. The key is to start asking people what problems they are inspired to solve and what talents they want to bring to work. Let them lead the way. “When you invite people to be more, they can amaze us with how much more they can be,” Lee says.

Back with a welcomed musical interlude, Johnny Staats and Davey Vaughn return to the TED@UPS stage to perform an original song, “The West Virginia Coal Miner.”

How drones are revolutionizing healthcare. Partnering across disciplines, UPS joined with Zipline, Gavi and the Rwandan government to create the world’s first drone-based medical delivery system. The scalable system transports emergency medical supplies to remote villages in Rwanda. On track to its goal of saving thousands of lives a year, it could help transform how we deliver medical resources in the future as populations outgrow aging infrastructure. Learn more about this unique partnership in the mini-doc “Collaboration Lifeline,” shown for the first time at TED@UPS.

Planning happiness. City planners are already busy designing futures full of bike paths and LEED-certified buildings. But are they designing for our happiness? It’s hard to define, and even harder to plan for, but urban planner Thomas Madrecki has a simple solution: Ask the public. “Our quality of life improves most when we feel engaged and empowered,” he explains, and one of the best ways planners can do this is by making public participation a priority. He calls for an “overhaul of the planning process” through public engagement, clear communication, and meetings the public actually want to attend. It’s not enough for urban planners to be trained in zoning regulations, data methods and planning history — they need to be trained in people, says Madrecki. After all, happiness and health are not engineering problems; they’re people problems.

Innovators don’t see different things; they see things differently. As a Colonel in the Air Force Reserve and an MD-11 Captain at UPS, Jeff Kozak thinks a lot about fuel, and for good reason. For his airline, fuel is by far the largest expense, at over $1.3 billion a year. Kozak tells the story of a counterintuitive idea he had to optimize fuel efficiency and cut carbon emissions by focusing on finding the exact amount of fuel needed for each plane to get to each leg of its journey. Initially met with resistance by an industry that believed more fuel was always better, the plan worked — after just ten days the airline saved $500,000 and eliminated 1,300 tons of CO2 emissions. “Let’s all continue to strive to see things differently and stay open to ideas that go against conventional thinking,” Kozak says. “Despite the resistance this type of thinking can often bring, embracing the counterintuitive can make all the difference.”

Former professional wrestler Mike Kinney encourages us all to turn ourselves up at TED@UPS. (Photo: Mary Anne Morgan / TED)

That’s me … in the chaps. How do you go from a typical high school senior to a sweaty wild man in chaps and a cowboy hat? “You turn yourself up!” says retired professional wrestler and UPS sales supervisor Mike Kinney. For years Kinney was a professional wrestler with the stage name Cowboy Gator Magraw, a persona he invented for the ring by amplifying the best parts of himself, the things about him that made him unique. In a talk equal parts funny and smart, Kinney taps into some locker-room wisdom to show us how we can all turn up to reach our full potential.

To close out the show, violinist Jessica Cambron and flutist Paige James play a moving rendition of the goodnight waltz (and Ken Burns fan favorite) “Ashokan Farewell,” accompanied by Johnny Staats and Davey Vaughn.

TEDOur podcast “Sincerely, X” co-produced with Audible now available free worldwide

Last year, TED and Audible co-produced a new audio series that invited speakers to share ideas—anonymously. Our goal was to make room for an entirely new trove of ideas: those that could only be broadcast publicly if the speaker’s identity remained private.

The series debuted with a number of powerful stories, and we learned a lot in the process (read about producer Cloe Shasha’s personal experience here).

Now, we’re bringing that first season for free to Apple Podcasts, the TED Android app, or wherever you get your podcasts.

We begin with our first episode, “Dr. Burnout,” featuring a doctor who says she committed a fatal mistake with a patient, leading her to a disturbing diagnosis: the medical field pushes for professional burnout. She unveils a powerful perspective on how doctors must deepen their self-awareness.

We’ll be releasing new episodes every Thursday for the next 10 weeks.

Fans can also access all the episodes today at


CryptogramUS Army Researching Bot Swarms

The US Army Research Agency is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

Slashdot thread.

And while we're on the subject, this is an excellent report on AI and national security.

Worse Than FailureCodeSOD: This or That

Processing financial transactions is not the kind of software you want to make mistakes in. If something is supposed to happen, it is definitely supposed to happen. Not partially happen. Not maybe happen.

Thus, a company like Charles R’s uses a vendor-supplied accounting package. That vendor has a professional services team, so when the behavior needs to be customized, Charles’s company outsources that development to the vendor.

Of course, years later, that code needs to get audited, and it’s about then that you find out that the vendor outsourced their “professional services” to the lowest bidder, creating a less-than-professional service result.

If you want to make sure that when the country code is equal to "HND", you want to be really sure.

if( == config.country_code.HND || == config.country_code.HND)
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ",");
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ".");
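
For contrast, the intent was presumably a single branch that picks one grouping separator and scopes it with braces (the config shape below is assumed from the snippet):

```javascript
// Presumed intent of the snippet: choose ONE thousands separator per
// country, and use braces so only the chosen replace actually runs.
function groupDigits(intPart, country, config) {
    const sep = (country === config.country_code.HND) ? "," : ".";
    // Insert the separator before every trailing group of three digits.
    return intPart.replace(/\B(?=(\d{3})+(?!\d))/g, sep);
}

const config = { country_code: { HND: "HND" } };
console.log(groupDigits("1234567", "HND", config)); // 1,234,567
console.log(groupDigits("1234567", "DEU", config)); // 1.234.567
```

In the original, the missing braces mean only the first replace is conditional; the second runs on every transaction, regardless of country.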

Planet DebianJonathan Carter: Plans for DebCamp17

In a few days, I’ll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

  • Calamares: My main goal is to get Calamares (oops, they have a TLS problem currently so weblink disabled until that’s fixed) in a great state for inclusion in the archives, along with sane default configuration for Debian (calamares-settings-debian) that can easily be adapted for derivatives. Calamares itself is already looking good and might make it into the archives before DebCamp even starts, but the settings package will still need some work.
  • Gnome Shell Extensions: During the stretch development cycle I made great progress in packaging some of the most popular shell extensions, but there are a few more that I’d like to get in for buster: apt-update-indicator, dash-to-panel (done), proxy-switcher, tilix-dropdown, tilix-shortcut
  • zram-tools: Fix a few remaining issues in zram-tools and get it packaged into the archive.
  • DebConf Committee: Since January, I’ve been a delegate on the DebConf committee, I’m hoping that we get some time to catch up in person before DebConf starts. We’ve been working on some issues together recently and so far we’ve been working together really well. We’re working on keeping DebConf organisation stable and improving community processes, and I hope that by the end of DebConf we’ll have some proposals that will prevent some re-occurring problems and also help mend old wounds from previous years.
  • ISO Image Writer: I plan to try out Jonathan Riddell’s iso image writer tool and check whether it works with the use cases I’m interested in (Debian install and live media, images created with Calamares, boots UEFI/BIOS on optical and usb media). If it does what I want I’ll probably package it too if Jonathan Riddell didn’t get to it yet.
  • Hashcheck: Kyle Robertze wrote a tool called Hashcheck for checking install media checksums from a live environment. If he gets a break during DebCamp from video team stuff, I’m hoping we can look at some improvements for it and also getting it packaged in Debian.

Planet DebianJoerg Jaspert: Automated wifi login, update

With recent changes the automated login script for WifiOnICE stopped working. Fortunately the fix is easy: it is enough to add a referer header to the call and to have de/ added to the URL.

Updated script:


#!/bin/bash
# Automated login for the "WIFIonICE" hotspot on German ICE trains,
# run as a NetworkManager dispatcher script.
# (Some) docs at

IFACE=${1:-"none"}
ACTION=${2:-"up"}

case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwgetid "${IFACE}" -r)}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - --referer > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        ;;
esac


Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 3

This week our youngest students are playing games from different places around the world, in the past. Slightly older students are completing the Timeline Activity. Students in Years 4, 5 and 6 are starting to sink their teeth into their research project for the term, using the Scientific Process.

Foundation/Prep/Kindy to Year 3

Playing hoops

This week students in stand-alone Foundation/Prep/Kindy classes (Unit F.3) and those integrated with Year 1 (Unit F-1.3) are examining games from the past. The teacher can choose to match these to the stories from Week 1 of the unit, as games are listed matching each of the places and time periods included in those stories. However, some games are more practical to play than others, and some require running around, so the teacher may wish to choose games which suit the circumstances of each class. Teachers can discuss how different places have different types of games and why these games might be chosen in those places (e.g. dragons in China and lions in Africa).

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) have this week to finish off the Timeline Activity. The Timeline Activity requires some investment of time, which can be done as 2 half-hour sessions or one longer session. Some flexible timing is built into the unit for teachers who want to match this activity to the number line in Maths, and others who wish to revise or cover the number line in more depth as a complement to this activity.

Years 3 to 6

Arthur Phillip

Last week students in Years 3 to 6 chose a research topic, related to a theme in Australian History. Different themes are studied by different year levels. Students in Year 3 (Unit 3.7) study a topic in the history of their capital city or local community. Students in Year 4 (Unit 4.3) study a topic from Australian history in the precolonial or early colonial periods. Students in Year 5 (Unit 5.3) study a topic from Australian colonial history and students in Year 6 (Unit 6.3) study a topic related to Federation or 20th century Australian history. These research topics are undertaken as a Scientific Investigation. This week the focus is on defining a Research Question and undertaking Background Research. Student workbooks will guide students through the process of choosing a research question within their chosen topic, and then how to start the Background Research. These sections will be included in the Scientific Report each student produces at the end of this unit. OpenSTEM resources available with each unit provide a starting point for this Background Research.


Rondam RamblingsDonald Trump shows that democracy is working. Alas.

I must confess to indulging in a certain amount of schadenfreude watching Donald Trump squirm.  I have been an unwavering never-Trumper since before he announced he was running for president.  And yet I am mindful of the fact that nearly all of the predictions I have made about Trump's political fortunes have been wrong.  In fact, while researching links for this post I realized that I wrote

Planet DebianGregor Herrmann: RC bugs 2017/08-29

long time no blog post. – & the stretch release happened without many RC bug fixes from me; in practice, the auto-removals are faster & more convenient.

what I nevertheless did in the last months was to fix RC bugs in pkg-perl packages (it still surprises me how fast rotting & constantly moving code is); prepare RC bug fixes for jessie (also for pkg-perl packages); & in the last weeks provide patches & carry out NMUs for perl packages as part of the ongoing perl 5.26 transition.

  • #783656 – libhtml-microformats-perl: "libhtml-microformats-perl: missing dependency on libmodule-pluggable-perl"
    fix in jessie (pkg-perl)
  • #788008 – libcgi-application-plugin-anytemplate-perl: "libcgi-application-plugin-anytemplate-perl: missing dependency on libclone-perl"
    fix in jessie (pkg-perl)
  • #788350 – libhttp-proxy-perl: "libhttp-proxy-perl: FTBFS - proxy tests"
    fix in jessie (pkg-perl)
  • #808454 – src:libdata-faker-perl: "libdata-faker-perl: FTBFS under some locales (eg. fr_CH.UTF-8)"
    fix in jessie (pkg-perl)
  • #824843 – libsys-syscall-perl: "libsys-syscall-perl: FTBFS on arm64: test suite failures"
    fix in jessie (pkg-perl)
  • #824936 – libsys-syscall-perl: "libsys-syscall-perl: FTBFS on mips*: test failures"
    fix in jessie (pkg-perl)
  • #825012 – libalgorithm-permute-perl: "libalgorithm-permute-perl: FTBFS with Perl 5.24: undefined symbol: PUSHBLOCK"
    upload new upstream release (pkg-perl)
  • #826136 – libsys-syscall-perl: "libsys-syscall-perl FTBFS on hppa arch (with patch)"
    fix in jessie (pkg-perl)
  • #826471 – intltool: "intltool: Unescaped left brace in regex is deprecated at /usr/bin/intltool-update line 1070"
    extend existing patch, uploaded by maintainer
  • #826502 – quilt: "quilt: FTBFS with perl 5.26: Unescaped left brace in regex"
    apply patch from ntyni (pkg-perl)
  • #826505 – swissknife: "swissknife: FTBFS with perl 5.26: Unescaped left brace in regex"
    add patches from ntyni, upload to DELAYED/5
  • #839208 – libio-compress-perl: "libio-compress-perl: uninstallable, current version superseded by Perl 5.24.1"
    upload newer upstream release (pkg-perl)
  • #839218 – nama: "nama: FTBFS because of perl's lack of stack reference counting"
    apply patch from Balint Reczey (pkg-perl)
  • #848060 – src:libx11-protocol-other-perl: "libx11-protocol-other-perl: FTBFS randomly (failing tests)"
    fix in jessie (pkg-perl)
  • #855920 – src:fail2ban: "fail2ban: FTBFS: test_rewrite_file: AssertionError: False is not true"
    try to reproduce, later propose different fix
  • #855951 – src:libsecret: "libsecret FTBFS with test failures on many architectures"
    try to reproduce
  • #856133 – src:shiboken: "shiboken FTBFS on i386/armel/armhf: other_collector_external_operator test failed"
    try to reproduce
  • #857087 – nsscache: "nsscache: /usr/share/nsscache not installed"
    triaging, later fixed by maintainer
  • #858501 – libmojomojo-perl: "libmojomojo-perl: broken symlink: /usr/share/perl5/MojoMojo/root/static/js/jquery.autocomplete.js -> ../../../../../javascript/jquery-ui/ui/jquery.ui.autocomplete.min.js"
    fix symlink (pkg-perl)
  • #858509 – cl-csv-data-table: "cl-csv-data-table: broken symlink: /usr/share/common-lisp/systems/cl-csv-data-table.asd -> ../source/cl-csv-data-table/cl-csv-data-table.asd"
    propose a patch
  • #859539 – filezilla: "filezilla: Filezilla crashes at startup"
    simply close the bug :)
  • #859963 – src:mimetic: "mimetic FTBFS on architectures where char is unsigned"
    add patch to make variable signed
  • #860142 – libgeo-ip-perl: "libgeo-ip-perl: should recommend geoip-database and geoip-database-extra"
    analyse, lower severity (pkg-perl)
  • #863049 – "jessie-pu: package shutter/0.92-0.1+deb8u2"
    fix in jessie (pkg-perl)
  • #865045 – xmltv: "xmltv: FTBFS with Perl 5.26: t/test_filters.t failure"
    sponsor maintainer upload
  • #865224 – src:uwsgi: "uwsgi: ftbfs with multiple supported python3 versions"
    adjust build dependencies, upload NMU with maintainer's permission
  • #865380 – libtest-unixsock-perl: "libtest-unixsock-perl: Build-Conflicts-Indep: libtest-simple-perl (>= 1.3), including Perl 5.26"
    conditionally skip a test (pkg-perl)
  • #865888 – nagios-plugin-check-multi: "nagios-plugin-check-multi: FTBFS with Perl 5.26: Unescaped left brace in regex is deprecated here"
    update unescaped-left-brace-in-regex.patch, upload to DELAYED/5
  • #866317 – html2ps: "html2ps: relies on deprecated Perl syntax/features, breaks with 5.26"
    prepare a patch, later do QA upload
  • #866934 – libhttp-oai-perl: "libhttp-oai-perl: /usr/bin/oai_pmh uses the 'encoding' pragma, breaks with Perl 5.26"
    patch out "use encoding" (pkg-perl)
  • #866944 – libmecab-perl: "libmecab-perl: FTBFS: no such file or directory: /var/lib/mecab/dic/debian/dicrc"
    build-depend on fixed version (pkg-perl)
  • #866981 – src:libpgobject-type-datetime-perl: "libpgobject-type-datetime-perl FTBFS: You planned 15 tests but ran 5."
    upload new upstream release (pkg-perl)
  • #866984 – src:libpgobject-type-json-perl: "libpgobject-type-json-perl FTBFS: You planned 9 tests but ran 5."
    upload new upstream release (pkg-perl)
  • #866991 – libcpan-meta-perl: "libcpan-meta-perl: uninstallable in unstable"
    fix versioned Breaks/Replaces (pkg-perl)
  • #867046 – gwhois: "gwhois: fails with perl 5.26: The encoding pragma is no longer supported at /usr/bin/gwhois line 80."
    drop "use encoding", upload to DELAYED/5
  • #867208 – src:libpgobject-type-bytestring-perl: "libpgobject-type-bytestring-perl FTBFS: Failed 1/5 test programs. 0/8 subtests failed."
    upload new upstream release (pkg-perl)
  • #867210 – libtext-mecab-perl: "libtext-mecab-perl: FTBFS: test failures: Failed to create mecab instance"
    build-depend on fixed version (pkg-perl)
  • #867983 – libclass-load-perl: "libclass-load-perl: FTBFS: t/012-without-implementation.t failure"
    upload new upstream release (pkg-perl)
  • #867984 – libclass-load-xs-perl: "libclass-load-xs-perl: FTBFS: t/012-without-implementation.t failure"
    upload new upstream release (pkg-perl)
  • #868069 – liburi-namespacemap-perl: "liburi-namespacemap-perl: unbuildable with sbuild"
    fix build dependencies (pkg-perl)
  • #868075 – libperlx-assert-perl: "libperlx-assert-perl: missing dependency on libkeyword-simple-perl | libdevel-declare-perl"
    fix dependencies (pkg-perl)
  • #868613 – src:liblog-report-lexicon-perl: "liblog-report-lexicon-perl FTBFS: Failed 2/10 test programs. 2/305 subtests failed."
    upload new upstream release (pkg-perl)
  • #868888 – get-flash-videos: "Fails: Can't locate Term/"
    add missing dependency (pkg-perl)
  • #869208 – src:libalgorithm-svm-perl: "libalgorithm-svm-perl FTBFS on 32bit: SVM.c: loadable library and perl binaries are mismatched (got handshake key 0x80c0080, needed 0x81c0080)"
    add patch to add ccflags (pkg-perl)
  • #869360 – slic3r: "slic3r: missing dependency on perlapi-*"
    propose a patch
  • #869383 – src:clanlib: "clanlib FTBFS with perl 5.26"
    propose a patch
  • #869418 – src:libtest-taint-perl: "libtest-taint-perl FTBFS on armel: not ok 2 - $ENV{TEST_ACTIVE} is tainted"
    add a patch (pkg-perl)
  • #869429 – vile: "vile: Uninstallable after BinNMU / not binNMU-safe"
    some triaging

Planet DebianNorbert Preining: Gaming: Refunct

A lovely little game, Refunct, just crossed my Steam installation. A simple platformer developed with much love. The game play consists of reviving a lost area by stepping on towers, finding buttons, and making more towers appear through this.

The simple game play idea is enriched with wonderful lighting due to the movement of the sun. I really enjoyed the changing of environment and mood that was created by this effect.

Although not terribly useful for now, one can also swim and dive and enjoy the world from below, which is just a nice added bonus.

The game is currently (till Monday night as far as I see) on sale on Steam for 149 Yen, i.e., slightly above a Euro/Dollar. Well worth the investment for about 1h of game play.

Addition: The fastest run at the moment is at 2:46.57, that is under 3min! I managed barely under 25min. Incredible.

Planet DebianVincent Fourmond: Screen that hurts my eyes, take 2

Six months ago, I wrote a lengthy post about my new computer hurting my eyes. I haven't made any progress with that, but I've accidentally upgraded my work computer from Kernel 4.4 to 4.8 and the nvidia drivers from 340.96-4 to 375.66-2. Well, my work computer now hurts too; I've switched back to the previous kernel and drivers, and I hope it'll be back to normal.

Any ideas of something specific that changed, either between 4.4 and 4.8 (kernel startup code, default framebuffer modes, etc.?), or between the 340.96 and the 375.66 drivers? In any case, I'll try that specific combination of kernel/drivers at home to see if I can get it to a usable state.

Update, July 23rd

I reverted to the earlier drivers/kernel, to no avail. But it seems in fact the problem with my work computer is linked to an allergy of some kind, since antihistamine drugs have an effect (there are construction works in my lab, perhaps they are the cause ?). No idea still for my home computer, for which the problem is definitely not solved.

Planet Linux AustraliaGabriel Noronha: test post

test posting from

01 – [Jul-24 13:35 API] Volley error on – exception: null
02 – [Jul-24 13:35 API] StackTrace:

03 – [Jul-24 13:35 API] Dispatching action: PostAction-PUSHED_POST
04 – [Jul-24 13:35 POSTS] Post upload failed. GENERIC_ERROR: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
05 – [Jul-24 13:35 POSTS] updateNotificationError: Error while uploading the post: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
06 – [Jul-24 13:35 EDITOR] Focus out callback received

Don MartiMy bot parsed 12,387 RSS feeds and all I got were these links.

Bryan Alexander has a good description of an "open web" reading pipeline in I defy the world and go back to RSS. I'm all for the open web, but 40 separate folders for 400 feeds? That would drive me nuts. I'm a lumper, not a splitter. I have one folder for 12,387 feeds.

My chosen way to use RSS (and one of the great things about RSS is you can choose UX independently of information sources) is a "scored river". Something like Dave Winer's River of News concept, that you can navigate by just scrolling, but not exactly a river of news.

  • with full text if available, but without images. I can click through if I want the images.

  • items grouped by score, not feed. (Scores are assigned by a dirt-simple algorithm in which a feed "invests" a percentage of its points in every link, and the investments pay out in a higher score for that feed if the user likes a link.)
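That dirt-simple scoring loop can be sketched in a few lines of Python. The names, the investment rate, and the payout multiplier here are my own illustrative assumptions, not the actual implementation:

```python
# Sketch of the "feed investment" scoring idea: a feed stakes a percentage
# of its points on each link it carries; liked links pay the stake back
# with a bonus, so feeds the user likes accumulate a higher score.

class Feed:
    def __init__(self, name, points=100.0):
        self.name = name
        self.points = points
        self.stakes = {}  # link URL -> amount currently invested

    def invest(self, link, rate=0.01):
        """Invest a fixed percentage of current points in a link."""
        amount = self.points * rate
        self.points -= amount
        self.stakes[link] = self.stakes.get(link, 0.0) + amount

def like(link, feeds, payout=3.0):
    """When the user likes a link, investing feeds get their stake back with a bonus."""
    for feed in feeds:
        stake = feed.stakes.pop(link, 0.0)
        feed.points += stake * payout

a, b = Feed("feed-a"), Feed("feed-b")
a.invest("https://example.com/story")
like("https://example.com/story", [a, b])
print(round(a.points, 2))  # → 102.0: feed-a ends up ahead of its starting 100
```

Feeds that never invest in anything the user likes slowly lose influence, which is the filter-bubble knob being tuned here.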

I also put the byline at the bottom of each item. Anyway, one thing I have found out about manipulating my own filter bubble is that linklog feeds and blogrolls are great inputs. So here's a linklog feed. (It's mirrored from the live site, which annoys everyone except me.)

Here are some actual links.

This might look funny: How I ran my kids like an Atlassian team for a month. But think about it for a minute. Someone at every app or site your kids use is doing the same thing, and their goals don't include "Dignity and Respect" or "Hard Work Smart Work".

Global network of 'hunters' aim to take down terrorists on the internet It took me a few days to figure things out and after a few weeks I was dropping accounts like flies…

Google's been running a secret test to detect bogus ads — and its findings should make the industry nervous. (This is a hella good idea. Legit publishers could borrow it: just go ad-free for a few minutes at random, unannounced, a couple of times a week, then send the times straight to CMOs. Did you buy ads that someone claimed ran on our site at these times? Well, you got played.)

For an Inclusive Culture, Try Working Less As I said, to this day, my team at J.D. Edwards was the most diverse I’ve ever worked on....Still, I just couldn’t get over that damned tie.

The Al Capone theory of sexual harassment Initially, the connection eluded us: why would the same person who made unwanted sexual advances also fake expense reports, plagiarize, or take credit for other people’s work?

Jon Tennant - The Cost of Knowledge But there’s something much more sinister to consider; recently a group of researchers saw fit to publish Ebola research in a ‘glamour magazine’ behind a paywall; they cared more about brand association than the content. This could be life-saving research, why did they not at least educate themselves on the preprint procedure....

Twitter Is Still Dismissing Harassment Reports And Frustrating Victims

This Is How Your Fear and Outrage Are Being Sold for Profit (Profit? What about TEH LULZ??!?!1?)

Fine, have some cute animal photos, I was done with the other stuff anyway: Photographer Spends Years Taking Adorable Photos of Rats to Break the Stigma of Rodents


Planet DebianSean Whitton: I'm going to DebCamp17, Montréal, Canada

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items. In rough order of priority/likelihood of completion:

  • Debian Policy sprint

  • conversations about using git for Debian packaging, especially with regard to dgit

    • writing up or following up on feature requests
  • Emacs team sprint

    • talking to maintainers about transitioning their packages to use dh-elpa
  • submitting and revising patch series to dgit

  • writing a test suite for git-remote-gcrypt

Cory DoctorowCome see me at San Diego Comic-Con!

There are three more stops on my tour for Walkaway: tomorrow at San Diego Comic-Con, next weekend at Defcon 25 in Las Vegas, and August 10th at the Burbank Public Library.

My Comic-Con day is tomorrow/Sunday, July 23: first, a 10AM signing at the Tor Books booth (#2701); then a panel, The Future is Bleak, with Annalee Newitz, Scott Westerfeld, Scott Reintgen and Alex R. Kahler; and finally a 1:15PM signing at autographic area AA06.

(Image: Gage Skidmore, CC-BY-SA)

Planet DebianNorbert Preining: Making fun of Trump – thanks France

I mean, it is easy to make fun of Trump, he is just too stupid and incapable and uneducated. But what the French president Emmanuel Macron did on Bastille Day, in the presence of the usual Trumpies, was just above the usual level of making fun of Trump. The French made Trump watch a French band playing a medley of Daft Punk. And as we know – Trump seemed to be very unimpressed, most probably because he doesn’t have a clue.

I mean, normally they play this pathetic rubbish – look at the average US (or Chinese or North Korean) parades – and here we have the celebration of an event much older than anything the US can put on the table, and they are playing Daft Punk!

France, thanks. You made my day – actually not only one!

Planet DebianNiels Thykier: Improving bulk performance in debhelper

Since debhelper/10.3, there have been a number of performance-related changes.  The vast majority primarily improve bulk performance or only have visible effects at larger “input” sizes.

Most visible cases are:

  • dh + dh_* now scale a lot better for a large number of binary packages.  Even more so with parallel builds.
  • Most dh_* tools are now a lot faster when creating many directories or installing files.
  • dh_prep and dh_clean now bulk their removals.
  • dh_install can now bulk some installations.  For a concrete corner-case, libssl-doc went from approximately 11 seconds to less than a second.  This optimization is implicitly disabled with --exclude (among others).
  • dh_installman now scales a lot better with many manpages.  Even more so with parallel builds.
  • dh_installman has restored its performance under fakeroot (regression since 10.2.2)


For debhelper, this mostly involved:

  • avoiding fork+exec of commands for things doable natively in perl.  Especially, when each fork+exec only processes one file or dir.
  • bulking as many files/dirs into the call as possible, where fork+exec is still used.
  • caching / memoizing slow calls (e.g. in parts of pkgfile inside Dh_Lib)
  • adding an internal API for dh to do bulk checks for pkgfiles. This is useful for dh when checking if it should optimize out a helper.
  • and, of course, doing things in parallel where trivially possible.
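The bulking idea in the first two bullets can be illustrated generically. This is deliberately not debhelper's code, just a sketch of the principle: one process handling many items instead of one fork+exec per item:

```python
# Generic illustration of bulking (not debhelper's code): one fork+exec
# per item vs. a single batched call handling all items.
import pathlib
import subprocess
import tempfile

work = pathlib.Path(tempfile.mkdtemp())
names = ["a", "b", "c"]

# Slow pattern: fork+exec of mkdir once per directory.
for n in names:
    subprocess.run(["mkdir", "-p", str(work / "slow" / n)], check=True)

# Bulked pattern: a single mkdir call creates all directories at once.
subprocess.run(
    ["mkdir", "-p", *[str(work / "fast" / n) for n in names]], check=True
)

print(sorted(p.name for p in (work / "fast").iterdir()))  # → ['a', 'b', 'c']
```

With three items the difference is invisible; with thousands of files (the libssl-doc case above), the per-item process startup cost dominates and bulking pays off.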


How to take advantage of these improvements in tools that use Dh_Lib:

  • If you use install_{file,prog,lib,dir}, then it will come out of the box.  These functions are available in Debian/stable.  On a related note, if you use “doit” to call “install” (or “mkdir”), then please consider migrating to these functions instead.
  • If you need to reset owner+mode (chown 0:0 FILE + chmod MODE FILE), consider using reset_perm_and_owner.  This is also available in Debian/stable.
    • CAVEAT: It is not recursive and YMMV if you do not need the chown call (due to fakeroot).
  • If you have a lot of items to be processed by a external tool, consider using xargs().  Since 10.5.1, it is now possible to insert the items anywhere in the command rather than just in the end.
  • If you need to remove files, consider using the new rm_files function.  It removes files and silently ignores if a file does not exist. It is also available since 10.5.1.
  • If you need to create symlinks, please consider using make_symlink (available in Debian/stable) or make_symlink_raw_target (since 10.5.1).  The former creates policy compliant symlinks (e.g. fixup absolute symlinks that should have been relative).  The latter is closer to a “ln -s” call.
  • If you need to rename a file, please consider using rename_path (since 10.5).  It behaves mostly like “mv -f” but requires dest to be a (non-existing) file.
  • Have a look at whether on_pkgs_in_parallel() / on_items_in_parallel() would be suitable for enabling parallelization in your tool.
    • The emphasis for these functions is on making parallelization easy to add with minimal code changes.  It pre-distributes the items which can lead to unbalanced workloads, where some processes are idle while a few keeps working.


I would like to thank the following for reporting performance issues, regressions or/and providing patches.  The list is in no particular order:

  • Helmut Grohne
  • Kurt Roeckx
  • Gianfranco Costamagna
  • Iain Lane
  • Sven Joachim
  • Adrian Bunk
  • Michael Stapelberg

Should I have missed your contribution, please do not hesitate to let me know.


Filed under: Debhelper, Debian

Planet DebianJunichi Uekawa: asterisk fails to start on my raspberry pi.

asterisk fails to start on my raspberry pi. I don't quite understand what the error message is but systemctl tells me there was a timeout. Don't know which timeout it hits.

Don Martithe other dude

Making the rounds, this is a fun one: A computer was asked to predict which start-ups would be successful. The results were astonishing.

  • 2014: When there's no other dude in the car, the cost of taking an Uber anywhere becomes cheaper than owning a vehicle. So the magic there is, you basically bring the cost below the cost of ownership for everybody, and then car ownership goes away.

  • 2018 (?): When there's no other dude in the fund, the cost of financing innovation anywhere becomes cheaper than owning a portfolio of public company stock. So the magic there is, you basically bring the transaction costs of venture capital below the cost of public company ownership for everybody, and then public companies go away.

Could be a thing for software/service companies faster than we might think. Futures contracts on bugs→equity crowdfunding and pre-sales of tokens→bot-managed follow-on fund for large investors.


TEDProsthetics that feel more natural, how mushrooms may help save bees, and more

Please enjoy your roundup of TED-related news:

Prosthetics that feel more natural. A study in Science Robotics lays out a surgical technique developed by Shriya Srinivasan, Hugh Herr and others that may help prosthetics feel more like natural limbs. During an amputation, the muscle pairs that allow our brains to sense how much force is applied to a limb and where it is in space are severed, halting sensory feedback to and from the brain and affecting one’s ability to balance, handle objects and move. But nerves that send signals to the amputated limb remain intact in many amputees. Using rats, the scientists connected these nerves with muscles grafted from other parts of the body — a technique that successfully restored the muscle pair relationship and sensory feedback being sent to the brain. Combined with other research on translating nerve signals into instructions for moving the prosthetic limb, the technique could help amputees regain the ability to sense where the prosthetic is in space and the forces applied to it. They plan to begin implementing this technique in human amputees. (Watch Herr’s TED Talk)

From mathematician to politician. Emmanuel Macron wants France to be at the forefront of science, and science to be incorporated in global politics, but this is easier said than done. The election of Cédric Villani to the French National Assembly—a mathematician, Fields medalist, and TED speaker—provides a reason for optimism. “Currently, scientific knowledge within French political circles is close to zero,” Villani said in an interview with Science. “It’s important that some scientific expertise is present in the National Assembly.” Villani’s election is a step in that direction. (Watch Villani’s TED Talk)

A digital upgrade for the US government. The United States Digital Services, of which Matt Cutts is acting administrator, released its July Report to Congress. Since 2014, the USDS has worked with Silicon Valley engineers and experienced government employees to streamline federal websites and online services. Currently, the USDS is working with seven federal agencies, including the Department of Defense, the Department of Health and Human Services and the Department of Education. Ultimately, the USDS’ digital intervention is not just about reducing cost and increasing efficiency; it’s about restoring people’s trust in government. (Watch Cutts’ TED Talk)

Can mushrooms help save bees? Bee populations have been in decline for the past decade, and the consequences could be dire. But in a video for Biographic, produced by Louie Schwartzberg and including mycologist Paul Stamets, scientists discuss an unexpected solution: mushrooms. The spores and extract from Metarhizium anisopliae, a common species of mushroom, are toxic to varroa mites, the vampiric parasite which sucks blood from bees and causes colony collapse disorder. However, bees can tolerate low doses free of harm. Metarhizium anisopliae has even been shown to promote beehive longevity. This could be a step forward in curbing the mortality rate of nature’s most prolific pollinator. (Watch Schwartzberg’s TED Talk and Stamets’ TED Talk)

Support for women entrepreneurs. The World Bank Group announced its creation of The Women Entrepreneurs Finance Initiative (We-Fi), a facility that will create a $1 billion fund to support and encourage female entrepreneurship. Initiated by the U.S. and Germany, it quickly received support from other nations including Canada, Japan, Saudi Arabia and South Korea. Nearly 70% of small and medium-sized enterprises owned by women in developing countries are denied or unable to receive adequate financial services. We-Fi aims to overcome these and many other obstacles by providing early support, networking opportunities and access to markets. “Women’s economic empowerment is critical to achieve the inclusive economic growth required to end extreme poverty, which is why it has been such a longstanding priority for us,” World Bank Group President Jim Yong Kim said. “This new facility offers an unprecedented opportunity to harness both the public and private sectors to open new doors of opportunity for women entrepreneurs and women-owned firms in developing countries around the globe.” (Watch Kim’s TED Talk)

Daring to drive. Getting behind the wheel of a car is something many of us take for granted. However, as Manal al-Sharif details in her new memoir, Daring to Drive: A Saudi Woman’s Awakening, it’s not that way for everybody. The daughter of a taxi driver, al-Sharif got an education and landed a good job. The real challenge was simply getting to work—as a rule, Saudi women are not allowed to drive. Daring to Drive tells the story of her activism in the face of adversity. (Watch al-Sharif’s TED Talk)

Have a news item to share? Write us at and you may see it included in this biweekly round-up.

CryptogramFriday Squid Blogging: Giant Squid Caught Off the Coast of Ireland

It's the second in two months. Video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramHacking a Segway

The Segway has a mobile app. It is hackable:

While analyzing the communication between the app and the Segway scooter itself, Kilbride noticed that a user PIN number meant to protect the Bluetooth communication from unauthorized access wasn't being used for authentication at every level of the system. As a result, Kilbride could send arbitrary commands to the scooter without needing the user-chosen PIN.

He also discovered that the hoverboard's software update platform didn't have a mechanism in place to confirm that firmware updates sent to the device were really from Segway (often called an "integrity check"). This meant that in addition to sending the scooter commands, an attacker could easily trick the device into installing a malicious firmware update that could override its fundamental programming. In this way an attacker would be able to nullify built-in safety mechanisms that prevented the app from remote-controlling or shutting off the vehicle while someone was on it.
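A minimal sketch of the missing integrity check: verify that a firmware image really came from the vendor before installing it. The key handling and tag format below are illustrative assumptions, not Segway's actual update protocol; real firmware signing would use public-key signatures rather than a shared secret:

```python
# Sketch of a firmware integrity check: the updater refuses any image
# whose authentication tag does not verify against the vendor key.
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # hypothetical; real designs use public-key crypto

def sign_firmware(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Vendor side: produce an authentication tag for a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Device side: accept the image only if the tag verifies."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new-firmware-blob"
tag = sign_firmware(firmware)
print(verify_firmware(firmware, tag))           # True: authentic update
print(verify_firmware(firmware + b"mal", tag))  # False: tampered update rejected
```

Without a check like this, anyone who can reach the update channel can push arbitrary code, which is exactly the attack Kilbride describes.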

"The app allows you to do things like change LED colors, it allows you to remote-control the hoverboard and also apply firmware updates, which is the interesting part," Kilbride says. "Under the right circumstances, if somebody applies a malicious firmware update, any attacker who knows the right assembly language could then leverage this to basically do as they wish with the hoverboard."

Worse Than FailureError'd: No Thanks Necessary

"I guess we're not allowed to thank the postal carriers?!" Brian writes.


"So, does the CPU time mean that Microsoft has been listening to every noise I have made since before I was born?" writes Shaun F.


"No problem. I will not attempt to re-use your error message without permission," wrote Alex K.


Mark B. writes, "Ah, if only we could have this in real life."


"Good work Google! Another perfect translation into German," Kolja wrote.


"I was searching for an Atmel MCU, so I naturally opened Atmel's Product Finder. I kind of wish that I didn't," writes Michael B.,



Planet DebianMichal Čihař: Making Weblate more secure and robust

Having a publicly running web application always brings challenges in terms of security and, more generally, in handling untrusted data. Security-wise, Weblate has always been quite good (mostly thanks to using Django, which comes with built-in protection against many vulnerabilities), but there were always things to improve in input validation or possible information leaks.

When Weblate joined HackerOne (see our first month experience with it), I was hoping to get some security-driven code review, but apparently most people there are focused on black-box testing. I can certainly understand that - it's easier to conduct and you need much less knowledge of the tested website to perform it.

One big area where reports against Weblate came in was authentication. Originally we mostly relied on the default authentication pipeline coming with Python Social Auth, but that showed some possible security implications, and we ended up with a heavily customized authentication pipeline to avoid several risks. Some patches were submitted back, some issues reported, but we've still diverged quite a lot in this area.

A second area where scanning was apparently performed, but almost no reports came in, was input validation. Thanks to excellent XSS protection in Django, nothing was really found. On the other hand, this triggered several internal server errors on our side. At this point I was really happy to have Rollbar configured to track all errors happening in production. Thanks to having all such errors properly recorded and grouped, it was really easy to go through them and fix them in our codebase.

Most of the related fixes have landed in Weblate 2.14 and 2.15, but obviously this is ongoing effort to make Weblate better with every release.

Filed under: Debian English SUSE Weblate


Krebs on SecurityExclusive: Dutch Cops on AlphaBay ‘Refugees’

Following today’s breaking news about U.S. and international authorities taking down the competing Dark Web drug bazaars AlphaBay and Hansa Market, KrebsOnSecurity caught up with the Dutch investigators who took over Hansa on June 20, 2017. When U.S. authorities shuttered AlphaBay on July 5, police in The Netherlands saw a massive influx of AlphaBay refugees who were unwittingly fleeing directly into the arms of investigators. What follows are snippets from an exclusive interview with Petra Haandrikman, team leader of the Dutch police unit that infiltrated Hansa.

Vendors on both AlphaBay and Hansa sold a range of black market items — most especially controlled substances like heroin. According to the U.S. Justice Department, AlphaBay alone had some 40,000 vendors who marketed a quarter-million sales listings for illegal drugs to more than 200,000 customers. The DOJ said that as of earlier this year, AlphaBay had 238 vendors selling heroin. Another 122 vendors advertised Fentanyl, an extremely potent synthetic opioid that has been linked to countless overdoses and deaths.

In our interview, Haandrikman detailed the dual challenges of simultaneously dealing with the exodus of AlphaBay users to Hansa and keeping tabs on the giant increase in new illicit drug orders that were coming in daily as a result.

The profile and feedback of a top AlphaBay vendor.


KrebsOnSecurity (K): Talk a bit about how your team was able to seize control over Hansa.

Haandrikman (H): When we knew the FBI was working on AlphaBay, we thought ‘What’s better than if they come to us?’ The FBI wanted [the AlphaBay takedown] to look like an exit scam [where the proprietors of a dark web marketplace suddenly abscond with everyone’s money]. And we knew a lot of vendors on AlphaBay would probably come over to Hansa when AlphaBay was closed.

K: Where was Hansa physically based?

H: We knew the Hansa servers were in Lithuania, so we sent an MLAT (mutual legal assistance treaty) request to Lithuania and requested if we could proceed with our planned actions in their country. They were very willing to help us in our investigations.

K: So you made a copy of the Hansa servers?

H: We gained physical access to the machines in Lithuania, and were able to set up some clustering between the [Hansa] database servers in Lithuania and servers we were running in our country. With that, we were able to get a real time copy of the Hansa database, and then copy over the Web site code itself.

K: Did you have to take Hansa offline for a while during this process?

H: No, it didn’t really go offline. We were able to create our own copy of the site that was running on servers in the Netherlands. So there were two copies of the site running simultaneously.

The now-defunct Hansa Market.


K: At a press conference on this effort at the U.S. Justice Department in Washington, D.C. today, Rob Wainwright, director of the European law enforcement organization Europol, detailed how the closure of AlphaBay caused a virtual stampede of former AlphaBay buyers and sellers taking their business to Hansa Market. Tell us more about what that influx was like, and how you handled it.

H: Yes, we called them “AlphaBay refugees.” It wasn’t the technical challenge that caused problems. Because this was a police operation, we wanted to keep up with the orders to see if there were any large amounts [of drugs] being ordered to one place, [so that] we could share information with our law enforcement partners internationally.

K: How exactly did you deal with that? Were you able to somehow slow down the orders coming in?

H: We just closed registration on Hansa for new users for a few days. So there was a temporary restriction for being able to register on the site, which slowed down the orders each day to make sure that we could cope with the orders that were coming in.

K: Did anything unexpected happen as a result?

H: Some people started selling their Hansa accounts on Reddit. I read somewhere that one Hansa user sold his account for $40. The funny part about that was that sale happened about five minutes before we re-opened registration. There was a lot of frustration from ex-AlphaBay users that weren’t allowed to register on the site. But we also got defended by the Hansa community on social media, who said it was a great decision by us to educate certain AlphaBay users on Hansa etiquette, which doesn’t allow the sale of things permitted on AlphaBay and other dark markets, such as child pornography and firearms.

A message from Dutch authorities listing the top dark market vendors by nickname.


K: You mentioned earlier that the FBI wanted AlphaBay users to think that the reason for the closure of that marketplace was that its operators and administrators had conducted an ‘exit scam’ where they ran off with all of the Bitcoin and virtual currency that vendors and buyers had stored in their marketplace wallets temporarily. Why do you think they wanted this to look like an exit scam?

H: The idea was to hit the dark markets even harder when they think they’re just moving to another market and it turns out to be law enforcement. Breaking the trust, so that [users] would not feel safe on a dark market.

K: It has been reported that just a few days ago the Hansa market administrators decided to ban the sale of Fentanyl. Were Dutch police involved in that at all?

H: It was a combination of things. One of the site’s employees or moderators started a discussion about this drug. We obviously also had our own opinion about it. It was a pretty good dialogue between us and the Hansa moderators to ban this from the site, and [that decision received] a lot of support from the community. But we didn’t instigate that discussion.

K: Have the Dutch police arrested anyone in connection with this investigation so far?

H: Yes, we identified several people in the Netherlands using the site, and there have already been several arrests made [tied to] Fentanyl.

K: Can you talk about whether your control over Hansa helped you identify users?

H: We did use some technical tricks to find out who people are, but we can’t go into that a lot because the investigation is still going on. But we did try to change the behavior [of some Hansa users] by asking for things that helped us to identify a lot of people and money.

K: What is your overall strategy in all of this?

H: Our strategy is that we want people to know that the Dark Web is not an anonymous place for criminals. Don’t think you can just buy or sell your drugs there without eventually getting caught by law enforcement. We want people to know you’re not safe on the Dark Web. Sooner or later we will come to get you.

Further reading: After AlphaBay’s Demise, Customers Flocked to Dark Market Run by Dutch Police

Krebs on SecurityAfter AlphaBay’s Demise, Customers Flocked to Dark Market Run by Dutch Police

Earlier this month, news broke that authorities had seized the Dark Web marketplace AlphaBay, an online black market that peddled everything from heroin to stolen identity and credit card data. But it wasn’t until today, when the U.S. Justice Department held a press conference to detail the AlphaBay takedown that the other shoe dropped: Police in The Netherlands for the past month have been operating Hansa Market, a competing Dark Web bazaar that enjoyed a massive influx of new customers immediately after the AlphaBay takedown.

The normal home page for the dark Web market Hansa has been replaced by this message from U.S. law enforcement authorities.


U.S. Attorney General Jeff Sessions called the AlphaBay closure “the largest takedown in world history,” targeting some 40,000 vendors who marketed a quarter-million listings for illegal drugs to more than 200,000 customers.

“By far, most of this activity was in illegal drugs, pouring fuel on the fire of a national drug epidemic,” Sessions said. “As of earlier this year, 122 vendors advertised Fentanyl. 238 advertised heroin. We know of several Americans who were killed by drugs on AlphaBay.”

Andrew McCabe, acting director of the FBI, said AlphaBay was roughly 10 times the size of the Silk Road, a similar dark market that was shuttered in a global law enforcement sting in October 2013.

As impressive as those stats may be, the real coup in this law enforcement operation became evident when Rob Wainwright, director of the European law enforcement organization Europol, detailed how the closure of AlphaBay caused a virtual stampede of former AlphaBay buyers and sellers taking their business to Hansa Market, which had been quietly and completely taken over by Dutch police one month earlier — on June 20.

“What this meant…was that we could identify and disrupt the regular criminal activity that was happening on Hansa Market but also sweep up all of those new users that were displaced from AlphaBay and looking for a new trading platform for their criminal activities,” Wainwright told the media at today’s press conference, which seemed more interested in asking Attorney General Sessions about a recent verbal thrashing from President Trump.

“In fact, they flocked to Hansa in droves,” Wainwright continued. “We recorded an eight times increase in the number of human users on Hansa immediately following the takedown of AlphaBay. Since the undercover operation to take over Hansa market by the Dutch Police, usernames and passwords of thousands of buyers and sellers of illicit commodities have been identified and are the subject of follow-up investigations by Europol and our partner agencies.”

On July 5, the same day that AlphaBay went offline, authorities in Thailand arrested Alexandre Cazes — a 25-year-old Canadian citizen living in Thailand — on suspicion of being the creator and administrator of AlphaBay. He was charged with racketeering, conspiracy to distribute narcotics, conspiracy to commit identity theft and money laundering, among other alleged crimes.

Alexandre Cazes, standing in front of one of four Lamborghini sports cars he owned.


Law enforcement authorities in the US and abroad also seized millions of dollars worth of Bitcoin and other assets allegedly belonging to Cazes, including four Lamborghini cars and three properties.

However, law enforcement officials never got a chance to extradite Cazes to the United States to face trial. Cazes, who allegedly went by the nicknames “Alpha02” and “Admin,” reportedly committed suicide while still in custody in Thailand.

Online discussions dedicated to the demise of AlphaBay, Hansa and other Dark Web markets — such as this megathread over at Reddit — observe that law enforcement officials may have won this battle with their clever moves, but that another drug bazaar will simply step in to fill the vacuum.

But Ronnie Tokazowski, a senior analyst at New York City-based threat intelligence firm Flashpoint, said the actions by the Dutch and American authorities could make it more difficult for established vendors from AlphaBay and Hansa to build a presence using the same identities at alternative Dark Web marketplaces.

Vendors on Dark Web markets tend to re-use the same nickname across multiple marketplaces, partly so that other cybercriminals won’t try to assume and abuse their good names on other forums, but also because a reputation for quality customer service means everything on these marketplaces and is worth a pretty penny.

But Tokazowski said even if top vendors from AlphaBay/Hansa already have a solid reputation among buyers on other marketplaces, some of those vendors may choose to walk away from their former identities and start anew.

“One of the things [the Dutch Police and FBI] mentioned was they were going after other markets using some of the several thousand password credentials they had from AlphaBay and Hansa, as a way to get access to vendor accounts,” on other marketplaces, he said. “These actions are really going to have a lot of people asking who they can trust.”

A message from Dutch authorities listing the top dark market vendors by nickname.


“There are dozens of these Dark Web markets, people will start to scatter to them, and it will be interesting to see who steps up to become the next AlphaBay,” Tokazowski continued. “But if people were re-using usernames and passwords across dark markets, it’s going to be a bad day for them. And from a vendor perspective, [the takedowns] make it harder for sellers to transfer reputation to another market.”

For more on how the Dutch Police’s National High Tech Crimes Unit (NHTCU) quietly assumed control over the Hansa Market, check out this story.

This story may be updated throughout the day (as per usual, any updates will be noted with a timestamp). In the meantime, the Justice Department has released a redacted copy of the indictment against Cazes (PDF), as well as a forfeiture complaint (PDF).

Update, 4:00 p.m. ET: Added perspectives from Flashpoint, and link to exclusive interview with the leader of the Dutch police unit that infiltrated Hansa.

CryptogramEthereum Hacks

The press is reporting a $32M theft of the cryptocurrency Ethereum. Like all such thefts, they're not a result of a cryptographic failure in the currencies, but instead a software vulnerability in the software surrounding the currency -- in this case, digital wallets.

This is the second Ethereum hack this week. The first tricked people into sending their Ethereum to another address.

This is my concern about digital cash. The cryptography can be bulletproof, but the computer security will always be an issue.

Worse Than FailureFinding the Lowest Value

Max’s team moved into a new office, which brought with it the low-walled, “bee-hive” style cubicle partitions. Their project manager cheerfully explained that the new space “would optimize collaboration”, which in practice meant that every random conversation between any two developers turned into a work-stopping distraction for everyone else.

That, of course, wasn’t the only change their project manager instituted. The company had been around for a bit, and their original application architecture was a Java-based web application. At some point, someone added a little JavaScript to the front end. Then a bit more. This eventually segregated the team into two clear roles: back-end Java developers, and front-end JavaScript developers.

An open pit copper mine

“Silos,” the project manager explained, “are against the ethos of collaboration. We’re all going to be full stack developers now.” Thus everyone’s job description and responsibilities changed overnight.

Add an overly ambitious release schedule and some unclear requirements, and the end result is a lot of underqualified developers rushing to hit targets with tools that they don’t fully understand, in an environment that isn’t conducive to concentration in the first place.

Max was doing his best to tune out the background noise, when Mariella stopped into Dalton’s cube. Dalton, sitting straight across from Max, was the resident “front-end expert”, or at least, he had been before everyone was now a full-stack developer. Mariella was a long-time backend JEE developer who hadn’t done much of the web portion of their application at all, and was doing her best to adapt to the new world.

“Dalton, what’s the easiest way to get the minimum value of an array of numbers in JavaScript?” Mariella asked.

Max did his best to ignore the conversation. He was right in the middle of a particularly tricky ORM-related bug, and was trying to figure out why one fetch operation was generating just awful SQL.

“Hrmmmm…” Dalton said, tapping at his desk and adding to the distraction while he thought. “That’s a tough one. Oh! You should use a filter!”

“A filter, what would I filter on?”

Max combed through the JPA annotations that controlled their data access, cursing the “magic” that generated SQL queries, but as he started to piece it together, Dalton and Mariella continued their “instructional” session.

“In the filter callback, you’d just check to see if each value is the lowest one, and if it is, return true, otherwise return false.” Dalton knocked out a little drum solo on his desk, to celebrate his cleverness.

“But… I wouldn’t know which value is the lowest one, yet,” Mariella said.

“Oh, yeah… I see what you mean. Yeah, this is a tricky one.”

Max traced through the code. Okay, so the @JoinColumn is CUST_ID, so why is it generating a LIKE comparison instead of an equals? Wait, I think I’ve-

“Ah ha!” Dalton said, chucking Max’s train of thought off the rails and through an HO-scale village. “You just sort the array and take the first value!” *Thumpa thumpa tadatada* went Dalton’s little desk drum solo.

“I guess that makes sense,” Mariella said.

At this point, Max couldn’t stay out of the conversation. “No! Don’t do that. Use reduce. Sorting’s an O(n log n) operation.”

“Hunh?” Dalton said. His fingers nervously hovered over his desk, ready to play his next drum solo once he had a vague clue what Max was talking about. “In logs in? We’re not doing logging…”

Max tried again, in simple English. “Sorting is slow. The computer does a lot of extra work to sort all the elements.”

“No it won’t,” Dalton said. “It’ll just take the first element.”

“Ahem.” Max turned to discover the project manager looming over his cube. “We want to encourage collaboration,” the PM said, sternly, “but right now, Max, you’re being disruptive. Please be quiet and let the people around you work.”

And that was how Dalton’s Minimum Finding Algorithm got implemented, and released as part of their production code base.
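For the record, the story never shows the actual code, but the two approaches under debate can be sketched roughly like this (function names are illustrative, not from the original):

```javascript
// Dalton's approach: copy and sort, O(n log n). Note that sort() without a
// comparator compares elements as strings, so a numeric comparator is needed
// to avoid results like [10, 9].sort() -> [10, 9].
function minBySort(numbers) {
  return numbers.slice().sort((a, b) => a - b)[0];
}

// Max's approach: a single O(n) pass with reduce.
function minByReduce(numbers) {
  return numbers.reduce((min, n) => (n < min ? n : min), Infinity);
}

// The usual idiom for small arrays: Math.min with spread syntax.
function minBySpread(numbers) {
  return Math.min(...numbers);
}
```

All three return the same answer; the difference only matters for large inputs (and the spread form can exhaust the argument limit on very large arrays), which is exactly the point Max never got to make.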


Planet DebianGunnar Wolf: Hey, everybody, come share the joy of work!

I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I had expected: Not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have also started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far, Spanish (where Leo Arias and I are working) is the most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there are a small number of issues to fix. But they are few and easy to spot, so your translations will not be hampered much while I solve the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate sizeable amounts of work into Spanish: As my university wants to publish (in paper) the resulting works, we would be most grateful if you can fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.

Planet DebianNorbert Preining: The poison of Academia.edu

All those working in academia or research have surely heard about Academia.edu. It started out as a service for academics; in their own words: “Academia.edu is a platform for academics to share research papers. The company’s mission is to accelerate the world’s research.”

But as with most of these platforms, they need to make money, and for some months now Academia.edu has been pressing users to pay for a premium account at the incredible rate of 8.25 USD per month.

This is about the same as you pay for Netflix or some other streaming service. If you remain on the free side, what remains for you to do is SNS-like stuff, and uploading your papers so that Academia.edu can make money from them.

What really surprises me is that they can pull this off on a .edu domain. The registry requirements state:

For Institutions Within the United States. To obtain an Internet name in the .edu domain, your institution must be a postsecondary institution that is institutionally accredited by an agency on the U.S. Department of Education’s list of Nationally Recognized Accrediting Agencies (see recognized accrediting bodies).
Educause web site

Seeing what they are doing I think it is high time to request removal of the domain name.

So let us see what they are offering for their paid service:

  • Reader “The Readers feature tells you who is reading, downloading, and bookmarking your papers.”
  • Mentions “Get notified when you’re cited or mentioned, including in papers, books, drafts, theses, and syllabuses that Google Scholar can’t find.”
  • Advanced Search “Search the full text and citations of over 18 million papers”
  • Analytics “Learn more about who visits your profile”
  • Homepage – automatically generated home page from the data you enter into the system

On the other hand, the free service consists of SNS elements: you can follow other researchers and see when they upload a paper or record an event, and that is more or less it. They have lured a considerable number of academics into this service, gathered lots of papers, and now they are showing their real face – money.

In contrast to LinkedIn, which also offers a paid tier but keeps the free tier reasonably usable, Academia.edu has broken its promise to “accelerate the world’s research” and, even worse, it is NOT a “platform for academics to share research papers”. They are collecting papers and selling access to them, just like the publishers’ paywalls.

I consider this kind of service highly poisonous for the academic environment and researchers.

Planet Linux AustraliaOpenSTEM: New Dates for Earliest Archaeological Site in Aus!

Thylacine or Tasmanian Tiger.

This morning news was released of a date of 65,000 years for archaeological material at the site of Madjedbebe rock shelter in the Jabiluka mineral lease area, surrounded by Kakadu National Park. The site is on the land of the Mirarr people, who have partnered with archaeologists from the University of Queensland for this investigation. It has also produced evidence of the earliest use of ground-stone tool technology, the oldest seed-grinding tools in Australia and stone points, which may have been used as spears. Most fascinating of all, there is the jawbone of a Tasmanian Tiger or Thylacine (which was found across continental Australia during the Ice Age) coated in a red pigment, thought to be the reddish rock, ochre. There is much evidence of use of ochre at the site, with chunks and ground ochre found throughout the site. Ochre is often used for rock art and the area has much beautiful rock art, so we can deduce that these rock art traditions are as old as the occupation of people in Australia, i.e. at least 65,000 years old! The decoration of the jawbone hints at a complex realm of abstract thought, and possibly belief, amongst our distant ancestors – the direct forebears of modern Aboriginal people.

Kakadu view, NT Tourism.

Placing the finds from Madjedbebe rock shelter within the larger context, the dating, undertaken by Professor Zenobia Jacobs from the University of Wollongong, shows that people were living at the site during the Ice Age, a time when many, now-extinct, giant animals roamed Australia; and the tiny Homo floresiensis was living in Indonesia. These finds show that the ancestors of Aboriginal people came to Australia with much of the toolkit of their rich, complex lives already in place. This technology, extremely advanced for the time, allowed them to populate the entire continent of Australia, first managing to survive in the harsh Ice Age environment and then also managing to adapt to the enormous changes in sea level, climate and vegetation at the end of the Ice Age.

The team of archaeologists working at Madjedbebe rock shelter, in conjunction with Mirarr traditional owners, are finding all sorts of wonderful archaeological material, from which they can deduce much rich, detailed information about the lives of the earliest people in Australia. We look forward to hearing more from them in the future. Students who are interested, especially those in Years 4, 5 and 6, can read more about these sites and the animals and lives of people in Ice Age Australia in our resources People Reach Australia, Early Australian Sites, Ice Age Animals and the Last Ice Age, which are covered in Units 4.1, 5.1 and 6.1.

Planet DebianBenjamin Mako Hill: Testing Our Theories About “Eternal September”

Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense among participants of protecting the community’s immersive environment.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length and our paper ends with a lament that our findings merely reflected, “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, quantitative and qualitative alike. We think Lin’s paper complements ours beautifully, we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

TED10 books from TEDWomen for your summer reading list — and beyond

There’s no doubt that the speakers we invite to TEDWomen each year have amazing stories to tell. And many of them are published authors (or about to be!) whose work is worth exploring beyond their brief moments in the TED spotlight. So, if you’re looking for some inspiring, instructive and provocative books to add to your summer reading list, these recent books from 2016 TEDWomen speakers are worthy additions.

1. Beyond Respectability: The Intellectual Thought of Race Women by Brittney Cooper

Brittney Cooper wowed us at TEDWomen with her presentation on the racial politics of time. And in her new book, Beyond Respectability: The Intellectual Thought of Race Women, released in May, she doesn’t disappoint. Brittney says she got started studying black women intellectuals in graduate school. Although she learned a lot about the histories of black male intellectuals as an undergrad at Howard University, she “somehow managed not to learn anything about” the storied history of black women intellectuals in her four years there.

In her book, Brittney looks at the far-reaching intellectual achievements of female thinkers and activists like Ida B. Wells, Anna Julia Cooper, Mary Church Terrell, Fannie Barrier Williams, Pauli Murray and Toni Cade Bambara. NPR’s Genevieve Valentine writes that Brittney’s book is “a work of crucial cultural study … [that] lays out the complicated history of black woman as intellectual force, making clear how much work she has done simply to bring that category into existence.”

2. South of Forgiveness by Thordis Elva and Tom Stranger

One of the most intensely personal talks in San Francisco came from Thordis Elva and Tom Stranger. In 1996, 16-year-old Thordis shared a teenage romance with Tom, an exchange student from Australia. After a school dance, Tom raped Thordis. They didn’t speak for many years. Then, in her twenties, Thordis wrote to Tom, wanting to talk about what he did to her, and remarkably, he responded. For the first time, in front of the TEDWomen audience, Thordis and Tom talked openly about what happened and why she wanted to talk to him, and he to her.

South of Forgiveness: A True Story of Rape and Responsibility is a profoundly moving, open-chested and critical book. It is an exploration into sexual violence and self-knowledge that shines a healing light into the shrouded corners of our universal humanity. There is a disarming power in these pages that has the potential to change our language, shift our divisions, and invite us to be brave in discussing this pressing, global issue.

3. Girls & Sex by Peggy Orenstein

In a TED Talk that has already been viewed over 1.5 million times, author and journalist Peggy Orenstein, shared some of the things she learned about young girls and how they think about sex while researching her 2016 book, Girls & Sex: Navigating the Complicated New Landscape. In it, she explores the changing landscape of modern sexual expectations and its troubling impact on adolescents and especially young women. If you’re the parent of a young girl (or boy), it’s a must-read for understanding the “hidden truths, hard lessons, and important possibilities of girls’ sex lives in the modern world.”

4. Born Bright by C. Nicole Mason

At TEDWomen, C. Nicole Mason talked about what happens when we disrupt the path that society has paved for us based on where we were born, stereotypes and stigma. In her memoir, Born Bright: A Young Girl’s Journey from Nothing to Something in America, Nicole talks about how she did it in her own life, chronicling her own path out of poverty. In a beautifully written book, she examines “the conditions that make it nearly impossible to escape” and her own struggles with feeling like an outsider in academia and professional settings because of the way she talked, dressed and wore her hair.

5. The Gutsy Girl by Caroline Paul

Caroline Paul has a pretty amazing backstory. Once a young self-described “scaredy-cat,” Caroline grew up to fly planes, raft rivers, climb mountains, and fight fires. That’s right, she was one of the first women to work for the San Francisco Fire Department — a job that inspired her first work of nonfiction, Fighting Fire. In her most recent book, The Gutsy Girl: Escapades for Your Life of Epic Adventure, she expands on some of the stories she shared in her TED Talk, writing about “her greatest escapades — as well as those of other girls and women from throughout history.”

6. Marrow: A Love Story by Elizabeth Lesser

In a beautiful and surprisingly funny talk about strained family relationships and the death of a loved one, Elizabeth Lesser described the healing process of putting aside pride and defensiveness to make way for honest communication. “You don’t have to wait for a life-or-death situation to clean up the relationships that matter to you,” she says. “Be like a new kind of first responder … the one to take the first courageous step toward the other.”

In her courageous memoir, Marrow: A Love Story, the bestselling author of Broken Open shares the full story of her sister Maggie’s cancer and the difficult conversations they had during her illness as they healed their imperfect relationship and learned to love each other’s true selves.

7. I Know How She Does It by Laura Vanderkam

The theme of last year’s TEDWomen, as many of you will recall, was Time — all of us wrestle with how to be more productive, more engaged, more informed, to use our time wisely and well, to be more fully present in our lives. Writer and author Laura Vanderkam tackled the practical aspects of time management in her TED Talk. There are 168 hours in each week. How do we find time for what matters most?

In her book I Know How She Does It, Laura explains how successful women make the most of their time. With research, hard data and a lot of analysis, Laura “offers a framework for anyone who wants to thrive at work and life.”

8. Always Another Country by Sisonke Msimang

In her work, South African writer and activist Sisonke Msimang untangles the threads of race, class and gender that run through the fabric of African and global culture. In her popular TED Talk, she addressed the power of stories to promote change in our world and their “limitations, particularly for those of us who are interested in social justice.”

I am so pleased to report that after a very competitive bidding war, Sisonke will be publishing her first book, to be titled Always Another Country, in October.  The book, a memoir, will cover “her childhood in exile in Zambia and Kenya, her young adulthood and student years in North America and her return to South Africa during the euphoria of the 1990s.” I am so looking forward to reading her book and so should you.

9. When They Call You a Terrorist by Patrisse Cullors

Patrisse Cullors, one of the three co-founders of Black Lives Matter, is also working on a memoir due out in January 2018 titled When They Call You a Terrorist. Activist Eve Ensler writes that Patrisse “is a leading visionary and activist, feminist, civil rights leader who has literally changed the trajectory of politics and resistance in America.” Co-written with asha bandele, the memoir will recount the founding of the movement and serve as a reminder “that protest in the interest of the most vulnerable comes from love.”

10. On Intersectionality: Essential Writings by Kimberlé Crenshaw

Civil rights advocate Kimberlé Crenshaw had the TEDWomen audience on their feet during her passionate talk dissecting intersectionality, a term she coined 20 years ago that describes the double bind faced by victims of simultaneous racial and gender prejudice. “What do you call being impacted by multiple forces and then abandoned to fend for yourself?” she asked the audience. “Intersectionality seemed to do it for me.”

In a new collection of her writing, titled On Intersectionality: Essential Writings, due to be released next year, “readers will find the key essays and articles that have defined the concept of intersectionality and made Crenshaw a legal superstar.” Don’t miss it.

TEDWomen 2017

I also want to mention that registration for TEDWomen 2017 is open, so if you haven’t registered yet, please click this link and apply today — space is limited. This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!
– Pat

Featured image: Reading a book at the beach (Simon Cocks, Flickr CC 2.0)


Cory DoctorowRudy Rucker on Walkaway

Walkaway is my first novel for adults since 2009 and I had extremely high hopes (and not a little anxiety) for it as it entered the world, back in April. Since then, I’ve been gratified by the kind words of many of my literary heroes, from William Gibson to Bruce Sterling to the kind cover quotes from Edward Snowden, Neal Stephenson and Kim Stanley Robinson.

Today I got a most welcome treat on those lines: a review by Rudy Rucker, lavishly illustrated with some of his excellent photos. Rucker really got the novel, got excited about the parts that excited me, and you can’t really ask for better than that.

“I’m groundhog daying again, aren’t I?”

Who’s saying this? It’s the character Dis. Her body is dead, but before she died, they managed (thanks to Dis’s work) to copy or transfer the brain processes into the cloud, that is, into a network of computers. And she can run as a sim in there. And she’s having trouble getting her sim to stabilize. It keeps freaking out and crashing. And each time she restarts the character Iceweasel sits there talking to the computer sim, trying to mellow it out, and Dis will realize she’s been rebooted, or restarted like Bill Murray in that towering cinematic SF masterpiece Groundhog Day. And Cory has the antic wit to make that verb.

The first half of the book is kind of a standard good young people against evil corporate rich people thing. But then, when Dis is talking about groundhog daying, it kicks into another gear. Cory pulls out a different stop on the mighty SF Wurlitzer organ: the software immortality trope. As I’m fond of saying, in my 1980 novel Software, I became one of the very first authors to write about the by-now-familiar notion of the mind as software. That is, your mind is in some sense like software running on your physical body. If we could create a sufficiently rich and flexible computer, the computer might be able to emulate a person.

There’s been a zillion movies, TV shows, SF stories and novels using this idea since then. What I liked so much about Walkaway is that Cory finds a way to make this (still fairly fantastic and unlikely) idea seem real and new.

Cory Doctorow’s WALKAWAY [Rudy Rucker]

LongNowInterview: Alexander Rose and Phil Libin on Long-Term Thinking

Long Now Executive Director Alexander Rose and former Evernote CEO Phil Libin recently spoke with the design agency Dialogue about the layers of civilization, the future of products, and the Clock of the Long Now.

The interview is wide-ranging, covering everything from the early tech, design and science fiction influences in Rose and Libin’s childhoods to how Long Now’s pace layers theory helps reconcile the tension between long-term planning and Silicon Valley’s fast-paced approach to entrepreneurship and product innovation.

The interview also provides a look at a little-known chapter in Long Now’s history, namely, how Alexander Rose left a career in video games and virtual world design after hearing about The Clock Project:

Stewart told me about The Clock Project. Back then the project was just a conversation between Danny Hillis, Brian Eno, and Stewart, but I just couldn’t get it out of my head when I heard about it. By strange luck, there was a Board meeting a week after where I met Danny for the first time. It was then that he told me he had a funder for the first prototype of the Clock and asked if I wanted to help build it. I immediately said, “Yes, this is what I want to do. I don’t want to work on video games anymore.”

Read Dialogue’s interview with Alexander Rose and Phil Libin in full (LINK).

Watch Stewart Brand and Long Now board member Paul Saffo discuss the Pace Layers of Civilization in a 02015 Conversation at The Interval (LINK).

Krebs on SecurityTrump Hotels Hit By 3rd Card Breach in 2 Years

Maybe some of you missed this amid all the breach news recently (I know I did), but Trump International Hotels Management LLC last week announced its third credit-card data breach in the past two years. I thought it might be useful to see these events plotted on a timeline, because it suggests that virtually anyone who used a credit card at a Trump property in the past two years likely has had their card data stolen and put on sale in the cybercrime underground as a result.

On May 2, 2017, KrebsOnSecurity broke the story that travel industry giant Sabre Corp. experienced a significant breach of its payment and customer data tied to bookings processed through a reservations system that serves more than 32,000 hotels and other lodging establishments. Last week, Trump International Hotels disclosed the SABRE breach impacted at least 13 Trump Hotel properties between August 2016 and March 2017. Trump Hotels said it was first notified of the breach on June 5.

A timeline of Trump Hotels’ credit card woes over the past two years. Click to enlarge.

According to Verizon‘s latest annual Data Breach Investigations Report (DBIR), malware attacks on point-of-sale systems used at front desk and hotel restaurant systems “are absolutely rampant” in the hospitality sector. Accommodation was the top industry for point-of-sale intrusions in this year’s data, with 87% of breaches within that pattern.

Other hotel chains that disclosed this past week getting hit in the Sabre breach include 11 Hard Rock properties (another chain hit by multiple card breach incidents); Four Seasons Hotels and Resorts; and at least two dozen Loews Hotels in the United States and Canada.


Given its abysmal record of failing to protect customer card data, you might think the hospitality industry would be anxious to assuage guests who may already be concerned that handing over their card at the hotel check-in desk also means consigning that card to cybercrooks (e.g. at underground carding shops like Trumps Dumps).

However, so far this year I’ve been hard-pressed to find any of the major hotel chains that accept more secure chip-based cards, which are designed to make card data stolen by point-of-sale malware and skimmers much more difficult to turn into counterfeit cards. I travel quite a bit — at least twice a month — and I have yet to experience a single U.S.-based hotel in the past year asking me to dip my chip-based card as opposed to swiping it.

A carding shop that sells stolen credit cards and invokes 45's likeness and name. No word yet on whether this cybercriminal store actually sold any cards stolen from Trump Hotel properties.


True, chip cards alone aren’t going to solve the whole problem. Hotels and other merchants that implement the ability to process chip cards still need to ensure the data is encrypted at every step of the transaction (known as “point-to-point” or “end-to-end” encryption). Investing in technology like tokenization — which allows merchants to store a code that represents the customer’s card data instead of the card data itself — also can help companies become less of a target.
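The tokenization idea can be sketched in a few lines. This is a toy, in-memory illustration of the concept with hypothetical names, not any real payment API:

```python
import secrets

class TokenVault:
    """Toy vault mapping opaque tokens to card numbers (illustration only)."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # The merchant stores only this random token; it reveals nothing
        # about the primary account number (PAN).
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault operator (e.g. the payment processor) can map
        # the token back to the real card number.
        return self._vault[token]
```

A point-of-sale compromise at the merchant then yields only tokens, which are useless outside the vault.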

Maybe it wouldn’t be so irksome if those of us concerned about security or annoyed enough at getting our cards replaced three or four times a year due to fraud could stay at a major hotel chain in the United States and simply pay with cash. But alas, we’re talking about an industry that essentially requires customers to pay by credit card.

Well, at least I’ll continue to accrue reward points on my credit card that I can use toward future rounds of Russian roulette with the hotel’s credit card systems.

It’s bad enough that cities and states routinely levy huge taxes on lodging establishments (the idea being the tax is disproportionately paid by people who don’t vote or live in the area); now we have the industry-wide “carder tax” conveniently added to every stay.

What’s the carder tax you ask? It’s the sense of dread and the incredulous “really?” that wells up when one watches his chip card being swiped yet again at the check-out counter.

It’s the time wasted on the phone with your bank trying to sort out whether you really made all those fraudulent purchases, and then having to enter your new card number at all those sites and services where the old one was stored. It’s that awkward moment when the waiter says in front of your date or guests that your card has been declined.

If you’re brave enough to pay for everything with a debit card (bad idea), it may be the time you spend without access to cash while your bank sorts things out. It may be the aggravation of dealing with bounced checks as a result of the fraud.

I can recall a recent stay wherein right next to the credit card machine at the hotel’s front desk was a stack of various daily newspapers, one of which had a very visible headline warning of an ongoing credit card breach at the same hotel that was getting ready to swipe my card yet again (by the way, I’m still kicking myself for not snapping a selfie right then).

After I checked out of that particular hotel, I descended to the parking garage to retrieve a rental car. The garage displayed large signs everywhere warning customers that the property was not responsible for any damage or thefts that may be inflicted on vehicles parked there. I recall thinking at the time that this same hotel probably should have been required to display a similar sign over their credit card machines (actually, they all should).

“The privacy and protection of our guests’ information is a matter we take very seriously.” This is from boilerplate text found in both the Trump Hotels and Loews Hotel statements. It sounds nice. Too bad it’s all hogwash. Once again, the timeline above speaks far more about the hospitality industry’s attitudes on credit card security than any platitudes offered in these all-too-common breach notifications.

Further reading:

Banks: Card Breach at Trump Hotel Properties
Trump Hotel Collection Confirms Card Breach
Sources: Trump Hotels Breached Again
Trump Hotels Settles Over Data Breach: To Pay $50,000 for 70,000 Stolen Cards
Breach at Sabre Corp.’s Hospitality Unit

CryptogramPassword Masking

Slashdot asks if password masking -- replacing password characters with asterisks as you type them -- is on the way out. I don't know if that's true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.

Planet DebianDirk Eddelbuettel: RcppAPT 0.0.4

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)

  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)

  • Added dumpPackages, showSrc

Changes in version 0.0.2 (2016-04-04)

  • Added reverseDepends, dumpPackages, showSrc

Changes in version 0.0.1 (2015-02-20)

  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureCodeSOD: A Pre-Packaged Date

Microsoft’s SQL Server Integration Services is an ETL tool that attempts to mix visual programming (for designing data flows) with the reality that at some point, you’re just going to need to write some code. Your typical SSIS package starts as a straightforward process that quickly turns into a sprawling mix of spaghetti-fied .NET code, T-SQL stored procedures, and developer tears.

TJ L. inherited an SSIS package. This particular package contained a step where a C# sub-module needed to pass a date (but not a date-time) to the database. Now, this could be done easily by using C#’s date-handling objects, or even in the database by simply using the DATE type, instead of the DATETIME type.

Instead, TJ’s predecessor took this route instead:

CREATE PROC [dbo].[SetAsOfDate]
        @Date datetime = NULL
AS
        SELECT CAST(FLOOR(CAST(CASE WHEN @Date IS NULL THEN GETDATE()
                                    WHEN @Date < '1950-01-01' THEN GETDATE()
                                    ELSE @Date
                               END AS FLOAT)) AS DATETIME)


The good news about this code is that it checks its input parameters. That's defensive programming. The ugly is the less-than-1950 check, which I can only assume is a relic of some Y2K bugfixes. The bad is the `CAST(FLOOR(CAST(@Date AS FLOAT)) AS DATETIME)` construct.
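For comparison, the straightforward route the article alludes to: on SQL Server 2008 and later, the DATE type strips the time portion directly, with no round-trip through FLOAT. A sketch, not code from the original package:

```sql
-- Date-only value without the FLOAT round-trip (SQL Server 2008+)
DECLARE @Date datetime = NULL;
SELECT CAST(COALESCE(@Date, GETDATE()) AS DATE) AS AsOfDate;
```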

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianLars Wirzenius: Dropping Yakking from Planet Debian

A couple of people objected to having Yakking on Planet Debian, so I've removed it.


Harald WelteVirtual Um interface between OsmoBTS and OsmocomBB

During the last couple of days, I've been working on completing, cleaning up and merging a Virtual Um interface (i.e. virtual radio layer) between OsmoBTS and OsmocomBB. I started the implementation and left it at an early stage in January 2016; Sebastian Stumpf then completed it in early 2017, with some subsequent fixes and improvements by me. The combined result allows us to run a complete GSM network with 1-N BTSs and 1-M MSs without any actual radio hardware, which is of course excellent for all kinds of testing scenarios.

The Virtual Um layer is based on sending L2 frames (blocks) encapsulated via GSMTAP UDP multicast packets. There are two separate multicast groups, one for uplink and one for downlink. The multicast nature simulates the shared medium and enables any simulated phone to receive the signal from multiple BTSs via the downlink multicast group.
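As a rough illustration of the framing, here is a minimal Python sketch that packs the standard 16-byte GSMTAP pseudo-header and multicasts an encapsulated L2 block. The field layout follows the public GSMTAP definition; the helper names and the example ARFCN and frame-number values are mine, and the actual multicast groups and defaults used by osmo-bts-virtual are documented on the Osmocom wiki:

```python
import socket
import struct

GSMTAP_VERSION = 0x02   # current GSMTAP header version
GSMTAP_TYPE_UM = 0x01   # payload type: GSM Um air interface
GSMTAP_PORT = 4729      # IANA-registered GSMTAP UDP port

def build_gsmtap_header(timeslot, arfcn, frame_number, sub_type=0):
    """Pack a 16-byte GSMTAP pseudo-header in network byte order."""
    hdr_len_words = 4   # header length, counted in 32-bit words
    return struct.pack(
        "!BBBBHbbIBBBB",
        GSMTAP_VERSION, hdr_len_words, GSMTAP_TYPE_UM, timeslot,
        arfcn,           # ARFCN; the top bit marks uplink in real captures
        0,               # signal level in dBm (unused in this sketch)
        0,               # signal/noise ratio in dB (unused)
        frame_number,    # GSM frame number
        sub_type,        # logical channel type, e.g. BCCH
        0, 0, 0)         # antenna number, sub-slot, reserved

def send_l2_block(l2_bytes, group, port=GSMTAP_PORT):
    """Multicast one encapsulated L2 block to the given group (up- or downlink)."""
    pkt = build_gsmtap_header(timeslot=0, arfcn=871, frame_number=42) + l2_bytes
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(pkt, (group, port))
    sock.close()
```

Because every simulated phone joins the same downlink group, each one receives every BTS's blocks, which is what gives the shared-medium behaviour described above.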


In OsmoBTS, this is implemented via the new osmo-bts-virtual BTS model.

In OsmocomBB, this is realized by adding virtphy, a virtual L1 which speaks the same L1CTL protocol that is used between the real OsmocomBB Layer1 and the Layer2/3 programs such as mobile and the like.

Now many people would argue that GSM without the radio and actual handsets is no fun. I tend to agree, as I'm a hardware person at heart and I am not a big fan of simulation.

Nevertheless, this forms the basis of all kinds of possibilities for automated (regression) testing, in ways and for layers/interfaces that osmo-gsm-tester cannot cover, as it uses a black-box proprietary mobile phone (modem). It is also pretty useful if you're traveling a lot and don't want to carry around a BTS and phones all the time, or get some development done in airplanes or other places where operating a radio transmitter is not really a (viable) option.

If you're curious and want to give it a shot, I've put together some setup instructions at the Virtual Um page of the Osmocom Wiki.

Planet DebianDaniel Silverstone: Yay, finished my degree at last

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV".

Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010. I started by transferring the level 1 credits from my stint at UCL back in 1998/1999 which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'.

Then I took level 2 courses, M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so...

At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), and finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)' which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities, and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!

Krebs on SecurityExperts in Lather Over ‘gSOAP’ Security Flaw

Axis Communications — a maker of high-end security cameras whose devices can be found in many high-security areas — recently patched a dangerous coding flaw in virtually all of its products that an attacker could use to remotely seize control over or crash the devices.

The problem wasn’t specific to Axis, which seems to have reacted far more quickly than competitors to quash the bug. Rather, the vulnerability resides in open-source, third-party computer code that has been used in countless products and technologies (including a great many security cameras), meaning it may be some time before most vulnerable vendors ship out a fix — and even longer before users install it.

At issue is a flaw in a bundle of reusable code (often called a “code library“) known as gSOAP, a widely-used toolkit that software or device makers can use so that their creations can talk to the Internet (or “parse XML” for my geek readers). By some estimates, there are hundreds — if not thousands — of security camera types and other so-called “Internet of Things”(IoT) devices that rely upon the vulnerable gSOAP code.

By exploiting the bug, an attacker could force a vulnerable device to run malicious code, block the owner from viewing any video footage, or crash the system. Basically, lots of stuff you don’t want your pricey security camera system to be doing.

Genivia, the company that maintains gSOAP, released an update on June 21, 2017 that fixes the flaw. In short order, Axis released a patch to plug the gSOAP hole in nearly 250 of its products.

Genivia chief executive Robert Van Engelen said his company has already reached out to all of its customers about the issue. He said a majority of customers use the gSOAP software to develop products, but that mostly these are client-side applications or non-server applications that are not affected by this software crash issue.

“It’s a crash, not an exploit as far as we know,” Van Engelen said. “I estimate that over 85% of the applications are unlikely to be affected by this crash issue.”

Still, there are almost certainly dozens of other companies that use the vulnerable gSOAP code library and haven’t (or won’t) issue updates to fix this flaw, says Stephen Ridley, chief technology officer and founder of Senrio — the security company that discovered and reported the bug. What’s more, because the vulnerable code is embedded within device firmware (the built-in software that powers hardware), there is no easy way for end users to tell if the firmware is affected without word one way or the other from the device maker.

“It is likely that tens of millions of products — software products and connected devices — are affected by this,” Ridley said.

“Genivia claims to have more than 1 million downloads of gSOAP (most likely developers), and IBM, Microsoft, Adobe and Xerox as customers,” the Senrio report reads. “On Sourceforge, gSOAP was downloaded more than 1,000 times in one week, and 30,000 times in 2017. Once gSOAP is downloaded and added to a company’s repository, it’s likely used many times for different product lines.”

Anyone familiar with the stories published on this blog over the past year knows that most IoT devices — security cameras in particular — do not have a stellar history of shipping in a default-secure state (heck, many of these devices are running versions of Linux that date back more than a decade). Left connected to the Internet in an insecure state, these devices can quickly be infected with IoT threats like Mirai, which enslave them for use in high-impact denial-of-service attacks designed to knock people and Web sites offline.

When I heard about this bug I pinged the folks over at IPVM, a trade publication that tracks the video surveillance industry. IPVM Business Analyst Brian Karas said the type of flaw (known as a buffer overflow) in this case doesn’t expose the vulnerable systems to IoT worms like Mirai, which can spread to devices that are running under factory-default usernames and passwords.

IPVM polled almost a dozen top security camera makers, and said only two (including Axis) responded that they used the vulnerable gSOAP library in their products. Another three said they hadn’t yet determined whether any of their products were potentially vulnerable.

“You probably wouldn’t be able to make a universal, Mirai-style exploit for this flaw because it lacks the elements of simplicity and reproducibility,” Karas said, noting that the exploit requires that an attacker be able to upload at least a 2 GB file to the Web interface for a vulnerable device.

“In my experience, I don’t think it’s that common for embedded systems to accept a 2-gigabyte file upload,” Karas said. “Every device is going to respond slightly differently, and it would probably take a lot of time to research each device and put together some kind of universal attack tool. Yes, people should be aware of this and patch if they can, but this is nowhere near as bad as [the threat from] Mirai.”

Karas said that, as with most other cybersecurity vulnerabilities in network devices, restricting network access to the unit will greatly reduce the chance of exploitation.

“Cameras utilizing a VMS (video management system) or recorder for remote access, instead of being directly connected to the internet, are essentially immune from remote attack (though it is possible for the VMS itself to have vulnerabilities),” IPVM wrote in an analysis of the gSOAP bug. In addition, changing the factory default settings (e.g., picking decent administrator passwords) and updating the firmware on the devices to the latest version may go a long way toward sidestepping any vulnerabilities.

TEDSneak preview lineup unveiled for Africa’s next TED Conference

On August 27, an extraordinary group of people will gather in Arusha, Tanzania, for TEDGlobal 2017, a four-day TED Conference for “those with a genuine interest in the betterment of the continent,” says curator Emeka Okafor.

As Okafor puts it: “Africa has an opportunity to reframe the future of work, cultural production, entrepreneurship, agribusiness. We are witnessing the emergence of new educational and civic models. But there is, on the flip side, a set of looming challenges that include the youth bulge and under-/unemployment, a food crisis, a risky dependency on commodities, slow industrializations, fledgling and fragile political systems. There is a need for a greater sense of urgency.”

He hopes the speakers at TEDGlobal will catalyze discussion around “the need to recognize and amplify solutions from within Africa and the global diaspora.”

Who are these TED speakers? A group of people with “fresh, unique perspectives in their initiatives, pronouncements and work,” Okafor says. “Doers as well as thinkers — and contrarians in some cases.” The curation team, which includes TED head curator Chris Anderson, went looking for speakers who take “a hands-on approach to solution implementation, with global-level thinking.”

Here’s the first sneak preview — a shortlist of speakers who, taken together, give a sense of the breadth of topics to expect, from tech to the arts to committed activism and leadership. Look for the long list of 35–40 speakers in upcoming weeks.

The TEDGlobal 2017 conference happens August 27–30, 2017, in Arusha, Tanzania. Apply to attend >>

Kamau Gachigi, Maker

“In five to ten years, Kenya will truly have a national innovation system, i.e. a system that by its design audits its population for talented makers and engineers and ensures that their skills become a boon to the economy and society.” — Kamau Gachigi on Engineering for Change

Dr. Kamau Gachigi is the executive director of Gearbox, Kenya’s first open makerspace for rapid prototyping, based in Nairobi. Before establishing Gearbox, Gachigi headed the University of Nairobi’s Science and Technology Park, where he founded a Fab Lab full of manufacturing and prototyping tools in 2009, then built another one at the Riruta Satellite in an impoverished neighborhood in the city. At Gearbox, he empowers Kenya’s next generation of creators to build their visions. @kamaufablab

Mohammed Dewji, Business leader

“My vision is to facilitate the development of a poverty-free Tanzania. A future where the opportunities for Tanzanians are limitless.” — Mohammed Dewji

Mohammed Dewji is a Tanzanian businessman, entrepreneur, philanthropist, and former politician. He serves as the President and CEO of MeTL Group, a Tanzanian conglomerate operating in 11 African countries. The Group operates in areas as diverse as trading, agriculture, manufacturing, energy and petroleum, financial services, mobile telephony, infrastructure and real estate, transport, logistics and distribution. He served as Member of Parliament for Singida-Urban from 2005 until his retirement in 2015. Dewji is also the Founder and Trustee of the Mo Dewji Foundation, focused on health, education and community development across Tanzania. @moodewji

Meron Estefanos, Refugee activist

“Q: What’s a project you would like to move forward at TEDGlobal?
A: Bringing change to Eritrea.” —Meron Estefanos

Meron Estefanos is an Eritrean human rights activist, and the host and presenter of Radio Erena’s weekly program “Voices of Eritrean Refugees,” aired from Paris. Estefanos is executive director of the Eritrean Initiative on Refugee Rights (EIRR), advocating for the rights of Eritrean refugees, victims of trafficking, and victims of torture. Ms Estefanos has been key in identifying victims throughout the world who have been blackmailed to pay ransom for kidnapped family members, and was a key witness in the first trial in Europe to target such blackmailers. She is co-author of Human Trafficking in the Sinai: Refugees between Life and Death and The Human Trafficking Cycle: Sinai and Beyond, and was featured in the film Sound of Torture. She was nominated for the 2014 Raoul Wallenberg Award for her work on human rights and victims of trafficking. @meronina

Touria El Glaoui, Art fair founder

“I’m looking forward to discussing the roles we play as leaders and tributaries in redressing disparities within arts ecosystems. The art fair is one model which has had a direct effect on the ways in which audiences engage with art, and its global outlook has contributed to a highly mobile and dynamic means of interaction.” — Touria El Glaoui

Touria El Glaoui is the founding director of the 1:54 Contemporary African Art Fair, which takes place in London and New York every year and, in 2018, launches in Marrakech. The fair highlights work from artists and galleries across Africa and the diaspora, bringing visibility in global art markets to vital upcoming visions. El Glaoui began her career in the banking industry before founding 1:54 in 2013. Parallel to her career, Touria has organised and co-curated exhibitions of her father’s work, the Moroccan artist Hassan El Glaoui, in London and Morocco. @154artfair

Gus Casely-Hayford, Historian

“Technological, demographic, economic and environmental change are recasting the world profoundly and rapidly. The sentiment that we are traveling through unprecedented times has left many feeling deeply unsettled, but there may well be lessons to learn from history — particularly African history — lessons that show how brilliant leadership and strategic intervention have galvanised and united peoples around inspirational ideas.” — Gus Casely-Hayford

Dr. Gus Casely-Hayford is a curator and cultural historian who writes, lectures and broadcasts widely on African culture. He has presented two series of The Lost Kingdoms of Africa for the BBC and has lectured widely on African art and culture, advising national and international bodies on heritage and culture. He is currently developing a National Portrait Gallery exhibition that will tell the story of abolition of slavery through 18th- and 19th-century portraits — an opportunity to bring many of the most important paintings of black figures together in Britain for the first time.

Oshiorenoya Agabi, Computational neuroscientist

“Koniku eventually aims to build a device that is capable of thinking in the biological sense, like a human being. We think we can do this in the next two to five years.” — Oshiorenoya Agabi

With his startup Koniku, Oshiorenoya Agabi is working to integrate biological neurons and silicon computer chips, to build computers that can think like humans can. Faster, cleverer computer chips are key to solving the next big batch of computing problems, like particle detection or sophisticated climate modeling — and to get there, we need to move beyond the limitations of silicon, Agabi believes. Born and raised in Lagos, Nigeria, Agabi is now based in the SF Bay Area, where he and his lab mates are working on the puzzle of connecting silicon to biological systems.

Natsai Audrey Chieza, Design researcher

Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. She is founder and creative director of Faber Futures, a creative R&D studio that conceptualises, prototypes and evaluates the resilience of biomaterials emerging through the convergence of bio-fabrication, digital fabrication and traditional craft processes. As Resident Designer at the Department of Biochemical Engineering, University College London, she established a design-led microbiology protocol that replaces synthetic pigments with natural dyes excreted by bacteria — producing silk scarves dyed brilliant blues, reds and pinks. The process demands a rethink of the entire system of fashion and textile production — and is also a way to examine issues like resource scarcity, provenance and cultural specificity. @natsaiaudrey

Stay tuned for more amazing speakers, including leaders, creators, and more than a few truth-tellers … learn more >>

Valerie AuroraThe Al Capone theory of sexual harassment

This post was co-authored by Valerie Aurora and Leigh Honeywell and cross-posted on both of our blogs.

Mural of Al Capone, laughing and smoking a cigar
CC BY-SA 2.0 r2hox

We’re thrilled with the recent trend towards sexual harassment in the tech industry having actual consequences – for the perpetrator, not the target, for a change. We decided it was time to write a post explaining what we’ve been calling “the Al Capone Theory of Sexual Harassment.” (We can’t remember which of us came up with the name, Leigh or Valerie, so we’re taking joint credit for it.) We developed the Al Capone Theory over several years of researching and recording racism and sexism in computer security, open source software, venture capital, and other parts of the tech industry. To explain, we’ll need a brief historical detour – stick with us.

As you may already know, Al Capone was a famous Prohibition-era bootlegger who, among other things, ordered murders to expand his massively successful alcohol smuggling business. The U.S. government was having difficulty prosecuting him for either the murdering or the smuggling, so they instead convicted Capone for failing to pay taxes on the income from his illegal business. This technique is standard today – hence the importance of money-laundering for modern successful criminal enterprises – but at the time it was a novel approach.

The U.S. government recognized a pattern in the Al Capone case: smuggling goods was a crime often paired with failing to pay taxes on the proceeds of the smuggling. We noticed a similar pattern in reports of sexual harassment and assault: often people who engage in sexually predatory behavior also faked expense reports, plagiarized writing, or stole credit for other people’s work. Just three examples: Mark Hurd, the former CEO of HP, was accused of sexual harassment by a contractor, but resigned for falsifying expense reports to cover up the contractor’s unnecessary presence on his business trips. Jacob Appelbaum, the former Tor evangelist, left the Tor Foundation after he was accused of both sexual misconduct and plagiarism. And Randy Komisar, a general partner at venture capital firm KPCB, gave a book of erotic poetry to another partner at the firm, and accepted a board seat (and the credit for a successful IPO) at RPX that would ordinarily have gone to her.

Initially, the connection eluded us: why would the same person who made unwanted sexual advances also fake expense reports, plagiarize, or take credit for other people’s work? We remembered that people who will admit to attempting or committing sexual assault also disproportionately commit other types of violence and that “criminal versatility” is a hallmark of sexual predators. And we noted that taking credit for others’ work is a highly gendered behavior.

Then we realized what the connection was: all of these behaviors are the actions of someone who feels entitled to other people’s property – regardless of whether it’s someone else’s ideas, work, money, or body. Another common factor was the desire to dominate and control other people. In venture capital, you see the same people accused of sexual harassment and assault also doing things like blacklisting founders for objecting to abuse and calling people nasty epithets on stage at conferences. This connection between dominance and sexual harassment also shows up as overt, personal racism (that’s one reason why we track both racism and sexism in venture capital).

So what is the Al Capone theory of sexual harassment? It’s simple: people who engage in sexual harassment or assault are also likely to steal, plagiarize, embezzle, engage in overt racism, or otherwise harm their business. (Of course, sexual harassment and assault harms a business – and even entire fields of endeavor – but in ways that are often discounted or ignored.) Ask around about the person who gets handsy with the receptionist, or makes sex jokes when they get drunk, and you’ll often find out that they also violated the company expense policy, or exaggerated on their résumé, or took credit for a colleague’s project. More than likely, they’ve engaged in sexual misconduct multiple times, and a little research (such as calling previous employers) will show this, as we saw in the case of former Uber and Google employee Amit Singhal.

Organizations that understand the Al Capone theory of sexual harassment have an advantage: they know that reports or rumors of sexual misconduct are a sign they need to investigate for other incidents of misconduct, sexual or otherwise. Sometimes sexual misconduct is hard to verify because a careful perpetrator will make sure there aren’t any additional witnesses or records beyond the target and the target’s memory (although with the increase in use of text messaging in the United States over the past decade, we are seeing more and more cases where victims have substantial written evidence). But one of the implications of the Al Capone theory is that even if an organization can’t prove allegations of sexual misconduct, the allegations themselves are a sign that it should also urgently investigate a wide range of aspects of an employee’s conduct.

Some questions you might ask: Can you verify their previous employment and degrees listed on their résumé? Do their expense reports fall within normal guidelines and include original receipts? Does their previous employer refuse to comment on why they left? When they give references, are there odd patterns of omission? For example, a manager who doesn’t give a single reference from a person who reported to them can be a hint that they have mistreated people they had power over.

Another implication of the Al Capone theory is that organizations should put more energy into screening potential employees or business partners for allegations of sexual misconduct before entering into a business relationship with them, as recently advocated by LinkedIn cofounder and Greylock partner Reid Hoffman. This is where tapping into the existing whisper network of targets of sexual harassment is incredibly valuable. The more marginalized a person is, the more likely they are to be the target of this kind of behavior and to be connected with other people who have experienced this behavior. People of color, queer people, people with working class jobs, disabled people, people with less money, and women are all more likely to know who sends creepy text messages after a business meeting. Being a member of more than one of these groups makes people even more vulnerable to this kind of harassment – we don’t think it was a coincidence that many of the victims of sexual harassment who spoke out last month were women of color.

What about people whose well-intentioned actions are unfairly misinterpreted, or people who make a single mistake and immediately regret it? The Al Capone theory of sexual harassment protects these people, because when the organization investigates their overall behavior, they won’t find a pattern of sexual harassment, plagiarism, or theft. A broad-ranging investigation in this kind of case will find only minor mistakes in expense reports or an ambiguous job title in a resume, not a pervasive pattern of deliberate deception, theft, or abuse. To be perfectly clear, it is possible for someone to sexually harass someone without engaging in other types of misconduct. In the absence of clear evidence, we always recommend erring on the side of believing accusers who have less power or privilege than the people they are accusing, to counteract the common unconscious bias against believing those with less structural power and to take into account the enormous risk of retaliation against the accuser.

Some people ask whether the Al Capone theory of sexual harassment will subject men to unfair scrutiny. It’s true, the majority of sexual harassment is committed by men. However, people of all genders commit sexual harassment. We personally know of two women who have sexually touched other people without consent at tech-related events, and we personally took action to stop these women from abusing other people. At the same time, abuse more often occurs when the abuser has more power than the target – and that imbalance of power is often the result of systemic oppression such as racism, sexism, cissexism, or heterosexism. That’s at least one reason why a typical sexual harasser is more likely to be one or all of straight, white, cis, or male.

What does the Al Capone theory of sexual harassment mean if you are a venture capitalist or a limited partner in a venture fund? Your first priority should be to carefully vet potential business partners for a history of unethical behavior, whether it is sexual misconduct, lying about qualifications, plagiarism, or financial misdeeds. If you find any hint of sexual misconduct, take the allegations seriously and step up your investigation into related kinds of misconduct (plagiarism, lying on expense reports, embezzlement) as well as other incidents of sexual misconduct.

Because sexual harassers sometimes go to great lengths to hide their behavior, you almost certainly need to expand your professional network to include more people who are likely to be targets of sexual harassment by your colleagues – and gain their trust. If you aren’t already tapped into this crucial network, here are some things you can do to get more access:

These are all aspects of ally skills – concrete actions that people with more power and privilege can take to support people who have less.

Finally, we’ve seen a bunch of VCs pledging to donate the profits of their investments in funds run by accused sexual harassers to charities supporting women in tech. We will echo many other women entrepreneurs and say: don’t donate that money, invest it in women-led ventures – especially those led by women of color.

Tagged: ally skills, feminism, tech

TEDThe TED2018 Fellows application is open. Apply now!


TED is looking for early-career, visionary thinkers from around the world to join the Fellows program at the upcoming TED2018 conference in Vancouver, British Columbia.

Do you have an original approach to your work that’s worth sharing with the world? Are you working to uplift and empower your local community through innovative science, art or entrepreneurship? Are you ready to take full advantage of the TED platform and the support of a dynamic global community of innovators? If yes, you should apply to be a TED Fellow.

TED Fellows are a multidisciplinary group of remarkable individuals who are chosen through an open and rigorous application process. For each TED conference, we select a class of 20 Fellows based on their exceptional achievement and an innovative approach to tackling the world’s toughest problems, as well as on their character, grit and collaborative spirit.

Apply by September 10 at

TED2018 — themed “The Age of Amazement” — will take a deep-dive into the key developments driving our future, from jaw-dropping AI to glorious new forms of creativity to courageous advocates of radical social change. If selected, you will attend the TED2018 conference and participate in a Fellows-only pre-conference designed especially to inspire, empower and support your work. Fellows also deliver a TED Talk at the conference, filmed and considered for publication on  

The TED Fellows program is designed to catapult your career through transformational support like coaching and mentorship, public relations guidance for sharing your latest projects, hands-on speaker training — and, most importantly, access to the vibrant global network of more than 400 Fellows from over 90 countries.

The online application includes general biographical questions, short essays on your work and three references. Only those aged 18 and older can apply. If selected, Fellows must reserve April 10 – April 15, 2018 on their calendars for the TED2018 conference in Vancouver, British Columbia.

Think you have what it takes to be a TED Fellow? Apply now.

More information
Follow: @TEDFellow

CryptogramMany of My E-Books for Cheap

Humble Bundle is selling a bunch of cybersecurity books very cheaply. You can get copies of Applied Cryptography, Secrets and Lies, and Cryptography Engineering -- and also Ross Anderson's Security Engineering, Adam Shostack's Threat Modeling, and many others.

This is the cheapest you'll ever see these books. And they're all DRM-free.

Worse Than FailureThe Little Red Button

Bryan T. had worked for decades to amass the skills, expertise and experience to be a true architect, but never quite made the leap. Finally, he got a huge opportunity in the form of an interview with a Silicon Valley semiconductor firm project manager who was looking for a consultant to do just that. The discussions revolved around an application that three developers couldn't get functioning correctly in six months, and Bryan was to be the man to rein it in and make it work; he was lured with the promise of having complete control of the software.

The ZF-1 pod weapon system from the Fifth Element

Upon starting and spelunking through the code-base, Bryan discovered the degree of total failure that caused them to yield complete control to him. It was your typical hodgepodge of code slapped together with anti-patterns, snippets of patterns claiming to be the real deal, and the usual Assortment-o-WTF™ we've all come to expect.

Once he recognized the futility of attempting to fix this mess, Bryan scrapped it and rewrote it as a full-blown modular and compositional application, utilizing MVVM, DDD, SOA, pub/sub; the works. Within three weeks, he had it back to the point it was when he started, only his version actually worked.

While he had righted the sinking ship, it was so successful that the project team started managing it, which proved to be its undoing.

Given the sudden success of the project, the department head committed the application to all the divisions company-wide within three quarters - without informing Bryan or anyone else on the team. After all, it's not like developers need to plan for code and resource scalability issues beyond the original design requirements or anything.

We've read countless stories about how difficult it is to work with things like dates and even booleans, but buttons are pretty much solidly understood. Some combination of text, text+image or just image, and an onAction callback pretty much covers it. Oh sure, you can set fg/bg colors and the font, but that's usually to just give visual clues. Unfortunately, buttons would be the beginning of a downward spiral so steep, that sheer inertia would derail the project.

The project manager decided that images were incredibly confusing, so all buttons should have text instead of icons. Bryan had created several toolbars (because ribbons were shot down already) which, according to management, made the application unusable. In particular, there was a fairly standard user icon with a pencil bullet that was meant to (as you might have guessed) edit users...

  Manager:  So I looked at it with Lisa and she had no clue what it was.  
            It was so confusing that no one would ever be able to use our 
            application with it.  Buttons should all be text and not images!

OK, let’s forget that ribbons and toolbars have been an application standard for decades now; let’s focus on how confusing this really is. To that end, Bryan did the nanny test. He asked his nanny what she thought it meant and she thought that the button had something to do with people. Awesome, on the ball! After explaining what it did she agreed it made sense.

  Bryan: How about we explain it to the users and add a tooltip?
  Mgr:   Tooltips take way too long to display and it’s still 
         incredibly confusing – no one would remember it. We
         don't want people pressing the wrong buttons!
         And why are some of the buttons different colors than others?

Bryan wasn't sure if the manager realized how stupidly he was treating his users, if he was just oblivious, or if he was just pushing for his personal preference. In the end, all the toolbars were removed and the icons were replaced with text. This left an application with assorted colored buttons with text. Unfortunately, some of the buttons were so small that the text got displayed as a truncated string. Also, no amount of explanation could get across that color can also carry meaning (think traffic lights).

As his opinion in UI matters dwindled to nothing over the next couple of months, one of the four BA’s on the team of six pinged Bryan for a meeting about scalability. He wanted to make sure that the project was scalable for the next three quarters. Enter the Holy Hell Twilight Zone moment in the land where no ribbons or toolbars exist, as the project manager was also involved.

  Mgr:   I’ve got to make sure we have everything we need to 
         scale for the next three quarters.
  Bryan: I can’t get the project manager to commit to lay out
         three weeks of planning for development. I can’t even 
         begin to guess if we have what we need for the next three quarters.
  Mgr:   Well the vice president has a commitment to deploy this 
         to all divisions in the company within three quarters and 
         I’m tasked to make sure we have what we need.

Now Bryan could make up statistics better than 84.3% of people, but what was asked was impossible to determine. Additionally there was a flat out refusal to even vaguely commit to development more than a week or two in advance, so there was heavy resistance just to get the information needed to try!

At this point in time, Bryan felt the need to bail out, but before he left town he grabbed his prized coffee mug from the office. He wasn’t going to be back in town for at least three weeks and from his prior experiences he knew where this was going.

Of course the guy who originally sunk the ship in the first place had a true killer instinct, apparently knew better than Bryan and was left to steer the ship again. All these problems and issues that Bryan saw coming were either over exaggerations or without merit. The project manager felt so comfortable with the architecture and frameworks that Bryan put in place that he felt confident that there was absolutely nothing that he couldn’t handle. After all, he now had the buttons he wanted and understood. Bryan repeatedly asked if he wanted code walkthroughs and was denied. He didn't need to know what the different colors on the buttons were for. Bryan was even given 40 free consulting hours and even told not to check in his latest bug fixes.

Bryan sent his final farewell with a picture of him drinking from his coffee mug at home and out of state.

A real killer, when handed the ZF-1, would've immediately asked about the little red button on the bottom of the gun.

Planet DebianFoteini Tsiami: Internationalization, part three

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others include KVM and VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy-to-use GUI, and supports snapshots: saved points in time that mark the guest operating system's state, so that we can revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
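The revert-and-retest loop above can also be driven from the host with VBoxManage, VirtualBox's command-line front end. Here is a minimal sketch in Python; the VM name "ltsp-test" and snapshot name "clean-install" are placeholders, not names from this actual setup:

```python
import subprocess

VM = "ltsp-test"            # hypothetical VM name
SNAPSHOT = "clean-install"  # snapshot taken right after installing Debian

def snapshot_cmd(action):
    """Build the VBoxManage argv for taking or restoring the snapshot."""
    if action == "take":
        return ["VBoxManage", "snapshot", VM, "take", SNAPSHOT]
    elif action == "restore":
        return ["VBoxManage", "snapshot", VM, "restore", SNAPSHOT]
    raise ValueError(action)

def reset_to_clean_state():
    """Power off the VM (if running), roll back to the clean snapshot,
    and boot it again, ready for a fresh ltsp-manager test run."""
    subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=False)
    subprocess.run(snapshot_cmd("restore"), check=True)
    subprocess.run(["VBoxManage", "startvm", VM, "--type", "gui"], check=True)
```

One call to `reset_to_clean_state()` then replaces a full reinstall of the operating system between test runs.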

Planet DebianReproducible builds folks: Reproducible Builds: week 116 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (47)

diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones:

After the release, development continued in git with contributions from:

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

reprotest development

Development continued in git with contributions from:

tests.reproducible-builds.org development

  • Mattia Rizzolo:
    • Make database backups quicker to restore by avoiding pg_dump's --column-inserts option.
    • Fixup the deployment scripts after the stretch migration.
    • Fixup Apache redirects that were broken after introducing the buster suite.
    • Fixup diffoscope jobs that were not always installing the highest possible version of diffoscope.
  • Holger Levsen:
    • Add a node health check for a too big jenkins.log.


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don MartiStupid ideas department

Here's a probably stupid idea: give bots the right to accept proposed changes to a software project. Can automation encourage less burnout-provoking behavior?

A set of bots could interact in interesting ways.

  • Regression-test-bot: If a change only adds a test, applies cleanly to both the current version and to a previous version, and the previous version passes the test, accept it, even if the test fails for the current version.

  • Harmless-change-bot: If a change is below a certain size, does not modify existing tests, and all tests (including any new ones) pass, accept it.

  • Revert-bot: If any tests are failing on the current version, and have been failing for more than a certain amount of time, revert back to a version that passes.

Would more people write regression tests for their issues if they knew that a bot would accept them? Or say that someone makes a bad change but gets it past harmless-change-bot because no existing test covers it. No lengthy argument needed. Write a regression test and let regression-test-bot and revert-bot team up to take care of the problem. In general, move contributor energy away from arguing with people and toward test writing, and reduce the size of the maintainer's to-do list.
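One way to make these rules concrete is as predicate functions over a summary of a proposed change. This sketch invents a minimal Change record and a size threshold, since the post doesn't tie the idea to any particular forge or API:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """Hypothetical summary of a proposed change, as a bot might see it."""
    lines_changed: int
    adds_only_tests: bool        # the change adds a test and nothing else
    modifies_existing_tests: bool
    applies_to_previous: bool    # also applies cleanly to the previous version
    passes_on_previous: bool     # the new test passes on the previous version
    all_tests_pass: bool         # full suite passes on current + this change

MAX_HARMLESS_SIZE = 50  # arbitrary threshold for "below a certain size"

def regression_test_bot_accepts(c: Change) -> bool:
    # A new test that passes on the previous version documents a regression,
    # so accept it even if it fails on the current version.
    return c.adds_only_tests and c.applies_to_previous and c.passes_on_previous

def harmless_change_bot_accepts(c: Change) -> bool:
    # Small, doesn't touch existing tests, and the whole suite passes.
    return (c.lines_changed <= MAX_HARMLESS_SIZE
            and not c.modifies_existing_tests
            and c.all_tests_pass)
```

Revert-bot would then operate on the repository's history rather than on individual changes, rolling back to the newest commit whose test suite is green.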

Planet DebianMatthew Garrett: Avoiding TPM PCR fragility using Secure Boot

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].
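The "hashed and that hash recorded" step is the TPM's extend operation: a PCR can never be written directly, only extended, so its final value commits to the entire ordered sequence of measurements. A simplified SHA-256 model (real TPMs also keep an event log and support multiple hash banks):

```python
import hashlib

def extend(pcr: bytes, event: bytes) -> bytes:
    """Simplified model of a TPM PCR extend: PCR := H(PCR || H(event))."""
    digest = hashlib.sha256(event).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start out as all zeroes at boot.
pcr4 = bytes(32)
for component in [b"partition-table", b"bootloader"]:
    pcr4 = extend(pcr4, component)

# Extending the same events in a different order yields a different final
# value, which is why the PCR pins down the whole boot sequence, not just
# the set of components that ran.
```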

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point if you reboot your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
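That overwrite behavior can be modeled without building real cpio archives: treat each archive as a mapping from path to contents and apply the archives in order. This is only a sketch of the kernel's semantics, not a cpio implementation, and the file names and contents below are invented:

```python
def unpack_initramfs(archives):
    """Model of the kernel unpacking concatenated cpio archives: each
    archive is extracted over the previous ones, so a file in a later
    archive replaces the same path from an earlier archive."""
    rootfs = {}
    for archive in archives:
        rootfs.update(archive)
    return rootfs

user_initramfs = {"/init": b"distro init", "/etc/crypttab": b"..."}
signed_stub    = {"/init": b"get secrets from TPM, then exec real init"}

# The bootloader appends the signed stub last, so its /init wins.
final = unpack_initramfs([user_initramfs, signed_stub])
```

This is exactly the property the proposal below relies on: the signed, kernel-embedded archive comes last, so its files override anything the locally generated initramfs provides.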

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There are obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.


Planet DebianNorbert Preining: Calibre and rar support

Thanks to cooperation with the upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0 and has adopted more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.

RAR support is necessary when an eBook uses rar compression, which happens quite often with comic books (cbr extension). Calibre 3 has split rar support out into a dynamically loaded module, so what needs to be done is packaging it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch to fix this, see bug 720051, but there has been no reaction from the maintainer.

Thus, I am publishing updated packages for unrar, also shipping libunrar5, and the unrardll Python package in my calibre repository. After installing python-unrardll, Calibre will happily import meta-data from rar-compressed eBooks, as well as display them.

deb calibre main
deb-src calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13.



Cory DoctorowSan Diego! Come hear me read from Walkaway tomorrow night at Comickaze Liberty Station!

I’m teaching the Clarion Science Fiction writing workshop at UCSD in La Jolla this week, and tomorrow night at 7PM, I’ll be reading from my novel Walkaway at Comickaze Liberty Station, 2750 Historic Decatur Rd #101, San Diego, CA 92106. Hope to see you!

Planet DebianJonathan McDowell: Just because you can, doesn't mean you should

There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the 3 speakers concentrated more on why you might want to. As seems to be the way these days I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high level overview of some of the issues around the Investigatory Powers Act - if you’re familiar with it then I’m not adding anything new here, just trying to provide some sort of detail about why it’s a bad thing from both a technological and a legal perspective.


CryptogramAustralia Considering New Law Weakening Encryption

News from Australia:

Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.

"We've got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption," Turnbull told reporters.

"Where we can compel it, we will, but we will need the cooperation from the tech companies," he added.

Never mind that the law 1) would not achieve the desired results because all the smart "terrorists and drug traffickers and pedophile rings" will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that's all ground I've covered before.

I found this bit amusing:

Asked whether the laws of mathematics behind encryption would trump any new legislation, Mr Turnbull said: "The laws of Australia prevail in Australia, I can assure you of that.

"The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia."

Next Turnbull is going to try to legislate that pi = 3.2.

Another article. BoingBoing post.

EDITED TO ADD: More commentary.

Planet DebianSteinar H. Gunderson: Solskogen 2017: Nageru all the things

Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit, it really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)

Cory DoctorowI’m profiled in the new issue of Locus Magazine

Cory Doctorow: Bugging In:

‘‘Walkaway is an ‘optimistic disaster novel.’ It’s about people who, in a crisis, come together, rather than turning on each other. Its villains aren’t the people next door, who’ve secretly been waiting for civilization’s breakdown as an excuse to come and eat you, but the super-rich who are convinced that without the state and its police, the poors are coming to eat them.

‘‘In Walkaway, the economy has comprehensively broken down, and so has the planet. Climate refugees drift in huge, unstoppable numbers from place to place, seeking refuge. The world has no jobs for most people, because when robots do all the work, the forces of capital require a few foremen to boss the robots, and a few unemployed people mooching around the factory gates to threaten the supervisors with if they demand higher wages. Everyone else is surplus to requirements.

‘‘Awareness of self-deception is a tactic that’s deployed very usefully by a lot of people now. It’s at the core of things like cognitive behavioral therapy – the idea that you must become an empiricist of your emotions because your recollections of emotions are always tainted, so you have to write down your experiences and go back to see what actually happened. Do you remember the term Endless September? It’s from when AOL came on to the net, and suddenly new people were getting online all the time, who didn’t know how things worked. The onboarding process to your utopian project is always difficult. It’s a thing Burning Man is struggling with, and it’s a thing fandom is struggling with right now. We were just talking about what it’s like to go to a big media convention, a San Diego Comic-Con or something, and to what extent that’s a new culture, or it’s continuous with the old culture, or it’s preserving the best things or bringing in the worst things, or it’s overwhelming the old, or whatever. It’s a real problem, and there is a shibboleth, which is, ‘I don’t object to all these newcomers, but they’re coming in such numbers that they’re overwhelming our ability to assimilate them.’ This is what every xenophobe who voted for Brexit said, but you hear that lament in science fiction too, and you hear it even about such things as gender parity in the workplace.”


‘‘For me, I live by the aphorism, ‘fail better, fail faster.’ To double your success rate, triple your failure rate. What the walkaways figured out how to do is reduce the cost of failure, to make it cheaper to experiment with new ways of succeeding. One of the great bugaboos of the rationalist movement is loss aversion. There is another name for it, ‘the entitlement effect’: basically, people value something they have more than they would pay for it before they got it. How much is your IKEA furniture worth before and after you assemble it? People grossly overestimate the value of their furniture after they’ve assembled it, because having infused it with their labor and ownership, they feel an attachment to it that is not economically rational. Sunk cost is another great fallacy. You can offer somebody enough money to buy the furniture again, and pay somebody to assemble it, and they’ll turn you down, because now that they have it, they don’t want to lose it. That was the wisdom of Obama with Obamacare. He understood that Obamacare is not sustainable, that basically letting insurance companies set the price without any real limits means that the insurance companies will eventually price it out of the government’s ability to pay, but he also understood that once you give 22 million people healthcare, when the insurance companies blew it up, the people would then demand some other healthcare system be found. The idea of just going without healthcare, which was a thing that people were willing to put up with for decades, is something they’ll never go back to. Any politician who proposes that when Obamacare blows up that we replace it with nothing, as opposed to single payer – where it’s going to end up – that politician is dead in the water. ”


Worse Than FailureCodeSOD: Impersonated Programming

Once upon a time, a long long time ago, I got contracted to show a government office how to build and deliver applications… in Microsoft Access. I’m sorry. I’m so, so sorry. As horrifying and awful as it is, Access is actually built with some mechanisms to support that: you can break the UI and behavior off into one file while keeping the data in another, and you can construct linked tables that connect to a real database, if you don’t mind gluing a UI made out of evil and sin to your “real” database.

Which brings us to poor Alex Rao. Alex has an application built in Access. This application uses linked tables, which he wants to convert to local tables. The VBA API exposed by Access doesn’t give him any way to do this, so he came up with this solution…

Public Function convertToLocal()
    Dim dBase As DAO.Database
    Dim tdfTable As DAO.TableDef

    Set dBase = CurrentDb

    For Each tdfTable In dBase.TableDefs
        If tdfTable.Connect <> "" Then
            ' OH. MY. GOSH. I hate myself so much for doing this. For the love of everything holy,
            ' dear reader, if you can come up with a better way to do this, please tell me about it
            ' AS SOON AS POSSIBLE
            ' I have literally been trying to do this for the past week. For reference, here is what I
            ' am trying to do:
            '   Convert a "linked" table to a "local" one
            '   Keep relationships intact.
            ' Now, Access has this handy tool, "Convert to Local Table" - you'll see it if you right click
            ' on a linked table. However, THERE IS NO WAY IN VBA TO DO THIS. I am aware of the following:
            '   DoCmd.SelectObject acTable, "Company", True RunCommand acCmdConvertLinkedTableToLocal
            ' Note that this no longer works as of Access 2016 because the wonderful programmers at Microsoft decided
            ' that "It wasn't used anymore".
            ' So, onto my solution:
            '   First, I select the table object, making sure it's actually selected (i.e., like a user selected it)
            '   Then, I pause for one second (I hope to the man upstairs that's long enough)
            '   Then, I send the "Context Menu" key (SHIFT+F10)
            '   Then, I pause for another second (Again, fingers crossed)
            '   Then, I send the "v" key - to activate the "ConVert to Local Table" command shortcut
            ' I literally send KEYPRESSES to the active application, and hope to God that Access is ready to go.
            ' And if the user selected a different application (or literally anything else) in that time? Well,
            ' then Screw you, user.
            ' God help us.
            DoCmd.SelectObject acTable, tdfTable.Name, True
            Pause 1
            SendKeys "+{F10}", True
            Pause 1
            SendKeys "v", True
        End If
    Next tdfTable
End Function

Don’t feel bad, Alex. I’m certain this isn’t the worst thing ever built in Access.


Don MartiPlaying for third place

Just tried a Twitter advertising trick that a guy who goes by "weev" posted two years ago.

It still works.

They didn't fix it.

Any low-budget troll who can read that old blog post and come up with a valid credit card number can still do it.

Maybe Twitter is a bad example, but the fast-moving nationalist right wing manages to outclass its opponents on other social marketing platforms, too. Facebook won't even reveal how badly they got played in 2016. They thought they were putting out cat food for cute Internet kittens, but the rats ate it.

This is not new. Right-wing shitlords, at least the best of them, are the masters of database marketing. They absolutely kill it, and they have been ever since Marketing as we know it became a thing. Some good examples:

All the creepy surveillance marketing stuff they're doing today is just another set of tools in an expanding core competency.

Every once in a while you get an exception. The environmental movement became a direct mail operation in response to Interior Secretary James G. Watt, who alarmed environmentalists enough that organizations could reliably fundraise with direct mail copy quoting from Watt's latest speech. And the Democrats tried that "Organizing for America" thing for a little while, but, man, their heart just wasn't in it. They dropped it like a Moodle site during summer vacation. Somehow, the creepier the marketing, the more it skews "red". The more creativity involved, the more it skews "blue" (using the USA meanings of those colors.) When we make decisions about how much user surveillance we're going to allow on a platform, we're making a political decision.

Anyway. News Outlets to Seek Bargaining Rights Against Google and Facebook.

The standings so far.

  1. Shitlords and fraud hackers

  2. Adtech and social media bros


News sites want to go to Congress, to get permission to play for third place in their own business? You want permission to bring fewer resources and less experience to a surveillance marketing game that the Internet companies are already losing?

We know the qualities of a medium that you win by being creepier, and we know the qualities of a medium that you can win with reputation and creativity. Why waste time and money asking Congress for the opportunity to lose, when you could change the game instead?

Maybe achieving balance in political views depends on achieving balance in business model. Instead of buying in to the surveillance marketing model 100%, and handing an advantage to one side, maybe news sites should help users control what data they share in order to balance competing political interests.

Planet Linux Australiasthbrx - a POWER technical blog: XDP on Power

This post is a bit of a break from the standard IBM fare of this blog, as I now work for Canonical. But I have a soft spot for Power from my time at IBM - and Canonical officially supports 64-bit, little-endian Power - so when I get a spare moment I try to make sure that cool, officially-supported technologies work on Power before we end up with a customer emergency! So, without further ado, this is the story of XDP on Power.


eXpress Data Path (XDP) is a cool Linux technology to allow really fast processing of network packets.

Normally in Linux, a packet is received by the network card, an SKB (socket buffer) is allocated, and the packet is passed up through the networking stack.

This introduces an inescapable latency penalty: we have to allocate some memory and copy stuff around. XDP allows some network cards and drivers to process packets early - even before the allocation of the SKB. This is much faster, and so has applications in DDOS mitigation and other high-speed networking use-cases. The IOVisor project has much more information if you want to learn more.


XDP processing is done by an eBPF program. eBPF - the extended Berkeley Packet Filter - is an in-kernel virtual machine with a limited set of instructions. The kernel can statically validate eBPF programs to ensure that they terminate and are memory safe. From this it follows that the programs cannot be Turing-complete: they do not have backward branches, so they cannot do fancy things like loops. Nonetheless, they're surprisingly powerful for packet processing and tracing. eBPF programs are translated into efficient machine code using in-kernel JIT compilers on many platforms, and interpreted on platforms that do not have a JIT. (Yes, there are multiple JIT implementations in the kernel. I find this a terrifying thought.)

Rather than requiring people to write raw eBPF programs, you can write them in a somewhat-restricted subset of C, and use Clang's eBPF target to translate them. This is super handy, as it gives you access to the kernel headers - which define a number of useful data structures like headers for various network protocols.
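To make that concrete, here is a sketch of the workflow: a trivial restricted-C program (it simply passes every packet) and the Clang invocation that turns it into eBPF bytecode. The compile step is only attempted when clang and the kernel headers are actually present:

```shell
cat > xdp_pass.c <<'EOF'
/* Minimal XDP program in restricted C: admit every packet.
 * A real program would bounds-check and parse headers via
 * ctx->data and ctx->data_end before deciding. */
#include <linux/bpf.h>

__attribute__((section("xdp"), used))
int xdp_pass(struct xdp_md *ctx)
{
    return XDP_PASS;
}
EOF

# Translate to eBPF bytecode with Clang's bpf target
if command -v clang >/dev/null && [ -e /usr/include/linux/bpf.h ]; then
    clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o
fi
```

The resulting object file (xdp_pass.o here) is what a loader then attaches to a network interface.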

Trying it

There are a few really interesting projects already up and running that allow you to explore XDP without learning the innards of both eBPF and the kernel networking stack. I explored the samples in the bcc compiler collection and also the samples from the netoptimizer/prototype-kernel repository.

The easiest way to get started with these is with a virtual machine, as recent virtio network drivers support XDP. If you are using Ubuntu, you can use the uvt-kvm tooling to trivially set up a VM running Ubuntu Zesty on your local machine.

Once your VM is installed, you need to shut it down and edit the virsh XML.

You need 2 vCPUs (or more) and a virtio+vhost network card. You also need to edit the 'interface' section and add the following snippet (with thanks to the xdp-newbies list):

<driver name='vhost' queues='4'>
    <host tso4='off' tso6='off' ecn='off' ufo='off'/>
    <guest tso4='off' tso6='off' ecn='off' ufo='off'/>
</driver>

(If you have more than 2 vCPUs, set the queues parameter to 2x the number of vCPUs.)

Then, install a modern clang (we've had issues with 3.8 - I recommend v4+), and the usual build tools.

I recommend testing with the prototype-kernel tools - the DDOS prevention tool is a good demo. Then - on x86 - you just follow their instructions. I'm not going to repeat that here.


What happens when you try this on Power? Regular readers of my posts will know to expect some minor hitches.

XDP does not disappoint.

Firstly, the prototype-kernel repository hard codes x86 as the architecture for kernel headers. You need to change it for powerpc.

Then, once you get the stuff compiled, and try to run it on a current-at-time-of-writing Zesty kernel, you'll hit a massive debug splat ending in:

32: (61) r1 = *(u32 *)(r8 +12)
misaligned packet access off 0+18+12 size 4
load_bpf_file: Permission denied

It turns out this is because in Ubuntu's Zesty kernel, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set on ppc64el. Because of that, the eBPF verifier will check that all loads are aligned - and this load (part of checking some packet header) is not, and so the verifier rejects the program. Unaligned access is not enabled because the Zesty kernel is being compiled for CPU_POWER7 instead of CPU_POWER8, and we don't have efficient unaligned access on POWER7.

As it turns out, IBM never released any officially supported Power7 LE systems - LE was only ever supported on Power8. So, I filed a bug and sent a patch to build Zesty kernels for POWER8 instead, and that has been accepted and will be part of the next stable update due real soon now.

Sure enough, if you install a kernel with that config change, you can verify the XDP program and load it into the kernel!

If you have real powerpc hardware, that's enough to use XDP on Power! Thanks to Michael Ellerman, maintainer extraordinaire, for verifying this for me.

If - like me - you don't have ready access to Power hardware, you're stuffed. You can't use qemu in TCG mode: to use XDP with a VM, you need multi-queue support, which only exists in the vhost driver, which is only available for KVM guests. Maybe IBM should release a developer workstation. (Hint, hint!)

Overall, I was pleasantly surprised by how easy things were for people with real ppc hardware - it's encouraging to see something not require kernel changes!

eBPF and XDP are definitely growing technologies - as Brendan Gregg notes, now is a good time to learn them! (And those on Power have no excuse either!)


Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 2.

This week older students start their research projects for the term, whilst younger students are doing the Timeline Activity. Our youngest students are thinking about the places where people live and can join together with older students as buddies to Build A Humpy together.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), or those in classes integrated with Year 1 (Unit F-1.3) are considering different types of homes this week. They will think about where the people in the stories from last week live and compare that to their own houses. They can consider how homes were different in the past and how our homes help us meet our basic needs. There is an option this week for these students to buddy with older students, especially those in Years 4, 5 and 6, to undertake the Building A Humpy activity together. In this activity students collect materials to build a replica Aboriginal humpy or shelter outside. Many teachers find that both senior primary and the younger students get a lot of benefit from helping each other with activities, enriching the learning experience. The Building a Humpy activity is one where the older students can assist the younger students with the physical requirements of building a humpy, whilst each group considers aspects of the activity relevant to their own studies, and comparing past ways of life to their own.

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking the Timeline Activity this week. This activity is designed to complement the concept of the number line from the Mathematics curriculum, whilst helping students to learn to visualise the abstract concepts of the past and different lengths of time between historical events and the present. In this activity students walk out a timeline, preferably across a large open space such as the school Oval, whilst attaching pieces of paper at intervals to a string. The pieces of paper refer to specific events in history (starting with their own birth years) and cover a wide range of events from the material covered this year. Teachers can choose from events in Australian and world history, covering 100s, 1000s and even millions of years, back to the dinosaurs. Teachers can also add their own events. Thus the details of the activity are able to be altered in different years to maintain student interest. Depending on the class, the issue of scale can be addressed in various ways. By physically moving their bodies, students will start to understand the lengths of time involved in examinations of History. This activity is repeated in increasing detail in higher years, to make sure that the fundamental concepts are absorbed by students over time.

Years 3 to 6

Students in Years 3 to 6 are starting their term research projects on Australian history this week. Students in Year 3 (Unit 3.7) concentrate on topics from the history of their capital city or local community. Suggested topics are included for Brisbane, Melbourne, Sydney, Adelaide, Darwin, Hobart, Perth and Canberra. Teachers can substitute their own topics for a local community study. Students will undertake a Scientific Investigation into an aspect of their chosen research project and will produce a Scientific Report. It is recommended that teachers supplement the resources provided with old photographs, books, newspapers etc, many of which can be accessed online, to provide the students with extra material for their investigation.

First Fleet 1788

Students in Year 4 (Unit 4.3) will be focusing on Australia in the period up to and including the arrival of the First Fleet and the early colonial period. OpenSTEM’s Understanding Our World® program encompasses the whole Australian curriculum for HASS and thus does not simply rely on “flogging the First Fleet to death”! There are 7 research themes for Year 4 students: “Australia Before 1788”; “The First Fleet”; “Convicts and Settlers”; “Aboriginal People in Colonial Australia”; “Australia and Other Nations in the 17th, 18th and 19th centuries”; “Colonial Children”; “Colonial Animals and their Impact”. These themes are allocated to groups of students and each student chooses an individual research topic within their group’s themes. Suggested topics are given in the Teacher Handbook, as well as suggested resources.

19th century china dolls

Year 5 (Unit 5.3) students focus on the colonial period in Australia. There are 9 research themes for Year 5 students. These are: “The First Fleet”; “Convicts and Settlers”; “The 6 Colonies”; “Aboriginal People in Colonial Australia”; “Resistance to Colonial Authorities”; “Sugar in Queensland”; “Colonial Children”; “Colonial Explorers” and “Colonial Animals and their Impact”. As well as themes unique to Year 5, some overlap is provided to facilitate teaching in multi-year classes. The range of themes also allows for the possibility of teachers choosing different themes in different years. Once again individual topics and resources are suggested in the Teacher Handbook.

Year 6 (Unit 6.3) students will examine research themes around Federation and the early 20th century. There are 8 research themes for Year 6 students: “Federation and Sport”; “Women’s Suffrage”; “Aboriginal Rights in Australia”; “Henry Parkes and Federation”; “Edmund Barton and Federation”; “Federation and the Boer War”; “Samuel Griffith and the Constitution”; “Children in Australian History”. Individual research topics and resources are suggested in the Teacher Handbook. It is expected that students in Year 6 will be able to research largely independently, with weekly guidance from their teacher. OpenSTEM’s Understanding Our World® program is aimed at developing research skills in students progressively, especially over the upper primary years. If the program is followed throughout the primary years, students are well prepared for high school by the end of Year 6, having practised individual research skills for several years.


Rondam RamblingsThings I wish someone had told me before I started angel investing

Back in 2005 I suddenly found myself sitting on a big pile of money after the Google IPO so I did what any young nouveau-riche high-tech dilettante would do: I started angel investing.  I figured it would be more fun to be the beggee than the beggor for a change, and I was right about that.  But I was wrong about just about everything else, and I got a very expensive education as a result. Now

Planet DebianJose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9

This article is an experiment in progress, please recheck, while I am updating with the new information.

I have a very old installation of Debian, possibly since v2, I do not remember, that I have upgraded since then both in software and hardware. Now the hardware is 64-bit and runs a 64-bit kernel, but the runtime is still 32-bit. For 99% of tasks this is very good. Now that I have run many simulations, I may have found a solution for crossgrading my desktop. I write here the tentative procedure and will update it with more ideas on the problems that I find.

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second, you need to do a bootstrap of the crossgrading and the installation of all the libs as amd64:

 apt-get update
 apt-get upgrade
 apt-get clean
 dpkg --list > original.dpkg
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 cd /var/cache/apt/archives/
 dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
 dpkg --configure --pending
 dpkg -i --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb
 for pack32 in $(grep i386 original.dpkg  | egrep "^ii " | awk '{print $2}' ) ; do 
   echo $pack32 ; 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
   fi ; 
 done

 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

But this procedure does not prevent the "apt-get install" to have broken dependencies.
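For reference, the i386 package list that drives the loop above can be pulled out of the dpkg dump in a single awk pass. A sketch against a hypothetical dump (the package names are made up):

```shell
# Stand-in for the real `dpkg --list` output saved to original.dpkg
cat > original.dpkg <<'EOF'
ii  libfoo1:i386   1.0-1    i386   example shared library
ii  bar            2.0-1    amd64  example tool
ii  libbaz2:i386   0.5-3    i386   another example library
EOF

# Installed i386 packages, with the :i386 qualifier stripped
awk '/^ii / && $2 ~ /:i386$/ { sub(/:i386$/, "", $2); print $2 }' \
    original.dpkg > i386-packages.txt
```

Each name in i386-packages.txt can then be fed to apt-get as name:amd64, as in the loops above.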

So trying to install the core packages and the libraries using "dpkg -i".

apt-get update
apt-get upgrade
apt-get autoremove
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
  fi ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb

dpkg --remove libcurl4-openssl-dev
dpkg -i libcurl4-openssl-dev_*_amd64.deb

Remove packages until there are no broken packages

dpkg --print-architecture
dpkg --print-foreign-architectures
apt-get --fix-broken --allow-remove-essential install

Still broken, because apt-get removed dpkg

So instead of only installing the libs with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get update
apt-get upgrade
apt-get autoremove
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install --skip-same-version dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb
dpkg --configure --pending

Remove packages and reinstall selected packages until you fix all of them. What follows is the trail for my machine:

dpkg --remove rkhunter
dpkg --remove libmarco-private1:i386 marco mate-control-center mate-desktop-environment-core mate-desktop-environment-core  mate-desktop-environment mate-desktop-environment-core mate-desktop-environment-extras
dpkg --remove libmate-menu2:i386 libmate-window-settings1:i386 mate-panel mate-screensaver python-mate-menu libmate-slab0:i386 mozo mate-menus
dpkg --remove libmate-menu2:i386 mate-panel python-mate-menu mate-applets mate-menus
dpkg -i libmate-menu2_1.16.0-2_amd64.deb
dpkg --remove  gir1.2-ibus-1.0:i386 gnome-shell gnome-shell-extensions gdm3 gnome-session
dpkg --remove  gir1.2-ibus-1.0:i386
dpkg --remove libmateweather1:i386
dpkg -i libmateweather1_1.16.1-2_amd64.deb

apt-get --fix-broken --download-only install
dpkg --skip-same-version --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg -i python_2.7.13-2_amd64.deb
dpkg --configure --pending
dpkg -i perl_5.24.1-3+deb9u1_amd64.deb perl-base_5.24.1-3+deb9u1_amd64.deb
dpkg -i exim4-daemon-light_4.89-2+deb9u1_amd64.deb exim4-base_4.89-2+deb9u1_amd64.deb
dpkg -i libuuid-perl_0.27-1_amd64.deb
dpkg --configure --pending
dpkg --install gstreamer1.0-plugins-bad_1.10.4-1_amd64.deb libmpeg2encpp-2.1-0_1%3a2.1.0+debian-5_amd64.deb libmplex2-2.1-0_1%3a2.1.0+debian-5_amd64.deb
dpkg --configure --pending
dpkg --audit

Now to fix the broken dependencies in apt-get. I found no other way than removing all the broken packages.

dpkg --remove $(apt-get --fix-broken install | cut -f 2 -d ' ' )
apt-get install $(grep -v ":i386" ~/original.dpkg | egrep "^ii" | grep -v "aiccu" | grep -v "acroread" | grep -v "flash-player-properties" | grep -v "flashplayer-mozilla" | egrep -v "tp-flash-marillat" | awk '{print $2}')

Planet DebianVasudev Kamath: Overriding version information with pbr

I recently raised a pull request on zfec converting its Python packaging from pure to pbr-based. Today I got a review from Brian Warner, and one of the issues mentioned was that python --version is not giving the same output as the previous version of

The previous version used versioneer, which extracts the version information it needs from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, the style of version, the tag prefix (for the VCS), etc. pbr also extracts version information from git tags, but it expects tags in the refs/tags/x.y.z format, whereas zfec used a zfec- prefix on its tags (for example zfec-1.4.24), which pbr does not process. The end result: I get a version in the format 0.0.devNN, where NN is the number of commits in the repository since its inception.
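
To make the mismatch concrete, here is a minimal sketch of the prefix stripping pbr would have to do to understand zfec's tags. The helper name and matching logic are mine, purely for illustration; this is not pbr or versioneer code.

```python
import re

def version_from_tag(tag, prefix="zfec-"):
    """Strip a project-specific tag prefix (e.g. 'zfec-1.4.24') and
    return the bare x.y.z version string, or None if it doesn't match."""
    m = re.match(re.escape(prefix) + r"(\d+(?:\.\d+)*)$", tag)
    return m.group(1) if m else None

print(version_from_tag("zfec-1.4.24"))  # -> 1.4.24
print(version_from_tag("v1.4.24"))      # -> None (not a zfec- tag)
```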

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it auto-deduces, but there was none other than putting the version string in the PBR_VERSION environment variable. That documentation was contributed by me to the pbr project 3 years back.

So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION.

# In setup.py, before setup() is called, so that pbr sees the override:
import os

import versioneer

os.environ['PBR_VERSION'] = versioneer.get_version()


And I added the below snippet to setup.cfg, which is how versioneer is configured with various information, including tag prefixes.

[versioneer]
VCS = git
style = pep440
versionfile_source = zfec/
versionfile_build = zfec/
tag_prefix = zfec-
parentdir_prefix = zfec-

Though this workaround gets the job done, it does not feel correct to set an environment variable to change the logic of another part of the same program. If you know a better way, do let me know! I should also probably consider filing a feature request against pbr to provide a way to pass a tag prefix to its version calculation logic.

Planet DebianLior Kaplan: PDO_IBM: tracking changes publicly

As part of my work at Zend (now a RogueWave company), I maintain the various patch sets. One of those is the changes for PDO_IBM extension for PHP.

After some patch exchanges I decided it would be easier to manage the whole process over a public git repository, and maybe gain some more review / feedback along the way. Info at

Another aspect of this is having the IBMi-specific patches from YIPS (young i professionals) at, which themselves are patches on top of the vanilla releases. Info at

So keeping track of these changes is easier using git's ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily, and make sure the changes can be ported back and forward between the vanilla and IBMi versions of the extension.

Filed under: PHP

Krebs on SecurityPorn Spam Botnet Has Evil Twitter Twin

Last month KrebsOnSecurity published research into a large distributed network of apparently compromised systems being used to relay huge blasts of junk email promoting “online dating” programs — affiliate-driven schemes traditionally overrun with automated accounts posing as women. New research suggests that another bot-promoting botnet of more than 80,000 automated female Twitter accounts has been pimping the same dating scheme and prompting millions of clicks from Twitter users in the process.

One of the 80,000+ Twitter bots ZeroFOX found that were enticing male Twitter users into viewing their profile pages.


Not long after I published Inside a Porn-Pimping Spam Botnet, I heard from researchers at ZeroFOX, a security firm that helps companies block attacks coming through social media.

Zack Allen, manager of threat operations at ZeroFOX, said he had a look at some of the spammy, adult-themed domains being promoted by the botnet in my research and found they were all being promoted through a botnet of bogus Twitter accounts.

Those phony Twitter accounts all featured images of attractive or scantily-clad women, and all were being promoted via suggestive tweets, Allen said.

Anyone who replied was ultimately referred to subscription-based online dating sites run by Deniro Marketing, a company based in California. This was the same company that was found to be the beneficiary of spam from the porn botnet I’d written about in June. Deniro did not respond to requests for comment.

“We’ve been tracking this thing since February 2017, and we concluded that the social botnet controllers are probably not part of Deniro Marketing, but most likely are affiliates,” Allen said.

ZeroFOX found more than 86,262 Twitter accounts were responsible for more than 8.6 million posts on Twitter promoting porn-based sites, many of them promoting domains in a swath of Internet address space owned by Deniro Marketing (ASN19884).

Allen said 97.4% of the bot display names followed the pattern "Firstname Surname", with the first letter of each name capitalized, the two names separated by a single whitespace character, and the names drawn from common female names.
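
For illustration, that display-name pattern is simple to express as a regular expression. This is my own sketch of the pattern as described, not ZeroFOX's actual detection code, and the sample names are made up:

```python
import re

# Two capitalized words separated by a single space, per the description.
BOT_NAME_PATTERN = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

print(bool(BOT_NAME_PATTERN.match("Ashley Miller")))    # -> True
print(bool(BOT_NAME_PATTERN.match("ashley_miller42")))  # -> False
```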

An analysis of the Twitter bot names used in the scheme. Graphic: ZeroFOX.


The accounts advertise adult content by routinely injecting links from their twitter profiles to a popular hashtag, or by @-mentioning a popular user or influencer on Twitter. Those profile links are shortened with Google’s link shortening service, which then redirects to a free hosting domain in the dot-tk (.tk) domain space (.tk is the country code for Tokelau — a group of atolls in the South Pacific).

From there the system is smart enough to redirect users back to Twitter if they appear to be part of any automated attempt to crawl the links (e.g. by using site download and mirroring tools like cURL), the researchers found. They said this was likely a precaution on the part of the spammers to avoid detection by automated scanners looking for bot activity on Twitter. Requests from visitors who look like real users responding to tweets are redirected to the porn spam sites.
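
As a rough sketch, the cloaking behaviour described above amounts to a user-agent check on the redirector. The hint list and both URLs below are hypothetical stand-ins, purely to illustrate the mechanism, not observed values from the botnet:

```python
# Crawler-like clients (e.g. cURL) get bounced back to Twitter, while
# browser-like visitors are sent on to the spam landing page.
CRAWLER_HINTS = ("curl", "wget", "python-requests", "bot", "spider")

def redirect_target(user_agent):
    ua = user_agent.lower()
    if any(hint in ua for hint in CRAWLER_HINTS):
        return "https://twitter.com/"     # deflect automated scanners
    return "http://spam-example.tk/"      # hypothetical .tk landing page

print(redirect_target("curl/7.54.0"))  # -> https://twitter.com/
```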

Because the links promoted by those spammy Twitter accounts all abused short link services from Twitter and Google, the researchers were able to see that this entire botnet has generated more than 30 million unique clicks from February to June 2017.

[SIDE NOTE: Anyone seeking more context about what’s being promoted here can check out the Web site datinggold[dot]com [Caution: Not-Safe-for-Work], which suggests it’s an affiliate program that rewards marketers who drive new signups to its array of “online dating” offerings — mostly “cheating,” “hookup” and “affair-themed” sites like “AdsforSex,” “Affair Hookups,” and “LocalCheaters.” Note that this program is only interested in male signups.]

The datinggold affiliate site which pays spammers to bring male signups to "online dating" services.


Allen said the Twitter botnet relies heavily on accounts that have been “aged” for a period of time as another method to evade anti-spam techniques used by Twitter, which may treat tweets from new accounts with more prejudice than those from established accounts. ZeroFOX said about 20 percent of the Twitter accounts identified as part of the botnet were aged at least one year before sending their first tweet, and that the botnet overall demonstrates that these affiliate programs have remained lucrative by evolving to harness social media.

“The final redirect sites encourage the user to sign up for subscription pornography, webcam sites, or fake dating,” ZeroFOX wrote in a report being issued this week. “These types of sites, although legal, are known to be scams.”

Perhaps the most well-known example of the subscription-based dating/cheating service that turned out to be mostly phony was AshleyMadison. After AshleyMadison’s user databases were plundered and published online, the company admitted that its service used at least 70,000 female chatbots that were programmed to message new users and try to entice them into replying — which required a paid account.

“Many of the sites’ policies claim that the site owners operate most of the profiles,” ZeroFOX charged. “They also have overbearing policies that can use personal information of their customers to send to other affiliate programs, yielding more spam to the victim. Much like the infamous ‘partnerka’ networks from the Russian Business Network, money is paid out via clicks and signups on affiliate programs” [links added].

Although the Twitter botnet discovered by ZeroFOX has since been dismantled, it's not hard to see how this same approach could be very effective at spreading malware. Keep your wits about you while using or cruising social media sites, and be wary of any posts or profiles that match the descriptions and behavior of the bot accounts described here.

For more on this research, see ZeroFOX’s blog post Inside a Massive Siren Social Network Spam Botnet.


CryptogramBook Review: Twitter and Tear Gas, by Zeynep Tufekci

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as "slacktivism" or "clicktivism," the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet -- and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book's central topic.

"The Power and Fragility of Networked Protest" is the book's subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it's a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his "I Have a Dream" speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That's the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous -- with sometimes disastrous results. Says Tufekci: "The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next." That makes them less able to respond to government counters, change their tactics -- a phenomenon Tufekci calls "tactical freeze" -- make movement-wide decisions, and survive over the long haul.

Tufekci isn't arguing that modern protests are necessarily less effective, but that they're different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a "signal" that corresponds to one of several underlying "capacities." There's narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There's disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there's electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various "affordances" of modern Internet technologies, particularly social media, the same signal -- a protest of a given size -- reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to the 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world's attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it's not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East -- particularly Egypt and her native Turkey -- Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and -- despite being published by the intimidating Yale University Press -- her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or -- in some ways worse -- leave it to unfettered for-profit corporations. When you're reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You'll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: "Technology is neither good nor bad; nor is it neutral." This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick -- dissidents, hackers, criminals, marginalized groups -- are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It's still an open question who will gain the upper hand in the long term, but Tufekci's book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on

CryptogramFriday Squid Blogging: Eyeball Collector Wants a Giant-Squid Eyeball

They're rare:

The one Dubielzig really wants is an eye from a giant squid, which has the biggest eye of any living animal -- it's the size of a dinner plate.

"But there are no intact specimens of giant squid eyes, only rotten specimens that have been beached," he says.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramForged Documents and Microsoft Fonts

A set of documents in Pakistan were detected as forgeries because their fonts were not in circulation at the time the documents were dated.

Worse Than FailureError'd: Unfortunate Timing

"Apparently, I viewed the page during one of those special 31 seconds of the year," wrote Richard W.


"Well, it looks like I'll be paying full price for this repair," wrote Ryan W.


Marco writes, "So...that's like September 2nd?"


"Office 365????? I guess so, but ONLY if you're sure, Microsoft..." writes Leonid T.


Brandon writes, "Being someone who lives in 'GMT +1', I have to wonder, is it the 6th, 11th, or 12th of July or the 7th of June, November, or December we're talking about here?"


"Translated, it reads, 'Bayrou and Modem in angry mode', assuming of course it really is Mr. Bayrou in the pic," wrote Matt.


[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Don MartiSmart futures contracts on software issues talk, and bullshit walks?

Previously: Benkler’s Tripod, transactions from a future software market, more transactions from a future software market

Owning "equity" in an outcome

John Robb: Revisiting Open Source Ventures:

Given this, it appears that an open source venture (a company that can scale to millions of worker/owners creating a new economic ecosystem) that builds massive human curated databases and decentralizes the processing load of training these AIs could become extremely competitive.

But what if the economic ecosystem could exist without the venture? Instead of trying to build a virtual company with millions of workers/owners, build a market economy with millions of participants in tens of thousands of projects and tasks? All of this stuff scales technically much better than it scales organizationally—you could still be part of a large organization or movement while only participating directly on a small set of issues at any one time. Instead of holding equity in a large organization with all its political risk, you could hold a portfolio of positions in areas where you have enough knowledge to be comfortable.

Robb's opportunity is in training AIs, not in writing code. The "oracle" for resolving AI-training or dataset-building contracts would have to be different, but the futures market could be the same.
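
As a toy illustration of the contract itself (nothing like this market exists; all names, URLs, and fields here are hypothetical), a binary futures contract on a bug outcome could be modelled as:

```python
from dataclasses import dataclass

@dataclass
class IssueFuture:
    issue_url: str   # the bug the contract references
    deadline: str    # date the "oracle" (e.g. the bug tracker) is checked
    price: float     # market price in (0, 1), read as implied probability

    def payoff(self, fixed_by_deadline: bool) -> float:
        # Binary contract: pays 1 unit if the fix lands in time, else 0.
        return 1.0 if fixed_by_deadline else 0.0

# A price of 0.8 says the market puts roughly 80% odds on the fix landing.
contract = IssueFuture("https://example.org/bugs/123", "2017-12-31", 0.8)
print(contract.payoff(True))  # -> 1.0
```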

The cheating project problem

Why would you invest in a futures contract on bug outcomes when the project maintainer controls the bug tracker?

And what about employees who are incentivized from both sides: paid to fix a bug but able to buy futures contracts (anonymously) that will let them make more on the market by leaving it open?

In order for the market to function, the total reputation of the project and contributors must be high enough that outside participants believe that developers are more motivated to maintain that reputation than to "take a dive" on a bug.

That implies that there is some kind of relationship between the total "reputation capital" of a project and the maximum market value of all the futures contracts on it.

Open source metrics

To put that another way, there must be some relationship between the market value of futures contracts on a project and the maximum reputation value of the project. So that could be a proxy for a difficult-to-measure concept such as "open source health."

Open source journalism

Hey, tickers to put into stories! Sparklines! All the charts and stuff that finance and sports reporters can build stories around!


Krebs on SecurityThieves Used Infrared to Pull Data from ATM ‘Insert Skimmers’

A greater number of ATM skimming incidents now involve so-called “insert skimmers,” wafer-thin fraud devices made to fit snugly and invisibly inside a cash machine’s card acceptance slot. New evidence suggests that at least some of these insert skimmers — which record card data and store it on a tiny embedded flash drive  — are equipped with technology allowing them to transmit stolen card data wirelessly via infrared, the same communications technology that powers a TV remote control.

Last month the Oklahoma City metropolitan area experienced a rash of ATM attacks involving insert skimmers. The local KFOR news channel on June 28, 2017 ran a story stating that at least four banks in the area were hit with insert skimmers.

The story quoted a local police detective saying “the skimmer contains an antenna which transmits your card information to a tiny camera hidden somewhere outside the ATM.”

Financial industry sources tell KrebsOnSecurity that preliminary analysis of the insert skimmers used in the attacks suggests they were configured to transmit stolen card data wirelessly to the hidden camera using infrared, a short-range communications technology most commonly found in television remote controls.

Here’s a look at one of the insert skimmers that Oklahoma authorities recently seized from a compromised ATM:

An insert skimmer retrieved from a compromised cash machine in Oklahoma City.


In such an attack, the hidden camera has a dual function: To record time-stamped videos of ATM users entering their PINs; and to receive card data recorded and transmitted by the insert skimmer. In this scenario, the fraudster could leave the insert skimmer embedded in the ATM’s card acceptance slot, and merely swap out the hidden camera whenever its internal battery is expected to be depleted.

Of course, the insert skimmer also operates on an embedded battery, but according to my sources the skimmer in question was designed to turn on only when someone uses the cash machine, thereby preserving the battery.

Thieves involved in skimming attacks have hidden spy cameras in some pretty ingenious places, such as a brochure rack to the side of the cash machine or a safety mirror affixed above the cash machine (some ATMs legitimately place these mirrors so that customers will be alerted if someone is standing behind them at the machine).

More often than not, however, hidden cameras are placed behind tiny pinholes cut into false fascias that thieves install directly above or beside the PIN pad. Unfortunately, I don’t have a picture of a hidden camera used in the recent Oklahoma City insert skimming attacks.

Here’s a closer look at the insert skimmer found in Oklahoma:



A source at a financial institution in Oklahoma shared the following images of the individuals who are suspected of installing these insert skimming devices.

Individuals suspected of installing insert skimmers in a rash of skimming attacks last month in Oklahoma City. Image:


As this skimming attack illustrates, most skimmers rely on a hidden camera to record the victim’s PIN, so it’s a good idea to cover the pin pad with your hand, purse or wallet while you enter it.

Yes, there are skimming devices that rely on non-video methods to obtain the PIN (such as PIN pad overlays), but these devices are comparatively rare and quite a bit more expensive for fraudsters to build and/or buy.

So cover the PIN pad. It also protects you against some ne’er-do-well behind you at the ATM “shoulder surfing” you to learn your PIN (which would likely be followed by a whack on the head).

It’s an elegant and simple solution to a growing problem. But you’d be amazed at how many people fail to take this basic, hassle-free precaution.

If you’re as fascinated as I am with all these skimming devices, check out my series All About Skimmers.

CryptogramTomato-Plant Security

I have a soft spot for interesting biological security measures, especially by plants. I've used them as examples in several of my books. Here's a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It's common for caterpillars to eat each other when they're stressed out by the lack of food. (We've all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars as the ones on a control plant.

Planet Linux AustraliaAnthony Towns: Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of Spondoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it defaults to off, a switch to hide it is obviously easy to add, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — which Bitmain confirms having done on testnet. Even Guy Corem’s claim that the savings only amount to $2,000,000 per year rather than $100,000,000 seems like a reason to expect ASICBoost to be in use, rather than being so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmark results they got when testing on testnet, explain why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims finds some plausible examples of ASICBoost testing on testnet, however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them gives a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost, is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWh to 20c per kWh.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.
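The three methods above can be sketched as small functions to check the arithmetic (a rough sketch using the April inputs of $1100/BTC and the 657k BTC/year subsidy; the function names are my own):

```python
BTC_PER_YEAR = 365 * 24 * 6 * 12.5   # blocks per year times 12.5 BTC subsidy = 657k
PRICE = 1100                          # USD/BTC in April

def greg_saving(share, advantage):
    # Assume mining is barely profitable, so a group's costs roughly equal
    # its revenue; the boosted group then saves the advantage fraction.
    cost = BTC_PER_YEAR * PRICE * share
    return cost * advantage

def guy_saving(ph_per_s, j_per_gh, usd_per_kwh, advantage):
    # Estimate the actual power bill, then take the advantage fraction of it.
    watts = ph_per_s * 1e6 * j_per_gh          # PH/s -> GH/s, times J/GH = W
    annual_kwh = watts / 1000 * 24 * 365
    return annual_kwh * usd_per_kwh * advantage

def revenue_loss(share, advantage):
    # Hold power costs constant: compare your revenue share with the boost
    # against your share after dropping it and letting difficulty retarget.
    raw = share / (1 + advantage)              # your hashpower without the boost
    new_share = raw / (raw + (1 - share))      # everyone else is unchanged
    return (share - new_share) * BTC_PER_YEAR * PRICE
```

Here `greg_saving(0.5, 0.3)` gives roughly $108M (the "approximately $100M" above), `guy_saving(500, 0.1, 0.03, 0.15)` gives just under $2M, and `revenue_loss(0.132, 0.15)` and `revenue_loss(0.5, 0.3)` give just under $11M and about $47M respectively.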

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there are obviously pretty important.

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year.)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so if you find a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2, you can share the work of step (3) between them. In that case, you do steps (1,2) four times, once for each different Chunk1, then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the saving comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have, and c and e are the relative costs of a Compress and an Expand step respectively. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
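That savings formula is simple enough to put into code (a sketch; the parameterisation follows the text above, with c/e the cost of a Compress step relative to an Expand step):

```python
def asicboost_advantage(n, c_over_e=1.0):
    # n: number of distinct Chunk1 candidates sharing a common Chunk2.
    # c_over_e: relative cost of a Compress step versus an Expand step.
    return (n - 1) / n / 2 / (1 + c_over_e)

asicboost_advantage(4)   # 0.1875, i.e. the 18.75% figure for n=4
```

With equal-weight Expand and Compress steps this reduces to 25%*(n-1)/n, and the advantage approaches 25% as n grows.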

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that AntMiners support ASICBoost in hardware already, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).


It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify) which gives a result of 6gpR/Er = (g+h)^2.

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits, or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR) which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
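Those two figures can be reproduced directly (a sketch; symbols as defined above, with E in $/kWh, r in J/GH, t in TH/s, p in $/BTC and R in BTC per block):

```python
def optimal_hashrate_share(E, r, t, p, R):
    # h/t = 1 - (E*r*t)/(6*p*R): the profit-maximising fraction of total
    # hashrate for a miner who can expand freely while everyone else's
    # deployed hashpower stays fixed.
    return 1 - (E * r * t) / (6 * p * R)

optimal_hashrate_share(E=0.03, r=0.1, t=6e6, p=2400, R=12.5)  # 0.9
optimal_hashrate_share(E=0.08, r=0.1, t=6e6, p=2400, R=12.5)  # ~0.73
```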

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, it would mean giving up the option of mining with the hardware while it is state of the art, then selling it on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete.)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if, eg, a fall in the inflation subsidy is not compensated by a corresponding increase in fee income), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
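The decentralisation threshold can be expressed the same way (a sketch of the t > 3pR/Er condition derived above; the function name is my own):

```python
def min_hashrate_for_decentralisation(E, r, p, R):
    # Total hashrate (TH/s) above which controlling a hashrate majority is
    # no longer the profit-maximising strategy for a manufacturer whose
    # all-in power cost is E ($/kWh) with hardware efficiency r (J/GH).
    return 3 * p * R / (E * r)

min_hashrate_for_decentralisation(E=0.08, r=0.1, p=2400, R=12.5)  # 11.25M TH/s
```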

Note that this target also sets a limit on the range of electricity prices at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)
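The batch arithmetic here is simple enough to check (a sketch; as above, it assumes the retail sales of part of each batch must cover the manufacturing cost of the whole batch, and that the manufacturer supplies essentially all new hashrate):

```python
def self_mining_fraction(retail_price, manufacturing_cost):
    # Sell a fraction s of the batch such that s * retail covers the whole
    # batch's manufacturing cost; keep the remaining fraction for own use.
    sold = manufacturing_cost / retail_price
    return 1 - sold

self_mining_fraction(1200, 500)  # ~0.583, i.e. about 58% kept for own use
self_mining_fraction(1200, 400)  # ~0.667, i.e. about 67%
```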

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor, rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation is in turn invalid, and the final result is not applicable.


Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (BTC.TOP, bitcoin.com, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.TOP. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and bitcoin.com beyond supplier and customer. However John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that BTC.TOP, bitcoin.com, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply due to loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and give more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.


So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum.
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Worse Than FailureCodeSOD: Changing Requirements

Requirements change all the time. A lot of the ideology and holy wars that happen in the Git process camps arise from different ideas about how source control should be used to represent these changes. Which commit changed which line of code, and to what end? But what if your source control history is messy, unclear, or… you’re just not using source control?

For example, let’s say you’re our Anonymous submitter, and find the following block of code. Once upon a time, this block of code enforced some mildly complicated rules about what dates were valid to pick for a dashboard display.

Can you tell which line of code was added in reaction to a radically changed requirement?

function validateDashboardDateRanges (dashboard){
    return true;
    if(dashboard.currentDateRange && dashboard.currentDateRange.absoluteEndDate && dashboard.currentDateRange.absoluteStartDate) {
        if(!isDateRangeCorrectLength(dashboard.currentDateRange.absoluteStartDate, dashboard.currentDateRange.absoluteEndDate)){
            return false;
        }
        for(var c = 0; c < dashboard.containers.length; c++){
            var container = dashboard.containers[c];
            for(var i = 0; i < container.widgets.length; i++){
                if (container.widgets[i].settings.dateRange.relativeDate === '-1y'){
                    return false;
                } else if (!isDateRangeCorrectLength(container.widgets[i].settings.dateRange.absoluteStartDate, container.widgets[i].settings.dateRange.absoluteEndDate)){
                    return false;
                }
            }
        }
    }
    return true;
}

Don MartiBlind code reviews experiment

In case you missed it, here's a study that made the rounds earlier this year: Gender differences and bias in open source: Pull request acceptance of women versus men:

This paper presents the largest study to date on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women's contributions tend to be accepted more often than men's. However, women's acceptance rates are higher only when they are not identifiable as women.

A followup, from Alice Marshall, breaks out the differences between acceptance of "insider" and "outsider" contributions.

For outsiders, women coders who use gender-neutral profiles get their changes accepted 2.8% more of the time than men with gender-neutral profiles, but when their gender is obvious, they get their changes accepted 0.8% less of the time.

We decided to borrow the blind auditions concept from symphony orchestras for the open source experiments program.

The experiment, launching this month, will help reviewers who want to try breaking habits of unconscious bias (whether by gender or insider/outsider status) by concealing the name and email address of a code author during a review on Bugzilla. You'll be able to un-hide the information before submitting a review, if you want, in order to add a personal touch, such as welcoming a new contributor.

Built with the WebExtension development work of Tomislav Jovanovic ("zombie" on IRC), and the Bugzilla bugmastering of Emma Humphries. For more info, see the Bugzilla bug discussion.

Data collection

The extension will "cc" one of two special accounts on a bug, to indicate if the review was done partly or fully blind. This lets us measure its impact without having to make back-end changes to Bugzilla.

(Yes, WebExtensions let you experiment with changing a user's experience of a site without changing production web applications or content sites. Bonus link: FilterBubbler.)

Coming soon

A first release is on a.m.o., here: Blind Reviews BMO Experiment, if you want an early look. We'll send out notifications to relevant places when the "last" bugs are fixed and it's ready for daily developer use.

Planet Linux AustraliaColin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? Call for presentations close on July 17 2017, the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.


LongNowThe Artangel Longplayer Letters: Alan Moore writes to Stewart Lee

Alan Moore (left) chose comedian Stewart Lee as the recipient of his Longplayer letter.

In January 02017, Iain Sinclair, a writer and filmmaker whose recent work focuses on the psychogeography of London, wrote a letter to writer Alan Moore as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence, with the recipient of the letter responding with a letter to a different recipient of his choosing. Iain Sinclair wrote to Alan Moore. Now, Alan Moore has written a letter to standup comedian Stewart Lee.

The first series of correspondence in the Longplayer Letters, lasting from 02013-02015 and including correspondence with Long Now board members Stewart Brand, Esther Dyson, and Brian Eno, ended when a letter from Manuel Arriaga to Giles Fraser went unanswered. You can find the previous correspondences here.

From: Alan Moore, Phippsville, Northampton

To: Stewart Lee, London

2 February 2017

Dear Stewart,

I’ll hopefully have spoken to you before you receive this and filled you in on what our bleeding game is. Iain Sinclair kicked off by writing a letter to me, and the idea is that I should kind-of-reply to Iain’s letter in the form of a letter to a person of my choosing. The main criterion seems to be that this person be anybody other than Iain, so you’re probably beginning to see how perfectly you fit the bill. Also, since your comedy often consists of repeating the same phrase, potentially dozens of times in the space of a couple of minutes, I thought you’d bring a contrasting, if not jarring, point of view to the whole Longplayer process.

As you’ve no doubt realised, this is actually a chain letter. In 2016, dozens of the world’s most beloved celebrities, the pro-European British public and the population of the USA all broke the chain, as did my first choice as a recipient of this letter, Kim Jong-nam. I’m just saying.

In his letter, Iain raised the point of what a problematic concept ‘long-term’ is for those of us at this far end of our character-arc; little more than apprentice corpses, really. Mind you – with the current resident of the White House – I suppose this is currently a problem whatever age we are. In terms of existential unease, eleven is the new eighty.

Iain also talks about “having tried, for too many years, to muddy the waters with untrustworthy fictions and ‘alternative truths’, books that detoured into other books, I am now colonised by longplaying images and private obsessions, vinyl ghost in an unmapped digital multiverse”, quoting Sebald with “And now, I am living the wrong life.”

I have to admit, that resonated with me. I’ve been thinking lately about the relationship between art and the artist, and I keep coming back to that Escher image of two hands, each holding a pencil, each sketching and creating the other (EscherSketch?). Yes, on a straightforward material level we are creating our art – our writing, our music, our comedy – but at the same time, in that a creator is modified by any significant work that they bring into being, the art is also altering and creating us. And when we embark upon our projects, it’s generally on little more than an inspired whim and with no idea at all about the person that we’ll be at the end of the process. Inevitably, we fictionalise ourselves. In terms of our personal psychology, we clearly don’t have what you’d call a plan, do we? Thus we actually have little say in the person that we end up as. Nobody could be this deliberately.

Adding to the problem for some of us is that, as artists, we tend to cultivate multiplying identities. The person that I am when I’m writing an introduction to William Hope Hodgson’s The House on the Borderland is different to my everyday persona as someone who is continually worshipping a snake and being angry about Batman. My persona when writing introductions is wearing an Edwardian smoking jacket and puffing smugly on a Meerschaum. I know that my recent infatuation with David Foster Wallace stemmed from an awareness that the persona he adopted for many of his essays and the various fictionalised versions of David Foster Wallace that appear in his novels and short stories were different entities to him-in-himself. I wonder how you, and also how the Comedian Stewart Lee, feels about this? I suppose at the end of the day this applies to everybody, doesn’t it? I mean you don’t have to be an artist to present yourself differently according to who you’re presenting yourself to, and in what role. We don’t talk to our parents the way that we do to our sexual partners, and we don’t talk to our sexual partners how we do to our houseplants. With good reason. The upshot of this is that all human identity is probably consciously or unconsciously constructed, and that for this reason its default position is shifting and fluid. What I’m saying is we may be normal.

Iain Sinclair quotes the verdict on George Michael’s death, “Unexplained but not suspicious”, as a fair assessment of all human lives, before going on to mention Jeremy Prynne’s offended response to a request for an example of his thought, “Like a lump of basalt.” The idea being that any thought is really part of a field of awareness; part of a cerebral weather condition that can’t be hacked out of its context without being “as horrifying as that morning radio interlude when listeners channel-hop and make their cups of tea: Thought for the Day.” I know what he means, but of course couldn’t help thinking about your New Year’s Day curatorship of the Today program, where you impishly got me to contribute to the religiously-inclined Thought for the Day section, broadcast at an unearthly hour of the morning, with an unfathomable diatribe about my sock-puppet snake deity, Glycon. Of course, I’m not saying that this is the specific edition of the show that Iain tuned into and found horrifying, but we have no indication that it wasn’t. Thinking about it, assuming that Thought for the Day is archived, then thanks to your clever inclusion of the world’s sole Glycon worshipper in amongst a fairly uniform rota of rabbis, imams and vicars, future social historians are going to have a drastically skewed and disproportionate view regarding early 21st-century spiritual beliefs. Actually, that’s a halfway decent call back to the idea of long-term thinking.

After circling around like an unusually keen-eyed intellectual buzzard for a couple of pages, Iain alights on the ‘time as solid’ premise of Jerusalem, which he likens to “a form of discontinued public housing in which pastpresentfuture coexist” (and yes, I am going to continue quoting his letter in an effort to bring some quality prose to my stretch of this serial epistle). He talks about that central idea of a muttering community of the living, the dead and the unborn, all existing in their different reaches of an eternal Now, referencing Yeats with “the living can assist the imagination of the dead” and remarking how much these words have defined his literary project since he first started writing about London. In light of what I was saying about identity earlier, I was reading in New Scientist that what we think of as ‘our’ consciousness is actually partly infiltrated by and composed of the consciousnesses that surround us. This of course includes the consciousness of a deceased person that we may be considering, as well as that of any imaginary person we may be projecting on the present or the future. I would be a slightly different creator and a slightly different person, for example, had I never entered into a consideration of the work and the consciousness of Flann O’Brien, or Mervyn Peake, or Angela Carter, or Kathy Acker, or William Burroughs, or a thousand other creators. Looked at like that, it’s as if we take on elements of other people’s consciousness, living or dead or imaginary, almost as a way of building up our psychological immune system. This makes us all fluctuating composites, feeding into and out of each other, and perhaps suggests a long-term possibility that goes beyond our mortal lifespan or status as individuals. 
If this were the case, if identity were a fluid commodity and we all flowed in and out of each other, then you’d have to see someone like John Clare as an instance where the levees had been overwhelmed and he was pretty much drowning in everybody.

Mentioning Clare’s asylum-mate Lucia Joyce and my rather brave stab at approximating her dad’s language, Iain introduced a notion of William Burroughs’ manufacture that I hadn’t come across before, that of the ‘word vine’: write a word, and the next word will suggest itself, and so on. I have to say, that is how the writing process seems to work with me. Perhaps people assume that writers have an idea and then they write it down fully formed, but that isn’t how it works in practice, or at least not for me. I don’t know if it’s the same for you, but for me most of the writing is generated by the act of writing itself. An average idea, if properly examined, may turn out to have strong neural threads of association or speculation that link it to an absolutely brilliant idea. I’m sure you must have found this with comedy routines; that a minor commonplace absurdity will open up logic gates on a string of increasingly striking or funny ideas, like finding a nugget that leads to a small but profitable gold seam. I don’t think creative people have ideas like a hen lays eggs; more that they arise by a scarcely-definable action from largely involuntary mental processes.

Still talking about Burroughs, Iain then moved on to a discussion of dreams via Burroughs’ My Education – “Couldn’t find my room as usual in the Land of the Dead. Followed by bounty hunters.” On Jerusalem’s pet subject of Eternalism, Iain took a position that sees our real eternity as residing in dreams, “in residues of sleep, in the community of sleepers, between worlds, beyond mortality.” While I’m not sure about that, it does admittedly feel right, perhaps because of the classical world’s equivalence between the world of dreams and the world of the dead, dreams being the only place you reliably met dead people.

I’ve become much more interested in dreams since Steve Moore’s death in 2014. Realising how much I missed reading through the last couple of weeks of Steve’s methodically recorded dreams on visits up to Shooters Hill, I’ve even started a much sparser and more impoverished dream-record of my own. I just really love the flavour of dreams, whether mine or other people’s. Iain reports a dream where me and him were ascending the stairs of the Princelet Street synagogue, the one featured in Rodinsky’s Room, in what seemed to him almost like an out-take from Chris Petit’s The Cardinal and the Corpse. After a ritual exchange of velvet cricket caps between us, the vista outside the synagogue window began to strobe like the climactic vision in Hodgson’s The House on the Borderland. Martin Stone – Pink Fairies, Mighty Baby – was laying out a pattern of white lines on a black case, and explained that he’d recently found a first edition of Hodgson’s book in an abandoned villa, lavishly inscribed and previously owned by Aleister Crowley. Isn’t that fantastic? Knowing that I was there in the dream gives me a sort of pseudo-memory of actually having been present at this unlikely event.

While I’m not sure about the connection between dreams and Eternalism, Iain is probably right. I very recently stumbled across – in a fascinating collection of outré individuals from David Bramwell entitled The Odditorium – the peculiar scientific theories of J.W. Dunne. Dunne proposed a kind of solid time similar to the Einsteinian state of affairs posited in Jerusalem, which I found mildly pleasing just as a supporting argument from another source, but I was taken aback by Dunne’s reasoning, in which he suggests that the accreted ‘substance’ of our dreams is somehow crucial to the phenomenon. This is so like the idea in Jerusalem of the timeless higher dimension of Mansoul being made from accumulated dream-stuff and chimes so well with Iain’s comment about eternity being located “in residues of sleep” that I should probably chew these notions over a bit more before coming to any conclusions.

Dreams certainly seem to be in the air at the moment, with dreams being the theme of our next-but-one Arts Lab magazine to be released, and Iain winding up his letter by referring to Steve Moore’s dream-centred rehearsals for eternity up there on Shooters Hill. For the anniversary of Steve’s death, I’ve decided to pay a long-postponed visit to his house, or rather to the structure that’s replaced it since the place was sold and rebuilt. I’ll hopefully get a chance to visit the Shrewsbury Lane burial mound where we scattered Steve’s ashes by the light of the supermoon following tropical storm Bertha in August 2014. Around a month or so ago I noticed an article in The Guardian about the classification of various places as World Heritage sites by English Heritage, and was ridiculously pleased to find that the Shrewsbury Mound, the last surviving Bronze Age burial mound of several on Shooters Hill with the others having all been bulldozed in the early 1930s, was to be included. Steve’s instructions that his final resting place should be his favourite local landmark seem to me to be a way of fusing with the landscape and its history, its dreamtime if you like, which is perhaps as close to a genuine long-term strategy as it’s possible for a human being to get.

Anyway, it’s late – the moon tonight is a beautiful first-quarter crescent – and I should probably wrap this up. I’d like to leave you with the ‘Brexit Poem’ that I jotted down in an idle moment a month or two ago:

“I wrote this verse the moment that I heard/ the good news that we’d got our language back/ whence I, in a misjudged racial attack,/ kicked out French, German and Italian words/ and then I ”

With massive love and respect, as ever –


Alan Moore was born in Northampton in 1953 and is a writer, performer, recording artist, activist and magician.
His comic-book work includes Lost Girls with Melinda Gebbie, From Hell with Eddie Campbell and The League of Extraordinary Gentlemen with Kevin O’Neill. He has worked with director Mitch Jenkins on the Showpieces cycle of short films and on forthcoming feature film The Show, while his novels include Voice of the Fire (1996) and his current epic Jerusalem (2016). Only about half as frightening as he looks, he lives in Northampton with his wife and collaborator Melinda Gebbie.

Stewart Lee was born in 1968 in Shropshire but grew up in Solihull. He started out on the stand-up comedy circuit in London in 1989, and started to work out whatever it was he was trying to do soon after the turn of the century. He has written and performed four series of his own stand-up show on BBC2, had shows on at the Edinburgh fringe for 28 of the last 30 years, and has the last six of his full-length stand-up shows out on DVD/download/whatever. He is the writer or co-writer of five theatre pieces and two art installations, a number of radio series, three books about comedy and a bad novel. He lives in Stoke Newington, North London with his wife, also a comedian, and two children. He was enrolled in the literary society The Friends of Arthur Machen by Alan Moore, and is a regular, if disguised, presence on London’s volunteer-fronted arts radio station Resonance 104.4 fm. His favourite comic-book characters are Deathlok The Demolisher, Howard The Duck, Conan The Barbarian, Concrete and The Thing. His favourite bands/musicians are The Fall, Giant Sand, Dave Graney, John Coltrane, Miles Davis, Derek Bailey, Evan Parker, Bob Dylan, The Byrds and Shirley Collins. His favourite filmmakers are Sergio Leone, Sergio Corbucci, Andrew Kotting, Hal Hartley, and Akira Kurosawa. His favourite writers are Arthur Machen, William Blake, Iain Sinclair, Alan Moore, Stan Lee, Ray Bradbury, DH Lawrence, Thomas Hardy, Philip Larkin, Richard Brautigan, Geoff Dyer, Neil M Gunn, Francis Brett Young, Eric Linklater and Robert E Howard.

Debian Administration Implementing two factor authentication with perl

Two-factor authentication is a system designed to improve security by requiring a second factor, in addition to a username/password pair, to access a particular resource. Here we'll look briefly at how you add two-factor support to your applications with Perl.
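The article's Perl walkthrough lives behind the link, but the algorithm at the heart of most second-factor apps, TOTP as defined in RFC 6238, is compact enough to sketch here. The Python below is an illustrative sketch rather than the article's code; the function name `totp` and its parameters are my own.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of whole intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The secret used when testing this is the RFC 6238 reference key (the ASCII string `12345678901234567890`, base32-encoded); real deployments generate a random per-user secret and share it once, typically via a QR code, then compare the user's submitted code against the server's own computation, usually allowing one interval of clock skew either way.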

Cory DoctorowTalking Walkaway with Reason Magazine

Of all the press-stops I did on my tour for my novel Walkaway, I was most excited about my discussion with Katherine Mangu-Ward, editor-in-chief of Reason Magazine, where I knew I would have a challenging and meaty conversation with someone who was fully conversant with the political, technological and social questions the book raised.

I was not disappointed.

The interview, which was published online today, is a substantial and challenging one that gets at the core of what I’d hoped to do with the book. I hope you’ll give it a read.

But I think Uber is normal and dystopian for a lot of people, too. All the dysfunctions of Uber’s reputation economics, where it’s one-sided—I can tank your business by giving you an unfair review. You have this weird, mannered kabuki in some Ubers where people are super obsequious to try and get you to five-star them. And all of that other stuff that’s actually characteristic of Down and Out in the Magic Kingdom. I probably did predict Uber pretty well with what would happen if there are these reputation economies, which is that you would quickly have a have and a have-not. And the haves would be able to, in a very one-sided way, allocate reputation to have-nots or take it away from them, without redress, without rule of law, without the ability to do any of the things we want currency to do. So it’s not a store of value, it’s not a unit of exchange, it’s not a measure of account. Instead this is just a pure system for allowing the powerful to exercise power over the powerless.

Isn’t the positive spin on that: Well, yeah, but the way we used to do that allocation was by punching each other in the face?

Well, that’s one of the ways we used to. I was really informed by a book by David Graeber called Debt: The First 5,000 Years, where he points out that the anthropological story that we all just used to punch each other in the face all the time doesn’t really match the evidence. That there’s certainly some places where they punched each other in the face and there’s other places where they just kind of got along. Including lots of places where they got along through having long arguments or guilting each other.

I don’t know. Kabuki for stars on the Uber app still seems better than the long arguments or the guilt.

That’s because you don’t drive Uber for a living and you’ve never had to worry that tomorrow you won’t be able to.

Cory Doctorow’s ‘Fully Automated Luxury Communist Civilization’
[Katherine Mangu-Ward/Reason]

(Photo: Julian Dufort)

CryptogramMore on the NSA's Use of Traffic Shaping

"Traffic shaping" -- the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveiled -- is an NSA technique that has gotten much less attention than it deserves. It's a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There's a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term "traffic shaping" to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen's international communications from inside Yemen itself, the agency might try to "shape" the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic -- ­emails and chat messages sent between Americans, say­ -- to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn't authenticated.
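Concretely, a BGP announcement carries no cryptographic proof that the announcing AS is entitled to the prefix, which is what makes deliberate rerouting feasible; RPKI route-origin validation is the deployed countermeasure. The sketch below is a toy model of that check only, with a made-up ROA table and documentation-range AS numbers and prefixes, not any real validator's API:

```python
import ipaddress

# Toy ROA table: authorized (prefix, max announced length, origin AS) entries.
# Prefix and ASN are from the documentation ranges, purely illustrative.
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]


def validate(prefix, origin_as):
    """RPKI-style route-origin validation: 'valid', 'invalid', or 'unknown'."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.subnet_of(roa_net):
            covered = True  # some ROA speaks for this address space
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin or too-specific: likely a hijack.
    return "invalid" if covered else "unknown"
```

A route for a covered prefix from the wrong origin AS, or a more-specific announcement than the ROA permits, comes back "invalid" and can be dropped; prefixes with no ROA at all remain "unknown", which is why partial RPKI deployment still leaves room for the kind of rerouting the report describes.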

Worse Than FailureThe Defensive Contract

Working for a contractor within the defense industry can be an interesting experience. Sometimes you find yourself trying to debug an application from a stack trace which was handwritten and faxed out of a secured facility with all the relevant information redacted by overzealous security contractors who believe that you need a Secret clearance just to know that it was a System.NullReferenceException. After weeks of frustration when you are unable to solve anything from a sheet of thick black Sharpie stripes, they may bring you there for on-site debugging.


Beforehand, they will lock up your cell phone, cut out the WiFi antennas from your development laptop, and background check you so thoroughly that they’ll demand explanations for the sins of your great-great-great-great grandfather’s neighbor’s cousin’s second wife’s stillborn son before letting you in the door. Once inside, they will set up temporary curtains around your environment to block off any Secret-rated workstation screens to keep you from peeking and accidentally learning what the Top Secret thread pitch is for the lug nuts of the latest black-project recon jet. Then they will set up an array of very annoying red flashing lights and constant alarm whistles to declare to all the regular staff that they need to watch their mouths because an uncleared individual is present.

Then you’ll spend several long days trying to fix code. But you’ll have no Internet connection, no documentation other than whatever three-ring binders full of possibly-relevant StackOverflow questions you had foreseen to prepare, and the critical component which reliably triggers the fault has been unplugged because it occasionally sends Secret-rated UDP packets, and, alas, you’re still uncleared.

When you finish the work, if you’re lucky they’ll even let you keep your laptop. Minus the hard drive, of course. That gets pulled, secure-erased ten times over, and used for target practice at the local Marine battalion’s next SpendEx.

Despite all the inherent difficulties though, defense work can be very financially-rewarding. If you play your cards right, your company may find itself milking a 30-year-long fighter jet development project for all it’s worth with no questions asked. That’s good for salaries, good for employee morale, and very good for job security.

That’s not what happened to Nikko, of course. No. His company didn’t play its cards right at all. In fact, they didn’t even have cards. They were the player who walked up to the poker table after the River and went all-in despite not even being dealt into the game. “Hey,” the company’s leaders said to themselves, “Yeah we’ll lose some money, but at least we get to play with the big boys. That’s worth a lot, and someday we’ll be the lead contractor for the software on the next big Fire Control Radar!”

So Nikko found himself working on a project his company was the subcontractor (a.k.a. the lowest bidder) for. But in their excited rush to take on the work, nobody read the contract; it was signed as-is. The customer’s requirements for this component were vague, contradictory, at times absurd, and of course the contract offered no protection for Nikko’s company.

In fact, months later when Nikko–not yet aware of the mess he was in–met with engineers from the lead contractor–whom we’ll call Acme–for guidance on the project, one of them plainly told him in an informal context “Yeah, it’s a terrible component. We just wanted to get out from under it. It’s a shame you guys bid on it…”

The project began, using a small team of a project manager, Nikko as the experienced lead, and two junior engineers. Acme did not make things easy on them. They were expected to write all code at Acme’s facilities, on a network with no Internet access. They were asked to bring their own laptops in to develop on, but the information security guys refused and instead offered them one 15-year-old Pentium 4 that the three engineers were expected to share. Of course, using such an ancient system meant that a clean compile took 20 minutes, and the hidden background process that the security guys used to audit file access constantly brought disk I/O to a halt.

But development started anyway, despite all the red flags. They were required to use an API from another subcontractor. However, that subcontractor would only give them obfuscated JAR files with no documentation. Fortunately it was a fairly simple API and the team had some success decompiling it and figuring out how it worked.

But their next hurdle was even worse. All the JAR did was communicate with a REST interface from a server. But due to the way the Acme security guys had things set up, there was no test server on the development network. It wasn’t allowed. Period.

The actual server lived in an integration lab located several miles away, but coding was not allowed there. Access to it was tightly-controlled and scheduled. Nikko found himself writing code, checking it in, and scheduling a time slot at the lab (which often took days) to try out his changes.

The integration lab was secured. He could not bring anything in and Acme information security specialists had to sign off on strict paperwork every time he wanted to transfer the latest build there. Debuggers were forbidden due to the fears of giving an uncleared individual access to the system’s memory, and Nikko had to hand-copy any error logs using pen and paper to bring any error messages out of the facility and back to the development lab.

Three months into the project, Nikko was alone. The project manager threw some kind of temper tantrum and either quit or was fired. One of the junior engineers gave birth and quit the company during maternity leave. And the other junior engineer accepted an offer from another company and also left.

Nikko, feeling burned out and unable to sleep one night, then remembered his father’s story of punchcard programming in computing’s early days. Back then, your program was a stack of punchcards, with each card performing a single machine instruction. You had to schedule a 15-minute timeslot with the computer just to run your stack through it. And sometimes the operator accidentally dropped your box of punchcards on the way to the machine but made no effort to ensure they were executed in the correct order, ruining the job.

The day after that revelation, Nikko met with his bosses. He was upset, and flatly told them that the project could not succeed, that they were following 1970s punchcard programming methodologies in the year 2016, and that he would have no part in it anymore.

He then took on a job at a different defense contractor. And then found himself working again as a subcontractor on an Acme component. He decided to stick with it for a while since his new company actually read contracts before signing, so maybe it would be better this time? Still, in the back of his mind he started to wonder if he had died and Acme was his purgatory.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!


Krebs on SecurityAdobe, Microsoft Push Critical Security Fixes

It’s Patch Tuesday, again. That is, if you run Microsoft Windows or Adobe products. Microsoft issued a dozen patch bundles to fix at least 54 security flaws in Windows and associated software. Separately, Adobe’s got a new version of its Flash Player available that addresses at least three vulnerabilities.

The updates from Microsoft concern many of the usual program groups that seem to need monthly security fixes, including Windows, Internet Explorer, Edge, Office, .NET Framework and Exchange.

According to security firm Qualys, the Windows update that is most urgent for enterprises tackles a critical bug in the Windows Search Service that could be exploited remotely via the SMB file-sharing service built into both Windows workstations and servers.

Qualys says the issue affects Windows Server 2016, 2012, 2008 R2, 2008 as well as desktop systems like Windows 10, 7 and 8.1.

“While this vulnerability can leverage SMB as an attack vector, this is not a vulnerability in SMB itself, and is not related to the recent SMB vulnerabilities leveraged by EternalBlue, WannaCry, and Petya,” Qualys notes, referring to the recent rash of ransomware attacks which leveraged similar vulnerabilities.

Other critical fixes of note in this month’s release from Microsoft include at least three vulnerabilities in Microsoft’s built-in browser — Edge or Internet Explorer depending on your version of Windows. There are at least three serious flaws in these browsers that were publicly detailed prior to today’s release, suggesting that malicious hackers may have had some advance notice on figuring out how to exploit these weaknesses.

As it is accustomed to doing on Microsoft’s Patch Tuesday, Adobe released a new version of its Flash Player browser plugin that addresses a trio of flaws in that program.

The latest update brings Flash to v. for Windows, Mac and Linux users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates and/or restart the browser to get the latest Flash version). A green arrow in the upper right corner of my Chrome installation today gave me the prompt I needed to update my version to the latest.

Chrome users may need to restart the browser to install or automatically download the latest version. When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.

Rondam RamblingsThere's yer smoking gun

I predicted the existence of a Russia-gate smoking gun back in March, but I didn't expect it to actually turn up so soon.  And I certainly didn't expect it to turn up by having Donald Trump Jr. whip it out, shoot himself in the foot with it (twice!), and then loudly shout, "I told you there's nothing to see here, move along!" Here is the most damning part of the email chain released by Trump Jr.

Google AdsenseClarification around Pop-Unders

At Google, we value users, advertisers and publishers equally. We have policies in place that define where Google ads should appear and how they must be implemented. These policies help ensure a positive user experience, as well as maintain a healthy ads ecosystem that benefits both publishers and advertisers.

To get your attention, some ads pop up in front of your current browser window, obscuring the content you want to see. Pop-under ads can be annoying as well, as they will "pop under" your window, so that you don't see them until you minimize your browser. We do not believe these ads provide a good user experience, and therefore are not suitable for Google ads.

That is why we recently clarified our policies around pop-ups and pop-unders to help remove any ambiguity. To simplify our policies, we are no longer permitting the placement of Google ads on pages that are loaded as a pop-up or pop-under. Additionally, we do not permit Google ads on any site that contains or triggers pop-unders, regardless of whether Google ads are shown in the pop-unders.

We continually review and evaluate our policies to address emerging trends, and in this case we determined that a policy change was necessary.

As with all policies, publishers are ultimately responsible for ensuring that traffic to their site is compliant with Google policies. To assist publishers, we’ve provided guidance on best practices for buying traffic.

CryptogramHacking Spotify

Some of the ways artists are hacking the music-streaming service Spotify.

Planet Linux AustraliaJames Morris: Linux Security Summit 2017 Schedule Published

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There will also be the usual Linux kernel security subsystem updates and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

Worse Than FailureCodeSOD: Questioning Existence

Michael got a customer call about a PHP system his company had put together four years ago. He pulled up the code, which thankfully was actually up to date in source control, and tried to get a grasp of what the system does.

There, he discovered a… unique way to define functions in PHP:

        if( !function_exists('GetFileAsString') ){
        function GetFileAsString($FileName){
                $Contents = "";
                        $Contents = file_get_contents($FileName);
                return $Contents;
        }
        }
        if( !function_exists('RemoveEmptyArrayValuesAndDuplicates') ){
        function RemoveEmptyArrayValuesAndDuplicates($ArrayName){
                foreach( $ArrayName as $Key => $Value ) {
                        if( empty( $ArrayName[ $Key ] ) )
                   unset( $ArrayName[ $Key ] );
                }
                $ArrayName = array_unique($ArrayName);
                $ReIndexedArray = array();
                $ReIndexedArray = array_values($ArrayName);
                return $ReIndexedArray;
        }
        }

Before we talk about the function_exists nonsense, let’s comment on the functions themselves: they’re both unnecessary. file_get_contents returns a false-y value that gets converted to an empty string if you ever treat it as a string, which is exactly the same thing that GetFileAsString does. The replacement for RemoveEmptyArrayValuesAndDuplicates could also be much simpler: array_values(array_unique(array_filter($rawArray)));. That’s still complex enough that it could merit its own function, but without the loop, it’s far easier to understand.

Neither of these functions needs to exist, which is why, perhaps, they’re conditional. I can only guess about how these came to be, but here’s my guess:

Once upon a time, in the Dark Times, many developers were working on the project. They worked with no best-practices, no project organization, no communication, and no thought. Each of them gradually built up a library of include files, that carried with it their own… unique solution to common problems. It became spaghetti, and then eventually a forest, and eventually, in the twisted morass of code, Sallybob’s GetFileAsString conflicted with Joebob’s GetFileAsString. The project teetered on the edge of failure… so someone tried to “fix” it, by decreeing every utility function needed this kind of guard.

I’ve seen PHP projects go down this path, though never quite to this conclusion.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!


Sociological ImagesHow LSD opened minds and changed America

In the 1950s and ’60s, a set of social psychological experiments seemed to show that human beings were easily manipulated by low and moderate amounts of peer pressure, even to the point of violence. It was a stunning research program designed in response to the horrors of the Holocaust, which required the active participation of so many people, and the findings seemed to suggest that what happened there was part of human nature.

What we know now, though, is that this research was undertaken at an unusually conformist time. Mothers were teaching their children to be obedient, loyal, and to have good manners. Conformity was a virtue and people generally sought to blend in with their peers. It wouldn’t last.

At the same time as the conformity experiments were happening, something that would contribute to changing how Americans thought about conformity was being cooked up: the psychedelic drug, LSD.

Lysergic acid diethylamide was first synthesized in 1938 in the routine process of discovering new drugs for medical conditions. The first person to discover its psychedelic properties — its tendency to alter how we see and think — was the scientist who invented it, Albert Hofmann. He ingested it accidentally, only to discover that it induces a “dreamlike state” in which he “perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.”

By the 1950s, LSD was being administered to unwitting Americans in a secret, experimental mind control program conducted by the United States Central Intelligence Agency, one that would last 14 years and occur in over 80 locations. Eventually the fact of the secret program would leak out to the public, and so would LSD.

It was the 1960s and America was going through a countercultural revolution. The Civil Rights movement was challenging persistent racial inequality, the women’s and gay liberation movements were staking claims on equality for women and sexual minorities, the sexual revolution said no to social rules surrounding sexuality and, in the second decade of an intractable war with Vietnam, Americans were losing patience with the government. Obedience had gone out of style.

LSD was the perfect drug for the era. For its proponents, there was something about the experience of being on the drug that made the whole concept of conformity seem absurd. A new breed of thinker, the “psychedelic philosopher,” argued that LSD opened one’s mind and immediately revealed the world as it was, not the world as human beings invented it. It revealed, in other words, the social constructedness of culture.

In this sense, wrote the science studies scholar Ido Hartogsohn, LSD was truly “countercultural,” not only “in the sense of being peripheral or opposed to mainstream culture [but in] rejecting the whole concept of culture.” Culture, the philosophers claimed, shut down our imagination and psychedelics were the cure. “Our normal word-conditioned consciousness,” wrote one proponent, “creates a universe of sharp distinctions, black and white, this and that, me and you and it.” But on acid, he explained, all of these rules fell away. We didn’t have to be trapped in a conformist bubble. We could be free.

The cultural influence of the psychedelic experience, in the context of radical social movements, is hard to overstate. It shaped the era’s music, art, and fashion. It gave us tie-dye, The Grateful Dead, and stuff like this:


The idea that we shouldn’t be held down by cultural constrictions — that we should be able to live life as an individual as we choose — changed America.

By the 1980s, mothers were no longer teaching their children to be obedient, loyal, and to have good manners. Instead, they taught them independence and the importance of finding one’s own way. For decades now, children have been raised with slogans of individuality: “do what makes you happy,” “it doesn’t matter what other people think,” “believe in yourself,” “follow your dreams,” or the more up-to-date “you do you.”

Today, companies choose slogans that celebrate the individual, encouraging us to stand out from the crowd. In 2014, for example, Burger King abandoned its 40-year-old slogan, “Have it your way,” for a plainly individualistic one: “Be your way.” Across the consumer landscape, company slogans promise that buying their products will mark the consumer as special or unique. “Stay extraordinary,” says Coke; “Think different,” says Apple. Brands encourage people to buy their products in order to be themselves: Ray-Ban says “Never hide”; Express says “Express yourself,” and Reebok says “Let U.B.U.”

In surveys, Americans increasingly defend individuality. Millennials are twice as likely as Baby Boomers to agree with statements like “there is no right way to live.” They are half as likely to think that it’s important to teach children to obey, instead arguing that the most important thing a child can do is “think for him or herself.” Millennials are also more likely than any other living generation to consider themselves political independents and be unaffiliated with an organized religion, even if they believe in God. We say we value uniqueness and are critical of those who demand obedience to others’ visions or social norms.

Paradoxically, it’s now conformist to be an individualist and deviant to be a conformist. So much so that a subculture, “normcore,” emerged to promote blending in, making opting into conformity a virtue. As one commentator described it, “Normcore finds liberation in being nothing special…”

Obviously LSD didn’t do this all by itself, but it was certainly in the right place at the right time. And as a symbol of the radical transition that began in the 1960s, there’s hardly one better.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


CryptogramThe Future of Forgeries

This article argues that AI technologies will make image, audio, and video forgeries much easier in the future.

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today's CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

I am less worried about fooling the "untrained ear," and more worried about fooling forensic analysis. But there's an arms race here. Recording technologies will get more sophisticated, too, making their outputs harder to forge. Still, I agree that the advantage will go to the forgers and not the forgery detectors.

Worse Than FailureRubbed Off

Early magnetic storage was simple in its construction. The earliest floppy disks and hard drives used an iron (III) oxide surface coating a plastic film or disk. Later media would use cobalt-based surfaces, allowing finer data resolution than iron oxide, but the basic construction wouldn’t change much.

Samuel H. never had to think much about this until he met Micah.

an 8 inch floppy disk

The Noisiest Algorithm

In the fall of 1980, Samuel was a freshman at State U. The housing department had assigned him Micah as his roommate, assuming that since both were Computer Science majors, they would get along.

On their first night together, Samuel asked why Micah kept throwing his books off the shelf onto the floor. “Oh, I just keep shuffling the books around until they’re in the right order,” Micah said.

“Have you tried, I don’t know, taking out one book at a time, starting from the left?” Samuel asked. “Or sorting the books in pairs, then sorting pairs of pairs, and so on?” He had read about sorting algorithms over the summer.

Micah shrugged, continuing to throw books on the floor.

Divided Priorities

In one of their shared classes, Samuel and Micah were assigned as partners on a project. The assignment: write a program in Altair BASIC that analyzes rainfall measurements from the university weather station, then displays a graph and some simple statistics, including the dates with the highest and lowest values.

All students had access to Altair 8800s in the lab. They were given one 8" floppy disk with the rainfall data, and a second for additional code. Samuel wanted to handle the data read/write code and leave the display to Micah, but Micah insisted on doing the data-handling code himself. “I’ve learned a lot,” he said. Samuel remembered the sounds of books crashing on the floor and flinched. Still, he thought the display code would be easier, so he let Micah at it.

Samuel finished his half of the code early. Micah, though, was obsessed with Star Trek, a popular student-coded space tactics game, and waited until the day before to start work. “Okay, tonight, I promise,” he said, as Samuel left him in the computer lab at an Altair. As he left, he heard Micah close the drive, and the read-head start clacking across the disk.


The next morning, Samuel found Micah still in the lab at his Altair. He was in tears. “The data’s gone,” he said. “I don’t know what I did. I started getting read errors around 1AM. I think the data file got corrupted somehow.”

Samuel gasped when Micah handed him the floppy disk. Through the read window in the cover, he could see transparent stripes in the disk. The magnetic write surface had been worn away, leaving the clear plastic backing.

Micah explained. He had written his code to read the data file, find the lowest value, write it to an entirely new file, then mark the value in the original file as read. Then it would read the original file again, write another new file, and so on.

Samuel calculated that, with Micah’s “algorithm,” the original file would be read and written to n–1 times, given n entries.
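A rough Python simulation makes the cost concrete (my own sketch, not the original Altair BASIC; I assume the final remaining entry is copied without another scan, which is what gives the n–1 figure):

```python
def micah_sort(entries):
    """Simulate Micah's approach: scan the whole 'file' for the lowest
    unread value, append it to a new 'file', then write the original
    back with that value marked as read -- and repeat."""
    data = [[v, False] for v in entries]   # [value, already-read?]
    output, scans = [], 0
    while sum(1 for _, read in data if not read) > 1:
        scans += 1                         # one full read + write-back pass
        i = min((j for j, (v, read) in enumerate(data) if not read),
                key=lambda j: data[j][0])
        output.append(data[i][0])
        data[i][1] = True                  # mark as read in the "original file"
    output.extend(v for v, read in data if not read)  # last entry: no extra scan
    return output, scans

values = [7, 3, 9, 1, 5]
result, passes = micah_sort(values)        # passes == len(values) - 1
```

Every pass re-reads the entire original file and writes it back to extract a single entry, so the total I/O grows quadratically with the number of entries — more than enough to grind the oxide off an 8" disk overnight.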

Old Habits Die Hard

Samuel went to the professor and pleaded for a new partner, showing him the floppy with the transparent medium inside. Samuel was given full credit for his half of the program. Micah would have to write his entire program from scratch with a new copy of the data.

Samuel left Micah for another roommate that spring. He didn’t see much of him, as the latter had left Computer Science to major in Philosophy. He didn’t hear about Micah again until a spring day in 1989, after he had finished his PhD.

A grad student, who worked at the computer help desk, told Samuel about a strange man at the computer lab one night. He was despondent, trying to find someone who could help him recover his thesis from a 3 1/2" floppy disk. The student offered to help, and when the man handed him the disk, he pulled the metal tab aside to check for dust.

Most of the oxide coating had been worn away, leaving thin, transparent stripes.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 1

Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with winter cold and the usual round of flus and sniffles. OpenSTEM’s 3rd term units branch into new areas to provide some fresh material and a new direction for the new semester. This term younger students are studying the lives of children in the past from a narrative context, whilst older students are delving into aspects of Australian history.

Foundation/Prep/Kindy to Year 3

The main resource for our youngest students for Unit F.3 is Children in the Past – a collection of stories of children from a range of different historical situations. This resource contains 6 stories of children from Aboriginal Australia more than 1,000 years ago, Ancient Egypt, Ancient Rome, Ancient China, Aztec Mexico and Zulu Southern Africa several hundred years ago. Teachers can choose one or two stories from this resource to study in depth with the students this term. The range of stories allows teachers to tailor the material to their class and ensure that there is no need to repeat the same stories in consecutive years. Students will compare the lives of children in the stories with their own lives – focusing on different aspects in different weeks of the term. In this first week teachers will read the stories to the class and help them find the places described on the OpenSTEM “Our World” map and/or a globe.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), will also be examining stories from the Children in the Past resource. Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) will also be comparing their own lives with those of children in the past; however, they will use a collection of stories called Living in the Past, which covers the same areas and time periods as Children in the Past, but provides more in-depth information about a broader range of subject areas and includes the story of the young Tom Petrie, growing up in Brisbane in the 1840s. Students in Year 1 will be considering family structures and the differences and similarities between their own families and the families described in the stories. Students in Year 2 are starting to understand the differences which technology makes to peoples’ lives, especially the technology behind different modes of transport. Students in Year 3 retain a focus on local history. In fact, the Understanding Our World® units for Year 3, term 3 are tailored to match the capital city of the state or territory in which the student lives. Currently units are available for Brisbane and Perth, other capital cities are in preparation. Additional resources are available describing the foundation and growth of Brisbane and Perth, with other cities to follow. Teachers may also prefer to focus on the local community in a smaller town and substitute their own resources for those of the capital city.

Years 3 to 6

Opening of the first Australian Parliament

Older students are focusing on Australian history this term – Year 3 students (Unit 3.7) will be considering the history of their capital city (or local community) within the broader context of Australian history. Students in Year 4 (Unit 4.3) will be examining Australia in the period up to and including the first half of the 19th century. Students in Year 5 (Unit 5.3) examine the colonial period in Australian history; whilst students in Year 6 (Unit 6.3) are investigating Federation and Australia in the 20th century. In this first week of term, students in Years 3 to 6 will be compiling a timeline of Australian history and filling in important events which they already know about or have learnt about in previous units. Students will revisit this timeline in later weeks to add additional information. The main resources for this week are The History of Australia, a broad overview of Australian history from the Ice Age to the 20th century; and the History of Australian Democracy, an overview of the development of the democratic process in Australia.

Governor’s House, Sydney, 1791

The rest of the 3rd term will be spent compiling a scientific report on an investigation into an aspect of Australian history. Students in Year 3 will choose a research topic from a list of themes concerning the history of their capital city. Students in Year 4 will choose from themes on Australia before 1788, the First Fleet, experiences of convicts and settlers, including children, as well as the impact of different animals brought to Australia during the colonial period. Students in Year 5 will choose from themes on the Australian colonies and people including explorers, convicts and settlers, massacres and resistance, colonial animals and industries such as sugar in Queensland. Students in Year 6 will choose from themes on Federation, including personalities such as Henry Parkes and Edmund Barton, Sport, Women’s Suffrage, Children, the Boer War and Aboriginal experiences. This research topic will be undertaken as a guided investigation throughout the term.

Planet Linux AustraliaAnthony Towns: Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.
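The "otherwise pointless work" can be sketched in a few lines of Python (a toy illustration of proof of work, not Bitcoin's actual scheme, which double-SHA256es an 80-byte block header against a dynamically adjusted target):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(header || nonce) has at least
    `difficulty_bits` leading zero bits -- expensive to find by brute
    force, but trivial for anyone else to verify with a single hash."""
    target = 2 ** (256 - difficulty_bits)  # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example block header", 16)  # ~65,000 hash attempts on average
```

Raising `difficulty_bits` by one doubles the expected work, which is (in simplified form) how the network keeps the block interval near ten minutes as mining hardware improves.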

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100”, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list as BIP103, but never formally proposed). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution later (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.
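Those growth figures are easy to sanity-check with a quick back-of-the-envelope calculation (mine, not from the post itself):

```python
mb_per_block = 1                       # the ~1MB block size limit
blocks_per_week = 6 * 24 * 7           # one block roughly every ten minutes
weekly_mb = mb_per_block * blocks_per_week
print(weekly_mb)                       # 1008 MB -- about a gigabyte a week
```

That gigabyte-a-week ceiling is precisely the "supply is limited" constraint that all of the block size proposals below were trying to relax.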

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group is not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, e.g.).

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1”, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands-off approach to dictating where Bitcoin will go in future — so, to my mind, a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than trying to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases put forward by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons only give half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. To this line of thinking the emphasis on lack of communication from core devs or the desire for a hard fork block size increase aren’t the actual goal, so the lack of effort being put into resolving them over the past 18 months from the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…


Harald WelteTen years after first shipping Openmoko Neo1973

Exactly 10 years ago, on July 9th, 2007, we started to sell+ship the first Openmoko Neo1973. To be more precise, the webshop actually opened a few hours early, depending on your time zone. Sean announced the availability in this mailing list post.

I don't really have to add much to my blog post from a year ago on the tenth anniversary of starting to work on Openmoko, but I still thought it worthwhile to point out this anniversary as well.

Those were exciting times, and there was a lot of pioneering spirit: building a Linux-based smartphone with a 100% FOSS software stack on the application processor, including all drivers, userland, applications - at a time before Android was known or announced. As history shows, we'd been working in parallel with Apple on the iPhone, and Google on Android. Of course there's little chance that a small Taiwanese company can compete with the endless resources of the big industry giants, and the many Neo1973 delays meant we had missed the window of opportunity to be the first on the market.

It's sad that Openmoko (and similar projects) have not survived even as special-interest projects for FOSS enthusiasts. Today, virtually all smartphone options are encumbered with way more proprietary blobs than we could ever imagine back then.

In any case, the tenth anniversary of trying to change the amount of Free Software in the smartphone world is worth some celebration. I'm reaching out to old friends and colleagues, and I guess we'll have somewhat of a celebration party both in Germany and in Taiwan (where I'll be for my holidays from mid-September to mid-October).

Sam VargheseFrench farce spoils great Test series in New Zealand

Referees or umpires can often put paid to an excellent game of any sport by making stupid decisions. When this happens — and it does so increasingly these days — the reaction of the sporting body concerned is to try and paper over the whole thing.

Additionally, teams and their coaches/managers are told not to criticise referees or umpires and to respect them. Hence a lot tends to be covered up.

But the fact is that referees and umpires are employees who are being paid well, especially when the sports they are officiating are high-profile. Do they not need to be competent in what they do?

And you can’t get much higher profile than a deciding rugby test between the New Zealand All Blacks and the British and Irish Lions. A French referee, Romain Poite, screwed up the entire game in the last few minutes through a wrong decision.

Poite awarded a penalty to the All Blacks when a restart found one of the Lions players offside. He then changed his mind and awarded a scrum to the All Blacks instead, using the mafia-like language "we will make a deal about this" before announcing the change of decision.

When he noticed the infringement initially, Poite should have held off blowing his whistle and allowed New Zealand the advantage as one of their players had gained possession of the ball and was making inroads into Lions territory. But he did not.

He blew, almost as a reflex action, and stuck his arm up to signal a penalty to New Zealand. It was in a position which was relatively easy to convert and would have given New Zealand almost certain victory as the teams were level 15-all at that time. There were just two minutes left to play when this incident happened.

The New Zealand coach Steve Hansen tried to paper over things at his post-match press conference by saying that his team should have sewn up things much earlier — they squandered a couple of easy chances and also failed to kick a penalty and convert one of their two tries — and could not blame Poite for their defeat.

This kind of talk is diplomacy of the worst kind. It encourages incompetent referees.

One can cast one’s mind back to 2007 and the quarter-finals of the World Cup rugby tournament when England’s Wayne Barnes failed to spot a forward pass and awarded France a try which gave them a 20-18 lead over New Zealand; ultimately the French won the game by this same score.

Barnes was never pulled into line and to this day he seems to be unable to spot a forward pass. He continues to referee international games and must have quite powerful sponsors to continue.

Hansen did make one valid point though: that there should be consistency in decisions. And that did not happen either over the three tests. It is funny that referees use the same rulebook and interpret things differently depending on whether they are from the southern hemisphere or northern hemisphere.

Is there no chief of referees to thrash out a common ruling for the officials? It makes rugby look very amateurish and spoils the game for the viewer.

Associations that run various sports are often heard complaining that people do not come to watch games. Put a couple more people like Poite to officiate and you will soon have empty stadiums.


Krebs on SecuritySelf-Service Food Kiosk Vendor Avanti Hacked

Avanti Markets, a company whose self-service payment kiosks sit beside shelves of snacks and drinks in thousands of corporate breakrooms across America, has suffered a breach of its internal networks in which hackers were able to push malicious software out to those payment devices, the company has acknowledged. The breach may have jeopardized customer credit card accounts as well as biometric data, Avanti warned.

According to Tukwila, Wash.-based Avanti’s marketing literature, some 1.6 million customers use the company’s break room self-checkout devices — which allow customers to pay for drinks, snacks and other food items with a credit card, fingerprint scan or cash.

An Avanti Markets kiosk. Image: Avanti

Sometime in the last few hours, Avanti published a “notice of data breach” on its Web site.

“On July 4, 2017, we discovered a sophisticated malware attack which affected kiosks at some Avanti Markets. Based on our investigation thus far, and although we have not yet confirmed the root cause of the intrusion, it appears the attackers utilized the malware to gain unauthorized access to customer personal information from some kiosks. Because not all of our kiosks are configured or used the same way, personal information on some kiosks may have been adversely affected, while other kiosks may not have been affected.”

Avanti said it appears the malware was designed to gather certain payment card information including the cardholder’s first and last name, credit/debit card number and expiration date.

Breaches at point-of-sale vendors have become almost regular occurrences over the past few years, but this breach is especially notable as it may also have jeopardized customer biometric data. That’s because the newer Avanti kiosk systems allow users to pay using a scan of their fingerprint.

“In addition, users of the Market Card option may have had their names and email addresses compromised, as well as their biometric information if they used the kiosk’s biometric verification functionality,” the company warned.

On Thursday, KrebsOnSecurity learned from a source at a law firm that the food vending machine in its employee lunchroom was no longer able to accept credit cards. The source said his firm’s information technology personnel told him the credit card functionality had been temporarily disabled because of a breach at Avanti.

Another source told this author that Avanti’s corporate network had been breached, and that Avanti had made the decision to turn off all self-checkouts for now — although the source said customers could still use cash at the machines.

“I was told that about half of the self-checkouts do not have P2Pe,” the source said, on condition of anonymity. P2Pe is short for “point-to-point encryption,” and it’s a technological solution that encrypts sensitive data such as credit card information at every point in the card transaction. In theory, P2Pe should be able to protect card data even if there is malicious software resident on the device or network in question.
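The idea behind P2Pe can be sketched in a few lines: the card number is encrypted the instant it is captured, and the decryption key lives only with the payment processor, so memory-scraping malware on the kiosk never sees a usable card number. The sketch below is a toy illustration only — the HMAC-derived XOR keystream stands in for the real authenticated encryption (and hardware key injection) a certified P2PE card reader would use, and the key and card number are invented.

```python
# Toy illustration of point-to-point encryption (P2PE): card data is
# encrypted at the moment of capture, so malware scraping the kiosk's
# memory or network traffic only ever sees ciphertext.
# NOT real card-data crypto -- the keystream is a stand-in for AES-GCM
# inside a tamper-resistant reader.
import hashlib
import hmac

TERMINAL_KEY = b"key-injected-into-tamper-resistant-reader"  # made up

def _keystream(nonce: bytes) -> bytes:
    # Derive a per-transaction keystream from the terminal key.
    return hmac.new(TERMINAL_KEY, nonce, hashlib.sha256).digest()

def encrypt_at_capture(pan: str, nonce: bytes) -> bytes:
    # Runs inside the card reader; the kiosk OS never sees the plain PAN.
    stream = _keystream(nonce)
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(pan.encode()))

def decrypt_at_processor(blob: bytes, nonce: bytes) -> str:
    # Only the payment processor holds the key and can reverse this.
    stream = _keystream(nonce)
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(blob)).decode()

ciphertext = encrypt_at_capture("4111111111111111", b"txn-0001")
recovered = decrypt_at_processor(ciphertext, b"txn-0001")
```

The point of the exercise: everything between `encrypt_at_capture` and `decrypt_at_processor` — the kiosk's operating system, the breakroom network, Avanti's corporate network — can be fully compromised without exposing card numbers, which is exactly the property the non-P2Pe half of Avanti's fleet lacked.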

Avanti said in its notice that it had shut down payment processing at some locations, and that the company was working with its operators to purge infected systems of any malware from the attack and to take steps to “substantially minimize the risk of a data compromise in the future.”


On Friday evening, security firm RiskAnalytics published a blog post detailing a customer experience remarkably similar to the one referenced by the anonymous law firm source above.

RiskAnalytics’s Noah Dunker wrote that the company’s technology on July 4 flagged suspicious behavior by a break room vending kiosk. Further inspection of the device and communications traffic emanating from it revealed it was infected with a family of point-of-sale malware known as PoSeidon (a.k.a. “FindPOS”) that siphons credit card data from point-of-sale devices.

“In our analysis of the incident, it seems most likely that the larger vendor was compromised, and some or all of the kiosks maintained by local vendors were impacted,” Dunker wrote. “We’ve been able to identify at least two smaller vendors with local operations that have been impacted in two different cities though we are not naming any impacted vendors yet, as we’ve been unable to contact them directly.”

KrebsOnSecurity reached out to RiskAnalytics to see if the vendor of the snack machine used by the victim organization he wrote about also was Avanti. Dunker confirmed that the kiosk vendor that was the subject of his post was indeed Avanti.

Dunker noted that much like point-of-sale devices at many restaurant chains, these snack machines usually are installed and managed by third-party technology companies, adding another layer of complexity to the challenge of securing these devices from hackers.

Dunker said RiskAnalytics first noticed something wasn’t right with its client’s break room snack machine after it began sending data out of the client’s network using an SSL encryption certificate that has long been associated with cybercrime activity — including ransomware activity dating back to 2015.
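This kind of detection — flagging a device by the TLS certificate it presents rather than by the payload it sends — can be reduced to a fingerprint lookup. The sketch below shows the general shape of such a check; the certificate bytes and blocklist are invented for illustration, and a real monitoring product would extract the DER certificate from a live TLS handshake rather than take it as an argument.

```python
# Sketch of certificate-fingerprint matching: hash the certificate a
# device presents outbound and compare against a blocklist of SHA-256
# fingerprints previously observed in malware/ransomware traffic.
# The certificate bytes here are fabricated stand-ins for real DER data.
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha256(der_bytes).hexdigest()

# Threat-intel feed would supply these; here we seed the blocklist with
# the (made-up) certificate of our simulated kiosk so the check fires.
malware_cert = b"--DER bytes of cert seen in 2015 ransomware traffic--"
KNOWN_BAD_FINGERPRINTS = {cert_fingerprint(malware_cert)}

def flag_outbound_tls(der_bytes: bytes) -> bool:
    # True means: quarantine this device and alert -- the cert it is
    # using has a history, regardless of what the encrypted payload is.
    return cert_fingerprint(der_bytes) in KNOWN_BAD_FINGERPRINTS

kiosk_alert = flag_outbound_tls(malware_cert)      # simulated infected kiosk
laptop_alert = flag_outbound_tls(b"legit corp cert")
```

Because the match is on the certificate itself, the check works even though the traffic is encrypted — which is how a snack machine quietly exfiltrating card data over SSL can still stand out on the wire.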

“This is a textbook example of an ‘Internet of Things’ (IoT) threat: A network-connected device, controlled and maintained by a third party, which cannot be easily patched, audited, or controlled by your own IT staff,” Dunker wrote.


Credit card machines and point-of-sale devices are favorite targets of malicious hackers, mainly because the data stolen from those systems is very easy to monetize. However, the point-of-sale industry has a fairly atrocious record of building insecure products and trying to tack on security only after the products have already gone to market. Given this history, it’s remarkable that some of these same vendors are now encouraging customers to entrust them with biometric data.

Credit cards can be re-issued, but biometric identifiers are for life. Companies that choose to embed biometric capabilities in their products should be held to a far higher security standard than those used to protect card data.

For starters, any device that requests, stores or transmits biometric data should at a minimum ensure that the data remains strongly encrypted both at rest and in transit. Judging by Avanti’s warning that some customer biometric data may have been compromised in this breach, it seems this may not have been the case for at least a subset of their products.

I would like to see some industry acknowledgement of this before we start to see more stand-alone payment applications entice users to supply biometric data, but I share Dunker’s fear that we may soon see biometric components added to a whole host of Internet-connected (IoT) devices that simply were not designed with security in mind.

Also, breaches like this illustrate why it’s critically important for organizations to segment their internal networks, and to keep payment systems completely isolated from the rest of the network. However, neither of the victim organizations referenced above appear to have taken this important precaution.
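Segmentation of the kind described above is ultimately a property you can test: no firewall rule should permit traffic from the payment enclave to the rest of the internal network. A minimal policy check might look like the following — the subnets, the processor address, and the rule format are all invented for illustration, not taken from any real deployment.

```python
# Toy check that a rule set keeps a payment VLAN isolated: kiosks may
# talk to their payment processor, but never to the corporate LAN where
# ordinary Windows workstations live. Subnets/rules are invented.
import ipaddress

PAYMENT = ipaddress.ip_network("10.20.0.0/24")    # kiosks, card terminals
CORPORATE = ipaddress.ip_network("10.10.0.0/16")  # everything else internal

ALLOW_RULES = [
    ("10.20.0.0/24", "203.0.113.5/32"),  # kiosks -> payment processor only
    ("10.10.0.0/16", "0.0.0.0/0"),       # corporate egress to the Internet
]

def payment_isolated(rules) -> bool:
    # Fail if any allow rule lets payment-VLAN sources reach corporate hosts.
    for src, dst in rules:
        src_net = ipaddress.ip_network(src)
        dst_net = ipaddress.ip_network(dst)
        if src_net.subnet_of(PAYMENT) and dst_net.overlaps(CORPORATE):
            return False
    return True

ok = payment_isolated(ALLOW_RULES)
# Adding a single kiosk -> corporate rule breaks the isolation property:
broken = payment_isolated(ALLOW_RULES + [("10.20.0.0/24", "10.10.5.0/24")])
```

Had a check like this held for the networks above, a compromised kiosk would have been a dead end for the attackers rather than a foothold.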

To illustrate this concept a bit further, it may well be that the criminal masterminds behind this attack could have made far more money had they used the remote access they apparently had to these Avanti devices to push ransomware out to Microsoft Windows computers residing on the same internal network as the payment kiosks.

Planet Linux AustraliaLev Lafayette: One Million Jobs for Spartan

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its 1 millionth job today after almost exactly a year since launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharynx samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this are why I do HPC.

Congratulations to the rest of the team and to the user community.